Wikipedia:Reference desk/Archives/Mathematics/2010 November 3
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
November 3
Sums of random variables, and line integrals
The setting is: Let $X$ and $Y$ be independent random variables (on the reals), with density functions $f_X$ and $f_Y$. The standard result for the density of $Z = X + Y$ is $f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx$. I'm trying to derive this formula by using a line integral, but I'm getting a different result, so I must be doing something wrong.
This is my reasoning: The joint density of $(X, Y)$ on $\mathbb{R}^2$ is given by $f(x, y) = f_X(x)\, f_Y(y)$. To find $f_Z(z)$ we should integrate $f$ over the line $x + y = z$. We can parametrize this line by the function $\mathbf{r}(t) = (t, z - t)$. Note that $\mathbf{r}'(t) = (1, -1)$, so $ds = \sqrt{2}\, dt$. Now if we calculate the line integral (as in Line_integral#Definition), we get $\int_{-\infty}^{\infty} f_X(t)\, f_Y(z - t)\, \sqrt{2}\, dt$. If we compare this with the textbook result above, then my result has a factor $\sqrt{2}$ that is not supposed to be there. Where did I go wrong? Arthena(talk) 10:03, 3 November 2010 (UTC)
- The problem is in the assumption that the line integral gives the desired result. Let's say we want to find the probability that $z \le X + Y \le z + dz$. This is given by the integral of $f$ over the region $z \le x + y \le z + dz$. This region has a thickness of $dz/\sqrt{2}$, so the integral over it is equal to the line integral times $dz/\sqrt{2}$. However, by assuming that the line integral will give the density, you have implied that the probability will be the line integral times $dz$ (the density times the interval length). So you'll need to multiply by a correction factor equal to the thickness of the line divided by the corresponding length of the interval of the derived variable, which here is $1/\sqrt{2}$ (a quick numerical check of this appears below).
- Because this can be confusing, it's best to use the cumulative distribution function wherever possible. -- Meni Rosenfeld (talk) 12:03, 3 November 2010 (UTC)
- Thanks for the answer. Arthena(talk) 23:38, 3 November 2010 (UTC)
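A minimal numerical sketch of the $\sqrt{2}$ correction discussed above, assuming $X, Y \sim N(0, 1)$ so that $Z = X + Y \sim N(0, 2)$; the variable names are illustrative only:

```python
# Sketch: verify numerically that the line integral over x + y = z overshoots the
# density of Z = X + Y by a factor of sqrt(2), assuming X, Y ~ N(0, 1) so Z ~ N(0, 2).
import numpy as np

z = 0.7                                  # point at which to evaluate the density
t = np.linspace(-10.0, 10.0, 200001)     # integration grid along the line
dt = t[1] - t[0]

def phi(x):
    """Standard normal density."""
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

convolution   = np.sum(phi(t) * phi(z - t)) * dt         # textbook formula for f_Z(z)
line_integral = convolution * np.sqrt(2)                 # arc length element ds = sqrt(2) dt
true_density  = np.exp(-z**2 / 4) / np.sqrt(4 * np.pi)   # N(0, 2) density at z

print(convolution, true_density)       # these two agree to numerical precision
print(line_integral / true_density)    # ~1.41421..., the spurious sqrt(2)
```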
Interpolation on the surface of a sphere
I want to interpolate along a great circle between two points on the surface of a sphere expressed in polar coordinates as $(\phi_s, \lambda_s)$ and $(\phi_f, \lambda_f)$, with the interpolated point also expressed in polar coordinates as $(\phi_i, \lambda_i)$.
I'm doing this in code, and the best I've come up with is to:
- Interpret each point as Euler angles (φ, λ, 0.0)
- Convert the Euler angles to quaternions and use spherical linear interpolation to get an interpolated quaternion Qi
- Convert Qi to Euler angles
- Interpret the Euler angles as the interpolated point.
My questions would be: is this complete nonsense, and is there a simpler, or more direct, way of going about it?
78.245.228.100 (talk) 11:46, 3 November 2010 (UTC)
- For an alternative, how about simply converting your start and end points to Cartesian and taking a linear interpolation of the resulting vectors, converting back to spherical when you want? You could do this symbolically as well; who knows, maybe there are some significant simplifications (though I don't hold out much hope). Seems way simpler than trying to run through Euler angles and quaternions. 67.158.43.41 (talk) 12:38, 3 November 2010 (UTC)
- If I convert to 3D Cartesian and interpolate linearly, I not only deviate from the great circle, but I end up tunnelling through the sphere. 78.245.228.100 (talk) —Preceding undated comment added 12:44, 3 November 2010 (UTC).
- Sorry, I had meant to say "interpolate and normalize", though then a linear interpolation of the Cartesian vectors is a nonlinear interpolation on the surface. However, making the interpolation linear wasn't a requirement in the OP. If it should be, the correction term for the interpolation speed shouldn't be too hard to derive geometrically with a little calculus. You would have to special-case angles larger than 180 degrees as well. Hmm... that special case makes this not nearly as nice as I thought at first. However, after normalizing, you do not deviate from a great circle, since great circles lie in planes running through the sphere's center, and any linear combination of vectors in such a plane also lies in it.
- An alternative would be to use a linear map to send the start and end vectors to $(1, 0, 0)$ and $(0, 1, 0)$, with their normal (unit cross product) getting sent to $(0, 0, 1)$. The vectors $(\cos\theta, \sin\theta, 0)$ for $\theta \in [0, \pi/2]$ would correspond to the points of interpolation on a sphere. The inverse of the linear map so defined would convert back. I like this method better, since the inverse operation has a very simple form.
- Edit: you would have to normalize in this case as well, or you could send the end vector to $(\cos\phi, \sin\phi, 0)$, where $\phi$ is the angle between the start and end vectors, which is the inverse cosine of their dot product, or $2\pi$ minus that for interpolations longer than 180 degrees. The upper limit of $\theta$ is then $\phi$. The inverse is slightly more complicated in this case.
- Further edit: if my algebra is right, the final interpolation is simply
- $p(\theta) = \dfrac{\sin(\phi - \theta)\, s + \sin(\theta)\, e}{\sin\phi},$
- where $s$ is the start vector, $e$ is the end vector, and $\theta$ and $\phi$ are as above. The singularities of this conversion occur at $\phi = 0$ or $\pi$, exactly where they should. Sorry for all the edits; I'm alternating this with doing something deeply tedious.... 67.158.43.41 (talk) 13:24, 3 November 2010 (UTC)
- (edit conflict) To implement your interpolate/normalize with constant speed, write $\phi = \cos^{-1}(s \cdot e)$ and $r(t)$ for the point an angle $t\phi$ along the arc from $s$, where $r(t)$ is the result. Then define the unnormalized $v(u) = (1 - u)\, s + u\, e$ and observe that $\tan(t\phi) = \dfrac{u \sin\phi}{1 - u + u \cos\phi}$. Solving gives $u = \dfrac{\tan(t\phi)}{\sin\phi + (1 - \cos\phi)\tan(t\phi)}$ (I think this is the correct branch for all $0 \le t \le 1$). Then the algorithm is just to take $r(t) = v(u)/\lVert v(u) \rVert$. --Tardis (talk) 15:00, 3 November 2010 (UTC)
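A minimal Python sketch along the lines of the interpolation above (equivalently the closed form $p(\theta) = \frac{\sin(\phi - \theta)\, s + \sin\theta\, e}{\sin\phi}$ with $\theta = t\phi$), assuming latitude/longitude in radians on a unit sphere; the names are illustrative and the antipodal case $\phi = \pi$ is not handled:

```python
# Illustrative sketch of constant-speed great-circle interpolation between two
# (latitude, longitude) points given in radians.  Assumes a unit sphere; the
# antipodal case (phi = pi) is not handled.
import numpy as np

def to_cartesian(lat, lon):
    """(latitude, longitude) in radians -> unit vector."""
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def to_polar(v):
    """Unit vector -> (latitude, longitude) in radians."""
    return np.arcsin(v[2]), np.arctan2(v[1], v[0])

def great_circle_interp(start, end, t):
    """Point a fraction t (0..1) of the way from start to end along the shorter arc."""
    s, e = to_cartesian(*start), to_cartesian(*end)
    phi = np.arccos(np.clip(np.dot(s, e), -1.0, 1.0))  # angle between the endpoints
    if np.isclose(phi, 0.0):
        return start                                   # coincident endpoints
    p = (np.sin((1.0 - t) * phi) * s + np.sin(t * phi) * e) / np.sin(phi)
    return to_polar(p / np.linalg.norm(p))

# Halfway between (0 N, 0 E) and (0 N, 90 E) should be (0 N, 45 E):
print(np.degrees(great_circle_interp((0.0, 0.0), (0.0, np.pi / 2), 0.5)))
```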
- I can't find it in an article, but what I would do is:
- * find the quaternion that rotates one vector to the other.
- * convert it to axis-angle form (alternately you can derive the axis and angle directly, but this is, I think, clearer)
- * interpolate the angle from 0 to the angle calculated
- * generate a quaternion from the axis and interpolated angle, and use this to rotate the first of the vectors (the one rotated from in the first step)
- the vector will move linearly along the great circle, i.e. at a constant speed along the surface, as the angle is interpolated linearly. The conversions to and from axis angle are at Rotation representation (mathematics)#Euler axis/angle ↔ quaternion. I can't find anywhere that gives the rotation between two vectors as a quaternion, but it's a straightforward result. --JohnBlackburnewordsdeeds 13:40, 3 November 2010 (UTC)
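A sketch of the axis-angle recipe above, with Rodrigues' rotation formula standing in for explicit quaternion arithmetic (the resulting rotation is the same); names are illustrative and the antipodal case, where the axis is undefined, is not handled:

```python
# Illustrative sketch: rotate the start vector about the axis s x e through a
# linearly interpolated angle, using Rodrigues' rotation formula rather than
# quaternions.  Assumes unit input vectors.
import numpy as np

def interp_axis_angle(s, e, t):
    """Rotate unit vector s toward unit vector e through a fraction t of the angle between them."""
    axis = np.cross(s, e)
    sin_a, cos_a = np.linalg.norm(axis), np.dot(s, e)
    if np.isclose(sin_a, 0.0):
        return s                              # parallel vectors: nothing to rotate about
    axis /= sin_a                             # unit rotation axis
    angle = t * np.arctan2(sin_a, cos_a)      # interpolated angle
    # Rodrigues' formula: v cos(a) + (k x v) sin(a) + k (k . v)(1 - cos(a))
    return (s * np.cos(angle)
            + np.cross(axis, s) * np.sin(angle)
            + axis * np.dot(axis, s) * (1.0 - np.cos(angle)))

s = np.array([1.0, 0.0, 0.0])
e = np.array([0.0, 1.0, 0.0])
print(interp_axis_angle(s, e, 0.5))           # ~ [0.7071, 0.7071, 0]
```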
Begging the question and differentiating $e^x$
Yesterday in class we were given a fairly simple assignment: show that $\frac{d}{dx} e^x = e^x$. This was supposed to illustrate something like the importance of rigorous proofs—the actual problem wasn't important and was in fact a problem from Calc 1. Oddly, a dispute arose over the problem nonetheless. One of my classmates claimed that you can show $\frac{d}{dx} e^x = e^x$ by differentiating the Taylor series $e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$. I contested that this was begging the question because to know the Taylor series you have to know the derivative. She said that $e^x$ can be defined by its Taylor series, so this does not beg the question. Who is right? Thanks. —Preceding unsigned comment added by 24.92.78.167 (talk) 23:40, 3 November 2010 (UTC)
- Hmm. It might be worth considering that you only need to know the derivative at $x = 0$ to construct your Taylor series (in this case the expansion is about $0$, and it's a Maclaurin series). An alternative way to get to the derivative of $e^x$ at $x = 0$ (like, say, the basic definition of e) would remove the circularity ... --86.130.152.0 (talk) 00:09, 4 November 2010 (UTC)
- No, you need to know all derivatives (first, second, third, ...) of $e^x$ evaluated at $x = 0$ to construct the Maclaurin series. However, I support 24.92's classmate: You certainly can take the Maclaurin series as the definition of $e^x$, and I think this approach is found in several complex analysis texts. There are other reasonable options. You could define $e^x$ as the inverse of $\ln x$; or you could define it as the unique function that is its own derivative and takes the value 1 at $x = 0$. (My calculus text offers a choice between these two presentations.) Your assignment is ambiguous without a specified starting point, though presumably most people in the class knew what was expected. (But it's always good to stop and think about the assumptions you're making.) The fact that all of these approaches to defining $e^x$ are equivalent is a theorem, and your assignment was one step in the proof of that theorem. 140.114.81.55 (talk) 03:45, 4 November 2010 (UTC)
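For reference, the term-by-term computation that the series definition licenses (a power series may be differentiated term by term inside its radius of convergence, which here is all of $\mathbb{R}$):

$$\frac{d}{dx}\, e^x \;=\; \frac{d}{dx} \sum_{n=0}^{\infty} \frac{x^n}{n!} \;=\; \sum_{n=1}^{\infty} \frac{n\, x^{n-1}}{n!} \;=\; \sum_{n=1}^{\infty} \frac{x^{n-1}}{(n-1)!} \;=\; \sum_{m=0}^{\infty} \frac{x^m}{m!} \;=\; e^x.$$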
- I'd agree with the OP in the sense that $e^x$'s Maclaurin series was almost certainly a result of applying calculus to a different definition. That is, historically, the proof provided would very likely be circular. However, definitions can come out of thin air, and if your class is using the Maclaurin series as the definition, the proof is correct for your purposes and not logically circular.
- I really like the definition of the exponential function as the (unique) solution to the differential equation $y' = y$ with $y(0) = 1$, as mentioned above. Yet another definition could take $\lim_{h \to 0} \frac{e^h - 1}{h} = 1$ as the defining property of $e$, which gives your result directly from the definition of the derivative. 67.158.43.41 (talk) 05:30, 4 November 2010 (UTC)
- Since I don't see it quoted above, I'd like to make reference to this article. Pallida Mors 08:32, 4 November 2010 (UTC)