Wikipedia:Reference desk/Archives/Mathematics/2011 September 28
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
September 28
Power Spectrum Estimation
I have some measurements and I am trying to get hold of their power spectrum. The measurements are discrete, so the estimate of the power spectrum I get is also discrete. My question is, what is a good method to interpolate the values in between the frequencies of a power spectrum estimate? I only need interpolation, no extrapolation. This is just an opinion question to see what others more experienced around here think. For example, I may only get 128 frequencies, so should I use just linear interpolation, the nearest-neighbor method, cubic splines, etc.? Cubic splines are very sensitive to the boundary conditions but will be very smooth. Which boundary conditions make sense here anyway (natural, clamped, etc.)? On the other hand, linear interpolation is faster and that is what we all see when we plot it (graphing calculators just linearly interpolate between the nodes). Nearest neighbor is dumb but is the fastest, and may be justified if my frequencies are very closely spaced (very small df). I am using Welch's method, so I am trying to use a small window (to take more averages and lower the variance), but that lowers my resolution. What do you guys think? Thanks! - Looking for Wisdom and Insight! (talk) 05:03, 28 September 2011 (UTC)
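A minimal MATLAB sketch of the trade-off being asked about, comparing interp1's built-in methods on a Welch estimate (pwelch and hann come from the Signal Processing Toolbox; the signal name, sampling rate and window length below are assumptions for illustration, not taken from the question):

% assumes x is the measured signal and fs its sampling rate
[pxx, f] = pwelch(x, hann(128), 64, 128, fs);    % 128-sample window, 50% overlap
fFine = linspace(f(1), f(end), 10*numel(f));     % interpolation only, no extrapolation
pNearest = interp1(f, pxx, fFine, 'nearest');    % fastest, piecewise constant
pLinear  = interp1(f, pxx, fFine, 'linear');     % what most plotting routines do anyway
pSpline  = interp1(f, pxx, fFine, 'spline');     % cubic spline, not-a-knot end conditions
plot(f, pxx, 'o', fFine, pLinear, '-', fFine, pSpline, '--')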
- Why interpolate at all? What are you going to use the interpolated values for? Looie496 (talk) 05:52, 28 September 2011 (UTC)
- Crudely, couldn't you just approximate each integral by a Riemann sum based on your sample? Sławomir Biały (talk) 11:37, 28 September 2011 (UTC)
- The typical way of getting an interpolated power spectrum is a technique called zero-padding. After windowing (or whatever), take each set of data, remove the mean and extend the data set by adding a large number of zeros on the end. Those zeroes will contribute nothing to the Fourier sum / integral, and so they don't affect the values you receive, but it will force the discrete Fourier transform to return values at intermediate points. On platforms that implement the fast Fourier transform algorithm, there is a computational advantage to choosing the total number of points to be an exact power of 2. The power spectrum produced in this way will be smoothed in a natural fashion, but remember that smoothing doesn't add any information beyond what you already had. For example, one needs to know that the uncertainty in peak locations is still comparable to the spacing of frequencies in the original transform. Dragons flight (talk) 17:15, 28 September 2011 (UTC)
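A minimal MATLAB sketch of the zero-padding approach described above (the segment name, padding factor and sampling rate are assumptions for illustration):

% assumes seg is one windowed data segment and fs the sampling rate
seg = seg - mean(seg);                  % remove the mean first
nPad = 8*numel(seg);                    % pad to 8x the original length (a power of 2 helps the FFT)
S = fft(seg, nPad);                     % fft pads the data with trailing zeros to nPad points
pxx = abs(S(1:nPad/2+1)).^2 / (fs*numel(seg));
f = (0:nPad/2)*fs/nPad;                 % the frequency grid is now 8x finer than before
plot(f, 10*log10(pxx))

As Dragons flight notes, the extra points smooth the curve but add no information beyond what was already there.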
Integral Truths and mein eigenes Problem
I have a problem involving an integro-differential equation. It goes:
Now I was advised that the assumption is that , and to isolate the integral by rearranging the equation, such that I got:
Next I was told to take the derivative of both sides, and was told that this meant that since
that
But when I carry out the integration on , according to the Fundamental Theorem of Calculus, I get: , or have I done the wrong thing, since I thought that taking the derivative of an integral is where they cancel each other out. But also, if it is assumed that
- , what is
- ?
If I assume that, then I think I get since if , then , as stated to begin with. If I do say that , this seems correct, since it says that , and indeed , and since if (d/dt)y(0) = 0, being the first derivative, so will the second be, since that will just be the derivative of zero, so this seems right.
To solve this, I tried Laplace once more, and got
implies that , so that Laplace of = Laplace of , which means after carrying out all the steps that
- ,
which I thought was
- ,
but it does not seem to work when I check it. Now I had two different books giving two different results, such that in one of them my answer is , but in another it is given as
- , which does actually work.
I also have a second one involving the inverse power method for finding the eigenvalues of a four-by-four matrix, and how to program MATLAB to do what we want.
We programmed a 4 × 4 matrix into MATLAB with for loops, to get it to give us the array showing the four eigenvalues of the matrix, and now we need a way to carry out the inverse power method to verify the middle two. We had:
% Powers of matrices
n = 4;
% pick a starting vector and a matrix
x0 = ones(n,1);
B = [2 1 7 6; 2 0 5 6; 7 8 8 3; 9 6 3 4]
P = transpose(B)
A = B*P
[V,D] = eig(A)
pause
% first the dominant one
x = x0;
for I = 1:10
    x = A*x;
    x = x/norm(x);
end
v1 = x
l1 = dot(x,A*x)/dot(x,x)
pause
% then the other end of the spectrum
x = x0;
B = A - 402.2821*eye(n);
for I = 1:40
    x = B*x;
    x = x/norm(x);
end
v2 = x
l2 = dot(x,A*x)/dot(x,x)
C = inv(A - 402.2821*eye(n));
for I = 1:40
    x = C*x;
    x = x/norm(x);
end
This gave us:

B =

     2     1     7     6
     2     0     5     6
     7     8     8     3
     9     6     3     4

P =

     2     2     7     9
     1     0     8     6
     7     5     8     3
     6     6     3     4

A =

     90     75     96     69
     75     65     72     57
     96     72    186    147
     69     57    147    142

V =

    0.6704    0.0834    0.6186    0.4012
   -0.6837   -0.3143    0.5742    0.3226
   -0.2244    0.6642   -0.2735    0.6585
    0.1810   -0.6731   -0.4614    0.5489

D =

    0.0038         0         0         0
         0   15.0123         0         0
         0         0   65.7018         0
         0         0         0  402.2821

v1 =

    0.4012
    0.3226
    0.6585
    0.5489

l1 =

  402.2821

v2 =

   -0.5461
    0.7183
   -0.2871
    0.3216

l2 =

    6.9144
And this last bit was my attempt to find one of the middle values, that is, to do it by hand and confirm it was 15.0123, and/or the other middle one, 65.7018. I am not sure what we were supposed to do to get the computer to work these out. Thank You. Chris the Russian Christopher Lilly 05:38, 28 September 2011 (UTC)
- Please give your two questions separate header lines so that they can be answered separately. Otherwise this is going to be messy. Bo Jacoby (talk) 07:31, 28 September 2011 (UTC).
- Is it me or is there a minus missing in line 2, where the integral has been moved to the right hand side? Grandiose (me, talk, contribs) 17:23, 28 September 2011 (UTC)
Faith and Begorrah! You are right - I cannot understand how I got that wrong! This is the trouble with trying to rearrange equations. Although in this case it was only a typo on my part in writing it all down for this question on Wikipedia™. This means then that
- which means
- ,
and this is what I had anyway, so the only mistake I made was a typo for the first rearrangement, but other than that, since the mistake was not repeated with the differentiation, it all worked out. This I have since solved to my satisfaction, intending to use the solution
- , which, as stated, does actually work. The only problem I have now is simply understanding why the integral becomes the function
upon differentiation, and what the differentiation of an integral really means.
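A short sketch, in LaTeX, of the rule in question (the fundamental theorem of calculus, stated for a generic integrand f(s) because the original integrand is not reproduced above): differentiating an integral with respect to its upper limit simply returns the integrand evaluated at that limit,

\frac{d}{dt}\int_0^t f(s)\,ds = f(t),

and if the integrand also depends on t, the Leibniz rule adds a correction term:

\frac{d}{dt}\int_0^t K(t,s)\,ds = K(t,t) + \int_0^t \frac{\partial K}{\partial t}(t,s)\,ds.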
As for the computer one, sorry to put two questions in the same thread, but there it is. We are trying to find out how to find the eigenvalues of the matrix with MATLAB, and how to use the inverse power method to do so, or that is, how to find all four. Thank You. Chris the Russian Christopher Lilly 22:36, 28 September 2011 (UTC) — Preceding unsigned comment added by Christopher1968 (talk • contribs) 22:31, 28 September 2011 (UTC)
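A minimal MATLAB sketch of one way to pick off the interior eigenvalues with the inverse power method: shift close to the eigenvalue you want before inverting. The shift value is a guess read off the eig output above, and the names v3 and l3 are not from the original script:

% assumes A and x0 are as built in the script above
shift = 15;                         % rough guess near the eigenvalue we want (15.0123)
C = inv(A - shift*eye(4));          % shift-and-invert (a backslash solve would be better practice)
x = x0;
for I = 1:40
    x = C*x;
    x = x/norm(x);                  % converges to the eigenvector of the eigenvalue nearest the shift
end
v3 = x
l3 = dot(x,A*x)/dot(x,x)            % Rayleigh quotient: should come out near 15.0123

Repeating with shift = 65 should recover the other interior eigenvalue, near 65.7018.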
polar range
Why do we need to define a range for the polar coordinate system? Exx8 (talk) —Preceding undated comment added 10:49, 28 September 2011 (UTC).
- I think it's because a polar representation is otherwise non-unique. If the tuple (x, y), for real x and y, represents a Cartesian coordinate, the set of such tuples maps to the plane bijectively. If it is interpreted as a polar coordinate, for radius x and angle y, then (x, y) and (x, y + 2kπ) describe the same point for all integer k.--Leon (talk) 16:41, 28 September 2011 (UTC)
- I'm not sure that your notation's right there. In polar coordinates, the point (r, θ) is the same as the point (r, θ + 2kπ) for each whole number k. You can go around another turn and get back to where you started. (Like 6 a.m. and 6 p.m. have the same representation on an analogue clock, even though the hour hand has made a full turn extra when it's 6 p.m.) We sometimes specify a range for the angle θ, but we don't have to. It's often best not to, and it doesn't require much more work. For example, the point in Cartesian coordinates is given by , where k is any whole number. — Fly by Night (talk) 21:00, 28 September 2011 (UTC)
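A tiny MATLAB check of the point made above, that angles differing by a full turn describe the same Cartesian point (the sample radius and angle are arbitrary):

r = 2; theta = pi/6;
[x1, y1] = pol2cart(theta, r);          % pol2cart takes (theta, rho)
[x2, y2] = pol2cart(theta + 2*pi, r);   % one extra full turn
disp([x1 y1; x2 y2])                    % the two rows agree (up to rounding)
[t, rho] = cart2pol(x1, y1)             % cart2pol restricts the returned angle to (-pi, pi]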
collide with earth?
earth orbits around the sun; relative to the speed conjunction of other comets and asteroids, will one near moon size ever collide with earth? by mathematical calculation, the ellipse of other large masses and earth over time, how will it project over 3 light years in very high speed animation? (despite the sun becoming a red giant) — Preceding unsigned comment added by 207.6.211.175 (talk) 19:32, 28 September 2011 (UTC)
- That's unlikely unless the solar system enters a new phase of instability, similar to when Jupiter and Saturn reached a 2:1 orbital resonance, triggering the Late Heavy Bombardment. A possible scenario involves Mercury, whose orbital parameters aren't far from a region of instability. Given the current state of the solar system, and taking into account the fact that it is chaotic on a time scale of tens of millions of years, one can show that Mercury can collide with Venus or fall into the Sun within a billion years. When that happens, the solar system will reconfigure itself, which can e.g. involve Mars being ejected from the solar system, or possibly other catastrophic events. Count Iblis (talk) 20:09, 28 September 2011 (UTC)
- Wait, the solar system is chaotic already within tens of millions of years? Then what's the deal with everyone trying to predict whether when the Sun turns to a red giant (much later) it will eat Earth or not? If the orbits are so chaotic, the problem is not the Sun, we don't even know where Earth will be at that time. – b_jonas 10:17, 29 September 2011 (UTC)
- The general features of the Earth's orbit are fairly stable. It's chaotic inasmuch as we can't really determine where in its orbit the Earth will be in 10 million years, but we can be reasonably sure its orbit won't be significantly different. The only exception is close interactions with other large bodies, like Count Iblis describes, but I don't think they are considered particularly likely. --Tango (talk) 12:00, 3 October 2011 (UTC)
- That's what I thought, but then Count Iblis mentioned a possibility of Mars being ejected from the Solar System, which sounds rather scary. – b_jonas 20:06, 4 October 2011 (UTC)
epsilon-delta
hey, it's me again. So today I was doing some volunteer tutoring for younger students. One student asked me a question about the derivative of at 0: he asked, since theoretically it doesn't exist, why his graphing calculator gives a value for the derivative there. I explained that his calculator doesn't differentiate the way we do, but calculates a bunch of quotients Δy/Δx for very small ∆x; I was wondering idly how I would turn this into a rigorous (ε,δ) argument; this is what I have:
Suppose ε is the smallest value > 0 that your calculator can handle, and also suppose this value is the same when it calculates your Δx and Δy. Then your calculator is saying that for every ε' ≥ ε, as long as , for some δ ≥ ε (I know this isn't how a calculator actually thinks); but the calculator can't choose any epsilon less than ε, so as long as it's true for ε', δ ≥ ε, the calculator says it exists. I know this is roughly right, but it's not really clear and not really specific. Can someone please help me refine this argument (keeping to epsilons and deltas, though I'm sure there are plenty of more informal ways you could do it)? — Preceding unsigned comment added by 24.92.85.35 (talk) 22:51, 28 September 2011 (UTC)
- I imagine the calculator isn't doing anything of the sort, but is using an estimate for the derivative based on a formula from numerical analysis. A typical method to estimate f′(a) would be to find (f(a+e) − f(a−e))/(2e), which would have error, assuming f is reasonably well behaved, proportional to e². The fact that f is not well behaved is, I would guess, why the calculator is giving an answer instead of a divide-by-zero error. What is actually happening, though, is anybody's guess, since the calculator is running proprietary software written by programmers with confidentiality agreements. Symbolic algebra systems exist that can do this kind of thing exactly, and it probably won't be too long before calculators include such features, assuming they don't already.--RDBury (talk) 04:02, 29 September 2011 (UTC)
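A small MATLAB sketch of the central-difference estimate RDBury describes, applied to a function with no derivative at 0 (abs is only a stand-in, since the thread does not say which function the student used):

f = @(x) abs(x);          % stand-in for a function that is not differentiable at 0
a = 0;
e = 1e-4;                 % a "very small" step, as a calculator might use
d = (f(a + e) - f(a - e)) / (2*e)
% d comes out as 0: the symmetric difference averages the left-hand slope (-1)
% and the right-hand slope (+1), so a number is reported even though f'(0)
% does not exist. For smooth f the error of this estimate is proportional to e^2.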