Wikipedia:Reference desk/Archives/Mathematics/2010 February 2
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives. The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
February 2
Asymptotics of Euler Numbers
I was looking at the Taylor series of hyperbolic functions, particularly of sech(x). I am confused now about the Euler numbers, which are defined by sech(x) = Σ_{n=0}^∞ E_n x^n/n!. This series is supposed to have radius of convergence π/2.
The Euler number article claims |E_{2n}| ~ 8√(n/π)·(4n/(πe))^{2n}. It would all make sense to me with a factorial term instead of that n^{2n} term; isn't that way too powerful? Won't that overwhelm the n factorial in the denominator of the Taylor series, along with the remaining exponential pieces, so that terms of the Taylor series grow arbitrarily large for any nonzero x?
What am I missing here? 207.68.113.232 (talk) 02:16, 2 February 2010 (UTC)
- You should--I surmise--be able to see that the answer to your question is No--and be able to derive the radius of convergence--from reading Stirling's approximation. Julzes (talk) 03:14, 2 February 2010 (UTC)
- Perhaps you're missing the subscript 2n (not n) on the left-hand side, as I first did? —Bkell (talk) 07:18, 2 February 2010 (UTC)
- Details. Notice that you can very easily derive asymptotics for the coefficients of f(z) = sech(z) by a standard procedure. The poles of f(z) are the solutions of exp(2z) = -1, that is z_n = iπ(2n+1)/2 for all n ∈ Z. The values n = 0 and n = -1 give the poles of minimum modulus π/2, which is therefore the radius of convergence for the expansion of f(z) at z = 0. So you can write f(z) = a/(z - iπ/2) + b/(z + iπ/2) + h(z), where a, b are respectively the residues of f(z) at iπ/2 and -iπ/2 (here notice that since f(z) is an even function so is its principal part, and it has to be a = -b), and the function h(z) has a larger radius of convergence, actually 3π/2, corresponding to the next poles of f(z) from the origin. As a consequence the coefficients of h(z) have a growth O((2/(3π))^n), and the coefficients of the power series expansion of f(z) at z = 0 are asymptotically those of its principal part, a/(z - z_0) + b/(z + z_0) = 2az_0/(z^2 - z_0^2) (writing z_0 = iπ/2), which is a geometric series. Note also that the residue of f(z) at z_0 is the limit of (z - z_0)f(z) as z → z_0, that is, the reciprocal of the limit of cosh(z)/(z - z_0), and the last limit is the derivative of cosh(z) at z = z_0. To get E_n of course you have to multiply by n!, using the Stirling formula. So now you should be able to compute that formula, and even more precise asymptotics if you wish (consider the Laurent expansions at the next poles). Also note that the rough estimate E_n = O(n!(2/π)^n) is immediately available once you know the minimum modulus of the poles, and that the fact that the E_n vanish for odd n is a consequence of f(z) being even. --pma 09:11, 2 February 2010 (UTC)
- Actually in this case the residues of f(z) at all poles are easily computed, giving rise to a classic convergent series of the form sech(z) = π Σ_{k≥0} (-1)^k (2k+1)/((k+1/2)^2 π^2 + z^2) (I couldn't find it in Wikipedia but I'm sure it's there). Then you can expand each term and rearrange into a power series within the radius of convergence π/2. This gives an exact expression for the coefficients of the power series expansion of sech(z); in particular you may derive more refined asymptotics and bounds. --pma 12:26, 2 February 2010 (UTC)
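Spelling out the rearrangement just described (a sketch only; β denotes the Dirichlet beta function, the kind of "S_n" sum referred to further down the thread):

```latex
% Expand each term of the pole series as a geometric series in z^2
% (valid for |z| < pi/2) and collect the coefficient of z^{2n}.
\[
  \operatorname{sech}(z)
    = \pi \sum_{k \ge 0} \frac{(-1)^k (2k+1)}{\bigl(k+\tfrac12\bigr)^2 \pi^2 + z^2}
    = \sum_{n \ge 0} (-1)^n \, \frac{2^{2n+2}}{\pi^{2n+1}} \, \beta(2n+1) \, z^{2n},
  \qquad
  \beta(s) = \sum_{k \ge 0} \frac{(-1)^k}{(2k+1)^s}.
\]
% Comparing with sech(z) = \sum_n E_{2n} z^{2n} / (2n)! gives the exact value
\[
  E_{2n} = (-1)^n \, \frac{2^{2n+2} \, (2n)!}{\pi^{2n+1}} \, \beta(2n+1).
\]
```

Since β(2n+1) → 1 rapidly, this also recovers the rough estimate |E_{2n}| ≈ 2·(2n)!·(2/π)^{2n+1} coming from the two nearest poles alone.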
- Thanks! (from the OP) I knew the nearest poles would be pi over 2 away. Stirling's approximation is precisely what I was missing. 146.186.131.95 (talk) 13:17, 2 February 2010 (UTC)
- Good. Btw, following the above lines one immediately finds the exact expression for E_n in terms of "S_n" reported here. --pma 14:15, 2 February 2010 (UTC)
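For anyone who wants to see the numbers, here is a short Python sketch of the asymptotics discussed above. It computes the Euler numbers from the standard recurrence Σ_{k=0}^{n} C(2n,2k)·E_{2k} = 0 (which follows from sech(x)·cosh(x) = 1, not from anything specific in the thread) and compares |E_{2n}| with the two-pole estimate 2·(2n)!·(2/π)^{2n+1} and with the formula 8·√(n/π)·(4n/(πe))^{2n} quoted from the Euler number article.

```python
import math

def euler_numbers(n_max):
    """Even-indexed Euler numbers E_0, E_2, ..., E_{2*n_max}, computed from
    the recurrence sum_{k=0}^{n} C(2n, 2k) * E_{2k} = 0 (from sech * cosh = 1)."""
    E = [1]  # E_0 = 1
    for n in range(1, n_max + 1):
        E.append(-sum(math.comb(2 * n, 2 * k) * E[k] for k in range(n)))
    return E

E = euler_numbers(20)
for n in (2, 5, 10, 20):
    exact = abs(E[n])  # |E_{2n}|
    two_pole = 2 * math.factorial(2 * n) * (2 / math.pi) ** (2 * n + 1)
    article = 8 * math.sqrt(n / math.pi) * (4 * n / (math.pi * math.e)) ** (2 * n)
    print(n, exact / two_pole, exact / article)

# Both ratios tend to 1: |E_{2n}| grows like (2n)! * (2/pi)^{2n}, so the terms
# E_{2n} x^{2n} / (2n)! of the sech series behave like (2x/pi)^{2n}, which is
# exactly why the radius of convergence is pi/2.
```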
P-value: What is the connection between a significance level of 5% and a likelihood of 30%?
In the Wikipedia article P-value it says:
"Generally, one rejects the null hypothesis if the p-value is smaller than or equal to the significance level,[1] often represented by the Greek letter α (alpha). If the level is 0.05, then results that are only 30% likely or less are deemed extraordinary, given that the null hypothesis is true."
This confuses me. I thought that
-if the significance level is 0.05, results with a p-value of 0.05 or less are deemed extraordinary enough,
and that
-a p-value of 0.05 means that the results are 5% likely (to have arisen by chance, considering that the null hypothesis is true), and not 30%.
Georg Stillfried (talk) 14:56, 2 February 2010 (UTC)
- This is probably an error in the article. A p-value of 5% means that the probability of observing what you observed, given that the null hypothesis is true, is 5%. Wikiant (talk) 15:00, 2 February 2010 (UTC)
- Uncaught vandalism from the 11th of January. Fixed now. Algebraist 15:03, 2 February 2010 (UTC)
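To put a number on the corrected statement, here is a small Python sketch with a made-up coin-flip example (the scenario and names are illustrative only): under the null hypothesis that a coin is fair, the one-sided p-value for observing 61 heads in 100 flips is P(X ≥ 61) ≈ 0.018, which is at most 0.05, so at the 5% significance level one would reject the null hypothesis.

```python
from math import comb

def one_sided_binomial_p_value(n_flips, n_heads, p_null=0.5):
    """Probability, assuming the null hypothesis (a fair coin by default),
    of seeing a result at least as extreme as n_heads heads in n_flips flips."""
    return sum(comb(n_flips, k) * p_null**k * (1 - p_null)**(n_flips - k)
               for k in range(n_heads, n_flips + 1))

p = one_sided_binomial_p_value(100, 61)
print(p)          # about 0.018
print(p <= 0.05)  # True: reject the null hypothesis at significance level 0.05
```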
Comparing vectors
I've been writing a survey paper for a few months and I want to see if there are any other areas of research I can include. The topic is comparing vectors. In this realm, a vector is a set of discrete items in a specific order. The first one is always the first one. The vectors can grow, so a new last one can be added at any time. I've covered a lot of research into comparing the vectors using a cosine function and using Levenshtein-based algorithms. I've tried to find adaptations of BLAST/FASTA used in protein strands, but found nothing. Is there a fundamental method of comparing vectors that I'm missing? There has to be more than two methods. -- kainaw™ 15:22, 2 February 2010 (UTC)
- Are these vectors supposed to be representing something specific? What do you want to achieve by comparing them? What comparison methods are sensible will depend crucially on these things. Algebraist 15:39, 2 February 2010 (UTC)
- By "discrete", I mean that a value in one vector indicates the same thing as that value showing up in another vector. Some examples: vectors of URLs visited by users. Vectors of UPC codes on foods purchased by customers. Vectors of numbers showing up on a lottery. The values have meaning, but what is being compared is the similarity (or lack of similarity) of vectors. -- kainaw™ 15:43, 2 February 2010 (UTC)
- I should have clarified that by stating "survey paper", I am interested in bad methods of comparison as well as optimal methods. I already have over 200 pages of detail on methods I've studied and plan to add another 300 pages or so. -- kainaw™ 15:58, 2 February 2010 (UTC)
- The first problem is that what you are talking about is not really a vector in the sense most commonly used in mathematics. It is really a sequence; or a multiset, if order is not important; or a set, if repetition is impossible or not important. -- Meni Rosenfeld (talk) 16:46, 2 February 2010 (UTC)
- I found a good survey here with a couple algorithms that I haven't studied (yet). From these, I expect to find a few more algorithms that I can include in my survey. -- kainaw™ 05:47, 3 February 2010 (UTC)
- Order is very important (the main point) and repetition is expected. Therefore, it is not a set. Each of the sequences has an origin that does not change (the first item) and continues to the next item and the next item and the next item... In computer science (where the comparison theories are applied), they are called arrays. I don't know of any concept of arrays in mathematics. -- kainaw™ 16:53, 2 February 2010 (UTC)
- I think a finite Sequence is the exact analog of an array. -- Meni Rosenfeld (talk) 17:14, 2 February 2010 (UTC)
- Searching for "sequence similarity" brings up bioinformatics (BLAST/FASTA), which I've already covered in depth. -- kainaw™ 16:56, 2 February 2010 (UTC)
- I don't know if you're going to get a good answer because the question is sort of vague. The method you would want for comparing two sequences simply depends on how you might want to define closeness. You could really pick any function you wanted. If you're looking for commonly used functions, that's tied to what the sequences are commonly used to represent. Besides proteins and DNA sequences, strings of words, or vectors in some n-dimensional space, what might you want to represent with sequences and compare? Rckrone (talk) 06:32, 3 February 2010 (UTC)
- I am purposely making it vague because I'm not interested in what the similarity is measuring. I am collecting, categorizing, and describing in high detail as many methods for comparing the similarity of sequences as possible. I'm focusing on sequences of FILL_IN_THE_BLANK in time right now. I haven't found a lot of methods that take time ordering into consideration. -- kainaw™ 06:37, 3 February 2010 (UTC)
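For concreteness, here is a minimal Python sketch of the two families of methods mentioned in this thread, applied to a made-up pair of URL-visit sequences (the URLs, data, and function names are illustrative only): cosine similarity on the item counts, which ignores order, and Levenshtein (edit) distance, which respects it.

```python
from collections import Counter
import math

def cosine_similarity(seq_a, seq_b):
    """Treat each sequence as a bag of items (order ignored) and compare
    the item-count vectors by the cosine of the angle between them."""
    ca, cb = Counter(seq_a), Counter(seq_b)
    dot = sum(ca[item] * cb[item] for item in ca)
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def levenshtein(seq_a, seq_b):
    """Minimum number of insertions, deletions, and substitutions needed
    to turn seq_a into seq_b; unlike cosine, this respects order."""
    prev = list(range(len(seq_b) + 1))
    for i, a in enumerate(seq_a, 1):
        curr = [i]
        for j, b in enumerate(seq_b, 1):
            curr.append(min(prev[j] + 1,              # delete a
                            curr[j - 1] + 1,          # insert b
                            prev[j - 1] + (a != b)))  # substitute a -> b
        prev = curr
    return prev[-1]

# Hypothetical data: two users' sequences of visited URLs.
visits_1 = ["/home", "/search", "/item/42", "/cart", "/checkout"]
visits_2 = ["/home", "/item/42", "/search", "/cart"]
print(cosine_similarity(visits_1, visits_2))  # order-insensitive similarity
print(levenshtein(visits_1, visits_2))        # order-sensitive distance
```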
Units Problem
I'm in an intro physics class and completely confused about a unit conversion problem. Any help (not the answer, but pointing me in the right direction) would be appreciated!
Suppose x=ay^(3/2) where a=7.81 g/Tm. Find the value of y when x=61.7 Eg (fm)^2/(ms)^3
Note that that's femtometers squared OVER milliseconds cubed.
I'm just so confused as to how to combine these units! 209.6.54.248 (talk) 17:23, 2 February 2010 (UTC)
- First solve the equation for y using algebra. Then see what that does to the units. 66.127.55.192 (talk) 17:56, 2 February 2010 (UTC)
- OP here, I've solved for y using algebra to come up with y= cube root of (3806.89 x 10³⁶ g² f⁴ m⁴ Ym²) ALL DIVIDED BY cube root of (60.9961 g² m⁶ s⁶)
I'm still stuck! 209.6.54.248 (talk) 19:37, 2 February 2010 (UTC)
- First change the units to metres and seconds, then you can divide the numbers, and you can cancel units that occur in both numerator and denominator before taking the cube root of the whole expression as the last step. (Divide powers by 3 to get the cube root). I'm puzzled by the units you give in the question. Could you explain them in words? What are "f" & "Y" in your answer? Perhaps it would help if you looked at some really simple examples first. Dbfirs 21:41, 2 February 2010 (UTC)
- All problems of change of unit use the same principle, as in the simple example of 6 secs to be converted to millisecs: 6 secs × (millisecs per sec) = 6 × 1000 = 6000 millisecs. Note how the "unit A per unit B" acts as a fraction to cancel the multiplying "unit B". This conversion can be done in both numerator and denominator, so that g/sec could be changed to kg/min by applying the separate factors 1000 and 60, as appropriate.→86.152.78.134 (talk) 23:23, 2 February 2010 (UTC)
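To make the prefix bookkeeping concrete without giving the homework answer away, here is a small Python sketch of the principle in the comment above, applied only to the simple g/sec to kg/min example (the names are illustrative only): every metric prefix is just a power of ten, so rewrite each prefixed unit in terms of base units first and the conversion becomes plain arithmetic.

```python
# Metric prefixes are just powers of ten.
PREFIX = {"E": 1e18, "T": 1e12, "k": 1e3, "m": 1e-3, "f": 1e-15}

# Simple example from the comment above: convert 1 g/s to kg/min.
value_g_per_s = 1.0
kg_per_g = 1 / PREFIX["k"]   # 1 g = 1e-3 kg
s_per_min = 60.0             # 1 min = 60 s, so each "per second" is 60 "per minute"
value_kg_per_min = value_g_per_s * kg_per_g * s_per_min
print(value_kg_per_min)      # 0.06, i.e. 1 g/s = 0.06 kg/min

# The same bookkeeping handles compound units such as Eg*(fm)^2/(ms)^3:
# replace Eg -> 1e18 g, fm -> 1e-15 m, ms -> 1e-3 s, collect the powers of
# ten, cancel whatever appears in both numerator and denominator, and only
# then do the remaining algebra (here, taking the cube root).
```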