Wikipedia:Reference desk/Archives/Mathematics/2008 January 10
Welcome to the Wikipedia Mathematics Reference Desk Archives. The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 10
Finding the area under a curve (calculus question)
I've been struggling with a problem. I need to find the area under the curve y = 27 - x^3 on the interval [1, 3].
$\text{area} = \lim_{n \to \infty} \sum_{i=1}^{n} f(c_i)\,\Delta x$
$\text{area} = \lim_{n \to \infty} \sum_{i=1}^{n} \left(27 - \left[1 + \tfrac{i}{n}\right]\right)\tfrac{2}{n}$
I think I know the rules for how to manipulate sigma notation. Maybe I just keep making subtle algebra mistakes, but my friend and I can't figure it out. (The textbook and my class say the answer is 34.) Can anyone show me how to get this answer? Thanks! —anon.
- The easiest way is with integration: $\int_1^3 (27 - x^3)\,dx$, which is exactly what the sums become as the partitions become infinitely small. Strad (talk) 00:21, 10 January 2008 (UTC)
- Ah, but we haven't learned how to evaluate definite integrals yet, except with the formula I wrote above... :-( —anon
- So you wanna do it the hard way. First of all, your formula lost the cube. Secondly, $c_i$ should be the start of the interval plus $\tfrac{i}{n}$ times the length of the interval, which makes it $c_i = 1 + \tfrac{2i}{n}$. The sum should be $\sum_{i=1}^{n} \left(27 - \left(1 + \tfrac{2i}{n}\right)^3\right)\tfrac{2}{n}$
- Since n is not dependent on i, the 2/n can be taken out of the sum: $\tfrac{2}{n} \sum_{i=1}^{n} \left(27 - \left(1 + \tfrac{2i}{n}\right)^3\right)$
- The cube has to be expanded as in $(a+b)^3 = a^3 + 3a^2 b + 3ab^2 + b^3$: $\tfrac{2}{n} \sum_{i=1}^{n} \left(26 - \tfrac{6i}{n} - \tfrac{12i^2}{n^2} - \tfrac{8i^3}{n^3}\right)$
- Now the sum can be split into simpler sums and everything not dependent on i is moved outside the sums (like the 2/n earlier): $\tfrac{2}{n} \left(26n - \tfrac{6}{n} \sum_{i=1}^{n} i - \tfrac{12}{n^2} \sum_{i=1}^{n} i^2 - \tfrac{8}{n^3} \sum_{i=1}^{n} i^3\right)$
- Those simple sums can all be found at Summation#Identities.
- The rest should be easy: substituting $\sum_{i=1}^{n} i = \tfrac{n(n+1)}{2}$, $\sum_{i=1}^{n} i^2 = \tfrac{n(n+1)(2n+1)}{6}$ and $\sum_{i=1}^{n} i^3 = \tfrac{n^2(n+1)^2}{4}$ gives $52 - \tfrac{6(n+1)}{n} - \tfrac{4(n+1)(2n+1)}{n^2} - \tfrac{4(n+1)^2}{n^2}$
- And you can see what happens when n goes to infinity.
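For anyone who wants to check that limit numerically, here is a minimal sketch in Python (the helper name riemann_sum is just illustrative, not from the thread); it computes the right-endpoint Riemann sums used above:

```python
# Right-endpoint Riemann sums for f(x) = 27 - x^3 on [1, 3];
# the values should approach 34 as n grows.
def riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum of f on [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

f = lambda x: 27 - x**3
for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(f, 1, 3, n))
```

With n = 10000 the sum is already within a few thousandths of 34, since f is decreasing on [1, 3] and the right-endpoint error shrinks like 1/n.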
- Do you know (or can you prove) that the "area" under the curve y = 27 - x^3 is equal to [(the area under the curve y = 27) minus (the area under the curve y = x^3)]? This won't solve your underlying problem, but it may make the algebra easier. Tesseran (talk) 03:27, 10 January 2008 (UTC)
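For reference, that split works out quickly, since the first piece is just a 2-by-27 rectangle:

$\int_1^3 (27 - x^3)\,dx = \int_1^3 27\,dx - \int_1^3 x^3\,dx = 54 - 20 = 34$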
Roman
Were there any weaknesses in the Roman numeral system? —Preceding unsigned comment added by Invisiblebug590 (talk • contribs) 07:39, 10 January 2008 (UTC)
- It's very hard to write large numbers and it's hard to do calculations. -- Meni Rosenfeld (talk) 08:35, 10 January 2008 (UTC)
It was used essentially to number the legions, after all. PMajer (talk) 12:46, 10 January 2008 (UTC)
The biggest problem with the Roman system (along with many other ancient number systems) was the lack of a zero for place holding. This meant they had to continually invent new symbols as larger and larger numbers were required, until by the Middle Ages virtually all of the alphabet had a numerical value. I don't entirely hold with the widespread opinion that it is hard to do calculations with Roman numerals. In particular, it is often said that multiplication of large numbers and long division are difficult. I have practised both of these a bit just to see if this is true. I found that the difficulty was mainly my unfamiliarity with the system. Once this was overcome, I found it actually easier in some respects: the symbols retain the same value wherever you write them, so you do not need to worry about column values. SpinningSpark 16:39, 10 January 2008 (UTC)
- Exactly, and by the way, another thing to recall is that an efficient number system was never much developed simply because there was not a great need for one, even though Greek mathematics, for instance, was extremely advanced. The point is that, at that time, a geometric formalism was the most natural and satisfactory thing for both theoretical and practical purposes. Think about this: if you have to draw up a plan and need to compute a length or an angle, you can do it directly by a geometric construction, like the ones described by Euclid (in this respect Euclid's Elements is the AutoCAD of antiquity). By contrast, measuring a length, then numerically computing its product with the square root of 2, then measuring out and drawing a segment of that length, is ridiculous compared to the efficiency of the drawing algorithm (make a square on your segment, take the diagonal). The numeric formalism became really useful and efficient only after Gutenberg's invention of mechanical printing. Only then did numerals become the perfect way to store mathematical information compactly, as in tables of logarithms, etc. If you wish, we had a similar situation with the binary system: it is not that Euler or Gauss ignored it; the fact is that it was of little use to them, and so it remained until the invention of computers. PMajer (talk) 18:02, 10 January 2008 (UTC)
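To make the notation concrete, here is a minimal sketch of a converter in Python (an illustration, not from the thread); note that the standard subtractive symbol table stops at M = 1000, which is exactly the running-out-of-symbols problem described above:

```python
# Convert a positive integer (1..3999) to standard subtractive Roman notation.
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    out = []
    for value, symbol in PAIRS:
        count, n = divmod(n, value)  # how many of this symbol, then the remainder
        out.append(symbol * count)
    return "".join(out)

print(to_roman(2008))  # MMVIII
print(to_roman(3999))  # MMMCMXCIX, the largest value before new symbols are needed
```

Note that each symbol keeps its value wherever it appears, which is the point made above about not needing to track column values.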
Confidence Ellipsoid, Least Squares
I'm working through a derivation of the confidence ellipsoid of an ordinary least squares problem. Several sources (Scheffé 1959, Wikipedia) all seem to give the following as the solution:
$\left\{\beta : \frac{(\beta - b)^T X^T X (\beta - b)}{p\, s^2} \le F^{(\alpha)}_{p,\,n-p}\right\}$
where p is the number of parameters fit in the model. When I do the derivation, I get it should be n, the number of data points. In particular,
- $X\beta - Xb = X(\beta - b)$
And (since X is n by p and $(\beta - b)$ is p by 1, the dimension of the product will be n by 1). Thus $\|X(\beta - b)\|^2 / \sigma^2$ should be $\chi^2$ with n, not p, degrees of freedom, so the ratio
$\frac{(\beta - b)^T X^T X (\beta - b) / n}{s^2}$
should be distributed $F_{n,\,n-p}$, not $F_{p,\,n-p}$, which seems to be what every book I check says. Anyone spot my mistake? --TeaDrinker (talk) 20:52, 10 January 2008 (UTC)
- The simplest thing to do is note that $\beta - b$ is p-dimensional, so that while multiplying it by X gives an n-dimensional result, it is one that always lies in a p-dimensional (or smaller) subspace of $\mathbb{R}^n$. Matrix multiplication can never increase the dimension of a linear space. Baccyak4H (Yak!) 04:49, 11 January 2008 (UTC)
- I should add that the operative part here is that the df for the numerator chi squared comes from the rank of the covariance matrix of b's normal distribution, which is p for an identifiable model. The rank of the subsequent n-dimensional normal's covariance matrix is thus still p, even though the multivariate normal is n-dimensional. Baccyak4H (Yak!) 15:39, 11 January 2008 (UTC)
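A quick simulation makes the rank argument visible; this is only a sketch with assumed sizes for n, p and σ (none taken from the thread):

```python
# Check empirically that ||X(b - beta)||^2 / sigma^2 behaves like a chi-squared
# with p degrees of freedom (mean p), not n, although the vector is n-dimensional.
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 3, 2.0                       # assumed example sizes
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])              # assumed true parameters

stats = []
for _ in range(20000):
    y = X @ beta + sigma * rng.standard_normal(n)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimate
    r = X @ (b - beta)                         # n-dimensional, but rank-p covariance
    stats.append(r @ r / sigma**2)

print(np.mean(stats))  # close to p = 3, nowhere near n = 50
```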
- Thanks! I was pretty sure my proof fell apart somewhere, I was just having trouble tracking down where. I'm thinking, based on what you say, the proof falls apart on the independence of the numerator and denominator, needed for the F distribution (although I have not been able to fully convince myself of that). In any event, thanks! -TeaDrinker (talk) 21:23, 11 January 2008 (UTC)
- Actually, I did not see anywhere that you dealt with the independence incorrectly. It is simply that the rank of the covariance matrix of $X(\beta - b)$ is p, not n. So its squared norm is a $\chi^2$ with p degrees of freedom, not n. Baccyak4H (Yak!) 04:44, 12 January 2008 (UTC)
- Hmm, that is odd then. If I understand you, you're saying $\|X(\beta - b)\|^2 / \sigma^2$ is not $\chi^2_n$, since that would be projecting a p-dimensional span (the betas) into n dimensions and getting an n-dimensional span. However, writing $b - \beta = (X^T X)^{-1} X^T \varepsilon$ with $\varepsilon \sim N_n(0, \sigma^2 I)$,
$\frac{\|X(\beta - b)\|^2}{\sigma^2} = \frac{\varepsilon^T X (X^T X)^{-1} X^T \, X (X^T X)^{-1} X^T \varepsilon}{\sigma^2} = \frac{\varepsilon^T X (X^T X)^{-1} X^T \varepsilon}{\sigma^2} = \frac{\varepsilon^T \varepsilon}{\sigma^2} \sim \chi^2_n$
- Perhaps I am making a simple mistake somewhere. Many thanks, your comments are very much appreciated! --TeaDrinker (talk) 18:04, 12 January 2008 (UTC)
- The only mistake I see there is that on the last step, the projection or hat matrix (the big mess of X things) does not simplify to the n-dimensional identity. Rather it is the projection matrix for X, an n-by-n matrix of rank p (again, starting with the rank-p matrix X and involving only multiplication and inversion cannot increase the rank past p). It is the rank of the covariance matrix that is important -- it becomes the degrees of freedom.
- If you wanted to, you could come up with a matrix to premultiply $X(\beta - b)$ which would give you an $N_p(0, I)$, but which would cancel itself out in the middle of the crossproduct if you reckoned that multiplication first. And what is the squared norm of an $N_p(0, I)$? You bet. Baccyak4H (Yak!) 05:40, 13 January 2008 (UTC)
- Ahh, yes, I see it. I had failed to recognize the hat matrix, and instead treated $(X^T X)^{-1}$ as $X^{-1} (X^T)^{-1}$ (which then very nicely simplifies); the latter, as I think about it, need not exist. Thanks! -TeaDrinker (talk) 06:19, 13 January 2008 (UTC)
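To see the cancellation concretely, here is a small numerical sketch (assumed notation; the whitening matrix uses a Cholesky factor of $X^T X$, one valid choice among many):

```python
# With X'X = L L', the matrix A = L^{-1} X' premultiplies X(beta - b) to give an
# N_p(0, I) after scaling by sigma, yet A'A collapses to the hat matrix H in the
# crossproduct, so the squared norm (beta - b)' X'X (beta - b) is unchanged.
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2                                   # assumed example sizes
X = rng.standard_normal((n, p))

L = np.linalg.cholesky(X.T @ X)               # X'X = L L'
A = np.linalg.solve(L, X.T)                   # A = L^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix: rank p, not I_n

print(np.allclose(A.T @ A, H))                # True: A'A is the hat matrix
print(np.linalg.matrix_rank(H), "of", n)      # 2 of 6: rank p, not n
v = rng.standard_normal(p)                    # stands in for beta - b
print(np.allclose((A @ X @ v) @ (A @ X @ v), v @ (X.T @ X) @ v))  # True
```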