Wikipedia:Reference desk/Archives/Mathematics/2007 May 7
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
May 7
Long division question
What is (x^4 + x^3 − 2x^2 + x − 3) ÷ (x^2 − x − 2)?
I dunno how to use the complicated code to display what I did to get to an answer, but I get stuck and it isn't as it is shown on the memo :(
Someone please help me? Thanks in advance. ► Adriaan90 ( Talk ♥ Contribs ) ♪♫ 19:01, 7 May 2007 (UTC)
- I doubt anyone here will answer your homework for you. Polynomial long division seems to explain this quite well, at a brief inspection. It's possible the answer you've been given is incorrect: if you post it here I (or someone) will verify it for you. 131.111.8.97 19:05, 7 May 2007 (UTC) that was me Algebraist 19:19, 7 May 2007 (UTC)
- I got it as x^2 + 2x + 2 ► Adriaan90 ( Talk ♥ Contribs ) ♪♫ 19:13, 7 May 2007 (UTC)
- Remember that the result of a long division problem is a quotient and a remainder. Are you saying that the quotient is x^2 + 2x + 2 and the remainder is 0? Algebraist 19:19, 7 May 2007 (UTC)
- Yes :P ► Adriaan90 ( Talk ♥ Contribs ) ♪♫ 19:25, 7 May 2007 (UTC)
- Then you are wrong, as you claimed. Note that (x^2 − x − 2)(x^2 + 2x + 2) = x^4 + x^3 − 2x^2 − 6x − 4. Can you see how to save the situation? Algebraist 19:32, 7 May 2007 (UTC)
- I dunno how to save that, but I did the thing again and I got 7x + 1. Can that be correct? I think the problem came when I was supposed to subtract (−2x^2) from (−2x^2) and I said the answer was 2x^2. ► Adriaan90 ( Talk ♥ Contribs ) ♪♫ 19:38, 7 May 2007 (UTC)
- If you're claiming that (x^4 + x^3 − 2x^2 + x − 3) = (x^2 − x − 2)(x^2 + 2x + 2) + (7x + 1), then I'm sure you can check that as well as I can (remember that to check division all you need is multiplication, which is easy). Algebraist 19:43, 7 May 2007 (UTC)
Hmmm. This is giving me so much trouble lol! I'll check back tomorrow after detention as I have to get in some sleep now... Anyways, I tried that multiplication thing, but I did it totally wrong! I'm just bad at maths I guess. lol. Well don't think I'm ignoring you, I'll be back, lol :P ► Adriaan90 ( Talk ♥ Contribs ) ♪♫ 20:18, 7 May 2007 (UTC) PS: If I remember, which I hope I will. Good night :D ► Adriaan90 ( Talk ♥ Contribs ) ♪♫ 20:18, 7 May 2007 (UTC)
- Let's try an easy way to multiply: a table. Write the terms of one polynomial, say x^2 − x − 2, vertically; write the terms of the other, say x^2 + 2x + 2, horizontally. Now fill in each entry with a product, then sum the entries.
   ·   |  x^2   |  2x    |  2
  x^2  |  x^4   |  2x^3  |  2x^2
  −x   |  −x^3  |  −2x^2 |  −2x
  −2   |  −2x^2 |  −4x   |  −4
- Thus (x^2 − x − 2)·(x^2 + 2x + 2) equals x^4 + 2x^3 + 2x^2 − x^3 − 2x^2 − 2x − 2x^2 − 4x − 4; now combine terms with like exponents. --KSmrqT 22:42, 7 May 2007 (UTC)
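Here is a minimal Python sketch of the same term-by-term multiplication the table organizes; the dict-of-exponents representation and the name poly_multiply are my own choices for illustration, not anything from the thread.

```python
# Sketch (not from the thread): a polynomial is a dict mapping exponent -> coefficient.
# Multiply by pairing every term of p with every term of q (exactly the entries of
# the table above), then sum coefficients of like powers.
def poly_multiply(p, q):
    product = {}
    for exp_p, coef_p in p.items():
        for exp_q, coef_q in q.items():
            e = exp_p + exp_q
            product[e] = product.get(e, 0) + coef_p * coef_q
    return {e: c for e, c in product.items() if c != 0}

p = {2: 1, 1: -1, 0: -2}   # x^2 - x - 2
q = {2: 1, 1: 2, 0: 2}     # x^2 + 2x + 2
print(poly_multiply(p, q))  # {4: 1, 3: 1, 2: -2, 1: -6, 0: -4}, i.e. x^4 + x^3 - 2x^2 - 6x - 4
```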
- Going on with that method, it's probably better to adopt lattice multiplication - basically the same, but you add the terms within the lattice --Fir0002 23:46, 8 May 2007 (UTC)
- That's easy? I've always just done multiplication in my head... I'm probably the wrong person to help here, aren't I? Algebraist 10:07, 8 May 2007 (UTC)
- Yes, it is a challenge to match the understanding and abilities of questioners. The table is mostly a visual reminder that we need to multiply each term of the first polynomial times each term of the second. It also serves as a bookkeeping device, so we don't accidentally overlook a pair. (Four kinds of mistake are likely: not knowing what to do, missing a pair, flubbing a sum, and the usual arithmetic and sign accidents.)
- Some teachers and textbooks use a controversial guide for binomial multiplication called the "FOIL rule". It is of dubious utility at the point of introduction, and spreads confusion and grief thereafter. The table will never let us down (not even with multinomials and noncommutative multiplication).
- Another popular method for multiplying polynomials in a single variable mimics integer multiplication, with columns arranged by power of x rather than power of 10. Polynomial division follows the same pattern, looking much like "long division".
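As a concrete sketch of that column-by-column long division (the coefficient-list convention, highest power first, and the helper name poly_divide are assumptions of mine rather than anything from the post):

```python
# Rough sketch of polynomial long division with coefficient lists, highest power first:
# x^4 + x^3 - 2x^2 + x - 3  ->  [1, 1, -2, 1, -3].
def poly_divide(dividend, divisor):
    """Return (quotient, remainder) of polynomial long division."""
    remainder = list(dividend)
    quotient = [0] * (len(dividend) - len(divisor) + 1)
    for i in range(len(quotient)):
        factor = remainder[i] / divisor[0]      # leading term of remainder / leading term of divisor
        quotient[i] = factor
        for j, d in enumerate(divisor):         # subtract factor * divisor, shifted into place
            remainder[i + j] -= factor * d
    return quotient, remainder[len(quotient):]  # what is left has degree below the divisor

q, r = poly_divide([1, 1, -2, 1, -3], [1, -1, -2])   # the question's division
print(q, r)   # [1.0, 2.0, 2.0] and [7.0, 1.0]: quotient x^2 + 2x + 2, remainder 7x + 1
```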
- Personally, I have made accidental calculation errors, and seen many more — some in peer reviewed publications. So I prefer to rely on a computer algebra system to check my work. Still, it's no guarantee; I could enter the terms wrong!
- And it cannot substitute for understanding. For example, I still remember my surprised delight when I encountered the theorem that the remainder of division by a monic linear polynomial relates to evaluation: the remainder of p(x) on division by x − a is simply p(a).
- It was so unexpected, relating division and evaluation; yet the statement is simple and easy to prove. No computer algebra system can replace that moment. --KSmrqT 15:26, 8 May 2007 (UTC)
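The result being alluded to is the polynomial remainder theorem; written out (a standard statement and proof sketch, not a quotation from the post):

```latex
% Polynomial remainder theorem (standard statement, not a quote from the thread).
% Division with remainder by the monic linear polynomial (x - a) gives
%   p(x) = (x - a) q(x) + r,   with deg r < deg(x - a) = 1,
% so r is a constant; substituting x = a kills the first term, leaving r = p(a).
\[
  p(x) = (x - a)\,q(x) + p(a).
\]
```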
- Use synthetic division; it's easier than long division, though there are times when only long division works. Coolotter88 23:20, 8 May 2007 (UTC)
- But synthetic division (if I've understood the article, I've never heard of it before) is just a cleaned-up algorithm for long division. And since the cleaning up has obscured the actual meaning of what's being done, it seems likely to lead to worse understanding. Sure, it allows faster calculation, but as KSmrq points out, calculation is what computers are for. I've never been happy with giving someone an opaque algorithm for a task, asserting (without proof) it works, and claiming to have taught them something. Algebraist 10:52, 9 May 2007 (UTC)
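For illustration, a small sketch (mine, using the usual coefficient-list convention) of synthetic division by a linear factor x − a: it is the same arithmetic long division performs with the layout compressed, and the final value is the remainder p(a) from the theorem above.

```python
# Synthetic division of a polynomial (coefficients highest power first) by (x - a).
# This is Horner's scheme: each step is "bring down, multiply by a, add", which is
# the same arithmetic as long division minus the written-out subtractions.
def synthetic_division(coeffs, a):
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + a * row[-1])
    return row[:-1], row[-1]          # quotient coefficients, remainder (= p(a))

# Divide x^3 - 6x^2 + 11x - 6 by (x - 1): quotient x^2 - 5x + 6, remainder 0.
print(synthetic_division([1, -6, 11, -6], 1))   # ([1, -5, 6], 0)
```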
- Ok, well I'm back as I said I would be, albeit one day late. Anyways, I made a stupid error in my calculation and in the end the answer was x^2 − x − 2. Thanks for all of your help, even if it was a bit difficult to understand at some points lol :D Please understand, because I am not a native English speaker... but thanks anyways :P ► Adriaan90 ( Talk ♥ Contribs ) ♪♫ 17:28, 9 May 2007 (UTC)
ln(ln(...ln(x))...)
We all know that lim_{x→∞} ln(x) = ∞. ln(ln(x)) would go to infinity too, but more slowly. What about ln(ln(...ln(x))...)? --Taraborn 21:43, 7 May 2007 (UTC)
- Well, the logarithm already has a smaller rate of growth than any power x^k (k > 0), and similarly, log(log x) = o(log x). Thus the nth iteration will have an even smaller rate of growth but still approach infinity. See also logarithmic growth and iterated logarithm, though those articles need to be expanded. nadav 23:10, 7 May 2007 (UTC)
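A quick numerical illustration of how slowly the iterates grow; the snippet and the sample values of x are mine, purely for concreteness.

```python
import math

def iterated_ln(x, n):
    """Apply the natural logarithm to x exactly n times."""
    for _ in range(n):
        x = math.log(x)
    return x

# Even for astronomically large x, a handful of logarithms gives a small number,
# yet for any fixed n the value still tends to infinity as x does.
for e in (10, 100, 1000, 10000):
    x = 10 ** e
    print(f"x = 10^{e}:", [round(iterated_ln(x, n), 3) for n in (1, 2, 3)])
```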
- This assumes you mean a possibly large, but finite n-fold iterated application of the ln function to argument x: they all go to infinity, however large n is. This can easily be shown by mathematical induction on n. What you need is the fact that if f(x) goes to infinity, then so does ln(f(x)). This is a special case of the fact that the composition of two functions having limit infinity at infinity also has that property. --LambiamTalk 23:27, 7 May 2007 (UTC)
- Induction covers the finite cases pretty easily, but does it hold for the limit as n->inf? Black Carrot 18:47, 8 May 2007 (UTC)
- If you instead mean that an infinite number of logarithms should be taken before letting x go to infinity, then the answer doesn't exist. For any given x, you will after some finite number of logarithms arrive at a negative number, which does not itself have a logarithm.
- More precisely, the answer doesn't exist if we are talking about real numbers. If you allow complex values, and thus use the complex logarithm, then the logarithm of that first negative number will have an imaginary part of π, and some real part. Iterating the complex logarithm on this will converge to the number that satisfies ln(z) = z, which is 0.318+1.337i. This value would thus be the result of the infinitely iterated logarithm of any number (!), and therefore also the limit after you let x go to infinity. --mglg(talk) 18:56, 8 May 2007 (UTC)
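A numerical check of both claims (my own sketch, not part of the original posts): the real iteration breaks down after a few steps, while the principal-branch complex iteration settles at the fixed point of ln.

```python
import math, cmath

x = 1000.0

# Real logarithms: after a few steps we land on a negative number and must stop.
value, steps = x, 0
while value > 0:
    value = math.log(value)
    steps += 1
print(f"real: negative after {steps} steps ({value:.4f})")

# Complex (principal-branch) logarithms: the iteration converges to the z with ln(z) = z.
z = complex(x)
for _ in range(200):
    z = cmath.log(z)
print(f"complex: {z.real:.4f} + {z.imag:.4f}i")   # about 0.3181 + 1.3372i
```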
- I'm not so sure about this. Let's define F(x, n) to be the function where "ln" is repeated n times, i.e. F(x, n) = ln(ln(...ln(x)...)) with n logarithms. Also let functions f(x) and g(x) on the reals be given with f(x) → ∞ and g(x) → ∞ as x → ∞. Then we're looking at examining lim_{x→∞} F(f(x), g(x)). I think this is an indeterminate form, because if g(x) grows much more quickly than f(x) then the large number of iterated logs will further and further reduce the final result. In particular you can pick g(x) to be a large enough number of iterated logs to bring f(x) = x down to any desired reasonable floor value. Dugwiki 20:51, 8 May 2007 (UTC)
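A rough numerical illustration of this point; the particular choice of g (the number of logarithms needed to drop below a floor) is mine, just to show both behaviours.

```python
import math

def F(x, n):
    """Apply ln to x exactly n times."""
    for _ in range(n):
        x = math.log(x)
    return x

def logs_to_floor(x, floor=2.0):
    """How many logarithms it takes to bring x below the floor."""
    n = 0
    while x >= floor:
        x = math.log(x)
        n += 1
    return n

# With a fixed number of logarithms the value grows without bound; letting the
# number of logarithms grow with x keeps the result below the chosen floor.
for e in (10, 100, 1000, 10000):
    x = 10 ** e
    print(f"x = 10^{e}: F(x, 2) = {F(x, 2):.3f},  F(x, g(x)) = {F(x, logs_to_floor(x)):.3f}")
```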
- I was just referring to the case when the limit n→∞ is taken first, for a given x, before letting x→∞. You are referring to a more complicated general case of taking both limits simultaneously. I agree that it seems indeterminate in that general case. --mglg(talk) 23:24, 8 May 2007 (UTC)
- I think mglg was talking about the function lln defined (or not) by lln(x) = lim_{n→∞} ln^n(x), in which ln^n denotes the ln function iterated n times. The argument was, if I understood correctly, that lln(x) = 0.318+1.337i for all x, since this is the only fixpoint of the (complex) ln function. Indeed, if lln(x) is defined, it has to be a fixpoint of ln; what remains to be shown is (A) that the value given is the only fixpoint, and (B) that the sequence of iterations always ends up there in the limit, in other words, that the basin of attraction is the whole complex plane – or, if you wish, encompasses the real line. I haven't examined this in any detail, but (B) does not hold in full generality. Starting with an argument of the form x = exp^k(0) will bring you, in k+1 iterations, to ln(0), which is quite undefined also for the complex logarithm. --LambiamTalk 21:33, 8 May 2007 (UTC)
- Yes, Lambiam, that is what I meant. You are right that the limit fails at the points x = 0, 1, e, e^e, e^(e^e), ... For all other (complex) x, however, ln^n(x) seems to converge to the given fixpoint (judging quite unrigorously from numerical plots). --mglg(talk) 23:24, 8 May 2007 (UTC)
- There are at least two fixpoints: if x+yi is a fixpoint of ln, then so is its complex conjugate x−yi. --LambiamTalk 04:19, 9 May 2007 (UTC)
- An answer from this old question seems to answer the question quite well: the infinite nested logarithm appears to be constant except for a small but infinite number of values of x. Thepalm 10:02, 9 May 2007 (UTC)
- I thought there was something interesting to say about the other fixed point, but it's actually quite trivial: lln(z) yields (except for the aforementioned broken cases) the fixed point whose imaginary part has the same sign as z's imaginary part. (Since the principal branch is defined with Im(ln z) in (−π, π] rather than [−π, π), the reals go to the fixed point with positive imaginary part.) --Tardis 15:07, 9 May 2007 (UTC)
- I stand corrected. Thanks Lambiam & Tardis. For completeness, we should maybe mention that other branches of the complex logarithm yield other fixed points such as 2.06228 ± 7.58863i, 2.65319 ± 13.9492i, ... Some non-standard branches, such as those with the branch cut near a fixed point, lead to non-trivial attraction basins and/or basins of two-point oscillation. (One example: the branch 1.31 ≤ arg(z) < 1.31 + 2π.) --mglg(talk) 21:33, 9 May 2007 (UTC)
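For anyone who wants to check those numbers, a short sketch (mine): iterate the k-th branch log_k(z) = Log(z) + 2πik directly, which converges because |1/z| < 1 at each of these fixed points.

```python
import cmath

def branch_fixed_point(k, iterations=300):
    """Iterate the k-th branch of log, z -> Log(z) + 2*pi*i*k, from an arbitrary start."""
    z = 1 + 1j
    for _ in range(iterations):
        z = cmath.log(z) + 2j * cmath.pi * k
    return z

for k in (0, 1, 2):
    z = branch_fixed_point(k)
    print(f"k = {k}: {z.real:.5f} + {z.imag:.5f}i")
# k = 0: about 0.318 + 1.337i; k = 1: about 2.06228 + 7.58863i; k = 2: about 2.65319 + 13.9492i
```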