Wikipedia:Reference desk/Archives/Mathematics/2014 April 7
April 7
How can I integrate these two rational functions?
Help me integrate these two rational functions (w.r.t. x):
1) 1/(x^3 + 1) and
2) 1/(x^4 + 1) — Preceding unsigned comment added by 117.242.108.241 (talk) 13:11, 7 April 2014 (UTC)
- This looks like homework, but in case it is not, use Partial fraction decomposition and integrate over the parts. 95.112.167.155 (talk) 13:42, 7 April 2014 (UTC)
- Any rational function can be integrated by a combination of one or more of (1) polynomial long division, (2) partial fractions, (3) completing the square. For 1/(x^3+1) factor the denominator as (x+1)(x^2-x+1) and apply partial fractions. For the second, factor the denominator as (x^2 + √2 x + 1)(x^2 − √2 x + 1) and apply partial fractions. Sławomir Biały (talk) 13:49, 7 April 2014 (UTC)
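For anyone who wants to verify these steps by machine, a minimal sketch with sympy (assuming Python with sympy is installed, and that `apart` accepts the `extension` option the same way `factor` does):

```python
from sympy import symbols, sqrt, factor, apart, integrate, diff, simplify

x = symbols('x')

# Step: factor the denominators (x**4 + 1 needs sqrt(2) adjoined).
print(factor(x**3 + 1))                      # (x + 1)*(x**2 - x + 1)
print(factor(x**4 + 1, extension=sqrt(2)))   # (x**2 - sqrt(2)*x + 1)*(x**2 + sqrt(2)*x + 1)

# Step: partial fractions, then integrate and verify by differentiating back.
for f in (1/(x**3 + 1), 1/(x**4 + 1)):
    print(apart(f, x, extension=sqrt(2)))    # partial fraction decomposition
    F = integrate(f, x)                      # the antiderivative
    print(simplify(diff(F, x) - f))          # 0 means F checks out
```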
- Or just write down the sum of all the principal parts of the Laurent expansions around each pole. Note that $1/(x^n + 1)$ has singularities at $x_k = e^{(2k+1)\pi i/n}$, $k = 0, 1, \ldots, n-1$. We have:
- $$\lim_{x \to x_k} \frac{x - x_k}{x^n + 1} = \frac{1}{n x_k^{n-1}} = \frac{1}{n} e^{-(2k+1)(n-1)\pi i/n}$$
- This is then the coefficient of $1/(x - x_k)$ in the Laurent expansion about $x = x_k$. The partial fraction expansion is thus given as:
- $$\frac{1}{x^n + 1} = \sum_{k=0}^{n-1} \frac{1}{n} e^{-(2k+1)(n-1)\pi i/n} \cdot \frac{1}{x - x_k}$$
- It's not difficult to integrate this term by term and recast this in a manifestly real form using the relations between logarithms of complex arguments and the arctan function. Count Iblis (talk) 18:57, 7 April 2014 (UTC)
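A numeric spot-check of this expansion is easy with plain Python (a minimal sketch for n = 4; the test point is arbitrary):

```python
import cmath

n = 4                                  # the case 1/(x**4 + 1)
x = 0.7 + 0.3j                         # arbitrary test point away from the poles

# poles of 1/(x**n + 1): x_k = exp((2k+1)*pi*i/n)
poles = [cmath.exp((2 * k + 1) * cmath.pi * 1j / n) for k in range(n)]

# sum of principal parts, with residue 1/(n * x_k**(n-1)) at each pole
expansion = sum(1 / (n * xk ** (n - 1)) / (x - xk) for xk in poles)
direct = 1 / (x ** n + 1)
print(abs(expansion - direct))         # ~1e-16: the expansion matches
```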
How to check my antiderivative?
Is there any way (or method or trick) to check whether the antiderivative I got after integrating a function is right or wrong? I know it can be done by differentiating the antiderivative and matching it with the original function, but I am searching for some other method. 117.242.108.241 (talk) 16:07, 7 April 2014 (UTC)
- Well, you could use graphic methods to get an approximation, by looking at the slope of a curve (then curvature) or the area under the curve, depending on which direction you are going. What's good about this method is that it's totally independent of other mathematical methods, so a mistake made there will not be replicated graphically. On the negative side, the graphic method can only check the curve within a certain domain/range, not over it's entire length. StuRat (talk) 16:11, 7 April 2014 (UTC)
- The feat of making it's mean two completely different things in the same post should not go unnoticed. 84.209.89.214 (talk) 17:25, 7 April 2014 (UTC)
- What I sometimes do is numerical differentiation. I did that just yesterday when I wrongly integrated arccos(1/t). I needed the integral to do some calculations, so I had programmed the antiderivative into my programmable calculator. Then it was easy to estimate the derivative at some point as
- [F(x+h) - F(x-h)]/(2h)
- for small h, which is much more accurate than the estimate [F(x+h) - F(x)]/h, so you can make h larger, which leads to less loss of significant digits. I caught a stupid error that way and then corrected the mistake. Examples:
- [Log(10.01) - Log(9.99)]/0.02 = 0.1000000333333533
- [exp(1.01)-exp(.99)]/0.02 - exp(1) = 0.000045304923665
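In code, the check is a few lines (a Python sketch reproducing the two examples above):

```python
import math

def central_diff(F, x, h=0.01):
    """Estimate F'(x) by the symmetric difference quotient; the error is
    O(h**2), versus O(h) for the one-sided quotient [F(x+h) - F(x)]/h."""
    return (F(x + h) - F(x - h)) / (2 * h)

print(central_diff(math.log, 10.0))           # ~0.1000000333, exact value is 0.1
print(central_diff(math.exp, 1.0) - math.e)   # ~4.53e-5, as in the example above
```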
- Some of these rather simple formulas are remarkably precise, as Count Iblis mentioned above. For example, the area below the curve between x=L and x=R can be approximated by (R–L) f((R+L)/2) quite precisely. This is known as the rectangle rule. As long as the slope (the derivative of f) doesn't change sign between L and R, the "missing" area will at least partially cancel the "spurious" area on the other side of the rectangle. In most cases, it is even better than the other simple formula, (R–L) (f(R)+f(L))/2, which takes two f values.
- I found that with well-behaved functions, most parts of the curve are approximately parabolic; thus, the true area is usually somewhere between the two rough estimates and closer to the rectangle rule. If the second derivative doesn't change sign, the difference is a rigorous upper bound of the error term. - ¡Ouch! (hurt me / more pain) 09:07, 10 April 2014 (UTC)
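A quick Python sketch comparing the two rules on one convex example, f(x) = e^x on [0, 1] (the example function is illustrative, not from the thread):

```python
import math

f, L, R = math.exp, 0.0, 1.0
exact = math.e - 1.0                          # integral of exp over [0, 1]

midpoint = (R - L) * f((R + L) / 2)           # rectangle (midpoint) rule
trapezoid = (R - L) * (f(L) + f(R)) / 2       # trapezoid rule

print(exact - midpoint)                       # ~ +0.0696 (midpoint undershoots)
print(exact - trapezoid)                      # ~ -0.1409 (about twice as far off,
                                              #   and on the other side of exact)
```

As claimed, the true area lies between the two estimates and is roughly twice as close to the midpoint value.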
Can't you calculate exactly how long an infinite amount of monkeys would take to type "Hamlet"?
The normal version of the theorem I understand; eventually a lone typing monkey will write Hamlet. What I don't get is why an infinite amount of typing monkeys are said to eventually type Hamlet, when a more accurate statement is "as quickly as it is possible to type Hamlet".
If there are an infinite amount of typing monkeys, then a set infinite amount of them will successfully type the first letter of Hamlet on their first try (as they will all letters). A set infinite amount of those monkeys will then type the second letter, and so on until a set of them have typed Shakespeare's Hamlet from the millisecond they sat down and began.
Therefore, the "eventually" is actually a known number. Just calculate how much time it takes to type a single character on a typewriter (T), and the amount of characters in Hamlet (C), then it would take an infinite amount of typing monkeys T*C long to do it. Right? — Preceding unsigned comment added by 50.43.180.176 (talk) 22:20, 7 April 2014 (UTC)
- An infinite "amount" of monkeys? Do you buy your monkeys by the gallon, or what? – Trovatore
- Why not? I buy my apes by the planet. - ¡Ouch! (hurt me / more pain) 07:32, 11 April 2014 (UTC)
- Actually, if you have an infinite number of these idealized monkeys (really random-number generators), then it's true that almost surely there will be some monkey that types Hamlet straight through without a single false start. However, almost surely is different from surely. They could actually type forever and never type Hamlet, though the probability of that is zero.
- Anyway, these calculations are a little beside the point, I think. --Trovatore (talk) 22:42, 7 April 2014 (UTC)
- Well, there are an infinite number of editors editing Wikipedia, with identical copies of each of us about 10^(10^29) meters apart. Count Iblis (talk) 23:41, 7 April 2014 (UTC)
- Except that while they remain identical, they're typing the same thing... —Quondum 00:00, 8 April 2014 (UTC)
- I had an unusually unsuccessful session at the keyboard yesterday. Unfortunately, the result was nothing like Hamlet. YohanN7 (talk) 23:59, 7 April 2014 (UTC)
- But how much are your mistress' eyes like the Sun? --Trovatore (talk) 00:02, 8 April 2014 (UTC)
- Not that this changes the calculations in the original question, but I believe that the saying is that an infinite number of monkeys, typing for an infinite amount of time, will eventually type all of Shakespeare (not just one play, Hamlet). See here: Infinite monkey theorem. Joseph A. Spadaro (talk) 15:50, 8 April 2014 (UTC)
- But, yes. I think that your original calculation (T x C) is correct. If we have an infinite number of monkeys, an infinite number will type Hamlet correctly on the first try; an infinite number will make 1 mistake; an infinite number will make 2 mistakes; ... and so on, until an infinite number will make every mistake possible (and, hence, never type Hamlet at all). But, as one editor above said, this all misses the point of the theorem. The point is not to calculate the amount of time it would literally take, but rather to show that this feat (as is any other feat) is indeed possible to achieve. Thanks. Joseph A. Spadaro (talk) 15:58, 8 April 2014 (UTC)
- With probability 1, all those things will happen. But not for sure. --Trovatore (talk) 19:24, 8 April 2014 (UTC)
- An important ingredient that nobody's mentioned is that the monkeys need to all be typing "at random" in such a way that after T × C steps they have collectively produced all possible texts of length C. (This is not guaranteed just by saying there's infinitely many of them.) Even so, there are many (infinite length) texts which will never be produced by such a collective because of Cantor's diagonalization argument. Staecker (talk) 16:05, 8 April 2014 (UTC)
- Oh, are you one of those poor souls who only have a countably infinite number of monkeys? PrimeHunter (talk) 16:15, 8 April 2014 (UTC)
- Excuse my Hebrew, but "ℵ₀, or not ℵ₀", is that the question? - ¡Ouch! (hurt me / more pain) 08:37, 9 April 2014 (UTC)
- If we start imagining infinite numbers of physical objects then I think things get weird according to quantum theory. You don't need monkeys or typewriters. Just place an infinite number of ink bottles in front of stacks of paper, and an infinite subset of them should transform into Hamlet within a millisecond. Then do away with the ink and paper. PrimeHunter (talk) 16:09, 8 April 2014 (UTC)
- @PrimeHunter: If you want to go down that route, you only need one bottle of ink and one stack of paper in the real world, under the MWI. —Quondum 16:30, 8 April 2014 (UTC)
- You only need a cloud of hydrogen, because even at room temperature, the probability of nuclear fusion is nonzero, so we can get our ink and paper from pure hydrogen. :P That'd explain the pricing of HP cartridges, too.
- However, one huge cloud of hydrogen won't do the trick; we'd need several smaller clouds, lest it collapse into one dense mtherfucker and take our copy of Hamlet with it.
- Uh-oh, we should stop now. - ¡Ouch! (hurt me / more pain) 06:27, 10 April 2014 (UTC)
- @Staecker: The probability that an infinite number of typing monkeys produce every single text of a given length in the shortest possible time is still unity, because the number of texts of a given length is finite (assuming a finite alphabet). The probability of producing all finite texts (of all lengths) in infinite time might be more interesting, as this looks more like the indeterminate form 1^∞. —Quondum 16:30, 8 April 2014 (UTC)
- If you have a countable set of events, each of which has probability 1, then the probability that all the events occur is also 1. This follows from countable additivity. --Trovatore (talk) 01:53, 9 April 2014 (UTC)
- Would you care to show how this applies? Let's assume you only have a countably infinite number of monkeys that start typing in succession one keystroke apart, and you require that every possible finite text occurs for some monkey from the start of its typing. I'd say that finding a threshold length that is a function of time that tends to infinity such that the probability of having all texts up to that length tends to certainty is going to be quite a challenge. —Quondum 02:31, 9 April 2014 (UTC)
- You're making it too hard. Assuming there are only countably many keys on the typewriter, there are only countably many finite texts that can be created. For each such text, the probability that it does not show up is 0. Therefore the probability that at least one of them does not show up is bounded by the sum of an infinite series all of whose terms are 0. --Trovatore (talk) 02:48, 9 April 2014 (UTC)
- I don't agree. You cannot choose the order in which you take two limits. The probability of any given text length T being typed by a given monkey by time T drops very rapidly with time: much faster than 1/T. Even if we start all monkeys typing at the same time, but simply calculate the sequence of probabilities of all texts up to length T having been typed by time T as T tends to infinity, we get a sequence of which the limit is very strongly zero. Yet we've included the probability of all finite texts having been collectively typed by countably infinite monkeys in countably infinite time. —Quondum 04:56, 9 April 2014 (UTC)
- It's not a question of agreeing or not. I gave a proof. Limits have nothing to do with it, and time has nothing to do with it. The probability measure is countably additive, which means the probability of the join of countably many incompatible events is equal to the sum of the probabilities of those events. Drop "mutually exclusive", and it becomes less-than-or-equal-to, but less-than-or-equal-to zero is zero. --Trovatore (talk) 07:37, 9 April 2014 (UTC)
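In symbols, writing $A_k$ for the event that the $k$-th finite text shows up somewhere (each with $P(A_k) = 1$), the step is the union bound applied to the complements:

```latex
% countable subadditivity: the union of countably many null events is null
P\Bigl(\bigcup_{k=1}^{\infty} A_k^{c}\Bigr)
  \le \sum_{k=1}^{\infty} P(A_k^{c})
  = \sum_{k=1}^{\infty} 0 = 0,
\qquad\text{hence}\qquad
P\Bigl(\bigcap_{k=1}^{\infty} A_k\Bigr) = 1.
```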
- But what happens if you replace the monkeys with Wikipedia editors? YohanN7 (talk) 12:03, 9 April 2014 (UTC)
- I apologize for my tone in the above. Quondum, can you say what you take it to mean that the probability of an event has a given value, when the condition can neither be verified nor falsified in finite time? I'm using the notion from measure theory; it's not defined in terms of a limit as time goes to infinity, though you might with sufficient cleverness be able to express it in some such way. (Possibly relevant is Fatou's lemma, though I don't actually remember the statement of the lemma, so it might not be.) --Trovatore (talk) 20:55, 9 April 2014 (UTC)
- It's not clear to me that the concept of "the probability that all finite texts will be typed in infinite time" even has a meaning, and I'm not familiar with measure theory or the necessary math to deal with this one, so I can't comment. Let's just say I find the whole idea rather challenging; I'm not going to argue the point. —Quondum 04:06, 10 April 2014 (UTC)
A better defined question is the following. Since the decimal expansion of almost all real numbers will contain an infinite number of Hamlets, one can ask where the first Hamlet contained within the decimal expansion of pi is located. Count Iblis (talk) 19:33, 8 April 2014 (UTC)
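A toy version of that search is straightforward (a Python sketch assuming mpmath is installed; a six-digit string stands in for an encoded Hamlet, which would require astronomically many digits):

```python
from mpmath import mp

mp.dps = 2000                          # work with 2,000 decimal digits
digits = mp.nstr(mp.pi, mp.dps).replace('.', '')

target = "999999"                      # short stand-in for an encoded Hamlet
print(digits.find(target))             # ~762: the six nines of the "Feynman point"
```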
I think the next question is "how long does it take to find the monkey that wrote Hamlet?" A copy of Hamlet isn't worth much when you have to sort through an infinite stack of random garbage text to find it, especially since the only way you'll know it is right is by comparing it with the copy you brought with you. :-P Katie R (talk) 19:04, 10 April 2014 (UTC)
Abstract Algebra: GCD
I have a simple problem, but I am unsure about one main idea. The question is: for any integers a and b, prove that gcd(a, b) = gcd(a, b + xa) for any x in Z.
My goal is to show that if gcd(a, b) = s, then gcd(a, b + xa) = s as well. My proof is as follows:
Let gcd(a, b) = s. Therefore, s|a and s|b. And for any u in Z, if u|a and u|b then u|s.
Then, I go on to prove that gcd(a, b + xa) = s by using the definition of gcd.
Can I use the properties of gcd obtained from assuming gcd(a, b) = s in the second part of the proof (proving gcd(a, b + xa) = s)?
For example, to prove that if u|a and u|(b + xa) then u|s, can I use the fact that gcd(a, b) = s can be written as a linear combination s = ma + nb for some integers m and n? Am I going wrong somewhere here? — Preceding unsigned comment added by Abstractminter (talk • contribs) 22:47, 7 April 2014 (UTC)
- In the absence of an answer from anyone more mathematically informed, I would suggest:
- Define s = gcd(a, b) and t = gcd(a, b+xa).
- Then we have s|a and s|b, so s|(b + xa), so s|t, so s ≤ t (both being positive)
- Next let c = b + xa, so t = gcd(a, c) and s = gcd(a, c − xa)
- Then we have t|a and t|c, so t|(c − xa), so t|s, so t ≤ s (both being positive)
- So s = t
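A quick randomized sanity check of the identity (a plain Python sketch; math.gcd handles negative arguments by returning the nonnegative gcd):

```python
import math
import random

for _ in range(10_000):
    a, b, x = (random.randint(-10**6, 10**6) for _ in range(3))
    assert math.gcd(a, b) == math.gcd(a, b + x * a)
print("gcd(a, b) == gcd(a, b + x*a) held in all trials")
```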