Wikipedia:Reference desk/Archives/Mathematics/2011 January 31
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 31
Start at different points, add 1. Evaluate evenness.
This is a situation I just came up with. Not sure if it even has enough information to answer.
- You start out with the number 0. Over the next 100 seconds +1 is added to the number at random intervals.
- You start out with the number 1. Over the next 100 seconds +1 is added to the number at random intervals.
Assuming you ran this 50,000 times, which starting point would give you more even numbers, 0 or 1? Or would it not matter? Yes, I already know I'm an idiot. --71.240.162.87 (talk) 06:25, 31 January 2011 (UTC)
- You're going to have to describe more precisely what you mean when you say "at random intervals." —Bkell (talk) 06:33, 31 January 2011 (UTC)
- My first guess as to the meaning of "random intervals" would be a Poisson process. But while it's specified that the process runs for 100 seconds, it is NOT specified what the average frequency of these additions of +1 is, making it impossible to know what to make of the "100 seconds". Either way, the number of even numbers you get in 100 seconds is random. In the first case, you already have an even number at the beginning; in the second case you don't; that's the only relevant difference between the two processes. If the additions of +1 occur on average every β seconds, then the even numbers occur on average every 2β seconds. The expected number of even numbers you'd get in the first case would be 1 more than in the second case. But if β is small, then the standard deviation of the number of even numbers you'd get in either case would be large, so you'd have close to a 50% chance of getting more even numbers in the first case and a 50% chance of getting more even numbers in the second case. "Close to", but not exact. More tomorrow maybe..... Michael Hardy (talk) 07:24, 31 January 2011 (UTC)
- I think the OP is asking about the probability that the final number after 100 seconds is odd or even. That is, what is the probability that a variable with a Poisson distribution is even? Clearly this depends on the distribution's parameter λ -- with a very low λ the sum is overwhelmingly likely to be 0, which is even, but with a high λ one expects the probability of odd and even sums both to converge towards 1/2.
- Unless I'm mistaken, P(even number of events) is (1 + e^(−2λ))/2. Therefore, starting with 0 will always give a higher probability of an even end sum than starting with 1, but the difference will be extremely slight when one expects many increments to happen during the 100 seconds. –Henning Makholm (talk) 08:32, 31 January 2011 (UTC)
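(To illustrate Henning Makholm's formula above, here is a minimal Monte Carlo sketch. It assumes the "+1" events form a Poisson process, so the number of increments over the 100 seconds is Poisson-distributed; the rate value lam = 5, the random seed, and the use of numpy are illustrative choices, not anything specified in the question.)

```python
import numpy as np

# Assumption: the "+1" events form a Poisson process, so the number of
# increments in the 100 seconds is Poisson(lam).  lam = 5 is an arbitrary
# illustrative value (the expected number of increments in 100 seconds).
rng = np.random.default_rng(0)
lam = 5.0
trials = 50_000          # matches the 50,000 runs in the question

increments = rng.poisson(lam, size=trials)

# Parity of the final value for each starting point.
frac_even_from_0 = np.mean((0 + increments) % 2 == 0)
frac_even_from_1 = np.mean((1 + increments) % 2 == 0)

closed_form = 0.5 * (1 + np.exp(-2 * lam))   # P(even number of events)

print("start at 0:", frac_even_from_0)
print("start at 1:", frac_even_from_1)
print("(1 + e^(-2 lam))/2 =", closed_form)
```

(With lam = 5 the theoretical gap between the two starting points is e^(−10) ≈ 5 × 10⁻⁵, well below the sampling noise of 50,000 trials, which mainly illustrates how slight the advantage of starting at 0 becomes once more than a few increments are expected.)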
Interchange of summation and differentiation with Fourier series
I'm aware of simple, piecewise-smooth functions that can be characterized almost everywhere by Fourier series but where term-wise differentiation leads to non-convergent series, illustrating why the operations of differentiation and summation are not always interchangeable. However, are there examples of such functions whose Fourier series, when differentiated term-wise, lead to convergent but incorrect results almost everywhere? By incorrect, I mean where the term-wise differentiated Fourier series does not correspond to the derivative of the function represented.--Leon (talk) 08:13, 31 January 2011 (UTC)
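(For reference, a standard example of the non-convergent case the question already mentions is the 2π-periodic sawtooth; this only fixes notation for the known failure mode, it does not answer the question about convergent-but-wrong term-wise derivatives.)

```latex
% Sawtooth: f(x) = x on (-\pi, \pi), extended 2\pi-periodically.
\[
  x = \sum_{n=1}^{\infty} \frac{2(-1)^{n+1}}{n}\,\sin(nx),
  \qquad -\pi < x < \pi .
\]
% Term-wise differentiation gives
\[
  \sum_{n=1}^{\infty} 2(-1)^{n+1}\cos(nx),
\]
% whose terms do not tend to zero for any x, so the series diverges
% everywhere, even though f'(x) = 1 on (-\pi, \pi).
```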
Probabilistic Inclusion-Exclusion
How do I prove the probabilistic version of the Inclusion-exclusion principle without using induction? I understand the proof sketch given in the article, but what I want is the probabilistic version without taking expectations. The reason is that the book I am reading from (Ross's A First Course in Probability) hasn't defined expectations yet and has still given a sketch of the result by counting. This would have appeared natural to me in the counting measure, but I can't relate that to probabilities, can I?-Shahab (talk) 11:42, 31 January 2011 (UTC)
- If the counting is of equally probable events then yes, you can just use a measure equal to the probability of the events in a set, that is, just the number of elements in the set divided by the overall number. You can use the inclusion-exclusion principle directly to calculate with those probabilities. Is that what you're thinking of, or am I missing something? Dmcq (talk) 12:54, 31 January 2011 (UTC)
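(To make the counting-measure remark above concrete, a minimal sketch; the sample space of 20 equally likely outcomes and the three sets are arbitrary illustrative choices.)

```python
from fractions import Fraction

omega = set(range(20))                  # 20 equally likely outcomes
A = {n for n in omega if n % 2 == 0}    # arbitrary illustrative events
B = {n for n in omega if n % 3 == 0}
C = {n for n in omega if n < 7}

def P(event):
    """Probability under the uniform measure: |event| / |omega|."""
    return Fraction(len(event), len(omega))

lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(A & C) - P(B & C)
       + P(A & B & C))

print(lhs, rhs, lhs == rhs)   # both sides agree: 3/4
```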
- What exactly is it that you want to prove? Why do you want to avoid induction? It is not clear what you think the connection between induction and expectations is (I cannot offhand think of one).
- In general, the inclusion-exclusion principle works for any measure, including both the counting measure and probability measures. The proof sketched in our article supposes that you can integrate over the measure, but that's just a shortcut (and if you know enough measure theory to think of the counting measure, you can probably do the integration even without knowing that "integral over the probability measure" is later going to be abbreviated "expectation"). You can prove it instead by induction over the number of sets, in which case all you need is finite additivity of the measure. But again, why don't you want induction? –Henning Makholm (talk) 13:34, 31 January 2011 (UTC)
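(A sketch of the induction over the number of sets mentioned above, using only finite additivity of the measure; the notation for n events is the usual one.)

```latex
% Base case (two events), from finite additivity:
\[
  P(A \cup B) = P(A) + P(B \setminus A) = P(A) + P(B) - P(A \cap B).
\]
% Inductive step: apply the two-event case to
% (A_1 \cup \dots \cup A_n) \cup A_{n+1}:
\[
  P\Bigl(\bigcup_{i=1}^{n+1} A_i\Bigr)
    = P\Bigl(\bigcup_{i=1}^{n} A_i\Bigr) + P(A_{n+1})
      - P\Bigl(\bigcup_{i=1}^{n} (A_i \cap A_{n+1})\Bigr),
\]
% then expand the first and last terms with the n-event hypothesis;
% collecting the resulting intersections gives the (n+1)-event formula.
```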
- Inclusion-exclusion is a special case of Moebius inversion (see here), where you have to choose the partial order to be set inclusion. Count Iblis (talk) 13:49, 31 January 2011 (UTC)
- Hmmm... It seems that, as usual, I had confused myself. (Just to give a little background, I am a self-learner.) If I understand correctly, there are two proofs: 1. Induction. 2. Integration. My question then was (and still is) this: Is there any other "simpler" proof? Thanks to all of you.-Shahab (talk) 14:12, 31 January 2011 (UTC)
- The whole business becomes rather obvious if you sum up 1−(1−a)(1−b)(1−c)... for all the points in the union of A, B and C, setting a, b, c to 1 or 0 depending on whether the point is in those respective sets. Then ab, for instance, is 1 for the intersection of A and B, and the original expression is 1 for all the elements in the union of the sets. The − here works out as being the exclusive-or operation. Dmcq (talk) 19:26, 31 January 2011 (UTC)
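(Spelling out the expansion above for three sets, writing 1_A for the indicator function of A:)

```latex
\[
  1-(1-\mathbf 1_A)(1-\mathbf 1_B)(1-\mathbf 1_C)
   = \mathbf 1_A + \mathbf 1_B + \mathbf 1_C
     - \mathbf 1_{A\cap B} - \mathbf 1_{A\cap C} - \mathbf 1_{B\cap C}
     + \mathbf 1_{A\cap B\cap C},
\]
% using 1_A 1_B = 1_{A \cap B}.  The left-hand side is exactly
% 1_{A \cup B \cup C}, since the product vanishes precisely when the
% point lies in at least one of the sets.  Summing both sides over the
% points of a finite, equally likely sample space (or taking
% expectations, once those are available) gives inclusion-exclusion.
```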