
Wikipedia:Reference desk/Archives/Mathematics/2010 April 16

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 16

Internet searches for non-linear notations

Please see Wikipedia:Reference desk/Computing#Internet searches for non-linear notations. -- Wavelength (talk) 01:20, 16 April 2010 (UTC)[reply]

Why don't AND and OR form a ring?

From Exclusive or#Relation to modern algebra:

The systems ({T, F}, ∧) and ({T, F}, ∨) are monoids. This unfortunately prevents the combination of these two systems into larger structures, such as a mathematical ring.

What property of a ring does ({T, F}, ∧, ∨) not satisfy? NeonMerlin 01:40, 16 April 2010 (UTC)[reply]

There's no additive inverse (no matter which of the operations you want to call addition). However, see Boolean ring, which is "morally" the same thing, and is a ring. --Trovatore (talk) 01:43, 16 April 2010 (UTC)[reply]
In other words, "and" and "exclusive or" form a ring. Michael Hardy (talk) 18:23, 17 April 2010 (UTC)[reply]
Actually, on {T,F}, they form a field. But their extensions to multi-bit values form a ring. —David Eppstein (talk) 18:38, 17 April 2010 (UTC)[reply]
And every field is a ring... Algebraist 21:37, 18 April 2010 (UTC)[reply]
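
To see the difference concretely, here is a quick Python check (my own illustration, using 0 for F and 1 for T):

 from itertools import product
 
 # XOR behaves as addition mod 2 and AND as multiplication mod 2 on {0, 1},
 # which is exactly the two-element Boolean ring (in fact the field GF(2)).
 for a, b in product((0, 1), repeat=2):
     assert (a ^ b) == (a + b) % 2   # XOR = addition mod 2
     assert (a & b) == (a * b) % 2   # AND = multiplication mod 2
 
 # OR, by contrast, has no inverses: no x satisfies (1 or x) == 0,
 # so ({0, 1}, OR) is only a monoid and AND/OR cannot form a ring.
 assert not any((1 | x) == 0 for x in (0, 1))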

Markov Chains

At the top of the article Markov chain, it reads "This article may be too technical for most readers to understand"
AND IT IS! :P
Consider three states, i, j and k.
How does one work out the probability, assuming the current state is i, that the state after n time steps will be x (where x is i, j, or k)?
ps I have zero experience with Markov Chains.--203.22.23.9 (talk) 03:07, 16 April 2010 (UTC)[reply]

First you write the transition probabilities as a matrix P, where the entry p_mn is the probability of moving from state m to state n (I'll rename i, j and k as states 1, 2 and 3).
Then calculate the matrix power P^n. This represents the probabilities of transition between states in n steps. So, for example, the (1, 2) entry of P^n is the probability that in n steps you will be in state 2, given that you are currently in state 1. -- Meni Rosenfeld (talk) 05:33, 16 April 2010 (UTC)[reply]
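
For concreteness, here is a short Python/NumPy sketch of that calculation; the transition probabilities below are made-up placeholders, not the OP's:

 import numpy as np
 
 # Transition matrix P: P[m, n] is the probability of moving from state m to state n.
 # The numbers are placeholders; each row must sum to 1.
 P = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.4, 0.4, 0.2]])
 
 n = 5                                   # number of time steps
 Pn = np.linalg.matrix_power(P, n)
 
 # Row 0 of P^n is the distribution after n steps, starting from state 1 (index 0).
 print(Pn[0])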

what's the most useful theorem a computer programmer should know exists/is true?

I'll throw one out there: it is helpful if you know how to do Boolean algebra, since you can simplify long statements and thereby reduce complexity. Now you: are there still more useful theorems?
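
As a concrete (made-up) example of the kind of simplification meant here, De Morgan's laws in Python:

 # De Morgan: not (A or B) == (not A) and (not B)
 def in_range_verbose(x):
     return not (x < 0 or x > 10)
 
 def in_range_simplified(x):
     return 0 <= x <= 10   # same condition after applying De Morgan
 
 assert all(in_range_verbose(x) == in_range_simplified(x) for x in range(-5, 16))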

The formula for the distance between two points comes up quite often in graphics applications. StuRat (talk) 21:03, 16 April 2010 (UTC)[reply]
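
For reference, that is just the Euclidean distance √((x2−x1)² + (y2−y1)²), e.g. in Python:

 import math
 
 def distance(x1, y1, x2, y2):
     # Euclidean distance between two points
     return math.hypot(x2 - x1, y2 - y1)
 
 print(distance(0, 0, 3, 4))   # 5.0
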
Surely the Ninety-ninety rule. Abecedare (talk) 21:14, 16 April 2010 (UTC)[reply]

Number generation based on past data

I am trying to generate a "random number" that is based on past data. For instance, based on past rainfall, I am trying to generate simulated numbers for future years. I am not sure what sort of model I should use for this (my statistics is very limited). Does anyone have any ideas or know of a good place to look to find this out? --216.96.255.184 (talk) 13:46, 16 April 2010 (UTC)[reply]

I don't think there is an easy answer to this, as it depends on what assumptions you make about your data. If you assume that each year's rainfall is independent of all other years, then you are looking for a probability distribution that fits your data. A common distribution (or, more precisely, family of distributions) is the normal distribution. You can use your data to estimate the mean and standard deviation of a suitable normal distribution. There are then various normality tests which will tell you whether this distribution is a good fit to your data. If it isn't, then you can try a different type of distribution - there are plenty to choose from - but beware of the danger of overfitting. If, on the other hand, you think one year's rainfall may be influenced by rainfall in the previous year (or several years) then you may want to use a Markov chain model - or maybe something even more complex than that. Gandalf61 (talk) 15:14, 16 April 2010 (UTC)[reply]
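
Under the simple independent-years model described above, a sketch in Python/NumPy might look like this (the rainfall figures are invented placeholders):

 import numpy as np
 
 # Invented example data: annual rainfall (mm) for past years.
 rainfall = np.array([820, 760, 910, 650, 880, 700, 840, 790, 950, 720], dtype=float)
 
 mu = rainfall.mean()            # estimated mean
 sigma = rainfall.std(ddof=1)    # estimated standard deviation (sample sd)
 
 # Simulate five future years by drawing from the fitted normal distribution.
 rng = np.random.default_rng()
 print(rng.normal(mu, sigma, size=5))

A normality test (e.g. scipy.stats.shapiro) could then be run on the data to check whether the fit is reasonable.
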
Note that, in any case, you'd want to weight the rainfall in recent years more heavily than that from many years back. This is because conditions change over time, so more recent data is more applicable. In the case of rainfall, global warming, sunspot cycles, the El Nino cycle, etc., may be changes affecting the weather. Also, for cycles which repeat over a short term, like the last two, you might want to compare this year against years when we were at the same points in those cycles. StuRat (talk) 21:08, 16 April 2010 (UTC)[reply]
The trick, then, is in knowing how much weight to give to 1) recency, 2) sunspot cycle, 3) El Nino cycle, 4) anything else you can think of. I suggest trying out various weights and using each to predict last year's rainfall. Whichever combo of weights predicts the rainfall closest to the actual rainfall may also be best at predicting this year's rainfall. You might also want to run that formula against many years, to see how it does overall. StuRat (talk) 21:19, 16 April 2010 (UTC)[reply]
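
A crude sketch of that grid search, covering just the recency weighting (all numbers invented; the cycle terms would enter the same way):

 import numpy as np
 
 # Invented historical rainfall (mm), oldest year first.
 rainfall = np.array([820, 760, 910, 650, 880, 700, 840, 790, 950, 720], dtype=float)
 
 def predict(history, decay):
     # Weighted average of past years; weights shrink by `decay` per year going back.
     weights = decay ** np.arange(len(history) - 1, -1, -1)
     return np.average(history, weights=weights)
 
 # Which decay factor best predicts the held-out recent years?
 best_mse, best_decay = min(
     (np.mean([(predict(rainfall[:i], d) - rainfall[i]) ** 2
               for i in range(5, len(rainfall))]), d)
     for d in np.linspace(0.5, 1.0, 11))
 print("best decay factor:", best_decay)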

Probability perception

Not quite perfect for this desk, but a human side to mathematics: how many heads have to be flipped continuously before the average person believes the coin is fixed? For example, if I flipped HHHHHHHH, I'd think that was fixed (the "probability" being 1/256); HH, on the other hand, and I wouldn't think so. I've tried researching this for a bit, but the wide choice of words isn't ideal. Of course, the setting makes a difference, but any data would be great. - Jarry1250 [Humorous? Discuss.] 16:03, 16 April 2010 (UTC)[reply]

We have an article, Checking whether a coin is fair. Does it help? Note if the person suspects ahead of time that the coin is biased, ambiguity aversion may cause people to overestimate the probability. 66.127.52.47 (talk) 17:37, 16 April 2010 (UTC)[reply]

Here's the difference between a probabilist and a statistician: The probabilist says no matter how many times you get "heads", the probability of "tails" on the next toss is still 1/2, but the statistician starts to suspect bias. Michael Hardy (talk) 20:43, 16 April 2010 (UTC)[reply]

Also note that, while HHHHHHHH has a 1/256 probability, so does TTTTTTTT and HTHTHTHT and THTHTHTH and HHHHTTTT and TTTTHHHH and TTHHTTHH and HHTTHHTT. They each seem rather suspicious, but, collectively, there's a 1/32 chance you will get one, so it's not all that unlikely. StuRat (talk) 20:56, 16 April 2010 (UTC)[reply]
It depends on how much you trust the person doing the flipping! Dbfirs 08:15, 17 April 2010 (UTC)[reply]
In statistics about 5% is considered borderline significant, 1% very significant and 0.1% highly significant - the figures vary. Since you'd be surprised by either all heads or all tails then you'd need six heads or tails for a statistician to start taking notice and eight for them to sit up. A prior expectation and Bayes theorem is the way to go ideally, however in the real world things don't work out so smoothly. If people start off with a prejudice it often happens they become more convinced they are right the more evidence you give them showing they are wrong! Dmcq (talk) 08:41, 17 April 2010 (UTC)[reply]
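
For example, here is what Bayes' theorem gives under some assumed numbers (a 1% prior that the coin is rigged to land heads 90% of the time, fair otherwise):

 prior_rigged = 0.01
 p_heads_rigged, p_heads_fair = 0.9, 0.5
 
 for n in range(1, 11):                       # n heads in a row observed
     like_rigged = prior_rigged * p_heads_rigged ** n
     like_fair = (1 - prior_rigged) * p_heads_fair ** n
     posterior = like_rigged / (like_rigged + like_fair)
     print(f"{n} heads in a row: P(rigged) = {posterior:.3f}")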

Tossing coins can be considered taking a sample from a big population of heads and tails. Consider first the simpler problem of taking a sample from a finite population, say n=1 item in the sample and N=4 items in the population. Depending on the number of heads in the population you get 0 or 1 head in the sample, with odds according to the following table:

       K=0  K=1  K=2  K=3  K=4
 k=0     4    3    2    1    0
 k=1     0    1    2    3    4

The row index, k=0,1, refers to the number of heads in the sample and the column index, K=0 to 4, refers to the number of heads in the population. The table element indexed by (k,K) is the product of two binomial coefficients: C(K,k)·C(N−K,n−k). If you know the number of heads in the population (K) you may estimate the number of heads in the sample (k). This is called deduction. Using the notation X ≈ μ±σ to signify that a random variable X has the mean value μ and the standard deviation σ, you get that k ≈ 0±0 for K=0, k ≈ 0.25±0.43 for K=1, k ≈ 0.5±0.5 for K=2, k ≈ 0.75±0.43 for K=3, and k ≈ 1±0 for K=4. The general deduction formula is

k ≈ f(N,n,K) = (nK/N) ± √((nK/N)(1-n/N)(1-K/N)/(1-1/N)).

Conversely, if you know the number of heads in the sample (k) you may estimate the number of heads in the population (K). This is called induction. You get that K≈1±1 for k=0 and K≈3±1 for k=1. The general induction formula is

K ≈ −1−f(−2−n,−2−N,−1−k),

where −(μ±σ) = −μ±σ and a+(μ±σ) = (a+μ)±σ.

Insert n=k=8 and N=∞ into the formula to estimate the probability for heads if you flipped HHHHHHHH:

P = K/N ≈ 0.90±0.09

A fair coin has P ≈ 0.50±0.00. The difference is (0.50±0.00)−(0.90±0.09) = −0.40±0.09, so the P of a fair coin is 40/9 = 4.4 standard deviations below the mean. This difference is highly significant. The original question, "how many heads have to be flipped continuously before the average person believes the coin is fixed?", refers to the beliefs of an average person, so it is a sociological question rather than a mathematical question. The coin maker should be given the benefit of the doubt for HHH, but HHHH gives P ≈ 0.83±0.14, so 0.50±0.00 is 2.37 standard deviations away from the mean value. HHHHH gives P ≈ 0.857±0.124, so 0.50±0.00 is 2.89 standard deviations away from the mean value. HHHHHH gives P ≈ 0.875±0.110, so 0.50±0.00 is 3.40 standard deviations away from the mean value. This is significant. Bo Jacoby (talk) 23:37, 17 April 2010 (UTC).[reply]
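
For anyone who wants to check these figures, here is a small Python sketch of the deduction and induction formulas above, using a large finite N in place of N = ∞ (the function names are my own):

 import math
 
 def f(N, n, K):
     # Deduction: mean and sd of the number of heads k in a sample of n items
     # drawn from a population of N items containing K heads.
     mu = n * K / N
     var = mu * (1 - n / N) * (1 - K / N) / (1 - 1 / N)
     return mu, math.sqrt(var)
 
 def induction(n, k, N):
     # Induction: K ≈ −1 − f(−2−n, −2−N, −1−k), using −(μ±σ) = −μ±σ.
     mu, sd = f(-2 - n, -2 - N, -1 - k)
     return -1 - mu, sd
 
 N = 10**9                                  # a large finite N stands in for N = infinity
 for n in range(3, 9):
     mu_K, sd_K = induction(n, n, N)        # n tosses, all heads (k = n)
     p, dp = mu_K / N, sd_K / N             # P = K/N
     z = (p - 0.5) / dp                     # how far a fair coin is from the estimate
     print(f"{'H' * n}: P ≈ {p:.3f} ± {dp:.3f}, fair coin is {z:.2f} sd away")

Its output matches the estimates quoted above.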

... but, as StuRat mentions above, only if you have already formulated the hypothesis that the coin is biased towards heads. The human brain is highly adapted to spotting patterns, even when they don't exist. Dbfirs 07:45, 18 April 2010 (UTC)[reply]