Wikipedia:Reference desk/Archives/Mathematics/2013 April 28
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 28
[edit]"The probability that a certain coin is fair given a certain outcome" (based on Meredith Kercher-BBC-article)
[edit]Hello,
I was reading this article on the statistics related to the Meredith Kercher case: [1]. I was confused when I read this part: "She compares it to an experiment to find out whether a coin is biased. You do a first test and obtain nine heads and one tail... The probability that the coin is fair given this outcome is about 8%, [and the probability] that it is biased, about 92%."
How does one obtain these numbers? (I mean: how does one strictly define "the probability that it is fair"?) I could do the converse: given that a coin is fair, I could compute the probability of obtaining at least nine heads using a binomial distribution. Evilbu (talk) 21:09, 28 April 2013 (UTC)
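A minimal sketch in Python of the "converse" computation mentioned in the question, i.e. the probability of obtaining at least nine heads in ten throws given that the coin is fair:

```python
# Probability of at least nine heads in ten throws of a fair coin,
# computed from the binomial distribution.
from math import comb

n = 10    # number of throws
p = 0.5   # head probability under the fair-coin hypothesis

p_at_least_nine = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(9, n + 1))
print(p_at_least_nine)   # 11/1024 ≈ 0.0107
```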
- The standard approach would be Bayesian. You need to know (or, at least, to have somehow come to an estimate you're happy with) the prior probability of a coin being biased, and the probability of getting a head given a biased coin.
- We can possibly reverse-engineer the values used to produce the numbers in the article. Bayes' theorem (in its odds-ratio form) gives:
- <math>\frac{P(\mathrm{biased} \mid \mathrm{data}, I)}{P(\mathrm{fair} \mid \mathrm{data}, I)} = \frac{P(\mathrm{data} \mid \mathrm{biased}, I)}{P(\mathrm{data} \mid \mathrm{fair}, I)} \cdot \frac{P(\mathrm{biased} \mid I)}{P(\mathrm{fair} \mid I)}</math>
- where I stands for our initial state of information (or ignorance).
- So, putting in the numbers quoted, let's try to reveal the underlying numbers used. Writing p for the head probability of the biased coin, the first test (nine heads and one tail, quoted as about 92% biased to 8% fair) gives
- <math>\frac{0.92}{0.08} = \frac{p^9 (1-p)}{(1/2)^{10}} \cdot \frac{P(\mathrm{biased} \mid I)}{P(\mathrm{fair} \mid I)}</math>
- and the second test (eight heads and two tails, quoted as about 84% to 16%) gives
- <math>\frac{0.84}{0.16} = \frac{p^8 (1-p)^2}{(1/2)^{10}} \cdot \frac{P(\mathrm{biased} \mid I)}{P(\mathrm{fair} \mid I)}</math>
- which implies, dividing the first equation by the second,
- <math>\frac{0.92/0.08}{0.84/0.16} = \frac{p}{1-p}</math>
- which implies
- <math>\frac{p}{1-p} = \frac{11.5}{5.25} \approx 2.19</math>
- i.e.
- <math>p \approx 0.687</math>
- Substituting this back into the first line gives
- <math>\frac{P(\mathrm{biased} \mid I)}{P(\mathrm{fair} \mid I)} = \frac{11.5 \times (1/2)^{10}}{0.687^9 \times 0.313} \approx 1.05</math>
- So
- <math>P(\mathrm{biased} \mid I) \approx 0.51</math>
- or
- <math>P(\mathrm{fair} \mid I) \approx 0.49</math>
- So the expert in the article appears to be taking a prior probability of approximately 50/50 on whether the coin is biased, with a biased coin producing heads with a probability ratio of approximately 68.7 to 31.3.
- Which seems to have been a rather curious number to have picked -- if the coin is biased, on the face of the data it might appear more likely to be more biased than this. Jheald (talk) 21:57, 28 April 2013 (UTC)
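For a quick numerical check of the figures being reverse-engineered here, a sketch in Python. The biased-coin head probability of 0.7 (close to the 68.7% recovered above, and the figure quoted from the article further down this thread), the 50/50 prior, and the nine-heads/one-tail and eight-heads/two-tails data are assumptions taken from this discussion, not from any primary source:

```python
# Posterior probability that the coin is biased, via the odds form of Bayes'
# theorem, assuming a 50/50 prior and a biased coin that gives heads 70% of the time.

def posterior_prob_biased(heads, tails, p_biased=0.7, prior_odds=1.0):
    """Posterior P(biased | data) for the fair-versus-biased comparison."""
    likelihood_ratio = (p_biased**heads * (1 - p_biased)**tails) / 0.5**(heads + tails)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1 + posterior_odds)

print(posterior_prob_biased(9, 1))   # ≈ 0.925 (the article's "about 92%")
print(posterior_prob_biased(8, 2))   # ≈ 0.84  (the article's "about 84%")
```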
- Another approach might be to again take a prior probability of 50/50 for whether the coin is biased, but take a beta distribution for the (initially unknown) probability of heads given a biased coin, which is then updated given the data. Jheald (talk) 22:37, 28 April 2013 (UTC)
- A general prior for this problem is a probability distribution over probabilities of getting heads. Your initial derivation used a sum of two delta functions as the prior, while your follow-up suggestion is the sum of a beta distribution and a delta function. Many other choices are possible and potentially reasonable depending on what you know about the world of biased coins. -- BenRG 00:32, 29 April 2013 (UTC)
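A sketch in Python of the mixture-prior idea just described; the specific choice Beta(1, 1), i.e. a uniform prior on the biased coin's head probability, is an assumption made here purely for illustration:

```python
# Posterior probability that the coin is biased after nine heads and one tail,
# with a 50/50 prior split between "fair" (a point mass at p = 1/2) and
# "biased" (p drawn from a Beta(1, 1) prior and integrated out).
from math import gamma

def beta_fn(a, b):
    """Beta function B(a, b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

heads, tails = 9, 1
a, b = 1, 1   # Beta(1, 1): uniform prior on the biased coin's head probability

# Marginal likelihood of the observed sequence under each hypothesis
like_fair = 0.5 ** (heads + tails)
like_biased = beta_fn(heads + a, tails + b) / beta_fn(a, b)

# With a 50/50 prior the prior odds are 1, so the posterior simplifies to
posterior_biased = like_biased / (like_biased + like_fair)
print(posterior_biased)   # ≈ 0.90
```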
- You can't objectively define "the probability that a coin is fair". You can objectively define a false negative rate, as you said. The prosecutor's fallacy is assuming the latter is the same as the former. It's great that Colmez is speaking out against this, but she seems to be making the same kind of error herself. -- BenRG 00:32, 29 April 2013 (UTC)
- Oh, please, you can't rely on newspaper accounts. The Beeb is probably better than most, but hardly reliable. My guess is that she explained that she was assuming a certain Bayesian prior, and this was just left out because the reporter didn't think the readership would understand it or be interested, or perhaps because the reporter didn't understand it in the first place. --Trovatore (talk) 05:10, 29 April 2013 (UTC)
- Based on my unfortunate experience, the most likely answer is that the so-called "expert" doesn't know what the hell he or she is talking about, and is simply making the very common error of confusing the p value at which a null hypothesis is rejected with the probability that the null hypothesis is false. Looie496 (talk) 01:10, 29 April 2013 (UTC)
- Not as simple as that, I think. The p-value for obtaining at most one tail out of ten throws is 11/1024 ≈ 0.0107; for at most two tails out of ten it is 56/1024 ≈ 0.0547. These are not the 92% and 84% quoted in the article (nor are their complements, 98.9% and 94.5%). Jheald (talk) 09:56, 29 April 2013 (UTC)
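These tail probabilities are easy to verify; a minimal check in Python:

```python
# Probability, under a fair coin, of obtaining at most t tails in ten throws.
from math import comb

def p_at_most_t_tails(t, n=10):
    return sum(comb(n, k) for k in range(t + 1)) / 2**n

print(p_at_most_t_tails(1))   # 11/1024 ≈ 0.0107
print(p_at_most_t_tails(2))   # 56/1024 ≈ 0.0547
```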
- I checked and found that Wikipedia has an article on this: Checking whether a coin is fair. Have at it. I've no comment on this for now though, because it's been many long years since my college days when I studied statistics, so I'll get my head wrapped around this later, or perhaps I won't. -Modocc (talk) 02:04, 29 April 2013 (UTC)
- Aha. It helps to Read The Friendly Article. :-)
- "She compares it to an experiment to find out whether a coin is fair, or weighted to give heads 70% of the time."
- So it was the first model I suggested above. Jheald (talk) 10:47, 29 April 2013 (UTC)
- "obtain nine heads and one tail". The probability that the coin is fair cannot be computed, but the probability that the next throw will be heads can be computed, without assuming that the coin is fair. (If you know that the coin is fair this probability is of course 50%, but there is no point in computing such probabilities if you assume that the coin is fair.) The next throw may be either heads or tails, so the totality of eleven throws may contain either nine or ten heads. If the totality is nine heads, the observation (nine heads and one tail) can be obtained in <math>\tbinom{2}{1} = 2</math> ways. If the totality is ten heads, the observation can be obtained in <math>\tbinom{10}{1} = 10</math> ways. So the odds for heads are ten to two, and the probability of heads is 10/(2+10) ≈ 83% while the probability of tails is 2/(2+10) ≈ 17%. Bo Jacoby (talk) 12:46, 2 May 2013 (UTC).
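For comparison, a short sketch in Python reproducing the ten-to-two odds via Laplace's rule of succession, which is what this counting argument appears to amount to under a uniform prior on the unknown head probability (the uniform prior is an assumption stated here, not in the post above):

```python
# P(next throw is heads) after observing nine heads and one tail, computed
# with Laplace's rule of succession, i.e. a uniform prior on the head probability.
from fractions import Fraction

heads, tails = 9, 1

p_next_heads = Fraction(heads + 1, heads + tails + 2)   # (h + 1) / (n + 2) = 10/12
p_next_tails = 1 - p_next_heads                         # 2/12

print(p_next_heads, float(p_next_heads))   # 5/6 ≈ 0.83
print(p_next_tails, float(p_next_tails))   # 1/6 ≈ 0.17
```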