
Barnard's test

From Wikipedia, the free encyclopedia

In statistics, Barnard’s test is an exact test used in the analysis of 2 × 2 contingency tables with one margin fixed. Barnard’s tests are in fact a class of hypothesis tests, also known as unconditional exact tests for two independent binomials.[1][2][3] These tests examine the association of two categorical variables and are often a more powerful alternative to Fisher's exact test for 2 × 2 contingency tables. Although first published in 1945 by G.A. Barnard,[4][5] the test did not gain popularity, owing to the computational difficulty of calculating the p value and to Fisher’s specious disapproval. Nowadays, even for sample sizes n ~ 1 million, computers can often run Barnard’s test in a few seconds or less.
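As an illustration of how routine the computation now is, SciPy (version 1.7 and later) provides an implementation as a single function call. A minimal sketch on a hypothetical table (the counts are invented for this example, not taken from any study):

```python
# A minimal sketch using SciPy's implementation of Barnard's exact test.
# The counts below are hypothetical, chosen only for illustration.
from scipy.stats import barnard_exact

# Rows: treatment / control (this margin fixed by design);
# columns: healed / still infected.
table = [[16, 4], [9, 11]]

res = barnard_exact(table)        # two-sided by default, Wald statistic
print(res.statistic, res.pvalue)
```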

Purpose and scope


Barnard’s test is used to test the independence of rows and columns in a 2 × 2 contingency table, and it assumes that each response is independent. There are three types of study design that can yield a 2 × 2 table, and Barnard's test applies to the second type.

To distinguish the different types of designs, suppose a researcher is interested in testing whether a treatment quickly heals an infection.

  1. One possible study design would be to sample 100 infected subjects, observe for each subject whether they received the novel treatment or the old, standard medicine, and see whether the infection is still present after a set time. This type of design is common in cross-sectional studies or ‘field observations’, such as in epidemiology.
  2. Another possible study design would be to give 50 infected subjects the treatment, 50 infected subjects the placebo, and see if the infection is still present after a set time. This type of design is common in clinical trials.
  3. The final possible study design would be to give 50 infected subjects the treatment, 50 infected subjects the placebo, and stop the experiment once a pre-determined number of subjects has healed from the infection. This type of design is rare, but has the same structure as the lady tasting tea study that led R.A. Fisher to create Fisher's exact test.

Although the results of each design of experiment can be laid out in nearly identical-appearing 2 × 2 tables, their statistics are different, and hence the criteria for a "significant" result are different for each:

  1. The probability of a 2 × 2 table under the first study design is given by the multinomial distribution, where the total number of samples taken is the only statistical constraint. This is a form of uncontrolled experiment, or "field observation", where the experimenter simply "takes the data as it comes".[a]
  2. The probability of a table under the second study design is given by the product of two independent binomial distributions: the totals in one margin (either the row totals or the column totals) are constrained by the experimental design, but the totals in the other margin are free. This is by far the most common form of experimental design; the experimenter constrains part of the experiment, say by assigning half of the subjects to receive a new medicine and the other half to receive an older, conventional medicine, but has no control over the number of individuals in each controlled category who either recover or succumb to the illness.
  3. The probability of a table under the third design is given by the hypergeometric distribution, where both the column totals and the row totals are constrained. For example, an individual is allowed to taste 8 cups of soda but must assign four to each of the categories "brand X" and "brand Y", so that both the row totals and the column totals are constrained to four.[b]

This kind of experiment is complicated to manage, and is almost unknown in practical experiments[citation needed].
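The three sampling models above can be made concrete numerically. The following sketch compares the probability that each model assigns to the same 2 × 2 table; the counts and the null-hypothesis parameter values are hypothetical, chosen only for illustration:

```python
# Probability of the same 2x2 table  [[a, b], [c, d]]  under each of the
# three sampling models described above.  All counts and null-hypothesis
# parameter values here are hypothetical.
from scipy.stats import multinomial, binom, hypergeom

a, b, c, d = 7, 3, 2, 8        # cell counts of the observed table
n = a + b + c + d              # total sample size (20)

# Design 1: multinomial -- only n is fixed; assume equal cell probabilities.
p_multi = multinomial.pmf([a, b, c, d], n, [0.25, 0.25, 0.25, 0.25])

# Design 2: product of two independent binomials -- row totals fixed;
# assume a common success probability of 0.5 under the null.
p_binom = binom.pmf(a, a + b, 0.5) * binom.pmf(c, c + d, 0.5)

# Design 3: hypergeometric -- both margins fixed (Fisher's conditioning).
p_hyper = hypergeom.pmf(a, n, a + b, a + c)

print(p_multi, p_binom, p_hyper)
```

Note that the same cell counts receive different probabilities under the three models, which is why the criteria for a "significant" result differ between the designs.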

The operational difference between Barnard’s exact test and Fisher’s exact test is how they handle the nuisance parameter(s) of the common success probability when calculating the p value. Fisher's exact test avoids estimating the nuisance parameter(s) by conditioning on both margins, an approximately ancillary statistic that constrains the possible outcomes. The problem with Fisher's procedure is that it falsely[citation needed] excludes some outcomes that are indeed[citation needed] possibilities for almost all types of experiments. Barnard’s test is better[citation needed] in that it considers all legitimate possible values of the nuisance parameter(s) and chooses the value(s) that maximize the p value. The theoretical difference between the tests is that Barnard’s test uses the double-binomial distribution, whereas Fisher’s test, because of the (usually false)[citation needed] conditioning, uses the hypergeometric distribution, which means that the p values it produces are not correct[citation needed]; in general they are too large, making Fisher's test too 'conservative': prone to unnecessary type II errors (excessive numbers of false negatives). However, even when the data come from a double-binomial distribution, the conditioning (which leads to using the hypergeometric distribution for calculating the Fisher's exact p value) produces a valid test, if one accepts that Fisher's test will necessarily miss some positive results.[3] Barnard's test is not biased in this way, and is more suitable for a broader range of experiment types, including those which are most common, in which there is no experimental constraint on either the row sums or the column sums of the table.
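The maximization over the nuisance parameter described above can be sketched directly. The following is a minimal, unoptimized illustration for the fixed-row-totals design, using the pooled (Wald) statistic as the ordering criterion (one common choice; several ordering statistics are possible) and a grid search as a simplification of the exact maximization:

```python
# A minimal sketch of Barnard's unconditional p-value for two independent
# binomials with fixed group sizes n1, n2.  The Wald (pooled) statistic
# orders the tables; the grid search over the nuisance parameter is a
# simplification of the exact maximization.
import numpy as np
from scipy.stats import binom

def wald_statistic(x1, n1, x2, n2):
    """Pooled-variance z statistic for the difference in proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    with np.errstate(divide="ignore", invalid="ignore"):
        denom = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / denom
    return np.where(denom > 0, z, 0.0)

def barnard_p_value(x1, n1, x2, n2, grid_size=200):
    # Enumerate every table with the same fixed margins n1 and n2.
    a = np.arange(n1 + 1)[:, None]     # possible successes in group 1
    c = np.arange(n2 + 1)[None, :]     # possible successes in group 2
    z_obs = np.abs(wald_statistic(x1, n1, x2, n2))
    extreme = np.abs(wald_statistic(a, n1, c, n2)) >= z_obs
    # Maximize the probability of the 'as or more extreme' region over all
    # legitimate values of the nuisance parameter (common success prob.).
    p_max = 0.0
    for pi in np.linspace(1e-6, 1 - 1e-6, grid_size):
        table_probs = binom.pmf(a, n1, pi) * binom.pmf(c, n2, pi)
        p_max = max(p_max, table_probs[extreme].sum())
    return p_max
```

For a perfectly balanced table (e.g. 5 successes out of 10 in each group) every table is 'as or more extreme', so the p value is 1 for every value of the nuisance parameter.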

Both tests bound the type I error rate at the α level, and hence are technically 'valid'. However, for the design of almost all actually conducted experiments Barnard’s test is much more powerful than Fisher’s test, because it considers more ‘as or more extreme’ tables, by not imposing a false constraint ('conditioning') on the second margin, which the procedure for Fisher’s test requires (incorrectly so, with the exception of a few rarely-used experimental designs, where the conditioning for Fisher's test is valid)[citation needed]. In fact, a variant of Barnard’s test, called Boschloo's test, is uniformly more powerful than Fisher’s test.[6] Barnard’s test has been used alongside Fisher's exact test in project management research.[7]
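SciPy implements all three tests, so the comparison is easy to illustrate. The table below is hypothetical; the point is the relationship between the one-sided p values, since Boschloo's test uses Fisher's p value as its ordering statistic and so its one-sided p value never exceeds Fisher's:

```python
# Comparing one-sided p-values of the three tests in SciPy (>= 1.7).
# The table is hypothetical; rows = treatment groups (margin fixed).
from scipy.stats import barnard_exact, boschloo_exact, fisher_exact

table = [[7, 12], [1, 8]]   # e.g. healed / not healed in two groups

_, p_fisher = fisher_exact(table, alternative="greater")
p_barnard = barnard_exact(table, alternative="greater").pvalue
p_boschloo = boschloo_exact(table, alternative="greater").pvalue

print(f"Fisher:   {p_fisher:.4f}")
print(f"Barnard:  {p_barnard:.4f}")
print(f"Boschloo: {p_boschloo:.4f}")
```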

Criticisms


Under specious pressure from Fisher, Barnard retracted his test in a published paper.[8] However, many researchers prefer Barnard’s exact test over Fisher's exact test for analyzing 2 × 2 contingency tables,[9] since its statistics are more powerful for the vast majority of experimental designs. Fisher’s exact test statistics are conservative, meaning that its p values are too high, leading the experimenter to dismiss as insignificant results that would be statistically significant under the correct (and less conservative) double-binomial statistics of Barnard's tests rather than the almost-always invalid (and excessively conservative) hypergeometric statistics of Fisher's 'exact' test. Barnard's tests are not appropriate in the rare case of an experimental design that constrains both marginal totals (e.g. ‘taste tests’): although rare, experimentally imposed constraints on both marginal totals make the true sampling distribution for the table hypergeometric.

Barnard's test can be applied to larger tables, but the computation time increases and the power advantage quickly decreases.[10] It remains unclear which test statistic is preferred when implementing Barnard's test; however, most test statistics yield uniformly more powerful tests than Fisher's exact test.[11]


Footnotes

  1. ^ For "field observations" of multinomially distributed data, the chi-squared test is the most commonly used method of analysis; it produces "statistically correct" results, but is based on a normal approximation rather than exact statistics. Other methods also apply, and are discussed in the article on Pearson's chi-squared test.
  2. ^ The experimental result is only revealed in the interior of the table, with the count of the number of cups either correctly or incorrectly identified.

References

  1. ^ Mehrotra, D.V.; Chan, I.S.F.; Berger, R.L. (2003). "A cautionary note on exact unconditional inference for a difference between two independent binomial proportions". Biometrics. 59 (2): 441–450. doi:10.1111/1541-0420.00051. PMID 12926729. S2CID 28556526.
  2. ^ Ripamonti, E.; Lloyd, C.; Quatto, P. (2017). "Contemporary frequentist views of the 2 × 2 binomial trial". Statistical Science. 32: 600–615. doi:10.1214/17-STS627. hdl:10281/182103.
  3. ^ a b Fay, M.P.; Hunsberger, S.A. (2021). "Practical valid inferences for the two-sample binomial problem". Statistics Surveys. 15. arXiv:1904.05416. doi:10.1214/21-SS131.
  4. ^ Barnard, G.A. (1945). "A new test for 2 × 2 tables". Nature. 156 (3954): 177. doi:10.1038/156177a0. S2CID 186244479.
  5. ^ Barnard, G.A. (1947). "Significance tests for 2 × 2 tables". Biometrika. 34 (1–2): 123–138. doi:10.1093/biomet/34.1-2.123. PMID 20287826.
  6. ^ Boschloo, R.D. (1970). "Raised conditional level of significance for the 2 × 2 table when testing the equality of two probabilities". Statistica Neerlandica. 24: 1–35. doi:10.1111/j.1467-9574.1970.tb00104.x.
  7. ^ Invernizzi, Diletta Colette; Locatelli, Giorgio; Brookes, Naomi J. (1 January 2019). "An exploration of the relationship between nuclear decommissioning projects characteristics and cost performance" (PDF). Progress in Nuclear Energy. 110: 129–141. Bibcode:2019PNuE..110..129I. doi:10.1016/j.pnucene.2018.09.011. ISSN 0149-1970. S2CID 125100734.
  8. ^ Barnard, G.A. (1949). "Statistical Inference". Journal of the Royal Statistical Society. Series B. 11 (2): 115–149. doi:10.1111/j.2517-6161.1949.tb00028.x.
  9. ^ Lydersen, S.; Fagerland, M.W.; Laake, P. (2009). "Recommended tests for association in 2x2 tables". Statistics in Medicine. 28 (7): 1159–1175. doi:10.1002/sim.3531. PMID 19170020. S2CID 3900997.
  10. ^ Mehta, C.R.; Hilton, J.F. (1993). "Exact power of conditional and unconditional tests: Going beyond the 2 × 2 contingency table". The American Statistician. 47 (2): 91–98. doi:10.1080/00031305.1993.10475946.
  11. ^ Berger, R.L. (1994). "Power comparison of exact unconditional tests for comparing two binomial proportions". Institute of Statistics. Mimeo Series. 2266: 1–19.