Hand's paradox
In statistics, Hand's paradox arises from ambiguity when comparing two treatments. It shows that a comparison of the effects of the treatments applied to two independent groups can contradict a comparison of the effects of both treatments applied to a single group.
Paradox
Comparisons of two treatments often involve comparing the responses of a random sample of patients receiving one treatment with an independent random sample receiving the other. One commonly used measure of the difference is then the probability that a randomly chosen member of one group will have a higher score than a randomly chosen member of the other group. However, in many situations, interest really lies in which of the two treatments will give a randomly chosen patient the greater probability of doing better. These two measures, a comparison between two randomly chosen patients, one from each group, and a comparison of treatment effects on a randomly chosen patient, can lead to different conclusions, as set out in symbols below.
This has been called Hand's paradox,[1][2] and appears to have first been described by David J. Hand.[3]
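In symbols (a minimal sketch; the notation X_A, X_B, p_between and p_within is introduced here only for illustration and is not taken from the cited papers), the two measures being contrasted are:

```latex
% X_A and X_B denote a patient's (potential) responses to treatments A and B.
\begin{align*}
  p_{\text{between}} &= \Pr(X_A > X_B)
    && \text{with } X_A, X_B \text{ independent (two different patients)},\\
  p_{\text{within}} &= \Pr(X_A > X_B)
    && \text{under the joint distribution of } (X_A, X_B) \text{ for a single patient}.
\end{align*}
```

Hand's paradox is the fact that these two probabilities can point in opposite directions, as the examples below illustrate.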
Examples
Example 1
Label the two treatments A and B and suppose that:
- Patient 1 would have response values 2 and 3 to A and B respectively.
- Patient 2 would have response values 4 and 5 to A and B respectively.
- Patient 3 would have response values 6 and 1 to A and B respectively.
Then the probability that the response to A of a randomly chosen patient is greater than the response to B of a randomly chosen patient is 6/9 = 2/3. But the probability that a randomly chosen patient will have a greater response to A than B is 1/3. Thus a simple comparison of two independent groups may suggest that patients have a higher probability of doing better under A, whereas in fact patients have a higher probability of doing better under B.
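A minimal sketch (in Python; not part of the original article) that enumerates the comparisons above and reproduces the 2/3 and 1/3 figures:

```python
# Sketch: verify the two probabilities in Example 1 by direct enumeration.
from itertools import product

# Potential responses of the three patients to treatments A and B.
responses_A = [2, 4, 6]
responses_B = [3, 5, 1]

# Between-group comparison: a randomly chosen patient given A versus an
# independently chosen patient given B (9 equally likely pairs).
p_between = sum(a > b for a, b in product(responses_A, responses_B)) / 9
print(p_between)  # 0.666... = 2/3, so A looks better across groups

# Within-patient comparison: each patient's own response to A versus to B.
p_within = sum(a > b for a, b in zip(responses_A, responses_B)) / 3
print(p_within)   # 0.333... = 1/3, so most patients do better under B
```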
Example 2
Suppose we have two random variables, X_A and X_B, corresponding to the effects of the two treatments on a patient. If we assume that X_A and X_B are independent, then P(X_A > X_B) is greater than 1/2, suggesting that A is more likely to benefit a patient than B. In contrast, among joint distributions with the same marginals, the one which minimizes P(X_A > X_B) leads to P(X_A > X_B) ≈ 0.38. This means that it is possible that in up to 62% of cases treatment B is better than treatment A (a concrete specification reproducing these figures is sketched below).
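One concrete specification consistent with these figures (an assumption made here for illustration; it is not stated in the sources cited) takes the response to A to be N(1, 1) and the response to B to be N(0, 1). Under independence this gives P(X_A > X_B) = Φ(1/√2) ≈ 0.76, while no joint distribution with these marginals can push P(X_A > X_B) below the Fréchet-type bound sup_x [F_B(x) − F_A(x)] = 2Φ(1/2) − 1 ≈ 0.38, and that bound can be approached. A minimal sketch under this assumed specification:

```python
# Sketch of Example 2 under an ASSUMED specification: response to A ~ N(1, 1),
# response to B ~ N(0, 1). These values reproduce the ~62% figure in the text.
from math import erf, sqrt

def norm_cdf(x, mean=0.0, sd=1.0):
    """CDF of the normal distribution N(mean, sd**2)."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

mu_A, mu_B, sd = 1.0, 0.0, 1.0

# Independent comparison (two different patients):
# P(X_A > X_B) = Phi((mu_A - mu_B) / sqrt(2)) for equal unit variances.
p_independent = norm_cdf((mu_A - mu_B) / sqrt(2.0))
print(round(p_independent, 2))  # ~0.76: A looks clearly better

# Smallest P(X_A > X_B) attainable with these marginals (Frechet-type bound):
# sup_x [F_B(x) - F_A(x)], found here by a simple grid search.
xs = [i / 1000.0 for i in range(-5000, 5001)]
p_min = max(norm_cdf(x, mu_B, sd) - norm_cdf(x, mu_A, sd) for x in xs)
print(round(p_min, 2))          # ~0.38

# So, within a single patient, B could be better than A in up to about
# 1 - p_min of cases, despite the 0.76 figure above.
print(round(1.0 - p_min, 2))    # ~0.62
```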
References
- ^ Fay M.P., Brittain E.H., Shih J.H., Follmann D.A. and Gabriel E.E. (2018) Causal estimands and confidence intervals associated with Wilcoxon-Mann-Whitney tests in randomized experiments. Statistics in Medicine, 37, 2923–2937. doi:10.1002/sim.7799
- ^ Greenland S., Fay M.P., Brittain E.H., Shih J.H., Follmann D.A., Gabriel E.E. and Robins J.M. (2020) On causal inferences for personalized medicine: how hidden causal assumptions led to erroneous causal claims about the D-value. The American Statistician, 74(3), 243–248. doi:10.1080/00031305.2019.1575771
- ^ Hand D.J. (1992) On comparing two treatments. The American Statistician, 46, 190–192. doi:10.1080/00031305.1992.10475881