Wikipedia:Reference desk/Archives/Miscellaneous/2024 October 28
Welcome to the Wikipedia Miscellaneous Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
October 28
Social choice theory paradox
What is it called when the more people are involved in decision-making, the more subjective and biased (rather than well thought out) the decision becomes? (Suggesting that for a better decision, it should be made either by one person or by a narrow circle of people, technically implying an authoritarian approach.) I guess it's somewhat similar to Arrow's impossibility theorem, but I'm not sure. 212.180.235.46 (talk) 17:42, 28 October 2024 (UTC)
- I did not find mentions of the alleged counterintuitive effect of crowd size negatively impacting decision quality. Instead, I see such claims as: "We found that increasing the crowd size improves the quality of the outcome. This improvement is quite large at the beginning and gradually decreases with larger crowd sizes."[1] Since the cost or effort of determining the crowd decision increases with crowd size, and the rate of increase will hardly go down as the crowd grows, in any given situation there will be an optimal size beyond which the limited gain in outcome quality does not justify the cost increase. - It is established wisdom, experimentally verified, that social influence can have a negative effect on the wisdom of the crowd,[2] but this effect is not specifically related to crowd size. --Lambiam 10:06, 29 October 2024 (UTC)
- "Choice overload" and "overchoice" were common terms about 30 years ago. I do not know how common they are now. A phrase most people understand that means the same thing is "design by committee". Then, there are many old phrases that refer to the same phenomenon, such as "too many witches spoil the brew." There are many related observations, such as the observation that the intelligence of a crowd is equivalent to the dumbest person in the crowd (which I've heard is actually translated from a Polish phrase used to describe how Hitler's speaches to large crowds were accepted so well). In opposition, there is wisdom of the crowd, which can be confused. You are asking about decisions being made by a crowd. The wisdom of the crowd asks for a specific answer to a question, such as "What is the total number of potatoes that are turned into French Fries every day?" Nobody is likely to know, but everyone will guess. Half will guess too high. Half will guess too low. If you average it all together, you get an answer that tends to be accurate. But, as mentioned, that is entirely different than getting a crowd to decide what font to use for a business presentation. 12.116.29.106 (talk) 16:20, 29 October 2024 (UTC)
- Old phrases are not reliable sources for the existence of the alleged effect. The so-called jury theorems apply to crowd-based decision making in general, not just to estimating the value of a scalar quantity. --Lambiam 17:04, 29 October 2024 (UTC)
- @Lambiam, taking your first finding, we must assume that the premises of the inquiry (or inquiries) were reasonably prepared, that the question(s) attracted the attention of a lot of the available experts aware of the related problems, and that only in the end did the curious and the bystanders start joining the crowd. That specific claim was indeed about data collection campaigns among a preselected population of experts (your link). Subjectivity and bias would perhaps be more of an issue in political and social matters than in technical matters. This does not mean that a reduced committee necessarily starts from healthy premises, that's for sure, only that its individual members may stand more under possible public scrutiny than when they are, after a while, lost amongst the crowd. --Askedonty (talk) 21:39, 29 October 2024 (UTC)
- The crowdworkers in the experimental setup of the paper were not selected for being experts; the experimenters had no control over the level of expertise of the people signing up for the task. Discussing the problem of discrepancies between the reference data and the ground truth, the authors of the paper even write, explicitly: "Even if we would replace the crowdworkers with experts, this problem would not be completely solved." Given their evaluation method, no distinction was made between early and late sign-ups either. I see no argument why we "must assume" any of what you claim. - All of this is hardly relevant to the original question. Can you find any papers discussing a negative effect of crowd size on outcome quality? --Lambiam 07:29, 30 October 2024 (UTC)
- We have had several examples here on Wiki where many editors voting on a particular proposal created a mess and the discussion became sidetracked, ultimately being closed as inconclusive. I don't know about academic papers, but it appears that in some cases the involvement of a greater number of decision-makers shifts a potentially well-thought-out outcome towards an inconclusive and biased one, as the probability of inexperienced and hotheaded people taking part rises. 212.180.235.46 (talk) 08:53, 30 October 2024 (UTC)
- In most academic studies, the premise is that individuals first reach their decisions independently, whereupon a fixed algorithm consolidates these many decisions into a single crowd decision. If there is a preceding open discussion, there can be many confounding factors. Some people know how to sound authoritative and persuasive while they actually know next to nothing of the subject matter. Others may sidetrack the discussion by raising issues that, however important by themselves, are not relevant for the issue at hand. People may argue that A because of B, after which discussion may focus on the validity of B, although it has little bearing on the validity of A and refuting B does not tell us anything about A. See also FUD.
- When a decision is reached through voting among several alternatives, some of which are mere variants of each other (A1, A2, B1, B2a, B2b, B2c(i), B2c(ii), ...), it makes a tremendous difference how the voting is arranged and which voting system is used. Bad arrangements and systems can lead to outcomes no one wanted. This problem is well known, but independent of crowd size or pre-vote discussions. Without studying the examples you have in mind, I can't tell which of these issues made it a mess, but I doubt that the number of voters was by itself a major cause. A greater number of decision-makers means that the probability of having experienced, levelheaded and smart people aboard also rises. I see no clear reason why this should have a less powerful effect than the increase in know-nothings and firebrands.
- We do not need more anecdotal evidence. I still have seen no references of any kind to papers discussing a negative effect of increased crowd size on outcome quality. --Lambiam 10:34, 30 October 2024 (UTC)
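The two opposing effects discussed above (more know-nothings versus more levelheaded people) are exactly what Condorcet-style jury theorems formalize. A hypothetical simulation sketch, assuming independent voters who are each right with the same fixed probability:

```python
import random

def majority_accuracy(n_voters, p_correct, trials=2000, seed=0):
    """Estimate how often a simple majority of independent voters is right."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

# If each voter is right slightly more often than not, larger crowds help...
competent_small = majority_accuracy(11, 0.6)
competent_large = majority_accuracy(101, 0.6)

# ...but if each voter is right less often than not, larger crowds hurt.
incompetent_small = majority_accuracy(11, 0.4)
incompetent_large = majority_accuracy(101, 0.4)
```

Which effect dominates thus hinges on average individual competence, not on crowd size by itself, which matches the point made above.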
- OK, but what we need is to define more explicitly what we are talking about. Studies are formatted to give a standard ranking: individual, decision-making group, wisdom of crowds. The samples here come with two crucial dates, 1904 and 1907. You cannot expect the public not to wonder whether the concept is somehow flawed, given the events posterior to that era. Then you'll have the (U.S. gov) Library of Medicine, and they do leave open that there may exist other corridors behind some doors: "... group decision-making was not better than the wisdom of crowds, showing inconsistency with the results of Navajas et al. (2018)." They agree that the parametrization of the studies does play some role: "This inconsistency in result occurs because of no difference found in creativity and utilization of resources between group decision-making and the wisdom of crowds in complex information integration", "because confidence cannot accurately predict correct answers", "weighting confidence would lead to worse rank aggregation". The landscape left behind should not be so arid that people wonder whether the scientists were simply reluctant to jeopardize their position (mustard gas, you know, all that trenches stuff, etc.). --Askedonty (talk) 12:50, 30 October 2024 (UTC)
- One may hope that reports on studies examining the effect of crowd size on outcome quality define the assumptions, the procedure and the quality measure. We do not have to decide that for them here. There are actually many such papers and even whole books, which use different definitions and methods. What we are still missing is references to studies that support the allegation of the OP. --Lambiam 14:42, 30 October 2024 (UTC)