Talk:Intelligence quotient/Archive 7
This is an archive of past discussions about Intelligence quotient. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | ← | Archive 5 | Archive 6 | Archive 7 | Archive 8 | Archive 9 |
Neuroimaging - Issues with the newly added section
The Hampshire and Owen paper "Fractionating Human Intelligence" that is the main subject of this new 'Neuroimaging' section is not thought of highly in the field. I will explain why and will provide sources.
The paper's main claim is that g or general intelligence is not a valid concept. First of all, there are a massive number of papers and studies going back almost 100 years in support of the concept of g or general intelligence. It's been studied in every possible way and with every available technology. It is, without question, the single most researched topic in psychology. And the weight of that research confirms the claim that g is an objectively real variable and the best existing measure of human cognitive abilities. (https://www.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf) (http://differentialclub.wdfiles.com/local--files/definitions-structure-and-measurement/Intelligence-Knowns-and-unknowns.pdf)
As for the paper itself... it claims to have a huge sample size, based on results of an online "IQ test" made up of 12 different cognitive tasks. They received 45,000 usable responses, which would be a large sample size - however, the central claims of the study rely exclusively on fMRI tests used to measure brain activity caused by each of the 12 tasks... and there were only 16 volunteers for the fMRI section of the study, a very small sample size indeed.
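To put the sample-size point in concrete terms, here is a rough power calculation, a minimal sketch in Python using scipy; the assumed true correlation of r = 0.3 is purely illustrative and not a figure from the paper. With only 16 scanned participants, the chance of detecting a moderate brain-behaviour correlation is small, whereas the online behavioural sample is easily large enough.

```python
# Approximate power to detect a Pearson correlation, via the Fisher z
# transformation. The effect size r = 0.3 is an illustrative assumption.
import numpy as np
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Two-sided power to detect a true correlation r with n subjects."""
    if n <= 3:
        return float("nan")
    z_r = np.arctanh(r)                # Fisher z of the true correlation
    se = 1.0 / np.sqrt(n - 3)          # standard error of z
    z_crit = norm.ppf(1 - alpha / 2)   # two-sided critical value
    # Probability that the observed |z| exceeds the critical threshold
    return norm.sf(z_crit - z_r / se) + norm.cdf(-z_crit - z_r / se)

for n in (16, 45000):
    print(f"n = {n:>6}: power to detect r = 0.3 is {correlation_power(0.3, n):.2f}")
```

Run as written, this prints a power of roughly 0.2 for n = 16, which is the usual statistical reason for being wary of far-reaching conclusions drawn from such small imaging samples.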
The paper shows evidence that different areas of the brain are used for different tasks. The authors believe that since human intelligence is formed from multiple cognitive components (different brain areas), a higher-order intelligence factor therefore does not exist. That's the gist of the paper. A more detailed description is that there are essentially two or three major areas of the brain involved in what we call "general intelligence" (memory, logic, and possibly verbal networks), and that many of the 12 tasks required each of these two or three networks in varying degrees. The authors argue that because of this, a single higher-order intelligence factor does not really exist. If you think that's an absurd argument... you're not alone. As I said, this paper has been commented on negatively by a number of people in the field, including Richard J. Haier, Sherif Karama, Roberto Colom, Rex Eugene Jung, Wendy Johnson, Michael C. Ashton, Kibeom Lee, and Beth A. Visser, among others.
Quotes from other scientists about the paper:
- "We stand by the opinion expressed in our preview: the Hampshire et al. paper is an interesting but flawed exercise and their conclusions are not as definitive, or original, as they believe." (https://www.researchgate.net/publication/263093285_Yes_but_flaws_remain)
- "There’s a sense, though, in which it doesn’t matter. If all tasks require both memory and reasoning (and all did in this study), then the sum of someone’s memory and reasoning ability is in effect a g score, because it will affect performance in all tasks. If so, it’s academic whether this g score is ‘really’ monolithic or not. Imagine that in order to be good at basketball, you need to be both tall, and agile. In that case you could measure someone’s basketball aptitude, even though it’s not really one single ‘thing’…"(http://blogs.discovermagazine.com/neuroskeptic/2012/12/24/how-intelligent-is-iq/#.VnWcXbYrKXY)
- But the best response, IMO, was given by Ashton, Lee, and Visser. They dismantle the paper piece by piece in their subsequent "Higher-order g versus blended variable models of mental ability: Comment on Hampshire et al": "Here we use CFA to compare a higher-order g model with a task mixing or blended variable model in relation to the data of Hampshire et al., and we find that the higher-order g model provides a much closer fit to the data. Following Thurstone (1938), we suggest that it is conceptually implausible that every task is influenced by every factor of mental ability. We also suggest that the non-existence of g would be demonstrated by finding mutually orthogonal markers of those factors; however, the data of Hampshire et al. and other mental ability datasets suggest that this cannot be achieved." (http://www.sciencedirect.com/science/article/pii/S0191886913012804)
- And then again by Ashton, Lee, and Visser in "Orthogonal factors of mental ability? A response to Hampshire et al" where they say: "We explain that Hampshire, Parkin, Highfield, and Owen (2014b) have not demonstrated any orthogonality of brain network capacities and that a model of mental ability structure must make testable predictions about individual differences in task performance. Evidence thus far indicates that g-based models account for intercorrelations among mental ability tasks better than do task mixing or blended variable models."(https://www.researchgate.net/publication/260029415_Orthogonal_factors_of_mental_ability_A_response_to_Hampshire_et_al)
However, the most important point to realize is that it doesn't matter how many brain networks or "neural systems" are involved in general intelligence; in fact, we wouldn't expect general intelligence to be centered in just one area of the brain. The currently most widely accepted models of consciousness, for instance Stanislas Dehaene's "global neuronal workspace", suggest that consciousness is a distributed system, made up of many different brain networks or "neural systems" that come together to form a single concept. Just like general intelligence. Here's a related quote from Dehaene's paper: "Because GNW (global neuronal workspace) neurons are broadly distributed, there is no single brain center where conscious information is gathered and dispatched but rather a brain-scale process of conscious synthesis achieved when multiple processors converge to a coherent metastable state." (http://www.cs.helsinki.fi/u/ahyvarin/teaching/niseminar4/Dehaene_GlobalNeuronalWorkspace.pdf)
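The point in the Neuroskeptic quote above (tall plus agile) and the distributed-networks argument can be illustrated with a toy simulation, a minimal sketch with invented parameters rather than anything taken from the Hampshire et al. data: even when task scores are generated from two statistically independent component abilities, every task correlates positively with every other, and the first principal component behaves like a g score.

```python
# Toy simulation: tasks built from two independent components ("memory"
# and "reasoning") still produce a positive manifold and a dominant
# first principal component. All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tasks = 5000, 12

memory = rng.standard_normal(n_people)
reasoning = rng.standard_normal(n_people)

# Each task loads on both components to a varying degree, plus noise.
loadings = rng.uniform(0.3, 0.7, size=(n_tasks, 2))
noise = rng.standard_normal((n_people, n_tasks))
scores = (memory[:, None] * loadings[:, 0]
          + reasoning[:, None] * loadings[:, 1]
          + noise)

corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)          # eigenvalues in ascending order

print("mean inter-task correlation:",
      round(corr[np.triu_indices(n_tasks, 1)].mean(), 2))
print("share of variance on the first component:",
      round(eigvals[-1] / n_tasks, 2))

# The first principal component tracks the simple sum of the two abilities.
first_pc = scores @ eigvecs[:, -1]
print("corr(PC1, memory + reasoning):",
      round(abs(np.corrcoef(first_pc, memory + reasoning)[0, 1]), 2))
```

In other words, finding several underlying networks does not by itself rule out a useful general factor; the simulated "g" here is just the blend of the components, exactly as the quoted commentary suggests.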
Lastly, a minor quibble: the article says: "They postulated that instead of using a single concept of G or intelligent quotient..." The general intelligence factor or 'g' is always printed as a lower case italic.
Because of the above - the fact that the only paper mentioned in the Neuroimaging section of this article is a relatively fringe paper that not only makes a claim contrary to the general consensus in the field, but also has a number of technical and logical flaws that have been pointed out by other researchers in the field - I have deleted that section. I hope I made a good argument for my edit, but if not, please let me know on this talk page. Thanks.
Bzzzing (talk) 19:57, 19 December 2015 (UTC)
Race versus ethnicity
I have undone the recent edit changing "Race and Intelligence" to "Ethnicity and Intelligence" since it is incorrect for the following reasons: 1) The term "race" is generally associated with biology, while "ethnicity" is associated with culture. "Races" are genetically distinct populations within the same species, while groups of different "ethnicity" may or may not be genetically distinct, but differ only in some cultural aspect, such as language, religion, customs, etc.; and 2) there is already an entire article on Wikipedia called Race and intelligence.
There is a movement among some groups to try to avoid the term "race" when referring to humans, or to downplay it as a "socially constructed" term, and I suspect that is why the edit was made. All terms are socially constructed, but that doesn't mean the term is any less useful or that what it refers to is any less real. The term "race" is very useful in human biology, and conveys very real, objectively measurable information. Yes, humans do exist on a biological continuum, but that continuum is not perfectly smooth and there are "bulges" on it. Those "bulges" are what we call "races".
The edit also added a link to Nations and intelligence, but I have left that for now, although I think it probably should be removed as well, since that subject hasn't had much good research done yet. Does anyone have any strong opinions on either leaving it or taking it out? thanks
Bzzzing (talk) 16:54, 21 December 2015 (UTC)
Neuroimaging - Issues with the newly added section RESPONSE
I won't undo the changes you made to the section that I had previously added... since you do have a point that it is outweighed by the current consensus. However, I will respond for the sake of argument regarding the topic of IQ, since I am very much interested in intelligence and cognition.
However long researchers have studied IQ or G, whether 100 or 1,000 years, it is still a flawed concept. It doesn't take much to notice that, unless of course one is convinced through appeals to authority, as many have been. My major argument is that IQ represents a pure measure of visual-spatial ability and reasoning, nothing more. That is evident from the fact that higher-order IQ tests such as Raven's Progressive Matrices base their test items purely on solving visual-spatial shapes and mental-rotation puzzles (and nothing else). So do Cattell's Culture Fair III and even the WAIS. For example, WAIS items and their measurement of mental abilities:
WAIS III IQ test
Object Assembly - spatial and mechanical items
Picture Arrangement - visual-spatial task
Picture Completion - visual-spatial task
Block Design - visual-spatial task
Letter-Number Sequencing - requires visuospatial working memory, therefore not a pure measure of verbal ability http://www.ncbi.nlm.nih.gov/pubmed/10868248
Arithmetic - requires mental rotation, therefore not a pure measure of verbal ability http://www.sciencedirect.com/science/article/pii/S0001691813001200
Cancellation - visual selective attention, visual-motor ability
The Information questions on the WAIS are based on the degree of general information acquired from one's culture (general knowledge gained from experience, outside reading and interests) and are therefore not a measure of cognitive ability.
Vocabulary questions are also based on past experience and environmental exposure. As for the picture vocabulary questions, they obviously require visual recognition. — Preceding unsigned comment added by Doe1994 (talk • contribs) 18:44, 22 December 2015 (UTC)
So no, "G" is simply visual-spatial processing power, which relies on the fronto-parietal network in the brain, and all IQ tests correlate with each other because they measure this single cognitive ability. I have talked to Richard Lynn, Wendy Williams and other psychologists about this and they have no defence against my arguments. Richard Lynn's argument was that "spatial ability" is important in intelligence, while Wendy Williams gave nothing in response. They are unwilling to criticize the 100-year-old models of intelligence, blinded by orthodoxy and a creationist-like loyalty to their field.
The Fractionating Human Intelligence study does make a big point, which is that different cognitive abilities rely on different cortical networks in the brain. For example, there is a separate network for processing verbal, auditory, communicative and language-based information, such as the temporal cortex, versus brain networks that process visual-spatial and numerical information, such as the parietal cortex. IQ tests also measure no verbal cognitive abilities, and they do a poor job of measuring short-term memory and working memory, as the Fractionating study pointed out.
I look forward to further discussion with you.
Doe1994 (talk) 02:54, 22 December 2015 (UTC)
IQ correlates strongly with every type of cognitive measurement ever devised. It also correlates strongly with academic success and future life success. The fact of the matter is that IQ measures something far more than the narrow "visual-spatial" aspect you claim... it measures one's ability to learn. If you know of any type of measurement at all which is better than an IQ test at determining one's ability to learn, I am curious to hear about it. Bzzzing (talk) 22:36, 22 December 2015 (UTC)
Also... I'm not sure you read my entire post explaining my reasons for removing the study. I answered many of the points you brought up in it. I also wrote about how "g factor" is always printed as a lower-case italic, not uppercase as in G. The fact that you keep typing it as "G" leads me to believe you didn't read my whole reasoning above. Bzzzing (talk) 22:42, 22 December 2015 (UTC)
Thanks for the response. I think you said it yourself: IQ merely correlates with success, which implies that it is partly measuring something that is "causal" or related to success. That goes back to my original argument that IQ tests measure visual-spatial ability, which is a part of this "causal" factor of intelligence and therefore allows IQ tests to predict success without measuring the entire spectrum of intelligence. Or, in another analogy, it's like measuring muscle strength to predict future performance in sports without actually measuring the entire range of athletic abilities.
I strongly disagree that IQ tests measure one's ability to learn, because they are actually based on the concept of fluid intelligence, not crystallized intelligence, and therefore measure one's natural aptitude to solve given problems and not one's ability to adapt, retain and sustain information over a prolonged period of time (learning).
The concept of the g factor is also pointless, because if researchers don't know what this mysterious g is, then it's redundant to draw any conclusions from such an unknown factor. I personally don't believe g exists; human intelligence is merely the integration of cognitive abilities such as reasoning, working memory, etc. in response to processing different kinds of information, such as verbal, spatial, visual or social. IQ tests only measure the visual-spatial processing and working memory part.
Anyway, my own opinion is that researchers in intelligence are not very intelligent themselves, which is why the concept of IQ is not very convincing to the public. And trust me, I have talked to all the pioneers in intelligence, such as Richard Lynn, Nyborg, Roberto Colom, Wendy Johnson and Scott Barry Kaufman.
Doe1994 (talk) 04:27, 26 December 2015 (UTC)
Regarding the Fractioning intelligence paper
Regarding that study and the concept of g, I don't understand why on Earth researchers would think there is a higher-order intelligence or "general factor". The fact that this was deduced by past researchers from the correlations among paper IQ tests automatically puts the concept in doubt, since correlation is not causation. (Also, all the IQ tests measure the same thing - visual-spatial ability.)
Even from an evolutionary standpoint, it wouldn't make sense for there to be a higher-order cognitive system. Intelligence probably evolved separately as different cognitive abilities in response to processing information in different environments, and therefore different systems for different cognitive abilities would have evolved over time in humans. What would have driven humans to evolve a separate higher system such as a general factor? Psychometricians can't answer this, and neither can they define what the general factor even is. My own proposition is that there are different systems for different cognitive/intelligence abilities: visual-spatial intelligence probably evolved as a response to early hominid visual-spatial navigation and hunting, while verbal intelligence evolved as a response to in-group communication, conversation and social dynamics. There is no requirement for this concept of g.
Doe1994 (talk) 06:43, 27 December 2015 (UTC)
- The Theory of multiple intelligences is referenced in the section just preceding where neuroimaging was put. Neuroimaging was a bad name for the section. I think perhaps the theory of multiple intelligences should have a more informative section name. However, as to all this business about personally knowing people and them not answering and having your own thoughts on the matter - that is all irrelevant to putting something in the article. The article needs to be based on citations with due weight. The article about multiple intelligences is not very supportive of it; if you have citations which show something else, please do add them. However, when I read the first citation that was added to the 'neuroimaging' section, it did not really support what was said here. Yes, it said different intelligences seemed to be supported by different parts of the brain, but it also talked about general intelligence as recruiting the various parts to work together. Dmcq (talk) 12:37, 27 December 2015 (UTC)
The Fractionating intelligence paper had nothing to do with the Theory of Multiple Intelligences; it had to do with the testing construct of IQ tests such as Raven's or the WAIS. Current psychologists think that short-term memory, reasoning and verbal ability can all be measured in one test, but the Fractionating intelligence paper pointed out that this assertion is superficial because each of those abilities resides in a separate network in the brain and therefore requires three separate tests in order to be measured accurately. Or in other words, the current IQ tests do not measure the full capacity and efficiency of those three networks.
Shootingstar88 (talk) 00:24, 28 December 2015 (UTC)User:shootingstar88
- 'Current psychologists think that short term memory, reasoning and verbal ability can all be measured in one test'? What gives you that impression? Or that even any of those can't be broken down more? Or that it makes much difference as far as this article is concerned? Dmcq (talk) 00:35, 28 December 2015 (UTC)
Because IQ test questions are not constructed in a way that measures the capacity for short-term memory, reasoning and verbal ability. They are constructed to measure the ability to solve novel problems, regardless of how much short-term memory, reasoning or verbal ability that takes. Therefore they do not actually measure the capacity of one's core utilities of intelligence, which the Fractionating paper defines as the three cognitive abilities I listed above. It's like measuring a person's ability to perform novel physical tasks in order to generalize about his athletic ability, without actually measuring the full capacity of his stamina, endurance, speed and flexibility. Do you understand?
I would go even further and suggest that IQ tests do not measure verbal ability, period. There is no indication that the WAIS measures any verbal fluid ability, while it is already established that Raven's only measures visual-spatial ability. The current literature also categorizes Arithmetic as "verbal" even though it requires spatial visualization and mental rotation, both of which are spatial abilities. This is in line with the common popular assumption that IQ tests are flawed and superficial.
User:shootingstar88 —Preceding undated comment added 19:43, 28 December 2015 (UTC)
- You seem to be reading a lot more into the paper than is there. It is an interesting paper and the technique seems useful. However, you seem to think that because they have produced evidence that the three factors they extracted account for a large fraction of IQ test scores, IQ tests should therefore be changed to specifically measure those factors. That simply does not follow. More importantly, it is not what the authors said. We really have to wait till some author says they are criticizing the IQ test before we write that they are. I think, though, it would be okay to write down what they said as a view of IQ as being composed of a number of factors. They did have something to say about the g factor, though, in that they saw little evidence of a single g factor. Dmcq (talk) 14:35, 29 December 2015 (UTC)
How can we do with all of them
abstract IQ make new notes — Preceding unsigned comment added by 80.157.80.122 (talk) 12:29, 21 April 2016 (UTC)
The scholarly consensus is that genetics is more important than environment in determining intellectual standing
This article states "Environmental and genetic factors play a role in determining IQ. Their relative importance has been the subject of much research and debate." There is some debate about which is more important, and environment certainly plays a role, but the consensus among scholars is overwhelming that genetics is more important than environment in determining an individual person's intellect if that person is in a normal environment. (An exception would be if a person were starved, tortured or sleep deprived; in those cases environment would probably be more important, because that does real damage to intelligence, and that is something mainstream scholars accept as an exception. Another exception some mainstream scholars accept is a person who is part of a discriminated-against group, such as African Americans being given inferior education. But mainstream scholars generally think that a person who is not suffering from one of those issues is going to have their intelligence determined more by genetics than environment.) I believe the consensus among scholars leans more in Hans Eysenck's direction than Stephen Jay Gould's with regard to individual differences in intelligence. So I think the article should be changed to not give undue weight to the environmentalist view, which is a minority view at this point. I'm going to review Eysenck's writing and some other writing I've looked at to back up my point that the consensus is that genes are more important than environment in individual differences in normal environments. RandomScholar30 (talk) 02:40, 27 May 2016 (UTC)
- Eysenck stated in Intelligence: A New Look: "It has been known for many years that heredity contributes more than environment to differences in IQ, but recent years have brought forth a veritable flood of evidence to support and strengthen this early finding." (Eysenck, H. J., Intelligence: A New Look, Transaction Publishers, page 9.) So Eysenck was stating that the scholarly consensus held his view that intelligence was primarily genetic. I will look for more evidence though and provide it as evidence in favor of changing what that sentence says. RandomScholar30 (talk) 03:04, 27 May 2016 (UTC)
- Eysenck also quoted in his autobiography a statement that said "On the whole scholars with any expertise in the area of intelligence and intelligence testing (defined very broadly) share a common view of the most important components of intelligence and are convinced that it can be measured with some degree of accuracy. An overwhelming majority also believe that individual genetic inheritance contributes to variations in IQ within the white community...". That is from page 290 of Eysenck's Rebel with a Cause Transaction Publishers 1990 [1]. The context of who made the statement was not on the page quoted from, I own a copy of it but don't want to dig it out right now, I will later. The point is this supports my point that there is already a scholarly consensus that genetics is more important than environment for intelligence. RandomScholar30 (talk) 03:11, 27 May 2016 (UTC)
- This meta-analysis from 2014 says, "Taken together, these findings appear to be most consistent with transactional models of cognitive development that emphasize gene–environment correlation and interaction." It does not appear that there is overwhelming consensus, so I would say the current wording is an accurate reflection of the current status of the debate. And actually this meta-analysis highlights the different views quite well, so I'm going to add it as a source for that sentence you quoted. —PermStrump(talk) 03:51, 27 May 2016 (UTC)
- I've read Eysenck's book. (I've used at least one of Eysenck's books as a source for a related Wikipedia article.) But you owe it to yourself to read newer books, because this is an ongoing field of research, and Eysenck has been dead for a while. -- WeijiBaikeBianji (Watch my talk, How I edit) 03:56, 27 May 2016 (UTC)
- Eysenck has been dead for a while, but I don't think he is as discredited as Jung, Freud or Skinner are, for example. His ideas are still respected. He can be used as a source in combination with other sources. This New York Times article seems to suggest that the current consensus is that genetics is more important than environment but that environment still plays a role: "A century’s worth of quantitative-genetics literature concludes that a person’s I.Q. is remarkably stable and that about three-quarters of I.Q. differences between individuals are attributable to heredity." http://www.nytimes.com/2006/07/23/magazine/23wwln_idealab.html?pagewanted=print The article is from 2006. RandomScholar30 (talk) 06:23, 27 May 2016 (UTC)
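For readers wondering where figures like "about three-quarters" come from: classical twin studies usually estimate heritability from the gap between identical-twin and fraternal-twin correlations (Falconer's formula, under the standard ACE assumptions). A minimal sketch with made-up, illustrative correlations, not the actual values behind the NYT figure:

```python
# Falconer's formula: heritability estimated from twin correlations under
# the ACE model (rMZ = A + C, rDZ = A/2 + C). The correlations below are
# illustrative placeholders, not data from any particular study.
r_mz = 0.85   # IQ correlation between identical (monozygotic) twins
r_dz = 0.48   # IQ correlation between fraternal (dizygotic) twins

a2 = 2 * (r_mz - r_dz)   # additive genetic component (heritability estimate)
c2 = r_mz - a2           # shared-environment component
e2 = 1 - r_mz            # non-shared environment plus measurement error

print(f"heritability ~ {a2:.2f}, shared environment ~ {c2:.2f}, non-shared ~ {e2:.2f}")
```

With these placeholder numbers the estimate comes out near 0.74, which is the kind of calculation behind "about three-quarters" claims; the literature, including the meta-analysis cited above, argues over exactly such estimates and the assumptions behind them.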
- Also, I don't think Eysenck was a racist or a right-wing extremist, in contrast to other IQ researchers such as Richard Lynn and J. Philippe Rushton. So he is OK to use as a source. RandomScholar30 (talk) 06:36, 27 May 2016 (UTC)
Hampshire et al. study
There's 100+ years of research on the construct of IQ, and this research is described in numerous textbooks, review articles, meta-analyses, etc. In light of that, can someone explain to me why there is a need to cover the study by Hampshire et al. [2] in this article? Can you at least cite some scholarly secondary sources that describe and contextualize the study, instead of that Independent article that mostly consists of pompous and absurd statements from the study's authors? Why do you think that a study of 13 individuals (that's their sample size in the brain scan part) should be discussed in Wikipedia?
There's also the fact that the Hampshire et al. study has a peculiar history, as described in Haier et al. The editors of the journal Neuron, where the study was published, lacked any expertise in psychometrics or intelligence research, so they commissioned an outside expert, the psychometrician Richard Haier, to write a commentary on the study before it was published. However, Haier, together with some colleagues, concluded that the study was highly flawed and said that it shouldn't be published without major revision. Neuron's editors, however, rejected this advice, published the study essentially unchanged, and refused to publish Haier's highly critical commentary.
Later, Hampshire et al. had back-and-forths about the study with psychometricians in the journals Personality and Individual Differences and Intelligence. These psychometricians, experts in the very topic of the structure of intelligence, rejected the argument of the study on numerous grounds.
It's quite clear that discussing this study gives it undue weight as it is not covered, and will likely not be covered, by any major reviews or textbooks discussing cognitive ability. There's a long history of researchers challenging the g theory, from Thomson and Thurstone to Horn and van der Maas. This history can be discussed in this article if needed and there are many reliable sources documenting it, but there's no reason to give inconsequential self-promoters like Hampshire any space here.--Victor Chmara (talk) 08:46, 24 May 2016 (UTC)
- Disagree: Why should we ignore actual Science in favor of Psychology? "Psychometrics" isn't a science. Its practitioners are mostly psychologists, not neuroscientists. The article was published in Neuron, a neuroscience journal. Secondly, just because something has been considered valid for "100+" years doesn't mean it can't be wrong. You're also ignoring that the sample size of the total study was 46,000+, selected out of 100,000+. If you're going to ignore the questionnaire sample set completely, then you might as well ignore all of psychometrics, because that's basically what it's based on in the first place. The actual neuroimaging sample set was an additional add-on to verify the findings. cӨde1+6TP 11:23, 24 May 2016 (UTC)
- Let's get real. Psychometrics is a mature science that produces highly replicable results with large effect sizes. Neuroscience, in contrast, is an immature field that, as any honest neuroscientist will admit, struggles with reproducibility and lack of basic statistical understanding among its practitioners[3]. Neuroscientific measures are far away from challenging behavioral measures in the prediction and understanding of behavior.
- Neuroscience methods can be profitably combined with psychometrics but that requires understanding of both fields, something that Hampshire et al. lack. Haier, for example, has published a number of studies that use brain imaging methods, but he would not, in this day and age, publish a study with N=13, and certainly would not make far-ranging claims based on such meager evidence.
- The brain imaging part of Hampshire et al. is the only part of the study that has any hope of providing new evidence. The fact that you think that the brain data was only "an additional add on to verify the findings" means that you don't understand the study at all and shouldn't be commenting on this.
- As to their behavioral data, they are of the type that are a dime a dozen in differential and educational psychology, although tests with such poor psychometric properties as those of Hampshire et al. are unusual. Note that when Ashton et al.[http://www.sciencedirect.com.sci-hub.cc/science/article/pii/S0191886913012804] compared the fit of a standard higher-order g-factor model to that of their parameterization of the Hampshire model using a correlation matrix provided by Hampshire, the fit of the g model was clearly superior. Therefore, Hampshire et al's own behavioral data provides strong evidence against their model.--Victor Chmara (talk) 12:29, 24 May 2016 (UTC)
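For anyone curious what such a fit comparison looks like in practice, here is a minimal sketch of fitting two confirmatory factor models to the same data and comparing their fit indices, assuming the Python semopy package and its lavaan-style model syntax; the task names, data file, and the two models (a single general factor versus three correlated group factors) are simplified placeholders, not Ashton et al.'s actual higher-order-g and blended-variable parameterizations.

```python
# Sketch of a confirmatory-factor-analysis fit comparison, assuming the
# semopy package (lavaan-style model syntax). Data and variable names are
# hypothetical placeholders, not the Hampshire et al. task battery.
import pandas as pd
import semopy

# One column per task score, one row per participant (hypothetical file).
df = pd.read_csv("task_scores.csv")

single_factor = """
g =~ t1 + t2 + t3 + t4 + t5 + t6 + t7 + t8 + t9 + t10 + t11 + t12
"""

three_factors = """
MEM =~ t1 + t2 + t3 + t4
REA =~ t5 + t6 + t7 + t8
VER =~ t9 + t10 + t11 + t12
"""

for name, desc in [("single factor", single_factor),
                   ("three correlated factors", three_factors)]:
    model = semopy.Model(desc)
    model.fit(df)
    stats = semopy.calc_stats(model)  # DataFrame of fit indices
    print(name)
    print(stats[["CFI", "RMSEA", "AIC", "BIC"]].round(3).to_string(index=False))
```

Comparing indices such as CFI, RMSEA, AIC and BIC across candidate models is essentially what the cited reanalysis reports when it concludes that the g-based model fits the data better.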
- I think the Hampshire et al study is a reasonable inclusion - the back and forth between them and the psychometricians can be summarized as well. It should of course not be included the way it was originally included, and it probably doesn't merit more than a couple of sentences. But psychometricians do not have a monopoly on studying intelligence (educational research, cognitive psychology, neuroscience, information sciences, AI, philosophy), and frankly it seems absurd to me to think that neuroscience and cognitive science have nothing to contribute to our understanding of intelligence. I will refrain from giving my own opinions about psychometrics, and hence not respond to Victor Chmara's descriptions of the field. ·maunus · snunɐɯ· 14:01, 24 May 2016 (UTC)
- The only things notable about the study are the way Neuron bungled the peer review process and the absurd media campaign Hampshire et al. waged. There are hundreds of studies on the neuroscience of IQ differences, most of them better than Hampshire's, with sample sizes larger than 13, but there's no reason to discuss any individual study in this article. As this article is about IQ and not intelligence in general, psychometric research is inevitably at the foreground, but views from other disciplines can of course also be incorporated, provided that they meet normal Wikipedia requirements.
- The reason the Hampshire study caused debate in differential psych journals was that the strong claims made in it were inconsistent with the weak evidence presented. Nothing very interesting emerged from the debate, and the topics discussed -- factor rotation, ergodicity, selection effects, etc. -- are not a good fit for a general article like this.--Victor Chmara (talk) 14:33, 24 May 2016 (UTC)
- One helpful suggestion I received when I mentioned this article as an entry for the latest Core Contest is that this article is way too long by Wikipedia article length guidelines. We should be using hypertext and summary style more here to actually shorten the article, not dump into it paragraphs after paragraphs of text about unreplicated primary research studies or fringe views on IQ testing (pro or con). Victor is correct that there is plenty of reliable secondary scholarly literature on this article's topic (always new textbooks and handbooks coming out, which I find in libraries and mention here on the article talk page from time to time). We should use resources like those to improve the article, rather than cherry-picking primary research publications mentioned in the latest press release. That's simply upholding the Wikipedia guideline on reliable sources. -- WeijiBaikeBianji (Watch my talk, How I edit) 17:54, 24 May 2016 (UTC)
Code16 and Maunus insist that a discussion of a study whose authors claimed that their "findings conclusively disproved the notion of a general intelligence factor" must be included in this article. Notably, this study involved a small neuroimaging analysis of 13 individuals of unknown ability levels (the original sample was 16, but 3 were excluded for failing to conform to the factor model they forced on the data). The g factor is perhaps the oldest still current concept in scientific psychology, and the literature on it -- spanning many disciplines from psychometrics to evolutionary psychology and from behavioral and molecular genetics to neuroscience -- is voluminous. The small study by Hampshire et al. is but a speck in the ocean of arguments and studies about the construct. It would be extraordinary if one small and methodologically unimpressive study "conclusively" proved anything about the g factor.
One of Wikipedia's principles is that exceptional claims require multiple high-quality sources. Exceptional claims are those that "are contradicted by the prevailing view within the relevant community, or that would significantly alter mainstream assumptions, especially in science, medicine, history, politics, and biographies of living people." The claims by Hampshire et al. are certainly exceptional in this sense, as seen in the subsequent comments on the study in scholarly journals. However, instead of multiple high quality sources, the only source supporting the claims made that is cited is an article in The Independent. As a further illustration of the quality of that newspaper's science reporting, I'll note that last year it published an article[4] claiming that a Nigerian professor had proved the Riemann Hypothesis. Following Maunus and Code16's reasoning, I guess we'll have to change the Riemann hypothesis article to reflect this "fact" -- no need to have any other sources.
So, I'd like to see Maunus and Code16 justify how the Hampshire et al. study fulfills the requirements of WP:UNDUE, WP:SECONDARY, and WP:EXCEPTIONAL.--Victor Chmara (talk) 12:34, 27 May 2016 (UTC)
- That's not how it works. We've already listed our reasons. It's now up to other editors to decide and collectively arrive at consensus, given the positions. You can't resort to WP:Wikilawyering to force deletion of reliably sourced content from scientific journals. So please refrain from further reverts until consensus has been achieved. Thanks. cӨde1+6TP 16:27, 27 May 2016 (UTC)
- Asking people how Wikipedia policies support their editing decisions is not "wikilawyering." On the contrary, it's the way disputes are supposed to be resolved around here. As far as I can discern, you have presented two reasons for including this study.
- Firstly, you argued that neuroscientists are real scientists while psychometricians aren't, so the views of the former should be given precedence. This is, of course, an absurd argument if you know anything about the two fields. More importantly, Wikipedia does not recognize any hierarchy of sciences where one science yields more reliable results than another, so your personal opinions about neuroscience and psychometrics are irrelevant and can be disregarded.
- The second reason you gave was that the sample size of the study was 46,000 and the fact that the neuroimaging analysis had a tiny N is irrelevant because that part of the study was just an unimportant add-on. Aside from the fact that you clearly didn't understand the study, what you are arguing is that the psychometric part of the study is the reason why the study should be discussed in this article. However, the behavioral data (online tests) Hampshire et al. reported support a g-based understanding of intelligence, as shown by Ashton et al. in their reanalysis. Moreover, a sample size of 46,000 is nothing to write home about. For example, this study found support for g in a sample of 370,000 people.
- In sum, you have not provided any reasons for including the study that are relevant in light of Wikipedia's content policies.--Victor Chmara (talk) 08:49, 28 May 2016 (UTC)
Editing standards--let's bring this article up to good article status
I see there has been discussion about improvement of this article among several editors. One of the reasons I like to apply the letter and spirit of the WP:MEDRS guideline to articles about IQ testing is that those IQ tests are used for medical diagnoses, they have implications for what is called "cognitive epidemiology," and they are used in legal proceedings. Oh, and also many things that "everyone knows" about IQ tests are flat wrong. When Wikipedians write about a topic of such broad popular interest (this article has a lot of page views), it's only fair to the readers of Wikipedia to get the facts right. I've been concerned for a long time that the edit history of this article suggests a lot of attempts to push minor points sourced to a single primary research study, and a basic lack of reading sound reference books on the article topic to bring out due emphasis and balance in consideration of controversial issues related to the topic (which are numerous).
- Anyway, I've seen an article improve a lot and actually become less subject to edit-warring--even though the article was semiprotected for years beforehand--when a group of editors committed to using reliable sources that multiple editors had access to, revising the article English language from top to bottom just over a year ago. Maunus was a big part of that effort, and he demonstrated an ability to work collaboratively with other editors and diligently check sources. The editors jytdog and VictorChmara, among probably many other editors following the discussion here, have a lot of experience in looking up reliable sources and have a lot of perspective on the issues covered by this article. I'd be happy to join them and others in finally sourcing this article at least to the basic WP:RS requirements of sourcing to secondary rather than primary sources and making sure that the sources are current and mainstream. How many of us are on board to do the actual work of collaborating to check the sources and make sure none are fudged and that all statements in the article are well verified as the article is revised and reorganized until it meets the Wikipedia good article standards? -- WeijiBaikeBianji (Watch my talk, How I edit) 03:49, 30 May 2016 (UTC)
- The principal problem MEDRS is supposed to treat is people self-diagnosing. Wikipedia can't do much about the millions of test-your-own-IQ quizzes around the place, and no fixing of this article will make a blind bit of difference to them, and no one is going to self-harm because they read that IQ has three factors or whatever in it or that there is a g factor. The uses you talk about are very preliminary and IQ does not count in any proper tests. There is no particular danger which has to be avoided by this article. One might as well say an article about weightlifting should be under MEDRS because medical people test your grip.
- It is not our job to get the facts right and present the one true truth. We should be presenting the various opinions that are out there with due weight as supported by reliable sources. We are not here to be fair to readers. Our job to readers is to present what is out there in a good readable manner as befits an encyclopaedia.
- The amount of controversy in this article is easily manageable and has been managed well for years. There are only six pages of archives and it is not mentioned as a contentious page anywhere; the only contentious link is Race and Intelligence, which is a different article with 96 archives.
- I support good sourcing and MEDRS supports good sourcing. But going from that to supporting MEDRS for this article is simply bad logic and actually a wrong thing to do. MEDRS is a butt-covering guideline so we aren't responsible for wrong medical advice. It suppresses opinions that aren't peer reviewed and don't have secondary sources reviewing them. In the case above, information from a peer-reviewed source that was cited to a newspaper secondary source and also had a journal article critiquing it was removed because that was not enough under MEDRS. The article did not make a claim that was extraordinary in any way. Exactly what is served by that sort of thing here?
- So overall I think your idea of using MEDRS is a bad idea. It would produce an article that was directed at being safe rather than being an encyclopaedia with opinions in due weight. We want a reliable encyclopaedia rather than a safe one for most of Wikipedia. IQ tests have fewer health implications than weightlifting, and the Israeli-Palestinian conflict has far, far more health and controversy issues. And Race and Intelligence is not under MEDRS despite that RfC being raised there. Dmcq (talk) 08:42, 30 May 2016 (UTC)
- It is not accurate that "The principal problem MEDRS is supposed to treat is people self--diagnosing". You don't understand MEDRS nor how it used in WP, and more importantly there is no point having this discussion in multiple places. Jytdog (talk) 08:46, 30 May 2016 (UTC)
- In general changing your comment after someone has responded is not what we do here. But pointing out that dif - sure, if someone wants content about how some training regimen or performance-enhancing substance improves performance, yes that is WP:Biomedical information. An article about weightlifting would have boatloads of content that isn't biomedical. MEDRS applies to content that is biomedical information, not whole articles. Like I said, you don't understand MEDRS nor how we use it; I don't understand why you are arguing about it. Jytdog (talk) 08:54, 30 May 2016 (UTC)
- You pointed out WhyMEDRS to me as justification that it was about that, but then it said things like "The use of WP:PRIMARY sources is really dangerous in the context of health." Now you come back just repeating your assertion. Who is it that isn't reading things and understanding them? As for multiple discussions, I think you are referring to WikiProject Medicine, where I enquired about this and pointed here. A person answered there rather than here and I replied there. You said above you had been considering putting a note there but didn't, and now you complain because I respond to a person? You launched a personal attack on me as your first action in the previous discussion and now you stick in more snide remarks. Can I yet again ask you to keep to the subject and points raised in the discussion, thanks? Dmcq (talk) 09:03, 30 May 2016 (UTC)
- As to IQ being biomedical information - it just isn't. The factors in it and their determination may be biomedical information okay but IQ is principally a social construct rather like the triathlon. It measures something about intellect just like a triathlon score measures fitness but it is a very general thing which is massaged in various ways and doesn't measure anything in particular. As WeijiBaikeBianji says it can be used in legal cases, but that is just appealing to the MEDRS danger aspect you reject. Dmcq (talk) 09:22, 30 May 2016 (UTC)
- I understand you are asserting that IQ isn't biomedical information. First, I am not saying that everything about IQ is biomedical information; some things about it, like how people make use of IQ scores, are not. But as soon as we get into psychology, neurology, or neuroscience and claims about IQ from a scientific or medical standpoint, we are on MEDRS ground. You are not getting consensus around the perspective that this isn't biomedical information, and at some point you will need to recognize that. I am sorry you are taking my remarks as snide as they are not intended that way. I am being very direct. You keep writing things that show you don't understand MEDRS nor how it is applied. And I really don't understand why you are making arguments about things you don't understand. That is very direct too. Jytdog (talk) 09:59, 30 May 2016 (UTC)
- Could you cut out the crap about how I am too stupid to understand MEDRS and how you know all about it, thanks. Provide some evidence that IQ is considered a biomedical measurement. From WP:Biomedical information, to what extent is it an attribute of a disease or a condition, an attribute of a treatment or drug, a medical decision criterion, a health effect, population data and epidemiology, or biomedical research? Or exactly why is it important to apply MEDRS here rather than normal Wikipedia standards? The purpose of MEDRS is stated quite clearly in the first sentence: "Wikipedia's articles are not medical advice, but are a widely used source of health information. For this reason, all biomedical information must be based on reliable, third-party published secondary sources, and must accurately reflect current knowledge." Dmcq (talk) 10:46, 30 May 2016 (UTC)
- I have made no claim as to why you don't understand MEDRS and how we use it. None, and I would never go there. I don't know why you don't understand these things. The fact that you don't understand them is clear in what you are writing here. Jytdog (talk) 18:12, 30 May 2016 (UTC)
- (Reposting attempt after edit conflict.) Of course we have had ICD codes on this article in diagnostic tests template for a long time. And that's because IQ tests, as written above, have uses in medical diagnosis. Maybe the way forward here is for editors who think we should do anything less than apply the WP:MEDRS standard of verifying article content here to give a positive rationale for how that could make the article better than it now is. What I'm here for is to edit an encyclopedia, and the current state of the article (and the state it has been in since I came here in 2010) is very little like an encyclopedia article about IQ testing--I've read most of those that are available in English. -- WeijiBaikeBianji (Watch my talk, How I edit) 11:36, 30 May 2016 (UTC)
- What do you mean by quality? That is the problem. The issue of the 2012 study was dealt with in a proper way in #Hampshire et al. study above. It was not dealt with properly by invoking MEDRS to blanket ban PRIMARY and ignore a reliable newspaper because as MEDRS says "Primary sources should generally not be used for medical content – as such sources often include unreliable or preliminary information, for example early in vitro results which don't hold in later clinical trials." and 'Articles in newspapers and popular magazines generally lack the context to judge experimental results. They tend to overemphasize the certainty of any result, for instance, presenting a new and experimental treatment as "the cure" for a disease or an every-day substance as "the cause" of a disease'. That's talking about health reasons for not putting in information which for another article very possibly would be reasonable to put in if there was no health concern. Quality for MEDRS means information which we're fairly certain has passed a number of checks for accuracy and correctness. Quality for other articles means presenting the issues with due weight as given by reliable sources. They are different things. Dmcq (talk) 11:55, 30 May 2016 (UTC)
- By the way, could you give an example or two of an encyclopaedia that you think treats IQ better thanks. Dmcq (talk) 12:03, 30 May 2016 (UTC)
- Newspapers and primary sources are simply not reliable for this kind of topic - it is a mistake to think so. They will inevitably represent a single viewpoint (that of the author) since they are not peer reviewed, and this is a field where opinions are so diverse and the discussions are so complex that giving any credence to non-peer-reviewed sources on the topic would be a mistake. Science writers and journalists cannot be expected to represent the topic with sufficient subtlety to be useful. On the other hand, if you have a specific news source or primary source and would like to argue for its inclusion, I think that a consensus could of course decide to do so. However, I see nothing to be gained by arguing for a laxer sourcing standard than necessary on this or any other page. If the article can be written following a MEDRS-like sourcing principle then by definition it will be of a higher quality than if we write it without doing so. ·maunus · snunɐɯ· 12:08, 30 May 2016 (UTC)
- That is an argument for chopping out newspapers in practically any topic. Whenever a newspaper article has been about something I know about they've got something wrong. But can we just follow Wikipedia's policies and guidelines please and report on what's out there rather than trying to just have the ultra clean truth thanks? Dmcq (talk) 12:13, 30 May 2016 (UTC)
- Errr....no? People here are actively working to take this article to the highest possible level of quality. If you prefer working on articles at a mediocre level of quality where it is not important to select the best possible sources, there are many other articles to work on. Newspaper articles should never be used as sources for FA and GA level articles if better sources are available. In this case they are available, and editors working on the article are both willing and able to use them. There really is no reason that I can see to argue for the inclusion of low-quality sources that do not meet the strictest sourcing requirements. ·maunus · snunɐɯ· 00:59, 31 May 2016 (UTC)
Genuinely curious here: how about if we just uphold the best practice now described in the general Wikipedia content guideline on reliable sources and keep in mind the Wikipedia criteria for good articles and move forward from there? The content guideline reminds us that primary research articles in general are not good sources for Wikipedia editing in general by the section on primary, secondary, and tertiary sources, which says, in relevant part,
Wikipedia articles should be based mainly on reliable secondary sources. ... Primary sources are often difficult to use appropriately. Although they can be both reliable and useful in certain situations, they must be used with caution in order to avoid original research. Although specific facts may be taken from primary sources, secondary sources that present the same material are preferred. Large blocks of material based purely on primary sources should be avoided.
The topic of this article is blessed by dozens of good reference books and textbooks for practitioners and for graduate students of psychology, as well as by book-length treatises on most important subtopics related to this article's main topic, and we may as well use those sources to guide us as to what the most central issues are in this article's topic scope. And the good article criteria remind us that
A good article is—
Well written: the prose is clear and concise, and the spelling and grammar are correct; and it complies with the manual of style guidelines for lead sections, layout, words to watch, fiction, and list incorporation.[3]
Verifiable with no original research:[4] it contains a list of all references (sources of information), presented in accordance with the layout style guideline;[5]
all in-line citations are from reliable sources, including those for direct quotations, statistics, published opinion, counter-intuitive or controversial statements that are challenged or likely to be challenged, and contentious material relating to living persons—science-based articles should follow the scientific citation guidelines;[6]
I'd be happy to collaborate with other editors here in applying those guidelines and criteria thoroughly to this article, rewriting from top to bottom in cooperation with other editors, to shorten this article (as the good article criteria include "the prose is clear and concise") and to make sure that all of the inline references have been checked (so that none are fudged) and all follow the scientific citation guidelines. Along the way, it will be easy to improve this article by improving linked articles on subtopics of the main topic of IQ, which makes better use of the hypertext flexibility of Wikipedia. Who else would like to join in this effort? I've previously produced the good article IQ classification about a subtopic closely related to the main topic of this article, and I personally own more than two dozen reference books related to the topic of this article, and can obtain dozens more from local libraries or online databases. What does everyone think about this? Oh, and by the way, what sources do each of you recommend that fully fit the guideline details of the current Wikipedia content guideline on reliable sources and are squarely on-topic with the topic of this article? Let's discuss what sources to use, and how to use them, to make this article indisputably better. -- WeijiBaikeBianji (Watch my talk, How I edit) 21:41, 30 May 2016 (UTC)
- Yes MEDRS and RS are indistinguishable if you are aiming high per RS, V, OR, NPOV etc. Every single policy and guideline urges editors to use secondary sources and there is good reason for it, deep in the guts of WP. Jytdog (talk) 22:11, 30 May 2016 (UTC)
- I'm happy with what WP:SCIRS says. And with many of the suggestions in WP:MEDRS about where to look for sources. But contrary to what Jytdog has just said, MEDRS is replete with 'must's instead of the 'urge's or 'should's in SCIRS. And I have no problem with those musts when talking about drugs and medical diagnosis and procedures. Dmcq (talk) 17:17, 1 June 2016 (UTC)
- I'm glad to see some consensus emerging. Let's move ahead by doing what all of Wikipedia's guidelines urge us to do when editing the most visited articles of Wikipedia. The good article criteria will be a good guide for improving this article. Who has previously uncited reliable, secondary sources to recommend for further improvements to this article? I have found some more since I last posted bibliographic information about sources to this article talk page. -- WeijiBaikeBianji (Watch my talk, How I edit) 18:41, 1 June 2016 (UTC)
The passage about dysgenics
The article says
A 1998 textbook, IQ and Human Intelligence, by N. J. Mackintosh, noted that before Flynn published his major papers, many psychologists mistakenly believed that there were dysgenic trends gradually reducing the level of intelligence in the general population. They also believed that no environmental factor could possibly have a strong effect on IQ. Mackintosh noted that Flynn's observations have prompted much new research in psychology and "demolish some long-cherished beliefs, and raise a number of other interesting issues along the way."[53]
I can't find an online version of Mackintosh, N. J. (1998), but I have found an updated second edition in Google Books. I read through pp. 26-29, and I don't think he thought the belief was a mistake, although he believes some figures have been an overestimation. He asked, "does the Flynn effect represent a real increase in intelligence, or just an increase in IQ scores?" So this passage has either misquoted Mackintosh, or Mackintosh has changed his mind in the intervening years. I hope someone will read the related pages on Google Books and update the article with the book's current content.--The Master (talk) 12:33, 2 June 2016 (UTC)
- A couple of comments. I have both editions of Mackintosh's book at hand for full context, and I also have many of the books and articles his books and Flynn's books cite as references. There simply isn't any evidence of dysgenic trends in modern developed countries. Another point is that this article Intelligence quotient is too long by Wikipedia rules about article length. It's doubtful that such a minor point, barely related to IQ testing as such, needs to be mentioned at all in this article except by way of linking to the article Flynn effect and perhaps a few other related subarticles (all of which should also be reviewed top to bottom for sourcing and referencing improvement as we proceed to improve this article to good article status). Thanks for using Google Books to glimpse the books--even better is to read the books from cover to cover, as I have. -- WeijiBaikeBianji (Watch my talk, How I edit) 15:58, 2 June 2016 (UTC)
- So do you think that passage should be removed or kept as it is? --The Master (talk) 00:09, 3 June 2016 (UTC)
- I don't see any reason to keep it as it is since it does not reflect the content of the second edition. If you don't write your version, I will revert to my version. --The Master (talk) 11:46, 4 June 2016 (UTC)
2012 study
In my view, we should be following MEDRS in discussing the science here, using reviews in the literature or statements by major institutions for content. The sourcing below is a popular-press article for the content itself, plus a "comment" letter from the literature. This is not how this content should be sourced; high quality sources would yield different content. I am going to look for the original study and reviews that discuss it, and I encourage others to do so as well. Let's find high quality sourcing to base content on about this....
A 2012 study, based on comparing the factor model of IQ with factor models of brain functioning, argued that at least three different cognitive skills were needed to support IQ - memory, reasoning and verbal skills. The study's authors argued that their findings conclusively disproved the notion of a general intelligence factor, and that intelligence was instead a mixture of various cognitive tasks.[1] The conclusions were criticized by other psychometricians, who argued that the task-mixing model did not sufficiently account for the data that suggest differential g-loading of different types of mental tasks.[2]
References
- ^ "IQ tests are 'fundamentally flawed' and using them alone to measure intelligence is a 'fallacy', study finds | Science | News | The Independent".
- ^ Ashton, M. C., Lee, K., & Visser, B. A. (2014). Higher-order g versus blended variable models of mental ability: Comment on Hampshire, Highfield, Parkin, and Owen (2012). Personality and Individual Differences, 60, 3-7.
Jytdog (talk) 16:43, 27 May 2016 (UTC)
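(Editorial aside for readers unfamiliar with the jargon above: "differential g-loading" means that different tasks correlate with the general factor to different degrees. Below is a minimal, purely illustrative Python simulation of a higher-order g model; the loadings are invented for illustration and are not taken from Hampshire et al. or any other source cited here.)

# Toy simulation of a higher-order g model. Illustrative only: the
# loadings below are made up and do not come from any cited study.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                        # simulated test-takers

g = rng.standard_normal(n)        # general factor
# Broad abilities draw on g to different degrees, plus specific variance.
memory    = 0.7 * g + 0.71 * rng.standard_normal(n)
reasoning = 0.8 * g + 0.60 * rng.standard_normal(n)
verbal    = 0.6 * g + 0.80 * rng.standard_normal(n)

# Observed task scores mix the broad abilities plus task-specific noise,
# so each task ends up loading on g to a different degree.
tasks = np.column_stack([
    0.9 * memory    + 0.4 * rng.standard_normal(n),
    0.9 * reasoning + 0.4 * rng.standard_normal(n),
    0.9 * verbal    + 0.4 * rng.standard_normal(n),
    0.5 * memory + 0.5 * reasoning + 0.5 * rng.standard_normal(n),
])

# All pairwise correlations come out positive (the "positive manifold"),
# and the first principal component, a stand-in for g, explains a large
# share of the variance even though the tasks tap different mixtures.
corr = np.corrcoef(tasks, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]
print(np.round(corr, 2))
print("share of variance on first component:",
      round(eigenvalues[0] / eigenvalues.sum(), 2))

Running the sketch shows the pattern the critics say a task-mixing model must also account for: positive intercorrelations among tasks and a dominant first factor.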
- OK the paper being reported on by The Independent is PMID 23259956. It is, per MEDRS, a primary source, not a review, so we shouldn't use it. Looking for a review now... Jytdog (talk) 16:46, 27 May 2016 (UTC)
- I'll agree with Jytdog's sourcing requirements. They are strict, but his argument is valid. cӨde1+6TP 16:54, 27 May 2016 (UTC)
- The most recent review I have found is PMID 26267702; that article was published about a year ago. It is behind a paywall and I can send it to anybody who wants it. 17:51, 27 May 2016 (UTC)
- From the abstract, seems like it's a positive review. cӨde1+6TP 21:32, 27 May 2016 (UTC)
- I disagree with Jytdog. IQ is nothing to do with medicine or psychiatry. The sources should just be of good scholarly quality as there is scholarship on the subject. There is no need to have unnecessary requirements on this article. Dmcq (talk) 23:09, 27 May 2016 (UTC)
- Not sure what the basis is for that. "Intelligence" is a phenotype arising from biology, just like any state of any human (disease, health, or other status, etc). See WP:Biomedical information. Of course MEDRS applies - we want very solid science here. For an explanation of why using the high quality sources called for by MEDRS (which is actually no different from RS) is essential for biomedical information, you can see the intro to the essay WP:Why MEDRS? (which I wrote most of) Jytdog (talk) 23:53, 27 May 2016 (UTC)
- Wouldn't MEDRS require a rewrite of the entire article though? I don't know, I haven't inspected it in detail, just thought that might be needed if applying MEDRS. cӨde1+6TP 00:32, 28 May 2016 (UTC)
- A reworking, probably yes. Slow and steady. Jytdog (talk) 04:18, 28 May 2016 (UTC)
- At this rate you'll want to put the shot put and running in as medical articles. They certainly have more doctors measuring them and advising on them than IQ. Are you really serious in thinking that any article involving a measurement that might be affected by genes is medical? At that rate we'll have all the games articles and loads of education articles and social sciences ones under MEDRS. How about we just leave MEDRS for articles where they actually study something that can be observed straightforwardly rather than third-level effects like IQ? Dmcq (talk) 10:34, 28 May 2016 (UTC)
- counter to your edit note there is nothing silly about high quality sourcing; it is the basis for high quality content. There are many reasons why primary sources in the biomedical sciences are unreliable. You don't get it and are not interested in understanding, so be it. Jytdog (talk) 10:49, 28 May 2016 (UTC)
- I don't appreciate personal attacks. I never said anything about using less good sources with the sources that are already in use. I was opposing the use of MEDRS. Dmcq (talk) 11:31, 28 May 2016 (UTC)
- No one who is serious about WP thinks that high quality sourcing is silly. Jytdog (talk) 11:32, 28 May 2016 (UTC)
- At another place you said I was proposing to do something which I was not proposing to do, even when I clearly stated I wasn't. And here you're implying I think high quality sourcing is silly. That is not so. I say again, I do not think high quality sourcing is silly. What I think is silly is applying MEDRS standards to this article about IQ. I think standard Wikipedia policy and guidelines are enough. Sources should be of comparable standard to those already here to have weight enough to be put in this article. IQ is not a medical matter; it is an agreed social construct which has many factors in it, probably many more and much fuzzier than measuring a person's biceps, and with fewer medical implications than weightlifting. Dmcq (talk) 12:24, 28 May 2016 (UTC)
There was already a thread about this study above. Anyway, the Conway & Kovacs review from 2015 mentioned by Code16 is available here. It doesn't mention Hampshire et al. 2012.
I agree that this article should be based on scholarly secondary sources and primary sources like Hampshire et al. should be avoided, but I don't think MEDRS is generally followed in psychology articles.--Victor Chmara (talk) 09:15, 28 May 2016 (UTC)
- MEDRS is used in the main psychology articles, but they don't all get the same attention and for some topics, the best sources aren't that great. This topic has plenty of high quality sources, so there's no reason to use primary research and popular press. Here's a 2014 review (free full text) that I skimmed the other day. I cited it in one part of the article already, but it looked like there's probably more that could be used. —PermStrump(talk) 10:21, 28 May 2016 (UTC)
- They might study factors they think contribute to IQ, but they don't study IQ as such. Do any of them think IQ is a real measurable medical thing or is it more like some arbitrary mixture of weight and height which has some social uses? Dmcq (talk) 11:19, 28 May 2016 (UTC)
- Who is "they"? —PermStrump(talk) 12:01, 28 May 2016 (UTC)
- The 'they' referred to the people studying IQ and writing the source articles about it. Articles don't study or do anything so references to articles doing something refer to the authors instead. Dmcq (talk) 12:19, 28 May 2016 (UTC)
- Who is "they"? —PermStrump(talk) 12:01, 28 May 2016 (UTC)
- I think there are many very good reasons to follow the strictest sourcing requirements here, and we have previously opted to do so based on local consensus at several closely related articles such as Race and Intelligence. I think we should do the same here, for the same reasons - namely that the field is controversial and primary sources can be found arguing for extreme positions in all directions, without reflecting the mainstream viewpoint. So indeed I think we should stick to high quality secondary sources such as review articles and high quality textbooks. If this decision is made, then on that account I would have no problem supporting the removal of the 2012 study and the responses to it. We can wait to see how secondary sources summarize the issue.·maunus · snunɐɯ· 14:05, 28 May 2016 (UTC)
- The article on Race and Intelligence seems fine and it isn't under WP:MEDRS. We don't need to follow a guideline that explicitly deprecates Scientific American because it isn't peer reviewed. Race and Intelligence is not a medical article and neither is this one. Dmcq (talk) 15:44, 28 May 2016 (UTC)
- I think that is exactly what we need. Journalists simply cannot be trusted to represent complex or controversial scientific topics in a reasonable way.·maunus · snunɐɯ· 12:03, 30 May 2016 (UTC)
- Using the best available sources applies to all wikipedia articles, especially contentious ones. WP:MEDRS lays out in a really clear way how to tell which sources are better quality, but it's not the only reason that it makes sense for this article to strive to use the best available sources. —PermStrump(talk) 15:55, 28 May 2016 (UTC)
- There is no health issue involved in this article. It is mainly a sociological issue. What is wrong with using MEDRS is that it puts on too stringent a requirement, and one which is devised for a different purpose and a different area. The Arab-Israeli dispute, for example, is very contentious. MEDRS is totally inappropriate for it. Climate change is contentious; MEDRS is not applied to it despite its very high science content. How would it work if subjects like climate change denial needed peer-reviewed articles for everything? Just because people don't like to treat contentious subjects doesn't mean we should use things like MEDRS to say everything in them has to be peer reviewed and so chop out a large portion of the topic. Gay rights is a much better candidate for being a medical subject by the arguments here, but it would be daft to use MEDRS on it. The article is getting on fine without MEDRS. MEDRS is inappropriate for the topic. Dmcq (talk) 16:12, 28 May 2016 (UTC)
- Call it MEDRS or call it using the best available sources; whatever you call it, it's what all articles should be striving to do. If any article has been citing a certain source and then other editors find a better source, 9.9 times out of 10 the source should be changed and the wording updated to better reflect the higher quality source. In most cases, systematic review articles and meta-analyses, when they exist, are going to trump primary research and popular press articles. I assure you that all of the controversial statements that reflect scientific findings in Climate change denial are supported by the strongest sources that exist. Popular press articles are only used for information that reflects public opinion, or for information that hasn't been contested by other editors yet (which is few and far between), or material that has been clearly demonstrated to reflect the mainstream, scholarly view and where it was decided by consensus that the popular press article did the best job articulating that view for lay readers. —PermStrump(talk) 16:23, 28 May 2016 (UTC)
- Just call it Wikipedia:Identifying reliable sources (medicine) which is its name. It isn't the standard for other things and it isn't something which would be good if applied to other articles. Medicine has special requirements because there's loads of people who'll look up Wikipedia about health matters and we don't want to be responsible for making them unwell. As it says at the start of the guideline "Wikipedia's articles are not medical advice, but are a widely used source of health information. For this reason, all biomedical information must be based on reliable, third-party published secondary sources, and must accurately reflect current knowledge." IQ is not medicine. We should follow the standard Wikipedia practice. Can you give a good reason why this article should be treated as a health article besides this idea of purity of sources as a goal? And exactly where in Wikipedia would MEDRS not apply according to you? Dmcq (talk) 17:14, 28 May 2016 (UTC)
- There is a pretty clear consensus here that we will use MEDRS for sources about IQ per se going forward (not society and culture stuff but IQ itself). Dmcq will you acknowledge that, per WP:CONSENSUS? Thanks. Jytdog (talk) 18:47, 28 May 2016 (UTC)
- I believe what you are doing is wrong, and that what others here support is what has already been done, which is using sources of appropriate weight rather than the stringent requirements of WP:MEDRS. MEDRS is there to restrict articles because of a real danger. There is no such imperative in this article. This is not a medical or health article; only some underlying factors might be, and they are covered in different articles. As far as I can see, the reasoning is to cut down on controversial aspects by invoking an irrelevant guideline. I shall raise this on the WT:WikiProject Medicine talk page and see what they think there, even though it isn't within the medicine project. Dmcq (talk) 20:13, 28 May 2016 (UTC)
- Great, I was thinking the same thing - and I see you did: Wikipedia_talk:WikiProject_Medicine#Use_of_MEDRS_at_IQ_article. Thanks for doing that neutrally! Btw, this is a highly charged and contested notion, and it is really important that the science be based on high quality sourcing as described in MEDRS. Jytdog (talk) 20:33, 28 May 2016 (UTC)
- Wikipedia has lots of mechanisms to deal with controversial content. That is not an appropriate use of MEDRS. Dmcq (talk) 20:53, 28 May 2016 (UTC)
- So you have said 10 times. Raising source quality is basic guidance for controversial articles - see WP:Controversial articles#Raise source quality. I was addressing why we should use high quality sources here, because you limited it to "danger"; again, if you read WP:Why MEDRS? you will see that it is not about "danger" per se; it is that the subject matter is complex, the primary literature is too (it is often not replicable, and is not intended for the general public), and there is a lot of garbage out there. That is exactly the case here. Jytdog (talk) 21:35, 28 May 2016 (UTC)
- The lead paragraph of that is "Editors who are new to health-related content on Wikipedia are often surprised when their edits are reverted with the rationale of "Fails WP:MEDRS", a shorthand reference to Wikipedia's guideline about sources considered reliable for health-related content. This essay attempts to explain why these standards exist." Note health repeated twice. It attempts to explain why the standards exists for health-related content. You are just ignoring sentences like "The use of WP:PRIMARY sources is really dangerous in the context of health." Dmcq (talk) 09:14, 29 May 2016 (UTC)
- Anyway where did you get the notion this article was particularly controversial? The article on Race and Intelligence might be but this one has been pretty much okay. Dmcq (talk) 10:18, 29 May 2016 (UTC)
Getting back on topic, has anyone been able to find reviews for the 2012 study in medical journals? cӨde1+6TP 10:29, 30 May 2016 (UTC)
- PMID 26267702 was mentioned above. My objections above were about ignoring a primary source which had been noted in a reliable newspaper and been talked about in another journal article. I can't see the review but above they said it seemed to be positive. Dmcq (talk) 11:20, 30 May 2016 (UTC)
- Yes, but didn't someone say that paper doesn't actually mention this 2012 study? I think we're still missing a secondary source for this that fits the standard. I'll still prefer medical or neuroscience journals here to diminish the monopoly of psychiatry in this article. cӨde1+6TP 16:30, 30 May 2016 (UTC)
- Dunno, I don't see that, but WeijiBaikeBianji (talk · contribs) offered above to give access if you want to check it out. Dmcq (talk) 17:11, 30 May 2016 (UTC)
- The original article can be seen at [5] and it has 51 articles citing it, which I would normally have thought indicated notability, with being right or wrong being secondary, and its Altmetric score places it in the top 5% of research articles scored by them. But yeah, if you want to follow MEDRS you'll need a lot more than that before allowing our sensitive readers to be exposed to it, so you'll have to check out those cites to see if it is okay. Dmcq (talk) 17:32, 30 May 2016 (UTC)
- The original article is a primary source for the results of the research done by its authors and their interpretation of their results. A review article putting those findings in context is secondary. That is how the scientific literature works. Jytdog (talk) 17:59, 30 May 2016 (UTC)
- And of course an article in a reliable newspaper specifically about it doesn't count as a secondary source because "They tend to overemphasize the certainty of any result, for instance, presenting a new and experimental treatment as "the cure" for a disease or an every-day substance as "the cause" of a disease". And this might I guess lead to some cure for the problem of not doing well in IQ tests. Dmcq (talk) 18:23, 30 May 2016 (UTC)
- Review articles aren't the only method for figuring out whether a scientific paper is generally accepted, and the further you get from the subject of pharmaceutical treatments, the less relevant it is. WhatamIdoing (talk) 01:13, 1 June 2016 (UTC)
PMID 25948648 and PMID 24094101 cite this paper (these are recent reviews; many more papers that aren't tagged as reviews also cite it). Perhaps those two will say something relevant. WhatamIdoing (talk) 01:13, 1 June 2016 (UTC)
- The first paper has this to say about the study:
"Normal and aberrant activity within transmodal cortex is indicative of individual differences in cognitive ability (Duncan and others 2000; Hampshire and others 2012; Mueller and others 2013; Seeley and others 2007) and mental health (Buckner and others 2009; Menon 2003; Mueller and others 2013)."
- The second paper has this:
"Given differences in connections, cytoarchitecture, etc., it seems certain that the anatomically distinct parts of the MD system must have somewhat different physiological functions, and the current literature contains several important proposals (e.g., Hampshire et al., 2012)"
- This is the full extent to which Hampshire et al. 2012 is discussed in these reviews. Couple this with the absence of the study from Conway & Kovacs's 2015 review, which focuses specifically on recent sampling-type theories of g, and it's easy to see that Hampshire et al. are the only ones to see their study as some kind of breakthrough. Others treat it as just another iteration of a familiar argument.--Victor Chmara (talk) 11:13, 1 June 2016 (UTC)
- I haven't seen any actual scientific journal discrediting or disparaging the results of the study so far... WhatamIdoing has pointed out that many other papers also cite this study. cӨde1+6TP 14:34, 1 June 2016 (UTC)
- Victor Chmara, is the "familiar argument" adequately described in the article? Or are this particular iteration and the general concept both being ignored? WhatamIdoing (talk) 23:05, 1 June 2016 (UTC)
- WhatamIdoing, I don't know if there's a need to go into great detail about it in this article. There's plenty of discussion of it in the g factor article and a bit in the g factor section of this article.
- Code16, the study was taken apart by Ashton et al. and Haier et al. in their commentaries.--Victor Chmara (talk) 12:23, 2 June 2016 (UTC)
- I don't really care about a bunch of psychologists talking trash about actual SCIENCE. Has any scientific paper actually trashed the study? Yes/No ? cӨde1+6TP 16:33, 2 June 2016 (UTC)
- I'm extremely interested in hearing what your demarcation criteria for science are, and how they help to discriminate between neuroscience and psychology. I'm half expecting that you think that way because 'neuroscience' contains the word science, while 'psychology' doesn't, but I hope you have something more substantial.
- I'm also interested in knowing what you think about the neuroscience work of several of the critics of Hampshire et al. While their disciplinary backgrounds vary -- e.g., Haier's and Jung's doctorates are in psychology, Karama's in neuroscience -- they have all studied intelligence by marrying classical psychometrics with neuroscience methods. They have published widely in neuroscience journals, and the designs of their studies are similar to Hampshire's except for the fact that they use larger samples and do not make elementary statistical mistakes.--Victor Chmara (talk) 17:07, 2 June 2016 (UTC)
- Science is empirical, psychology is not. I can convince you that I have schizophrenia by pretending to have its symptoms. I can't convince a doctor I have tumor in my brain by faking it. That's the difference. And I really don't care what individual wrote what in a psychology journal, the question was: Has any SCIENTIFIC paper actually trashed the study? Yes/No? cӨde1+6TP 18:01, 2 June 2016 (UTC)
- Psychology is plenty empirical. What you seem to be opposed to are self-reports and other behavioral measures. This puts you in a bind as the Hampshire study is entirely dependent on self-reports. In the standard neuroscience manner, they correlated brain measures with psychological self-reports (IQ test performance, measured in the slapdash manner typical of neuroscientists who are ignorant of psychometrics). They attempted to explain a psychological variable (IQ) by correlating it with measurements of the brain. This is, of course, exactly what thousands of psychologists do as well. The division between neuroscience and psychology is entirely non-existent here. Your thinking is based on some kind of word magic (neuroSCIENTISTS versus psychologists) rather than on a consideration of the substantive issues. Without links to behavioral measures neuroimaging data are completely meaningless and uninterpretable.
- You may think that schizophrenia doesn't exist, but medical science thinks otherwise. Schizophrenia diagnosis is based on behavioral measures, but it doesn't make it any less real. Wikipedia articles must be based on reliable sources, which include the research literatures in psychology, neuroscience, medicine, and other fields. Your personal opinions to the contrary are completely irrelevant when it comes to editing Wikipedia articles.--Victor Chmara (talk) 19:22, 2 June 2016 (UTC)
- Once again, you've completely avoided the question and resorted to ad-hominem attacks. And instead of issuing your own commentary on the study, or that of psychologists, neither of which has relevance to a paper published in Neuron, the question asked was: has any actual scientific paper disagreed with or discredited the study? Yes/No? P.S. I never said schizophrenia doesn't exist; I implied that psychologists are completely incapable of adequately understanding and/or treating conditions which are far beyond their witch-doctor-like methods (and will one day be completely obsolete). cӨde1+6TP 20:01, 2 June 2016 (UTC)
- I think you are getting the idea of Wikipedia a bit wrong. The main criterion for inclusion in Wikipedia is that secondary sources discuss something. The second criterion when one has sources like that is weight. How much are they talked about and how much note do people take of the sources? And how does that compare with other material in an article? Your question about whether sources have agreed or disagreed with the study is in fact not directly a major factor - it is only important because things which are agreed with tend to be talked about more and in better sources. At least that is the case in most of Wikipedia. Your idea corresponds more with MEDRS but in that case you'd need to produce reviews of the source - so it is up to you to produce support not up to others to refute it. Dmcq (talk) 22:19, 2 June 2016 (UTC)
- Which is why I originally asked if the paper was discussed and reviewed in the scientific literature; the answer to that is already provided in the affirmative. But the other user is claiming that it should be sidelined because his psychology sources deride it. To which I asked if any actually important sources dismiss it, to which he has no answer. The only thing missing is a detailed review, but I don't have access to medical or neuroscience journals... otherwise I would have already answered this question myself, and if I had found one I would have included it in the article already. cӨde1+6TP 22:46, 2 June 2016 (UTC)
- I ask again: Given that neuroscientists and psychologists use identical methods to study the neurobiological underpinnings of intelligence, how can you say that research by the former is science while research by the latter is not? Is the criterion simply the disciplinary association of the journal where a given study was published?
- I have access to scientific journals, including psychological, medical and neuroscience ones, and as far as I can see, the only extended discussions of the Hampshire study are the highly critical commentaries in Personality and Individual Differences and Intelligence. The secondary sources most relevant to this article -- textbooks and reviews on cognitive ability -- ignore the study.
- This article should discuss the neuroscience of intelligence in the same way that reliable secondary sources do. Here's a good review article on the topic, published in the leading neuroscience review journal, so it should satisfy your peculiar requirements for sources. Oh but look, it was written by three psychologists, so I guess we can't use it after all. LOL.--Victor Chmara (talk) 07:28, 3 June 2016 (UTC)
- The paper you cited doesn't criticize the Hampshire study at all; it is actually pleading with neuroscience to take the g factor seriously (indicating that they currently don't take it seriously). Not sure how you thought this would help your case, but thanks for helping mine. Also, the paper contradicts your statement that neuroscientists and psychometricians use "identical" methods. The "Method" includes the MODEL you use, not just the tools you use to gather data: e.g., a meter that measures voltage is a tool, while the equation V = −∫E·dl comes from an overarching theoretical model. That paper was actually a sales pitch to neuroscience to use its model. I don't see scientists buying it though, but good luck with the snake oil pitches. cӨde1+6TP 14:13, 3 June 2016 (UTC)
- There's a consensus on not including the Hampshire et al. study in this article, given that other researchers have not bought into their snake oil pitch, as you put it. Therefore, this discussion is purposeless. However, if you want to continue embarrassing yourself, I'm game.
- The Deary et al. review was published two years prior to Hampshire et al., so it'd be miraculous if they discussed that study. The review is somewhat didactic in tone, reflecting the fact that neuroscientists are generally ignorant of individual differences, as exemplified by the bumbling efforts of Hampshire et al.
- Methods are, of course, independent of the model used, and the same methods can be used to test different models. The model favored by Deary et al. is the higher-order g model, which is backed up by various lines of evidence, including behavioral genetic evidence. Hampshire et al.'s model is just something that they make up on the fly by imposing an orthogonal structure on their data. The method is really the model in Hampshire's study, and they don't compare their model to others, such as those based on an oblique rotation (Ashton et al. did this with Hampshire's behavioral data, showing that the oblique/higher order model fits better).--Victor Chmara (talk) 17:40, 5 June 2016 (UTC)
- Well, I would expect any model which expands another model with more parameters to fit better. The real question is how much better, whether it is significant, and what it means. I would hope that Ashton et al answered that question instead of just saying what you said. Have you got a source to back up your talking in such a derogatory fashion about their work? Is it a general opinion? In most articles, something which causes a lot of aggro also gains weight as being something to note. Dmcq (talk) 18:06, 5 June 2016 (UTC)
- In Ashton's reanalysis, Hampshire's model has more free parameters than Ashton's model. Ashton's fits better in terms of standard fit indices. Qualitatively, I would say that Ashton's model fits very well while the fit of Hampshire's model is not good but not awful either. (They also compared the models in an independent dataset. The fit of Ashton's model was pretty good, while Hampshire's was unacceptable.) Ashton et al. wrote three articles replying in detail to the claims by Hampshire et al., showing why Hampshire's model was statistically and theoretically improbable. Haier et al. listed a number of other problems in the study in their commentary. I have described above the probable reasons why this study has raised acrimony.--Victor Chmara (talk) 19:21, 5 June 2016 (UTC)
- A study that gets a better fit with fewer parameters certainly starts off with extra Brownie points in my estimation! Dmcq (talk) 21:58, 5 June 2016 (UTC)
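(Editorial aside on the fit-versus-parameters point above: standard information criteria penalize extra free parameters, so a richer model only "wins" if its gain in likelihood outweighs the penalty. The Python sketch below uses made-up log-likelihoods and parameter counts, not the figures reported by Hampshire et al. or Ashton et al.)

# Illustrative only: hypothetical log-likelihoods and parameter counts.
import math

def aic(log_likelihood: float, k: int) -> float:
    """Akaike information criterion: 2k - 2*lnL (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayesian information criterion: k*ln(n) - 2*lnL (lower is better)."""
    return k * math.log(n) - 2 * log_likelihood

n = 45_000  # hypothetical sample size

# The model with more free parameters reaches a slightly higher
# log-likelihood, but the penalty terms decide whether that is worth it.
models = {
    "higher-order (oblique) model, 30 parameters": {"logL": -101_200.0, "k": 30},
    "orthogonal blended model, 42 parameters":     {"logL": -101_190.0, "k": 42},
}

for name, m in models.items():
    print(f"{name}: AIC={aic(m['logL'], m['k']):.0f}, "
          f"BIC={bic(m['logL'], m['k'], n):.0f}")

With these invented numbers, the small likelihood gain does not cover the penalty for the 12 extra parameters, so both criteria prefer the simpler model; a large enough likelihood gain would reverse the verdict, which is exactly the "how much better and is it significant" question.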
- Chmara, realize that the only reason the study is not in the article (yet) is because of the on-going debate on MEDRS sourcing standards (which I actually support) and has nothing at all to do with your derogatory claims based purely in the psychometric paradigm. Even here on these threads, you will find no support for psychometrics being given primacy over neuroscience, and you were already told this by another editor on your own thread. Secondly, the "bumbling" and "ignorant" (as you call them) efforts of neuroscientists are based in an honest attempt to study the subject. This is why I'm actually supporting MEDRS being applied because the long-term result of this would be a virtual sidelining of psychometric-psychology papers, as the neuroscience in this field matures. I'll gladly sacrifice a pawn to capture the queen. cӨde1+6TP 20:23, 5 June 2016 (UTC)
- Whether we use MEDRS or ordinary RS standards, that study has no place in the article. The article could use some more discussion of the dimensionality of IQ test batteries, but there are any number of better sources for that.
- There is no conflict between neuroscience and psychology. Differential neuroscience is an attempt to explain the results of classical differential psychology in terms of neurobiology. However, to explain intelligence in neuroscience terms, you must understand what intelligence is in behavioral, observable terms. This is why the best-supported "neuro-model" of intelligence was developed by psychologists.
- Perhaps one day IQ tests can be replaced by brain scans and intelligent behavior can be reduced to brain parameters, but until then the psychological approach has primacy.--Victor Chmara (talk) 21:09, 5 June 2016 (UTC)
Terrible ways
IQ is a terrible way to measure intelligence, says the average person. Statistically, though, it's an accurate method. I will rephrase the initial phrase: "IQ is a terrible way to measure high intelligence when there isn't any."
- They usually mean the social functionality of the individual. For that, a well-functioning anterior cingulate cortex is more crucial, but general intelligence remains important. We should create a new Wikipedia page and explain the differences among the various terms.
- (Other non-scholars confuse IQ with happiness or personal fulfilment, and some conservatives confuse high IQ with a high number of offspring. These people never speak analytically. They aren't aware of their confusion, so you have to read long texts or speak with them to understand what they mean; they don't reveal their confusion directly, but it's obvious. They mean that IQ isn't the only thing that matters, but they are unable to compose an unbiased analytical text without passion. The way they speak isn't scientific, but if we correct their methodology we can express the same thing by explaining what each term means and what different societies value [important: or believe they value!] more.) — Preceding unsigned comment added by 2A02:587:4105:7600:5E8:7D50:B437:AB62 (talk) 17:33, 10 July 2016 (UTC)
Music
There seems to be some confusion in that paragraph: it starts with the correlation of musical training with IQ score, but then ends with "seems to last only 10-15 minutes", which clearly relates to the so-called Mozart effect (http://www.intelltheory.com/mozarteffect.shtml) and not to musical training. — MFH:Talk 09:26, 23 July 2016 (UTC)
new unsourced section
Moving this from the article to here until it can be sourced.
- IQ Variations
While less popular, there are other cognitive constructs that are recognized by the field of psychology. Gardner's theory of multiple intelligences describes intelligence as a set of different aptitudes. The following are examinations of different cognitive skills and forms of creativity.
- Watson-Glaser Critical Thinking Appraisal is a test used to assess and develop decision making skills and judgment.
- Torrance Tests of Creative Thinking is a test of creativity.
- Cappon IQ2