
Talk:Replication crisis/GA1

From Wikipedia, the free encyclopedia

GA Review


Reviewer: Tommyren (talk · contribs) 00:19, 31 December 2021 (UTC)[reply]


I am excited to review this article!

Before evaluating the article based on the good article standards, I'll just be listing a few thoughts that struck me as I read through the article. Some, or perhaps all of them, may go beyond the good article standards.

Lead

1. "The replication crisis most severely affects the social and medical sciences." This statement is not supported by the inline citation that follows it, as the source only says that replication is a problem in the social and medical sciences, but does not necessarily say that it is most severe in those fields. A similar issue has been discussed in the section "Sources for most impacted fields." However, from citation #10, the Fanelli article, it seems that we just might argue that the medical sciences are most severely affected. From citation #79, the Duvendack et al. article, it seems that we can argue that the economic sciences are more severely affected than others.

checkY I've modified the statement. --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]
I agree with your modification. However, as mentioned above, there is existing literature on what fields seem to have more of a replication crisis, and I feel that this information should appear somewhere in the article. Tommyren (talk) 16:43, 31 January 2022 (UTC)[reply]
I haven't really seen reviews or large scale research comparing fields (other than that Nature survey, which I wouldn't describe as a RS for this particular claim), so I'm not sure. By "above", do you mean your statement here? The Fanelli study is specifically about fabrication/falsification, and I would say it's not an RS for a statement about the replication crisis generally. I would summarise Duvendack as being more focussed on the slow pace of change within economics - it talks mainly about how little is being done to do more replications and make more effort to have reproducible results, rather than describing an actual lack of reproducible results. You'll see I changed the phrasing of the statement it's cited for because of that. --Xurizuri (talk) 10:27, 1 February 2022 (UTC)[reply]
I need to go through the Begley and Ioannidis paper to make sure that nothing along these lines was mentioned there.--Xurizuri (talk) 14:31, 2 February 2022 (UTC)[reply]

2. Why does Ioannidis's (2005) paper deserve to be screenshotted but not other people's papers? I'm not saying that we should delete the image. In fact, to me the paper's title is quite effective at piquing my interest for the whole Wikipedia article. But I just want to make this comment here in case others come up with a better image. — Preceding unsigned comment added by Tommyren (talkcontribs) 14:34, 31 January 2022 (UTC)[reply]

I had a look through Wikimedia, but I wasn't able to find anything better unfortunately. --Xurizuri (talk) 10:27, 1 February 2022 (UTC)[reply]

Background

I've done about a third of the background section now. The plan is to have an explanation of replication/reproducibility and its importance (done), an explanation of what the replicability crisis is and how it fits into the scientific process (not done), and potentially some explanation of significance and effect size testing (not done, still figuring out how necessary it would be). --Xurizuri (talk) 10:27, 1 February 2022 (UTC)[reply]

The section of the talk page referring to a “General” section had sources that may be useful here, as would the three reviews I mentioned under "Xurizuri's concerns". I would also definitely need to add information on the statistics. A lot of the causes and remedies sections are nonsense without it. --Xurizuri (talk) 14:31, 2 February 2022 (UTC)[reply]
"This should result in 5% of hypotheses that are supported being false positives (an incorrect hypothesis being erroneously found correct), assuming the studies meet all of the statistical assumptions." I don't think this is a correct interpretation of the p-value. Depending on how rare true effects are among the tested hypotheses, the percentage can be much higher than 5%. A 0.05 p-value means that if there were no effect, the statistical procedure would still find a statistically significant result 5% of the time. Tommyren (talk) 14:34, 3 February 2022 (UTC)[reply]
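Tommyren's reading can be checked with a quick simulation: when the null hypothesis is true, a 0.05 threshold produces significant results roughly 5% of the time by construction. A minimal, self-contained sketch (Welch's t statistic with a normal approximation to the p-value, adequate at n = 30 per group; the sample sizes and trial count are arbitrary choices for illustration):

```python
import math
import random
import statistics

def welch_p(x, y):
    # Two-sided p-value for a Welch t statistic, using a normal
    # approximation to the reference distribution (fine for n >= 30).
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    t = (statistics.mean(x) - statistics.mean(y)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(0)
trials = 2000
false_positives = 0
for _ in range(trials):
    # Both groups are drawn from the SAME distribution, so the null is true.
    x = [random.gauss(0, 1) for _ in range(30)]
    y = [random.gauss(0, 1) for _ in range(30)]
    if welch_p(x, y) < 0.05:
        false_positives += 1

rate = false_positives / trials
print(f"significant results under a true null: {rate:.1%}")  # close to 5%
```

By contrast, the share of supported hypotheses that are false positives depends on how often the tested hypotheses are actually true, which the threshold alone does not control.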

1. Last paragraph needs citations. — Preceding unsigned comment added by Tommyren (talkcontribs) 03:04, 18 February 2022 (UTC)[reply]

2. The difference between systematic and conceptual replication could be made a bit clearer. — Preceding unsigned comment added by Tommyren (talkcontribs) 00:32, 6 March 2022 (UTC)[reply]

In Psychology

1. This section contains a lot of information on causes of the replication crisis, including QRPs and the disciplinary social dilemma. Information of a similar vein is given later in the "Causes" section. I wonder if it would be a good idea to take information about causes of the crisis from this section and move it to the "Causes" section. I think this may make the article feel less repetitive. Similarly, this section also contains a lot of information on potential remedies for the replication crisis, such as the discussion on whether to invite the original author into replication efforts, and result-blind peer review. This should go into the "Remedies" section. Also, perhaps the "methodological terrorism" controversy can go under the "Consequences" section.

Tommyren, Marisauna, I think that with the current L2 heading of "Scope", it would make sense for the content to remain where it is, because the sources are directly addressing psychology. However, from checking the actual content of the "Scope" section under the other academic fields, I think that it should actually be renamed to "Prevalence" or something similar - "Scope" could theoretically have that meaning, but the lack of clarity is still a problem. Regardless, if the content is moved, it needs to be attributed as being specific to psychology, per the sources. (Also I hope it's correct for me to add comments to points like this.) --Xurizuri (talk) 01:43, 27 January 2022 (UTC)[reply]
I went ahead and did this. --Xurizuri (talk) 03:19, 31 January 2022 (UTC)[reply]
 Done I didn't initially move the methodological terrorism stuff, but I have now. --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]

2. "Several factors have combined to put psychology at the center of the controversy." This sentence is misleading to me because it seems to say that the replication crisis is most severe in the field of psychology. However, if you go into the source for this sentence, it actually says largely the opposite: other fields could have replication crises just as severe. So far, it seems to me that psychology is at the center of the controversy largely because it has received the most scholarly and media attention, not necessarily because the crisis is particularly bad in this field.

 Done I added a statement clarifying this --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]

3. "Social priming." Maybe we should explain what social priming is. Also, I am not 100% sure that The Chronicle of Higher Education is a reliable source.

4. I don't think we should mention the "Psycho-babble" report by the Independent. "Psycho-babble" is such a colloquial word and can mean different things to different people. Is the Independent invalidating all aspects of non-replicable research? Would that be fair?

I agree. It's not a fair conclusion to draw, and while WP:RSP#The Independent has it as "generally reliable", that rating only applies to its non-specialist articles. This is absolutely a specialist topic. I've removed that part of the sentence. checkY --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]

5. "Early analysis of result-blind peer review, which is less affected by publication bias, has estimated that 61 percent of result-blind studies have led to null results, in contrast to an estimated 5 to 20 percent in earlier research." This sentence appears twice in the article. The same information is also presented under the Remedies section. I feel that we can just keep the latter.

6. "First open empirical study." I saw nothing in the source suggesting that this is the first of such studies.

7. "Replications appear particularly difficult when research trials are pre-registered and conducted by research groups not highly invested in the theory under questioning." I do not see how this sentence fits in the paragraph. The first sentence of the paragraph seems to indicate that replication is an issue in psychology partly because some of the theories tested may not be tenable, whereas pre-registration and researcher investment seem more related to the issues of QRPs.

Yeah I don't know what to think about this paragraph. It seems like a weird attack on a specific theory, but I'll try to hunt down whatever that paragraph may have come from.
Note: I said this at some point on 1 Feb. --Xurizuri (talk) 03:28, 2 February 2022 (UTC)[reply]
I've now removed most of that paragraph because I cannot figure out where it could've come from and as I said, it seems like a weird attack. --Xurizuri (talk) 03:28, 2 February 2022 (UTC)[reply]

8. "p-hacking." It may be helpful to give a brief definition of what p-hacking is.

checkY This isn't my nomination, but I've made an attempt to address this. -- Xurizuri (talk) 09:18, 9 January 2022 (UTC) // edited to add a tick 03:28, 2 February 2022 (UTC)[reply]
Given that I rearranged a bunch of content, I moved the definition to what is now the first time the term is used. --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]

9. What exactly is BWAS? Tommyren (talk) 14:27, 15 May 2022 (UTC)[reply]

In Medicine

1. The part on commonalities of unreplicable papers could go under the "Causes" section.

 Done --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]

2. "A survey on cancer researchers found that half of them had been unable to reproduce a published result." For reasons described above, this is to be expected and does not necessarily show that a replication crisis exists.

I've responded to the above point. --Xurizuri (talk) 10:27, 1 February 2022 (UTC)[reply]

3. "Flaws." What flaws is the word referring to?

checkY I've attempted to address it, but I don't have any way to get access to the full article so I may be missing details. --Xurizuri (talk) 10:27, 1 February 2022 (UTC) // edited to add a tick 03:28, 2 February 2022 (UTC)[reply]

4. What is the purpose of the block quote? — Preceding unsigned comment added by Tommyren (talkcontribs) 19:51, 31 January 2022 (UTC)[reply]

checkY I truly can't remember why I did that, it does seem bizarre. I've integrated it back into the text. --Xurizuri (talk) 10:27, 1 February 2022 (UTC) // edited to add a tick 03:28, 2 February 2022 (UTC)[reply]

5. Does cancer research merit its own tiny little section?

checkY I've removed the subheading. The main "In medicine" section also discussed cancer research already anyway. --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]

In Marketing

1. The part on globalization to me seems to be a "Cause" of the crisis and does not belong in the "Scope" section.

I'm not sold on it being a cause. It seems more to me like a reason why it's really important to attempt to replicate findings in marketing, rather than a reason why results don't replicate. --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]

In Economics

1. The part about the fragility of econometrics to me seems to be a "Cause" of the crisis and does not belong in the "Scope" section.

 Done --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]

Across Fields

1. As explained by others in this talk page (see comments by Lavateraguy, also see section on "Failure to reproduce figures in 'Outline' section misleading"), the fact that many scholars have encountered unreplicable studies is in fact expected and not necessarily problematic. I personally do not see it as necessary to include the first sentence of this section.

I may try a few different configurations of the "prevalence" section. I think talking about encountering unreplicable studies is important - all of the replicability crisis, other than the QRPs and fraud, can and has been characterised as expected. Also, if reliable sources think it's important to talk about it, then so do we. And sources do talk about that survey. --Xurizuri (talk) 10:27, 1 February 2022 (UTC)[reply]

2. "Only a minority." Do we have a concrete percentage for this?

The article doesn't give one. --Xurizuri (talk) 13:36, 31 January 2022 (UTC)[reply]

3. "The authors also put forward possible explanations for this state of affairs." Would it be possible to elaborate on what these explanations are, especially in terms of why unreplicable studies are cited more?

 Done --Xurizuri (talk) 13:36, 31 January 2022 (UTC)[reply]
Thanks for the edit! However, I'm not quite sure I understand this sentence: "the trend is not affected by publication of failed reproductions, and only 12% of citations following this will mention the failed replication." What does "this" mean? Tommyren (talk) 14:39, 31 January 2022 (UTC)[reply]
checkY Good point, hopefully fixed now. --Xurizuri (talk) 10:27, 1 February 2022 (UTC) // edited to add a tick 03:28, 2 February 2022 (UTC)[reply]

Causes

1. "Generation of new data/publications at an unprecedented rate." According to the original source, it does not seem to be a "trigger" of a crisis but seems more like something that makes things worse.

2. "A success and a failure." I understand it to mean "a successful and a failed attempt at finding evidence in support of the alternative hypothesis." Is this correct? Maybe we can clarify this.

checkY I've made an attempt at clarifying - I settled on making a "Notes" section because I prefer to avoid too much jargon in the main text, especially if there's any possible way to explain the implications of something without having to go into the details of it. --Xurizuri (talk) 10:27, 1 February 2022 (UTC) // edited to add a tick 03:28, 2 February 2022 (UTC)[reply]

Historical and sociological roots

1. What is scientometrics?

 Done --Xurizuri (talk) 14:31, 2 February 2022 (UTC)[reply]

2. I would suggest changing the title of this section to just "Historical Roots," and we should move arguments by Mirowski, "a group of STS scholars," and Smith to before this section together with other recently published sources, because these works were published rather recently.

Someone made it part of the "Causes" section, which I agree with. I changed the heading to "Historical and sociological roots" instead, because the section is discussing the sociology of how the current situation arose. Under this name, none of the content should need to move. --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]
It was you that did it! Cheers. --Xurizuri (talk) 10:27, 1 February 2022 (UTC)[reply]

3. "Attention." What exactly is this word referring to?

I won't lie, I am avoiding this section a little, because sociology is a bit out of my wheelhouse. I'll just have to sit down and power through it at some point. --Xurizuri (talk) 03:28, 2 February 2022 (UTC)[reply]

4. Should the five numbered points go under "Historical and sociological roots?"

I'm genuinely not sure if it fits under history/sociology. Regardless, I'd actually like to summarise the points, I doubt that it's due weight to be talking that much about their argument. --Xurizuri (talk) 10:27, 1 February 2022 (UTC)[reply]
I have summarised the points and created a "publish or perish" subsection. --Xurizuri (talk) 13:40, 2 February 2022 (UTC)[reply]

Publish or perish culture in academia

1. It would be nice if there could be a proper citation for Ravetz's book.

2. I have a general feeling that this section could be more clearly structured and written. While the opening paragraphs connect nicely to the more theoretical discussions in the last section, they do prevent readers from grasping straight away what the publish or perish culture really is.

Questionable research practices and fraud

1. "They consist of applying different methods of data screening, outlier rejection, subgroup selection, data transformations, models, concomitant variables, and alternative estimation and testing methods, and finally reporting the variety that produces the most significant result." This sentence is becoming really confusing. It also seems a little repetitive considering that the following sentence comes in the next paragraph: "Examples of QRPs include selective reporting or partial publication of data (reporting only some of the study conditions or collected dependent measures in a publication), optional stopping (choosing when to stop data collection, often based on statistical significance of tests), post-hoc storytelling (framing exploratory analyses as confirmatory analyses), and manipulation of outliers (either removing outliers or leaving outliers in a dataset to cause a statistical test to be significant)"

Absolutely agree. I'll need to go through and figure out firstly which terms are actually in the sources and secondly which are being used for the same concepts. I may need to use more footnotes, actually. --Xurizuri (talk) 10:27, 1 February 2022 (UTC)[reply]
checkY I've attempted to address this. --Xurizuri (talk) 03:28, 2 February 2022 (UTC)[reply]
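One of the QRPs quoted above, optional stopping, lends itself to a quick demonstration: testing accumulating null data after every batch and stopping at the first significant result inflates the false positive rate well above the nominal 5%. A minimal sketch (a z-test on simulated data with known variance; the batch size, maximum sample size, and trial count are arbitrary choices for illustration):

```python
import math
import random

def z_p(xs):
    # Two-sided p-value for "mean = 0" with known sd = 1 (a z-test).
    z = sum(xs) / math.sqrt(len(xs))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(peek, trials=1000, step=10, max_n=100):
    random.seed(1)
    hits = 0
    for _ in range(trials):
        xs = [random.gauss(0, 1) for _ in range(step)]
        if peek:
            # Optional stopping: test after every batch, stop when significant.
            while len(xs) <= max_n and z_p(xs) >= 0.05:
                xs.extend(random.gauss(0, 1) for _ in range(step))
            hits += len(xs) <= max_n  # stopping early means a "significant" result
        else:
            # Fixed design: collect all the data first, then test once.
            xs.extend(random.gauss(0, 1) for _ in range(max_n - step))
            hits += z_p(xs) < 0.05
    return hits / trials

print("fixed n = 100:", false_positive_rate(peek=False))  # near 0.05
print("peek every 10:", false_positive_rate(peek=True))   # much larger
```

With ten looks at the data, the realized false positive rate typically comes out at several times the nominal level, which is why optional stopping is counted as a QRP rather than a harmless efficiency.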

2. "However, most scholars acknowledge that fraud is, perhaps, the lesser contribution to replication crises." There is a "who" tag that needs addressing.

checkY I reworded it to suit an actual source. --Xurizuri (talk) 10:27, 1 February 2022 (UTC) // edited to add a tick 03:28, 2 February 2022 (UTC)[reply]

3. "Serious." I know the original author also used "serious," but a reader might wonder in what sense is fraud "serious." Is it the most morally reprehensible? Does it lead to the most wrong study results?

Good point. The author isn't actually explicit about what they mean. On reflection, that part of the sentence isn't really necessary, so I changed it. checkY --Xurizuri (talk) 03:28, 2 February 2022 (UTC)[reply]

4. "Positive and negative controls" Would it be clearer to just say control here? How important is it for readers to know the difference between positive and negative controls? If it is important, should we explain what the two terms mean?

5. What is confirmation bias?

 Done --Xurizuri (talk) 13:40, 2 February 2022 (UTC)[reply]
It would be nice if the term could be explained in-text. Tommyren (talk) 18:35, 5 June 2022 (UTC)[reply]

6. "Some examples of QRPs..." This sentence may suffer from overciting. Also, the jargon would preferably have in-text explanations.

7. "A 2012 survey of over 2,000 psychologists..." Given the critique of survey methods, I am leaning towards not including this source in this article at all. Tommyren (talk) 03:19, 7 June 2022 (UTC)[reply]

Statistical issues

1. "According to a 2018 survey of 200 meta-analyses, 'psychological research is, on average, afflicted with low statistical power'." Are we using British or American English in this article? Sometimes I see periods/commas within quotation marks, and sometimes I see them outside quotation marks.

I've seen a few uses of center and behavior outside of proper nouns, so I believe we're using US English. Regardless, MOS says to use "logical quotation" which essentially ends up with a mix of punctuation inside and outside the quotes. I'll change any that aren't currently logical, but I haven't noticed any so far. --Xurizuri (talk) 03:19, 31 January 2022 (UTC)[reply]
Partially as a result of my own writing (I use Australian English, which is mostly the same as British English), the article needs to be checked for spelling variants. Mostly around using s instead of z. --Xurizuri (talk) 14:31, 2 February 2022 (UTC)[reply]

Response in academia

1. I suspect the methodological terrorism incident is not very representative of the entire scholarly community. Can we add more information to this section?

The best sources for this would probably be the same ones as I listed for the Background section. --Xurizuri (talk) 14:31, 2 February 2022 (UTC)[reply]

Pharmaceutical Industry

1. "Amgen Oncology's cancer researchers were only able to replicate 11 percent of 53 innovative studies they selected to pursue over a 10-year period; a 2011 analysis by researchers with pharmaceutical company Bayer found that the company's in-house findings agreed with the original results only a quarter of the time, at the most. The analysis also revealed that, when Bayer scientists were able to reproduce a result in a direct replication experiment, it tended to translate well into clinical applications; meaning that reproducibility is a useful marker of clinical potential." Maybe this information should go under the "In Medicine" section?

Will do. I also want to shorten the amount of pharm industry content, I think that much info on it is undue weight. --Xurizuri (talk) 03:28, 2 February 2022 (UTC)[reply]
 Done as said. --Xurizuri (talk) 14:31, 2 February 2022 (UTC)[reply]

Metascience

1. I wonder if the second paragraph is needed. There seems to be no source for it, and most of the information seems to occur elsewhere. What are the CONSORT and EQUATOR guidelines, anyway?

Pre-registration of studies

1. From what is currently in the article, it's a little hard to tell the difference between result-blind peer review and pre-registration.

Addressing misinterpretation of p-values

1. What do the Bayesian methods refer to? Bayes is mentioned three times in this section. Are they referring to the same methods?

2. What logical problems is "The Problem with p-values" referring to?

Open Science

1. "Unless software used in research is open source, reproducing results with different software and hardware configurations is impossible." This doesn't sound right.

2. Do we need the CERN example?

Notes

1. "The null hypothesis (the hypothesis that the results are not reflecting a true pattern) is rejected when the probability of the null hypothesis being true is less than 5%" This is not true. The null hypothesis is either 100% true or 100% false.
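The point generalizes: the fraction of significant results that are false discoveries depends on the base rate of true effects and on power, not on the p-value threshold alone. A short illustration of that base-rate logic, in the spirit of Ioannidis's 2005 argument (the 80% power figure and the base rates are assumed values for the sake of the example):

```python
# A p-value threshold of 0.05 does not mean "5% chance the null is true".
# The share of significant results that are false discoveries depends on
# how often the tested hypotheses are true, and on statistical power.
alpha = 0.05   # significance threshold
power = 0.80   # assumed probability of detecting a real effect

# Prints 6%, 36%, and 86% for the three base rates.
for base_rate in (0.5, 0.1, 0.01):   # fraction of tested hypotheses that are true
    true_pos = power * base_rate
    false_pos = alpha * (1 - base_rate)
    fdr = false_pos / (true_pos + false_pos)
    print(f"base rate {base_rate:>4}: {fdr:.0%} of significant results are false positives")
```

So under a low base rate of true hypotheses, most significant results can be false positives even though every individual test is run correctly at the 5% level.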

Tommyren (talk) 00:19, 31 December 2021 (UTC)[reply]

Status query

Tommyren, Marisauna, where does this nomination stand? As far as I can tell, Marisauna has never edited the article at all, much less to address any of the issues raised in this nomination. Xurizuri has made a few edits to address one or more issues above, and Tommyren has made various edits over the past four weeks since the article was nominated. If there isn't anyone who is able to address the issues raised last month in a timely manner, then the nomination should be closed. Thank you. BlueMoonset (talk) 01:35, 29 January 2022 (UTC)[reply]

If Marisauna has moved on, I'm happy for the nomination responsibility to be transferred to me - assuming that's feasible with the technical components of how GANs are monitored. --Xurizuri (talk) 02:11, 29 January 2022 (UTC)[reply]
Go ahead, I've moved on. Marisauna (talk) 05:04, 29 January 2022 (UTC)[reply]
Thank you for checking in. As things stand, it seems very likely that we will have to fail this article in this round of nominations. Two issues are particularly outstanding so far. 1) There's a lot of content saying that a large percentage of researchers have encountered unreplicable research. As discussed above and elsewhere on the talk page, this information is actually not necessarily indicative of a problem. I'm not saying that this information should not appear on this article, but it should be accompanied with adequate explanation so that readers will have a clear idea of what the replication crisis really is. 2) There are a few points of information without proper citation. Xurizuri has put up some citation needed tags, and I believe I also referred to some other points of uncited information above.
I apologize for taking so long to review this article. However, WP:GAN/I#R1 does require me to not only read through the whole article but also understand all sources. Maybe it's just because I'm a new reviewer, but it seems that following WP:GAN/I#R1 to the letter would take quite a bit of extra time. Tommyren (talk) 18:35, 29 January 2022 (UTC)[reply]
For your point about the large percentage of researchers encountering unreplicable research, I'm currently planning to add some explanation of the scientific process and the role of replication to the "Background" section I created. I also don't believe it will be hugely difficult to find the citations on the statements marked/mentioned. --Xurizuri (talk) 10:31, 31 January 2022 (UTC)[reply]

Xurizuri's concerns

I'm noticing that a fair amount of the article is based on blogs and opinion pieces. Those aren't inherently not-RS, because they're written generally by experts and the statements are being attributed (now). But I am concerned about the amount of the article that is based on them, especially when the article is also under-using reviews and meta-analyses published in reliable journals (Begley & Ioannidis, Shrout & Rodgers, and Stanley, Carter & Doucouliagos spring to mind). I also have assorted other concerns, some of which I am planning to address, including:

  • the lack of non-technical explanation of significance
  • a robust explanation of why people think the replication crisis is a crisis
  • the lead doesn't have a proper summary of the causes, consequences or remedies sections
  • some of the history and sociology content is nigh-incomprehensible to an untrained audience (i.e. me).
  • the de Solla Price and Ravetz paragraphs aren't actually citing their sources (which I believe I have found, [1] and [2])

And then also the ones you've mentioned. I could address all of these given a few days, but this is starting to give me the vibes of doing a uni assignment at the last minute. And I'm not currently a uni student for a reason. Some of this may be above the needs of a good article, I'm really not sure, but that's why I'm not reviewing :) I'm going to add comments under yours above explaining what my plans are for it. That way, you can make a proper decision about how much work there is left, and this can be a complete record of recommendations. --Xurizuri (talk) 14:15, 2 February 2022 (UTC)[reply]

I forgot to mention another one: the "Emphasize triangulation, not just replication" section really should be prose, not a quote. --Xurizuri (talk) 14:31, 2 February 2022 (UTC)[reply]
First, THANK YOU SO MUCH FOR YOUR HARD WORK ON THIS ARTICLE! I think the article has made a lot of improvements in the last few days.
I do think that for the article to reach good article status, one needs to explain what statistical significance is, make the lead section a proper summary of the article, render the article readable to an average undergraduate student, sort out why the crisis is a crisis, and make sure all information is properly sourced. Personally I do not expect anyone to do all this in just a few days.
What's more, I'm not even sure if my current review is even halfway done, as I have yet to start a deep review of all of the sources used in this article. I expect more issues to pop up.
Therefore, you by no means have to feel that you are obliged to single-handedly turn this article into a GA. I, however, want to at least follow through with WP:GAN/I#R1, reading through the article and checking all sources. This could take some more time... Tommyren (talk) 16:59, 2 February 2022 (UTC)[reply]
Hello! I was thinking to address some of the concerns you have listed. I would like to start by writing a couple of paragraphs on the causes, remedies and consequences of the crisis. I will do that by referencing the corresponding subsections below in order to ensure adequate coherence. But I have a couple of questions before starting.
First off, it is not perfectly clear if the "causes" section refers to the causes of the replication crisis, or to the causes of low rates of replicability in scientific research. The first is a historical event; the second is what I would call a "metascientific" fact. To show this, notice how the first subsection talks about sociological factors that might generally be responsible for the occurrence of a crisis in science, while other sections talk about factors that are responsible for low replication rates (e.g. low power, low base rate of true hypotheses). Therefore, I'm not a hundred percent sure how to proceed. I was thinking of primarily addressing the causes of the apparently low replication rates in scientific research.
Secondly, I believe that some of the explanations given of why some factors are relevant causes of the replication crisis are not clear. I'm referring in particular to the content of the section "Historical and sociological roots". From what I can see, the works reported in this section provide explanations of why the quality check system of scientific research might have deteriorated over the last decades. Although I can see how this can be connected with the idea of scientific practice undergoing a period of "crisis", there is no explicit mention of how this is connected to the replication crisis. This puts the burden on the reader of trying to imagine why the things discussed in this section are relevant to the replication crisis. One possible solution to this lack of explicit connection might be to re-frame the subsection as concerning "scientific quality" in general. The connection to why this is relevant for the replication crisis could be explained in the first paragraph, and then the various works could be discussed at length.
Further, I was wondering if the "publish-or-perish culture" section should be listed as a sociological factor. I understand that in light of its relevance it might deserve a section on its own. Yet, I think it is reasonable to consider this particular cause as sociological. The publish-or-perish culture in academia is concerned with systemic norms about the incentives to do scientific work. Does that not count as a sociological aspect of scientific practice? Moreover, I noticed that the section talks about publication practices in general, not just the issue of the publish-or-perish academic culture. Would it make sense to rename it?
Lastly, I was hoping to get some feedback on whether the way I'm planning to structure this little paragraph in the lead would be okay. I was thinking to list all possible causes of the replication crisis by mentioning the general explanatory dimensions (e.g. methodology, statistics, sociology, theory) and for each include one or two examples in brackets.
A provisional draft of how the causes section might look like would be:
"The possible causes of the low rates of replicability in scientific research are multifaceted. Generally, the absence of replicable results in scientific fields can be attributed to methodological issues in scientific research (e.g. questionable research practices), deficient statistical standards (e.g. underpowered studies, misuse of null-hypothesis significance testing), sociological factors (e.g. publication bias, publish-or-perish culture), and theoretical shortcomings (e.g. base rate of hypothesis accuracy)."
This is just an example, it is not necessarily complete and coherent with the content of the entry. I was still wondering if the general structure is okay.
To whoever might answer this long message, thanks in advance for the help. ProgressiveProblemshift (talk) 11:44, 3 July 2023 (UTC)[reply]

Suggestion to close

Tommyren, since you established nearly two months ago that the nomination will not pass without significant additional work, it's time to fail the nomination. You are certainly welcome to continue working on the article and posting here as to issues that will need to be addressed prior to any subsequent nomination, but the end result is clear. Thank you for your detailed work. BlueMoonset (talk) 17:35, 27 March 2022 (UTC)[reply]