Ruth Williams in The Scientist:
In a test of scientific reproducibility, multiple teams of neuroimaging experts from across the globe were asked to independently analyze and interpret the same functional magnetic resonance imaging dataset. The results of the test, published in Nature today (May 20), show that each team performed the analysis in a subtly different manner and that their conclusions varied as a result. While highlighting the cause of the irreproducibility—human methodological decisions—the paper also reveals ways to safeguard future studies against it.
“This is a landmark study that demonstrates clearly what many scientists suspected: the conclusions reached in neuroimaging analyses are highly susceptible to the choices that investigators make on how to analyze the data,” writes John Ioannidis, an epidemiologist at Stanford University, in an email to The Scientist. Ioannidis, a prominent advocate for improving scientific rigor and reproducibility, was not involved in the study (his own work has recently been criticized for poor methodology in a study on the seroprevalence of SARS-CoV-2 antibodies in Santa Clara County, California). Problems with reproducibility plague all areas of science and have been particularly highlighted in the fields of psychology and cancer through projects run in part by the Center for Open Science. Now, neuroimaging has come under the spotlight thanks to a collaborative project by neuroimaging experts around the world called the Neuroimaging Analysis Replication and Prediction Study (NARPS).
…“The lessons from this study are clear,” writes Brian Nosek, a psychologist at the University of Virginia and executive director of the Center for Open Science. To minimize irreproducibility, he says, “the details of analysis decisions and the underlying data must be transparently available to assess the credibility of research claims.” Researchers should also preregister their research plans and hypotheses, he adds, which could prevent SHARKing (secretly hypothesizing after the results are known).
More here.