Saturday, 22 September 2012

Most brain imaging papers fail to provide enough methodological detail to allow replication

Amidst recent fraud scandals in social psychology and other sciences, leading academics are calling for a greater emphasis to be placed on the replicability of research. "Replication is our best friend because it keeps us honest," wrote the psychologists Chris Chambers and Petroc Sumner recently.

For replication to be possible, scientists need to provide sufficient methodological detail in their papers for other labs to copy their procedures. Focusing specifically on fMRI-based brain imaging research (a field that's no stranger to controversy), University of Michigan psychology grad student Joshua Carp has reported a worrying observation - the vast majority of papers he sampled failed to provide enough methodological detail to allow other labs to replicate their work.

Carp searched the literature from 2007 to 2011 looking for open-access human studies that mentioned "fMRI" and "brain" in their abstracts. Of the 1392 papers he identified, Carp analysed a random sample of 241 brain imaging articles from 68 journals, including PLoS One, NeuroImage, PNAS, Cerebral Cortex and the Journal of Neuroscience. Where an article was accompanied by supplementary information published elsewhere, Carp took that material into account too.

There was huge variability in the methodological detail reported in different studies, and often the amount of detail was woeful, as Carp explains:
"Over one third of studies did not describe the number of trials, trial duration, and the range and distribution of inter-trial intervals. Fewer than half reported the number of subjects rejected from analysis; the reasons for rejection; how or whether subjects were compensated for participation; and the resolution, coverage, and slice order of functional brain images."
Other crucial details that were often omitted included whether and how studies corrected for slice acquisition timing, co-registered functional images to high-resolution structural scans, and modelled temporal auto-correlations. In all, Carp looked at 179 methodological decisions. To non-specialists, some of these will sound like highly technical detail, but brain imagers know that varying these parameters can make a major difference to the results obtained.
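To give a flavour of why these choices matter, here's a small, purely illustrative Python sketch (synthetic data and an arbitrary threshold; this is not Carp's analysis) showing how a single preprocessing decision, the width of the spatial smoothing kernel, changes how many voxels in the same statistical map cross a fixed cutoff:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

# Purely illustrative: a synthetic 32x32x32 "statistical map" of noise
# plus one weak localised signal. The point is only that one
# preprocessing choice (the smoothing kernel) changes what survives
# a fixed voxelwise threshold.
rng = np.random.default_rng(0)
volume = rng.standard_normal((32, 32, 32))
volume[14:18, 14:18, 14:18] += 1.0  # weak 4-voxel-wide "activation"

threshold = norm.ppf(0.999)  # arbitrary cutoff, z of about 3.09

for sigma in (0.0, 1.0, 2.0, 3.0):  # hypothetical kernel widths (voxels)
    smoothed = gaussian_filter(volume, sigma=sigma) if sigma else volume
    z = (smoothed - smoothed.mean()) / smoothed.std()  # re-standardise
    print(f"sigma={sigma}: {(z > threshold).sum()} voxels above threshold")
```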

One factor that non-specialists will appreciate relates to corrections made for problematic head-movements in the scanner. Only 21.6 per cent of analysed studies described the criteria for rejecting data based on head movements. Another factor that non-specialists can easily relate to is the need to correct for multiple comparisons. Of the 59 per cent of studies that reported using a formal correction technique, nearly one third failed to reveal what that technique was.
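For a feel of why the choice of technique matters, here's a hedged Python sketch comparing two standard corrections, Bonferroni and Benjamini-Hochberg false discovery rate, on simulated voxel p-values (the numbers are invented, not taken from Carp's paper):

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch: 50,000 simulated "voxels", all null except the
# first 500. Different correction techniques pass very different
# numbers of voxels, so reporting which one was used really matters.
rng = np.random.default_rng(1)
n_voxels, n_signal, alpha = 50_000, 500, 0.05
z = rng.standard_normal(n_voxels)
z[:n_signal] += 4.0                      # true activations
p = norm.sf(z)                           # one-sided p-values

# No correction: expect ~5% of the 49,500 null voxels to pass by chance.
print("uncorrected: ", (p < alpha).sum())

# Bonferroni: controls the chance of ANY false positive; very strict.
print("Bonferroni:  ", (p < alpha / n_voxels).sum())

# Benjamini-Hochberg: controls the expected false discovery rate.
ranked = np.sort(p)
passes = ranked <= alpha * np.arange(1, n_voxels + 1) / n_voxels
print("BH (FDR):    ", passes.nonzero()[0].max() + 1 if passes.any() else 0)
```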

"The widespread omission of these parameters from research reports, documented here, poses a serious challenge to researchers who seek to replicate and build on published studies," Carp said.

As well as looking at the amount of methodological detail shared by brain imagers, Carp was also interested in the variety of techniques used. This is important because the more analytical techniques and parameters available for tweaking, the more risk there is of researchers trying different approaches until they hit on a significant result.

Carp found 207 combinations of analytical techniques (including 16 unique data analysis software packages) - that's nearly as many different methodological approaches as studies. Although there's no evidence that brain imagers are indulging in selective reporting, the abundance of analytical techniques and parameters is worrying. "If some methods yield more favourable results than others," Carp said, "investigators may choose to report only the pipelines that yield favourable results, a practice known as selective analysis reporting."
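A back-of-the-envelope sketch makes the worry concrete. With a handful of invented analysis choices (not Carp's actual taxonomy of 179 decision points), the options multiply into a hundred-plus pipelines, and if each pipeline were an independent test of a true null, the odds of at least one "significant" result would approach certainty:

```python
from itertools import product

# Hypothetical analysis choices, invented purely for illustration.
choices = {
    "slice_timing": ["on", "off"],
    "motion_correction": ["6-param", "12-param", "none"],
    "smoothing_fwhm_mm": [4, 6, 8],
    "autocorrelation_model": ["AR(1)", "none"],
    "multiple_comparisons": ["FWE", "FDR", "uncorrected"],
}
pipelines = list(product(*choices.values()))
print(len(pipelines), "distinct pipelines")  # 2 * 3 * 3 * 2 * 3 = 108

# If each pipeline were an independent test of a true null at alpha = 0.05,
# the chance that at least one comes out "significant" would be:
alpha = 0.05
print(f"P(at least one false positive) = {1 - (1 - alpha) ** len(pipelines):.4f}")
# Real pipelines are highly correlated, so this overstates the risk,
# but the direction of the problem is clear.
```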

The field of medical research has adopted standardised guidelines for reporting randomised clinical trials. Carp advocates the adoption of similar standardised reporting rules for fMRI-based brain imaging research. Relevant guidelines were proposed by Russell Poldrack and colleagues in 2008, although these may now need updating.
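To imagine what such guidelines could look like in practice, here is a sketch of a machine-readable methods record that a journal might require alongside a paper. The field names are invented placeholders loosely inspired by the items Carp found under-reported, not any published standard:

```python
import json
from dataclasses import dataclass, asdict

# A hypothetical structured methods record; field names are invented
# placeholders, not drawn from any published reporting guideline.
@dataclass
class FMRIMethodsRecord:
    n_subjects_analysed: int
    n_subjects_rejected: int
    rejection_criteria: str              # e.g. head-motion threshold
    n_trials_per_condition: int
    inter_trial_interval: str            # range and distribution
    slice_order: str
    slice_timing_corrected: bool
    temporal_autocorrelation_model: str
    multiple_comparisons_correction: str
    analysis_software: str

record = FMRIMethodsRecord(
    n_subjects_analysed=24, n_subjects_rejected=3,
    rejection_criteria="head motion > 3 mm",
    n_trials_per_condition=40,
    inter_trial_interval="2-6 s, uniform",
    slice_order="interleaved ascending",
    slice_timing_corrected=True,
    temporal_autocorrelation_model="AR(1)",
    multiple_comparisons_correction="cluster-level FWE, p < .05",
    analysis_software="SPM8",
)
print(json.dumps(asdict(record), indent=2))
```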

Carp said the reporting practices he uncovered were unlikely to reflect malice or dishonesty. He thinks researchers are merely following the norms in the field. "Unfortunately," he said, "these norms do not encourage researchers to provide enough methodological detail for the independent replication of their findings."


Carp, J. (2012). The secret lives of experiments: Methods reporting in the fMRI literature. NeuroImage, 63(1), 289-300. PMID: 22796459

--Further reading--
Psychologist magazine opinion special on replication.
An uncanny number of psychology findings manage to scrape into statistical significance.
Questionable research practices are rife in psychology, survey finds.

Post written by Christian Jarrett for the BPS Research Digest.
