Brain scans are prone to false positives, study says

Greg Miller, Science, 15 Jul 2016: 353(6296), 208-209

http://science.sciencemag.org/content/353/6296/208

"Anders Eklund, an electrical engineer at Linköping University in Sweden, and colleagues examined statistical methods in three software packages commonly used to analyze fMRI data. They found that certain common settings in the software gave rise to a false positive result up to 70% of the time. In the context of a typical fMRI experiment, that could lead researchers to wrongly conclude that activity in a certain area of the brain plays a role in a cognitive function such as perception or memory."

"Eklund's co-author Thomas Nichols, a statistician at the University of Warwick in Coventry, U.K., estimated that about 3500 studies used the problematic software settings ..."

"In the new study, the researchers drew on several public databases that contain fMRI data collected while subjects were resting in the scanner, not engaged in any particular task. The researchers analyzed those data as if they were running a typical fMRI experiment, looking for regions of brain activation related to a task. The team simulated nearly 3 million fMRI experiments. Based on the statistical threshold they'd set, they expected to get a false positive result (that is, a positive hit for task-related activity even though there was no task) 5% of the time. Instead, depending on the software and the settings, up to 70% of the results were positive. They also identified a bug in the AFNI software package that had existed for 15 years and may have contributed to false positives. The team alerted the developers last year, and the bug has been fixed."

"Much of the software used in fMRI research hasn't been validated with actual data, as Eklund and colleagues have done, says Russell Poldrack, a neuroscientist at Stanford University in Palo Alto, California. “You'd hope that when we build a whole [scientific] field that the fundamental tools would have been validated with real data, not just theory and simulation,” he says. “It took 20 years to happen.”"

"It will largely be up to the original labs to reanalyze the work—if they choose to. “We would hope that researchers would be interested to know whether their previous claims stand, but realistically there is very little incentive (and lots of disincentives) to show that one's previous results are wrong,” Poldrack says."