Scientists Make Mistakes. I Made a Big One.

A researcher learns the right thing to do when the wrong thing happens

Julia Strand
Published in Elemental · Mar 24, 2020


Image: Jose A. Bernat Bacete/Getty Images

In 2018, I published a paper that reported the most interesting finding of my career. A year later, while trying to figure out why I couldn’t replicate the effect, I discovered a massive error in the original experiment. The central finding was the result of a software glitch and was completely untrue. I had published a paper with invalid data and false conclusions.

This research was about the cognitive effort people use while listening to speech — think of that feeling of “squinting your ears” while trying to understand someone in a noisy bar. The 2018 paper showed a clever way to dramatically reduce cognitive effort: present the speech alongside a modulating circle that grew larger as the speech got louder. Participants were faster to respond when they could see the circle than in a control condition in which they couldn’t.

The data set was gorgeous — every single one of the 96 participants showed the effect. When publishing the study, my co-authors and I employed many open science practices: The analyses were preregistered, and we publicly shared our materials, data, and code on the Open Science Framework. The paper got glowing reviews and was published in Psychonomic Bulletin & Review. We replicated the effect at another university and felt very pleased with ourselves.

We planned follow-up studies, started designing an app to generate the modulating circle for use in clinical settings, and I wrote and was awarded a National Institutes of Health grant (my first!) to fund the work.

Several months later, we ran a follow-up study to replicate and extend the effect and were quite surprised that, under very similar conditions, the finding did not replicate. In fact, the circle slowed people down. I considered everything that might account for the difference between the studies: code, stimulus quality, computer operating system, stimulus presentation software version, you name it. The change was massive enough that I was confident it wasn’t just a fluke: You don’t go from 100% of participants showing an effect to 0% without something being systematically different.


Julia Strand

Julia is an Assistant Professor of Psychology at Carleton College in Northfield, MN. She studies speech perception and spoken word recognition. @juliafstrand