Scientists Make Mistakes. I Made a Big One.

A researcher learns the right thing to do when the wrong thing happens

Julia Strand
Elemental
6 min read · Mar 24, 2020

Image: Jose A. Bernat Bacete/Getty Images

In 2018, I published a paper that reported the most interesting finding of my career. A year later, while trying to figure out why I couldn’t replicate the effect, I discovered a massive error in the original experiment. The central finding was the result of a software glitch and was completely untrue. I had published a paper with invalid data and false conclusions.

This research was about the cognitive effort people use while listening to speech — think of that feeling of “squinting your ears” while trying to understand someone in a noisy bar. The 2018 paper showed a clever way to dramatically reduce cognitive effort: present the speech alongside a modulating circle that grew larger as the speech got louder. Participants were faster to respond when they could see the circle than in the control condition, when they couldn’t.

The data set was gorgeous — every single one of the 96 participants showed the effect. When publishing the study, my co-authors and I employed many open science practices: The analyses were preregistered, and we publicly shared our materials, data, and code on the Open Science Framework. The paper got glowing reviews and was published in Psychonomic Bulletin & Review. We replicated the effect at another university and felt very pleased with ourselves.

We planned follow-up studies, started designing an app to generate the modulating circle for use in clinical settings, and I wrote and was awarded a National Institutes of Health grant (my first!) to fund the work.

Several months later, we ran a follow-up study to replicate and extend the effect and were quite surprised that, under very similar conditions, the finding did not replicate. In fact, the circle slowed people down. I considered everything that might account for the difference between the studies: code, stimulus quality, computer operating system, stimulus presentation software version, you name it. The change was massive enough that I was confident it wasn’t just a fluke: You don’t go from 100% of participants showing an effect to 0% without something being systematically different.

Finally, I found the issue. In the original experiment, I had unintentionally programmed the timing clock to start before the stimuli were presented in the control condition — which is akin to starting a stopwatch before a runner gets to the line. This meant that the modulating circle didn’t make people faster, but rather that the timing mistake made the control condition look slower. The effect that we thought we had discovered was just a programming bug.
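For the programmers in the audience, here is a minimal, hypothetical sketch of this class of bug. This is Python with invented function names, not the actual experiment code; it just shows how starting the clock before stimulus onset in one condition inflates that condition’s apparent response times:

```python
# Hypothetical illustration of the timing bug described above; not the
# actual experiment code, and the function names are invented. If the
# reaction-time clock starts before the stimulus appears in one
# condition, that condition's response times include setup time.
import time

def run_trial(condition, prepare_stimulus, present_stimulus, wait_for_response):
    if condition == "control":
        start = time.monotonic()   # BUG: clock starts here...
        prepare_stimulus()         # ...but setup time elapses before the stimulus
        present_stimulus()
    else:  # modulating-circle condition
        prepare_stimulus()
        present_stimulus()
        start = time.monotonic()   # correct: clock starts at stimulus onset
    wait_for_response()
    # Control trials look slower even if true response times are identical.
    return time.monotonic() - start
```

Because every control trial carried that extra setup time, every participant appeared faster with the circle, even though the circle itself did nothing.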

When I identified the error, I was shocked. I felt physically ill. I had published something that was objectively, unquestionably wrong. I had celebrated this finding, presented it at conferences, published it, and received federal funding to keep studying it. And it was completely untrue. I felt deeply embarrassed to have made such a stupid mistake, disappointed that my finding was junk, guilty for wasting everyone’s time and polluting the literature, and worried that admitting the error and retracting the paper would jeopardize my job, my grant funding, and my professional reputation.

This had been my mistake, but it would also have consequences for my co-authors — a former student of mine and my post-doc mentor. The replication at another institution (which used the same flawed program) was the basis for my former student’s master’s project, and her defense was scheduled in two weeks. A student at another university had just proposed a thesis extending the work. My grant funding was based in part on these results. And I was currently under review for tenure.

When I found the mistake, I was home alone on my laptop — working late in the evening. While I sat in the dark (crying), I briefly considered what would happen if I never told anyone. The bug was hard for me to identify; maybe no one else would ever find it. I could just go on with other research and nobody would ever know.

Obviously, I decided not to go that route.

The list of what I had to do was pretty devastating: call my co-authors, tell my former student to cancel her master’s defense, write to the journal editor to initiate retraction, contact the National Institutes of Health program officer, alert the department chair and dean overseeing my tenure review, and tell my research students. I stayed up all night writing email drafts and, after a new flare-up of panic, checking every other program I’d ever run to see if I’d made the same mistake elsewhere. (I hadn’t.)

The next day was the worst day of my professional career. I spent all day emailing and calling to share the story of how I had screwed up. After doing so, part of me wanted to tell as few other people as possible. So why write about it now and share this with an even wider audience?

One reason is that I’ve never heard of a comparable situation. Part of the gut punch of finding this mistake was that I had no idea what would happen to me as a result of it, particularly because I was freshly grant-funded and pre-tenure.

I’ve heard of people finding mistakes early in the research process and having to rerun experiments. I knew about the scientists who have stepped up to nominate findings of their own that they have lost confidence in. I’ve heard of people who have had problems in their research exposed by others. But I’d never heard of anyone who found an error in their own published paper that invalidated the conclusions. It’s been reassuring to witness several prominent retractions recently, but when I found and reported this issue in October 2019, those had not yet become public. I had no model to follow.

The biggest reason I wanted to share this story is that the fallout wasn’t as bad as I expected. Everyone I talked to — literally everyone — said something along the lines of, “Yeah, it stinks, but it’s best that you found it yourself and you’re doing the right thing.” I didn’t lose my grant. I got tenure. The editor and publisher were understanding and ultimately opted not to retract the paper but instead to publish a revised version of the article, linked from the original, with the results section updated to reflect the true (opposite) results. After I had spent months coming to terms with the fact that the paper would be retracted, it wasn’t.

Finally, I wanted to write about my experience because even though this mistake didn’t ruin my career, the fear that it could have done so highlights some serious issues in scientific publishing.

Regardless of the nature of the error, the most common fate for papers that are wrong is the same label: “RETRACTED.” This can happen when authors self-correct honest mistakes or when researchers are found guilty of scientific misconduct, like deliberately faking data. Given that the majority of retractions happen for pretty damning reasons, it’s hard to ask people to self-nominate for that category. I expected that revealing my error would lead to a retraction, and that was one of the things that made it difficult to disclose.

Yet in reality, of course, mistakes happen. We should embrace systems designed to reduce mistakes, but some will sneak through. When they do, it is in the best interests of scientific progress that they come to light. However, for individual researchers who attach their professional worth to their work, there are many, many incentives not to reveal errors.

What are alternatives to outright retraction? Some journals have experimented with “retraction with replacement” that replaces original versions of articles with updated ones. This is similar to what Psychonomic Bulletin & Review did, by publishing a “related article” with notices in both versions that link to each other. This model is a great step toward encouraging authors to disclose their own errors (though I’ve encouraged the publisher to make the notice more prominent as it’s currently very easy to miss). Another option is implementing a distinct category like “withdrawn at the author’s request” or “self-retraction” for situations in which an author initiates or cooperates with an inquiry to distinguish those situations from instances of misconduct.

I like the idea of contributing to more progress, and less shame, around the issue of mistakes. I’m sharing my story to help normalize admitting errors. Although this process has been difficult, the consequences were much less dire than I’d feared. I know that changing a professional culture is hard, but one step toward building better science is publicly revealing our own errors and showing how we fix them.


Julia is an Assistant Professor of Psychology at Carleton College in Northfield, MN. She studies speech perception and spoken word recognition. @juliafstrand