The value of publishing negative data
Timothy Fessenden (Credit: Lori Chertoff)
Scientific journals love newsworthy results. Editors want to publish studies with novel data that scientists will eagerly read and cite in their own work.
Because of this desire for novelty, studies that share null or negative results—disproving a hypothesis rather than confirming a novel finding—can be very hard to publish, and scientists often lack motivation to even write up such results. This dynamic has led to an overall publication bias against negative results across scientific journals.
Yet there are many positives to publishing the negative, says Tim Fessenden, executive editor of Life Science Alliance, an open-access, peer-reviewed journal from Rockefeller University Press. Here, Fessenden shares his insider perspective on why and how the scientific community should actively encourage the publication of negative results.
First, what exactly are negative results?
There is not actually a formal definition of negative data—which is troublesome, in a sense—but I believe most people would say a negative result occurs when a hypothesis is disproven. For example, I hypothesize that a gene causes X; I test it; the gene does not cause X.
Negative results can also be viewed from an emotional dimension. Scientific research is resource-intensive—involving hours, weeks and months of work—and it is genuinely disappointing when experimental results disprove a working theory. Perhaps a scientist wouldn’t be excited to share it with their colleagues, even though these results often have value.
Describe that value. Why do negative data, or null results, matter?
Bottom line: If it’s sound science—if the conclusions are supported by the data—then it should be published in some form. In fact, Springer Nature did a great survey of over 11,000 researchers, asking about their attitudes toward, and experiences with, sharing null results. Some of their findings surprised me.
First, 98 percent of researchers believe that negative results are valuable. Scientists cited benefits such as helping identify issues with methodology, preventing unnecessary duplication of research—in other words, avoiding waste—and inspiring new hypotheses. That’s exciting and surprising, because while we often think of negative data as disappointing data, one may also assume it’s boring data. And it’s not.
Negative data makes a valid observation of how the world works, and so it is valuable. Scientists understand this; it’s one reason you can find many examples of negative data scattered inside a manuscript whose headline is a positive result. I recently had a conversation with a researcher who said his lab members put negative results in the supplementary section of publications as a unique way to maintain those results for the lab’s records. Otherwise, they’re likely to lose the data and end up repeating the work later.
If researchers are literally slipping negative data into papers anyway, then why aren’t more negative data routinely published?
Journals have to worry about their impact factor—a measure of a journal’s influence, based on the average number of citations the papers in that journal accrue—and they often assume readers are not interested in negative results. Likewise, authors assume journals don’t want negative results. It’s a vicious cycle that’s hard to break.
In your experience as an editor, how do reviewers react to negative data during the peer review process?
Overall, reviewers seem appreciative of negative results. They often remark on the value of such findings. But sometimes you see two reviewers of the same paper drawing different conclusions. For example—and I’ll keep this anonymized—for a recent paper with negative results, one reviewer remarked, “Overall, this study is important as it provides more information on an orphan receptor that is of interest as a target for addiction.” A second reviewer commented, “The findings would be of limited impact as they are descriptive and do not teach us much…”
These two reviewers are reading the same paper and understanding the same results, but they reached different conclusions. As a policy at Life Science Alliance, we consider descriptive results and null results, and this paper meets those criteria. So, as an editor, I have no trouble overruling the second reviewer.
How can editors destigmatize, or even encourage, the publication of negative data?
In the Springer Nature survey, the top reasons people gave for not pursuing publication of negative results were that those results were unlikely to be accepted for publication, and that they didn’t know which journals they could submit to. A fair number of people didn’t think they could submit negative data to any journal.
Many journals will publish negative results, but might not advertise that they do. I think there’s a big role for journals to play by being proactive, by simply and clearly stating if they will consider negative results.
At Life Science Alliance, we explicitly welcome negative data and null results, and we say so in the journal’s online submission policy. This was baked into the founding of our journal; we think it’s a matter of scientific integrity, of reducing wasted research efforts, and of lowering barriers to publication. We want to be a place for authors who have trouble finding an outlet for research with a main negative result. We believe that work should be out there, and we want to reward their rigorous efforts.