Chocolate shown to cure toe cancer AGAIN!
Or, an article you’d never see. Sensational exploratory studies like these don’t get much attention outside of the daytime health talk show circuit and the filler content between the local weather and traffic report. The funding tends to dry up before a follow-up study can test the validity of the conclusion, so the first result is the one that gets sent to the press. Either way, people don’t care enough to follow up. Researchers have less and less confidence in the published literature in their own fields (according to a Nature survey conducted in 2016) because fewer and fewer studies even get a chance at replication. The reproducibility crisis has gutted the number of reliable works across a slew of fields. And these doubts aren’t confined to their respective spheres; untrustworthy data begets an untrusting public. When people don’t trust the science, we get climate change deniers and parents who won’t vaccinate their kids.
But wait, let’s back up. If the problem is that studies aren’t being replicated, why don’t researchers just start replicating their studies? Well, a lot of the scientists from that same Nature survey blame a pervasive pressure to publish exciting findings. Those Dr. Oz-style health tips? They come from a pop-science model that values a constant flow of questionable but intriguing results over legitimate, well-founded, relevant research. It rewards publishing as fast as possible in the most lucrative outlets possible. Researchers want their findings to get a lot of attention, and therefore a lot of money. The truth is, nobody cares about the second person to do something. There just isn’t an incentive to go over the same material more than necessary. The vast majority of researchers move on as soon as the results are published in order to stay in the public eye. This leads to half-baked hypotheses that make headline-worthy claims for people to talk about over brunch. The more significant and important-sounding names you get attached to the publication, the better. John Couchman, editor of the Journal of Histochemistry & Cytochemistry, recounts a time when he and several of his colleagues at the journal were approached to join editorial boards in fields where they had no expertise. That’s like asking an astrophysicist to date a sample from an archeological dig.
And even if someone wanted to go back and try to reproduce all those bogus-sounding studies, they probably wouldn’t be able to. A mix of p-hacking (massaging data until it crosses the “statistically significant” threshold), undocumented alterations to the experiment, and plain old flawed methodology makes results irreplicable, even allowing for experimental error. Some of this is motivated by the previously mentioned pressure to publish: the public likes seeing positive results, or things that lead directly to a solution to a problem, usually curing a disease. Another reason is that scientists, understandably, don’t want to flush months or years of research down the drain. Sometimes labs need the funding that comes with those sensational pop headlines. More attention equals more money.
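To see why p-hacking works so reliably, consider the basic statistical fact it exploits: when there is no real effect, p-values are uniformly distributed between 0 and 1, so measuring enough outcomes all but guarantees that at least one looks “significant.” Here is a minimal simulation sketch (the numbers of outcomes and studies are illustrative assumptions, not figures from any cited paper):

```python
import random

random.seed(0)

ALPHA = 0.05     # conventional significance threshold
OUTCOMES = 20    # outcomes measured per study
STUDIES = 5000   # simulated studies, all with NO true effect

lucky = 0
for _ in range(STUDIES):
    # Under the null hypothesis, each p-value is uniform on [0, 1].
    p_values = [random.random() for _ in range(OUTCOMES)]
    if min(p_values) < ALPHA:  # report only the "best" outcome
        lucky += 1

rate = lucky / STUDIES
# Mathematically, 1 - 0.95**20 ≈ 0.64 of null studies get a "finding".
print(f"{rate:.2f} of effect-free studies produced a significant result")
```

With 20 measured outcomes and a 5% threshold, roughly 64% of studies of a nonexistent effect can still report something “significant” if only the best-looking outcome is published.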
Funding: the underlying cause of everything presented so far. Money keeps the lights on, and since the 1970s the majority of it has come from private sponsors (according to a Project MUSE article). This can add another layer of bias, or be the original source of it in the first place. Corporate sponsors can push for certain results in the studies they pay for and keep the raw figures out of the public eye. Smaller labs have trouble getting funding compared to their established counterparts, and replication is even harder for them because it roughly doubles the resources, time, and money needed to conduct the experiments. So results are suppressed or swayed, and then never checked, because small independent labs can’t find funding for almost anything other than those pop-science procedures.
Altogether, these issues create a massive credibility deficit across a multitude of fields, one that is destroying the image of science in public opinion. It leads people to believe that scientists don’t know what they’re talking about, or that it’s all controlled by massive companies trying to sell their products. People need to be able to trust that science isn’t some huge hoax so that we, as a population, can keep learning how the world works. Beyond public perception, research also needs quality for its own sake. Current discoveries are built on the ones that came before; if the foundation is cracked, we can only build a crooked house. Mistakes will compound until there is no going back, and in the long run they may cost real human lives.
So, how do we fix this? D. B. Resnik suggests eliminating the financial incentives that motivate people to publish faulty research: if labs have a stable source of income, they will be less susceptible to dubious research methods. Government funding, directed by public vote, could also cut down on influence from private corporations. The scientists surveyed in the Nature article, on the other hand, say a better understanding of statistics would help the most. Couchman offers up his journal’s peer review process as a model for other publications to screen out junk research. And even before this whole process, in an article called “Why most published research findings are false,” John Ioannidis develops a mathematical formula to estimate how likely a published finding is to actually be true. The main tenets of his method are not to lean on statistical evidence by itself and not to separate a hypothesis from the rest of the literature in its field, because, again, science doesn’t exist in a vacuum and it’s all based on what has come before.
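Ioannidis’ core quantity is the positive predictive value (PPV) of a claimed finding: the chance that a “statistically significant” result reflects a real effect, given the prior odds R that the hypothesis is true, the false-positive rate α, and the statistical power 1 − β. A quick sketch of that formula (the parameter values below are illustrative assumptions, not figures from the paper):

```python
def ppv(R, alpha=0.05, power=0.80):
    """Positive predictive value of a claimed finding (Ioannidis 2005).

    R     -- prior odds that a tested hypothesis is true
    alpha -- false-positive rate (significance threshold)
    power -- probability of detecting a true effect (1 - beta)
    """
    return (power * R) / (power * R + alpha)

# In an exploratory field where maybe 1 in 10 tested hypotheses is true
# (R = 0.1), even a well-powered significant result is right only about
# 62% of the time:
print(f"PPV = {ppv(0.1):.2f}")  # 0.8*0.1 / (0.8*0.1 + 0.05) ≈ 0.62
```

Drop the power to 0.2, typical of small pilot studies, and the PPV falls below 30%, which is exactly the regime the chocolate headline lives in.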
Until some of the bigger issues are fixed, there are a few things we as individuals can do to help move the culture surrounding research along. Support small labs with ethical and honest practices. Don’t promote articles or reports that don’t disclose all of their data. Look into the sources of funding for reports that seem to promote a product or behavior that leads back to one company. If a report looks fishy, notify the publication. Get involved on public peer review sites to look over recent publications.
We all know chocolate can’t cure any kind of cancer, but we need to eliminate the incentives that cause people to publish these claims in the first place. The integrity of science as a whole depends on our ability to replicate, as closely as possible, the results of the studies we conduct. Fun, gossipy science facts may be entertaining brunch material now, but these kinds of misunderstandings can cost lives and waste billions of dollars every year. If needed, we should publicly fund small labs so we can get more reliable results. Individual labs need to start implementing their own measures to replicate their data and to record every change made to an experiment. Any “crisis” is a mess to deal with, but that shouldn’t deter us from trying to fix it anyway.
by: M. Daniel
Baker, Monya. “1,500 scientists lift the lid on reproducibility.” Nature News 533.7604 (2016): 452.
Begley, C. Glenn, and John P. A. Ioannidis. “Reproducibility in science.” Circulation Research 116.1 (2015): 116-126.
Couchman, John R. “Peer review and reproducibility: Crisis or time for course correction?” Journal of Histochemistry & Cytochemistry 62.1 (2014): 9-10.
Ioannidis, John PA. “Why most published research findings are false.” PLoS medicine 2.8 (2005): e124.
Resnik, D. B. “Financial Interests and Research Bias.” Perspectives on Science. 8.3 (2000): 255-285.