To Be or Not to Be…Published


“Publish or perish” is the familiar exhortation to academics seeking tenure, but maybe “publish rubbish or perish” is more apt. A good number of the articles published in humanities journals are more like vanity projects than substantive contributions to a discipline. According to a recent article in The Economist, rubbish now is piling up in scientific research as well:

Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties….

Science still commands enormous—if sometimes bemused—respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong. And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.

This trend of publishing for publishing’s sake is deeply disconcerting. Scientific progress depends on researchers questioning results and debating the less flashy issues, irrespective of whether their endeavors are likely to land them a spot in some journal. Read the whole thing. 

[Library books photo courtesy of Shutterstock]

  • Jacksonian_Libertarian

    This is really ugly. If you add in the Global Warming hoax, it is clear that some kind of policing is needed. Credit agencies use credit history to score consumers’ creditworthiness; something similar is needed for journals and scientists’ reports in order to filter out the liars and frauds.

    • Andrew Allison

      Peer review with consequences, rather than “I’ll scratch your back in return for, or in anticipation of, reciprocity,” would be a good start. Name and shame those who endorse or print garbage science. Most of the “science” news published daily online is, to a reasonably well-educated layman, arrant nonsense. Today’s example: the “explanation” of Earth’s magnetic field, based on a solid core which was, just last week, shown to be liquid.

  • Andrew Allison

    “This trend of publishing for publishing’s sake is deeply disconcerting” misses the point. Publish or perish may or may not be appropriate; what’s not appropriate is to publish, and even worse to endorse, rubbish.

  • wigwag

    There is plenty of rubbish published in scientific journals, but “The Economist” should be well acquainted with rubbish: the article cited by Professor Mead in this post is the very definition of it.

    Yes, for a scientific finding to be verified it has to be duplicated, but the suggestion that a failure to duplicate a finding means that it should never have been published in the first place is beyond idiotic. The way that other scientists learn about a finding in the first place is by reading a publication documenting the supposed discovery in a scientific journal. Had the original article never been published, other scientists would never have known to even try to duplicate the experiment. Thus the point that “The Economist” was trying to make is an absurdity.

    But it goes beyond that. What are we supposed to glean from the fact that scientists at Amgen or Bayer were unable to replicate the results of several published studies? Just because scientists at those companies couldn’t replicate the findings doesn’t mean that other scientists couldn’t. It is entirely possible that the results from the original studies were accurate while Amgen’s or Bayer’s results were inaccurate.

    Many of the techniques used in scientific experiments are very complex; the fact that one group of scientists can’t replicate findings may be more the result of their lack of expertise in the relevant technique than anything else.

    Garbage in, garbage out, as the old saying goes. The Economist should know.

    • Matt B

      Peer-reviewed journals are supposed to publish real findings that add to scientific knowledge. They aren’t there to support speculation or results that can’t be replicated. And speaking of speculation, the idea that a majority of results can’t be replicated because the other scientists aren’t smart enough is laughable.

      What’s happening in science is the rise of careerism and self-promotion over simple diligence and professionalism. It’s happening in every field; unfortunately, science is no exception.

  • Anthony

    “A simple idea underpins science: trust and verify. Results should always be subject to challenge from experiment.” The pressure to publish comes with the aspiration; however, quality control from the outset may need revision: tighter self-policing, peer reviewing, protocol standards, statistical models, committees, etc. But WigWag’s implied point, that a good research paper/experiment can be useful even if it fails, remains both valid and sound.

© The American Interest LLC 2005-2016 About Us Masthead Submissions Advertise Customer Service