To Be or Not to Be…Published
Comments
  • Jacksonian_Libertarian

    This is really ugly; if you add in the Global Warming hoax, it is clear that some kind of policing is needed. Credit agencies use credit history to score consumers’ creditworthiness; something similar is needed for journals and scientists’ reports in order to filter out the liars and frauds.

    • Andrew Allison

      Peer review with consequences, rather than “I’ll scratch your back in return for or in anticipation of reciprocity,” would be a good start. Name and shame those who endorse or print garbage science. Most of the “science” news published daily online is, to a reasonably well-educated layman, arrant nonsense. Today’s example: the “explanation” of Earth’s magnetic field, based on a solid core which was, just last week, shown to be liquid.

  • Andrew Allison

    “This trend of publishing for publishing’s sake is deeply disconcerting” misses the point. Publish or perish may or may not be appropriate; what’s not appropriate is to publish, and, even worse, endorse, rubbish.

  • wigwag

    There is plenty of rubbish published in scientific journals, but “The Economist” should be well acquainted with rubbish; the article cited by Professor Mead in this post is the very definition of it.

    Yes, for a scientific finding to be verified it has to be replicated, but the suggestion that a failure to replicate a finding means it should never have been published in the first place is beyond idiotic. The way other scientists learn about a finding in the first place is by reading a publication documenting the supposed discovery in a scientific journal. Had the original article never been published, other scientists would never have known to even try to replicate the experiment. Thus the point that “The Economist” was trying to make is an absurdity.

    But it goes beyond that. What are we supposed to glean from the fact that scientists at Amgen or Bayer were unable to replicate the results of several published studies? Just because scientists at those companies couldn’t replicate the findings doesn’t mean that other scientists couldn’t. It is entirely possible that the results from the original studies were accurate while Amgen’s or Bayer’s results were inaccurate.

    Many of the techniques used in scientific experiments are very complex; the fact that one group of scientists can’t replicate findings may be more the result of their lack of expertise in the relevant technique than anything else.

    Garbage in, garbage out, as the old saying goes. The Economist should know.

    • Matt B

      Peer-reviewed journals are supposed to publish real findings that add to scientific knowledge. They aren’t there to support speculation or results that can’t be replicated. And speaking of speculation, the idea that a majority of results can’t be replicated because the other scientists aren’t smart enough is laughable.

      What’s happening in science is the rise of careerism and self-promotion over simple diligence and professionalism. It’s happening in every field; unfortunately, science is no exception.

  • Anthony

    “A simple idea underpins science: trust, but verify. Results should always be subject to challenge from experiment.” The pressure to publish comes with the aspiration; however, quality control from the outset may need revision – tighter self-policing, peer reviewing, protocol standards, statistical models, committees, etc. But WigWag’s implied point, that a good research paper/experiment can be useful even if it fails to replicate, remains both valid and sound.
