“These companies have become so successful, Franco says, that for the first time in history, scientists and scholars worldwide are publishing more fraudulent and flawed studies than legitimate research—maybe ten times more. Approximately 10,000 bogus journals run rackets around the world, with thousands more under investigation, according to Cabell’s International, a publishing-services company. 'We’re publishing mainly noise now,' Franco laments. 'It’s nearly impossible to hear real signals, to discover real findings.'”
I know of no hard facts that substantiate these claims by Franco, which I suspect are exaggerated. Nor could I find any support in the article for the claim that great numbers of “bogus journals” are running “rackets.” All the article describes are scientific journals that publish scientific papers without doing much to exclude bad papers.
Let us imagine that you start an open-access scientific journal with a non-restrictive publication policy. You decide that anyone who writes up a scientific experiment can publish it in your journal. Are you guilty of running a “bogus journal” or a racket because you do not get scientists to peer-review the work submitted? I think not. You are simply someone who has started a journal with a publication policy that differs from social norms. Of course, if you claim that your journal is peer-reviewed but you do not actually engage in peer-review (by having scientists review submitted articles), that would be bogus, because it would be a misrepresentation.
Peer-review of scientific papers has always been a mixture of something good and something very bad. The good done by peer-review is that some bad papers get excluded from publication. But it is not at all true that peer-review is an effective quality control system. One reason is that peer-review does not involve reviewing the source data behind an experiment.
Imagine
you submit to a scientific journal a paper describing an experiment
involving animals. If your paper is being peer-reviewed, the reviewer
does not come over to your laboratory and ask to check the data you
used to write up your paper. The peer-reviewer does not ask to see
your daily log book or see photographs you took to document the
experiment. Instead, the peer-reviewer assumes the honesty of the
person writing the paper.
So
what types of things are excluded by peer-reviewers? Things like
this:
- Obvious logical errors or obvious procedural errors that can be detected by reading the paper.
- Obvious mathematical errors that can be found in the paper.
- Deviations from the belief customs of scientists. A paper may be rejected by peer-reviewers mainly because it presents evidence against a cherished belief of scientists, or because the paper seems sympathetic to some idea that is taboo in the scientific community.
- Papers producing null results, which fail to confirm the hypothesis they were testing. Such papers are very often excluded on the grounds of being uninteresting, and sometimes excluded because a scientist would prefer to believe the hypothesis is correct.
Because peer-review acts like a censorship system, it does great harm. Peer-review helps to keep scientists in “filter bubbles” in which they only read about results that are consistent with their worldviews. The scientist reading only results consistent with a materialist worldview in his peer-reviewed journal may be like a 1970s Soviet citizen reading only information compatible with a Marxist-Leninist worldview in his daily edition of Pravda. The exclusion of null results (experiments that did not confirm the hypothesis tested) is a very great problem, one that often leads scientists to think certain effects are more common, or certain claims better substantiated, than they actually are.
And since you cannot effectively police bad scientific papers without doing a detailed audit that examines the source data, peer-review does not do very much to prevent scientific fraud. A more effective system would be one with no peer-review, except that a small percentage of experimental papers would be randomly selected to undergo a thorough audit, with the auditor allowed to conduct detailed interviews with all the experimenters and to inspect the original source data. A scientist would be unlikely to commit fraud if he thought there was a 5% chance his experiment would have to face such a detailed audit.
Peer-review as it has been traditionally practiced is such a mixed bag that it is no obvious evil for a scientific journal to dispense with it altogether and allow unrestricted publication for any scientist presenting a paper. That would result in somewhat more bad papers, but it would also allow the publication of many worthy papers that a typical peer-review system would have wrongly blocked.
It
seems, therefore, that if there are many junk science papers being
published, the people we should mainly blame are not publishers
failing to uphold dubious peer-review conventions, but instead the
scientists who wrote the junk science papers. It's rather silly to
be suggesting “there's so much junk science – damn those bad
publishers,” when the main person to be blamed for a bad science
paper is the author of that paper, not its publisher.
One big problem with the Gillis article is that it creates a simplistic narrative that may lead you to think that junk science exists almost entirely in junk science journals that do not follow proper peer-review standards. But the truth is that junk science is all over the place. Very many of the scientific papers published in the most reputable scientific journals are junk science papers.
There are several reasons why a sizable fraction of the scientific papers published should be called junk science. One reason is that very many scientific papers consist of groundless speculation, flights of fancy in which imaginative guesswork runs wild. For example, a large fraction of the scientific papers published in cosmology journals, particle physics journals, neuroscience journals and evolutionary biology journals consist of such runaway speculation.
Another reason is that a sizable fraction of all experimental papers involve sample sizes that are too small. A rule-of-thumb in animal studies is that at least 15 animals should be used in each study group (including the control group) in order for the study to provide moderately compelling evidence without a high chance of a false alarm. This guideline is very often ignored in scientific studies that use a much smaller number of animals. In this post I give numerous examples of memory studies that were guilty of such a problem.
The issue was discussed in a paper in the journal Nature Reviews Neuroscience entitled "Power failure: why small sample size undermines the reliability of neuroscience." The article tells us that neuroscience studies tend to be unreliable because they use too small a sample size. When the sample size is too small, there is too high a chance that the effect reported by a study is just a false alarm.
The paper received widespread attention, but did little or nothing to change practices in neuroscience. A 2017 follow-up paper found that "concerns regarding statistical power in neuroscience have mostly not yet been addressed." Exactly the same problem exists in the field of psychology. It is interesting that the peer-review process (supposedly designed to produce high-quality papers) nowadays totally fails to prevent the publication of scientific studies that are probably false alarms because too small a sample size was used.
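The connection between sample size and false alarms can be illustrated with a small simulation. This is a minimal sketch, assuming a simple two-sample z-test at the conventional 5% significance level; the function name, the one-standard-deviation effect size, and the group sizes compared are illustrative choices of mine, not figures from the papers discussed above.

```python
import math
import random

random.seed(0)

def detection_rate(n_per_group, effect, trials=5000, z_crit=1.96):
    """Fraction of simulated experiments that reach significance.

    Each trial draws two groups from normal distributions whose means
    differ by `effect` (in units of the common standard deviation of 1)
    and applies a two-sample z-test at the 5% level.
    """
    hits = 0
    for _ in range(trials):
        g1 = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        g2 = [random.gauss(effect, 1.0) for _ in range(n_per_group)]
        diff = sum(g2) / n_per_group - sum(g1) / n_per_group
        z = diff / math.sqrt(2.0 / n_per_group)
        if abs(z) > z_crit:
            hits += 1
    return hits / trials

power_small = detection_rate(5, 1.0)   # 5 animals per group
power_large = detection_rate(15, 1.0)  # 15 animals per group
print(f"power with n=5 per group:  {power_small:.2f}")
print(f"power with n=15 per group: {power_large:.2f}")
```

With only 5 subjects per group, the simulated chance of detecting even a large (one standard deviation) real effect is only around a third, while 15 per group detects it most of the time. And when many such underpowered studies are run, the "significant" results that do get published are disproportionately likely to be flukes.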
An additional reason for junk science in mainstream journals is that a great deal of biomedical research is paid for by pharmaceutical companies trying to drive particular research results (such as results suggesting the effectiveness of a medicine they are selling). Yet another reason is that the modern scientist is indoctrinated in unproven belief dogmas that he or she is encouraged to support, and such a scientist often ends up writing dubious papers trying to support these far-fetched ideas. Such papers may commit any of the sins listed in my post, "The Building Blocks of Bad Science Literature."
A widely discussed 2005 paper entitled "Why Most Published Research Findings Are False" stated the following:
"Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias."
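The arithmetic behind that claim can be sketched with a standard positive-predictive-value calculation: the probability that a statistically significant finding is true depends on statistical power, the false-positive rate, and the prior odds that the hypothesis is true. The particular numbers below are illustrative assumptions of mine, not figures taken from the paper.

```python
def positive_predictive_value(power, alpha, prior_odds):
    """Probability that a statistically significant finding
    reflects a real effect (ignoring bias and multiple testing)."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# Illustrative assumptions: 35% power (a typical underpowered study),
# the conventional 5% false-positive rate, and 1-in-10 prior odds
# that the hypothesis being tested is actually true.
ppv = positive_predictive_value(power=0.35, alpha=0.05, prior_odds=0.1)
print(f"chance a significant finding is true: {ppv:.0%}")
```

Under those assumptions a "significant" result is more likely to be false than true, which is exactly the situation the quoted passage describes.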
There are very many junk-science studies in even the best science journals. Scientists know some of the ways in which the number of junk science papers can be reduced (such as increasing the sample size and statistical power of experimental studies). But thus far there has been little progress in moving towards more rigorous standards that would reduce the number of junk science papers.
Many science textbooks contain a great deal of junk science, mixed in with factual statements. Since textbooks do little more than summarize what is written in science journals, a large number of false published research findings will inevitably result in a huge number of false claims being made in science textbooks.
"Junk" means "something of little value," and when I speak of "junk science" here I include any science paper that is of little value, for reasons such as being too speculative or too trivial or because of drawing conclusions or making insinuations that are poorly supported or not likely to be true.
Most scientific claims must be critically scrutinized, and we must always be asking questions such as, "Do we really have proof for such a thing?" and "Why might they have gone wrong when reaching such a conclusion?" and "What alternate explanations are there for the observations?" We cannot simply trust something because it is published by some publisher with a good reputation.