Most scientific articles and papers are good, solid information. But our news outlets will often give us misleading articles on scientific topics. Such articles may be based on poor reporting of sound scientific studies, but often the problem lies in both the article promoting a scientific study and the scientific study itself. Let us look at the various tricks used to build up this type of misinformation, a topic that is also discussed in this post.
Building Block #1: The Bunk “Click Bait” Article Headline
When it comes to reporting scientific studies, shameless hyping and unbridled exaggeration are very common, and outright lying is not rare. Research that suggests a possibility only very weakly may be trumpeted as dramatic proof of that possibility, or the research may be described with some claim completely at odds with what it actually found. It's pretty much an “anything goes” type of situation.
Why does this happen? It all boils down to money. The way large modern web sites work financially is that the more people access a particular page, the more money the web site gets from its online ads. So large web sites have a financial motive to produce “click bait” stories. The more sensational the headline, the more likely someone is to click on it, and the more money the web site will make.
A recent outrageous example of such a bunk headline was the headline “Fingerprints of Martian Life” on the web site of Air and Space magazine, published by the Smithsonian. The article merely reported on some carbon compounds found on Mars, compounds that were neither the building blocks of life nor the building blocks of the building blocks of life.
Building Block #2: The Misleading Below-the-Headline Teaser
Sometimes a science article will have a short unpretentious title, but below the title or headline we will see some dubious teaser text making a claim that is not substantiated by the article. An example is this 2007 article that appeared in Scientific American. Below the unpretentious title of “The Memory Code,” we had some teaser text telling us, “Researchers are closing in on the rules that the brain uses to lay down memories.” This claim was false in 2007, and as of 2018 there is still no scientist who has any understanding of how a brain could physically store episodic memories or conceptual memories as brain states or neural states.
Building Block #3: The Underpowered Study with a Too-Small Sample Size
A rule of thumb in animal studies is that at least 15 animals should be used in each study group (including the control group) for the results to provide moderately compelling evidence without a high chance of a false alarm. This guideline is very often ignored in scientific studies that use a much smaller number of animals. In this post I give numerous examples of MIT memory studies that were guilty of such a problem.
The issue was discussed in a paper in the journal Nature Reviews Neuroscience entitled “Power failure: why small sample size undermines the reliability of neuroscience.” The paper tells us that neuroscience studies tend to be unreliable because they use too small a sample size. When the sample size is too small, there is too high a chance that the effect reported by a study is just a false alarm.
The paper received widespread attention, but did little or nothing to change practices in neuroscience. A 2017 follow-up paper found that "concerns regarding statistical power in neuroscience have mostly not yet been addressed."
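To make the power problem concrete, here is a minimal simulation sketch (written in Python, with an assumed medium-sized effect and illustrative group sizes not taken from any particular study) showing how often a simple two-group comparison detects a real effect at different sample sizes, and how often it "detects" an effect that is not there:

```python
# Sketch: how group size affects the reliability of a two-group comparison.
# The effect size, group sizes, and trial count are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRIALS = 5000
ALPHA = 0.05
EFFECT = 0.5   # assumed true difference, in standard-deviation units

def detection_rate(n_per_group, true_effect):
    """Fraction of simulated experiments that report p < ALPHA."""
    hits = 0
    for _ in range(TRIALS):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(control, treated)
        if p < ALPHA:
            hits += 1
    return hits / TRIALS

for n in (5, 15, 30):
    power = detection_rate(n, EFFECT)        # a real effect is present
    false_alarm = detection_rate(n, 0.0)     # no real effect is present
    print(f"n={n:2d} per group: power ~{power:.2f}, false-alarm rate ~{false_alarm:.2f}")
```

The chance of detecting the assumed effect rises sharply with group size. With very small groups most genuine effects go undetected, which (as the "Power failure" paper argues) makes the positive findings published by such studies much less trustworthy.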
Building Block #4: The Weak-Study Reference
A weak-study reference occurs when one scientific paper or article refers to some previously published paper or article without including the appropriate caveats or warnings about problems in that paper. For example, one scientist may have a “low statistical power” too-small-sample-size study suggesting that some gene manipulation causes rats to be smarter. He may then try to bolster his paper by referring to some previous study that claimed something similar, while completely failing to mention that the previous study was also a “low statistical power” too-small-sample-size study.
Building Block #5: The Misleading Brain Visual
Except for activity in the auditory cortex or the visual cortex, a brain scan will typically show differences in activity of only 1% or less. As the post here shows with many examples, typical brain scan studies will show a difference of only half of one percent or less when studying various types of thinking and recall. Now imagine you show a brain visual that honestly depicts these tiny differences. It will be a very dull visual, as all of the brain regions will look the same color.
But
very many papers have been published that show such results in a
misleading way. You may see a brain that is entirely gray, except for
a few small regions that are in bright red. Those bright red regions
are the areas in the brain that had a half of one percent more
activity. But such a visual is very misleading, giving readers the
entirely inaccurate impression that some region of the brain showed
much more activity during some mental activity.
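As a rough illustration of how the trick works (a sketch with invented numbers, not data from any real brain scan), here is how the choice of color scale can make a half-percent difference look dramatic:

```python
# Sketch with invented numbers: the same 0.5% activity difference shown two ways.
import numpy as np
import matplotlib.pyplot as plt

activity = np.full((20, 20), 100.0)   # uniform baseline "activity"
activity[8:12, 8:12] = 100.5          # one small region is 0.5% higher

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

# Honest scale: the color range spans 0 to 100.5, so the image looks uniform.
ax1.imshow(activity, cmap="gray", vmin=0, vmax=100.5)
ax1.set_title("Honest scale (0 to 100.5)")

# Stretched scale: the color range spans only 100 to 100.5, so the region glows.
ax2.imshow(activity, cmap="hot", vmin=100, vmax=100.5)
ax2.set_title("Stretched scale (100 to 100.5)")

plt.show()
```

The underlying numbers are identical in both panels; only the color mapping differs.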
Building Block #6: The Dubious Chart
There are a wide variety of misleading charts or graphs that show up in the scientific literature. One example is a "Composition of the Universe" pie chart that tells you the universe is made up of 71.4% dark energy and 24% dark matter. Since neither dark energy nor dark matter has been discovered, no one has any business putting up such pie charts, which imply some state of knowledge that mankind does not have.
Another example of a dubious chart is a chart showing the ancestry of humans from more primitive species. If I do a Google image search for “human evolution tree,” I see various charts trying to put fossils into a tree of human evolution. But the charts are speculative, particularly all their little lines suggesting particular paths of ancestry. The recent book “Almost Human” by two paleontologists tacitly admits this, for in its chart of hominid evolution we see four nodes that have question marks, indicating hypothetical ancestors that have not been found. There is also inconsistency in these charts. Some of these charts list Australopithecus africanus as a direct ancestor of humans, while the “Almost Human” chart does not list Australopithecus africanus as a direct ancestor of humans. Some of these charts list Homo erectus as a direct ancestor of humans, while others indicate Homo erectus was not a direct ancestor of humans. Given all the uncertainties, the best way to do such a chart is to have a chart like the one below, which simply shows different fossil types and when they appeared in the fossil record, without making speculative assumptions about paths of ancestry. But almost never do we see such a chart presented in such a way.
From a page at Britannica.com
Building Block #7: The Dubious Appeal to Popular Assumptions
Often a scientific study will suggest something that makes no sense unless some particular unproven assumption is true. The paper will try to lessen this problem by making some claim such as “It is generally believed that...” or “Most scientists believe...” Such claims are almost never backed up by specific references showing that such claims of support are true. For example, a scientific paper may say, “Most neuroscientists think that memories are stored in synapses,” but the paper will almost never do something such as citing an opinion poll of neuroscientists showing some specific percentage of scientists who believe such a thing.
Building Block #8: The Dubious Causal Inference
Experiments
and observations rarely produce a result in which a causal
explanation “cries out” from the data. A much more common
situation is that something is observed, and the reason or reasons
for such a thing may be unknown or unclear. But a scientist will not
be likely to get a science paper published if it has a title such as
“My Observations of Chemicals Interacting” or “Some Brain Scans
I Took” or “Some Things I Photographed in Deep Space.” The
scientist will have a much higher chance of getting his paper
published if he has some story line he can sell, such as “x causes
y” or “x may raise the risk of y.”
But many of the causal suggestions made in scientific papers are not warranted by the experimental or observational data the papers describe. A scientific paper states the following: “Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered too strong for their strength of causal inference.” In other words, a full one third of scientific papers are doing things such as suggesting that one particular thing causes another particular thing, when their data does not support such statements.
Such causal statements are often made by studies that were not even designed to look for the causal relation they suggest.
Building Block #9: The Dubious Discounting of Alternate Explanations
When
a scientist suggests one particular thing causes another particular
thing, he or she often has to say a bad word or two about some
alternate causal explanation. Sometimes this involves putting
together some kind of statistical analysis designed to show that some
alternate explanation is improbable. The paper will sometimes state
that such an analysis shows the alternate explanation is
“disfavored.”
Often,
this type of analysis can be very dubious because of its bias. If
scientist W is known to favor explanation X for observation Y rather
than explanation Z, such a scientist's analysis of why explanation Z
does not work may have little value. What often goes on is that
“strawman assumptions” are made about the alternate explanation.
To discredit alternate explanation Z, a scientist may assume some
particular version of that explanation that is particularly easy to
knock down, rather than some more credible version of the explanation
which would not be as easy to discredit.
Building Block #10: The Slanted Amateurish “Monte Carlo” Simulation
A Monte Carlo simulation is a computer program created to show what might happen under chance conditions. Many a scientific study will include (in addition to the main observational results reported) some kind of Monte Carlo simulation designed to back up the claims of the study. But there are two reasons why such simulations are often of doubtful value. The first is that they are typically programmed not by professional programmers, but by scientists who occasionally do a little programming on the side. The reliability of such efforts is often no greater than what you would get if you let your plumber do your dental work. The second reason is that it is very easy to do a computer simulation showing almost anything you want to show, just by introducing subtle bias into the programming code.
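As a toy illustration (entirely hypothetical, and not taken from any published study), here is how small the difference can be between an honest chance simulation and a subtly biased one:

```python
# Toy Monte Carlo sketch: estimate how often 10 coin flips give 8 or more heads.
# The "biased" version quietly re-runs below-average trials, inflating the result.
import random

TRIALS = 100_000

def chance_of_8_heads(flips_per_trial=10, trials=TRIALS, biased=False):
    successes = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(flips_per_trial))
        if biased:
            # Subtle bias: discard and re-run any "unlucky" trial with few heads.
            while heads < 5:
                heads = sum(random.random() < 0.5 for _ in range(flips_per_trial))
        if heads >= 8:
            successes += 1
    return successes / trials

print("honest estimate:", chance_of_8_heads())
print("biased estimate:", chance_of_8_heads(biased=True))
```

Nothing in the reported numbers would reveal which version of the code was run.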
Virtually
always the scientists who publish these Monte Carlo simulations do
not publish the programming source code that produced the simulation,
often because that would allow critics to discover embarrassing bugs
and programming biases. The rule of evaluation should be “ignore
any Monte Carlo simulation results if the source code was not
published.”
Building Block #11: The Cherry-Picked Data
The term "cherry picking" refers to presenting, discussing or thinking about data that supports your hypothesis or belief, while failing to present, discuss or think about data that does not support your hypothesis or belief. One type of cherry picking goes on in many scientific papers: in the paper a scientist may discuss only his or her results that support the hypothesis claimed in the paper title, failing to discuss (or barely mentioning) results that do not support his hypothesis. For example, if the scientist did some genetic engineering to try to make a smarter mouse, and did 10 tests to see whether the mouse was smarter than a normal mouse, we may hear much in the paper about 2 or 3 tests in which the genetically engineered mouse did better, but little or nothing of 4 or 5 tests in which the genetically engineered mouse did worse.
A very different type of cherry-picking occurs in another form of scientific literature: science textbooks. For many decades biology and psychology textbook writers have been notorious cherry pickers of observational results that seem to back up prevailing assumptions. The same writers will give little or no discussion of observations and experiments that conflict with prevailing assumptions. And so you will read very little or nothing in your psychology textbook about decades of solid experimental research backing up the ideas that humans have paranormal abilities; you will read nothing about many interesting cases of people who functioned well despite losing half, most, or almost all of their brains due to surgery or disease; and you will read nothing about a vast wealth of personal experiences that cannot be explained by prevailing assumptions. Our textbook writer has cherry picked the data to be presented to the reader, not wanting the reader to doubt prevailing dogmas such as the dogma that the mind is merely the product of the brain.
Building Block #12: The All-But-Indecipherable Speculative Math
Thousands of papers in theoretical physics are littered with all-but-unintelligible mathematical equations. The meaning of a complex math equation can always be made clear, if a paper documents the terms used. For example, if I use the equation f = (G·m1·m2)/r², I can have some lines underneath the equation specifying that m1 and m2 are the masses of two bodies, G is the universal gravitational constant, f is the gravitational force between the bodies, and r is the distance between them.
But thousands of theoretical physics papers are filled with much more complex equations that are not explained in such a way. The author will provide no clarification of the symbols being used. We may wonder whether deliberate obscurity is the goal of such authors, and can compare them to the Roman Catholic priests who for centuries recited the Mass in Latin rather than in a language people could understand.
Building Block #13: Data Dredging
Data dredging refers to techniques such as (1) getting some body of data to yield some particular result that it does not naturally yield, a result you would not find if the data was examined in a straightforward manner, and (2) comparing some data with some other bodies of data until some weak correlation is found, possibly by using various transformations, manipulations and exclusions that increase the likelihood that such a correlation will show up. An example may be found in this paper, where the authors do a variety of dubious statistical manipulations to produce some weak correlations between a body of genetic expression data and a body of brain wave data, which should not even be compared because the two sets of data were taken from different individuals.
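To illustrate the general hazard (with purely random made-up data, not the data from that paper), here is how easily a "significant" correlation can be dredged out of noise simply by making enough comparisons:

```python
# Sketch: dredging a "significant" correlation out of pure noise.
# All data here is random; no real gene-expression or brain-wave data is used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n_subjects = 20
n_variables = 200   # many measured quantities compared against one outcome

outcome = rng.normal(size=n_subjects)
best_r, best_p = 0.0, 1.0
for _ in range(n_variables):
    variable = rng.normal(size=n_subjects)   # genuinely unrelated to the outcome
    r, p = stats.pearsonr(variable, outcome)
    if p < best_p:
        best_r, best_p = r, p

print(f"best correlation found: r = {best_r:.2f}, p = {best_p:.4f}")
# With 200 tries, finding a nominally "significant" p < 0.05 is almost guaranteed.
```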
Building Block #14: Tricky Mixing of the Factual and the Speculative
An extremely common practice of weak science literature is to mix up references to factual observations and references to speculative ideas, without labeling the speculative parts as speculations. This is often done with a ratio such as ten or twenty factual statements to each speculative statement. When the reader reaches the speculation (which is not labeled as such), there is a large chance that he will interpret it as something factual, since the previous ten or twenty statements (and the next ten or twenty statements) are factual.
Building Block #15: The Pedantic Digressions of Little Relevance
Another extremely common practice of weak science literature is to pile up digressions that may impress the reader but are not very relevant to the topic under discussion. Such a practice is extremely common when the topic under discussion is one that scientists do not understand. So, for example, a scientist discussing how humans retrieve memories (something scientists do not at all understand) may give all kinds of discussions of assorted research, with much of the research discussed having little relevance to the question at hand. But many a reader will see all this detail, and then think, "Oh, he understands this topic well."
Building Block #16: The Suspicious Image
This article states, "35,000 papers may need to be retracted for image doctoring, says new paper." It refers to this paper, which begins by stating, "The present study analyzed 960 papers published in Molecular and Cellular Biology (MCB) from 2009-2016 and found 59 (6.1%) to contain inappropriately duplicated images."
Building Block #17: The Shady Work Requested from a Statistician
A page on the site of the American Council on Science and Health is entitled "1 in 4 Statisticians Say They Were Asked to Commit Scientific Fraud." The page says the following:
A stunning report
published in the Annals of Internal Medicine concludes that
researchers often make "inappropriate requests" to
statisticians. And by "inappropriate," the authors aren't
referring to accidental requests for incorrect statistical analyses;
instead, they're referring to requests for unscrupulous data
manipulation or even fraud.