The following thing happens again and again in the world of science and science reporting.
- Someone will publish a paper providing feeble or faulty evidence for something that the scientific community really wants to believe in.
- The flaws of the paper will be overlooked, and the paper will be hailed as a great research advance.
- Innumerable web sites will hail the feeble research, often hyping it and making it sound as if it were proof of something.
- Countless other scientific papers in future years will cite the faulty paper, meaning that there will be a gigantic ripple effect in which weak science gets perpetuated.
We might call this last effect “the afterlife of bad science.” Flawed and faulty research can enjoy a long afterlife, reverberating for years after its publication, particularly if the research matches the expectations and worldview of the scientific community.
We see an example of this in the case of a 2002 paper entitled “Molecular Evolution of FOXP2, a Gene Involved in Speech and Language.” The paper said, “We show that the FOXP2 gene contains changes in amino acid coding and a pattern of nucleotide polymorphism, which strongly suggest that this gene has been the target of selection during recent human evolution.”
The study was music to the ears of those favoring orthodox explanations of the origin of man and the origin of human language. Since FOXP2 was being called a “language gene,” the study suggested that natural selection might have helped to spread this language gene, and that natural selection may have had a role in the origin of language. In the following 16 years, the study was cited more than 1000 times by other scientific papers.
With that many citations, we can say that the study won superstar status. But no such thing should have occurred, for the study had a glaring defect. According to an article in Nature, “It was based on the genomes of only 20 individuals,” and “has never been repeated.” The authors had no business drawing conclusions about the natural selection of a gene from such a small sample size.
Now, according to the journal Nature, a study using a much larger dataset has overthrown the 2002 study. We read the following:
They found that the signal that had looked like a selective sweep in the 2002 study was probably a statistical artefact caused by lumping Africans together with Eurasians and other populations. With more — and more varied — genomes to study, the team was able to look for a selective sweep in FOXP2, separately, in Africans and non-Africans — but found no evidence in either.
No selective sweep means no natural selection in regard to the FOXP2 gene, which wipes out the idea that this gene provides evidence for a Darwinian explanation of the origin of language. But what could be more predictable? Whenever you do a scientific study with a very small sample size, there's always a very large chance of a false alarm.
The story here is a familiar one: a feeble study gets hailed because it fits the belief expectations of the scientific community. Just such a thing goes on over and over again in the field of neuroscience. Every year multiple studies are published that are trumpeted as evidence for neural memory storage in animals, as evidence for an engram. Usually such studies suffer from exactly the same problem as the 2002 FOXP2 study: too small a sample size. As discussed here, these “engram” studies typically suffer from even smaller sample sizes than the 2002 FOXP2 study, and may involve fewer than 10 animals per study group. With such a small sample size, the chance of a false alarm is very high. Similarly, almost all brain imaging studies involve sample sizes so small that they have little statistical power, and do not provide good evidence of anything.
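To make the sample-size complaint concrete, here is a minimal simulation sketch using purely hypothetical numbers (a group size of 10 and an assumed moderate effect, chosen only for illustration): with groups that small, most real effects go undetected, and the results that do cross the “significance” line tend to exaggerate the effect.

```python
# A minimal sketch with hypothetical numbers: with 10 subjects per group,
# how often does a two-sample t-test detect a real but moderate effect,
# and how large do the "significant" effects look?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 10        # hypothetical group size, like many small animal studies
true_effect = 0.5       # assumed true effect, in standard-deviation units
n_experiments = 20000   # number of simulated studies

hits = 0
significant_effects = []   # estimated effects from studies that reached p < 0.05
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        hits += 1
        significant_effects.append(treated.mean() - control.mean())

power = hits / n_experiments
print(f"Chance of detecting the real effect (statistical power): {power:.2f}")
# typically well under 0.5 with these numbers

print(f"Average estimated effect among 'significant' results: "
      f"{np.mean(significant_effects):.2f} (true value is {true_effect})")
# typically well above the true 0.5 -- the surviving results exaggerate the effect
```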
But the scientific community continues to cite these underpowered studies over and over again, and the popular press trumpets such feeble studies as if they were good evidence of something. Critics don't seem to get anywhere complaining about “underpowered studies” and “studies of low statistical power.” Perhaps it's time to start saying these too-little-data studies are examples of crummy research. “Crummy” is a word meaning “poor because of a small size” – for example, “You must be kidding if you think I'm going to live with you in your crummy studio apartment.”
Using the term “crummy research” for studies that draw conclusions or inferences based on too little data, we may say that a large fraction of the research in neuroscience and evolutionary biology is crummy research -- research such as animal studies that used too few animals, brain scans that used too few subjects, or natural history studies based on too few fossils or on DNA fragments that are too small. Evolutionary biologists have a bad habit of drawing conclusions based on DNA data that is too fragmentary. The problem is that DNA has a half-life of only about 521 years, meaning that every 521 years half of a DNA sample will decay away. So when an evolutionary biologist draws conclusions about the relationship between humans and some hominid species that lived more than 200,000 years ago, such conclusions are based on only tiny fragments of DNA recovered from such very old remains – only the tiniest fraction of the full DNA. Anyone who draws firm conclusions from such fragmentary data would seem to be giving us an example of crummy research.
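Taking the 521-year half-life figure at face value, the arithmetic behind that complaint is easy to sketch; the snippet below simply plugs in the 200,000-year age mentioned above (both numbers come from the text, not from new data).

```python
# A back-of-the-envelope sketch using the figures mentioned above:
# a 521-year half-life and a specimen roughly 200,000 years old.
half_life_years = 521
age_years = 200_000

# After each half-life, half of the remaining DNA decays away,
# so the surviving fraction is 0.5 raised to (age / half-life).
n_half_lives = age_years / half_life_years     # about 384 half-lives
fraction_remaining = 0.5 ** n_half_lives       # roughly 10 ** -116

print(f"Half-lives elapsed: {n_half_lives:.0f}")
print(f"Fraction of original DNA remaining: {fraction_remaining:.1e}")
```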
Some orthodox Darwinists probably gnashed their teeth when they read the new study showing no evidence of natural selection in the FOXP2 gene. For this gene was thought to be one of the few genes that showed evidence of such a thing. In the 2018 book Who We Are and How We Got Here by David Reich, a professor of genetics at Harvard Medical School, the author makes this revealing confession on page 9: “The sad truth is that it is possible to count on the fingers of two hands the examples like FOXP2 of mutations that increased in frequency in human ancestors under the pressure of natural selection and whose functions we partly understand.”
Judging from this statement, there are merely 10 or fewer cases where we know of some mutation that increased in the human population because of natural selection. And now that FOXP2 is no longer part of this tiny set, the number is apparently 9 or fewer. But humans have something like 20,000 genes. If only 9 or fewer of these genes seem to have been promoted by natural selection, isn't this something that overwhelmingly contradicts the claim that human origins can be explained by natural selection? If such a claim were true, we would expect to find thousands of genes that had been promoted by natural selection. But the scientific paper “The Genomic Rate of Adaptive Evolution” tells us “there is little evidence of widespread adaptive evolution in our own species.”
In the study here, an initial analysis found 154 positively selected genes in the human genome -- genes that seemed to show signs of being promoted by natural selection. But when the authors applied the Bonferroni correction (a standard statistical adjustment for the fact that many genes were tested at once) to get a more accurate number, they were left with only 2 genes in the human genome showing signs of positive selection (promotion by natural selection). That's only 1 gene in 10,000. Call it the faintest whisper of a trace -- hardly something inspiring confidence in claims that we are mainly the product of natural selection. The 2014 study here finds a similar result, saying, “Our overall estimate of the fraction of fixed adaptive substitutions (α) in the human lineage is very low, approximately 0.2%, which is consistent with previous studies.” That's only about 1 in 500 fixed substitutions attributed to natural selection.
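For readers unfamiliar with that correction, here is a minimal sketch of how a Bonferroni correction works, using made-up p-values rather than the study's actual data: when many genes are tested at once, each individual result must clear a much stricter significance threshold, which is why a list of 154 candidate genes can shrink so dramatically.

```python
# A minimal sketch of the Bonferroni correction, with made-up p-values.
# The idea: if you test m genes at an overall false-alarm level of 0.05,
# each gene must individually pass the much stricter threshold 0.05 / m.
alpha = 0.05
m = 20000                  # hypothetical number of tests (roughly the number of human genes)
threshold = alpha / m      # 0.0000025

# Hypothetical per-gene p-values from a scan for positive selection.
p_values = {"GENE_A": 0.0000004, "GENE_B": 0.0009, "GENE_C": 0.03}

for gene, p in p_values.items():
    verdict = "still significant" if p < threshold else "no longer significant"
    print(f"{gene}: p = {p}  ->  {verdict} after Bonferroni correction")

# Only GENE_A survives here; uncorrected, all three would have looked
# "significant" at the conventional 0.05 level.
```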