Yesterday the news media were discussing a PLOS ONE scientific paper entitled:
“Why Do You Believe in God? Relationships between Religious Belief,
Analytic Thinking, Mentalizing and Moral Concern.” Most of the
media uncritically regurgitated the silly press release on this
study, and failed to notice any of the very large deficiencies with
this study, which could have been noticed with a casual inspection
of the scientific paper.
The paper in question was written by authors who make very clear near the
beginning of their paper that they are in the “you have religious
beliefs because you are stupid” camp. They make this clear by
stating in the second paragraph of their paper “analytic thinking
discourages religious and spiritual beliefs,” after giving no
support for such a statement other than references to other papers
with problems similar to those in their own paper. The authors then
attempt to support this unwarranted and outrageously over-general
claim by presenting their own little experiments. But wait a second –
what happened to not reaching a conclusion until you discuss your
experiments? The standard procedure in a paper is to first suggest a
hypothesis (with a degree of impartiality, treating it merely as a
possibility), then to discuss an experiment testing the hypothesis,
and then to discuss whether the evidence supports the hypothesis.
When scientists start speaking like “we believe in this doctrine,
now here are some experiments we did that support our belief,”
they should cause us to suspect a very clear experimental bias that
calls their results into question.
What are the problems with this study? The first problem is that the
scientists did experiments that used a ridiculously small sample
size. Their first study involved only 236 persons, and a similar
sample size was used for their other studies. It is preposterous to
try to make any claims whatsoever about a relation between religious
belief and analytic abilities based on such a ridiculously small
sample size. The smaller the sample, the more likely it is that spurious
correlations will be found. There is no excuse for such a small
sample size, given the fact that the studies were merely
multiple-choice surveys, and that nowadays almost any ace programmer
can build a website easily capable of doing such studies in a way
that might get many thousands of respondents.
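To illustrate the small-sample point, here is a quick simulation (my own illustration, not anything from the paper): correlate two completely independent variables many times at the study's sample size and at a sample size a web survey could easily reach. The spread of chance correlations shrinks roughly as one over the square root of the sample size, so small samples routinely throw up sizable correlations from pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def spurious_correlations(n, trials=2000):
    """Correlate two *independent* variables `trials` times at sample size n."""
    rs = []
    for _ in range(trials):
        x = rng.standard_normal(n)
        y = rng.standard_normal(n)
        rs.append(np.corrcoef(x, y)[0, 1])  # chance correlation: true r is 0
    return np.array(rs)

small = spurious_correlations(236)    # a sample size like the study's
large = spurious_correlations(10000)  # what an online survey might reach

# Typical size of a purely spurious correlation at each sample size:
print(f"n=236:   typical |r| ~ {np.std(small):.3f}")
print(f"n=10000: typical |r| ~ {np.std(large):.3f}")
```

At n = 236 the chance correlations are several times larger than at n = 10,000, which is why weak correlations from small surveys deserve so little weight.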
The second problem with the study is the very small correlation reported.
The study found a negative correlation between religious belief and
analytic ability that is reported as having a P-value of “<.05”
or less than .05. Sound impressive? It isn't. The P-value is quite
a slippery concept. It is commonly thought of as the chance of you
getting the result when there is no actual correlation. But
according to an article in Nature, such an idea is mistaken. Here is
an excerpt from that article:
According to one widely used calculation, a P value of 0.01 corresponds to a
false-alarm probability of at least 11%, depending on the underlying
probability that there is a true effect; a P value of 0.05 raises
that chance to at least 29%.
So the P value of “less than 0.05” reported in the “Why Do You
Believe in God” study means very little, with a chance of at least
29% that a false alarm is involved. And the chance of a false alarm
could easily be more than 50%, given the experimental bias the study
shows by very quickly announcing near its beginning the claim that “analytic thinking discourages religious and spiritual
beliefs” before providing any data to support the claim.
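The “at least 11%” and “at least 29%” figures quoted from Nature can be reproduced with a standard lower bound on the Bayes factor (the Sellke–Bayarri–Berger bound), assuming 50-50 prior odds that there is a true effect. A short sketch of that calculation:

```python
import math

def min_false_alarm_prob(p):
    """Lower bound on the chance the null is true given a result with
    p-value p, assuming 50-50 prior odds, via the Sellke-Bayarri-Berger
    bound on the Bayes factor: BF >= -e * p * ln(p), valid for p < 1/e."""
    bf = -math.e * p * math.log(p)  # best possible case for a real effect
    return bf / (1 + bf)

# Reproduces the figures quoted in the Nature article:
print(f"p = 0.01 -> false-alarm chance of at least {min_false_alarm_prob(0.01):.0%}")  # 11%
print(f"p = 0.05 -> false-alarm chance of at least {min_false_alarm_prob(0.05):.0%}")  # 29%
```

These are floors, not estimates; with a lower prior probability of a real effect (plausible given the bias discussed above), the false-alarm chance is higher still.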
The “Why Do You Believe in God” study shows strong signs of what is
called p-hacking or data-dredging. This is when scientists gather
data and then slice it and dice it through various tortuous
statistical manipulations, data-fishing until they can finally claim
to have discovered something “statistically significant” (usually
ending up with some borderline result which means little). The
hallmark of data-dredging is that the study veers into claiming
correlations that it wasn't explicitly designed to look for, and that
seems to be what is going on in this study (a study that announces no
clear objectives at its beginning, unlike a typical paper announcing
the study was designed to test some particular hypothesis).
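The arithmetic behind the data-dredging worry is simple (this is a general illustration, not a claim about exactly how many tests these authors ran): if a team quietly tests many candidate correlations on data with no real effects, each at the .05 level, the chance that at least one comes out “statistically significant” grows quickly.

```python
# Chance of at least one "significant" hit among k independent null tests,
# each conducted at the .05 level: 1 - 0.95^k.
for k in (1, 5, 10, 20):
    print(f"{k:2d} tests -> {1 - 0.95**k:.0%} chance of a spurious 'significant' result")
```

With twenty looks at the data, a borderline “discovery” is more likely than not, which is why a study that announces no pre-registered hypothesis deserves extra skepticism.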
The study in question, and similar studies, are misrepresented by
Professor Richard Boyatziz, who makes a doubly-misleading statement: that
his own study confirms such findings, and that people who have faith
“are not as smart as others.”
Given that Boyatziz's study has a ridiculously small sample size, and a very unimpressive borderline P-value, it very clearly does not confirm anything. As for Boyatziz's claim that studies have shown that people who have faith “are not as smart as others,” it is as bogus as his claim about his own study.
Let's look at the biggest such study. It was a meta-analysis described in
this 2013 Daily Mail story which has this misleading headline, one
that doesn't match the actual results of the study: “Those with
higher intelligence are less likely to believe in God claims new
review of 63 scientific studies stretching back decades.” But
when we look at the main graph in the study, we find quite a
different story.
The graph shows blue dots representing countries. The graph indicates atheism is very rare among people with low intelligence. But what about people with normal intelligence or slightly higher-than-normal intelligence? Among those countries with an average IQ of about 100 (between 95 and 105), the great majority (about two-thirds) show up in the graph as having atheism rates of 40% or less. Among those countries with above-average intelligence (IQ of 105 or more), 80% show up as having atheism rates of 30% or less.
IQ (left) compared to % of population that are atheist
This graph clearly shows that Boyatziz's claim that “people who have
faith (i.e., are religious or spiritual) are not as smart as others”
is a smug falsehood. Such a claim could only be justified if the
chart above showed a strong predominance of atheism among groups of
average or above-average intelligence. The graph shows the opposite
of that. The correct way to describe the graph above is as follows:
atheism is extremely uncommon among people of low intelligence, and
fairly uncommon among both peoples of average intelligence and people
of above-average intelligence. In fact, in the graph above the
country with the highest percentage of atheists has a below-average IQ,
and the country with the highest average intelligence has only about 12%
atheists.
Boyatziz's “not as smart as others” claim has the same low credibility as
claims made by people who say (based on small differences in IQ
scores) that “black people are not as smart as white people.”
Slight statistical differences do not warrant such sweeping
generalizations, which hardly make sense when a white person has no
idea whether the black person to his left or right is smarter than he
is. The tactics used here are similar to those used by white
supremacists: get some data-dredging tiny-sample study finding some
dubious borderline result, and then shamelessly exaggerate to the
hilt, claiming that it proves “those types of people
are dumber.”
It is basically a waste of time to try to justify either belief or
disbelief by searching for slim correlations between belief and
intelligence. Imagine someone who tried to decide how to vote in
November by exhaustively analyzing the intelligence of Democrats and
Republicans, trying to figure out which was smarter. This would be a
gigantic waste of time, particularly so if the results were
borderline. How to vote should be based on issues, not some kind of
“go with the slightly smarter crowd” reasoning. Similarly,
belief or non-belief in spiritual realities should be based on
whether there are good reasons for such belief or non-belief. If 95%
of smart people do not believe in something, you should still believe
in it if you have convincing reasons for believing in it –
particularly if such reasons are subtle or complicated reasons that
those 95% have little knowledge of.
It is interesting to note that the Pearce-Rhine experiments on ESP had
an overall p-value of 10^-22 (or .0000000000000000000001), as reported
in the last column of the table in this link. Here is a result that
is more than 100,000,000,000,000,000 times more compelling than
the unimpressive p-value of about .05 reported by this “Why Do You
Believe in God?” study. But the same people who will groundlessly
reject the Pearce-Rhine study (because of ideological reasons) will
warmly embrace the study with the paltry p-value of only about .05
(again, for ideological reasons). Can we imagine a stranger double
standard?
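The comparison is simple arithmetic (taking the two reported p-values at face value):

```python
# Ratio of the study's p-value to the Pearce-Rhine p-value:
ratio = 0.05 / 1e-22
print(f"{ratio:.1e}")  # on the order of 5e+20, comfortably more than 10^17
```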
Postscript: Back in the old days, studies like this might have a thin veneer of scientific objectivity. But the press release for the study makes clear that the researchers "agree with the New Atheists," which heightens our suspicions that we have here some agenda-driven research created mainly to be used as an ideological battering ram.