A recent article on medium.com by Gleb Tsipursky is entitled “We're in an Epidemic of Mistrust in Science.” The author cites a poll stating that only 14 percent of respondents expressed “a great deal of confidence” in academia. He also cites a 2017 poll in which only 35 percent of respondents had “a lot” of trust in scientists, with 45 percent saying instead that they had only “a little” trust in scientists.
Tsipursky has a simple explanation for the fact that we don't all trustingly yield to the claims of scientists. His explanation is that it's all the stupid public's fault. He says that the public has failed to realize the principle that experts are “much more likely to be right.” Tsipursky states the following:
That doesn't mean an expert opinion is more likely to be right – it's simply much more likely to be right than the opinion of a nonexpert.... While individual scientists may make mistakes, it is incredibly rare for the scientific consensus as a whole to be wrong.
It is not a valid principle that expert opinion is much more likely to be right than the opinion of a nonexpert. It is probably true that regarding small-scale, tangible physical matters such as plumbing or farming or surgery, expert opinion is usually correct. But there is no general principle that expert opinion is correct in matters that are moral or philosophical or highly abstract, or in anything involving origins or large-scale trends. In this post I discuss many examples of expert opinions that proved wrong, with disastrous consequences that often cost many thousands of lives.
Tsipursky admits that “ideological biases can have a strongly negative impact on the ability of experts to make accurate evaluations,” and he links to a book about politics. But ideological biases occur not just in regard to political matters, but also in regard to all kinds of questions relating to physics, cosmology, biology and psychology. When scientists are trained for particular jobs such as evolutionary biologist or neuroscientist, they are conditioned in ideological enclaves where poorly established theories may be required beliefs and unreasonable taboos may be prevalent. We should no more expect such enclaves to produce highly accurate opinions than we should expect some randomly chosen theology school to produce highly accurate opinions about matters of eternal truth.
Another major reason why experts can reach wrong opinions is that an expert is often not like an impartial juror, but like a juror who has been bribed. Although jurors sometimes make mistakes, all in all the jury system is an excellent method for producing reliable verdicts; and at its center is the idea of impartiality. We are careful to select only jurors who have no financial interest in the matter they are deciding. But a large fraction of experts have a financial interest in reaching particular judgments. If you shell out $100,000 in graduate school tuition to get a degree in some field, you are expected to conform to the dogmas and intellectual taboos of that field, and not express opinions defying the majority viewpoint. If you express such opinions anyway, you will be less likely to get your scientific papers published, less likely to get research grant money, less likely to be appointed as a tenured professor, and less likely to get a good return on your hefty tuition investment. So it is quite common for an expert to have a large financial incentive to conform to majority opinions. An expert with such an incentive is not like an impartial juror, but is like a juror who has been bribed to reach some particular decision.
An article on widespread sexual harassment in scientific academia reminds us of how someone trying to become a scientist is totally dependent on the approval of other scientists, a type of situation that minimizes contrarian free-thinking and maximizes “me too” thinking in which a person yields to peer pressure:
“There
is a complete dependency, in a way that there isn’t in the
corporate world, on the people who are above you,” she says.
“[Academics and committees] have to pass your comprehensive exams;
they have to pass your dissertation proposal; they have to pass your
defense of your dissertation; they have to write you letters for your
first job, they have to write you letters for your funding – at all
of those stages you are vulnerable. If they say no, you have no
recourse; there is nobody else you can substitute for them to write
that letter for you.”
We can imagine a system that would maximize the chance that scientists would be impartial judges of truth. A person would become a certified scientist simply by passing a very hard standardized 3-hour multiple-choice test. Once he had passed the test, he would be assured a government salary for 5 years, along with a certain amount of money for research. The scientist would get the salary and the research money no matter what opinions he stated. There would be no committees analyzing someone for conformity before appointing him as a tenured professor. The only way someone could stay a certified scientist would be by passing the very hard test every 5 years. The publication of the papers of all certified scientists would be guaranteed, and scientists wouldn't have to worry about votes of approval from “peer review” paper reviewers. Such a system would maximize the chance of impartial and objective scientists, but it would be totally different from the current system.
Tsipursky does nothing to back up his claim that it is incredibly rare for a scientific consensus to be wrong, other than to link to another weakly reasoned blog post and to offer the laughable reasoning below:
Scientists
get rewarded in money and reputation for finding fault with
statements about reality made by other scientists. Thus, when the
large majority of them agree on something — when there is a
scientific consensus — it is a clear indicator that whatever
they agree on accurately reflects reality.
Anyone familiar with the “echo chambers” that certain branches of science have become may chuckle at this claim. The situation is that quite a few unproven and implausible ideas have become popular among different tribes of scientists, just as such ideas have become popular among different religions. Scientists who criticize a prevailing dogma that is poorly established or implausible will not at all “get rewarded in money and reputation for finding fault with statements about reality made by other scientists.” They will instead be treated as heretics and lepers, and will have a much smaller chance of having their papers published and of being appointed as tenured professors.
Besides
blaming the public for distrust in academia, Tsipursky tries to tell
us it's the Internet's fault. He states the following:
Before
the internet, we got our information from sources like mainstream
media and encyclopedias, which curated the information for us to
ensure it came from experts, minimizing the problem of confirmation
bias. Today, the lack of curation means thinking errors are causing
us to choose information that fits our intuitions and preferences, as
opposed to the facts.
Anyone familiar with the extremely high rate of confirmation bias (and general ideological bias) in the writings of experts of many types will chuckle at the idea that people previously got unbiased information when they read encyclopedias and the mainstream media.
Tsipursky has an idea for how trust in academic scientists can be increased. His idea is for people to sign something called the Pro-Truth Pledge. One of the promises in that pledge is to “recognize the opinions of experts as more likely to be accurate when the facts are disputed.” That is not a sound general principle, since communities of experts may often be wrong for sociological and groupthink reasons, and since a person belonging to a community of experts often has a financial interest in reaching some opinion that echoes that of the community. Such an expert is not like an impartial juror, but like a juror who has been bribed to reach some particular decision.
There's a better way to get the public to have increased confidence in academic scientists: have the scientists themselves act in ways that warrant such confidence. The following are some of the things that scientists could do.
- Scientists could stop pretending to understand things they do not understand. Nobody understands the origin of life, the origin of biological complexity, the origin of human minds, how a newly fertilized ovum is able to progress to become a full baby, why proteins fold into conveniently functional three-dimensional shapes, how a human is able to have memories lasting for 50 years, how humans are able to generate ideas, why humans are able to instantly remember obscure facts and memories, or why the universe's fundamental constants are so fine-tuned. Yet many a modern scientist speaks as if he understands such things. A conspicuous example of this type of intellectual sin is a recent story in New Scientist with some teaser text saying, “The idea of an infinite multitude of universes is forced on us by physics.” This statement is completely false; there is zero evidence of any universe other than our own. Another recent example was cosmologist Ethan Siegel telling us fine details of what supposedly happened at the time of the Big Bang, details he cannot possibly know, because the first 300,000 years of the universe's history are forever closed off to telescopic observation due to photon scattering.
- Scientists could stop describing as “impossible” or “unscientific” things for which there is much empirical evidence. Besides dogmatically advancing some claims for which there is no good evidence, many a modern scientist will refuse to acknowledge evidence of the paranormal and the psychic, even when such evidence includes decades of very convincing laboratory experiments (as in the case of ESP). So a scientist such as Sean Carroll tells us that ESP is impossible (despite decades of experimental research establishing its existence), while also falsely claiming that the multiverse idea is not a hypothesis, as if there were any convincing empirical basis for believing in it, which there is not. In such cases we get the impression of a scientist believing precisely what he wants to believe, regardless of the evidence.
- Scientists could work on cleaning up problems in their scientific papers. In this post “The Building Blocks of Bad Science Literature,” I discuss more than a dozen problems that we commonly see in scientific papers, including data cherry-picking, unwarranted causal attribution, studies with too little statistical power because of inadequate sample sizes, misleading brain visuals, all-but-indecipherable speculative math, and data dredging (the sketch below illustrates how dredging through enough variables can manufacture “findings” from pure noise). By reducing these problems, scientists would increase public confidence in their work.
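To make the data-dredging problem concrete, here is a minimal sketch in Python. It is not taken from the post mentioned above; the sample size, variable count, and names are illustrative assumptions. The script generates an outcome and a hundred predictor variables that are all pure random noise, then tests every predictor against the outcome; with that many comparisons, a few “significant” correlations appear even though no real relationships exist.

```python
# Minimal sketch (illustrative assumptions, not code from the cited post):
# data dredging can manufacture "significant" results from pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

n_subjects = 30     # a small sample, typical of underpowered studies
n_variables = 100   # number of unrelated predictors dredged through

outcome = rng.normal(size=n_subjects)                    # pure noise
predictors = rng.normal(size=(n_variables, n_subjects))  # also pure noise

false_positives = 0
for i, predictor in enumerate(predictors):
    r, p = stats.pearsonr(predictor, outcome)  # test every variable in turn
    if p < 0.05:
        false_positives += 1
        print(f"variable {i:3d}: r = {r:+.2f}, p = {p:.3f}  (spurious)")

print(f"{false_positives} of {n_variables} pure-noise variables look "
      f"'significant' at p < 0.05 -- roughly the 5 expected by chance alone.")
```

By chance alone, about five of the hundred comparisons will cross the conventional 0.05 threshold; a paper that reports only those few correlations, while staying silent about the ninety-odd tests that failed, is committing exactly the kind of dredging described above.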
Opinion pieces like Tsipursky's tend to make it sound as if it's only the poorly educated who are suspicious of the theoretical pronouncements of scientists, but that isn't the case. There's plenty of distrust in such pronouncements coming from the well educated. For example, philosophers of science have extensively discussed the issue known in the technical literature as “the underdetermination of scientific theories.” This is the fact that in many cases the evidence will support equally well a prevailing scientific theory and rival theories that are called “empirical equivalents.” Countless philosophy of science papers have been written about this issue. Outside of academic philosophy departments, there are many sociologists who study scientific communities objectively, regarding them as just another type of social community with the same sociological tendencies as other social communities, such as tendencies to construct group norms and taboos, with sanctions punishing those who deviate from such norms and taboos. Such sociologists often conclude that some popular scientific theories are social constructs created largely to serve the ideological, economic, or sociological needs of those who teach such theories.
It is interesting that a government website gives us a “hierarchy of evidence” pyramid, one of a number of similar pyramids you can find by doing a Google image search for “hierarchy of evidence.” In such a hierarchy (ranging from weakest at the bottom to strongest at the top), “expert opinion” sits at the very bottom of the pyramid. So why is it that we are so often asked to believe this or that explanation of some important matter on the basis of expert opinion?
Postscript: A study found that “nearly all scientific papers” are “controlled by six corporations.” We can only wonder what shadowy agenda such thought overlords may have.