
Our future, our universe, and other weighty topics



Friday, November 26, 2021

Don't Be Impressed by a Consensus That Is Enforced or Sociogenic

At the New Atlantis web site we have a very long essay by M. Anthony Mills on the topic of scientific consensus. The title is "Manufacturing Consensus" and the subtitle is "Science needs conformity — but not the kind it has right now." At the top is a picture of a herd of sheep all heading in the same direction. But don't be fooled by the photo, the title and the subtitle, which are all rather like quarterback pump fakes (in which a quarterback pretends to throw in one direction, but then throws in another). In the essay Mills speaks as if he wants you to be one of the unquestioning sheep, one of a herd of meek minions who blindly believe whatever they are told to believe by science professors.

Mills's purpose seems to be to encourage a credulous "believe-it-all" attitude toward science professors and their belief traditions. Mills seems to want us to trust our science professors as much as a Soviet commissar would trust his politburo. And just as Soviet commissars very much believed that Marxist party orthodoxy should be enforced, Mills tells us multiple times that scientist belief traditions should be enforced.

Using "false dilemma" reasoning, in which the reader is given a choice between two radically opposed alternatives that are not the only choices,  Mills paints a choice between two extremes. The first extreme he describes like this:

"According to one influential view, consensus should play no role in science. This is because, so the argument goes, science is fundamentally about questioning orthodoxy and elite beliefs, basing knowledge instead on evidence that is equally available to all. At the moment when science becomes consensus, it ceases to be science."

This is a "straw man" portrayal. Critics of unjustified claims of a scientific consensus do not typically claim "consensus should play no role in science." Everyone agrees that there is a consensus about things such as the existence of cells and the high temperature of the sun, and no has a problem with there being a consensus about such well-established facts. But critics of unjustified claims of a scientific consensus may reasonably claim that (1) some of the claims made about a scientific consensus are untrue because such a consensus (defined as a unanimity of opinion) does not really exist, or (2) some of the claims made about a scientific consensus are inappropriate because the underlying belief is untrue. 

After constructing and then knocking down this "straw man" of ultra-rebellious thinking in which all established facts are put in doubt, Mills proceeds to advocate what he seems to think is the proper way things should occur in the world of science. He uses reasoning very similar to that of Catholic authorities during the Protestant Reformation, who argued that only those sufficiently trained by the Catholic Church could criticize what the Catholic Church was doing. Mills seems to advocate a world in which only those trained within the belief traditions of science academia can criticize the claims of such traditions. He states this:

"In order to participate in or contribute to established science — much less to criticize or overthrow it — one has to have been trained in the relevant scientific fields. That is to say, one has to have been brought up in a particular scientific tradition, whether geocentric or heliocentric astronomy, or classical or relativistic physics."

This statement is very wrong. It is not necessary for a person to have been trained and "brought up in a particular scientific tradition" in order to criticize the statements of scientists in that tradition. Any person studying by himself for a sufficient length of time can gain enough knowledge to make good and worthwhile criticisms of the statements of scientists in many scientific fields. Also, a person can contribute to science without being "brought up in a particular scientific tradition." There are many ways in which citizen scientists can contribute, and have contributed, to established science.

A very important point that Mills fails to realize is that you don't need to thoroughly understand a theory in order to make solid, important rebuttals of it. Suppose someone hands me a bizarre 80-page document advancing some elaborate ancient astronaut theory that includes (among its many complicated claims) the claim that extraterrestrials are living in tall castles on the moon. I don't have to understand all or most of this theory to refute it. I need merely point out that astronomers have thoroughly mapped the moon, and have found no such high castles on it. Similarly, I don't need to know much about the complicated theoretical speculations of what is called cosmic inflation theory to point out that this theory of something strange happening near the first instant of the universe is not an empirically well-founded theory. I need merely learn that cosmologists say that throughout the first 200,000 years of the universe's history matter and energy were so dense that no observations from such a period will ever be possible. Armed with that one fact, I know that this cosmic inflation theory can never possibly be verified.

We then have a statement characterized by an overawed naivete. Mills seems to suggest that scientists receiving training should trustingly accept triumphal stories handed down by their elders, rather like Catholics reverently accepting stories of the wondrous deeds of the medieval saints. He states this:

"To be initiated into a tradition, one has to first submit to the authority of its bearers — as an apprentice does to the master craftsman — and to the institutions that sustain the tradition. In the natural sciences, the bearers of tradition are usually exemplary figures from the past, such as Newton, Einstein, Darwin, or Lavoisier, whose stories are passed down by teachers and textbooks."

This is not the way anyone should study nature. We should learn about nature by studying observations, always questioning belief dogmas and whether observational claims are robust, always asking whether some claim about nature is a belief mandated by facts and observations, or merely some speech custom or belief tradition of overconfident authorities. Sacred lore and kneeling to tradition may have a place in religion, but they have no proper place in the world of science. Whenever scientists are taught through a kind of process in which "stories are passed down" like sacred lore, and revered "not-to-be-questioned" scientists are put on pedestals, it is a sign that scientific academia has gone off track.

We may wonder whether the visual below might be a good representation of the ideas of Mills on how scientists should be trained:

[Visual: a bad way to train scientists]

The word "consensus" is confusingly defined in multiple ways; some leading dictionaries define it as an agreed opinion among a group of people. The first definition of "consensus" in the Merriam-Webster dictionary is "general agreement: unanimity." Mills fails to see that some of the most important claimed examples of scientific consensus are cases where we do not have good evidence that there actually is a consensus (defined as a unanimous opinion). The only way to reliably tell whether a consensus exists among scientists is to hold a secret ballot vote, and such votes are not taken among scientists. So, for example, we do not know whether even 90% of scientists believe in Darwinism or in claims that minds come mainly from the brain.

But there are quite a few cases of what we may call a socially enforced reputed consensus: situations in which there is a general expectation that a scientist is supposed to support some idea (or at least not contradict it), or else be in career trouble. Mills seems to approve of the idea of a socially enforced consensus. He states, "This is why consensus is so vital to science — and why the institutions of science not only can and do but should use their authority to enforce it." This is only one of 21 times Mills uses the word "authority" in his essay.

[Visual: enforcing a consensus]


The idea of an enforced consensus is self-contradictory. According to the Merriam-Webster definition, "consensus" means agreement or unanimity. When people agree on something, there is no need to enforce opinions. The instant someone talks about enforcing a consensus, it is an indication that no real consensus exists. 

Mills then gives us a rather long description of a scientific consensus as an excuse for placing questions "out of bounds" and for "gatekeeping," in which science becomes a kind of protected country club where hated opposing observations and arguments are kept out, like black applicants to 1950s country clubs. Mills approvingly cites the tendency of modern scientists to ignore a host of arguments and observations that conflict with their belief dogmas. Such behavior, so unworthy of a first-class scientist or first-class scholar, is an appalling refusal to study observational reports that conflict with your beliefs and are evidence against something you claim is true. Referring to the literature of critics and contrarians as "venues," he writes this:

"What is striking, however, is that the arguments presented in these venues are almost never refuted by mainstream scientists. They may be publicly denounced, but without elaborate argumentation in professional journals. Most of the time, they are simply ignored....Science could never advance if it had to re-establish every past theory, counter every objection, or refute every crank."

The term "crank" is a term that merely means someone who irritates you. The reasoning that scientists should be excused from countering objections to their theories because they don't have time to do that is one of the silliest arguments made by scientists too lazy to respond to objections to their theories, and is here repeated by Mills. Scientists nowadays waste enormous amounts of time and federal dollars cranking out poorly reproducible results and poorly designed experiments described in papers that typically get almost no readership, with a large fraction of the papers being highly speculative or mostly unintelligible or devoted to topics so specialized and obscure that they are of no real value. The idea that scientists are too busy to respond to arguments and evidence against their theories is absurd. Were scientists to stop wasting so much time, they would have plenty of time to respond to arguments and evidence against their theories.

There then follows a bad argument by Mills that we must accept the teachings of scientists across the board because we are "dependent" on such teachings. It involves another silly argument about time, one that seems to say we cannot challenge any dogmas of scientists because we don't have time to study the relevant topics. That isn't true at all. For example, anyone can read up on a Saturday on the very slight reasoning Darwin used to advance his very simple theory of so-called "natural selection," and read up on a Sunday on some powerful reasons for rejecting such claims.

Scientific theories can be cast into doubt or effectively refuted by someone who has not spent very much time studying such theories. I will give one of hundreds of possible examples. Consider the case of out-of-body experiences, which are very widely reported by humans. Prevailing neuroscientist theory holds that the mind is purely the product of the brain, and prevailing evolutionary theory holds that the collective reality of human minds is purely the product of some brain-related random mutations in the past. But suppose a scholar collects many reliable accounts of people who report traveling out of their bodies and observing them from above; and suppose he finds that in many of these cases the person had no appreciable brain activity; and suppose he gathers evidence that quite a few of these people discovered things that should have been impossible for them to know if such experiences were mere hallucinations. Now we have evidence that would seem to cast very great doubt on two major scientific theories: both the theory that mind is purely the product of the brain, and the theory holding that the mentality of the human species is purely the product of some brain-related random mutations in the past. To produce such evidence, it was not even necessary to study closely the theories cast into doubt by the evidence. I could give countless other examples of how independent research by non-scientists can cast doubt on the claims of hallowed scientific theories, without requiring great study of the intricacies of such theories. Such examples help show that we are not at all "dependents" who must meekly accept scientific theories we have not become experts on, like little children forced to believe whatever their parents tell them.

Later Mills mentions "the modern evolutionary synthesis" (in other words, Neo-Darwinism), making the very dubious claim that it is a "consensus." He is apparently unaware that there is a large and respectable body of scholarly literature opposing Neo-Darwinism, and that nearly a thousand scientists have signed their names indicating their agreement with the following statement:

"We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged.”

Any secret ballot of scientists would probably show that a significant fraction doubt or do not believe Darwinism. I doubt Mills realizes that a claim of a consensus is a powerful rhetorical weapon, and that long before there is anything like a consensus, believers in some scientific idea may start claiming all over the place that there is a consensus in support of the idea. This is a powerful form of psychological intimidation, a sneaky way by which bad ideas can start to go viral and rise to the top. Advocates of some theory will claim that all or almost all of the smart people are moving or have moved to embrace the theory, although that may not at all be true. The "everybody moving in one direction" impression may largely be an illusion fostered by those who know the persuasive power of portraying a consensus that does not yet exist. "Bandwagon and herd-effect your way to the top" has been the secret by which many a bad theory becomes dominant.

Mills cites "the modern evolutionary synthesis" as an example of something "that scientific institutions can and should enforce." Here he sounds like some Soviet commissar telling us that Marxist dogma must be enforced.  The commissars believed it was right for Marxist dogma to be enforced, because they regarded it as a form of science, what they called "economic science" or "scientific communism.

Toward the end, Mills mentions how scientists were wrong when they claimed in 2020 that there was a consensus that COVID-19 had purely natural origins. But he seems to have learned no sound lesson from such a failure, and seems to mention this failure merely as a lure to attract readers interested in hearing someone with ideas different from his own, like someone using cheese in a mouse trap.

The end of the claim that there was a scientific consensus on COVID-19 origins came about largely because of the work of non-scientists such as journalists and people in non-scientific fields (such as Jamie Metzl) who kept writing articles giving us facts opposing such a consensus. Here we have an important reality contrary to what Mills has suggested: that science matters should be left to the scientists because they are too complicated for ordinary people to understand, a suggestion he made when stating, "In order to participate in or contribute to established science — much less to criticize or overthrow it — one has to have been trained in the relevant scientific fields." Mills has failed to draw the right lesson from what happened here: that non-scientists may have something very valuable to contribute to scientific debates. The "leave science to the scientists" thinking that Mills seems to advocate makes no more sense than "leave war to the generals."

Mills tries to portray the inaccurate 2020 claims of a COVID-19 origins consensus as some freak aberration. But there was nothing very out-of-the-ordinary about such a thing. It was just another example of the very long-standing tendency of many science professors to jump to conclusions, to claim to know things that they do not actually know, to avoid studying evidence that conflicts with the conclusions they have reached, to claim they understand the origin of something before achieving almost any of the prerequisites needed to credibly make such a claim, to meekly bow to the authority of their peers or predecessors, and to unfairly characterize reasonable critics as unreasonable extremists. To see a table listing many parallels between COVID-19 origins groupthink and human origins groupthink, see this post.

In the last paragraph, Mills returns to his seeming recommendation that we be ruled by the belief traditions of professors, telling us that "science’s integrity must be protected by enforcing consensus." When people say such things, they are using the secondary definition of "integrity," which is "the state of being whole and undivided." Similarly, medieval authorities argued that "the integrity of the Church must be protected" when trying to justify abominations such as the Inquisition and the Albigensian Crusade. The idea that a consensus should be enforced is a medieval-sounding and Soviet-sounding idea that has no place in properly functioning science. Science by itself is a morally neutral activity, and in the past hundred years many scientists in the Soviet Union, Maoist China, imperial Japan and also the United States had long histories of entanglement (direct or indirect) with brutality and oppression, often funded by governments. So you may find Mills saying that Darwinism needs to be enforced almost as scary as Mike Flynn saying that America should have only one religion.

Mills would do well to study the chart below, and to ask himself whether he is encouraging some of the dysfunctional items on the right end, while discouraging some of the good things on the left end. 

[Visual: good science practice versus academia reality]

You should tend not to be impressed by a claim of a consensus whenever such a consensus is enforced or sociogenic. Imagine I query fifty architects for their opinions of a new building in some city, in fifty separate emails each addressed solely to one of fifty different people, and the architects all give me the same answer about whether the building was well-designed. Such a consensus is impressive because it is not a social conformity effect, and there is no enforcement involved. But it is not impressive when 100% of the graduates of some teaching program claim to believe in some dogma that everyone who went through that program was strongly pressured into believing. For example, 100% of the graduates of biblical fundamentalist theology schools may believe in biblical fundamentalism, but that does nothing to show that biblical fundamentalism is true. And 100% of the graduates of some neuroscience master's degree program may believe the brain is the sole cause of the mind, but that does nothing to show that such a belief is true; for within such a program anyone who rejected such a dogma would have been treated like an outcast or a leper.
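A little arithmetic shows why independent agreement carries so much more evidential weight than conformist agreement. Below is a minimal sketch in Python, with made-up numbers (the 90% accuracy figure is an assumption chosen for illustration, since the architect scenario is hypothetical), comparing the likelihood ratio supplied by fifty independent judgments with the ratio supplied by fifty judgments that merely echo one opinion leader:

```python
# A minimal likelihood sketch with assumed numbers: why independent unanimity
# is strong evidence while conformist unanimity is weak.
P_CORRECT = 0.9   # assumption: each architect independently judges correctly 90% of the time
N_JUDGES = 50

# Independent judges: likelihood ratio of "all fifty say well-designed"
# under (building is well-designed) versus (building is badly designed).
lr_independent = P_CORRECT ** N_JUDGES / (1 - P_CORRECT) ** N_JUDGES

# Conformist judges who all echo one opinion leader: only one independent
# judgment is really being expressed, so the ratio is that of a single judge.
lr_conformist = P_CORRECT / (1 - P_CORRECT)

print(f"independent unanimity likelihood ratio: {lr_independent:.3g}")
print(f"conformist unanimity likelihood ratio:  {lr_conformist:.3g}")
```

Under independence, unanimity is astronomically strong evidence; under pure conformity, fifty voices carry no more evidential weight than one.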

Let's consider things generically, letting the term "Theory X" stand for any number of theories. Suppose there arises a Theory X which starts to gain enough acceptance that professional training programs appear to indoctrinate people in Theory X, making them Theory X experts. It may be that a large fraction or most of the population rejects Theory X as unbelievable. But it may be that a small number of people are particularly inclined to believe in Theory X. It will usually be only such people who sign up for some long Theory X training program to produce Theory X experts. In that program these trainees are constantly pressured to maintain their belief in Theory X, and in the program it is made clear that anyone who rejects Theory X will be scorned and ostracized. Later when the training is finished, a new group of Theory X experts appears. Upon getting jobs as Theory X experts, the social pressure continues, with all of their fellow Theory X experts pressuring them to keep espousing the tenets of Theory X. The "old guard" keeps the new guys in conformity with Theory X.

Suppose then a poll is taken indicating that 100% or 95% of Theory X experts believe in Theory X. Should we be the least bit impressed by this consensus or near-consensus? Certainly not, because it is merely a sociogenic effect.  When a training program acts like a cookie cutter to produce uniformity in the trainees,  that does nothing to establish the likelihood that the training program's tenets are correct.
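This selection-plus-pressure pipeline is easy to model. Here is a small Python simulation (all the numbers are assumptions chosen purely for illustration, not measurements of any real field) in which only 5% of the population believes Theory X, believers are a hundred times more likely to enroll in the training program than skeptics, and social pressure converts most of the few enrolled skeptics:

```python
import random

random.seed(0)
POPULATION = 100_000
P_BELIEVE = 0.05               # assumption: only 5% of the population believes Theory X
P_ENROLL_IF_BELIEVER = 0.01    # assumption: believers sometimes enroll in the program
P_ENROLL_IF_SKEPTIC = 0.0001   # assumption: skeptics almost never enroll
P_CONVERTED_IN_TRAINING = 0.9  # assumption: pressure converts most enrolled skeptics

experts = []
for _ in range(POPULATION):
    believes = random.random() < P_BELIEVE
    p_enroll = P_ENROLL_IF_BELIEVER if believes else P_ENROLL_IF_SKEPTIC
    if random.random() < p_enroll:
        if not believes and random.random() < P_CONVERTED_IN_TRAINING:
            believes = True   # training pressure flips the rare enrolled skeptic
        experts.append(believes)

print(f"experts polled: {len(experts)}")
print(f"experts who believe Theory X: {sum(experts) / len(experts):.0%}")
```

Running this yields an expert "consensus" of roughly 98% in a population that is 95% skeptical; the near-unanimity measures the filter, not the truth of Theory X.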

Thursday, September 26, 2019

Reign of the Pyramid Top: How Tiny Elite Cliques Can Shape a Scientific Consensus

The 2009 paper “Groupthink in Academia: Majoritarian Departmental Politics and the Professional Pyramid” (by Daniel B. Klein and Charlotte Stern) is one of the most insightful papers I have read on the topic of the sociology of academic conformism. Although the specific examples given only involve topics of politics, the insights of the paper are very applicable to related topics such as the sociology of scientist groupthink.  

The authors give us this description of what they call departmental majoritarianism:

“The most important departmental decisions involve the hiring, firing, and promotion of tenure-track faculty. Such decisions come down to majority vote. Although the chair exercises certain powers, committees control agendas, and so on, the central and final procedure for rendering the most important decisions is democracy among the tenure-track professors—or departmental majoritarianism.”

What this means is that the professors in a particular department get to vote on who will be admitted as candidates to become tenured professors with basically a guaranteed life-long job at some university or college. What type of people do they give an approving “thumbs up” vote to? People who share their beliefs on ideological issues. So if a scientist believes that the brain is the storage place of human memories, and that human mentality is merely a product of the brain, he will not be likely to make a vote allowing someone who questions such dogmas to become a tenured professor. And if a scientist believes that random mutations and survival-of-the-fittest explain the main wonders of biology, and that life forms are merely the result of blind, accidental processes, he will not be likely to make a vote allowing someone who questions such dogmas to become a tenured professor.

The authors give some insight on how this works:

“Theories of group formation and social dynamics tell us that social groups tend to seek and attract newcomers like themselves (McPherson, Smith-Lovin, and Cook 2001), screen out and repel misfits (Allport 1954; Brewer 1999), and mold the unformed in their own image (Katz and Lazarsfeld 1955, 62–63; Moscovici 1985). These tendencies are rooted in human nature. Suppose a department must hire a new member, and 51 percent of the current members share a broadly similar ideology...Moreover, they believe that one must broadly conform to that ideology to be a good colleague and a good professor. What happens? The department members hire someone like them. The 51 percent becomes 55 percent, then 60 percent, then 65 percent, then 70 percent, and so on. As Stephen Balch (2003) and others have noted, majoritarianism tends to produce ideological uniformity in a department.”

The effect described here is one that occurs over multiple years or decades. Gradually over the years a monolithic consensus arises (or may seem to arise) through such a process, and eventually all of the professors in the department may end up espousing some particular ideology or dogma. But we should not be very impressed by the uniformity of opinion in such a department. All that was necessary to get such a uniformity was for a mere 51% of the department to hold some ideology or dogma. Majoritarian politics will tend to cause that 51% to slowly increase to become 100%.
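The drift the authors describe can be captured in a few lines of code. The following Python sketch is a toy model with assumed parameters (the 90% hire-like-the-majority probability is an invention, not a figure from the paper); it starts a 51-member department at a bare 51% majority and lets retirements and majority-controlled hires play out:

```python
import random

random.seed(1)
dept = [True] * 26 + [False] * 25   # a bare 51% of the department holds the dogma
P_HIRE_LIKE_MAJORITY = 0.9          # assumption: the majority usually hires its own

for _ in range(300):                # three hundred retirement-and-replacement cycles
    dept.pop(random.randrange(len(dept)))        # a random member retires
    majority_holds_dogma = sum(dept) * 2 > len(dept)
    if random.random() < P_HIRE_LIKE_MAJORITY:   # the majority vote prevails...
        dept.append(majority_holds_dogma)
    else:                                        # ...with an occasional dissenting hire
        dept.append(not majority_holds_dogma)

print(f"share holding the dogma after 300 cycles: {sum(dept) / len(dept):.0%}")
```

Because each hire usually matches the current majority, whichever view first holds a solid majority tends to sweep the whole department, just as the paper's 51%-to-100% progression suggests.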

The paper describes a kind of academic pyramid in which a tiny elite at the top of the pyramid exerts influence totally out-of-proportion to its numbers. The paper states the following:

“In structure, the tribe is pyramidal, with the elite at the apex...Position within the pyramid depends on focal, conventional rankings of key institutions, notably academic journals, departments, publishers, citations, grants, awards, and other markers of merit. Research is highly specialized, and the tribe is broken down into subfields...Prestige and eminence are determined within the subfield, a kind of club within the tribe. The clubs constitute the tribe, just as agencies and branches constitute the government. Each club sorts people with overt reference to pedigree, publication, citations, and letters of reference. The club controls these filters and then applies them to itself. It controls the graduate programs and journals. By spawning and hiring new recipients of Ph.D. degrees, the club reproduces itself.”

In the science world it is easy to identify the apex at the top of the academic pyramid. It is about 20 universities including the Ivy League (Harvard, Yale, Columbia, and so forth), along with other well-known universities such as Oxford, Cambridge, MIT and the California Institute of Technology (CalTech). The paper notes the extraordinary degree of influence of the top of the academic pyramid. It gives us an example from the world of law: “In the field of law, Richard Redding finds: 'A third of all new teachers [hired in law schools between 1996 and 2000] graduated from either Harvard (18%) or Yale (15%); another third graduated from other top-12 schools, and 20 percent graduated from other top-25 law schools. ' ” Referring to the top or apex of this academic pyramid, the paper states the following: “Because of the mechanisms that operate within disciplines—propagation, 'follow the apex' and 'freeze-out'—if the apex embraces ideology j, it will tend to sweep that ideology into positions in every department all the way down the pyramid.”

The diagram below illustrates how this works. At the top of the pyramid (shown in pink) is a tiny elite. In a particular subject matter field such as neuroscience, there may be only one hundred or two hundred professors in such a “top of the pyramid” apex. But such professors exert influence out of all proportion to their numbers. What becomes recognized as the “scientific consensus” may be determined by those one hundred or two hundred professors.


[Diagram: the academic pyramid, with a tiny elite at the top shaping the scientific consensus]


Given this “opinion cascade,” it is astonishingly easy for some tiny elite group to control the scientific consensus on some topic. For some new dogma to become “the scientific consensus,” it is merely necessary that the following occurs:
  1. Somehow 51% of a few hundred professors at the top of the academic pyramid come to believe in some dogma.
  2. Over the years that 51% becomes a much higher percentage, as professors vote in other professors sharing their belief in this dogma.
  3. The “opinion cascade” from the top proceeds down the pyramid, and eventually the dogma becomes the scientific consensus due to groupthink effects and “follow the pyramid top” tendencies, as the toy simulation below illustrates.
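Here is a rough Python sketch of that three-step cascade (the tier sizes and probabilities are assumptions for illustration only): a bare majority at the apex hardens through majoritarian replacement, and the much larger lower tier then mostly echoes whatever the apex majority says:

```python
import random

random.seed(2)
P_FOLLOW_APEX = 0.8   # assumption: a lower-tier professor usually echoes the apex view

# Step 1: a bare 51% of a 200-member apex comes to hold the dogma.
apex = [random.random() < 0.51 for _ in range(200)]

# Step 2: majoritarian replacement hardens the apex majority over generations.
for _ in range(20):
    apex_majority = sum(apex) * 2 > len(apex)
    apex = [apex_majority if random.random() < 0.9 else not apex_majority
            for _ in apex]

# Step 3: the far larger lower tiers mostly follow whatever the apex majority says.
apex_view = sum(apex) * 2 > len(apex)
lower = [apex_view if random.random() < P_FOLLOW_APEX else (random.random() < 0.5)
         for _ in range(20_000)]

print(f"apex:  {sum(apex) / len(apex):.0%} hold the dogma")
print(f"lower: {sum(lower) / len(lower):.0%} hold the dogma")
```

In this toy model, the private opinions of a couple of hundred apex professors end up setting the professed view of twenty thousand.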
We saw such a thing occurring in the world of cosmology after 1980, when a tiny handful of professors (mainly at MIT and Stanford) were able to get an alleged scientific consensus started, one based on an entirely speculative "cosmic inflation" theory for which there was no evidence (a theory not to be confused with the Big Bang theory, for which there is evidence). We almost saw such a thing occurring in the world of particle physics, where string theory (a purely speculative theory for which there is no evidence) almost became a scientific consensus among particle physicists, but fell short.

There are various other reasons why it may be surprisingly easy for some little elite clique to get some weak theory accepted as a scientific consensus. One reason is that there are all kinds of possible tactics by which a weak scientific theory may have its prestige enhanced by the use of various sneaky "prestige by association" ploys such as I describe in this post. Another reason is that because of the ambiguous definition of science (sometimes defined to mean "facts established by observation" and other times defined to mean "the activity of scientists"), almost any theory advanced by a scientist can be peddled as "science."

An additional reason is that it is relatively easy to create a perceived consensus rather than an actual consensus.  A few professors at Ivy League universities can simply start talking as if a consensus is building, by claiming "there is a consensus starting to form" around their theory, or that "more and more scientists are starting to believe" their theory, or that "there is growing agreement" that their theory is true.  Before long, people may start to think there is a consensus about the truth of some theory, even though no such consensus exists.  But such a perceived consensus can exert enormous force. Now professors in the less-prestigious universities may start to voice belief in the theory, not because they actually believe it, but because they are "falling in line" and "going along to get along" in an act of conformity and social compliance.  My post "How to Get Your Weak Scientific Theory Accepted" explains some of the tricks-of-the-trade by which some tiny elite might get its little bit of tribal folklore to become recognized as a scientific consensus. 

One of the most famous experiments in the history of psychology was the Asch conformity experiment, which showed that large fractions of people would tend to state absurd conclusions whenever they thought that a majority of their peers had reached the same conclusion.  A group of people at a table were asked to judge which of a set of three lines was the same length as another line they were shown. After watching a group of other people all give an answer that was obviously wrong (as they had been secretly instructed to do by the experimenter), about one third of the test subjects also stated the same obviously wrong answer when they were asked about the matter.  They were conforming to the nonsense judgment of their peers. We don't know how many of these subjects privately disagreed with what they publicly stated.  Similarly, when a scientific consensus supposedly forms around some idea that is unproven or illogical, we don't know how many of the professors publicly supporting the idea privately disagree with the idea, but publicly support it in order to look like a "team player," someone who "fits in" with his peers. Professors may engage in "public compliance" by which they seem to jump on a bandwagon, even if they have private doubts about the idea. 

If you were to do a test like the Asch test, but with the confederates of the experimenter posing not as peers of the test subject but as higher-status people or authorities,  you would probably get far more than one third of the test subjects pressured into stating absurd conclusions.  For example, if the sole person tested was seated with seven people, who all identified themselves as biologists, and these seven people all identified an unusually large rat in a small cage as a mouse,  then probably 70% of the time the test subject would also publicly agree that the squirrel-sized animal in the cage was a mouse rather than a rat.  We allow ourselves to be hypnotized by some pronouncement of an expert, failing to ask ourselves: how does such a person benefit from this claim he is making? The answer is often: he increases his own social power and prestige by making such a claim. 

What I have described here is how a tiny elite can control how a huge body thinks or votes. Similar things often happen in the world of politics, where a tiny elite can largely control who becomes the nominee of one of the two main parties. We did not see this in the 2016 US election, but did see it in the 2004 US election. Candidate Howard Dean (a man of good judgment, calm temperament and long experience as a governor of Vermont) had raised huge sums of money by January 2004. He finished only third in the Iowa caucuses, but had excellent prospects of winning the New Hampshire primary, which would have given him a large chance of becoming the Democratic nominee. But just after the Iowa caucuses, a tiny elite of journalists and pundits ruined his chances. In the days before the New Hampshire primary, the journalists and pundits focused incessantly on a mere cheer that Howard Dean had made at the end of one of his campaign speeches, trying to falsely portray the cheer as some demented scream. Because of this, Dean lost the New Hampshire primary, and his chance of winning the nomination was effectively over. A tiny elite clique at the top of an “election pyramid” had basically controlled who the Democratic Party would nominate for president. 

You could create an "election pyramid" graph similar to the pyramid graph above. At the very top would be a small elite clique of reporters, pundits, columnists and TV personalities. Just below them in the pyramid would be a small number of voters participating in the Iowa caucuses and the New Hampshire primary.  Everything else that happens politically every four years in the US usually follows (but does not always follow) from what is dictated by these small groups at the top of the pyramid. 

In regard to claims of a scientific consensus, we should remember that most major claims that a scientific consensus exists are not well founded, in the sense that we do not actually know whether a majority of scientists privately hold the opinion that is claimed as a consensus. The only way to reliably determine whether a scientific consensus exists on a topic is to hold a secret ballot voting procedure among all the scientists in a field, one in which there is a zero risk for a scientist to support any opinion he may privately hold.  Such secret ballot voting procedures are virtually never held. When scientists are polled, the arrangement of polling questions is often poor,  with too narrow a range of choices, and without "no opinion" or "I don't know" as one of the possible answers. 

Since the private opinions of scientists may differ from some alleged scientific consensus, and since there are all kinds of sociological, ideological and political reasons why a scientific consensus may form for reasons having little to do with scientific evidence,  an appeal to a scientific consensus has little argumentative force. It's much better to appeal to facts and logic rather than alleged opinion majorities. 

Monday, November 13, 2017

Disastrous Blunders of the Experts

Often when someone wants to get you to believe in some dubious doctrine popular among some current group of experts, that person will basically tell you, “Trust the experts!” The reasoning is that when some group of experts reaches a consensus, it is very likely to be true. But history is full of examples in which the opinions of experts were not merely wrong, but disastrously wrong. Below are some examples, many from recent history.

Expert Fiasco #1: The Bay of Pigs Invasion

When John Kennedy assumed the office of US President in 1961, he found that his experts were unanimously (or all but unanimously) in favor of the Bay of Pigs invasion. This was a harebrained scheme of dumping on the shores of communist Cuba a group of 1400 Cuban exiles. The experts believed that this small group could whip up an insurrection that would lead to the downfall of Cuban dictator Fidel Castro. The invasion very quickly failed, with more than 1000 being captured. Commenting on how wrong the advice was, Kennedy later said, “The advice of those who were brought in on the executive branch was also unanimous, and the advice was wrong.” By creating worries of Cuba being invaded, the failed Bay of Pigs invasion helped to sow the seeds of the Cuban Missile Crisis of the next year, in which the world was brought to the brink of atomic destruction.

Expert Fiasco #2: The Vietnam War

In his book The Best and the Brightest, journalist David Halberstam documented the disastrous role of expert advice in the Vietnam War. A group of intellectually brilliant US experts (many with Ivy League credentials) urged full American military involvement in the conflict between North Vietnam and South Vietnam. President Kennedy to some degree and Presidents Johnson and Nixon to a much larger degree took the advice of the experts. The result was a disastrous treasury-draining war that ended up costing more than 58,000 US lives, and many times more Vietnamese and Cambodian lives. The war ended in defeat for the United States, as South Vietnam was taken over by communist North Vietnam. The experts kept pushing a "Domino Theory" which maintained that all of Southeast Asia would go communist if South Vietnam went communist. After the war was lost, the theory's prediction did not come true.

Expert Fiasco #3: Eugenics

In the decades prior to World War II, the academic scientific community embraced theories of eugenics, including ideas that certain “inferior” people should be encouraged or forced to be sterilized. Almost every major college or university offered a course in eugenics. Over 62,000 people were forcibly sterilized in the United States alone. After it became clear that the Nazis had embraced eugenics, and used it to try to justify their senseless slaughter of millions in concentration camps, eugenics started to fall out of favor.

Expert Fiasco #4: The Housing Bubble of 2005, and Financial Meltdown of 2008

In the years 2003 to 2005 a huge bubble arose in the US housing market, with housing prices inflating to unreasonable heights. Quite a few independent bloggers who were not financial experts began to raise alarm bells that a housing bubble had arisen, and was about to pop, causing prices to plunge. But the financial experts on Wall Street almost completely failed to alert people to such a possibility. When house prices started to fall between 2006 and 2008, the experts on Wall Street almost all failed to understand the financial disaster that was unfolding. At one point the experts at Standard and Poor's agency gave an AAA rating to CDO securities, which officially signified that there was only about 1 chance in 800 that such securities would default. But it turned out that 28% of these same securities defaulted. Based largely on the astonishingly bad judgments of financial experts, the financial meltdown of 2008 ended up causing countless home foreclosures, a sharp rise in unemployment that lasted for years, and a huge stock market decline that wiped out a good fraction of the retirement savings of millions of people.
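To see just how incompatible the claimed rating was with what happened, consider a quick back-of-the-envelope check. The Python sketch below uses a hypothetical portfolio of 1,000 securities (an assumption, since only the rates are cited above) and asks how likely a 28% default rate would be if the 1-in-800 figure were accurate:

```python
from scipy.stats import binom

n = 1000                        # hypothetical portfolio size (an assumption for illustration)
p_claimed = 1 / 800             # default probability implied by the AAA rating, as cited above
defaults_seen = int(0.28 * n)   # the 28% default rate reported above

print(f"defaults expected at the claimed rate: {n * p_claimed:.2f}")
# Natural log of the probability of seeing at least 280 defaults
# if the 1-in-800 figure had been accurate:
print(f"log tail probability: {binom.logsf(defaults_seen - 1, n, p_claimed):.0f}")
```

The tail probability is astronomically small: the claimed rating was not merely optimistic but wildly inconsistent with the outcome.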

Expert Fiasco #5: Blunders of the Psychiatrists

In the field of psychiatry there have since 1950 been three huge blunders supported by experts. One was support for lobotomies as a treatment for mental illness. Lobotomy surgeries were widely supported for decades, although they are now generally regarded as a grotesque horror. Another error was classification of homosexuality as a mental disease. It was only in 1973 that psychiatrists stopped classifying homosexuality as a mental disorder. For decades before that, people discovered to be homosexuals were often forced into bizarre treatments that are now looked on as senseless interventions. A third blunder of the psychiatrists was a dogmatic embrace of the dubious doctrines of Freud, such as his weird theory that much of mental illness was caused by sex-related conflicts stemming from early childhood. The popularity of Freudianism is declining, but for decades his farfetched dogmas were embraced by a large fraction of psychiatrists.

Expert Fiasco #6: The Iraq War

In 2002 and early 2003 the United States government led by George W. Bush tried to whip up public support for an unprovoked invasion of Iraq. The rationale given was that Iraq had assembled terrifying "weapons of mass destruction" that were a threat to the United States. In the time between October 2002 and March 2003 I remember seeing a long parade of experts on my television screen, almost all of whom assured us that the coming invasion of Iraq was a wise step that was vitally necessary. Many of these experts said that the invasion would be a breeze. Even though United Nations weapons inspectors had searched the country in the months before March 2003, finding no weapons of mass destruction, the experts told us such weapons would be found.

In March 2003 the unprovoked invasion was launched. No Iraqi weapons of mass destruction were found. The war ended up being a big treasury-draining disaster, leading to more than 4000 US soldier deaths, and more than 30,000 US soldier injuries. The total number of Iraqis that died from the invasion and its resulting unrest has been estimated between 151,000 and more than a million. In the following years, the nation of Iraq suffered very frequent suicide bombings and almost constant violent unrest, with the eventual loss of a large fraction of the country to crazed ISIS fanatics. The price tag for this misadventure was countless trillions of US dollars. 

Expert Fiasco #7: Vioxx

The drug called Vioxx (also known as rofecoxib) was developed by Merck to treat arthritis and its associated pain. The scientific experts at the Food and Drug Administration gave Vioxx their approval in 1999. In the following years doctors also gave the drug their approval, writing some 80 million prescriptions for the drug. But the drug was actually very dangerous, and the wikipedia.org article on the drug says that it caused “between 88,000 and 140,000 cases of serious heart disease.” Finally in 2004 Merck withdrew the drug. 

Expert Fiasco #8: The Opioid Overdose Epidemic

We are currently in the middle of an opioid overdose epidemic. A CDC site tells us, "From 2000 to 2015 more than half a million people died from drug overdoses." Very many of these deaths came from prescription pain medicines that were over-prescribed by doctors. The CDC site says this:

We now know that overdoses from prescription opioids are a driving factor in the 15-year increase in opioid overdose deaths. The amount of prescription opioids sold to pharmacies, hospitals, and doctors’ offices nearly quadrupled from 1999 to 2010, yet there had not been an overall change in the amount of pain that Americans reported.

Clearly the experts writing prescriptions made a big blunder, writing far too many prescriptions for opioids.


Expert Fiasco #9: Nuclear Weapons

When the atomic bomb was being developed, physicists thought there was a substantial chance that the first atomic explosion might ignite the atmosphere, destroying all life on earth. This fact has been whitewashed by many accounts claiming that calculations showed there was no chance of such an ignition. But Chapter 17 of Daniel Ellsberg's book The Doomsday Machine makes quite clear that physicists still thought there was a significant chance of such a planet-killing event when the first atomic bomb was exploded in July 1945. At one point Enrico Fermi estimated the chance at 10 percent, according to one source.

Physicists should have advised that exploding an atomic bomb was an unacceptable risk. Instead, they allowed the testing to occur. They then supported the creation of ever more destructive bombs including hydrogen bombs vastly more powerful than atomic bombs. For decades the world was put at grave risk of nuclear destruction -- all of which could have been avoided if Fermi and his colleagues had done the right thing and said that any test of any atomic bomb was an unacceptable risk.  The atomic bomb was not needed for Japan's defeat, as a naval blockade and relentless aerial firebombing were creating equivalent military pressure in the summer of 1945. 
 
Why Experts Are So Often Wrong

We still see spectacular errors being committed by experts. A very recent example was the fact that prior to the 2016 election of Donald Trump, the great majority of political pundits predicted very confidently that Trump would lose the election. Then there was the BICEP2 affair. In March 2014 scientists announced they had proof of primordial gravitational waves proving the cosmic inflation theory. Almost the entire community of physicists and cosmologists endorsed this claim. By the end of 2014 it had become clear that BICEP2 had detected something that could just as easily be mere dust, and the claims of an important scientific finding were retracted. In this case the red flags were there from March 2014, so there was no excuse for this erroneous bandwagon effect. Then there was the 2016 affair of the 750 GeV diphoton resonance. A certain signal blip showed up at the Large Hadron Collider, and physicists wrote more than 500 scientific papers about this blip, speaking as if this was some matter of great cosmic importance. By July 2016 it had become clear that the signal blip was mere random noise, of no significance at all. 

Is there some general reason why experts often get things wrong? There seems to be such a reason. It is the fact that experts often are trained in ideological enclaves. An expert typically becomes an expert by volunteering for some particular graduate or specialized training program at a university or in the military. These graduate programs are often ideological enclaves, places where some particular ideology not embraced by most people predominates.

The fact that the graduates of such programs are volunteers creates the opportunity for sociological selection effects. Let's imagine an extreme example. Let's imagine there arises some new discipline called tricostics. It might be the opinion of 90% of those who have read about tricostics that tricostics is pure nonsense. But tricostics might be “all the rage” at some Graduate Program in Tricostics Studies at a particular university, or some Pentagon training program specialized in tricostics. The people who sign up for such a program might almost all be from the tiny fraction of the population that believes in tricostics. At this particular program there might then be tremendous sociological pressure for students to embrace tricostics. So 90% of the graduates of this tricostics program might be believers in tricostics, even though a randomly selected jury from the general population would probably conclude tricostics is worthless nonsense.
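The selection effect in this hypothetical can be expressed in one line of Bayes' rule arithmetic. Here is a small Python sketch (the enrollment probabilities are invented purely to match the spirit of the tricostics example):

```python
# Toy Bayes arithmetic with invented numbers: what fraction of "tricostics
# experts" believe in tricostics before any training pressure is applied?
p_believe = 0.10               # as supposed above, ~90% of people think tricostics is nonsense
p_enroll_if_believer = 0.02    # assumption: believers occasionally enroll
p_enroll_if_skeptic = 0.0005   # assumption: skeptics almost never enroll

p_believe_given_enrolled = (p_believe * p_enroll_if_believer) / (
    p_believe * p_enroll_if_believer + (1 - p_believe) * p_enroll_if_skeptic)

print(f"believers among enrollees: {p_believe_given_enrolled:.0%}")  # about 82%
```

Selection alone makes the enrollees overwhelmingly believers before any classroom pressure is applied; add the sociological pressure described above and the 90% figure is easy to reach.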


[Diagram: an ideological enclave]
 
An expert existing in some ideological enclave may become filled with dogmatic overconfidence about some opinion that is popular within his little ideological enclave. He may think something along the lines of: "No doubt it is true, because almost all my peers and teachers agree that it is true." But the idea may seem senseless to someone who has not been conditioned inside this ideological enclave, this sheltered thought bubble.

A good rule is: decide based on the facts, and not merely because there is some consensus of experts.   

Postscript: It is interesting that a government web site gives us a "hierarchy of evidence" pyramid, one of a number of similar pyramids you can find by doing a Google image search for "hierarchy of evidence."  In the hierarchy of evidence (ranging from weakest at the bottom to strongest at the top), "expert opinion" is at the very bottom of the pyramid. So why is it we are so often asked to believe this or that explanation for some important matter, based on expert opinions?


Post-postscript: See this post for another case of a disastrous blunder by experts. The post states that there were between 340,000 and 690,000 US deaths caused by atomic testing. The US government presumably polled the experts about whether it was safe to test atomic weapons above ground, and there was presumably a consensus of experts that such above-ground testing was safe. The result was probably more US deaths than from the bombs dropped on Japan.

The reaction to the research of Ignaz Semmelweis was a case of a disastrous expert blunder in the nineteenth century. Semmelweis accumulated evidence that cases of a certain kind of deadly fever could be greatly reduced if physicians would simply wash their hands with an antiseptic solution, particularly after touching corpses. According to a wikipedia.org article on him, "Despite various publications of results where hand washing reduced mortality to below 1%, Semmelweis's observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community." Thousands died unnecessarily, because of the stubbornness of experts, who were too attached to long-standing myths and cherished fantasies such as the idea that physicians had special "healing hands" that would never be the source of death. The wikipedia article tells us, "At a conference of German physicians and natural scientists, most of the speakers rejected his doctrine, including the celebrated Rudolf Virchow, who was a scientist of the highest authority of his time."  Decades later, it was found that Semmelweis was correct, and his recommendations were finally adopted.   The wikipedia.org article notes, "The so-called Semmelweis reflex — a metaphor for a certain type of human behavior characterized by reflex-like rejection of new knowledge because it contradicts entrenched norms, beliefs, or paradigms — is named after Semmelweis, whose ideas were ridiculed and rejected by his contemporaries." 

Post-post-postscript: Very sadly, the year 2020 has given us a new example of a disastrous blunder by experts. In the US a deadly coronavirus began to explosively spread in the second half of February 2020. For the next six weeks leading experts such as Anthony Fauci kept telling us that ordinary people did not need to wear face masks to prevent the virus from spreading. Around April 3, experts reversed their previous position, and started teaching that it was very important for everyone to wear face masks to prevent the virus from spreading. A full discussion of the erring statements by experts can be found here. By now more than a million Americans have died from coronavirus. A large fraction of this death toll could have been prevented if the experts had started giving us, back around March 1, 2020, the correct advice that everyone should be wearing face masks.

The World Health Organization's blunder in this matter was gigantic. From the beginning of the global coronavirus pandemic around March 1, 2020 to June 2, 2020, the World Health Organization senselessly taught on its website that there was no need for ordinary people to wear face masks as a global pandemic raged. A meta-analysis published in the leading medical journal The Lancet on June 2, 2020 concluded that "face mask use could result in a large reduction in risk of infection." This was an analysis of studies that had already been published. A CNN story on this meta-analysis says, "The chance of transmission without a face mask or respirator (like an N95 mask) was 17.4%, while that fell to 3.1% when a mask was worn." The World Health Organization's disastrous blunder of telling people they did not need to wear face masks during a pandemic is an example of the stubborn persistence of errant biological dogmatism, something that we see all over the place in biology. Finally, in early June 2020, the WHO reversed its position, telling us we should wear face masks to prevent coronavirus.
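Those two percentages imply a dramatic effect size. A two-line Python calculation (using just the figures quoted in the CNN story) makes the point:

```python
# Risk figures as quoted in the CNN summary of the Lancet meta-analysis:
risk_no_mask = 0.174   # 17.4% chance of transmission without a mask
risk_mask = 0.031      # 3.1% chance with a mask

print(f"relative risk with a mask: {risk_mask / risk_no_mask:.2f}")      # about 0.18
print(f"relative risk reduction:   {1 - risk_mask / risk_no_mask:.0%}")  # about 82%
```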

In February 2021 a group of experts wrote a letter to the CDC and leading authorities, complaining that the CDC had bungled things by not correctly describing the airborne nature of SARS-CoV-2, the virus that causes COVID-19:

"CDC guidance and recommendations have not yet been updated or strengthened to address and limit inhalation exposure to small aerosol particles. CDC continues to use the outdated and
confusing term 'respiratory droplets' to describe both larger propelled droplet sprays and smaller inhalable aerosol particles. It also confuses matters with 'airborne transmission' to indicate inhalation exposure exclusively at long distances and does not consider inhalation exposure via the same aerosols at short distances....CDC guidance and recommendations do not include the control measures necessary for protecting the public
and workers from inhalation exposure to SARS-CoV-2."

The blunders of experts such as Anthony Fauci continued even after COVID-19 vaccines were developed. In May 2021 Fauci went on television saying that fully vaccinated people did not need to wear masks, because they become "dead ends" for COVID-19. Fauci stated this: "So even though there are breakthrough infections with vaccinated people, almost always the people are asymptomatic and the level of virus is so low it makes it extremely unlikely — not impossible but very, very low likelihood — that they're going to transmit it." But now a CDC page says, "People who get vaccine breakthrough infections can spread COVID-19 to other people."

Monday, July 3, 2017

Exaggerations Abound When People Talk About a Scientific Consensus

One of the most powerful argumentative techniques in favor of some truth claim is to assert that there is some consensus of opinion among scientists that the claim is correct. But often such assertions are unwarranted. Quite a few of the times that people claim that there is a scientific consensus on something, there is no actual majority of scientists who assert such a thing. Below are six reasons for thinking that quite a few claims of scientific consensus are exaggerated, and are not matched by an actual majority opinion of scientists on the matter in question.

Factor #1: Claims of Consensus Are Often Made Before a Consensus Is Reached

Let's imagine that there's some theory that is starting to get traction in the scientific community. Imagine you are some advocate for the theory, trying to get even more people to accept it. What is your easiest route to such a goal? It is to claim that there is a scientific consensus in support of this theory you support. Many people will meekly fall into line and accept your theory, as soon as they hear a claim that most scientists have accepted the theory. The temptation to claim “most scientists believe this” is so great that people often make such claims even when no such consensus has been reached.

Factor #2: Most People Who Claim a Scientific Consensus Offer No Evidence

The great majority of statements claiming a scientific consensus on something offer no evidence to back up such a claim. For every time that someone claims that most scientists agree on something, and tries to back up such a claim by referring to some opinion poll or study of scientific opinion, there must be ten or twenty times that someone claims that most scientists agree on something without offering any evidence to back up the claim.

Factor #3: There Typically Exist No Formal Processes for Identifying the Opinions of Scientists on Theories

Given the fact that people are often claiming that most scientists think such-and-such a thing, it is rather surprising to consider that there typically exists no systematic process for having scientists state their opinions on whether particular theories are true. In the world of science, there is nothing equivalent to the voting booth. For example, scientists are not sent annual questionnaires in which they are asked to rate the likely truth of different theories on a scale of 1 to 10, with 10 being certainty about a particular theory.

So when people claim that most scientists believe this or most scientists believe that, and try to back up their claims with some evidence, they may refer to opinion polls or a survey of the scientific literature. These are very imperfect measures of opinion, for reasons discussed below.

Factor #4: People Often Self-Censor Private Opinions Conflicting With Perceived Norms

In November of 2016 there was a startling result in the American presidential election. Donald Trump won a victory in the electoral college, despite losing the popular vote by millions. This was despite late election polls showing him losing by a substantial margin, and Election Day exit polls showing him losing in some of the key states he won. A reasonable idea to explain this is the idea of self-censorship: when people hold opinions that differ from perceived norms, they often never publicly state such opinions, and will only express them in something like a secret ballot. It may be that a significant percentage of voters planned to vote for Trump, but told pollsters otherwise, because they regarded their support of Trump as something that conflicted with perceived norms.

We have no idea how much self-censorship plays a role in scientific opinion. Many a scientist may disagree with theories that are supposedly supported by a majority of scientists. Such scientists may engage in self-censorship, figuring that it is not a good career move to speak in opposition to some theory that many other scientists are supporting. This makes it harder to determine just what the majority of scientific opinion really is. 

[Image caption: An example of self-censorship]

Factor #5: It Is Very Hard to Unravel the Level of Support for a Theory Based on Scientific Papers

Since scientists have no formal process for voting on the truth of theories, some people have attempted to use studies of scientific papers to draw conclusions about a scientific consensus on some topic. Such attempts can be problematic.

An example of an analysis of scientific papers that offers limited insight is this study, which has been widely although inaccurately summarized as reporting a 97% consensus about anthropogenic global warming. It is probably correct that a majority of scientists believe that mankind is the main cause of global warming, but the study does not back up the claim of a 97% consensus. First, the study was based only on abstracts, the short summaries that appear at the top of scientific papers. Second, the study actually reported that 66% of the abstracts expressed no opinion about man-made global warming. The 97% figure came from a second phase that sent a questionnaire to those who had already stated an opinion in their abstract about whether humans cause global warming. Of those people, only 14% responded; and of those 14%, 97% supported anthropogenic global warming either explicitly or in a weaker implicit sense. It is not correct to extrapolate from such a fraction of a fraction and make the same 97% claim about the scientific community in general, particularly given the dubious business of getting that 97% by lumping together explicit endorsements of anthropogenic global warming with merely implicit endorsements that may be more nuanced and ambiguous.
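To make the "fraction of a fraction" point concrete, here is a minimal arithmetic sketch in Python, using the percentages as characterized above (these are rounded, illustrative figures, not the study's exact counts):

```python
# Illustrative arithmetic using the percentages as characterized above.
# Rounded figures for illustration, not the study's exact counts.

stated_position = 0.34   # roughly 1 - 0.66: abstracts expressing any opinion
responded       = 0.14   # fraction of those surveyed who answered
endorsed        = 0.97   # fraction of respondents endorsing, explicitly or implicitly

# The 97% endorsers, expressed as a fraction of all abstracts examined:
fraction_of_all = stated_position * responded * endorsed
print(f"Endorsing respondents as a share of all abstracts: {fraction_of_all:.1%}")
# Prints roughly 4.6% -- a far cry from a demonstrated 97% of the whole literature.
```

The point is not that only 4.6% of scientists endorse the view, but that the 97% figure rests on a small and possibly unrepresentative sliver of the papers examined.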

Page 15 of this Pew poll of scientists indicates that only 89% of them agreed that the earth is warming mostly due to human activity, and that only 77% of them agreed that global warming is a very serious problem. This suggests a consensus on this topic considerably weaker than the commonly cited 97% figure (I agree with the 89% on this topic).

It would also be extremely problematic for someone to draw conclusions about a scientific consensus based on an analysis of scientific papers on topics such as cosmic inflation or string theory. Let us consider a physicist who has become familiar with the arcane speculative mathematics of string theory or cosmic inflation theory. Such a physicist learns that he can make a comfortable living grinding out speculative papers offering yet another twist on these theories. But suppose this scientist publishes five papers on such a topic. Does it mean he actually believes the theory is likely to be true? We cannot tell. It could be that the physicist is simply interested in the mathematics, and finds that he can fulfill his yearly quota of scientific papers by writing on the topic. Such a thing does not tell us whether the scientist believes such theories to be true.

Factor #6: Opinion Polls of Scientists Can Be Misleading or Confusing Because of the Way They Are Phrased

Pros in the political field know that the way questions are worded can have gigantic effects on the results. For example, if a question asking about support for abortion is worded from a pro-choice perspective, it will get an answer suggesting very high support for allowing abortion. The same question worded from a “protect the unborn child” perspective may show a vastly different level of support for allowing abortion.

The same principle holds true for polls of scientists about scientific theories. For example, a Pew opinion poll asked scientists a question that seemed designed to produce the highest level of assent: whether they agree that “humans and other living things have evolved over time.” That got a 98% yes response. But “evolved over time” could mean merely small-scale change, what is known as microevolution. A scientist believing in small-scale evolution may answer “Yes” to such a question, even though he doesn't believe that species originated from more primitive species, or doesn't believe that such evolution is mainly caused by natural selection. 

Very absurdly, the Pew poll question gave respondents a choice between asserting that "Humans and other living things have evolved over time" and "Have existed in their present form since the beginning of time."  Such a choice forces anyone believing in a 13-billion-year-old universe to choose the first answer, since there is no option such as "Humans originated for unknown reasons about 100,000 years ago, long after the Big Bang." This is a classic pollster's goof: make it seem like almost everyone believes in choice A by offering a choice between choice A and some choice B that almost no one would accept. 
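To illustrate how such a forced binary choice can manufacture apparent near-unanimity, here is a small, purely hypothetical simulation sketch in Python (the three "true position" percentages are invented for illustration, not taken from any poll):

```python
import random

random.seed(42)  # reproducibility for this illustration

# Invented true distribution of views (not poll data):
# A: humans and other living things have evolved over time
# B: humans have existed in their present form since the beginning of time
# C: some third view the poll offers no option for
true_views = ["A"] * 70 + ["B"] * 5 + ["C"] * 25
respondents = [random.choice(true_views) for _ in range(10_000)]

# The forced choice: anyone not holding view B (for example, anyone who
# accepts an old universe) has little option but to pick A.
def forced_answer(view):
    return "B" if view == "B" else "A"

answers = [forced_answer(v) for v in respondents]
share_a = answers.count("A") / len(answers)
print(f"Apparent support for choice A: {share_a:.0%}")
# Prints about 95%, even though only about 70% of respondents truly hold view A.
```

Under these invented numbers, a quarter of the respondents who hold some third view get silently folded into choice A, inflating its apparent support.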

What if these questions were asked:

Is it true that humans have evolved from ape-like ancestors?
Is it true that humans have evolved from ape-like ancestors mainly because of Darwinian natural selection?

These are the questions Pew should be asking, but it doesn't. On page 28 of this full report, it does ask the respondent to choose between the choices shown below:


[Image: the poll's answer choices. Caption: A poll of scientists (with dubious aspects discussed above and below)]

It is interesting that despite constant indoctrination to the contrary, nearly two-thirds of the public reject the claim that humans have evolved over time due to natural processes such as natural selection. It is also interesting here that about 10% of scientists do not believe that evolution occurs mainly because of natural processes such as natural selection. The survey polled only members of the American Association for the Advancement of Science (AAAS), a subset of scientists more likely to be "old guard" thinkers conforming to ideological orthodoxy. A full survey of scientists might have yielded a number greater than 10% doubting the "party line" on this topic.

Here we also have a case where there is a large chance of significant self-censorship, as the prevailing academic culture treats deviation from Darwinian orthodoxy as taboo. The actual percentage of scientists rejecting the Darwinian explanation may be much higher than the 10% indicated in this survey, and could easily be as high as 15% or 20%. The people who responded to the Pew survey did so after being mailed a letter with the AAAS masthead, signed by the head of the AAAS. That must have maximized the peer-pressure "fall in line with the majority" effect. A secret ballot without such a "Big Brother is watching" effect might have produced a very different result. 

But the poll still doesn't tell us whether there is any consensus about natural selection as an explanation for evolution. The poll asks about “natural processes such as natural selection,” but does not tell us what percentage of scientists are satisfied with the "prevailing party line" claim that natural selection and random mutations can explain the mountainous amounts of biological complexity we observe. Is that percentage 70%? 60%? Or less than 50%? We don't know. Although we sometimes hear claims that almost all scientists believe the idea that Darwinian natural selection explains the origin of species and biological complexity, we don't have polls backing up such a claim. We don't know whether this supposed overwhelming majority is even a 50% majority.

What about fields such as neuroscience? Is it really true that an overwhelming majority of neuroscientists believe that the mind is purely a product of the brain? We don't know, because there is no institutional scientific process for voting on such a thing.

Conclusion

From the discussion above, two general conclusions may be drawn:
  1. When it is claimed that there is a scientific consensus on something, the consensus is often much weaker than is claimed, with a substantial minority rejecting the majority opinion.
  2. Although some claims of a scientific consensus are warranted, it is often claimed that most scientists agree on some topic, when there is actually no clear evidence that such a majority of opinion exists.
Consider also this confusing fact: you may be inclined to believe something if you hear "there is an overwhelming scientific consensus in favor of this," but you may well suspend judgment about such a thing if you hear that "a substantial minority of scientists disagree with the claim."  But both phrases can be used to describe a situation where 90 percent of scientists accept some doctrine, and 10 percent of scientists disagree with that doctrine.

So what are you going to do, when the waters are so muddied in regard to what scientists think? The answer is simple: decide based on facts, logic and evidence, rather than following an “I'll think like most of them think” strategy. Since the insular tribes of academia are often ideological enclaves very much subject to dubious thought customs, inappropriate hero worship, bandwagon effects, sociological influences and groupthink, it's not a good idea to simply follow an “I'll go with the crowd” principle. “Follow the facts” and "follow the logic" are better principles than “follow the crowd.”