The 2009 paper “Groupthink in Academia: Majoritarian Departmental Politics and the Professional Pyramid” (by Daniel B. Klein and Charlotta Stern) is one of the most insightful papers I have read on the sociology of academic conformism. Although its specific examples involve only political topics, the paper's insights apply equally well to related topics such as the sociology of scientist groupthink.
The authors give us this description of what they call departmental majoritarianism:
“The most important departmental decisions involve the hiring, firing, and promotion of tenure-track faculty. Such decisions come down to majority vote. Although the chair exercises certain powers, committees control agendas, and so on, the central and final procedure for rendering the most important decisions is democracy among the tenure-track professors—or departmental majoritarianism.”
What this means is that the professors in a particular department get to vote on who will be admitted as a candidate to become a tenured professor, with what is basically a guaranteed life-long job at some university or college. What type of people do they give an approving “thumbs up” vote to? People who share their beliefs on ideological issues. So if a scientist believes that the brain is the storage place of human memories, and that human mentality is merely a product of the brain, he will not be likely to vote to allow someone who questions such dogmas to become a tenured professor. And if a scientist believes that random mutations and survival-of-the-fittest explain the main wonders of biology, and that life forms are merely the result of blind, accidental processes, he will not be likely to vote to allow someone who questions such dogmas to become a tenured professor.
The authors give some insight on how this works:
“Theories of group formation and social dynamics tell us that social groups tend to seek and attract newcomers like themselves (McPherson, Smith-Lovin, and Cook 2001), screen out and repel misfits (Allport 1954; Brewer 1999), and mold the unformed in their own image (Katz and Lazarsfeld 1955, 62–63; Moscovici 1985). These tendencies are rooted in human nature. Suppose a department must hire a new member, and 51 percent of the current members share a broadly similar ideology...Moreover, they believe that one must broadly conform to that ideology to be a good colleague and a good professor. What happens? The department members hire someone like them. The 51 percent becomes 55 percent, then 60 percent, then 65 percent, then 70 percent, and so on. As Stephen Balch (2003) and others have noted, majoritarianism tends to produce ideological uniformity in a department.”
The effect described here is one that occurs over years or decades. Gradually a monolithic consensus arises (or seems to arise) through such a process, and eventually all of the professors in the department may end up espousing some particular ideology or dogma. But we should not be very impressed by the uniformity of opinion in such a department. All that was necessary to produce such uniformity was for a mere 51% of the department to hold some ideology or dogma. Majoritarian politics will tend to cause that 51% to slowly grow to 100%.
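This drift can be illustrated with a toy simulation (a sketch of the mechanism described above, with illustrative numbers of my own choosing, not anything taken from the Klein and Stern paper). A department of 40 members starts with a bare 51% majority; each cycle one random member retires, and the majority bloc is assumed always to hire a like-minded replacement:

```python
import math
import random

def simulate_department(size=40, initial_share=0.51, hires=200, seed=0):
    """Toy model of majoritarian hiring drift.

    Each cycle, one random member retires and the department hires a
    replacement by majority vote. The majority bloc is assumed to prefer
    a like-minded candidate, so every new hire matches whichever ideology
    currently holds the majority. Returns the majority ideology's share
    of the department after each hire.
    """
    rng = random.Random(seed)
    # True = holds the (initially 51%) ideology, False = does not.
    believers = math.ceil(size * initial_share)
    members = [True] * believers + [False] * (size - believers)
    shares = []
    for _ in range(hires):
        members.pop(rng.randrange(len(members)))     # a random retirement
        majority = sum(members) * 2 >= len(members)  # which side now has the votes?
        members.append(majority)                     # the hire matches the majority
        shares.append(sum(members) / len(members))
    return shares

shares = simulate_department()
print(f"start: 51%, after 200 hires: {shares[-1]:.0%}")
```

Because a hire never reduces the majority's share, the believers' fraction can only stay level or climb as dissenters retire, which is the one-way ratchet the paper describes.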
The paper describes a kind of academic pyramid in which a tiny elite at the top exerts influence totally out of proportion to its numbers. The paper states the following:
“In structure, the tribe is pyramidal, with the elite at the apex...Position within the pyramid depends on focal, conventional rankings of key institutions, notably academic journals, departments, publishers, citations, grants, awards, and other markers of merit. Research is highly specialized, and the tribe is broken down into subfields...Prestige and eminence are determined within the subfield, a kind of club within the tribe. The clubs constitute the tribe, just as agencies and branches constitute the government. Each club sorts people with overt reference to pedigree, publication, citations, and letters of reference. The club controls these filters and then applies them to itself. It controls the graduate programs and journals. By spawning and hiring new recipients of Ph.D. degrees, the club reproduces itself.”
In the science world it is easy to identify the apex of the academic pyramid. It consists of about 20 universities, including the Ivy League (Harvard, Yale, Columbia, and so forth), along with other well-known universities such as Oxford, Cambridge, MIT and the California Institute of Technology (Caltech). The paper notes the extraordinary degree of influence of the top of the academic pyramid. It gives us an example from the world of law: “In the field of law, Richard Redding finds: 'A third of all new teachers [hired in law schools between 1996 and 2000] graduated from either Harvard (18%) or Yale (15%); another third graduated from other top-12 schools, and 20 percent graduated from other top-25 law schools.'” Referring to the top or apex of this academic pyramid, the paper states the following: “Because of the mechanisms that operate within disciplines—propagation, 'follow the apex' and 'freeze-out'—if the apex embraces ideology j, it will tend to sweep that ideology into positions in every department all the way down the pyramid.”
The diagram below illustrates how this works. At the top of the pyramid (shown in pink) is a tiny elite. In a particular field such as neuroscience, there may be only one or two hundred professors in such a “top of the pyramid” apex. But such professors exert influence out of all proportion to their numbers. What becomes recognized as the “scientific consensus” may be determined by those one or two hundred professors.
Given this “opinion cascade,” it is astonishingly easy for a tiny elite group to control the scientific consensus on some topic. For some new dogma to become “the scientific consensus,” it is merely necessary that the following occurs:
- Somehow 51% of a few hundred professors at the top of the academic pyramid come to believe in some dogma.
- Over the years that 51% becomes a much higher percentage, as professors vote in other professors sharing their belief in this dogma.
- The “opinion cascade” from the top proceeds down the pyramid, and eventually the dogma becomes the scientific consensus due to groupthink effects and “follow the pyramid top” tendencies.
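The three steps above can be sketched as a toy model (my own illustrative parameters, not anything from the Klein and Stern paper). Each level of the pyramid hardens toward uniformity through the majoritarian drift described earlier, and the level below it blends a neutral baseline with what it sees at the level above:

```python
def cascade_shares(apex_share=0.51, levels=5, follow_weight=0.7, baseline=0.5):
    """Toy model of an opinion cascade down an academic pyramid.

    apex_share:    fraction of apex professors initially holding the dogma
    follow_weight: how strongly each level imitates the level above it
    baseline:      share the dogma would win on its own merits (neutral: 0.5)
    Returns the dogma's share of opinion at each level, apex first.
    """
    shares = [apex_share]
    for _ in range(levels - 1):
        # Majoritarian drift: a level holding even a bare majority hardens
        # toward uniformity before the level below looks to it for its cue.
        hardened_above = 1.0 if shares[-1] > 0.5 else 0.0
        shares.append(follow_weight * hardened_above + (1 - follow_weight) * baseline)
    return shares

# A bare 51% at the apex produces a solid majority at every level below it.
print([round(s, 2) for s in cascade_shares()])
```

Under these assumed parameters, a 51% apex majority translates into a large majority at every lower level, even though no level below the apex ever evaluated the dogma on the evidence.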
We saw such a thing occur in the world of cosmology after 1980, when a tiny handful of professors (mainly at MIT and Stanford) were able to get an alleged scientific consensus started, one based on the entirely speculative "cosmic inflation" theory, for which there was no evidence (a theory not to be confused with the Big Bang theory, for which there is evidence). We almost saw such a thing occur in the world of particle physics, where string theory (a purely speculative theory for which there is no evidence) came close to becoming a scientific consensus among particle physicists, but fell short.
There are various other reasons why it may be surprisingly easy for some little elite clique to get a weak theory accepted as a scientific consensus. One reason is that there are all kinds of possible tactics by which a weak scientific theory may have its prestige enhanced through various sneaky "prestige by association" ploys such as I describe in this post. Another reason is that, because of the ambiguous definition of science (sometimes defined to mean "facts established by observation" and other times defined to mean "the activity of scientists"), almost any theory advanced by a scientist can be peddled as "science."
One of the most famous experiments in the history of psychology was the Asch conformity experiment, which showed that large fractions of people would tend to state absurd conclusions whenever they thought that a majority of their peers had reached the same conclusion. A group of people at a table were asked to judge which of a set of three lines was the same length as another line they were shown. After watching a group of other people all give an answer that was obviously wrong (as they had been secretly instructed to do by the experimenter), about one third of the test subjects also stated the same obviously wrong answer when they were asked about the matter. They were conforming to the nonsense judgment of their peers. We don't know how many of these subjects privately disagreed with what they publicly stated. Similarly, when a scientific consensus supposedly forms around some idea that is unproven or illogical, we don't know how many of the professors publicly supporting the idea privately disagree with the idea, but publicly support it in order to look like a "team player," someone who "fits in" with his peers. Professors may engage in "public compliance" by which they seem to jump on a bandwagon, even if they have private doubts about the idea.
If you were to run a test like the Asch test, but with the experimenter's confederates posing not as peers of the test subject but as higher-status people or authorities, you would probably pressure far more than one third of the test subjects into stating absurd conclusions. For example, if the sole person tested were seated with seven people who all identified themselves as biologists, and these seven people all identified an unusually large rat in a small cage as a mouse, then probably 70% of the time the test subject would also publicly agree that the squirrel-sized animal in the cage was a mouse rather than a rat. We allow ourselves to be hypnotized by the pronouncement of an expert, failing to ask ourselves: how does such a person benefit from the claim he is making? The answer is often: he increases his own social power and prestige by making such a claim.
What I have described here is how a tiny elite can control how a huge body thinks or votes. Similar things often happen in the world of politics, where a tiny elite can largely control who becomes the nominee of one of the two main parties. We did not see this in the 2016 US election, but we did see it in the 2004 US election. Candidate Howard Dean (a man of good judgment, calm temperament and long experience as governor of Vermont) had raised huge sums of money by January 2004. He finished only third in the Iowa caucuses, but had excellent prospects of winning the New Hampshire primary, which would have given him a large chance of becoming the Democratic nominee. But just after the Iowa caucuses, a tiny elite of journalists and pundits ruined his chances. In the days before the New Hampshire primary, they focused incessantly on a mere cheer that Howard Dean had made at the end of one of his campaign speeches, trying to falsely portray the cheer as some demented scream. Because of this, Dean lost the New Hampshire primary, and his chance of winning the nomination was effectively over. A tiny elite clique at the top of an “election pyramid” had basically controlled whom the Democratic Party would nominate for president.
You could create an "election pyramid" graph similar to the pyramid graph above. At the very top would be a small elite clique of reporters, pundits, columnists and TV personalities. Just below them in the pyramid would be a small number of voters participating in the Iowa caucuses and the New Hampshire primary. Everything else that happens politically every four years in the US usually follows (but does not always follow) from what is dictated by these small groups at the top of the pyramid.
In regard to claims of a scientific consensus, we should remember that most major claims that a scientific consensus exists are not well founded, in the sense that we do not actually know whether a majority of scientists privately hold the opinion claimed as a consensus. The only way to reliably determine whether a scientific consensus exists on a topic is to hold a secret-ballot vote among all the scientists in a field, one in which a scientist runs no risk in supporting whatever opinion he privately holds. Such secret ballots are virtually never held. When scientists are polled, the polling questions are often poorly arranged, with too narrow a range of choices, and without "no opinion" or "I don't know" among the possible answers.
Since the private opinions of scientists may differ from some alleged scientific consensus, and since a scientific consensus may form for sociological, ideological and political reasons having little to do with scientific evidence, an appeal to a scientific consensus has little argumentative force. It is much better to appeal to facts and logic than to alleged opinion majorities.