
Our future, our universe, and other weighty topics


Monday, September 30, 2019

"Accidental Engineering" Sounds Goofy, So They Use the Term "Adaptions"

Humans have never observed a case of a very complex piece of engineering arising by accident. For example, we know of no cases of nice livable houses that arose from trees accidentally falling and forming into houses. We also know of no cases of trees accidentally falling to form into nice flat bridges.  Occasionally a tree will fall in a way that provides a kind of bridge across a stream, but something so simple cannot really be called a case of accidental engineering.  A fallen tree spanning a stream just resembles a fallen tree, not some feat of engineering. 

Therefore, the phrase "accidental engineering" has a very implausible sound to it, rather like the phrase "accidental writing" or "accidental computer programming."  But it is just such a concept of accidental engineering that our current biologists want us to believe in. They want us to believe that such accidental engineering occurred not just once or twice, but very many times,  so many times that there were billions of cases of accidental engineering that explain all of the complex biological innovations discovered in the biological world.  

Imagine if one of today's biologists were to explain such a dogma without using euphemisms.  He might sound something like this:

"We should be glad that so very many cases of accidental engineering occurred to produce the parts of the human body. You can see things because of some accidental engineering that produced your eyes, and because of some very different accidental engineering that produced the parts of your brain involved in vision. You have a circulatory system because of entirely different cases of accidental engineering. Then there are the many cases of accidental engineering that conspired by chance to produce your skeletal system, and the many other cases of accidental engineering that randomly conspired to produce your intricate muscular system. On the molecular level, your body is made up of proteins, and there are more than 20,000 different types of proteins used by your body. Each of these types of protein molecules is like its own separate complex invention, a complex tiny machine or device with hundreds of amino acid parts fitting together in just the right way to produce a particular functional effect. So it seems that there were some 20,000 cases of very lucky accidental engineering that produced all of the protein molecules your body needs."

Such a statement sounds very far-fetched. So Darwinist biologists don't write statements like the one above, although they believe exactly what it asserts.  Instead, Darwinist biologists use a euphemism: the word "adaption." For example, a Darwinist biologist may tell you that your eyes are an adaption, and that your visual cortex is an adaption, and that your nose is an adaption, and that your skeletal system is an adaption, and that your circulatory system is an adaption, and that your fingers and toes are adaptions. 

Of course, the term "adaption" is about the least objectionable-sounding term you can use. Who can dispute that adaption occurs? Every day we see adaptions occurring around us. I hear on the TV that it's cold outside today, so I wear a jacket. That's an adaption. I taste my morning coffee, and find it's too hot. So I stir it. That's an adaption. Given that we see adaption occurring constantly around us,  it's very unlikely that anyone will ever say something like "Adaptions are so rare."  But when our biologists euphemistically use the word "adaption," what they are usually referring to are claims of accidental engineering, which is something so rare that humans have never observed it as it happened. 

A dogmatic biologist using the term "adaption" for his claim of accidental engineering is somewhat like a person who wants you to believe that certain people have superpowers like in comic books, but who tries to make this sound like a not-unreasonable claim by using the euphemistic term "capability fluctuations" for his claim about superpowers. 

EUPHEMISM: MEANING
“Friendly fire”: When a soldier from your army shoots or bombs other soldiers from your army
“Collateral damage”: When bombs dropped from military jets kill civilians unexpectedly
“Beginning a journey of self-discovery”: Fired
“Correctional facility”: Prison
“On an educational hiatus”: Dropped out of college
“Adaptions”: Complex biological innovations described as the results of accidental engineering


What is rather amusing is that the people who make these fanciful  claims about accidental engineering are in general people who know nothing about engineering, and who never did any engineering.  Their ignorance of the most basic principles of engineering is often obvious.  Just as we can use a term such as "hydrodynamics know-nothing" to refer to someone like myself who knows nothing about hydrodynamics, it is in general true that Darwinist biologists are "engineering know-nothings," because they know nothing about engineering.  It is rather hilarious that such "engineering know-nothings"  often insist that we should base our world-views on their engineering opinions, such as the opinion that wonderful cases of accidental engineering have occurred innumerable times. 

Below we see a schematic diagram of a complex system like we find all over the place in biology, a system with so many interlocking dependencies that it seems impossible to imagine how it could have arisen accidentally.  Such systems raise an abundance of "which came first, the chicken or the egg" problems, problems that our learned biologists ignore or sweep under the rug.  

[Schematic diagram: a complex system with many interlocking dependencies]

Examples of such biological things with interlocking dependencies include the many types of protein molecules that cannot fold correctly and cannot be functional unless there exist other types of helper protein molecules (called chaperone proteins).  The source here estimates that 20 percent to 30 percent of protein molecules have a dependency on chaperone proteins. When we do not consider the chaperone-dependency of such protein molecules, we might calculate a probability on the order of no more than about 1 in 10^130 of the protein molecule appearing by chance from its component amino acids (since a protein molecule is typically a sequence of hundreds of amino acids, and such a sequence can be arranged in about 10^260 ways, almost all nonfunctional).  When we then consider the dependency of such a molecule on one or more other equally complex molecules (a chaperone protein molecule), we must calculate a much, much smaller probability of the protein and its chaperones appearing, probably something as improbable as 1 chance in 10^200.  We are asked to believe that such miracles of chance (each an impressive example of accidental engineering) occurred not just once but billions of times, for there are billions of different types of protein molecules in the animal kingdom (the source here estimates 50 billion), and 20 percent of them require chaperone proteins.  This is rather like believing in some planet where billions of people all live in houses that appeared accidentally, after many billions of falling trees conveniently formed into houses. 
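For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. The 200-residue length and the 1-in-10^130 figure for a single functional protein are illustrative assumptions taken from the paragraph above, not independently established values.

```python
# Back-of-envelope sketch of the sequence-space arithmetic above.
# The 200-residue length and the 1-in-10^130 figure are illustrative assumptions.
import math

residues = 200            # assumed length of a typical protein chain
amino_acid_types = 20     # the standard amino acids

# Size of the sequence space, expressed as a power of ten
log10_sequences = residues * math.log10(amino_acid_types)
print(f"20^{residues} is roughly 10^{log10_sequences:.0f} possible sequences")   # ~10^260

# If a single functional protein has roughly a 1-in-10^130 chance of arising, then a
# protein plus one equally improbable chaperone, treated as independent events, lands
# at roughly 1 in 10^260 -- at least as small as the 1-in-10^200 figure given above.
log10_single_protein = 130
print(f"protein plus chaperone: roughly 1 chance in 10^{2 * log10_single_protein}")
```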

Thursday, September 26, 2019

Reign of the Pyramid Top: How Tiny Elite Cliques Can Shape a Scientific Consensus

The 2009 paper “Groupthink in Academia: Majoritarian Departmental Politics and the Professional Pyramid” (by Daniel B. Klein and Charlotte Stern) is one of the most insightful papers I have read on the topic of the sociology of academic conformism. Although the specific examples given only involve topics of politics, the insights of the paper are very applicable to related topics such as the sociology of scientist groupthink.  

The authors give us this description of what they call departmental majoritarianism:

“The most important departmental decisions involve the hiring, firing, and promotion of tenure-track faculty. Such decisions come down to majority vote. Although the chair exercises certain powers, committees control agendas, and so on, the central and final procedure for rendering the most important decisions is democracy among the tenure-track professors—or departmental majoritarianism.”

What this means is that the professors in a particular department get to vote on who will be admitted as candidates to become tenured professors with basically a guaranteed life-long job at some university or college. What type of people do they give an approving “thumbs up” vote to? People who share their beliefs on ideological issues. So if a scientist believes that the brain is the storage place of human memories, and that human mentality is merely a product of the brain, he will not be likely to make a vote allowing someone who questions such dogmas to become a tenured professor. And if a scientist believes that random mutations and survival-of-the-fittest explain the main wonders of biology, and that life forms are merely the result of blind, accidental processes, he will not be likely to make a vote allowing someone who questions such dogmas to become a tenured professor.

The authors give some insight on how this works:

“Theories of group formation and social dynamics tell us that social groups tend to seek and attract newcomers like themselves (McPherson, Smith-Lovin, and Cook 2001), screen out and repel misfits (Allport 1954; Brewer 1999), and mold the unformed in their own image (Katz and Lazarsfeld 1955, 62–63; Moscovici 1985). These tendencies are rooted in human nature. Suppose a department must hire a new member, and 51 percent of the current members share a broadly similar ideology...Moreover, they believe that one must broadly conform to that ideology to be a good colleague and a good professor. What happens? The department members hire someone like them. The 51 percent becomes 55 percent, then 60 percent, then 65 percent, then 70 percent, and so on. As Stephen Balch (2003) and others have noted, majoritarianism tends to produce ideological uniformity in a department.”

The effect described here is one that occurs over multiple years or decades. Gradually over the years a monolithic consensus arises (or may seem to arise) through such a process, and eventually all of the professors in the department may end up espousing some particular ideology or dogma. But we should not be very impressed by the uniformity of opinion in such a department. All that was necessary to get such a uniformity was for a mere 51% of the department to hold some ideology or dogma. Majoritarian politics will tend to cause that 51% to slowly increase to become 100%.
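To see how such a drift from 51% toward near-uniformity can play out, here is a toy simulation of my own (it is not from the Klein and Stern paper, and the department size, turnover count and "conformity" probability are all illustrative assumptions):

```python
# Toy model of departmental majoritarianism (my own illustration, not from the
# Klein and Stern paper). At each turnover a random member leaves, and the
# remaining majority hires a replacement who usually shares its ideology.
import random

def majority_share(size=41, initial_holders=21, turnovers=200,
                   conformity=0.95, seed=0):
    """Share of members holding the initially-majority ideology after some hires.

    conformity: assumed probability that a hire matches the current majority.
    """
    rng = random.Random(seed)
    members = [1] * initial_holders + [0] * (size - initial_holders)
    for _ in range(turnovers):
        members.pop(rng.randrange(len(members)))        # one member leaves
        majority_holds = sum(members) > len(members) / 2
        hire_matches = rng.random() < conformity
        if majority_holds:
            members.append(1 if hire_matches else 0)
        else:
            members.append(0 if hire_matches else 1)
    return sum(members) / len(members)

for t in (0, 50, 100, 200):
    print(f"after {t} turnovers: {majority_share(turnovers=t):.0%}")
```

In runs of this toy model the initial 51% share drifts steadily up toward the assumed 95% conformity level, which is the dynamic the paper describes.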

The paper describes a kind of academic pyramid in which a tiny elite at the top of the pyramid exerts influence totally out-of-proportion to its numbers. The paper states the following:

“In structure, the tribe is pyramidal, with the elite at the apex...Position within the pyramid depends on focal, conventional rankings of key institutions, notably academic journals, departments, publishers, citations, grants, awards, and other markers of merit. Research is highly specialized, and the tribe is broken down into subfields...Prestige and eminence are determined within the subfield, a kind of club within the tribe. The clubs constitute the tribe, just as agencies and branches constitute the government. Each club sorts people with overt reference to pedigree, publication, citations, and letters of reference. The club controls these filters and then applies them to itself. It controls the graduate programs and journals. By spawning and hiring new recipients of Ph.D. degrees, the club reproduces itself.”

In the science world it is easy to identify the apex at the top of the academic pyramid. It is about 20 universities including the Ivy League (Harvard, Yale, Columbia, and so forth), along with other well-known universities such as Oxford, Cambridge, MIT and the California Institute of Technology (CalTech). The paper notes the extraordinary degree of influence of the top of the academic pyramid. It gives us an example from the world of law: “In the field of law, Richard Redding finds: 'A third of all new teachers [hired in law schools between 1996 and 2000] graduated from either Harvard (18%) or Yale (15%); another third graduated from other top-12 schools, and 20 percent graduated from other top-25 law schools. ' ” Referring to the top or apex of this academic pyramid, the paper states the following: “Because of the mechanisms that operate within disciplines—propagation, 'follow the apex' and 'freeze-out'—if the apex embraces ideology j, it will tend to sweep that ideology into positions in every department all the way down the pyramid.”

The diagram below illustrates how this works. At the top of the pyramid (shown in pink) is a tiny elite. In a particular subject matter field such as neuroscience, there may be only one hundred or two hundred professors in such a “top of the pyramid” apex. But such professors exert influence out of all proportion to their numbers. What becomes recognized as the “scientific consensus” may be determined by those one hundred or two hundred professors.


[Pyramid diagram: how a scientific consensus cascades down from a tiny elite at the apex]


Given this “opinion cascade,” it is astonishingly easy for some tiny elite group to control the scientific consensus on some topic. For some new dogma to become “the scientific consensus,” it is merely necessary that the following occurs:
  1. Somehow 51% of a few hundred professors at the top of the academic pyramid come to believe in some dogma.
  2. Over the years that 51% becomes a much higher percentage, as professors vote in other professors sharing their belief in this dogma.
  3. The “opinion cascade” from the top proceeds down the pyramid, and eventually the dogma becomes the scientific consensus due to groupthink effects and “follow the pyramid top” tendencies.
We saw such a thing occurring in the world of cosmology after 1980, when a tiny handful of professors (mainly at MIT and Stanford) were able to get an alleged scientific consensus started, one that was based on an entirely speculative "cosmic inflation" theory for which there was no evidence (a theory not to be confused with the Big Bang theory, for which there is evidence).  We almost saw such a thing occurring in the world of particle physics, where string theory (a purely speculative theory for which there is no evidence) almost became a scientific consensus among particle physicists, but fell short. 

There are various other reasons why it may be surprisingly easy for some little elite clique to get some weak theory accepted as a scientific consensus. One reason is that there are all kinds of possible tactics by which a weak scientific theory may have its prestige enhanced by the use of various sneaky "prestige by association" ploys such as I describe in this post.  Another reason is that because of the ambiguous definition of science (sometimes defined to mean "facts established by observation" and other times defined to mean "the activity of scientists"), almost any theory advanced by a scientist can be peddled as "science."  

An additional reason is that it is relatively easy to create a perceived consensus rather than an actual consensus.  A few professors at Ivy League universities can simply start talking as if a consensus is building, by claiming "there is a consensus starting to form" around their theory, or that "more and more scientists are starting to believe" their theory, or that "there is growing agreement" that their theory is true.  Before long, people may start to think there is a consensus about the truth of some theory, even though no such consensus exists.  But such a perceived consensus can exert enormous force. Now professors in the less-prestigious universities may start to voice belief in the theory, not because they actually believe it, but because they are "falling in line" and "going along to get along" in an act of conformity and social compliance.  My post "How to Get Your Weak Scientific Theory Accepted" explains some of the tricks-of-the-trade by which some tiny elite might get its little bit of tribal folklore to become recognized as a scientific consensus. 

One of the most famous experiments in the history of psychology was the Asch conformity experiment, which showed that large fractions of people would tend to state absurd conclusions whenever they thought that a majority of their peers had reached the same conclusion.  A group of people at a table were asked to judge which of a set of three lines was the same length as another line they were shown. After watching a group of other people all give an answer that was obviously wrong (as they had been secretly instructed to do by the experimenter), about one third of the test subjects also stated the same obviously wrong answer when they were asked about the matter.  They were conforming to the nonsense judgment of their peers. We don't know how many of these subjects privately disagreed with what they publicly stated.  Similarly, when a scientific consensus supposedly forms around some idea that is unproven or illogical, we don't know how many of the professors publicly supporting the idea privately disagree with the idea, but publicly support it in order to look like a "team player," someone who "fits in" with his peers. Professors may engage in "public compliance" by which they seem to jump on a bandwagon, even if they have private doubts about the idea. 

If you were to do a test like the Asch test, but with the confederates of the experimenter posing not as peers of the test subject but as higher-status people or authorities,  you would probably get far more than one third of the test subjects pressured into stating absurd conclusions.  For example, if the sole person tested was seated with seven people, who all identified themselves as biologists, and these seven people all identified an unusually large rat in a small cage as a mouse,  then probably 70% of the time the test subject would also publicly agree that the squirrel-sized animal in the cage was a mouse rather than a rat.  We allow ourselves to be hypnotized by some pronouncement of an expert, failing to ask ourselves: how does such a person benefit from this claim he is making? The answer is often: he increases his own social power and prestige by making such a claim. 

What I have described here is how a tiny elite can control how a huge body thinks or votes. Similar things often happen in the world of politics, where a tiny elite can largely control who becomes the nominee of one of the two main parties. We did not see this in the 2016 US election, but did see it in the 2004 US election. Candidate Howard Dean (a man of good judgment, calm temperament and long experience as a governor of Vermont) had raised huge sums of money by January 2004. He finished only third in the Iowa caucuses, but had excellent prospects of winning the New Hampshire primary, which would have given him a large chance of becoming the Democratic nominee. But just after the Iowa caucuses, a tiny elite of journalists and pundits ruined his chances. In the days before the New Hampshire primary, the journalists and pundits focused incessantly on a mere cheer that Howard Dean had made at the end of one of his campaign speeches, trying to falsely portray the cheer as some demented scream. Because of this, Dean lost the New Hampshire primary, and his chance of winning the nomination was effectively over. A tiny elite clique at the top of an “election pyramid” had basically controlled who the Democratic Party would nominate for president. 

You could create an "election pyramid" graph similar to the pyramid graph above. At the very top would be a small elite clique of reporters, pundits, columnists and TV personalities. Just below them in the pyramid would be a small number of voters participating in the Iowa caucuses and the New Hampshire primary.  Everything else that happens politically every four years in the US usually follows (but does not always follow) from what is dictated by these small groups at the top of the pyramid. 

In regard to claims of a scientific consensus, we should remember that most major claims that a scientific consensus exists are not well founded, in the sense that we do not actually know whether a majority of scientists privately hold the opinion that is claimed as a consensus. The only way to reliably determine whether a scientific consensus exists on a topic is to hold a secret ballot voting procedure among all the scientists in a field, one in which there is a zero risk for a scientist to support any opinion he may privately hold.  Such secret ballot voting procedures are virtually never held. When scientists are polled, the arrangement of polling questions is often poor,  with too narrow a range of choices, and without "no opinion" or "I don't know" as one of the possible answers. 

Since the private opinions of scientists may differ from some alleged scientific consensus, and since there are all kinds of sociological, ideological and political reasons why a scientific consensus may form for reasons having little to do with scientific evidence,  an appeal to a scientific consensus has little argumentative force. It's much better to appeal to facts and logic rather than alleged opinion majorities. 

Sunday, September 22, 2019

The Sanskrit Effect Debunked

In 2018 there was a story in Scientific American attempting to insinuate that memorization efforts change the brain.  The story refers to a scientific study that scanned the brains of people called pandits who had memorized Sanskrit scriptures. A scientist named James Hartzell says, "Numerous regions in the brains of the pandits were dramatically larger than those of controls, with over 10 percent more grey matter across both cerebral hemispheres, and substantial increases in cortical thickness." The story author has given this alleged effect the catchy name of "the Sanskrit effect." But when we take a close look at the study, co-authored by Hartzell, we find no robust evidence for any brain change caused by memorization. 

The study used a technique called voxel-based morphometry, which attempts to judge brain volume purely from brain scans.  The wikipedia.org article on this technique notes some serious reasons for concern about its reliability. It states the following:

"However, VBM [voxel-based morphometry] can be sensitive to various artifacts, which include misalignment of brain structures, misclassification of tissue types, differences in folding patterns and in cortical thickness. All these may confound the statistical analysis and either decrease the sensitivity to true volumetric effects, or increase the chance of false positives. For the cerebral cortex, it has been shown that volume differences identified with VBM may reflect mostly differences in surface area of the cortex, than in cortical thickness."

The study was a whole-brain one, looking across large areas of the brain for differences. This means that the authors had complete freedom to check hundreds of tiny regions of the brain, looking for any differences between their 21 pandits who memorized scriptures and a group of controls.  The problem with this is that a scientist may simply find deviations that we would expect to exist by chance in tiny little brain regions, and then cite these as evidence of a brain effect of memorization.  The more regions that are checked, the more likely it is that some correlation will be found that is purely a chance variation. Similarly, if you were to compare 100 liver regions of 21 Sanskrit scholars with 100 liver regions of 21 ordinary people, you might find 10 or 20 regions of slightly higher tissue density in the Sanskrit scholars. But that would in no way suggest that memories are stored in livers. 

Note that the scientist did not claim that the brains of the pandits who memorized scriptures had 10 percent more grey matter than ordinary people. He merely claimed that in "numerous regions" there was a difference of more than 10 percent.  If I take 20 random people, scan their brains, and compare them to 20 other random people whose brains I scanned, I will (purely by chance) probably be able to find quite a few little regions in which the first group had more grey matter than the second (as well as quite a few regions in which the first group had less grey matter). 

In fact, different regions of random brains typically have a grey matter density that differs by up to 20%.  The graph below (link) shows how much grey matter volume varies among a large set of individuals.  We see grey matter volume variations of about 20% for ages around 21, and the average age of the 21 Sanskrit pandits was 21.7.  



So it is quite unremarkable that a search (among many different tiny brain regions) for differences in grey matter volume between two groups of 21 subjects would find many tiny regions where there was a difference of about 10% or 12% or even 15%. Given such large variation in grey matter volumes from subject to subject, we would expect such differences by chance, even if memorization has no effect on grey matter volume.  We would expect to find such 10% or 12% differences in grey matter volume when studying random brain regions of any two random groups of 21 subjects (for example, 21 policemen and 21 firemen, or 21 baseball pitchers and 21 baseball hitters).  It was therefore not-very-honest carnival-barker hype for Hartzell to describe such completely typical and unremarkable variations in grey matter volumes as brain regions that were "dramatically larger." 
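The point can be checked with a simple simulation of my own (a toy model, not the VBM pipeline the study used): draw two random groups of 21 subjects from the same population, give every subject random grey matter volumes in 100 regions with roughly 20% person-to-person variation, and count how many regions differ between the group averages by 10% or more.

```python
# Toy simulation of the multiple-comparisons point above (not the study's VBM pipeline).
# Assumptions: 100 regions, 21 subjects per group, and a person-to-person standard
# deviation of about 20% of the mean grey matter volume.
import random
import statistics

def chance_differences(n_regions=100, n_per_group=21, sd=0.20, threshold=0.10, seed=1):
    rng = random.Random(seed)
    big_diffs = 0
    for _ in range(n_regions):
        group_a = [rng.gauss(1.0, sd) for _ in range(n_per_group)]   # "pandits"
        group_b = [rng.gauss(1.0, sd) for _ in range(n_per_group)]   # "controls"
        if abs(statistics.mean(group_a) - statistics.mean(group_b)) >= threshold:
            big_diffs += 1
    return big_diffs

print(chance_differences(), "of 100 regions differ by 10% or more purely by chance")
```

Under these assumptions roughly one region in ten shows a "difference" of 10% or more, even though both groups were drawn from exactly the same population.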

 In the paper, we read that these pandits who memorized scriptures "showed less GM [grey matter] than controls in a large cluster (62% of subcortical template GM) encompassing the more anterior portions of the hippocampus bilaterally and bilateral regions of the amygdala, caudate, nucleus accumbens, putamen and, thalamus."  So the study found that in some regions of their brains, these pandits who memorized Sanskrit scriptures had less grey matter than ordinary people, and that in other regions of their brains, they had more grey matter.  That is basically what we would expect to find by chance, and provides no good evidence for anything. 

Analyzing any brain scan involves a large number of steps, with a large amount of subjective interpretation. An absolute essential for a study like this is a blinding protocol, under which the scientists analyzing the brain scans do not know whether the subjects are the ordinary control subjects or the Sanskrit memorization pandits being studied.  Such a blinding protocol would be necessary to avoid a bias under which scientists looking for some difference would be more likely to find it in a group thought to have such a difference. But the "Sanskrit effect" paper makes no mention of any such blinding protocol being used.  I searched for the word "blind" in the text, and found no relevant use of it. 

On page 23 of a technical paper, we are told how many subjects would be needed when doing this kind of "whole brain" study using brain scanning:

"With a plausible population correlation of 0.5, a 1000-voxel whole-brain analysis would require 83 subjects to achieve 80% power. A sample size of 83 is five times greater than the average used in the studies we surveyed: collecting this much data in an fMRI experiment is an enormous expense that is not attempted by any except a few major collaborative networks."

How many subjects did the study of Sanskrit pandits use (a study that refers to its use of "whole brain VBM analysis")? Only 21.  This means that it used only a small fraction of the subjects needed for a moderately convincing result, given the approach it used.  For comparison, a recent paper described two brain scan studies that looked for a link between grey matter volume and psychopathy. The numbers of subjects who had brain scans were 80 and 64. Clearly our "Sanskrit effect" study has used a number of subjects way too small to support any claims of having found a reliable result. 
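As a rough check, the 83-subject figure quoted above can be reproduced with a standard Fisher-z power calculation, assuming a simple Bonferroni correction across the 1,000 voxels mentioned in the quote (the cited paper may correct for multiple comparisons differently):

```python
# Rough reproduction of the ~83-subject figure quoted above, assuming a Bonferroni
# correction across 1000 voxels and the usual Fisher-z approximation for correlations.
import math
from scipy.stats import norm

r = 0.5                      # assumed population correlation
power = 0.80
alpha = 0.05 / 1000          # two-sided alpha, Bonferroni-corrected for 1000 voxels

z_alpha = norm.ppf(1 - alpha / 2)    # critical value for the corrected alpha
z_beta = norm.ppf(power)
fisher_z = math.atanh(r)             # Fisher transform of the target correlation

n = ((z_alpha + z_beta) / fisher_z) ** 2 + 3
print(f"required subjects: about {math.ceil(n)}")    # prints about 83
```

That is roughly four times the 21 subjects the Sanskrit study actually used.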

For all these reasons, the results are not robust evidence for anything.  Imagine if you did a study trying to prove the hypothesis that people wearing dark shirts tend to be taller than people wearing light shirts. If you took many photos in school classrooms, and then reported that in "many classroom rows the students wearing dark shirts were taller than average," this would be totally unconvincing evidence for such a hypothesis. We would expect exactly such a thing to be observed by chance even if the hypothesis were not correct. Such a goofy study would be comparable to what Hartzell has reported. He has simply reported grey matter volume variations from place to place such as we would expect to exist by chance in brains where grey matter volumes vary by 20% from person to person, even if memorization has no effect on grey matter volume. 

Very strangely, the Hartzell paper shows 14 bar graphs of grey matter volume in different parts of the brain, showing the difference between Sanskrit memorization pandits and controls; and in 13 out of 14 of these bar graphs, the Sanskrit memorization pandits are shown as having less grey matter volume than the controls.  So in general, the bar graphs of his paper contradict the insinuations Hartzell made in Scientific American. 

Thursday, September 19, 2019

Contrarian Predictions Regarding Biology, the Brain and Technology

I will now offer some predictions regarding the future. I won't do much to explain the conceptual outlook that motivates these predictions. But anyone reading the posts on this blog and this blog can read about some of the evidence that motivates these predictions.

Prediction #1: No place in the brain outside of the cell nucleus will be found to have the stability needed to store memories for decades; and no evidence will be found that episodic memories or learned knowledge are stored in the cell nucleus.

Today's neuroscientists typically claim that memories are stored in the synapses of the brain. But there is a reason why we should reject this claim: we know that the proteins that make up synapses are very short-lived, having average lifetimes of only a few weeks. Such an average lifetime of a synapse protein is only about a thousandth of the length of time that humans can remember things (many seniors have good memories of things that happened more than 50 years ago). I predict that nothing will be discovered that resolves this discrepancy. I predict that our neuroscientists will never discover any place in the brain that is a suitable stable storage place for 50-year-old memories. No place in the brain will be found that is both stable and offers lots of storage room for all the memories humans have. DNA in the cell nucleus is stable, but its storage space is already used up by genomic information.
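As a quick check of the "thousandth" ratio, taking a couple of weeks as an assumed average protein lifetime:

```python
# Quick arithmetic check of the ratio mentioned above. The two-week protein
# lifetime is an assumed illustrative figure ("a few weeks" in the text).
WEEKS_PER_YEAR = 52.18
protein_lifetime_weeks = 2
memory_span_weeks = 50 * WEEKS_PER_YEAR       # a 50-year-old memory, in weeks
print(round(memory_span_weeks / protein_lifetime_weeks))   # about 1300, roughly a thousandfold gap
```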

Prediction #2: There will not be discovered in the brain anything like a neural positioning system or a neural indexing system or anything else that would be useful in explaining how a brain could instantly retrieve memories.

Humans are able to retrieve memories instantly, even memories of knowledge they haven't thought about in years. But the human brain seems to have nothing that might explain this speed of recall. There are two things that can enable fast retrieval in a system: a positioning system and indexing. A simple book has both. The positioning system is the page numbering. The indexing appears at the back of the book. I predict that nothing like these things will be discovered in the brain, or anything else that might explain how instant memory recall could be accomplished by a brain. By a "neural positioning system" I refer not to some neural system for finding a human's position in the world, but to a system that might allow one part of the brain to quickly read from some precise tiny location far away in the brain. 

Prediction #3: No type of encoding scheme will ever be discovered by which learned knowledge or episodic memories could be stored as neural states.

A typical neuroscientist claims that memories are stored in the brain. But learned knowledge or episodic memories can't simply be poured into little spots of the brain like someone might pour tea into a tea cup. If the brain stores memories as neural states, there would need to be some type of encoding scheme by which information and sensory experience was translated into neural states. No such encoding scheme has ever been discovered, and I predict that it never will be discovered. The genetic code was deciphered in the early 1960s, and we are decades overdue for a discovery of a “neural code.” Such a code will not be discovered because it doesn't exist. It doesn't exist because our memories and learned knowledge are not actually stored in our brains (a statement I make for the 30 reasons discussed here).

Prediction #4: Other than genetic and epigenetic information in the nucleus of neurons, no sign will ever be found of encoded information stored in the brain.

Even if you have no idea what encoding system was used to encode some information, you can still discover evidence that encoded information exists. Before Europeans were able to unravel how hieroglyphics worked, they knew that hieroglyphics contained encoded information. Such Europeans were able to see many repeated symbols, and these are the hallmarks of encoded information. 
[Image: repeated symbols in hieroglyphics]

In the case of the brain, we know there is one type of encoded information in it. That information is the DNA information found in all neurons and in all other cells. But other than this genetic information, no real sign of encoded information has ever been discovered in the brain. Synapses (the alleged storage place of memories) have been examined at very high resolutions using instruments such as electron microscopes, and no sign of encoded information has ever been found in synapses. I predict that scientists never will discover any sign of encoded information in the brain, other than the genetic and epigenetic information in the nucleus. There will never be some “Eureka” moment when scientists discover tiny little repeated symbols in synapses.

Prediction #5: It will never be possible to enhance human memory by downloading information into a brain.

If you believe that memories are stored in the brain, you should believe in a technological possibility: that it will be possible to download knowledge into a brain, through some technology that writes information to your synapses. I predict that such a technology will never exist.

Prediction #6: It will never be possible to make computers with a general purpose intelligence.

Computers are getting more and more proficient, partially because computers are getting faster, and partially because more and more human logic, information and data is being transferred to computers. We have heard a lot of hype about artificial intelligence. But nothing like a general computer intelligence has ever appeared. I predict that such a thing never will appear. Computers do not actually understand anything, and have never had anything like human cognition or human self-hood. There is no imaginable technological path that might lead to a machine with actual consciousness, self-hood and understanding such as humans have. Because human understanding and intelligence is not actually caused by the brain, it will not be possible to make machines conscious or cognizant by leveraging any type of “mind from matter” principle discovered by studying the human brain.

Prediction #7: It will never be possible to upload a human mind into a computer or a robot.

Darwinist materialism has always had many similarities to a religion, and we now see the philosophy of transhumanism supplying a kind of finishing touch to this stealth religion. That finishing touch is the eschatological idea that humans will be able to gain immortality by uploading their minds into computers or robots. I predict that such a thing will never occur. It is not possible because the brain does not actually store a human's memories, and is not actually the source or cause of human consciousness. If you are sad about not being able to upload your mind into a computer or a robot, do not be. The very reasons for thinking that such a thing is impossible are reasons for thinking that your consciousness will continue after your death. If your mind is not being generated by your brain (or an aspect of your brain), and your memories are not stored in your brain, there is every reason to suspect that your self-hood will continue after your brain stops functioning (as we seem to see occurring when people report near-death experiences).

Prediction #8: There will continue to be discovered cases in which people have good memories and good minds despite very large brain damage.

We already have very many cases of people with good memories and good minds, despite very large brain damage. These include hemispherectomy patients who had half of their brains removed, and people such as John Lorber's patients who lost most of their brains due to disease. Since such cases are caused by a very real aspect of reality (that neither intelligence nor memory is brain-caused),  additional cases of this type will continue to be discovered. 

Prediction #9: All attempts to reproduce the origin of life by mimicking chemical conditions in the early earth will fail, and scientists will not even be able to produce a single functional protein through any simulation of early earth conditions.

For more than a century scientists have been trying to reproduce the origin of life by doing laboratory experiments. They have basically accomplished nothing by all this work. The most they have ever produced is a few types of amino acids, only the simplest types. Contrary to the misleading hype that has been written, amino acids are not at all the building blocks of life. They are merely the building blocks of the building blocks of life. The actual building blocks of life are things such as proteins and nucleic acids. Even the simplest living thing requires 100+ types of proteins, but scientists have never produced even a single functional protein in any experiment simulating conditions on the early earth. I predict that they never will produce any functional protein in any experiment simulating conditions on the early earth, and will therefore have no success in trying to reproduce the origin of life by simulating early earth conditions.

Prediction #10: Good evidence for psychic phenomena and the paranormal will continue to accumulate, with new types of paranormal phenomena appearing; but the majority of scientists will long continue to ignore such evidence. 

The evidence for psychic phenomena (such as ESP, apparitions and near-death experiences) is vast, but it has been almost totally ignored by mainstream scientists.  I predict that the majority of scientists will continue for many years in their "heads in the sand" attitude towards very important paranormal phenomena.  Such an attitude is caused by an entrenched belief system (strongly resembling a religion) that has become popular among scientists.  When entrenched belief systems have been discredited by observations, such belief systems often continue to persist for a long, long time, because of a kind of ideological inertia, in which people think to themselves, "I will keep believing as I have believed for so long" or "I will keep believing as my teachers taught." I predict that novel and unprecedented forms of paranormal phenomena will be observed in the next few decades, as has actually occurred during recent years (as shown in the 1500+ photos here, here, and here).  

Prediction #11: It will not be possible to enhance the human mind by adding new neurons or neuron-equivalents, through technology or genetic engineering, although it may be possible to enhance the human mind by some technique that has the effect of turning off some brain activity or reducing brain activity. 

Since the human brain is not the source of the human mind, there will be no successful attempts to enhance the human mind by adding more neurons (or electronic neuron-equivalents). But it could be that the brain acts like a valve or limitation device, limiting human consciousness. Because of the latter possibility, it may be possible to enhance the human mind through some method that limits brain activity or turns off some of the neurons in our brain. Such a thing might be accomplished genetically, electronically or chemically. 

A Note to Future "Accuracy Checkers"

Let's suppose you are a person in the future (maybe a year or two from now) checking my list of predictions, and trying to find one that failed.  You might have little difficulty in seeming to find such a failure if you fall "hook, line and sinker" for some of the enormous amount of hype, exaggeration and misinformation that appears in the science literature, particularly in science popularization web sites.  As discussed at length here, there is an epidemic of exaggeration and unwarranted claims in the science world nowadays, much of it coming from university press offices which churn out press release headlines that do not actually match what was shown in the scientific study being discussed. In addition, a large fraction of scientific papers make causal claims that are not actually justified by their data.  So before citing some scientific paper that you think discredits one of the predictions here, ask yourself: did the paper really prove what is claimed in its title or in the headline of its press release? 

Sunday, September 15, 2019

Eight Reasons for Doubting Your Brain Makes Decisions

Neuroscientists like to claim that thoughts and ideas come from your brain, that your memories are stored in your brain, and that when you remember you are retrieving information from your brain. I have discussed in other posts why such claims are not well founded in observations, and why there are strong reasons for rejecting or doubting all such claims. In this post I will discuss another dogmatic claim made about the brain: the claim that the brain is the source of human decisions. I will discuss eight reasons for thinking that this claim is no better founded than dogmatic claims about the brain being the storage place of memories or the source of human abstract thoughts.

Reason #1: Scientists have no understanding of how neurons could make a decision.

When they try to present low-level explanations for how a brain could do some of the things that they attribute to brains, our neuroscientists falter and fail. An example is their complete failure to credibly explain how memories could be encoded in neural states, how memories could be permanently stored in brains, or how memories could be instantly recalled by brains. Neuroscientists also cannot credibly explain how a person could make a decision when faced with multiple choices. 

When I did a Google search for "what happens in the brain when a decision is made," I got a bunch of articles with confident-sounding titles. But reading the stories, I found mainly what sounded like bluffing, hype, promissory sounds, and the kind of talk someone uses to persuade you he understands something he doesn't actually understand (along with some references to brain scanning studies that aren't robust, for reasons discussed later in this post).  At no point in these articles do we encounter anyone who makes us think, "This guy really understands how a brain could reach a decision." 

Let us consider a simple example. Joe says to himself, “Today I can either go to the library or go to see a movie.” He then decides to go to the library, and then starts walking towards the library.

To explain this neurally, we would have to explain several different things:

Item 1: How Joe's brain could hold two different ideas, the idea about the possibility of going to the movie, and the idea of going to the library.
Item 2: The appearance in Joe's brain of a third idea, an idea that he will go today to the library.
Item 3: Some neural act that causes his muscles to move in a way corresponding to his idea about going to a library.

The first two of these things cannot be credibly explained through any low-level explanation involving neurons or synapses. See my post “No One Understands How a Brain Could Generate Ideas” for a discussion of the failure of neuroscientists to present any credible explanations of how brains could generate ideas. In that post, I cite some “expert answers” pages on which the experts address exactly the question of how a brain could generate ideas, and sound exactly as if they have no understanding of such a thing.

On one of the “expert answers” pages that I cite, we have this revealing answer:

“'How does the 'brain' forms new ideas?' is the wrong question. We don't actually know how the brain codes old ideas.”

That is correct, which means that neither Item 1 nor Item 2 in my list above can be explained neurally. Since we do not understand how a brain could either hold ideas or form new ideas, we do not have any understanding of how a brain could make a decision.

Reason #2: Hemispherectomy patients can still make decisions just fine.

Hemispherectomy is an operation done on patients with severe epileptic seizures. In a hemispherectomy operation, half of the brain is removed. I can find no studies that have specifically studied decision-making ability in hemispherectomy patients. However, I have cited here and here and here and here scientific papers that show results for intelligence tests taken “before” and “after” a hemispherectomy operation. Such papers show, surprisingly, that removing half of a brain has little effect on intelligence as measured in IQ tests.

Written IQ tests are typically tests of not just intelligence but also decision making ability. For example, the Wechsler IQ test is by far the most common one used by scientists, and it is a multiple-choice test. Every single time a person has to pencil in one of the little ovals in a multiple-choice test, he has to make a decision. So standard IQ tests are very much tests of not just intelligence but also decision-making ability (which may be considered an aspect of intelligence).

Since IQ tests done on hemispherectomy patients show little damage to IQ scores after removing half of a brain, we can only conclude that removing half of a brain has little or no effect on decision making ability. We would not expect such a thing to be true if your brain is what makes your decisions.

Reason #3: Some people who lost most of their brains could still make decisions normally.

Cases of removal of half of the brain by surgical hemispherectomy are not at all the most dramatic cases of brain damage known to us. There are cases of patients who lost almost all of their brains due to diseases such as hydrocephalus, a disease that converts brain tissue to a watery fluid. Many such cases were studied by the physician John Lorber. He found that most of his patients were actually of above-average intelligence. Similarly, a French person working as a civil servant was found in recent years to have almost no functional brain.

Such cases seem to show that you can lose more than 75% of your brain and still have a normal decision making ability. This argues against claims that your brain is what is making your decisions.

Reason #4: Split brain patients don't have their decision making harmed.

The two hemispheres of the brain are connected by a set of thick fibers called the corpus callosum. In rare operations this set of fibers is surgically severed. The result is what is called a split-brain patient. Despite the erroneous claims that are sometimes made about this topic, the fact is that such an operation absolutely does not result in anything like a split personality or a split consciousness or a split mind. Such an operation does not result in two minds causing conflicting decisions.

The scientific paper here (entitled "The Myth of Dual Consciousness in the Brain") sets the record straight, as did a scientific study published in 2017. The research was done at the University of Amsterdam by Yair Pinto. A press release entitled “Split Brain Does Not Lead to Split Consciousness” stated, “The researchers behind the study, led by UvA psychologist Yair Pinto, have found strong evidence showing that despite being characterised by little to no communication between the right and left brain hemispheres, split brain does not cause two independent conscious perceivers in one brain.”  Their study (entitled "Split brain: divided perception but undivided consciousness") can be read here. “We have shown that severing the cortical connections between the two brain hemispheres does not seem to lead to two independent conscious agents within one brain,” the researchers said.

In 2014 the wikipedia.org article on split-brain patients stated the following:

“In general, split-brained patients behave in a coordinated, purposeful and consistent manner, despite the independent, parallel, usually different and occasionally conflicting processing of the same information from the environment by the two disconnected hemispheres...Often, split-brained patients are indistinguishable from normal adults.”

In the video here we see a split-brain patient who seems like a pretty normal person, not at all someone with “two minds." And at the beginning of the video here the same patient says that after such a split-brain operation “you don't notice it” and that you don't feel any different than you did before – hardly what someone would say if the operation had produced “two minds” in someone. And the video here about a person with a split brain from birth shows us what is clearly someone with one mind, not two. In these interviews, every single time the split-brain patients answer questions normally, they are showing their ability to make decisions normally. The mere act of answering questions always involves decisions about what to say and how to say it.

But this is not at all what we should expect from the assumption that the brain is the source of our decisions. If that assumption were true, a split-brain operation should cause two independent sources of decision-making that would have a tendency to conflict with each other.

Reason #5: “Decision zig-zag” is almost never observed in pressure situations, but we would expect it to be very common if different parts of the brain (or halves of the brain) were causing decisions. 

Here's a quick mental test I'd like you to try. If you can answer all the questions real quickly, in a small number of seconds, it will tend to show you're a smart person who can think fast.  Try it. 

1. Pick a color.
2. Pick a number between 1 and 10. 
3. Pick a planet.  
4. Pick a continent.
5. Pick a city.

Did you skip the test? No fair. It's easy -- go back and try it. 

Now, if you are like 90% of my readers, you were able to do this exercise real quickly, in less than 10 or 15 seconds. But we would not expect such a thing to be possible if your brain was making your decisions. For in that case, we would expect that different parts of the brain would be coughing up different decisions, leading to a result rather like this:

Pick a color? Uh, red -- no green - no blue - okay, red! 
Pick a number? Uh, 8! No, 6 ! No -- uh, 4!  No, 2!
Pick a planet? Merc -- no Jupiter -- no, Earth, no wait...
Pick a continent?  North -- no South -- no Eur -- no Afri -- no Asia!
Pick a city? New ... uh, no Shang... no Paris -- oops, no Moscow! 

As mentioned above, people who have half of their brains removed in hemispherectomy operations can make decisions normally. It therefore cannot be maintained that a decision requires a full brain. If you think that brains make decisions, you are forced to the idea that part of a brain (half a brain or less) can make a decision. But such an idea makes us ask: should not then people be overwhelmed by conflicting decision signals, sent by different parts of a brain? 

Consider the organization of the brain. There are two identical halves. Under the hypothesis that a half of a brain or less can make a decision, we would therefore expect to see very often something that we can call "decision zig-zag."  This would involve behavior in which an organism was flipping back and forth between two possible decisions, as if two physical areas of the brain were conflicting with each other, coming to separate decisions. We would expect to see this particularly often in "coin flip" kind of decisions in which one choice is not obviously better than another. 



But we rarely see such behavior in humans, whenever there is time pressure. It is true that given some important choice, and given the luxury of time to deliberate, a person may kind of go back-and-forth in his mind about what to do. For example, if you are accepted by two different colleges, you may kind of go back-and-forth in your mind, first favoring one choice, then another.  But whenever there is a tight time pressure, and people know there is only a very short time for a decision, humans typically behave with very little indecision. 

Scores on standardized tests such as SAT tests are an excellent gauge of how very infrequently high-performing humans engage in "decision zig-zag" under pressure situations.  In the reading and writing part of an SAT test, a student has to answer more than 100 questions in less than two hours. The questions are multiple-choice questions, so doing the test requires making 100 decisions, each a decision about which of the choices to select. Each question typically requires 30 seconds or more of reading.  There is very little time for indecision. Everyone who performs very well on the test (in the 90th percentile or higher) is making 100 or more decisions (about which answer to choose) with very little indecision.  Under such pressure situations, humans do not at all perform as they would if different halves or different parts of their brains were sending them different signals about what to do.  Humans instead act like beings with a single unified mind.  It would seem that if different parts or halves of a brain were determining what decision to make, there would be so much indecision and "decision zig-zag" that the average SAT score in the US would be at least 200 points lower than it is. 

Reason #6: There is no particular region of the brain that seems to be crucial to non-muscular decision making.

Some particular regions of the brain have been strongly associated with particular functions. For example, we know that the brain stem is strongly associated with autonomic activity that keeps the heart and lungs working. Any major damage to the brain stem usually causes death. We also know that the visual cortex is strongly associated with vision. But no strong associations have been established between any part of the brain and calm non-muscular decision making.  By "non-muscular decision making" I mean the type of thing that goes on when you silently pick a number between 1 and 10 or silently choose in the morning what you will eat for dinner.  

To get an idea of how weak the neuroscience case is that your brain makes decisions, we can look at an article in Psychology Today entitled “The Neuroscience of Making a Decision.” After referring to some brain region that might be involved in addiction, which has no general relevance to the issue of whether brains make decisions, we are referred to a study claiming that the striatum is involved in decision-making. It's a study that used only 7 rats. Since this is less than half of the minimum number of animals per study group recommended for a modestly convincing result, the study provides no good evidence for a neural involvement in decision making.

Then the Psychology Today article refers to a brain-scanning study attempting to show that regions called the dorsolateral prefrontal cortex and the ventromedial prefrontal cortex have something to do with decision making. These are the two regions that are most commonly cited as being involved in decision making. A brain scanning study could only give robust evidence for some region being heavily involved in some activity if it were to show a strong percent signal change, rather than the weak signal change of only 1% or less that brain scanning studies typically show. In this case, the study does not even give a figure for the percent signal change. So it does not provide any robust evidence that the dorsolateral prefrontal cortex or the ventromedial prefrontal cortex have something to do with decision making.

Our Psychology Today article then concludes, having provided no real evidence that there is any such thing as a “neuroscience of decision making.”

This study examined six patients with damage to the dorsolateral prefrontal cortex, and found that they had an average IQ of 104, above the average of 100. Since filling out a written IQ test requires many decisions (one for each answer given), such a result is incompatible with claims that the dorsolateral prefrontal cortex is a part of the brain particularly involved in decision making. This study says, “We have studied numerous patients with bilateral lesions of the ventromedial prefrontal (VM) cortex” and that “most of these patients retain normal intellect, memory and problem-solving ability in laboratory settings.” The meta-analysis here says that the ventromedial prefrontal cortex is the region of the brain "most commonly implicated in moral decision making," but notes a "lack of a significant cluster of activation" in this area, meaning that it doesn't actually light up more during brain scans.

Failing to report any actual figures for percent signal changes (the number we need to know to judge whether some area of the brain is more involved in an activity), the same meta-analysis notes differences between its findings and the findings of other studies, highlighting how much these brain scan studies tend to conflict with each other. We read the following:

"As previously stated, Bzdok et al. (2012) found a cluster of activation in the rTPJ (BA 39), which we did not find. Another discrepancy between our ME activation clusters and Bzdok et al.’s (2012) for moral cognition are that they found a cluster of activation in the left amygdala, which we did not find. Also, Bzdok et al. (2012) reported activation in the precuneus, which was not found to be a cluster of significant activation for the ME experiments in our analysis."

Another example of a report of a supposed “neuroscience of decision making” is a Neuroscience News article here entitled “Researchers Discover Decision Making Center of Brain.” We again have a reference to a mere brain scanning study. But this time the study has a graph that gives us the percent signal change we need to judge whether robust evidence has been found. The graph shows that the percent signal change picked up by the brain scanning is only a fraction of one percent, about 1 part in 300. That's no good evidence for anything, and could easily be the result of pure chance fluctuations.

Similarly weak results are found in this study, which tried to use brain scanning to find some region of the brain more involved in decision making. The graph shows that the percent signal change picked up by the brain scanning is only a fraction of one percent, about 1 part in 300. That's no good evidence for anything, and could easily be the result of pure chance fluctuations.

In Figure 3 of the study here, we get a brain scanning result for the percent signal change in activity for the dorsolateral prefrontal cortex. The graph shows a signal change of only about 1 part in 300 (about .3 percent). That's no good evidence for anything, and could easily be the result of pure chance fluctuations. 
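For readers unfamiliar with the metric, percent signal change is computed by comparing the average signal during the task condition with the average signal during the baseline condition. The numbers below are made up for illustration; they are not data from any of the studies discussed:

```python
# Generic illustration of how percent signal change is computed from a BOLD
# time series (the numbers below are made up, not from the studies discussed).
import numpy as np

baseline = np.array([1000.2, 999.8, 1000.5, 999.9])    # rest-condition signal
task     = np.array([1003.1, 1002.6, 1003.4, 1002.9])  # decision-condition signal

percent_signal_change = 100 * (task.mean() - baseline.mean()) / baseline.mean()
print(f"Percent signal change: {percent_signal_change:.2f}%")   # about 0.3%
# A change of ~0.3% is about 1 part in 300 -- small enough that ordinary
# moment-to-moment fluctuations can produce it.
```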

Most of the studies that claim to show neural correlates of decision making are mainly finding either neural correlates of emotion (which can often be entangled with decision making) or neural correlates of muscle activation (often paired with decision making).  When I do a Google search for "neural correlates of motionless decision making," I am unable to find a single study testing such a thing. 

Reason #7: There is no convincing evidence of some type of change of brain state when a calm non-muscular decision is made.

By looking at brain scans, it is impossible to reliably predict when anyone made a non-muscular decision.  We should not be fooled by a certain type of brain scanning experiment with the following characteristics:

(1) The study will not be pre-registered, and will not publish in advance a specification of some particular type of brain activation signal that it is looking for (in some very specific little part of the brain) as a sign of when someone made a decision.
(2) The study will scan the brains of people as they make some decision in their minds. 
(3) Scientists will then examine the brain scans, looking for some particular tiny area of the brain that was a tiny bit more active when the decisions were made. 
(4) The study will involve only a small number of subjects, maybe 10, 15, 20 or 25. 

Let me explain why this type of study is not at all good evidence for anything. In any brain there will be random fluctuations in activity from moment to moment. Let us suppose a researcher has the freedom to compare any of 200 different little areas of the brain, looking for some area that shows an increase in activity during some particular moment (such as when a decision is made). We would expect that purely by chance there would be some area showing a tiny bit more activity during the particular moment being studied, even if the brain were not making any decision at all. Similarly, if I use a machine that can detect minute fluctuations in temperature in the livers of 20 people while they are making a decision, and I have the freedom to check 200 different little regions of the liver, I will probably be able to find some tiny liver region which (purely by chance) had a minutely higher temperature when some decision was made. But this would do nothing to show that livers make decisions.
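This multiple-comparisons point can be made concrete with a small simulation. It is purely illustrative: the "activity" values below are random noise containing no decision-related signal at all, and the region count and subject count are arbitrary choices of mine:

```python
# Illustration of the multiple-comparisons problem: even when regional
# "activity" is pure random noise containing no decision signal whatsoever,
# scanning 200 regions will almost always turn up some region that looks
# a little more active at the "decision" moment.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_subjects = 200, 20

# Simulated activity during "decision" and "rest" moments: identical noise.
decision = rng.normal(loc=100.0, scale=0.5, size=(n_regions, n_subjects))
rest     = rng.normal(loc=100.0, scale=0.5, size=(n_regions, n_subjects))

# Percent "signal change" per region, averaged across subjects.
change = 100 * (decision.mean(axis=1) - rest.mean(axis=1)) / rest.mean(axis=1)

best = change.argmax()
print(f"Most 'activated' region: #{best}, change = {change[best]:.2f}%")
# Typically prints a "signal change" of a few tenths of a percent -- the same
# sort of figure the studies above report -- produced by pure chance.
```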

A discussion of this issue can be found around page 23 of the technical paper here, where we read the following:

"With a plausible population correlation of 0.5, a 1000-voxel whole-brain analysis would require 83 subjects to achieve 80% power. A sample size of 83 is five times greater than the average used in the studies we surveyed: collecting this much data in an fMRI experiment is an enormous expense that is not attempted by any except a few major collaborative networks."

In other words, brain imaging studies tend to use only a fraction of the sample size they need, given the techniques they typically use. It is possible to do a reliable study with a small sample size, if you limit the analysis to only one small area of the brain. But that is almost never done.  
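The 83-subject figure quoted above can be roughly reproduced with a standard sample-size calculation for detecting a correlation, assuming a Bonferroni correction across 1000 voxel-wise tests and the usual Fisher z-transform approximation. This is a sketch of the reasoning, not necessarily the paper's exact procedure:

```python
# Rough reproduction of the quoted figure: sample size needed for 80% power
# to detect a population correlation of 0.5, with Bonferroni correction for
# 1000 voxel-wise tests (alpha = 0.05 / 1000). A sketch of the reasoning,
# not necessarily the paper's exact procedure.
import numpy as np
from scipy.stats import norm

r, power, alpha, n_tests = 0.5, 0.80, 0.05, 1000

z_alpha = norm.ppf(1 - (alpha / n_tests) / 2)    # two-sided, corrected threshold
z_beta = norm.ppf(power)
fisher_z = np.arctanh(r)                         # Fisher z-transform of r

n = ((z_alpha + z_beta) / fisher_z) ** 2 + 3
print(f"Required subjects: {np.ceil(n):.0f}")    # about 83
```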

On page 33, the same paper states the following, giving us a strong reason for skepticism about brain scanning studies:

"In short, our exploration of power suggests that across-subject whole-brain correlation experiments are generally impractical: without adequate multiple comparisons correction they will have false positive rates approaching 100%, with adequate multiple comparisons correction they require 5 times as many subjects than what the typical lab currently utilizes."

The study here is an example of the type of unconvincing study I have just discussed. The authors scanned brains, looking for a change in signal strength corresponding to whether some type of decision was made. Having the freedom to check any of 200 or more brain regions (since their study was not a pre-registered study announcing its intention to look in only one little place in the brain), the authors found one or two tiny regions showing an extremely small increase in activation when a decision was made. The difference in signal strength (as reported in Figure 1 and Figure 2) was only about .1 of 1 percent, which is about 1 part in 1000. But we would expect a result as good as that by chance, because of random variations in little parts of the brain, even if brains do not actually make decisions. So the study does nothing at all to provide evidence that brains are making decisions. The study says it did "whole brain analyses," but it used only 32 subjects, only a small fraction of the 83 subjects recommended above for a mere 1000-voxel "whole brain analysis" study.  

On page 68 of the book "Casting Light on the Dark Side of Brain Imaging," we read about another problem in brain scanning studies:

"Take a guess at how many ways we can analyze data from a single brain scan. Theoretically countless, practically at least 69,000 ways....Brain imaging data usually requires between 6 and 10 steps of general data preparation and analysis. Researchers can perform any of these steps in a variety of ways....Different choices in data processing and analysis can lead to widely divergent results: small variations can quickly sum to form large discrepancies. In some cases, researchers may run many variations of an analysis, but only report results that support their hypothesis. This practice can lead to biased publications that overestimate true effects." 

Here is the kind of thing we would like to have in order to have convincing evidence of greater brain activity when a non-muscular decision is made:

(1) There would have to be many replicated pre-registered studies that all showed that some particular region of the brain activated at a substantially higher level when a decision was made (more than just a fraction of 1 percent). 
(2) In the pre-registration declarations, published prior to the collection of any data, the study authors would have to announce that they were studying only one small region of the brain to see whether it activates more during decision making, rather than giving themselves the freedom to check any brain region they wanted in a "fishing expedition" kind of approach to produce signal variations we would expect by chance.  
(3) In the same  pre-registration declarations, published prior to the collection of any data, the study authors would have to commit to one exact method of data analysis, precisely spelled out, thereby depriving themselves of the freedom to keep "slicing and dicing" the brain scan data until they got a result supporting their hypothesis. 

Nothing like this has occurred. Instead we have a succession of little brain scan studies (usually with low statistical power) showing minute less-than-one-percent activation increases in some region that differs from study to study, studies in which researchers are free to look for some minute signal deviation in any brain region, and free to try dozens of data analysis methods until something that can be called a neural correlate turns up. The results of such studies are what we would expect to get by chance even if brains are not actually making decisions. 

In short, we have no robust evidence that brains make decisions. Nature never told us that decisions are made by brains. It is merely neuroscientists who told us such a thing, without ever having adequate evidence for such a claim. 

Reason #8: Humans can make decisions many times more quickly than they could if decisions were being made by brains subject to several severe signal-slowing factors and severe signal noise. 

Humans can make decisions very, very fast. Every time someone drives in the city, he is making important decisions very quickly, such as whether to brake at a particular moment. Every time someone speaks quickly in conversation, he is making many instantaneous decisions about what words to use. People such as quarterbacks and soccer players, standardized test takers, chess players in special "speed" matches, and contestants on the Jeopardy TV show are making decisions at a very fast speed, often instantaneously. 

If you tried my previous selection game, you probably made decisions at a rate of about one decision per one or two seconds (each time you picked one of the possibilities, you were making a decision about what to pick). If it took you 15 seconds to do that test, and about two-thirds of that time was spent on reading and memory recall, then you were making decisions at a rate of about one decision per second. A baseball hitter typically makes a decision (on whether to swing) in only a small fraction of a second. According to this scientific paper, "The average speech rate of adults in English is between 150 and 190 words per minute (Tauroza and Allison 1990), although in conversation this figure may rise considerably, reaching 200 wpm (Walker 2010; Laver 1994)."  Someone speaking in conversation at 200 words per minute is making decisions at a rate of more than three per second, each a decision about which word to use.  
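Converting the quoted speech rates into a decision rate is simple arithmetic:

```python
# Converting the quoted speech rates into decisions (word choices) per second.
for words_per_minute in (150, 190, 200):
    decisions_per_second = words_per_minute / 60
    print(f"{words_per_minute} wpm -> about {decisions_per_second:.1f} word choices per second")
# 200 wpm works out to more than three word choices per second.
```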

But we have strong reasons for believing that brains should not be fast enough for instantaneous decisions. The "100 meters per second" claim often made about brain signal speed is not at all accurate, as it completely ignores very serious slowing factors such as the 200-times-slower speed of transmission through dendrites, the very serious slowing factor caused by cumulative synaptic delays, and the additional very serious slowing factor caused by what is called synaptic fatigue. A realistic calculation of brain signal speed (such as I have made here) leads to the conclusion that brains should be far too slow to allow extremely rapid or instantaneous decisions, and that if a brain were making a non-muscular decision (not involving a reflex), it should take at least five or ten seconds for any such decision.  
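As a purely illustrative sketch of how such slowing factors compound along a chain of neurons, consider the following back-of-envelope calculation. Every parameter value here is a hypothetical placeholder of my own choosing, not a measurement; the detailed calculation referred to above is in the linked post:

```python
# Purely illustrative sketch of how signal-slowing factors compound along a
# chain of neurons. All parameter values are hypothetical placeholders, not
# measurements; see the linked post for the detailed calculation referred to above.
dendrite_length_m = 0.001        # assumed dendritic path per neuron (1 mm)
dendrite_speed_m_per_s = 0.5     # ~200 times slower than the "100 m/s" figure
synaptic_delay_s = 0.001         # assumed delay per synapse
fatigue_delay_s = 0.05           # assumed extra delay from synaptic fatigue
neurons_in_chain = 100           # assumed length of the processing chain

per_hop = (dendrite_length_m / dendrite_speed_m_per_s
           + synaptic_delay_s + fatigue_delay_s)
total_seconds = neurons_in_chain * per_hop
print(f"Per hop: {per_hop*1000:.0f} ms, total: {total_seconds:.1f} s")
# With these placeholder numbers, the chain takes about 5 seconds end to end.
```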

We also know that the brain has multiple sources of very severe signal noise, as discussed here. It would seem that all this noise would be a huge factor preventing different parts of a brain from quickly reaching a single decision, just as a classroom of 30 people would find it much harder to reach the same decision quickly if all of them were blaring different podcasts, videos and rock songs from their smartphones. 

An intelligent hypothesis about human decision making is that it comes from an immaterial aspect of human beings, what is commonly called a soul or spirit. A neuroscientist will protest that it is forbidden to postulate some important reality that we cannot directly see. But such a rule is not generally followed by scientists. Astrophysicists and cosmologists nowadays constantly claim that most of the universe consists of important realities we cannot see, what they call dark matter and dark energy -- both things that have never been directly observed by any method.