
Our future, our universe, and other weighty topics

Saturday, October 12, 2019

There Was No "How" in SciAm's "How Matter Becomes Mind"

The July 2019 cover of Scientific American was dominated by a big headline stating, "How the Mind Arises." Inside was a long article entitled "How Matter Becomes Mind." The article did nothing to actually explain how a mind could ever arise from a brain.

The article by Max Bertolero and Danielle S. Bassett tried to sell us on something called "network neuroscience," a term that has only been around for a few years. The article is behind a paywall, but in other articles that you can freely read online, we can read the claims of people who are adherents of this academic discipline. For example, in the article here ("Inside the Network Neuroscience Theory of Human Intelligence") and the article here ("Network Neuroscience Theory of Human Intelligence" by Aron K. Barbey) we can read theorizing similar to that in the Scientific American article.

At the top of the "Network Neuroscience Theory of Human Intelligence" article by Barbey, we have a claim that "general intelligence, g, emerges from individual differences in the network architecture of the human brain."  This claim makes no sense. Let us consider a single human being in isolation, not considering any other humans. There has to be some reason why this particular person has intelligence, understanding and consciousness. It makes no sense to claim that his intelligence arises from differences between his brain and the brains of other people.  If all of the people in the world were destroyed by some giant asteroid colliding with Earth, and a single astronaut was left in a space station, then there would be no brain differences between this person and other humans; for there would be only one human left. But the surviving human would still be intelligent. So what sense does it make to claim that intelligence arises from "individual differences" in brains? 

This seems to be the general approach of adherents of "network neuroscience":

(1) They start out by claiming or insinuating that individual parts of the brain can be strongly associated with particular mental functions.
(2) They perform various forms of "network analysis," applying mathematical techniques that have grown up over recent decades, mainly in reference to computer networks. In their analysis, regions of the brain are treated as nodes in a network.
(3) These thinkers claim that something in their network analysis provides insight as to how brains could think or understand things. 

There are several serious problems with this approach. The first is that there is no good evidence that particular parts of the brain are causes of particular mental functions relating to thinking or understanding. In his paper Barbey states, "Converging evidence from resting-state fMRI and human lesion studies strongly implicates the frontoparietal network in cognitive control, demonstrating that this network accounts for individual differences in adaptive reasoning and problem-solving – as assessed by fMRI measures of global efficiency and structural measures of brain integrity." The term "frontoparietal network" refers mainly to regions in the frontal and parietal lobes of the brain. This kind of claim that thought comes largely from the front part of the brain is debunked in my lengthy post "The Dubious Dogma That Thought Comes from the Frontal Lobes or Prefrontal Cortex," which includes links to many neuroscience papers.

When making the claim I just quoted, Barbey gives references for several neuroscience papers. One of those papers is a 2010 paper by Barbey and two others, entitled "Dorsolateral prefrontal contributions to human intelligence."  That paper found an average IQ of 91 for some 19 patients who had lesions in the dorsolateral prefrontal cortex. But the study here with a much larger sample tells us that 37 patients with damage to the dorsolateral prefrontal cortex had an average IQ of 97.4, only very slightly below average (Table 1 and Table 2).  There is no truth to insinuations that we can tell from fMRI studies that some particular part of the brain is responsible for some intellectual function. Contrary to the misleading visuals so often given by neuroscientists, it is not at all true that particular parts of the brain are activated far more strongly than other regions when some particular intellectual task is done (outside of the occipital lobe used for vision, the differences tend to be only about a half of one percent, about what we would expect from random fluctuations). 

In the article Barbey starts talking about small-world networks. The wikipedia.org article on small-world networks describes them as a type of network in which "most nodes can be reached from every other node by a small number of hops or steps."

A small-world network
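
The small-world idea is easy to demonstrate with a toy simulation. The sketch below (pure Python; the network size and shortcut count are arbitrary illustrative choices, not brain measurements) builds a ring lattice with purely local wiring, then adds a few random long-range shortcuts, and measures the average number of hops between nodes in each case:

```python
import random
from collections import deque

def avg_hops(adj):
    """Mean shortest-path length (in hops) over all reachable node pairs, via BFS."""
    total = pairs = 0
    for src in range(len(adj)):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k):
    """A ring of n nodes, each linked to its k nearest neighbors on each side."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for step in range(1, k + 1):
            j = (i + step) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

random.seed(0)
n = 500
local_only = ring_lattice(n, 2)      # purely local wiring: many hops between distant nodes
rewired = ring_lattice(n, 2)
for _ in range(50):                  # add a few random long-range shortcuts
    a, b = random.sample(range(n), 2)
    rewired[a].add(b)
    rewired[b].add(a)

print(avg_hops(local_only))          # large: roughly n/8 hops on average
print(avg_hops(rewired))             # much smaller: the "small-world" effect
```

Purely local wiring yields an average path of dozens of hops; a handful of random long-range shortcuts collapses that average to a small number. This is the sense in which a network is "small-world," and the question at issue is whether neuron-level brain wiring actually has such long-range shortcuts.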

Barbey tries to persuade us that a brain is a small-world network. This analysis is incorrect. The only natural and straightforward way to analyze the brain from a network perspective is to consider individual neurons as the nodes of the network.  Considered in that way, the brain is not at all a small-world network. As this paper says, "If considered at the cellular level, brain networks are also unlikely to form classical small-world networks."  The actual number of hops needed to travel from one end of the brain to another is in the hundreds or thousands, and it is not at all a small number such as five or six. 

Figure 12 in the paper here gives a graph plotting the probability of a connection between two neurons against the distance between them. The graph shows that at a distance of 500 microns, this connection probability falls to essentially zero. Now, a human brain is 15 centimeters across, which is 150,000 microns. Such a distance is equal to 300 lengths of 500 microns. So we can very roughly calculate that the number of hops needed to get from one side of the brain to the other is 300 or more. Considering such facts, we cannot at all judge the brain to be a small-world network, which can be traversed in fewer than 10 hops (as in the visual above).
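
The back-of-the-envelope estimate above can be written out explicitly (the 500-micron connection range and 15-centimeter brain width are the figures quoted in this paragraph):

```python
# Writing out the rough estimate from the paragraph above.
# Both figures are the ones quoted in the text, not new measurements.
connection_range_um = 500          # microns: beyond this, neuron-to-neuron
                                   # connection probability is essentially zero
brain_width_um = 15 * 10_000       # 15 cm expressed in microns (1 cm = 10,000 microns)

min_hops = brain_width_um / connection_range_um
print(min_hops)                    # prints 300.0 -- hundreds of hops,
                                   # nothing like a small-world network
```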

So how do people such as Barbey state claims that the brain is a small-world network? Rather than judging a neuron to be a node of a brain network, which is the natural way to consider things, they artificially and arbitrarily declare certain regions of the brain to be the nodes.  

Barbey states this:

"Recent advances in network neuroscience further elucidate the functions afforded by a small-world architecture, motivating new insights about how brain network topology and dynamics account for individual differences in specific and broad facets of general intelligence, represented by the Network Neuroscience Theory."

Claiming that the brain is a small-world network is erroneous, unless you decide that the nodes of a brain network should be brain regions you have arbitrarily selected, rather than individual neurons, which are the natural nodes of a brain. And even if the brain were a small-world network, or any other type of network, that would not clarify how a brain could generate a thought or a concept or a belief or some understanding of something.

In Barbey's paper there is a diagram with the strange title "Hierarchical structure of general intelligence."  There is actually nothing hierarchical or structural about general intelligence.  It is an unstructured thing without anything like the parent-child relations that are found in hierarchies. 

After defining an ICN as an "intrinsic connectivity network," Barbey towards the end of his paper tries to put his theory in a nutshell. He states, "In summary, Network Neuroscience Theory proposes that general intelligence depends on the dynamic reorganization of ICNs – modifying their topology and community structure in the service of system-wide flexibility and adaptation." This has no weight as an explanation for how a brain could produce a mind, or how neurons could produce thought or understanding. I could give countless examples of cells below the neck that dynamically reorganize, and people don't believe that such cells cause thought. There is no reason why we should think that some reorganization effect in a brain can explain thinking or intelligence or understanding.  Moreover, there isn't very much physical evidence for reorganization effects in a brain (although there are many cases of minds rebounding after severe brain damage, there is little physical visual evidence of brains restructuring to achieve such a rebound).   

The Scientific American article "How Matter Becomes Mind" by Bertolero and Bassett offered similar content that did nothing to explain how a brain could produce a thought, an idea, or an understanding of something. The article made some dubious, poorly substantiated claims that the brain consists of modules. It sounded somewhat like the discredited old theory known as phrenology, which is described in a wikipedia.org article as being "based on the concept that the brain is the organ of the mind, and that certain brain areas have localized, specific functions or modules."

The authors say, "It is the music your brain plays that makes you you." If that were true, then each day you would be a different you, because each day your brain "plays different music" by transmitting different electrical signals. But you stay you with remarkable constancy from year to year, despite such daily fluctuations in "brain music." And in many cases a person's heart will stop, and the brain will "stop playing music" as its electrical activity very quickly ceases. But in such cases the person will often continue to have vivid experiences (what are called near-death experiences). So you can't be some "music" your brain is playing.

After making the statement above, the authors went on and on with a musical analogy, continuing it for several paragraphs. Previously the authors had referred to "massive orchestras of neurons." Now they referred to "musical compositions" played by brain modules, to "the symphony in our head," and to "the brain's music." Music is a sound, not a thought, an idea or an understanding. So even if it were true that the brain is constantly "playing music," this would do nothing to explain how matter could give rise to mind, how a brain could produce an idea, or how neurons could produce a thought. A modern computerized keyboard can be set on a continuous loop, to continuously play music; but the keyboard doesn't have a mind or thoughts or understanding to the slightest degree.

The Scientific American article by Bertolero and Bassett refers us to some brain scanning data they used. They state, "To better understand what was happening, we used publicly available data from a landmark study known as MyConnectome, in which Stanford University professor Russell Poldrack personally underwent imaging and cognitive appraisals three times a week for more than a year."  They seem to have drawn a conclusion (about re-routing of brain connections) based on some brain scans of a single individual, and that's kind of like drawing a conclusion about baseball hitters based on how often your spouse gets a base hit. 

Connections and networks do nothing to explain cognition.  Let us imagine a System X which is a dense network of 100 trillion nodes (each consisting of a chip or transistor). Imagine that in System X each node is connected to a billion other nodes.  In this System X the number of nodes is greater than the number of cells in the brain; and also the number of connections per node is many thousands of times greater than in the brain. But despite all this network connectivity which is so much greater than in the brain, there would not be the slightest reason for thinking that this System X would be capable of having a thought or an idea or an actual understanding of something.  See my post "Physical Connections Do Nothing to Explain Cognition" for more on the futility of trying to use connections or networks to explain the human mind. 
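
To make the comparison concrete, here is the thought experiment's arithmetic, using commonly cited rough estimates for the brain (about 86 billion neurons and on the order of 10,000 synapses per neuron; these figures are approximations used only for illustration):

```python
# The imagined System X versus commonly cited rough estimates for the brain.
# The brain figures are textbook-style approximations, used only for illustration.
system_x_nodes = 100 * 10**12               # 100 trillion nodes
system_x_links_per_node = 10**9             # 1 billion connections per node

brain_neurons = 86 * 10**9                  # ~86 billion neurons (common estimate)
synapses_per_neuron = 10_000                # ~10^4 synapses per neuron (rough)

print(system_x_nodes / brain_neurons)                  # over 1,000 times more nodes
print(system_x_links_per_node / synapses_per_neuron)   # 100,000 times more links per node
```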

In the articles I have mentioned, there is some reference to some studies that try to show that some small fraction of intelligence can be predicted by some analysis of brains or brain networks or brain connections.  Such studies are typically dubious for a variety of reasons.  The scientists involved in the studies will typically be free to play around with any of hundreds of different ways of analyzing brain scans and crunching the fMRI data, until they find one that may seem to predict a small fraction of intelligence.  If one such way is found, it is not very impressive; for we might expect them to get a little success on some tiny fraction of the analytic permutations, purely by chance. 

Consider two theories. The first (call it Theory A) is that your mind is produced by your neurons and the connections between them. The second (call it Theory B) is that your mind is not at all produced by your brain, and that your mind is an aspect of your immaterial soul. Under this Theory B it might be that the brain acts as a kind of valve to limit your mind.  Without our brains we might have minds fit for godlike thoughts and brilliant cosmic contemplation, but our brains may limit our minds so that we feel comfortable doing mainly the tiny little chores of earthly living.  Under such a case, it is quite possible that brain parameters might affect intelligence, for they might affect how much of a "valving effect" occurs to limit your intelligence. So if you were to show some limited relation exists between brain states (or brain wiring) and intelligence, you do nothing at all to show that the brain is the source of the human mind. Such a finding would be equally compatible with Theory A and Theory B.   A study purporting to give such a finding would neither show how matter could produce a mind, nor would it actually support the claim that matter does produce a mind. 

As if to try to compensate for the weakness of their explanations, the article made sure to use a gigantic font for the title "How Matter Becomes Mind."  I've never seen a font so big in a magazine; the font was so big the title took up half of the page.  I offer this advice to scientists: don't so often claim to understand things you do not, and if you make such claims, don't state them in a gigantic font. 

Tuesday, October 8, 2019

The Measly Results of Origin-of-Life Researchers

For many decades, scientists have been doing research trying to understand how life could have originated on this planet. They have never really got to first base in their efforts. But based on the hype you read in the press, you might think that these scientists are hitting doubles, triples and grand-slam home runs. For several decades I have read more outrageous exaggerations and misleading distortions on this topic than on perhaps any other scientific topic. It didn't seem very different when I visited a science news site on Saturday, and saw two new articles on this topic.

The first article was one on nature.com, an article announcing “Lab-made primordial soup yields RNA bases.” To put this into perspective, we need to first consider some of the requirements for the origin of life:

Large-scale requirements, with the chemical sub-requirements of each:

(1) 100+ proteins, each consisting of 100 or more amino acids arranged in just the right way to achieve a functional end. Sub-requirements: all twenty amino acids used by earthly life must be available; most of them are far more complex than the simplest ones (alanine and glycine); and the amino acids must be exclusively left-handed, as we see in earthly life.
(2) Genetic information in a molecule such as DNA, specifying the amino acid sequences of these 100+ proteins. Sub-requirements: deoxyribose sugar molecules, phosphate group molecules, and the four DNA nitrogen bases (adenine, guanine, cytosine and thymine).
(3) RNA molecules. Sub-requirements: ribose sugar molecules and the four RNA nitrogen bases (adenine, guanine, cytosine and uracil).
(4) A genetic code. Sub-requirements: additional complex requirements.

Now, it should be rather clear from the list above that merely making the four RNA nitrogen bases in a lab does hardly anything to explain the origin of life. It's rather like merely showing someone a handful of nails after he asks, “Show me that you are all set to build a house.”

A long-standing problem in trying to explain the origin of RNA molecules is that such things require ribose sugar molecules. But in the “primordial soup” imagined by scientists (existing before life), ribose sugar molecules would not be stable. This was demonstrated by a scientific paper which stated the following:

The generally accepted prebiotic synthesis of ribose, the formose reaction, yields numerous sugars without any selectivity. Even if there were a selective synthesis of ribose, there is still the problem of stability. Sugars are known to be unstable in strong acid or base, but there are few data for neutral solutions. Therefore, we have measured the rate of decomposition of ribose between pH 4 and pH 8 from 40 degrees C to 120 degrees C. The ribose half-lives are very short (73 min at pH 7.0 and 100 degrees C and 44 years at pH 7.0 and 0 degrees C)... These results suggest that the backbone of the first genetic material could not have contained ribose or other sugars because of their instability.

What this paper told us is that at the very warm temperature needed for any kind of “origin of life from chemicals” scenario, ribose sugars are unstable, lasting about 73 minutes. So it is very hard to imagine any credible scenario in which a ribose sugar would combine with an RNA nitrogenous base such as adenine to make one of the building blocks of RNA. Having only the RNA nitrogenous bases would be like trying to build a big hotel with only lumber and no metal, nails or screws, or trying to build such a hotel with only nails or screws, and no lumber.


The Nature.com article quotes a scientist named Thomas Carell insinuating that his experiment tells us something hopeful about the probability of life naturally arising on other planets. This is not at all correct. In this article, the distinguished chemist James Tour analyzes a previous 2016 experiment by Carell on the same topic. Tour describes the experiment as follows:

"Starting with 2,4,5,6-tetraaminopyrimidine-sulfate and suspending it in formic acid and sodium formiate, the mixture was heated to 101°C for two hours. The solvent was evaporated under reduced pressure and water was added to dissolve the product. Concentrated ammonium hydroxide was then used to raise the mixture to pH 8. The solution was cooled overnight at 4°C, yielding substantial amounts of formylated tetraminopyrimide as a crystalline solid. This was isolated from the other products. Then, they allowed the formylated product to interact with 15 equivalents of homochiral ribose by grinding the two together thoroughly in the solid state and heating the mixture in an oven at 100°C for eight hours. The team purchased its ribose...There is no reason to suppose that nature could have commanded these exquisite laboratory skills."

So the ribose sugar (a key part of a nucleoside, as shown in the diagram above) was purchased, not experimentally created; and some very contrived technique was used that we would not expect to find occurring naturally.  Yet the study was written up with a title claiming a "nucleoside formation pathway," as if some natural path to such a thing had been found. 

Was an equally cheesy method used in the latest experiment? The article on Nature.com seems to suggest a similar shortcoming in Carell's latest experiment, for the article merely announces that 4 nitrogen bases were produced (without claiming that a full nucleoside was produced); and it also says, "The next major problem Carell wants to tackle is what reactions could have formed the sugar ribose, which needs to link to nucleobases before RNA can form.”  That sure makes it sound like the experiment did not actually produce a nucleoside (including a sugar ribose) from scratch, and that the experiment therefore did not at all produce one of the building blocks of RNA in conditions simulating the early Earth (but merely some building blocks of the building blocks of RNA). Since the abstract of the latest Carell experiment states, "We report the synthesis of the pyrimidine nucleosides from small molecules and ribose, driven solely by wet-dry cycles," we can assume that the ribose was not naturally generated through any conditions simulating the early Earth, since that is not claimed in the abstract. 

In a 2018 paper Thomas Carell and his colleagues claimed that "wet-dry cycles" can "enable the origin of nucleosides."  But the paper described a very artificial-sounding technique. First, some salts were obtained with a variety of methods like this:

"1-methylguanidine (2a) hydrochloride salt (10.95 g, 100 mmol, 1 eq.) and malononitrile (1) (6.65 g, 100 mmol, 1 eq.) was dissolved in H2O (230 mL, containing 6 mL of AcOH) in a 500 mL beaker. A solution of NaNO2 (7.00 g, 101 mmol, 1.01 eq., in 20 mL of H2O) was slowly added at room temperature. After stirring at room temperature for 2 h the reaction mixture was kept at 45 °C in an oil bath for 3–4 days open to the air until the mixture was concentrated to about 100 mL. The reaction mixture was placed in a fridge overnight at 8–10 °C. The formed yellow crystals were filtered off to give the desired product (6.70 g, 40 mmol, 40%)."

Then (according to the supplementary materials) ribose sugar was added to this strange mixture:

"Ribose (56.5 mg, 0.375 mmol, 15 eq.) was thoroughly ground up with the corresponding FaPy compound 5a-h (0.025 mmol, 1 eq.) and heated to 100 °C for 8 h in an oven. The remaining reaction mixture was taken up in basic solution (3 mL, 0.5 M Et3N) and heated in a sealed tube (ACE 15 mL pressure tube) at 100 °C for several days (see below)."

These are artificial laboratory procedures that can hardly be compared to anything that might have occurred naturally, and the ribose is not arising naturally, but is being added. The chemists are simply adding purchased, off-the-shelf ribose, not producing ribose in a simulation of early Earth conditions. So in no sense can an honest claim be made that the building blocks of RNA (nucleosides that include ribose) have been produced in a simulation of early Earth conditions.

On the same day that www.realclearscience.com referred us to this www.nature.com article, it linked to a cockeyed Universe Today article about origin-of-life research. The story starts out by stating that the Cassini-Huygens space probe “delivered the most compelling evidence to date of extra-terrestrial life.” This is entirely false. The probe did nothing of the sort.

The article then tries to back up its imaginary claim by stating this:

“For instance, a team of German scientists recently examined data gathered by the Cassini orbiter around Enceladus’ southern polar region, where plume activity regularly sends jets of icy particles into space. What they found was evidence of organic signatures that could be the building blocks for amino acids, the very thing that life is made of!”

Once again, meager and measly results are being hyped to sound like something big. First, the article does not say that amino acids were actually found, but merely talks about the “building blocks for amino acids.” Second, the article does not actually say that “building blocks for amino acids” were found, but that something was found that “could be” the building blocks for amino acids.

The simpler amino acids are fairly simple molecules. For example, glycine and alanine (two of the simpler amino acids) are about half as complex as the nucleoside depicted in the diagram above. So finding “building blocks for amino acids” is an unimpressive result, involving finding molecules of only a few atoms. The building blocks of life are proteins. So the tiny molecules in question (possible building blocks of amino acids that are the building blocks of proteins that are the building blocks of life) merely qualify as possible building blocks of the building blocks (amino acids) of the building blocks (proteins) of life. Of course, finding a mere building block of a building block of a building block of something is unimpressive, and gives you no reason whatsoever to suspect that such a thing could naturally appear. Similarly, finding by chance a mere pixel-sized dot that is the building block of a letter or character that is the building block of a book gives you no reason at all to think that books can form by chance.

Origin-of-life researchers have failed to produce a single nucleic acid in any experiment accurately simulating conditions of the early Earth, and have not even produced from scratch a single building block of a nucleic acid (a nucleoside) in such an experiment. Such researchers have also failed to produce a single functional protein through any such experiment. In experiments accurately simulating conditions of the early Earth, origin-of-life researchers have failed to produce more than a few of the 20 types of amino acids used by living things. There is a problem with some of the famous experiments attempting to produce amino acids by sending energy through gases simulating the early Earth's atmosphere. The gaseous mixtures used in such experiments often do not match the early Earth's atmosphere as it is now understood.

An article in Scientific American discusses the famous Miller-Urey experiment that produced some amino acids in a brown broth:

“But the Miller-Urey results were later questioned: It turns out that the gases he used (a reactive mixture of methane and ammonia) did not exist in large amounts on early Earth. Scientists now believe the primeval atmosphere contained an inert mix of carbon dioxide and nitrogen—a change that made a world of difference. When Miller repeated the experiment using the correct combo in 1983, the brown broth failed to materialize. Instead, the mix created a colorless brew, containing few amino acids.”

A 2017 experiment attempted a Miller-Urey experiment with a more realistic mixture of gases. It produced only a single amino acid: glycine, the simplest one. As shown in Table 1, the glycine appeared only in tiny trace amounts of 40 parts per million. The other 19 amino acids used by living things were not produced. For five or six decades following the 1950s Miller-Urey experiment, humanity was told the fairy tale that scientists had shown it was relatively easy for the early Earth to have made amino acids. It now seems that 19 out of 20 such amino acids cannot be produced in experiments realistically simulating the early Earth's atmosphere.

At least 8 of 20 amino acids used by living things (arginine, asparagine, cysteine, glutamine, histidine, lysine, tryptophan and tyrosine) have never been produced by any experiment simulating early Earth conditions, including those that used inappropriate gaseous mixtures. Whenever amino acids are produced in experiments trying to simulate the early Earth's atmosphere, the amino acids are 50% left-handed and 50% right-handed. In contrast, the amino acids used by Earthly organisms are all left-handed. Accordingly, attempts to experimentally produce amino acids like those in living things can be called 100% unsuccessful. In experiments simulating the early Earth, no one has ever produced a liquid mixture that had even one type of amino acid existing in exclusively left-handed form.
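
The chirality problem can be put in numbers. If each amino acid in a chain were independently 50% left-handed and 50% right-handed (the mix such experiments produce), the chance of an all-left-handed chain shrinks exponentially with length; the sketch below uses an illustrative 100-residue protein:

```python
from math import log10

# Chance that a chain of n amino acids is all left-handed, if each residue is
# independently 50% left-handed / 50% right-handed. The protein length n is an
# illustrative assumption, not a measured value.
n = 100
p_all_left = 0.5 ** n
print(f"about 1 in 10^{round(-log10(p_all_left))}")   # prints: about 1 in 10^30
```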

When something is produced in origin-of-life experiments, it is often in negligible amounts such as an invisible millionth of a gram. But you won't learn that by reading the abstract of a paper, which will typically just brag about having produced something, without mentioning the just-barely-measurable quantity. 

The simplest living thing would consist of very many thousands of amino acids arranged in just the right way to achieve about a hundred functional proteins.  If it were ever shown that all twenty amino acids in living things can be naturally produced in a place like that of the early Earth, this would do nothing to show the mathematical possibility of a life form arising from random combinations of amino acids, which would be comparable to the likelihood of sea shells and driftwood brought up by the tide forming randomly into long, complex, functional instructions (like a long set of instructions on how to build a jet fighter). Showing that all of those twenty amino acids could naturally form would be rather like merely showing that a monkey placed at a keyboard or typewriter can strike each of twenty different keys, and would do nothing to show that natural events can randomly produce the vast amount of functional information required for life to originate.  
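
The monkey-at-the-keyboard point can be expressed as simple arithmetic. The protein length and the allowance for functional variants below are illustrative assumptions, not measured values:

```python
from math import log10

# The monkey-at-the-keyboard point as arithmetic. The protein length and the
# (very generous) allowance for functional variants are illustrative assumptions.
alphabet = 20        # amino acid types used by earthly life
length = 100         # residues in a modest protein

total_log10 = length * log10(alphabet)
print(round(total_log10))        # prints 130: about 10^130 possible sequences

# Even if an enormous 10^40 of those sequences counted as functional variants,
# the odds of hitting one at random would still be about 1 in 10^90.
odds_log10 = total_log10 - 40
print(round(odds_log10))         # prints 90
```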

No hint of such a difficulty has been communicated in 99% of the science articles that have been written about the origin of life. The majority of these articles speak as if the origin of life could occur if some building blocks merely piled up, without ever communicating the mountainous organization requirements for life to originate, a leap comparable to scattered auto parts in a junkyard forming into a long row of functioning cars after a tornado passes by.

There is only one question we should be asking origin-of-life researchers who have produced such meager and measly results, results that come nowhere close to substantiating the primordial soup hypothesis (the idea that life can naturally form from chemicals). That question is: is that all you got?

Postscript: Another representative example of recent origin-of-life studies and their announcements is a study by Japanese scientists, which was touted with a press release entitled, "Life's building blocks may have formed in interstellar space." The experiment (trying to simulate "giant gas clouds") merely produced nucleobases (nitrogenous bases such as adenine) that are not at all "building blocks of life" but at best building blocks of the building blocks of life. Getting this result required very complicated manipulations with various high-tech instruments (and low-tech items such as quartz wool and glass vials) that presumably would not be floating about in interstellar space. The press release makes no mention of how much of these nucleobases was produced. A table in the scientific paper tells us that none of the four nucleobases was produced in a quantity greater than 4 parts per million, with the average yield being about 1 part per million. In the press release a scientist shamelessly hypes this very meager and very artificially produced result, saying, “This result could be key to unraveling fundamental questions for humankind."

Friday, October 4, 2019

Everett "Parallel Universes" Nonsense Is the Ultimate in Escalation of Commitment

Thanks to the diligent book-hawking efforts of a certain errant professor, you may have read recently about what is called the Everett Many Worlds theory, a theory that is claimed by its proponents to be “an interpretation of quantum mechanics.” The theory holds that the universe is constantly splitting up into an infinite number of copies of itself, so that every possibility (no matter how unlikely) can be realized. The theory has a name that makes it sound not so unreasonable (with all the planets being discovered, the phrase “many worlds” doesn't sound too far-fetched). But the name “many worlds” doesn't describe the nutty idea behind the theory. The theory would be more accurately described as the theory of infinite duplication, because it maintains that the universe is duplicating itself every second.

There is no evidence whatsoever for this theory, which is endorsed by only a minority of theoretical physicists. The Everett Many Worlds theory has been rejected by physicists such as Adrian Kent, Chad Orzel, Sabine Hossenfelder, T. P. Singh (who says it has been falsified), Lubos Motl (who heaps unsparing scorn on those who advance the theory) and Casey Blood, who calls it “fatally flawed.” No one has ever observed a parallel universe. We also cannot plausibly imagine such a theory ever being verified. To verify the theory, you would need to travel to some other universe to verify its existence, which is, of course, impossible. Even if you did travel to such a universe, you could never verify the idea that every possibility is occurring in other parallel universes.

Why would someone believe something as nutty as the Everett Many Worlds theory? Belief in the theory seems to be an extreme example of what psychologists call “escalation of commitment.” Escalation of commitment occurs when someone doubles down, triples down or quadruples down on some monetary or belief commitment he has already made, typically in response to evidence suggesting that the original commitment was not a wise one. Here's an example:

Wife: Why did you invest so much money in the stock of Worldwide Widgets? The price has dropped 50% since you bought all those shares.
Husband: I totally believe in Worldwide Widgets! Why, now that you mention the company, I'm going to go and invest twice as much money in it.

There's a British expression to describe this type of behavior: “in for a penny, in for a pound” (the pound being the British currency unit worth 100 pennies).  An American version of the same phrase might be "in for a buck, in for a bundle," the word "bundle" meaning a large amount of cash. 

The classic case of escalation of commitment was the Vietnam War. Modest military commitments were made by the US in the war around 1964, under President Lyndon Johnson. But things started to go bad. Then the commitments of costs and troops escalated dramatically around 1965 and 1966. Despite an all-but-unwinnable military situation, more and more US soldiers (eventually more than 500,000) were sent to South Vietnam. Also, more and more bombs were dropped, until the total tonnage dropped vastly exceeded all the bombs dropped in World War II. It was as if Lyndon Johnson just couldn't bring himself to admit that he had made a mistake by getting involved in the war. He kept doubling down and tripling down on his original commitment. The US ended up losing the war.

Escalation of commitment can also occur in regard to beliefs. Once a person has committed himself to some belief, he may then start adopting additional beliefs, perhaps extravagant and irrational beliefs, if such additional beliefs seem to be needed to shore up or defend his original belief decision.  An interesting example of escalation of commitment can be found in the case of physicists who appeal to a multiverse to try to explain away cosmic fine-tuning.

During the twentieth century, it was discovered that the fundamental constants of the universe are remarkably fine-tuned to allow for the existence of life forms such as ourselves. In many cases it was found that some small change in the universe's fundamental constants would have ruled out the possibility of life in our universe. A dramatic example is the exact equality (to more than eighteen decimal places) of the absolute value of the proton charge and the electron charge. Each proton has a mass 1836 times greater than the mass of each electron. But the electrical charge on each proton is the exact opposite of the electrical charge on each electron. According to page 6 of this textbook, a scientific experiment "proved that the proton and the electron do not differ in magnitude of charge by more than 1 part in 10^20."  Were it not for this exact match, which we would not expect to occur in 1 in 100,000,000,000,000,000,000 random universes, the chemistry of life would be impossible, and planets would not be able to hold together (as the electrical repulsion of their particles would vastly exceed the gravitational force that holds them together).
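A back-of-the-envelope check of my own (not from the article or the textbook cited) shows why the match must be so precise. The electric force between two protons exceeds the gravitational force between them by a factor of roughly 10^36, so any residual per-proton charge larger than about one part in 10^18 would let electric repulsion overwhelm gravity in bulk matter:

```python
# Rough check of the required proton/electron charge match.
# If the charge magnitudes differed by a fraction eps, every "neutral" atom
# would carry a net charge of roughly eps * e per proton, and bulk matter
# would repel itself electrically. Gravity can hold a planet together only
# if that residual repulsion is weaker than gravitational attraction.

k = 8.988e9       # Coulomb constant, N*m^2/C^2
G = 6.674e-11     # gravitational constant, N*m^2/kg^2
e = 1.602e-19     # elementary charge, C
m_p = 1.673e-27   # proton mass, kg

# Ratio of electric to gravitational force between two protons
# (the distance cancels out, since both forces fall off as 1/r^2):
ratio = (k * e**2) / (G * m_p**2)
print(f"electric/gravitational force ratio: {ratio:.2e}")   # ~1.2e36

# A residual charge of eps*e scales the electric force by eps^2, so gravity
# dominates only if eps is smaller than 1/sqrt(ratio):
max_eps = ratio ** -0.5
print(f"maximum tolerable charge mismatch: {max_eps:.1e}")  # ~9e-19
```

Since 9 x 10^-19 is about one part in 10^18, this crude estimate is consistent with the "more than eighteen decimal places" figure above.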

Faced with many cases such as these, many physicists began to evoke the idea of a multiverse: that maybe there are an infinity or near-infinity of universes, perhaps so many that we would expect one of them to have been as fortunate as our universe. Such an idea was a most dramatic example of escalation of commitment. The commitment made by the typical physicist was a commitment to the idea that the universe was merely the product of blind, random forces. The discovery of cosmic fine-tuning threatened that commitment, seeming to show that idea was dead wrong. Rather than retract their commitment to an idea that seemed to be discredited by facts, some physicists “super-escalated” their commitment, by evoking the infinite baggage of an infinity of universes. In for a penny, in for a pound – or, in the case of the multiverse idea, in for a seemingly infinite number of pounds.

Of course, the idea of the multiverse is quite worthless in defending the claim that ours is a random, purposeless universe. The idea of a multiverse may be of some value in defending an idea that nobody cares about, the idea that some universe in a vast collection of universes might be coincidentally fine-tuned. But the idea does nothing to defend the idea that our universe is coincidentally fine-tuned. You do not increase the probability of any one random trial being successful by evoking the possibility that there were a gigantic number of trials. For example, if you imagine a trillion lottery players on a trillion planets, each betting a large amount that the number 8328293232 will win the lottery, this increases the chance that one of these players will win with that bet, but does absolutely nothing to increase the chance that you will win making such a bet. So the multiverse idea does nothing to help out the person who has made the ideological commitment to the idea that our universe is random and purposeless. The probability of our universe having all of the physics long-shots needed for biological habitability is exactly the same microscopic probability, regardless of whether there is only one universe or an infinite number of them.
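The lottery point above can be checked with a few lines of arithmetic (a sketch of my own; the odds used are purely hypothetical). Adding more independent players raises the chance that someone wins, but leaves any one player's chance exactly where it was:

```python
# Chance any single ticket wins (hypothetical odds for illustration):
p = 1e-10

for n_players in (1, 10**6, 10**12):
    # Chance that AT LEAST ONE of n independent players wins:
    p_someone = 1 - (1 - p) ** n_players
    # Your own chance of winning: unaffected by how many others play.
    p_you = p
    print(f"{n_players:>13,} players: "
          f"P(someone wins) = {p_someone:.3g}, P(you win) = {p_you:.3g}")
```

With a trillion players the chance that someone wins approaches certainty, while the chance that you in particular win stays fixed at 1 in 10 billion. In the same way, a multitude of universes cannot raise the probability that this particular universe is fine-tuned.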

So the multiverse is an example of an utterly futile escalation of commitment. But completely futile escalations of commitment are very common, such as when an investor buys twice as many shares of a stock he previously bought, a stock with a price that is now plunging. An escalation of commitment is usually not a rational thing. It is typically an irrational impulse, an act of desperation to reduce or stifle someone's uncomfortable suspicions that maybe a previous decision or commitment he made was unwise.

There is another case where some physicists have made a gigantic escalation of commitment. Under quantum mechanics, it was maintained that until it is observed in one location, the position of an electron near an atomic nucleus exists as a kind of “probability cloud.” Then when the electron is observed, this probability cloud “collapses,” and the electron is observed in one particular spot in the probability cloud. This is sometimes called a collapse of a wave function.

probability clouds
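As a toy illustration of my own (not a physics simulation), the "collapse" described above can be pictured as sampling a single definite outcome from a spread of probabilities given by the wave function (the Born rule):

```python
import random

# Before measurement: the electron's position is described only by a
# spread of probabilities (hypothetical |amplitude|^2 values summing to 1).
positions = ["left", "center", "right"]
probabilities = [0.2, 0.5, 0.3]

# One "measurement": the whole cloud of possibilities resolves to a
# single definite outcome, weighted by those probabilities.
outcome = random.choices(positions, weights=probabilities, k=1)[0]
print("observed position:", outcome)

# Over many repeated measurements, the outcome frequencies recover the
# shape of the original probability cloud.
counts = {pos: 0 for pos in positions}
for _ in range(100_000):
    counts[random.choices(positions, weights=probabilities, k=1)[0]] += 1
print({pos: round(c / 100_000, 2) for pos, c in counts.items()})
```

Each single run yields one definite position; only the long-run statistics reflect the cloud. The Everett theory instead asserts that every outcome in the cloud occurs, each in its own universe.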

There was a reason why quantum mechanics made some people very uncomfortable. Quantum mechanics seemed to imply that the act of observation actually determines reality. So in the case of the electron near a nucleus, the act of observation seems to change the electron from something “smeared out” across a kind of probability cloud, to something existing in one exact spot.  Other experiments were done, including the double-slit experiment, that seemed to show that the mere act of observation can fundamentally change how tiny particles act. For example, a group of particles might act like a wave if they are not observed, but act like particles and not act like a wave if they are observed; or vice versa.

Nature therefore seemed to be sending us a message loud and clear: observers are a fundamental part of the fabric of reality. This message was a complete horror and an anathema to a certain kind of physicist, one believing that observers are mere accidental by-products or epiphenomena of nature's blind workings.

Some of these physicists responded to this message of nature by engaging in a gigantic escalation of commitment, one designed to sweep under the rug the message of quantum mechanics that observers are a fundamental part of the fabric of reality.  This escalation of commitment was the Everett many-worlds theory.  The believers in this nonsense claimed that there is no collapse of the wave function, and that every time an observation is made, all of the possibilities represented by the probability cloud are actualized, each in a separate universe. The thinking ran along the lines of: "There wasn't a cloud of possibilities that collapsed into one possibility when an observation was made; instead, all of the possibilities in the cloud became realities in separate parallel universes." 

The supporters of this raving nonsense seem to think that there are an infinite number of copies of themselves in alternate parallel universes (presumably including many universes in which they rule as the King of America, since their thinking is that every physically possible thing occurs innumerable times).  Such thinkers claim this idea is an interpretation of quantum mechanics. But the Everett many-worlds idea is not an interpretation of quantum mechanics, but instead merely a bizarre fantasy very loosely inspired by quantum mechanics. Similarly, if I look through a telescope and see a tiny point of light, and speculate that this is a fleet of extraterrestrial spaceships coming to conquer our planet, that is not an interpretation of my telescope observation. It is instead a bizarre fantasy very loosely inspired by my telescope observation. 

What's going on with the Everett many-worlds theory is escalation of commitment, an escalation a billion times more dramatic than when someone doubles down on his bad stock pick by buying twice as many shares of the plunging stock.  Having committed themselves to the idea that observers are mere accidental by-products of a purposeless universe, like the scent from some boiling soup, certain people cannot stand to believe what quantum mechanics is telling us: that observers are a fundamental part of the fabric of reality.  As a desperate defense against such an indication of nature, such people have erected speculations of an infinity of parallel universes, thinking this will somehow help.  Such speculations are ludicrous, but that's how escalation of commitment typically is. It's usually not something logical, but an act of desperation to try to prop up beliefs or conceits or previous commitments that are threatened.  Escalation of commitment in some of the cases I have mentioned isn't just "in for a penny, in for a pound" or "in for a buck, in for a bundle"; instead, it's "in for a buck, in for an infinity of bucks." How high will some of our dogmatic materialists escalate their commitments (and pile up belief baggage) to try to defend their previous belief commitments?  Higher than the clouds in the sky.  

Monday, September 30, 2019

"Accidental Engineering" Sounds Goofy, So They Use the Term "Adaptions"

Humans have never observed a case of a very complex piece of engineering arising by accident. For example, we know of no cases of nice livable houses that arose from trees accidentally falling and forming into houses. We also know of no cases of trees accidentally falling to form into nice flat bridges.  Occasionally a tree will fall in a way that provides a kind of bridge across a stream, but something so simple cannot really be called a case of accidental engineering.  A fallen tree spanning a stream just resembles a fallen tree, not some feat of engineering. 

Therefore, the phrase "accidental engineering" has a very implausible sound to it, rather like the phrase "accidental writing" or "accidental computer programming."  But it is just such a concept of accidental engineering that our current biologists want us to believe in. They want us to believe that such accidental engineering occurred not just once or twice, but very many times,  so many times that there were billions of cases of accidental engineering that explain all of the complex biological innovations discovered in the biological world.  

Imagine if one of today's biologists were to explain such a dogma without using euphemisms.  He might sound something like this:

"We should be glad that so very many cases of accidental engineering occurred to produce the parts of the human body. You can see things because of some accidental engineering that produced your eyes, and because of some very different accidental engineering that produced the parts of your brain involved in vision. You have a circulatory system because of entirely different cases of accidental engineering. Then there are the many cases of accidental engineering that conspired by chance to produce your skeletal system, and the many other cases of accidental engineering that randomly conspired to produce your intricate muscular system. On the molecular level, your body is made up of proteins, and there are more than 20,000 different types of proteins used by your body. Each of these types of protein molecules is like its own separate complex invention, a complex tiny machine or device with hundreds of amino acid parts fitting together in just the right way to produce a particular functional effect. So it seems that there were some 20,000 cases of very lucky accidental engineering that produced all of the protein molecules your body needs."

Such a statement sounds very far-fetched. So Darwinist biologists don't write statements like the one above, although they believe exactly what it says.  Instead, Darwinist biologists use a euphemism: the word "adaption." For example, a Darwinist biologist may tell you that your eyes are an adaption, that your visual cortex is an adaption, that your nose is an adaption, that your skeletal system is an adaption, that your circulatory system is an adaption, and that your fingers and toes are adaptions. 

Of course, the term "adaption" is about the least objectionable-sounding term you can use. Who can dispute that adaption occurs? Every day we see adaptions occurring around us. I hear on the TV that it's cold outside today, so I wear a jacket. That's an adaption. I taste my morning coffee, and find it's too hot. So I stir it. That's an adaption. Given that we see adaption occurring constantly around us,  it's very unlikely that anyone will ever say something like "Adaptions are so rare."  But when our biologists euphemistically use the word "adaption," what they are usually referring to are claims of accidental engineering, which is something so rare that humans have never observed it as it happened. 

A dogmatic biologist using the term "adaption" for his claim of accidental engineering is somewhat like a person who wants you to believe that certain people have superpowers like in comic books, but who tries to make this sound like a not-unreasonable claim by using the euphemistic term "capability fluctuations" for his claim about superpowers. 

Here are some other examples of euphemisms and what they really mean:

“Friendly fire”: when a soldier from your army shoots or bombs other soldiers from your army
“Collateral damage”: when bombs dropped from military jets unexpectedly kill civilians
“Beginning a journey of self-discovery”: fired from a job
“Correctional facility”: prison
“On an educational hiatus”: dropped out of college
“Adaptions”: complex biological innovations described as the results of accidental engineering

What is rather amusing is that the people who make these fanciful claims about accidental engineering are in general people who know nothing about engineering, and who never did any engineering.  Their ignorance of the most basic principles of engineering is often obvious.  Just as we might use a term such as "hydrodynamics know-nothing" to refer to someone like myself who knows nothing about hydrodynamics, Darwinist biologists are in general "engineering know-nothings," because they know nothing about engineering.  It is rather hilarious that such "engineering know-nothings" often insist that we should base our worldviews on their engineering opinions, such as the opinion that wonderful cases of accidental engineering have occurred innumerable times. 

Below we see a schematic diagram of a complex system like we find all over the place in biology, a system with so many interlocking dependencies that it seems impossible to imagine how it could have arisen accidentally.  Such systems raise an abundance of "which came first, the chicken or the egg" problems, problems that our learned biologists ignore or sweep under the rug.  

complex system

Examples of such biological things with interlocking dependencies include the many types of protein molecules that cannot fold correctly and cannot be functional unless there exist other types of helper protein molecules (called chaperone proteins).  The source here estimates that 20 percent to 30 percent of protein molecules have a dependency on other chaperone proteins. When we do not consider the chaperone-dependency of such protein molecules, we might calculate a probability of no more than about 1 in 10^130 of the protein molecule appearing by chance from its component amino acids (since a protein molecule is typically a sequence of hundreds of amino acids, and such a sequence can be arranged in about 10^260 ways, almost all of them nonfunctional).  When we then consider the dependency of such a molecule on one or more other equally complex molecules (a chaperone protein molecule), we must calculate a much, much smaller probability of the protein and its chaperones appearing, probably something as improbable as 1 chance in 10^200.  We are asked to believe that such miracles of chance (each an impressive example of accidental engineering) occurred not just once but billions of times, for there are billions of different types of protein molecules in the animal kingdom (the source here estimates 50 billion), and at least 20 percent of them require chaperone proteins.  This is rather like believing in some planet where billions of people all live in houses that appeared accidentally, after many billions of falling trees conveniently formed into houses.
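The arithmetic behind these figures can be sketched in a few lines (a rough restatement of my own, using a 200-amino-acid chain as the example and working in log space, since the numbers overflow ordinary arithmetic; multiplying the two independent probabilities directly gives a figure even smaller than the conservative 1-in-10^200 estimate above):

```python
import math

# A chain of 200 amino acids, each chosen from the 20 standard types,
# has 20^200 possible sequences. Work with base-10 logarithms to avoid
# numbers too large for floating point.
chain_length = 200
log10_sequences = chain_length * math.log10(20)
print(f"20^{chain_length} ~= 10^{log10_sequences:.0f} possible sequences")

# If roughly 10^130 of those sequences fold into the needed functional
# shape (the assumption quoted above), the chance of hitting one at
# random is about 10^130 / 10^260:
log10_p_protein = 130 - log10_sequences
print(f"P(functional protein) ~= 10^{log10_p_protein:.0f}")

# If the protein also depends on an equally complex chaperone protein,
# the joint probability is the product of the two independent chances:
log10_p_joint = 2 * log10_p_protein
print(f"P(protein AND chaperone) ~= 10^{log10_p_joint:.0f}")
```

The multiplication step is the key point: a dependency on a second, equally improbable molecule does not merely add to the improbability, it squares it.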