Thursday, October 31, 2019

When Science Books Have Misleading Titles

Science books are supposed to present established truths. But the titles of science books sometimes contain misleading phrases suggesting something quite different from established truth.

An example was the book Almost Human: The Astonishing Tale of Homo naledi and the Discovery That Changed Our Human Story. The book is by the paleoanthropologists Lee Berger and John Hawks. A person buying the book would have expected to read about the discovery of some form of life that was almost human.

But on page 193 of their book, the scientists finally give us an estimate of the brain size for the Homo naledi organisms corresponding to the fossils they found. It was about 560 cubic centimeters for a male, and about 450 cubic centimeters for a female. The average size of a male human brain is about 1350 cubic centimeters. So far from being “almost human,” Homo naledi had a brain only about 41% of the size of the modern human brain. Judging from brain size, we shouldn't even consider Homo naledi as half-human.

Another science book with a misleading title is the book on DNA entitled How to Code a Human: Exploring the DNA Blueprints That Make Us Who We Are. But DNA is not actually a blueprint of a human or a program for making a human. DNA is mainly just a database of some low-level chemical information used by humans. The book does not at all explain how to code a human, nor does it refer us to anything that specifies the human form, DNA being no such thing. Similar sins of misrepresentation have been committed by countless science books which have perpetuated what I call the Great DNA Myth, the myth that DNA is a human specification.  Read here for the reasons why DNA cannot be any such thing.  Read the end of this post for quotes by quite a few scientists saying that DNA is not a blueprint for a human and is not a program for making a human. 



A famous science book with a misleading title is Charles Darwin's book On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. The reason why the title is misleading is that while it used the term “natural selection,” Darwin did not actually specify a theory of nature choosing things or selecting things. He merely specified a theory of survival-of-the-fittest or differential reproduction (the superior reproduction rate of fitter organisms) that was given the catchy but misleading name “natural selection.” Selection is something only done by conscious agents, and Darwin did not believe that nature was a conscious agent.

The title would not have been misleading if Darwin had actually theorized that nature is conscious and can make choices. Such an idea has been advanced by a few thinkers, including some who have thought that planet Earth is a living entity, a conscious kind of “mother Earth.” But Darwin did not expound any such theory, and did not believe that nature made anything resembling a choice made by living beings. It was therefore misleading of him to use the phrase “natural selection” for a theory that was not actually a theory about choices selected by nature, and that could have been more honestly described as simply “survival-of-the-fittest” or “differential reproduction.” In 1869 Darwin himself stated, "In the literal sense of the word, no doubt, natural selection is a false term."  

Another example of a science book with a misleading title is the book One of Ten Billion Earths by astrophysicist Karel Schrijver. The book is about exoplanets, planets revolving around other stars. You would think from the title that some second Earth has been discovered. But no such thing has occurred. All that has been discovered are exoplanets, a few of which are Earth-sized. No Earth-like planets have been discovered, because life has never been discovered on another planet. If we ever discover some other Earth-sized planet with life, astronomers may be entitled to start saying that there is a scientific basis for estimating that Earth is only one of billions of Earths. Until such a thing occurs, there is no scientific basis for making any claim about another Earth existing. 

A science book with a very misleading title is the book A Universe from Nothing by physicist Lawrence Krauss. It's a book about the origin of the universe, but it does not actually discuss theories of the universe appearing from nothing. Instead it discusses theories of how the Big Bang could have occurred from various physical states that are very much “something,” as they can be described by particular mathematical descriptions.

Another science book with a misleading title is Matt Ridley's book "Genome: The Autobiography of a Species in 23 Chapters." Contrary to what the title suggests, genomes do not store anything like the past history of a species. 

Another science book with a misleading title is the book "Hyperspace: A Scientific Odyssey Through Parallel Universes, Time Warps, and the Tenth Dimension" by physicist Michio Kaku.  You are entitled to make whatever imaginative flight-of-fancy journeys you want to make through parallel universes, tenth dimensions, and time warps, but the moment you call such a journey "scientific" you are misleading your readers. 

A science book with a supremely misleading title is the book The Ape That Understood the Universe by Steve Stewart-Williams. The “ape” in the title refers to mankind. Of course, human beings are not apes, and it is almost as false and silly to refer to humans as apes as it is to refer to humans as dogs or cows. As for understanding the universe, humans have done no such thing. Humans do not understand the origin of the universe (the Big Bang being unexplained), and do not understand the composition of the universe (such things as dark energy and dark matter being simply very vague guesses). Humans do not understand the origin of the laws of nature or the origin of the universe's fundamental constants. Humans also do not understand the most basic truths about the universe, such as whether it is something filled with life, or whether Earth is the only planet with life. The title The Ape That Understood the Universe epitomizes what is wrong with the thinking of many modern scientists: errant biology combined with triumphalist arrogance and hubris.

Monday, October 28, 2019

Smolin Seems to Have Lost His Multiverse Enthusiasm

Physicist Lee Smolin has a new book entitled Einstein's Unfinished Revolution: The Search for What Lies Beyond the Quantum. In the book, Smolin sells himself as a realist. Contrary to many scientists, he claims that quantum mechanics does not require any radical revision of old-fashioned ideas about the relation between the physical world and the observer. In the preface, he states, “Commonsense realism, according to which science can aspire to give a complete picture of the natural world as it is, or would be in our absence, is not actually threatened by anything we know about quantum mechanics.”

His case for this claim is not terribly convincing. He discusses some theories that supposedly allow us to keep quantum mechanics and maintain such a “commonsense realism.” But he's frank in discussing problems, shortcomings and drawbacks of such theories, so frank that we're not left thinking that any one of these theories is very likely to be true.

In the preface Smolin refers rather scornfully to the idea of the multiverse, that our universe is only one of many universes. He states the following:

“A vocal minority of cosmologists proclaims that the universe we see around us is only a bubble in a vast ocean called the multiverse that contains an infinity of other bubbles....The fact that all, or almost all, of the other bubbles are forever out of range of our observations, means the multiverse hypothesis can never be tested or falsified. This puts this fantasy outside the bounds of science.”

That sounds very down-to-earth, and fits in with Smolin's positioning of himself in the book as a realist. But what is strange is that Smolin himself was for many years a champion of a multiverse theory that he had created. He called the theory “cosmological natural selection,” and advanced it in a book called The Life of the Cosmos. In his book Time Reborn, Smolin describes the theory as follows:

“The basic hypothesis of cosmological natural selection is that universes reproduce by the creation of new universes inside black holes. Our universe is thus a descendant of another universe, born in one of its black holes, and every black hole in our universe is the seed of a new universe. This is a scenario within which we can apply the principles of natural selection.”

I discuss this theory and some reasons for doubting it in this post.  

Smolin claimed that one advantage of his theory of cosmological natural selection is that it makes a falsifiable prediction. In a 2004 paper (page 38) he lists one such prediction:

“There is at least one example of a falsifiable theory satisfying these conditions, which is cosmological natural selection. Among the properties W that make the theory falsifiable is that the upper mass limit of neutron stars is less than 1.6 solar masses. This and other predictions of CNS have yet to be falsified, but they could easily be by observations in progress.”

But by now this prediction has proven to be incorrect. In September 2019 a science news story reported on observations of the most massive neutron star ever found. We are told, “The researchers, members of the NANOGrav Physics Frontiers Center, discovered that a rapidly rotating millisecond pulsar, called J0740+6620, is the most massive neutron star ever measured, packing 2.17 times the mass of our Sun into a sphere only 30 kilometers across.”

So Smolin told us that the cosmological natural selection theory would be falsified if neutron stars were found to be more massive than 1.6 solar masses, and by now it has been found that a neutron star has 2.17 solar masses. I guess we can consider the cosmological natural selection theory to be falsified.


Art depicting a neutron star (Credit: NASA'S Goddard Space Flight Center)

Judging from his latest book, Smolin may have deep-sixed, or at least lost interest in, his main theoretical offspring, the theory of cosmological natural selection. That theory was a multiverse theory, but in the passage I quoted Smolin now dismisses multiverse theories as “fantasy.” The index of Smolin's new book makes no mention of the theory of cosmological natural selection, and I can't recall him mentioning it in the book. The book jacket lists other books Smolin has written (including his excellent book The Trouble with Physics), but makes no mention of his opus The Life of the Cosmos selling us the theory of cosmological natural selection.

Smolin's latest book is very thoughtful, readable and well-written, explaining some abstruse topics (such as pilot-wave theory) more clearly than I have ever seen them expounded. Smolin denounces the Everett “many-worlds” theory, stating on page 172, “The Everett hypothesis, if successful, would explain vastly too much, and also much too little.”

The Everett many-worlds theory is the groundless nonsensical notion that reality is constantly splitting up into different copies of itself, so that there are an infinity of parallel universes. On page 178 of the book, Smolin perceives the ruinous moral implications of such a theory. Using the term “somewhere else in the wave function” to refer to imagined parallel universes, he notes, “If every eventuality we worked to eliminate, whether starvation, disease or tyranny, was actual somewhere else in the wave function, then our efforts would not result in an overall improvement.” Smolin states the following:

“If no matter what choices I make in life, there will be a version of me that will take the opposite choice, then why does it matter what I choose? There will be a branch in the multiverse for every option I might have chosen. There are branches in which I become as evil as Stalin and Hitler and there are branches where I am loved as a successor to Gandhi. I might as well be selfish and make the choices that benefit me....This seems to me to be an ethical problem because simply believing in the existence of all those copies lessens my sense of moral responsibility.”

I had made a very similar comment prior to Smolin's 2019 book. In a post published on April 8, 2018 I called the Everett many-worlds theory an evil doctrine because of its morality-killing tendencies. I stated the following:

“Why is the Everett 'many worlds' theory an evil doctrine? It is because if a person seriously believed such a doctrine, such a belief would tend to undermine any moral inclinations he had. I will give a concrete example. Imagine you are driving in your car at 2:00 AM on a bitterly cold snowy night, and you see a scantily clad very young child walking alone far from anyone. If you don't believe in the Everett 'many worlds' theory, you may stop your car and call the police to alert them of this situation, or do something like give your warm coat to the child to keep her warm. But if you believe in the Everett “many worlds” theory, you may reason like this: regardless of what I do, there will be an infinite number of parallel universes in which the child freezes to death, and an infinite number of other parallel universes in which the child does not freeze to death; so there's really no point in doing anything. So you may then drive on without stopping or doing anything, convinced that the multiverse would still be the same no matter how you acted.”

Being infinitely extravagant and completely groundless, as well as morally disastrous, the Everett “many worlds” theory of an infinity of parallel universes (and infinite copies of yourself) is best described as “evil nonsense.” I am pleased to find that I am not the only one who recognizes how morally ruinous such speculation is.

Thursday, October 24, 2019

The Goofs in Peliti's "Evolution and Probability"

In his scientific paper “Evolution and Probability,” professor Luca Peliti raises the question of how evolution as described by Darwin could produce such wonders as the human eye. He then considers the problem of the origin of proteins. He states the following:

“Living cells have a repertoire of 20 amino acids and proteins are made of linear chains of amino acids (called polypeptide chains). A typical enzyme (a kind of protein whose function is to fasten, sometimes by several orders of magnitude, the unfolding of chemical reactions) is about a hundred amino acids long. If we suppose that only one particular amino acid sequence can make a functional enzyme, the probability to obtain this sequence by randomly placing amino acids will be of the order of one over the total number of amino acid sequences of length 100, i.e., 20^100 ≈ 10^30.”

Peliti is talking about a very important reality here, the difficulty of protein molecules forming by any Darwinian process. The problem is very severe, partially because of the very great number of different types of proteins in the human body. Humans have more than 20,000 types of protein molecules, specified by about 20,000 genes, each of which lists the amino acid sequence used by a protein. Each protein uses a different sequence of amino acids.

Peliti has committed two very serious errors in the passage quoted above. The first is a careless math error: he tells us that 20 to the hundredth power is equal to about ten to the thirtieth power. Twenty to the hundredth power is actually equal to about ten to the one-hundred-and-thirtieth power; precisely, it is 1.2676506 × 10^130. In other words, 20^100 ≈ 10^130 rather than 20^100 ≈ 10^30 as Peliti stated. This difference is all the difference in the world, because 10^130 is 10^100 times larger than 10^30. If there were a chance of about 1 in 10^30 that a functional protein could appear by chance, that would be a likelihood very slim but perhaps something we might expect to occur, given countless millions of years for random trials. But if there is a chance of only about 1 in 10^130 that a functional protein could appear by chance, that is a likelihood so small that we would never expect it to happen in the history of the universe.

The screen shot below (from an online large exponents calculator) shows a conversion that corresponds to the one stated in the previous paragraph.
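The same check can be done in a couple of lines of Python, using nothing but the standard math module:

```python
import math

# Peliti writes 20**100 ~ 10**30; check the base-10 exponent directly.
# log10(20**100) = 100 * log10(20)
exponent = 100 * math.log10(20)
print(round(exponent, 1))  # about 130.1, so 20**100 ~ 10**130, not 10**30
```
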


The second major error committed by Peliti in the passage quoted above is that he has seriously misstated the typical number of amino acids in a protein molecule or enzyme (an enzyme is a type of protein molecule). The number of amino acids in a human protein varies from about 50 to more than 800. The scientific paper here refers to "some 50,000 enzymes (of average length of 380 amino acids)." According to the page here, citing a scientific paper, the median number of amino acids in a human protein is 375. The difference between Peliti's figure of about 100 amino acids and the reality of a median of about 375 amino acids turns out to be enormous when you calculate the probability of a protein forming by chance, a gap vastly greater than the mere 3.75-fold difference between 100 and 375.

Let us do a calculation using the correct numbers and correct math. There are 20 amino acids used by living things. The median length of a human protein is 375 amino acids. So the chance of a random set of amino acids forming the exact sequence used by a functional protein such as an enzyme is 1 in 20 to the three-hundred-and-seventy-fifth power, which is a probability of about 1 in 10 to the four-hundred-and-eighty-seventh power. Very precisely, the chance of a random sequence of amino acids exactly matching that of a protein with 375 amino acids is 1 in 7.695704335 × 10^487. That is a probability similar to the probability of you correctly guessing (with 100% accuracy) the ten-digit telephone numbers of 48 consecutive strangers. The calculation is shown in the visual below:



Now, for a protein such as an enzyme to function properly, it must have a sequence of amino acids close to its actual sequence. Experiments have shown that it is easy to ruin a protein molecule by making minor changes in its sequence of amino acids. Such changes will typically “break” the protein so that it will no longer fold in the right way to achieve the function that it performs. A biology textbook tells us, "Proteins are so precisely built that the change of even a few atoms in one amino acid can sometimes disrupt the structure of the whole molecule so severely that all function is lost." And we read on a science site, "Folded proteins are actually fragile structures, which can easily denature, or unfold." Another science site tells us, "Proteins are fragile molecules that are remarkably sensitive to changes in structure." But we can imagine that a protein molecule might still be functional if some minor changes were made in its sequence of amino acids.

Let us assume that for a protein molecule to retain its function, at least half of the amino acids in a functional protein have to exist in the exact sequence found in the protein. Under such an assumption, rather than calculating a probability of 1 in 20 to the three-hundred-and-seventy-fifth power, we might calculate a probability of 1 in 20 to the one-hundred-and-eighty-seventh power (187 being about half of 375). This gives a probability of about 1 in 10 to the two-hundred-and-forty-third power, that is, about 1 in 10^243. The calculation is shown below.
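Both exponents are easy to verify in Python. The helper function below is my own illustrative name; it simply multiplies the chain length by the base-10 logarithm of the 20-letter amino acid alphabet:

```python
import math

def chance_exponent(n_positions, alphabet=20):
    """Base-10 exponent of the number of distinct chains of
    n_positions amino acids drawn from an alphabet of 20."""
    return n_positions * math.log10(alphabet)

print(int(chance_exponent(375)))  # 487 -> about 1 in 10**487 for an exact match
print(int(chance_exponent(187)))  # 243 -> about 1 in 10**243 if only half must match
```
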


So even under the probably-too-generous assumption that only half of a protein's amino acid sequence has to be just as it is, we are still left with a calculation suggesting that a functional protein could never form by chance. The probability above is about the probability of you correctly guessing the 10-digit phone numbers of 24 consecutive strangers. It would seem that some miracle of luck would be required for any particular type of functional protein molecule to form. Since there are billions of different types of functional protein molecules in the biological world, you would need to believe in billions of miracles of luck to believe that such things appeared naturally.

Peliti does get the math roughly right in the excerpt below, in which he describes what he later calls a “garden of forking paths”:

“We can recast this argument in the following way. Let us suppose, following Dawkins... that 1000 evolution steps are necessary to obtain a working eye, and that at each step there are only two possibilities: the right or the wrong one. If the choice is made blindly, the probability of making the right choice at any step is equal to 1/2. Then the probability of always making the right choice at every step is equal to 1/2^1000 ≈ 1/10^300. It is clear that the fact that our lineage made the correct choice at each step looks nothing less than miraculous!”

The math here is reasonable. So we have two different ways of calculating things, and both have given us a likelihood smaller than about 1 chance in 10 to the two-hundredth power. Such miracles of chance would need to happen not just once but very many times for the biological world to end up in its current state.
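The quoted estimate is easy to verify with the same one-line logarithm trick:

```python
import math

# 1000 binary steps, each guessed blindly with probability 1/2:
exponent = 1000 * math.log10(2)   # log10 of 2**1000
print(int(exponent))  # 301, i.e. the chance is roughly 1 in 10**301
```

This agrees with Peliti's figure of about 1/10^300.
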

After raising this protein origin problem and the problem of the origin of complex biological systems like the vision system, Peliti attempts to address such origin problems. Peliti begins to reason like this:

“In the argument we just discussed we assumed that one does not know if one is on the right path until the end is reached. Let us suppose instead that each step made in the good direction provides a small advantage in terms of survival or fecundity to the being that makes it. More precisely, let us imagine to send in this 'Garden of forking paths' a group of, say, 100 individuals, who perform their choice (right or wrong) at each step, and then reproduce. Let us assume that those who make the right choice at the right moment have slightly more offspring than those who make the wrong one: for instance, that for each offspring of an individual who made the wrong choice, there are 1.02 offspring (on average) of one who made the right choice. Thus, after the first step, we shall have 50 individuals on the right side and 50 on the wrong side and, after reproduction, 102 on the right side and 100 on the wrong side. This is surely a very small difference: the fraction of individuals on the right path is 51% rather than 50%. However, if we wait a few generations, the fraction of individuals on the right side will increase: after 10 generations the ratio of the number of offspring of an individual who had made the right choice to that of one who made the wrong choice will be about 1.22, and after 100 generations it will be about 7.25.”
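The growth figures in this quoted passage can be checked directly:

```python
# Peliti's per-generation fitness ratio: 1.02 offspring for a right
# choice per 1 offspring for a wrong choice, compounded over generations.
r = 1.02
print(round(r ** 10, 2))   # 1.22 after 10 generations
print(round(r ** 100, 2))  # 7.24 after 100 generations (Peliti rounds to 7.25)
```
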

Peliti then goes further and further with such reasoning, eventually reaching the conclusion that something like an eye might appear after 300,000 generations. I need not discuss his full line of reasoning, and will simply note that Peliti's entire line of reasoning is fallacious because of the bad assumption that he has made at the beginning, the assumption that he states as, “Let us suppose instead that each step made in the good direction provides a small advantage in terms of survival or fecundity to the being that makes it.”

An assumption like this is what we may call a “benefit-with-every-step” assumption. Such an assumption can be visually represented like this:


straight line benefit calculation
Graph 1

The graph above shows a type of situation in which 10% completion yields 10% of the benefits of full completion, 20% completion yields 20% of the benefits of full completion, and so on and so forth, with 50% completion yielding 50% of the benefits of full completion, and 100% completion yielding 100% of the benefits of full completion.

A straight-line benefit calculation is suitable for a small minority of cases. For example, if you are a farmer who plants and harvests 10% of his farming field, you will get 10% of the benefit of planting and harvesting 100% of the field. And if you are a street vendor who sells 20% of the items in your vendor's wagon, you will get 20% of the benefit of selling 100% of the items in your wagon.

But it is obviously true that in life in general and in biology in particular, it is very often totally inappropriate to be using such a straight-line benefits calculation or to be making a benefit-with-every-step assumption. Here are some of the innumerable examples I could give to prove such a point:

  1. Completing 30% of your SAT examination doesn't give you 30% of the benefit of completing 100% of the examination; instead it leaves you with a very bad score that doesn't do you any good at all.
  2. Completing 50% of a suspension bridge across a river doesn't give you a bridge that is 50% as good as a full bridge; instead it leaves you with a bridge that is a suicide device because people driving across will find their cars falling into the river.
  3. Completing 20% of an airplane doesn't give you something that flies 20% as fast or as high as a full airplane, but merely gives you something that doesn't fly at all.
  4. Completing 20% of a television set doesn't give you something that gets 20% as many channels as a full TV set, but merely gives you something that doesn't display TV programs.
  5. If you have 20% of a functional protein molecule, the molecule won't perform its function, and won't do anything.
  6. If you build only 20% of a rocket designed to launch satellites, that rocket won't be able to reach orbit, and will produce no benefits at all.
  7. A woman with 20% or 30% of her reproductive system won't be able to have 20% of a baby; instead, she won't be able to have any baby at all.
  8. Having only one quarter of a circulatory system (such as only veins or only arteries or only half a heart) provides no benefit at all. 

We can call these types of cases late-yielding cases or “halfway yields nothing” cases. In all such cases, it is utterly inappropriate to use a benefit-with-every-step assumption or a straight-line benefit calculation, and it is utterly inappropriate to be evoking slogans such as “every step yields a benefit.” What type of graph would be appropriate to illustrate such cases? Rather than a straight-line graph, it would be a graph with a J-shaped line that stays flat for the first half of the graph, and then slopes up sharply. The graph would look rather like this:

late yielding benefit calculation
Graph 2

The exact slope of such a curve would differ from one late-yielding situation to another, but in each case the line would never be straight; it would be a curve rising sharply in the end stages, with no benefit in the earliest stages.

Now, which one of these two calculations would be appropriate to use when considering the case of the evolution of a vision system? Clearly the appropriate calculation would be like that shown in the “late yielding benefit calculation” graph (Graph 2), and not a calculation like that shown in the “straight-line benefit calculation” graph (Graph 1).
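The contrast between the two graphs can be sketched in a few lines of Python. The 0.5 threshold and the cubic rise in the second function are purely illustrative assumptions, not measured biological values:

```python
def straight_line_benefit(completion):
    """Graph 1: benefit proportional to completion (0.0 to 1.0)."""
    return completion

def late_yielding_benefit(completion, threshold=0.5):
    """Graph 2 sketch: no benefit below `threshold`, then a sharp rise.
    The threshold and cubic shape are illustrative assumptions."""
    if completion < threshold:
        return 0.0
    return ((completion - threshold) / (1.0 - threshold)) ** 3

print(straight_line_benefit(0.5))   # 0.5 -> halfway yields half the benefit
print(late_yielding_benefit(0.5))   # 0.0 -> halfway yields nothing
print(late_yielding_benefit(1.0))   # 1.0 -> full benefit only at completion
```
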

Let us ignore the red herrings often thrown about when discussing such an issue, such as an appeal to the light-sensitive patches of earthworms. Humans are part of the chordate phylum, which is an entirely different phylum from the phylum of earthworms (the annelids). No one believes that humans are descended from earthworms.

If we look at the fossil record of the phylum that humans belong to (chordates), we see no evidence that chordate eyes evolved from some mere light-sensitive patch such as on an earthworm. When considering the possible evolution of vision systems, we should be using something like a late-yielding benefit calculation and a “halfway yields nothing” assumption.

People very often speak as if “eyes=vision,” but such an assumption is false. For an organism to have vision, it is necessary to have a complex system consisting of many different things, including the following:

  1. A functional eye requiring an intricate arrangement of many parts to achieve a functional end.
  2. One or more specialized protein molecules capable of capturing light, each consisting of many amino acids arranged in just the right way to achieve a functional end.
  3. An optic nerve stretching from the eye to the brain.
  4. Extremely specialized parts in the brain necessary for effective vision.

All of these parts are necessary for vision, so we clearly have here a situation that is not correctly described by an “every step yields a benefit” calculation. The correct assumption would be a “halfway yields nothing” assumption and a late-yielding benefit calculation.

We can give the name "the benefit-at-every-step fallacy" to the type of fallacy Peliti has committed by saying, "Let us suppose instead that each step made in the good direction provides a small advantage in terms of survival or fecundity to the being that makes it." I can define the benefit-at-every-step fallacy as the fallacy of assuming that benefits will be yielded throughout a process that only yields benefits in its late stages. Here is an example of such a fallacy (I'm not quoting here from Peliti's paper):

Dave: "If I slowly buy up a whole bunch of auto parts at the junkyard, and assemble them into a car, in about 5 weeks I'll have a car I can sell for $1000. 1000 divided by 5 is 200. So I'll get an average of $200 a week, and that will pay my rent while I'm working on the car." 

Dave has committed here the benefit-at-every-step fallacy, just like Peliti.  Dave's labors will not actually yield any benefits until he has finished, or almost finished, the car he is working on.
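Dave's arithmetic versus his actual situation can be sketched as a toy cash-flow comparison:

```python
# Dave's imagined steady income vs. the actual late-yielding payoff.
weeks, sale_price = 5, 1000

imagined = [sale_price / weeks] * weeks  # $200 every week, Dave hopes
actual = [0, 0, 0, 0, sale_price]        # nothing until the car is finished

print(imagined[0])                    # 200.0
print(sum(actual[:4]))                # 0 -- no rent money for the first four weeks
print(sum(actual) == sum(imagined))   # True -- same total, very different timing
```
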

A vision system is only one of many very complex things that would have to be explained in order to naturally account for the origin of creatures such as humans. By far the biggest explanatory problem is explaining the origin of protein molecules. There are more than 20,000 different types of protein molecules in the human body, most specialized for some particular functional purpose. In the animal kingdom there are more than 10 billion different types of protein molecules, most of which are properly considered as complex inventions. The science site here estimates that there are between 10 billion and 10 trillion “unique protein sequences” in the animal kingdom. Most protein molecules have very hard-to-achieve folds necessary for their functionality. The science site here says “there are few common folds” in the protein universe. This means we have billions of complex biological inventions that we need to account for, for there are billions of different types of protein molecules in the natural world, each its own distinct complex invention.

When considering the appearance of protein molecules, we would have to use a late-yielding benefit calculation as shown in Graph 2 above, and not at all a straight-line benefit calculation as shown in Graph 1 above. It is certainly not true that the formation of 10% of a protein molecule gives you 10% of the benefit of the full molecule, or that the formation of 20% of a protein molecule gives you 20% of the benefit of the full molecule. It is instead usually true that having 50% of a protein molecule leaves you with a completely useless molecule. To be functional, protein molecules require complex hard-to-achieve folds. If you remove any 50% of their amino acid sequence, the protein molecules will not be functional, because, for example, they cannot achieve the folds on which their functionality depends. Almost certainly, a large percentage of protein molecules will be nonfunctional if they have only 70% of their amino acids. (See the end of this post for some scientific papers supporting these claims.) As a biology textbook tells us, "Proteins are fragile, are often only on the brink of stability."

Indeed, a large percentage of protein molecules are not even independently functional when they have 100% of their amino acids, because of their external functional dependencies. According to this source, between 20 and 30 percent of protein molecules require other protein molecules (called chaperone molecules) in order to fold properly. Very many other protein molecules are not independently functional, because their function arises only when they act as team members in complex teams of proteins called protein complexes. The answer to the question "what good is 50% of a protein molecule" is usually "no good at all," and for a large fraction of protein molecules, the correct answer to the question "what good is 100% of a protein molecule" is "no good at all, unless there also exists one or more other protein molecules that the first protein molecule requires to be functional."

All over the place in biology we see extremely complex innovations and intricate systems requiring many correctly arranged parts for minimal functionality, particularly when we look at the byzantine complexities and many "chicken or the egg" cross-requirements of biochemistry. That's why a physicist recently told us that even the simplest bacterium is more complex and functionally impressive than a jet aircraft. In general, the more complex or intricate an innovation or system (biological or otherwise), the more likely it falls under a late-yielding benefit calculation (as depicted in Graph 2), in which something like a "halfway yields nothing" assumption is correct, and the less likely it falls under a "benefit at every step" calculation (as depicted in Graph 1), in which we can assume that the addition of each part produces a benefit. It is a gigantic mental error to assume or insinuate that very complex biological innovations should typically be described by a simplistic "benefit at every step" calculation such as depicted in Graph 1. But our ideologically motivated biologists have been making such an error for well over a century, and Peliti has merely repeated it.

Later in his “Evolution and Probability” paper, Peliti appeals to the concept of “selective pressure.” That term (also called "selection pressure") is simply a variation of "natural selection," a variation designed to impress us with the idea that something like physics is going on (even though nothing like that is at play).  

Here is a case where something a little like “selection pressure” might actually occur. Let us imagine a population (a group of organisms of the same species) where 10% of the organisms have some favorable trait that makes it more likely for them to survive and reproduce, and 90% of the organisms do not have such a trait. Then, over many generations, this trait might become more common in such a population, because those with the trait might reproduce more. The 10% of the population with the trait might increase to 15%, then 20%, and so forth. One could use the term “selection pressure” to refer to such a tendency, although the term is actually doubly misleading; for there is no actual selection occurring (unconscious nature is not making a choice), and there is no actual pressure involved (pressure being a physical pushing force that is not occurring in this case). The fact that "natural selection" does not actually describe a choice or selection is why Charles Darwin wrote in 1869, "In the literal sense of the word, no doubt, natural selection is a false term."
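The scenario just described can be simulated with a simple haploid replicator step. This is only a sketch, and the 10% reproductive advantage is a hypothetical number chosen for illustration:

```python
def next_frequency(p, advantage=0.10):
    # One generation of differential reproduction: carriers of the
    # trait reproduce at rate (1 + advantage) relative to non-carriers
    # (the 10% advantage is a hypothetical illustration)
    carriers = p * (1.0 + advantage)
    return carriers / (carriers + (1.0 - p))

p = 0.10  # the trait starts in 10% of the population
for generation in range(60):
    p = next_frequency(p)
print(round(p, 2))  # after 60 generations the trait is nearly universal
```

Note that nothing in this arithmetic involves choice or pressure; the frequency shift falls out of mere differences in reproduction rates, which is why "selection pressure" is a misleading label for it.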

But let us imagine some complex biological innovation that yields no benefit until more than 70% of it appears. It would be absolutely wrong and misleading to claim that the biological innovation had occurred because a “selection pressure” had pushed nature into forming such an innovation. During the first half of the imagined gradual formation process, there would have been no benefit from the appearance of 5% or 10% or 15% or 25% of such a biological innovation. So there would be no “selection pressure” at all that would have caused nature to pass through such stages.

But our biology dogmatists have constantly ignored such obvious truths, and have used the term “selection pressure” in an utterly inappropriate way. They will take some fantastically improbable biological innovation, and speak as if “selection pressure” pushed such a thing into existence. In reality, selection pressure would never occur in regard to any biological innovation until that innovation first appeared in some beneficial form. It is a complete fallacy to talk about “selection pressure” causing novel biological innovations whenever such innovations are late-yielding innovations that are nonfunctional in a fragmentary form.

Peliti is one of the biology theorists who have misused the concept of selection pressure. His error is shown in the passage below from his “Evolution and Probability” paper:

“We can summarize this discussion by stressing two main points: one, that an evolutionary history that appears extremely improbable a priori looks much more probable a posteriori, looking back from the arrival point, since a constant selective pressure has acted during the whole path, favoring advantageous variants with respect to disadvantageous ones.”

Here Peliti is again guilty of the "benefit-at-every-step" fallacy so often committed by biology dogmatists.  Because very complex biological innovations such as protein molecules and vision systems and reproductive systems usually do not provide any benefit when only small fractions of them exist (and very often do not provide any benefit when less than 60% or 70% of them exist), it is absolutely erroneous to assume that a “constant selective pressure has acted during the whole path” to produce complex innovations, while thinking along the lines of “every step provides a benefit.” There would typically be no selection pressure at all (and no benefit to survival or reproductive fecundity) until such extremely complex and hard-to-achieve biological innovations achieved a functional threshold, which would very often require fantastically improbable arrangements of matter that we would never expect to occur by chance (or by Darwinian evolution) in the history of the universe.  

Besides the fact that thinkers such as Peliti cannot credibly explain the origin of protein molecules such as those used in vision systems, there is the gigantic problem that no imaginable change in DNA can explain biological innovations such as vision systems, because the structure of such systems is nowhere specified in DNA. Contrary to the frequent mythical statements made about DNA being a blueprint for a human or a recipe for making a human, the fact is that DNA only specifies very low-level chemical information such as the amino acid sequences of proteins. Not only does DNA not specify the structure of a human eye, it does not even specify the structure of any of the specialized cells used by the eye (such as the specialized cells used by the cones and rods of the eye), or the structure of any other cells. So any claim to explain the appearance of vision systems based on changes in DNA (with or without natural selection, and with or without lucky random mutations) is futile. The fact that DNA does not specify biological body plans or phenotypes is discussed very fully here.

Through abused phrases such as "selection pressure," our biology theorists confuse us about probabilities.  The flowchart below helps to clarify the reality.  Consider some imagined path of biological progression.  The first question to ask is: was there a plan or idea matching the end result? If you think the answer is no, then go to the second question: was there a will or intention to improve the end result? If you think the answer to that question is no, then assume the progression occurred through a "random walk" effect like that of typing monkeys.  In the case of Darwinian evolution, there is no plan or idea for any of the results, nor is there any will or intention to achieve the results.  Darwinian evolution is properly described as a "typing monkeys" random-walk type of effect, with all the mountainous improbabilities associated with such a thing.  Our biologists often tell us that "evolution is a tinkerer," although that analogy is false and deceptive.  A tinkerer is an agent willfully attempting to improve something by using trial and error. There is no such willful agent present in Darwinian evolution. 


evolution flowchart

From an information standpoint, the vast amount of functional information in genomes is comparable to books in a library. The most accurate analogy for Darwinian evolution is the analogy of typing monkeys producing a large library of impressive works of literature (or functional information such as computer programs),  despite prohibitive odds against such a result. In fact, at the end of this recent essay, an evolutionary biologist ends up candidly comparing Darwinian evolution to typing monkeys.  The keystrokes of the monkeys are analogous to random mutations.  If you imagine one or more skyscrapers filled with typing monkeys, and a roving editor in each skyscraper searching for useful typed prose made by the monkeys, and making copies of such miracles of chance if they ever occur, with such a roving editor being analogous to natural selection, you will have a good analogy for the theory of Darwinian evolution. 
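The improbability of such a "random walk" can be quantified. Below is a minimal sketch, assuming keystrokes drawn uniformly from 26 letters plus a space (an illustrative assumption):

```python
import math

def chance_of_phrase(phrase, alphabet_size=27):
    # Probability that uniformly random keystrokes (26 letters plus
    # space -- an illustrative assumption) reproduce the phrase in
    # a single attempt of the same length
    return float(alphabet_size) ** -len(phrase)

p = chance_of_phrase("to be or not to be")
print(math.log10(p))  # worse than 1 chance in 10 to the 25th power
```

Even this 18-character phrase is far out of reach of a single random attempt, and the functional information in genomes is vastly longer, compounding the improbability astronomically.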

Above I discussed how fine-tuned and sensitive to changes protein molecules are. Further evidence for such claims can be found in this paper, which discusses very many ways in which a random mutation in a gene for a protein molecule can destroy or damage the function or stability of the protein. An "active site" of an enzyme protein is a region of the protein molecule (about 10% to 20% of the volume of the molecule) which binds and undergoes a chemical reaction with some other molecule. Below are only a fraction of the examples of protein sensitivity and fragility cited by the paper:

"If a mutation occurs in an active site, then it should be considered lethal since such substitution will affect critical components of the biological reaction, which, in turn, will alter the normal protein function...Even if the mutation does not occur at the active site, but quite close to it, the characteristics of the catalytic groups will be perturbed....Changing the reaction rate, the pH, or salt and temperature dependencies away from the native parameters can lead to a malfunctioning protein....An amino acid substitution at a critical folding position can prevent the forming of the folding nucleus, which makes the remainder of the structure rapidly condense. Protein stability is also a key characteristic of a functional protein, and as such, a mutation on a native protein amino acid can considerably affect its stability....When a protein is carrying its function, frequently the reaction requires a small or large conformational change to occur that is specific for the particular biochemical reaction. Thus, if a mutation makes the protein more rigid or flexible compared to the native structure, then it will affect the protein’s function."

A very relevant scientific paper is the paper "Protein tolerance to random amino acid change." The authors describe an "x factor" which they define as "the probability that a random amino acid change will lead to a protein's inactivation." Based on their data and experimental work, they estimate this "x factor" to be 34%. It would be a big mistake to confuse this "x factor" with what percentage of a protein's amino acids could be changed without making the protein non-functional.  An "x factor" of 34% actually suggests that almost all of a protein's amino acid sequence must exist in its current form for the protein to be functional.  

Consider a protein with 375 amino acids (the median length of a human protein). If you were to randomly substitute 4% of those amino acids (15 amino acids) with random amino acids, then (assuming this "x factor" is 34%, as the scientists estimated) there would be only about 2 chances in 1,000 that the protein would remain functional. The calculation can be done with any binomial probability calculator (I used the Stat Trek binomial probability calculator). So the paper in question suggests that protein molecules are extremely fine-tuned, fragile and sensitive to changes, and that more than 90% of a protein's amino acid sequence has to be in place before the molecule is functional.
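The binomial arithmetic is simple enough to check directly. A minimal sketch, using the paper's 34% per-substitution inactivation estimate:

```python
# Chance that 15 independent random substitutions (about 4% of a
# 375-amino-acid protein) ALL fail to inactivate the protein,
# given a 34% inactivation probability per substitution
p_inactivate = 0.34
n_substitutions = 15
p_still_functional = (1.0 - p_inactivate) ** n_substitutions
print(round(p_still_functional, 4))  # 0.002, i.e. about 2 chances in 1,000
```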



Figure 1 of the paper here suggests something similar, by indicating that after about 10 random mutations, the fitness of a protein molecule will drop to zero. 

Sunday, October 20, 2019

Pedestals, Paywalls and Peer Pressure Are Bad for Science

Scientists have long put certain other scientists on pedestals. Such idolization can be bad for science. Centuries ago, the progress of science was greatly harmed by the fact that people put Aristotle on a pedestal, thinking that his conclusions (largely wrong) were well-founded scientific findings. For a long time, few people wanted to do work that might challenge the conclusions of Aristotle, because it would be almost like heresy to have to say, “Aristotle was wrong.” Rather the same situation has often existed in regard to other scientists who were put on pedestals. When scientists become scientist fanboys, erroneous beliefs can become entrenched dogmas. 

Scientists exert peer-pressure on their fellow scientists in a number of ways. The most common type of peer-pressure involves getting scientists to conform to some "party line" of beliefs popular among scientists. There are numerous ways to exert such pressure, such as the peer-review process, by which contrarian or unorthodox opinions or inconvenient research results can be excluded.

At the Neuroskeptic blog there was a very disturbing anonymous quote from an “early career researcher” or ECR. The scientist is quoted as stating this: “I have been constantly harassed by superiors to modify data in order to provide better agreement with known experimental values in order to make the paper look better for publishing at prestigious journals.”  What is particularly shocking is that it is not a case of a single apprentice researcher admitting to doing such an unethical thing, but a claim that he or she is “constantly harassed by superiors” to do such an unethical thing.

Why is such an accusation not terribly surprising? It has been clear for a long time that early-career scientists are told that there is a particular path to success that they must follow to be appointed as professors. The path to success is to publish one or more papers in one of the top science journals such as Cell or Nature or Science. But your chance of getting a paper published in such journals is not very high if your research suggests something that conflicts with existing dogmas and prevailing theories. Also, it is well known that science journals have a strong bias against publishing negative results, such as when a researcher looks for some effect but does not find it in his experimental results. So we can understand why researchers might be pressured to “modify data in order to provide better agreement with known experimental values.”

An online article tells us that scientists are being pressured by their peers to produce "sexier" results:

A group of junior researchers at the University of Cambridge have established a campaign to reduce the pressure faced by scientists to produce “sexier” results, which they say can lead to inaccurate research work....Founder of the ‘Bullied into Bad Science’ campaign Corina Logan, a Leverhulme early career zoology research fellow at Cambridge, told The Times that junior researchers faced mounting pressure from senior academics to produce exciting results....Speaking to Varsity, Logan explained that she had started the movement “in response to the feedback I received after giving talks on how we researchers exploit ourselves and discriminate against others through our publishing choices. “Early career researchers often come up to me after these talks to say they would like to publish ethically but feel like they can’t because their supervisor won’t let them or they are reluctant to because they have heard that they need to publish in particular journals to be able to get jobs and grants.”

What are these "sexier" and "exciting" results that scientists are being pressured into producing? They are sometimes dubious studies that may seem to back up the cherished dogmas of scientists, by resorting to one of the many "building blocks of bad science literature" that I specified in this post. The dubious methodologies of such studies are often hard to discover, because the studies are hidden behind paywalls.  But the "sexier" and "exciting" results are splattered all over the news media, thanks to science journalists who often act like credulous parrots by engaging in pom-pom journalism.  There are great monetary incentives for such "echo chamber" effects, because the more sensational-sounding a science news headline, the more advertising revenue it will generate. The greater the web clicks on a page announcing some hyped-up result, the greater the ad revenue from online ads. 



An online article states the following:

It is difficult to overstate how much power a journal editor now had to shape a scientist’s career and the direction of science itself. “Young people tell me all the time, ‘If I don’t publish in CNS [a common acronym for Cell/Nature/Science, the most prestigious journals in biology], I won’t get a job,’” says Schekman. He compared the pursuit of high-impact publications to an incentive system as rotten as banking bonuses.

According to this post, more than 50 academics “each has a story of being told by senior colleagues that their career would be on the line if they did not keep up a steady flow of eye-catching results in top journals, where their articles cannot be read without an expensive subscription.”

The journals “where their articles cannot be read without an expensive subscription” are commonly called paywall journals. How do such paywalls abet bad science? They make it hard for people to do a quality check of bad science when it is published.

For example, in the field of neuroscience there is an extremely widespread problem of studies that have low statistical power because too few research animals are used. The minimum number of animals or subjects per study group for a moderately convincing experimental result has been estimated as being from 15 to 30 or more. But a sizable fraction of all neuroscience studies use far fewer than 15 animals or subjects per study group, often as few as 6 or 8. In such low-power studies, there is a very high chance of false alarms. How can we tell whether a study used a suitable number of animals or subjects per study group? You can't do that by reading the abstract of the scientific study, which typically does not tell how many animals or subjects per study group were used. To find out how many animals or subjects per study group were used, you have to read the full paper.
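The statistical-power problem can be illustrated with a normal-approximation calculation. This is only a sketch: the effect size of 0.8 (a "large" effect in Cohen's terms) and the two-sided 0.05 significance level are illustrative assumptions, not values from any particular study:

```python
import math

def approx_power(n_per_group, effect_size=0.8, z_crit=1.96):
    # Normal-approximation power of a two-sample comparison with
    # n subjects per group and a two-sided alpha of 0.05
    # (effect_size is Cohen's d; 0.8 is an illustrative assumption)
    z = effect_size * math.sqrt(n_per_group / 2.0) - z_crit
    # Phi(z), the standard normal CDF, via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for n in (6, 8, 15, 30):
    print(n, round(approx_power(n), 2))
```

Even with a large assumed effect, a group size of 6 gives power below 30%, while conventionally adequate power (around 80%) takes roughly 30 subjects per group; for smaller, more realistic effects the shortfall is far worse.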

But paywall journals make it very hard to read papers. They thereby abet bad science, by making it hard for people to check the details of scientific studies. If you've done some dubious study with low statistical power and a high risk of a false alarm, a paywall is your best friend, for it will minimize the chance that anyone will discover the sleazy shortcuts you took.

According to a 2015 article, more than half of the world's scientific publishing is controlled by only five corporations. Such corporations add little to the scientific research they publish, and by using paywalls they severely restrict readership of scientific papers.  It's a system that's very unnecessary, because nowadays it is easy to publish scientific research online.  But scientists keep up their compliance with the paywall publishers, and even work for free for the publishing giants by doing unpaid peer review of scientific papers. This is rather like someone spending his summers working for free flipping hamburgers for one of the fast-food giants. Why haven't scientists rebelled against such a system, in which they are acting as unpaid laborers to enhance the profits of a handful of corporate giants? Sadly, once an unworthy system is in place (either as a set of customs long followed by scientists or a set of beliefs which scientists have long clung to), our scientists (acting like prisoners of habit) are bad about rising up against such a system to say, "Let's come up with something better."  

Wednesday, October 16, 2019

If Levin Is Right About Martian Microbes, They're Irrelevant

On the Scientific American web site, Gilbert Levin has an article entitled “I'm Convinced We Found Evidence of Life on Mars in the 1970s.” Levin is referring to results from the two Viking spacecraft that landed on Mars in 1976.

Levin's opinion on this matter is based on results from the Labeled Release experiment. In this experiment some chemical nutrients were added to the Mars soil. Some gases were then released by the soil. With tests on Earth, such a result occurred only with soil containing life, not sterile soil.

While such a result seemed to be a hopeful sign in regard to life, another Viking experimental result was a very big negative. Aboard the Viking spacecraft was an instrument called the gas chromatograph-mass spectrometer (GCMS), a very sensitive instrument designed to look for organic molecules in the Martian soil. This instrument found no evidence of organic molecules in the soil it tested, not even at a parts-per-billion level. "Organic molecules" simply means any complex molecules based on carbon. Given that all life on Earth is based on organic molecules, and that any believable scenario for life requires organic molecules, the result of the GCMS experiment seemed to tell us loud and clear that there was no life in the soil tested.

Consequently, a scientific consensus formed that no life had been detected on Mars. As a 1970s paper by five scientists stated, “The three biology experiments produced clear evidence of chemical reactivity in soil samples, but it is becoming increasingly clear that the reactions were nonbiological in origin.” The paper attributed the results of the Labeled Release experiment to mere oxidants in the Martian soil.

The average reader of the article will not be likely to realize why Levin is not a disinterested and impartial judge of whether the Labeled Release experiment produced evidence for life on Mars. The fact is that Levin was the principal scientist behind the Labeled Release experiment. So that experiment was kind of Levin's baby. His opinion on the accomplishments of that experiment may be no more likely to be impartial than a father's opinion on whether his daughter is pretty.

Levin makes a minimal mention of the failure to detect organic molecules in 1976, but doesn't explain the significance of that.  He should have explained that no organic molecules equals no life, and that if the reading of no organic molecules was correct, then whatever was observed by his Labeled Release experiment could not possibly have been a sign of life.  He has presented no reason for thinking that the GCMS instrument malfunctioned.  In this regard he's kind of like someone claiming that some test has revealed a sign of life in a body, even though it is known that an infrared temperature reading has revealed no sign of any heat in the body. 

Besides the lack of organic molecules in the soil tested by the Viking landers, there was another reason why scientists concluded that the Viking landers had failed to detect life. This reason was the lack of water on Mars. As far as we can see, the soil on Mars has no appreciable liquid water. And liquid water is a necessity for life.


Mars: an arid wasteland (image credit: NASA/JPL)

Given the conditions on Mars, it makes no sense for Levin to state in his Scientific American article that “it would take a near miracle for Mars to be sterile.” Quite the opposite would seem to be true. Levin tries to justify his strange statement “it would take a near miracle for Mars to be sterile” by suggesting that Mars and Earth have been “swapping spit” for billions of years because of comet collisions. This is hardly convincing reasoning. What are the chances of some life traveling from Earth to Mars after a comet collision? Very, very low, since an interplanetary journey in the vacuum of space would almost certainly sterilize any fragment of a comet traveling between planets. Even if some bit of life ejected by a comet collision were to travel from Earth to Mars, what is the chance that such life could then survive entry into the Martian atmosphere, which would almost certainly cause it to burn up? Very, very low.

Levin lists some findings that he calls “evidence supportive of, or consistent with, extant microbial life on Mars.” They include the following (the quotes after each bullet are Levin's words):

  • “Complex organics have been reported on Mars by Curiosity’s scientists, possibly including kerogen, which could be of biological origin.” But only extremely tiny trace amounts of organic molecules were found. For example, Table 1 of this paper tells us that only 90 nanomoles of organic molecules were found. A nanomole is a billionth of a mole, and 90 nanomoles is about 1 part in 11 million. By comparison, Earth soil is about 5% organic matter, about 1 part in 20. The organic molecules found weren't any of the building blocks of life; "organic molecules" simply refers to any carbon compounds, regardless of whether they have any biological significance. Referring to the same "complex organics" Levin refers to, a geobiologist says, "The new findings do not allow us to say anything about the presence or absence of life on Mars now or in the past." 
  • "A wormlike feature was in an image taken by Curiosity." The fact that Levin does not provide a link to the image suggests that this claim is not strong. 
  • "Surface water sufficient to sustain microorganisms was found on Mars by Viking, Pathfinder, Phoenix and Curiosity." The wikipedia.org article on "Water on Mars" states, "Some liquid water may occur transiently on the Martian surface today, but limited to traces of dissolved moisture from the atmosphere and thin films, which are challenging environments for known life," and also, "No large standing bodies of liquid water exist on the planet's surface, because the atmospheric pressure there averages just 600 pascals (0.087 psi), a figure slightly below the vapor pressure of water at its melting point; under average Martian conditions, pure water on the Martian surface would freeze or, if heated to above the melting point, would sublime to vapor." 
  • “Ejecta containing viable microbes have likely been arriving on Mars from Earth.” This claim by Levin is unproven and unbelievable. A Cornell University article claims “planetary material is shared throughout the solar system.” But it also tells us that material from Mars tends to take much longer to travel to Earth -- “up to 15 million years.” We may presume that similar lengths of time would be required for ejected material to travel from Earth to Mars. There is no reason to believe that life from Earth could survive being blasted up into space, some incredibly long journey through space, and also the heat of passing through the upper atmosphere of Mars.
  • "The excess of carbon-13 over carbon-12 in the Martian atmosphere is indicative of biological activity, which prefers ingesting the latter."  NASA has a simpler explanation for this excess, a non-biological explanation. 
  • "No factor inimical to life has been found on Mars." This statement is utterly false. Mars is a very arid planet with an atmosphere so thin that the planet is constantly bombarded by radiation.  The average temperature is about -80 F, and the planet is plagued by dust storms.  A NASA press release reminds us that on Mars "both radiation and harsh chemicals break down organic matter." 
  • "Phoenix and Curiosity found evidence that the ancient Martian environment may have been habitable."  Who cares, given that the planet is now so inhospitable?

I may note that if Levin's claim that “ejecta containing viable microbes have likely been arriving on Mars from Earth” were true, it would pretty much kill the whole point of looking for microbial life on Mars. We have been told many times that it is very important to look for life on Mars, because it would tell us something about the likelihood of life's natural appearance. The argument has been: you can't tell much about the likelihood of life appearing from chemicals if it happened on only one planet, but you can know about that likelihood if you know that it occurred independently on two planets in the solar system. People have often spoken along these lines: “If we find life on Mars, that will show that it was relatively easy for life to naturally appear from non-life, and that therefore life must be common in the universe.”

Astronomer Martin Rees put it this way (note the important use of the word "independently"):

"But the existence of even the most primitive life on Mars would in itself be hugely interesting to scientists. It would also have broader cosmic implications. If life had originated twice, independently, within our solar system, we’d have to conclude that it can’t be a rare fluke – and that the wider cosmos must teem with life, on zillions of planets orbiting other stars. But until we find life on Mars (or maybe on the moons of Jupiter or Saturn, or on a comet) it remains possible that life is very rare and special to our Earth."

But imagine if it were true that “ejecta containing viable microbes have likely been arriving on Mars from Earth.” If that were true, then the existence of microbial life on Mars would not be important, for such life would probably just be something descended from microbes that traveled from Earth, rather than something that appeared independently. In such a case it would not at all be true that "life had originated twice, independently." If Mars life came from Earth, then finding life on Mars wouldn't tell us anything new about the likelihood of life appearing by chance from non-life, or about the likelihood of life independently appearing on planets revolving around other stars. If Mars life came from Earth, then even if we found life on Mars, there would still be only one known case in which life independently appeared.