
Our future, our universe, and other weighty topics

Wednesday, September 27, 2017

Future Starships May Be More Like “Noah's Ark” Than Star Trek's Enterprise

This fall we will have two new television shows that perpetuate the ideas about interstellar travel that we saw in the Star Trek movies and television series. The first is a comedy-drama TV show called The Orville starring Seth MacFarlane, which has been described as a Star Trek clone. The second is a new CBS series called Star Trek: Discovery, which is yet another incarnation of Star Trek, following in the footsteps of the original TV series, Star Trek: The Next Generation, Star Trek: Voyager, Star Trek: Enterprise, and Star Trek: Deep Space Nine.

So far I have enjoyed The Orville, which seems to be striking a nice balance between drama and comedy. As someone who has spent endless hours watching three previous Star Trek television series, I was greatly anticipating the first episode of Star Trek: Discovery. But the episode slightly disappointed me, despite impressive production values. Besides the distracting uniforms (with golden sides that reflect light in a glaring way), there was the issue of the trigger-happy first officer (Burnham). In the show she recommends an unprovoked attack against the Klingons, on the weak grounds that long ago some of them killed some Vulcans. The show may be trying to make its female first officer look tough and aggressive, but so far it has failed to create a character we can morally respect.

By now the Star Trek series and the Star Wars series of movies have firmly planted various ideas about how interstellar travel will work. These ideas include the following notions:

  1. Interstellar travel will be very fast, because ships will be able to use warp drives that allow spaceships to travel between stars in a matter of days, or hyperspace jumps that allow spaceships to instantaneously travel between stars.
  2. When an interstellar spaceship gets to a planet revolving around another star, the main activity of the crew will be to interact with or study the life forms that exist on the planet.
  3. Interstellar spaceships will be very heavily loaded with weapon systems and defensive shielding systems, to protect themselves against hostile spaceships that may be encountered in a star system.
  4. Pretty much the only earthly life forms on the spaceship will be humans. There will be no real vegetation, and no horses, birds or dogs anywhere (except those that appear in a holographic illusion system called the holodeck).
  5. The spaceships that travel between stars will be rather cramped, and living on them will be rather like living in a submarine, except for those lucky enough to enjoy the holographic illusion system called the holodeck.

But there are reasons for thinking that such assumptions are totally wrong, and that interstellar spaceships and their activities will be completely different from what we see in Star Trek and Star Wars.

The first reason has to do with the speed of interstellar travel. The nearest star is about 4 light-years away. A light-year is the distance light travels in a year. Einstein's Special Theory of Relativity tells us that the speed of light is an absolute speed limit that cannot be violated. It is barely possible that there are currently undiscovered laws of nature or facets of nature that might allow us to side-step such a limitation, and travel to a distant star much faster, perhaps instantly. But such a possibility is mere speculation, and there is currently no evidence for it. Given the speed limit of the speed of light, an average distance between neighboring stars of about 4 light-years, and the engineering difficulties of even reaching half the speed of light (plus the need to slowly accelerate and slowly decelerate), the average interstellar voyage will probably take decades.
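To put rough numbers on this, here is a back-of-the-envelope sketch. The specific figures are my own illustrative assumptions (a distance of 4.25 light-years, roughly that of the nearest star system, and a cruise speed of half the speed of light), not claims from this post:

```python
# Back-of-the-envelope transit time for a nearest-star voyage.
# Illustrative assumptions (mine, not from the post above):
# distance of 4.25 light-years, cruising at half the speed of light,
# with acceleration and deceleration time ignored.
distance_ly = 4.25        # distance in light-years
cruise_speed_c = 0.5      # speed as a fraction of the speed of light

coast_years = distance_ly / cruise_speed_c
print(coast_years)        # 8.5 years of coasting alone
```

Add realistic acceleration and deceleration phases at tolerable g-forces, or a more distant target star, and the total easily stretches into the multi-decade voyages described above.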

There is also strong reason to suspect that once a spaceship reaches another solar system, it will be nothing like what happens on Star Trek or Star Wars. This is because there is strong reason to suspect that life in the universe may be rare, in the sense that the vast majority of planets do not have life, even if the planets are a suitable distance from the star they orbit. The problem is that even the simplest form of life apparently needs to be an incredibly complex system, one that seems to be fantastically improbable to appear by chance.  See here for some of the reasons for thinking that life originating by chance is harder than water-tight log cabins accidentally forming by falling trees. 

I've watched as many Star Trek episodes as anyone, but I can never recall an episode with an exchange like this:

Captain Kirk: Mister Sulu, put us in orbit around the planet. Spock, what do your sensors pick up in regard to life forms?
Spock: Nothing, Captain. Nothing at all. The planet is completely sterile.
Captain Kirk: You mean we came all the way over here for nothing?

But there are strong reasons for suspecting that this is just what would be likely to happen to an interstellar mission arriving in a distant solar system. Unless there is something special going on to increase the odds (some cosmic teleology or some cosmic life-force), the odds are stacked very strongly against us finding life in a particular solar system that we might explore.

These odds may be recognized by the designers of an interstellar spacecraft. So instead of being a ship like the starships we see in Star Trek, the first interstellar spaceship may be completely different. It may have these characteristics:

  1. No weapons systems, and no defensive systems, because the odds of encountering a threatening presence may be too small.
  2. The ship may be a miniature worldlet, capable of supporting a complex ecosystem. The ship may be designed for multiple generations to live in. There will be huge areas filled with plant life and various forms of life.
  3. The ship may be something like a Noah's Ark, containing hundreds or thousands of examples of earthly life, including animals such as dogs, horses, cows, birds, lions, bears, dolphins, and fish.

When the interstellar spaceship arrives at a planet orbiting a distant star, the plan may be to terraform a planet that will probably be lifeless. This will be a slow process that may take centuries. First, microorganisms would be introduced on the planet, microbes designed to give off oxygen. After many decades the planet would have a breathable atmosphere. Then various plant forms could be introduced. Finally various forms of animal life could be introduced, starting with small animals and gradually working up to larger animals.

The interstellar spaceship would need to be a multi-generation ship because the terraforming process might take centuries. For the crew members, it might be frustrating. The initial crew members who left Earth on the interstellar voyage might live the rest of their lives on the spaceship, dying of old age before the ship ever reached another star. Or, if a crew member lived to see the arrival of the ship at a distant solar system, the crew member might never walk on a planet of that solar system. If it took centuries to terraform the planet, that crew member might have to be satisfied with watching the planet from orbit, as it slowly got a little bit greener every year.

If it turns out that interstellar missions work completely differently from what we saw in Star Trek, we should hardly be surprised. Star Trek was a great show, but it had its realism problems. I've seen all episodes of the original series several times, and can never recall one time when the crew members either wore a jacket or rolled up their sleeves when exploring distant planets. But such planets would have had random conditions, some being much colder than Earth, and some being much hotter. While I was writing this post, I watched an old episode of Star Trek in which Mr. Scott tells Captain Kirk that pressing a red button on a panel will blow up the spaceship they are on. But the red button didn't even have any marking next to it indicating what it did. So Star Trek had realism problems from its beginning, and we shouldn't use it as a realistic model for interstellar exploration.

It may be like this for the first interstellar voyage

Saturday, September 23, 2017

His Big Bang Redefinition Is Confusing and Insubstantial

The latest post by cosmologist Ethan Siegel (published by Forbes.com) seems to offer a dramatic announcement. It is entitled, “The Big Bang Wasn't the Beginning, After All.”

For decades astronomers have told us that the Big Bang that occurred about 13 billion years ago was the beginning of the universe. We have been told that the universe began in this event, in which the universe began expanding from an infinitely dense point, with incredible heat and density in its first seconds. Has something new been discovered to overturn this view?

No, not at all. It's just Ethan Siegel playing redefinition games in a very confusing fashion. There is no substance at all in Siegel's announcement, and nothing new has been discovered about the universe's beginning.

Here's what's going on. About 15 years after getting the best evidence for the Big Bang (the discovery of the cosmic background radiation), scientists began to make a speculation about something that may have happened when the universe was about a trillionth of a trillionth of a trillionth of a second old. It is speculated that when the universe was only about 10^-35 second old, it underwent for a fraction of a second a period of exponential expansion, and then returned to the normal, linear rate of expansion we now observe. This strange speculation is called the cosmic inflation theory.

That name is one of the most confusing labels ever put on a science theory. When discussing this cosmic inflation theory, you must not fall into the trap of thinking that the universe's expansion at, say, 10 seconds after the Big Bang was an example of “cosmic inflation.” You must remember that this term “cosmic inflation” refers only to something that supposedly went on during the first second of the universe's history.

Now Siegel wants to introduce a new confusion. He wishes to redefine the term “Big Bang” so that it only refers to what happened after this alleged period of cosmic inflation that supposedly occurred in the universe's first second. So Siegel wants us to start thinking: first there was the cosmic inflation, and then there was the Big Bang. Redefining the term "Big Bang," he says, "The hot Big Bang definitely happened, but doesn't extend to go all the way back to an arbitrarily hot and dense state." 

By making this attempt at redefining the Big Bang (as something that did not occur at the very beginning), Siegel is out on his own. This is not the way the majority of cosmologists speak, and this is not the way they have been speaking about the Big Bang for the past 50 years. For 50 years cosmologists have been talking about the Big Bang as if that term means: what happened at the very beginning of our universe.

For example, in the visual below from a NASA web site, we see the Big Bang referred to as something occurring before the alleged period of cosmic inflation, not after it (Siegel proposes the opposite order, that we think of the Big Bang as occurring after cosmic inflation). 


What would motivate a cosmologist to try to redefine the term “Big Bang” so that it does not refer to the very beginning of the universe? It is easy to think of a motivation. For 50 years cosmologists have been troubled by the fact that they have no explanation for the Big Bang at the universe's beginning. But if a cosmologist redefines the term “Big Bang” so that it doesn't refer to the very beginning, he can then say that he has an explanation for the Big Bang. If I redefine “Big Bang” to mean something that began when the universe was one second old, then I can claim to have an explanation for that state by referring to the previous second. So the motivation may be: the redefinition may help a cosmologist place a laurel wreath on his head, allowing him to say, “Clever me, I explained the Big Bang.”

Siegel has been pushing this attempted redefinition of the Big Bang for quite a while, but there is little evidence that other cosmologists are following him. Most cosmologists continue to use the term “Big Bang” as they have for the past 50 years: to mean the event that started the universe at Time Zero, the very beginning of time, not some time after Time Zero.

There is no substance behind any attempt to redefine the Big Bang as something that began later than the very beginning. Nature does nothing to support such a redefinition. The cosmic inflation theory (that there was some special period of exponential expansion during the universe's first second) is unproven, very farfetched, and not supported by any compelling evidence. There are good reasons for rejecting such a theory, discussed by Paul Steinhardt at Princeton University, and in this post and this post. Among the reasons is that the theory conflicts with findings about anomalies in the cosmic background radiation. 

Were we to redefine the Big Bang so that it does not refer to the very beginning, it would be a kind of arbitrary semantic silliness similar to redefining the word “human” so that it does not refer to people with very dark skin. Whenever such a very arbitrary redefinition is proposed, we should ask: who is attempting to help himself or his kind by proposing such a redefinition? Just as a politician might wish to redefine “human” to make things easier for his own kind, a cosmologist might wish to redefine “Big Bang” to make it easier to place a triumphal gold medal around his own neck. But explanatory triumphs are not earned by arbitrary redefinition. 

Postscript: Theoretical physicist Sabine Hossenfelder has a post entitled "Is the Inflationary Universe a Scientific Theory? Not Anymore."  She states this:

It is this abundance of useless models that gives rise to the criticism that inflation is not a scientific theory. And on that account, the criticism is justified. It’s not good scientific practice. It is a practice that, to say it bluntly, has become commonplace because it results in papers, not because it advances science. 

Post-postscript: Siegel has a new errant post entitled "The Multiverse Is Inevitable, and We're Living in It."  He states the following:

Rather, the Multiverse is a theoretical prediction that comes out of the laws of physics as they’re best understood today. It’s perhaps even an inevitable consequence of those laws: if you have an inflationary Universe governed by quantum physics, this is something you’re pretty much destined to wind up with. 

This is extremely erroneous. The "inflationary universe" is not at all "the laws of physics as they're best understood today," but instead a family of speculative cosmological theories, not well supported by evidence. 

Later on in the post he admits that it could be that "our ideas about inflation are completely wrong" and that in that case "the existence of a Multiverse isn't a foregone conclusion." You do not substantiate one speculation by pointing out that it is implied by some other speculation.

Wednesday, September 20, 2017

The Royal Society's Slick GMO Guide Has More Spin Than Straight-Talk

The Royal Society (the United Kingdom's oldest science organization) has released a slick information guide pitching genetically modified organisms (GMO's). It's a document giving 18 answers to 18 questions about GMO's.

 A page from the info guide

At the beginning of the document there is some cleverly worded text designed to make you think that the document is going to be a balanced look at the topic of GMO's. We are told that balanced focus groups were created:

There were eight groups in total and 66 members of the public took part. Participants were recruited for a range of views based on those for and against GM or who were undecided, in order to reflect the findings of a nationally representative survey on the subject.

But these focus groups were largely a smokescreen, because we are then told that “the following set of 18 questions was the outcome of the responses from the focus groups,” and that the answers to the questions were written by “a group of experts who have endeavored to ensure the answers are factual, as much as possible, and not associated with any value judgment.” So the focus groups were ignored when writing the answers to the questions. That's hardly a technique for providing a balanced examination of an issue. The claim that the answers in the document are “not associated with any value judgment” is misleading, because the answers do make value judgments, such as favorable judgments about genetically modified crops.

The key question addressed is question 8, which is “Is it Safe to Eat GM Crops?” The following answer is given:

Yes. There is no evidence that a crop is dangerous to eat just because it is GM. There could be risks associated with the specific new gene introduced, which is why each crop with a new characteristic introduced by GM is subject to close scrutiny. Since the first widespread commercialisation of GM produce 18 years ago there has been no evidence of ill effects linked to the consumption of any approved GM crop.

Asking “Is it safe to eat genetically modified crops” is not asking the right question, because “is it safe” questions are so vague and debatable that almost any answer can be justified. Is it safe to drink three glasses of vodka a night, or to drive at 70 miles an hour on the highway, or to live in a beachfront house in Florida (where hurricanes are common)? It is easy to make a case for either the “yes” or “no” answers.

A much better question to ask is: is there a reasonable chance that you will be harmed if you consume genetically modified crops? The answer to this question is: yes, there is. Such a chance is probably much less than 50%, but it is substantial nonetheless. The roles that genes play are often extremely complex and obscure. It is quite possible that we will eventually discover harmful effects from genetically modified food. While each genetically modified crop may be tested before it is released, there is still the possibility that eating certain combinations of genetically modified crops might turn out to be dangerous. Similarly, neither carbon nor oxygen is harmful in itself, but a certain combination of them (carbon monoxide) can be fatal.

A study published in 2012 found that a genetically modified crop and a herbicide it was engineered to be grown with caused severe organ damage and hormonal disruption in rats fed over a long-term period of two years. Eventual consequences for some of the rats included tumors. Published in a peer-reviewed scientific journal, the study was carried out by a team led by Professor Gilles-Eric Séralini. A kind of intellectual lynch mob quickly formed, led by pro-GMO interests, which caused the paper to be retracted. The incident was a great black mark on contemporary bio-science, and seems like a very troubling attempt at a cover-up. After a long delay another scientific journal published the study. See here for other information about the study.

We hear no specific mention of Seralini's research in the long Royal Society document on GMO's. The Royal Society document inconsistently states the following about genetically modified (GM) foods:

Since the first widespread commercialisation of GM produce 18 years ago there has been no evidence of ill effects linked to the consumption of any approved GM crop....There have been a few studies claiming damage to human or animal health from specific foods that have been developed using GM.

The second statement contradicts the first statement, particularly since the Royal Society document does nothing to dispute these “studies claiming damage to human or animal health from specific foods that have been developed using GM” other than to weakly note they have been “challenged.”

Should we think that genetically modified foods are very safe on the grounds that “Since the first widespread commercialisation of GM produce 18 years ago there has been no evidence of ill effects linked to the consumption of any approved GM crop”? Not necessarily. As they say in the investment industry, “Past results do not guarantee future results.” The fact that something hasn't yet produced much harm doesn't show it won't produce harm in the future. The passengers on the fatal flights of the Hindenburg and the Challenger probably thought they were safe, on the grounds they were using technology which hadn't failed in quite a long while.

Beware of experts telling you something is safe based on a past performance record. At about the beginning of 2008, the financial experts such as Standard and Poor's told us that CMO tranches were a very safe investment, based on their previous performance record. But in 2008 such investments experienced a disastrous large-scale failure, with defaults aplenty, and investors losing billions. If something unexpected like that happens with genetically modified foods, we might see a large-scale loss of life.

Question 13 of the 18 questions asks this about genetically modified crops: “GM crops have only been around for 20 years, might there still be unexpected and untoward side effects?” The answer given by the Royal Society document is: yes. So if there might be unexpected and untoward side effects from eating genetically modified crops, as the Royal Society document admits in answering question 13, why was the answer it gave to question 8 (“Is It Safe to Eat GM Crops?”) a simple “Yes”? Given its answer to question 13, it seems the answer to question 8 should not have been a simple “Yes,” but instead something like, “Probably, but there may be unexpected and untoward side effects from eating genetically modified crops.”

See this link for a critical analysis of the deficiencies of the Royal Society document on GMO's, which has some notable omissions and inconsistencies.

The Royal Society information guide answers 18 questions about genetically modified food, but it doesn't answer the question below, which would make a useful addition to their guide.

Their guide left out this Question 19

Postscript: This news story claims that an emeritus professor writing pro-GMO pieces was secretly taking $57,000 from one of the leading GMO manufacturers. (Conversely, I have never received any money or benefit from any organization that is in any way related to any of my blog posts, excluding the government that benefits citizens such as me.)

A recent ethically troubling news story tells us, "The blueprint for life - DNA - has been altered in human embryos for the first time in the UK."  This raises the question: what will they do with the monsters that will result from trial-and-error experimentation with DNA in human embryos? Will they coldly kill off such bad results, or lock them up, as suggested in the speculative visual below?


Saturday, September 16, 2017

He's Apoplectic That This Anomaly Is Being Researched

As shown in this post, the ire of neuroscientist Steven Novella was recently ignited. Strangely, what kindled this scientist's bitter indignation was the simple fact that a research laboratory has been opened.

The research laboratory is a laboratory in India to study the anomaly known as homeopathy. I have never tried homeopathy, and have never recommended it. But I know that it is an alternative medical technique that can sometimes involve giving people extremely diluted solutions. A believer in this technique may believe that if you take a certain type of concentrated solution, and then dilute it by a factor such as 10,000 times or more, the solution will still have some medical potency (and the medical potency can still be retained even if there is no detectable trace of what was originally in the concentrated solution).  
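The dilution factors just described can be made concrete with a little arithmetic. This is a minimal sketch with my own illustrative assumptions (one mole of starting solute, repeated tenfold dilution steps), showing why standard chemistry predicts no remaining solute at extreme dilutions:

```python
# How many solute molecules survive repeated dilution?
# Illustrative assumptions (mine, not from the post above): one mole of
# solute to start, and each dilution step is a tenfold dilution.
AVOGADRO = 6.022e23  # molecules per mole

def molecules_after(dilution_steps, start_moles=1.0):
    """Expected number of solute molecules remaining after the given
    number of tenfold dilution steps."""
    return start_moles * AVOGADRO * 10.0 ** -dilution_steps

print(molecules_after(4))   # a 10,000-fold dilution still leaves ~6e19 molecules
print(molecules_after(30))  # past roughly 24 steps, the expected count drops below one
```

Under these assumptions, beyond roughly 24 tenfold steps the expected number of original molecules in a dose falls below one, which is why chemistry leads us to expect no effect from the most extreme dilutions.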

Based on what we currently know about chemistry, you would expect homeopathy to have no effectiveness whatsoever. But surprisingly, research studies have often seemed to show real medical effectiveness from such extremely diluted solutions. There seem to be three possibilities here:

  1. Homeopathy has no real effectiveness, and studies suggesting otherwise are just false alarms.
  2. Homeopathy does have some real effectiveness, and its effectiveness is because nature has some important aspect that our chemists have overlooked or not yet discovered.
  3. Homeopathy can sometimes be effective not for chemical reasons but simply because some people believe it is effective; and homeopathy is a case of a mind-over-matter effect or a placebo effect in which a person's expectations or beliefs can affect his medical outcomes. (Notably, a meta-analysis in the British medical journal Lancet found that homeopathy produced results substantially better than could be explained by a placebo effect; it stated, "The results of our meta-analysis are not compatible with the hypothesis that the clinical effects of homoeopathy are completely due to placebo.")

Because items 2 and 3 are of very significant scientific and intellectual interest, it seems that homeopathy is worthy of further study and investigation. So I am puzzled by the irate reaction of Steven Novella to an Indian news story that merely mentions that a new research center has been opened in India to study homeopathy, without even making any general claims about homeopathy. Why would a scientist not want an anomaly to be investigated further? Could it be that Novella is worried that the research might find something that challenges his dogmatic proclamations on the topic of homeopathy?

Novella has a link saying “homeopathy does not work for anything.” When I follow that link, it takes me to another post by Novella mentioning mainly the NHMRC report on homeopathy. In a previous post, I thoroughly examined this report, and found it to be a prime example of a faulty and biased meta-analysis. I documented quite a few defects in the meta-analysis, such as arbitrarily excluding studies with fewer than 150 subjects, a cut-off level that is not typically used by other medical meta-analyses. A typical meta-analysis on a topic other than homeopathy will include studies having 75 or 100 subjects. The NHMRC considered only 225 research studies out of more than 1800, an exclusion rate far higher than we have with other similar meta-analysis studies. There are guidelines called the PRISMA guidelines which give recommendations on how a meta-analysis should be done. The NHMRC report violated such guidelines. So we cannot use the NHMRC report as a guide to whether homeopathy is effective, and its meta-analysis does not cancel out the Lancet meta-analysis suggesting that homeopathy has an effectiveness better than a placebo.

Novella also compares ESP and homeopathy, and the information he gives on ESP is dead wrong. He says this:

After a century of research and thousands of studies there is no clear evidence that ESP is real. For both homeopathy and ESP there is a great deal of noise, but no clear signal. There are many flawed or small studies, but no repeatable high quality studies.

What Novella is telling us about ESP is entirely wrong. Sound experimental research showing the reality of ESP has been done for more than 130 years. Among the research highlights was the work of Professor Joseph Rhine at Duke University. Under controlled experimental conditions, Rhine showed spectacular results such as tests showing 3746 successes out of 10,300 tries (a 36% success rate), in experiments in which the expected success rate was only 20%. We would not expect such a result to occur by chance even if every person on planet Earth was tested for ESP. The subject in question (Pearce) got even better results testing with another researcher (Pratt), getting 558 successes in a card-guessing experiment in which the expected number of successes was about 370. Such a success rate occurring by chance had a likelihood of less than 1 in 10,000,000,000,000,000,000,000. Pearce's successes were repeated, showing Novella's claim about repeatability is inaccurate. An even more spectacular result was reported by Professor Riess, who did a remote card-guessing test showing a success rate of 73% on 1850 guesses in an experiment in which the expected success rate was only 20%.
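Chance probabilities like the one quoted above can be checked with an exact binomial tail computation. The sketch below uses only the Python standard library and the Pearce-Pratt figures quoted in the paragraph above (558 hits with an expected 370 at a 20% chance rate implies 1850 guesses; the 1850-guess figure is my inference from those numbers):

```python
from math import exp, lgamma, log

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed by exact summation
    of the tail terms in log space to avoid overflow."""
    total = 0.0
    for i in range(k, n + 1):
        log_term = (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                    + i * log(p) + (n - i) * log(1 - p))
        total += exp(log_term)
    return total

# Pearce-Pratt figures quoted above: 558 hits, chance hit rate of 1 in 5,
# expected successes about 370, implying 1850 guesses.
p_value = binom_sf(558, 1850, 0.2)
print(p_value)  # a vanishingly small probability, below 1 in 10^22
```

The computed tail probability comes out far below 1 in 10^22, consistent with the odds cited in the paragraph above.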

More recent research on ESP has included sensory deprivation experiments called ganzfeld experiments. Done by many researchers using many subjects, such experiments have repeatedly shown success rates of 30% to 32% and higher on tests in which the expected chance rate is only 25%. This is a very high degree of repeatability. Even more dramatic recent results are summarized at the end of this post. There are also innumerable very dramatic anecdotal reports of ESP collected by researchers such as Louisa Rhine, and summarized in the book The Gift.

So Novella is simply misinforming us about ESP. He's told us that “there is no clear evidence that ESP is real,” which is false. He's also told us that there “are no repeatable high quality studies,” but there are very many such studies, and ESP is very much an effect that shows up dramatically in repeated scientific studies.

Given that Novella has misinformed us about ESP, we may wonder whether he is also misinforming us about the evidence for homeopathy. He tells us that there is in regard to homeopathy a “great deal of noise, but no clear signal,” and if you read between the lines, that “great deal of noise” sounds like something that could be real evidence of an important reality behind homeopathy. How to sort out whether there is real evidence? One way is to do more research. So Novella's ire about a homeopathy research center appearing is strange, and also unscientific.  

 "Don't bother me with more data" isn't scientific

Postscript: Another example of a Novella misstatement is this post with the title, "AWARE Results Finally Published -- No Evidence of NDE." The AWARE study (which you can read about here and here) actually found very dramatic evidence for near-death experiences (NDEs), including a case of a man who reported floating above his body while his heart had stopped (with independent verification of his recollections of the medical resuscitation efforts made while he was unconscious), and a person who reported traveling through a tunnel toward a very strong light and encountering a beautiful crystal city, along with about 100 other near-death experiences.

Post-postscript: Novella's rage against paradigm-challenging research continues in this post, where he fulminates against the existence of the Susan Samueli Center for Integrative Medicine. In the post Novella claims that acupuncture "does not work." But a New York Times article asserts otherwise, saying the following:

A new study of acupuncture — the most rigorous and detailed analysis of the treatment to date — found that it can ease migraines and arthritis and other forms of chronic pain. The findings provide strong scientific support for an age-old therapy used by an estimated three million Americans each year.  

In the same article, a methodologist at the very prestigious Memorial Sloan-Kettering Cancer Center in New York says, "We think there’s firm evidence supporting acupuncture for the treatment of chronic pain."

Wednesday, September 13, 2017

Brain Dogmas Versus the “Total Recall” People

In the Guardian this year, there was a long and very fascinating article entitled “Total Recall: The People Who Never Forget.” The article discusses very rare cases of people with what is called hyperthymesia or Highly Superior Autobiographical Memory, a topic that the 60 Minutes TV program had covered in 2010.

Fewer than 100 such people have been identified. They have the uncanny ability to remember in great detail every single day of past years of their lives, sometimes stretching back for decades.

The article discusses Jill Price, and says this about her:

Price was the first person ever to be diagnosed with what is now known as highly superior autobiographical memory, or HSAM, a condition she shares with around 60 other known people. She can remember most of the days of her life as clearly as the rest of us remember the recent past, with a mixture of broad strokes and sharp detail. Now 51, Price remembers the day of the week for every date since 1980; she remembers what she was doing, who she was with, where she was on each of these days. She can actively recall a memory of 20 years ago as easily as a memory of two days ago, but her memories are also triggered involuntarily.

A memory researcher named James McGaugh has verified the accuracy of Price's recollections. One way to do this is simply to ask her questions about a particular day on which something notable happened. For example, if asked on what day Rodney King was beaten, or on what day Bing Crosby died, she can quickly recall the exact date. Or, if asked the significance of some particular date, she can tell you that some famous person died on that day. McGaugh also verified the accuracy of Price's recollections by checking her diary. He and his colleagues wrote up the case in a scientific paper.

Another scientific paper (which can be read in full here) describes the case of HK, whom the 2012 paper describes as a 20-year-old with “near-perfect” autobiographical memory. HK is described as blind, having been born at 27 weeks of gestation, 13 weeks premature. The paper states:

As can be seen in Figure 1, for dates between this first memory until his 10th year of life, HK shows a relatively steady increase in accuracy for autobiographical events. Accuracy takes a noticeable jump to near 90% in 2001 at age 11. From that point forward, HK’s recollection of autobiographical events is near perfect.

The paper also gives us insight as to what it is like to have such a memory:

He reports that he is able to relive memories in his mind as if they just happened. HK stated that everything about his memory, including sounds, smells, and emotions, are vividly re-experienced when he remembers a particular event in time... He stated that there is no difference in the vividness of his recollection between events that occurred when he was five and events that he experienced within the past month.

The scientists did an MRI scan of the brain of this person with this near-flawless autobiographic memory. Did it show something like an abnormally big brain? No, something like the opposite was found. For the paper tells us that the volumetric analysis “reveals significantly reduced total tissue volume in HK” and that “a volumetric analysis of subcortical structures shows general reduction in subcortical volumes in HK (1019 mL) relative to controls (1249 ± 29 mL).” So the person with this miracle memory had a brain about 20% smaller. The only part of his brain that was larger was his amygdala, an almond-shaped structure; HK's was 20% bigger, but since the amygdala is so small, that amounts to only a flea-sized difference.
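The quoted volume figures can be checked with a line of arithmetic. The sketch below uses only the two numbers quoted from the paper:

```python
# Subcortical tissue volumes quoted in the HK paper
control_ml = 1249  # mean control volume (mL)
hk_ml = 1019       # HK's volume (mL)

# Fractional reduction relative to controls
reduction = (control_ml - hk_ml) / control_ml
print(round(reduction * 100, 1))  # 18.4, i.e. roughly 20% smaller
```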

Cases such as Price and HK apparently date back to the nineteenth century, for an 1871 article describes a man named Daniel McCartney who, according to the Guardian article, “could remember the day of the week, the weather, what he was doing, and where he was for any date back to 1 January 1827, when he was nine years and four months old.” 

Something different from these cases of Highly Superior Autobiographic Memory or hyperthymesia are the cases of extraordinary memory in savants. Savants are individuals who have some mental disability but also have some extraordinary mental talents. An example of a savant is the late Kim Peek, who could accurately recall the details of 12,000 books he had read, despite having an IQ of only 87. Other examples are Tony DeBlois, who can play 8000 songs from memory, and Derek Paravicini who can play a piece after hearing it only once, despite having a severe learning disability. 

So far researchers have failed to draw any noteworthy conclusions from these cases of extraordinary memory. But such cases may suggest that standard ideas about memory are wrong. The standard story is that memories are all stored in your brain. There are several reasons for rejecting this idea, such as the seeming impossibility of explaining how instantaneous recall of distant memories could occur (discussed here), and the reason (discussed here) that there is no workable theory of how brains could be storing memories for 50 years. The most popular theory is that memories are stored in synapses. But synapses are subject to a high rate of protein turnover and structural turnover which should make it impossible for synapses to be storing memories for longer than a year or two. The proteins in synapses are replaced within a few weeks. A recent scientific paper mentions an estimate that the rate of protein turnover in synapses is about 0.7% per hour, which is a rate of about 16% per day. 
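The arithmetic behind that “about 16% per day” figure is easy to reproduce. This is a sketch assuming the 0.7%-per-hour estimate quoted above; whether you extrapolate linearly or compound hour by hour, the answer is roughly the same:

```python
hourly_turnover = 0.007  # 0.7% of synaptic protein replaced per hour (quoted estimate)

# Simple linear extrapolation, as in the article
linear_daily = hourly_turnover * 24
print(round(linear_daily * 100, 1))  # 16.8 (% per day)

# Compounded version: fraction of the original protein replaced after 24 hours
compounded_daily = 1 - (1 - hourly_turnover) ** 24
print(round(compounded_daily * 100, 1))  # 15.5 (% per day)
```

Either way, the bulk of synaptic protein would be replaced within a matter of weeks.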

Under the assumption that memories are stored in brains, we would not by any means expect some people to remember their past experiences 800% better than an ordinary person can. Such a thing would seem to require a brain 800% bigger than the biggest brain that can fit into a human skull. And also under the assumption that memories are all stored in brains, we should not expect people with smaller brains or damaged brains (such as patient HK and Kim Peek) to have dramatically better memories.

But let's imagine a different scenario. Imagine if people's memories are not stored in brains, but are stored in some psychic or spiritual reality associated with a person. Your memories may be stored in something like your soul, not your brain. Under such a scenario, you might have complete memories for everything that happened to you. But your brain might act as a kind of valve, limiting your access to these memories. For some rare people, this normal blockage or limitation of access to biographical memory may not be occurring. Such people may have Highly Superior Autobiographic Memory.

Cases of Highly Superior Autobiographic Memory are easy to reconcile with such a theory. We simply imagine that in such people the brain's function as a kind of “blockage valve” for our memories is not working normally, so the blockage does not occur as it normally does. But cases of Highly Superior Autobiographic Memory are hard to reconcile with standard claims that the brain is where all your memories are stored. Under such claims, it is unthinkable that anyone should have memories such as Jill Price's, which would seem to require a brain the size of an elephant's.

Postscript: Another interesting mind anomaly is the rare anomaly of acquired savants.  A person may acquire extraordinary mental abilities after some accident.  One such case is Orlando Serrell, who acquired extraordinary calculation and memory skills after being hit by a baseball when he was 10, in 1979.  According to his web site, "He can recall the weather, where he was, and what he was doing for every day since the accident."  Again, we have something that does not fit in with conventional dogmas about the brain, but which is compatible with very different ideas about memory.  If your brain acts as a kind of faucet or valve for your memories, rather than a storehouse of such memories, a head injury might reset that valve so that memory is dramatically enhanced.

Saturday, September 9, 2017

The Fallacies of Dawkins' “Climbing Mount Improbable”

Richard Dawkins' book Climbing Mount Improbable is a book trying to persuade you that blind Darwinian evolution (evolution caused by natural selection and random mutations) produced pretty much all the biological complexity we see in the world. The book is centered around a mountain-climbing analogy. In this analogy, reaching some height of biological complexity is portrayed as something like climbing a mountain. The book assures us that Darwinian evolution is able to climb “even the most precipitous heights” because it takes the “mildly sloping paths” rather than the hardest steepest path up the mountain. These paths Dawkins describes (on page 73) as “gently inclined grassy meadows, graded steadily and easily towards the distant uplands.” He tells us on page 77 that Darwinian evolution works by “going round the back of Mount Improbable and crawling up the gentle slopes.”

This is a very poor analogy – an inappropriate metaphor. The appearance of a new type of macroscopic organism involves all kinds of complex parts appearing on the scene in an incredibly intricate and coordinated way, so that great functional coherence is achieved. But climbing a mountain does not involve any type of event in which complex parts are assembled. Climbing a mountain doesn't even involve complexity. So mountain climbing is a terrible analogy for achieving some stunning wonder of coordinated biological complexity.

We can imagine all types of analogies that would have been more appropriate for describing blind processes producing complex biological structures. One such analogy would be the idea of trees in a forest luckily forming into a log cabin, or stones randomly forming into a stone house. But such analogies would throw unwanted light on the difficulties of random processes forming coherent structures. So rather than using such an analogy, Dawkins has opted for an inappropriate analogy that seems to have been chosen purely for its rhetorical advantages.

In this sense he has followed in Darwin's footsteps, for Darwin gave us the inappropriate metaphor of “natural selection,” a term suggesting incorrectly that nature chooses things. The metaphor is not literally accurate, because strictly speaking only conscious agents choose things. The idea can be stated without a metaphor by using either the term “survival of the fittest” or “differential reproduction.” So why did Darwin make such use of the metaphorical term “natural selection”? Probably for the same reason Dawkins has chosen a mountain-climbing analogy: for rhetorical advantage. Once a person has been persuaded that the evolution of incredibly complex machine-like functionality is like mountain-climbing, the same person might be persuaded that such an end was easily achieved by “walking up the easy back route,” because there are often “easy back routes” leading to mountain tops. But since mountain-climbing doesn't involve even the simplest combination of parts, you are deluding yourself if you think that this mountain-climbing analogy is suitable for describing the appearance of wonderfully coordinated biological machinery.

On page 77 of the book, Dawkins tries to suggest that Darwinism is not a theory of chance. He says, “It is grindingly, creakingly, crashingly obvious that if Darwinism were really a theory of chance, it couldn't work.” But, of course, Darwinism is a theory of chance, the claim that the combination of blind chance mutations and natural selection is enough to produce the origin of new species. As Dawkins says on page 80, “One stage in the Darwinian process is indeed a chance process – mutation.” So on one page he talks as if Darwinism isn't a theory of chance, and a few pages later he's talking as if it is just that.

Dawkins admits on page 81 that mutations usually have bad effects, not good effects. That is a severe understatement; a more candid statement would be that for every mutation that is helpful, there are hundreds or thousands that are harmful.

On page 96 Dawkins tries to suggest the idea of macromutations, that a single mutation can cause a huge change in an organism. He asks, most ridiculously, “Couldn't the elephant's trunk have shot out in a single, giant step?” We know why he wants to suggest this old “hopeful monsters” idea. If macromutations can occur, then rather than have to believe that lots of favorable mutations were needed for some biological innovation (something incredibly unlikely to occur by chance), we can believe that in some cases a favorable innovation arose from a single macromutation.

But the evidence Dawkins gives in support of the idea of macromutations is rather laughable. He states this on page 96:

Macro-mutations do happen. Offspring are sometimes born radically, monstrously different from either parent, and other members of the species. The toad in figure 3.2 is said by the photographer, Scott Gardner of the Hamilton Spectator, to have been found by two girls in their garden in Hamilton, Ontario. He says that they put it on the kitchen table for him to photograph. It had no eyes at all on the outside of its head. When it opened its mouth, Mr. Gardner said, it seemed to become more aware of its surroundings.

This is laughable as evidence. The photo he shows is a mutant toad that apparently has no eyes. Inside the toad's mouth are two little round things that could be anything – maybe eyes, or maybe something the toad ate, or maybe two marbles the two girls put in the toad's mouth. The second-hand claim that the toad “seemed to become more aware of its surroundings” when it opened its mouth is laughable from an evidence standpoint. There are, in fact, no known cases of a proven macromutation that ever produced a useful new feature (visible to the eye) in a biological organism. Dawkins' toad anecdote smacks of desperation. Why is he citing some story in the Hamilton Spectator rather than citing a scientific journal? Darwinist biologist Jerry Coyne says this about macromutations:

Macromutationism is the idea that important evolutionary changes between groups were produced by single mutations with very large effects....The notion of macromutationism pops up every few years in evolutionary biology. It’s wrong but it’s resilient.

Two scientists have noted, “In fact, to our knowledge, no macromutations ... that gave birth to novel proteins have yet been identified.”

Referring to the fossil record, on page 106 Dawkins makes the damaging admission that “Transitional forms are generally lacking at the species level.” Why should there not be a vast abundance of transitional forms in the fossil record if Darwinian theory is correct?

One of the great difficulties in natural history is explaining the origin of flight, where we have the wing-stump problem. For flight to have appeared, a transitional species would presumably have to have had a mere wing stump. But wing stumps are not at all useful, so we cannot explain the appearance of such a thing by saying that it provided some survival benefit.

On page 115 Dawkins starts trying to defend the idea that there is an “easy back path” by which Darwinian evolution could produce flying species. He says, “One possibility is that true flight grew out of the habit of gliding between trees, which lots of animals do, even if they don't quite fly.” On page 118 he suggests that it would have been easy for gliding to evolve:

In none of these cases is there any difficulty in finding a gentle path up Mount Improbable. Indeed, the fact that the gliding habit has evolved so many times testifies to the ease with which these mountain paths can be found.

This type of reasoning is very common among orthodox Darwinists, but it is fallacious. The reasoning goes like this: it must be easy for blind evolution to produce Capability X, because Capability X has appeared numerous times in nature. Such reasoning is fallacious because we have no way of knowing how many occurrences were produced by a particular type of process that originated biological complexity. Consider these possibilities:

Possibility 1: Species have originated merely through natural selection and random mutations.
Possibility 2: Species have originated through the action of some mysterious cosmic life-force or cosmic programming that acts throughout the universe as an organizational principle.
Possibility 3: Species have originated because extraterrestrial spaceships have periodically arrived and planted new life forms on our planet.
Possibility 4: Species originate when new types of life stray onto our planet after spacetime wormholes open up, leading from some other dimension to Earth.
Possibility 5: Species originate because some divine creator or angel causes them to appear. 
Possibility 6: We are living in a computer simulation created by extraterrestrials, and all fossils found are simply details added by them as a kind of "backstory" to flesh out the simulation.

Since we don't know which of these is true, we cannot appeal to the fact that some feature exists in multiple life forms as something that helps to substantiate any of these claims, because such a thing might happen under any one of these scenarios. In the case of gliding, we have no known cases of gliding that have been proven to have been produced by natural selection and random mutations. So the mere fact that gliding has appeared in multiple species does nothing to support the claim that gliding could have easily appeared by natural selection and random mutations.

There are, in fact, very strong reasons for believing that a feature such as gliding could not have appeared through natural selection and random mutations. Let us consider the nature of gliding. Gliding between trees or branches requires an incredibly precise coordination between muscles, bones, web-like structures between limbs, eyes, and brain – for animals must not only land on a distant tree branch, but also be able to land on the branch without falling. If that coordination isn't just right, it will be very harmful or suicidal for an animal to try to glide between trees; for the animal will fall to the ground. Developing gliding is similar to developing a suspension bridge, in the sense that until the functionality is almost completed, what you have is functionality that will be suicidal if you try to use it. The table below illustrates the point.

Completion | Gliding | Suspension Bridge
25% | Suicidal (jumping causes the animal to fall to its death) | Suicidal (car using the bridge ends up in the river)
50% | Suicidal (jumping causes the animal to fall to its death) | Suicidal (car using the bridge ends up in the river)
75% | Suicidal (jumping causes the animal to fall to its death) | Suicidal (car using the bridge ends up in the river)
100% | Marginally beneficial, or maybe not – still great risks in gliding, and sparse benefits | Beneficial (car is able to cross the bridge)

So it seems that the appearance of gliding cannot be explained using Darwinian ideas. We have no explanation of how a species would have evolved the first 10% or the first 20% of gliding functionality, which would have had no “survival of the fittest” benefit. It's the same type of problem involved in trying to explain the evolution of flight without imagining gliding, where a mere wing stump would provide no survival benefit.

Then there's also the fact that if one imagines gliding animals turning into flying animals, one has no explanation of how all the changes needed to turn gliding animals into flying animals occurred. Trying to explain how Darwinian evolution could have produced “the development of gliding” plus “the evolution from gliding to flying” is no easier than trying to explain how Darwinian evolution could have produced the evolution of flying alone.

For the reasons given above, trying to make the evolution of birds seem easy by suggesting that nature first evolved gliding animals and then evolved birds from such animals is like reasoning that it's not very hard to climb to the top of the Met Life skyscraper in midtown Manhattan by climbing its outer walls, because you can first climb the outer walls of the Chrysler Building and then jump to the top of the nearby Met Life skyscraper. Alternate theories discussed by Dawkins to account for the origin of flight are not any more credible, and we can chuckle at his suggestion that perhaps feathers were first developed to catch insects. Dawkins has no clear story to tell us, saying that “perhaps birds began flying by leaping off the ground, while bats began by gliding out of trees” while adding that alternately “perhaps birds too began by gliding out of trees” (page 126). Such uncertainty does not amount to a convincing story of how flight appeared in birds.

In Chapter 8 Dawkins begins trying to explain how vision could have evolved through random mutations and natural selection. He uses the “smoke and mirrors” trick followed by similar reasoners, by focusing on the eye and acting as if we merely need to explain the appearance of eyes to explain the appearance of vision. Proceeding in such a way is very fallacious, because the eye is merely one part of a complicated system needed to explain vision, what we may call the vision system. The main elements of the vision system are:
  1. The eye, which in modern organisms is an intricate arrangement of parts
  2. Extremely complicated proteins and biochemistry used by the eye to capture light
  3. The optic nerve connecting the eye to the brain
  4. Extremely complicated changes in the brain needed for an organism to make use of inputs from the eye.
These parts are so complicated that even the most primitive vision system providing a minimal benefit will require an extremely complicated invention exceedingly unlikely to appear by chance, as unlikely as falling trees forming into a nice roofed log cabin. You cannot explain such an invention merely by describing how a primitive eye could appear.

To try to explain how a primitive eye could appear, Dawkins appeals to a scientific paper published by Nilsson and Pelger, which Dawkins inaccurately describes as a “computer model.” The paper does not actually describe any computer model or computer simulation. The paper describes how a circular patch of light-sensitive cells could change into a curved cupped eye, conveniently assuming that it underwent exactly the changes that would bring about such a thing (which would be most unlikely to occur given the thousands of possible ways that random mutations might change the appearance of such a flat circular patch). The paper cheats by assuming that such a flat circular patch of light-sensitive cells would come to exist before it had any usefulness. The paper also includes absurd statements such as “The evolution of an eye can thus be compared to the lengthening of a structure, say a finger, from a modest 8 cm to 8000 km, or a fifth of the Earth's circumference.” That is wrong because the evolution of an eye would be the assembly of a very intricate arrangement of coordinated parts, something vastly more complicated than the mere lengthening of an object.

I can describe the type of smoke-and-mirrors trick involved in Nilsson and Pelger's paper. It works like this: given some complex functional arrangement of parts that is incredibly hard to achieve by chance, you try to make it look easy by ignoring 45% of the arrangement, and also assuming that another 45% of the arrangement was conveniently there to begin with; and then you argue that it's easy to make the arrangement because it's not too hard to make the remaining 10%. So, for example, someone might argue that it's pretty easy to make a suspension bridge, by ignoring the whole superstructure (the towers and the steel cables), and just assuming that the huge complicated substructure (leading down into the river) just happens to conveniently exist, and then kind of talking as if making a suspension bridge is as easy as building a road.

This is very close to what Nilsson and Pelger have done. They've simply ignored the requirements of an optic nerve and the incredibly complicated brain changes needed for vision, and they've assumed that the light-sensitive cells (so hard to achieve because they require fantastically improbable and intricate light-capturing proteins) were just there to begin with. Having either ignored or assumed the prior existence of 90% of a vision system, they then argue that the remaining part is easy, so it's easy for animals to get vision. This is a huge fallacy which Dawkins repeats, because he's so very eager to believe that vision is an easy hurdle for evolution to jump over.

Ignoring the almost unfathomable intricacy of a vision system, Dawkins seems to think that explaining eyes is as easy as explaining how some cells might simply form into a cup-shaped curve. Similarly, a person might fallaciously claim that it's real easy to make a camera, because all you need is a little box shape – or that it's easy to make a moon rocket, because all you need is a big tube shape.

Here is a description of the insanely complicated light-capturing biochemistry going on in the eye, from a biochemistry textbook:

  1. Light absorption converts 11-cis-retinal to all-trans-retinal, activating rhodopsin.
  2. Activated rhodopsin catalyzes replacement of GDP by GTP on transducin (T), which then dissociates into Tα-GTP and Tβγ.
  3. Tα-GTP activates cGMP phosphodiesterase (PDE) by binding and removing its inhibitory subunit (I).
  4. Active PDE reduces [cGMP] to below the level needed to keep cation channels open.
  5. Cation channels close, preventing influx of Na+ and Ca2+; membrane is hyperpolarized. This signal passes to the brain.
  6. Continued efflux of Ca2+ through the Na+-Ca2+ exchanger reduces cytosolic [Ca2+].
  7. Reduction of [Ca2+] activates guanylyl cyclase (GC) and inhibits PDE; [cGMP] rises toward “dark” level, reopening cation channels and returning Vm to prestimulus level.
  8. Rhodopsin kinase (RK) phosphorylates “bleached” rhodopsin; low [Ca2+] and recoverin (Recov) stimulate this reaction. Arrestin (Arr) binds the phosphorylated carboxyl terminus, inactivating rhodopsin.
  9. Slowly, arrestin dissociates, rhodopsin is dephosphorylated, and all-trans-retinal is replaced with 11-cis-retinal. Rhodopsin is ready for another phototransduction cycle.

We have no mention of any of these complexities in Dawkins' book, which shows the absurdity of his claims that it's easy to get an eye. The text above mentions proteins that are far more structurally complicated than a primitive eye, such as a rhodopsin protein whose gene uses 1000+ base pairs to specify it.

Rhodopsin proteins used in vision

With a vision system we have the same type of “uselessness of a wing stump” problem found in explaining the origin of flight. Aquatic animals are believed to be the first that had eyes. But imagine an aquatic animal with only the poorest type of vision, the type of vision you have if you cover your eyes with 2-ply toilet paper. With such vision the aquatic animal couldn't tell whether other aquatic animals were far away; it could tell only when they were very close (seeing just a very blurry blob ahead of itself). But a blind aquatic animal could already tell when another aquatic animal is very close, from its splash (and in a sea of blind fish, if you're a blind fish you will have other fish frequently bumping into you). So if a blind aquatic animal in a sea of blind aquatic animals gets the weakest type of vision, some extremely blurry vision, it gains no benefit. Very weak vision might also allow an aquatic animal to know which way was up in the water. But a blind aquatic animal could already tell that from temperature changes and pressure changes in the water (the deeper you go, the colder it is and the greater the pressure). And in any case an aquatic animal doesn't need to know which way is up in the water.

So the poorest type of vision would have no survival value for an aquatic animal. If there were to be some incredibly improbable set of random mutations allowing the poorest type of vision, that would not be rewarded. For an aquatic animal, there is no series of gradual changes, each rewarded, that leads to the advanced functionality of vision. There is instead a high functionality threshold that must be reached before any survival reward is obtained, involving a complex, highly coordinated arrangement of parts needed to get something better than the weakest type of vision. We cannot explain how such a high threshold could be reached through random mutations and natural selection. For the threshold to be reached, too many complex parts have to be assembled with too much coordination. Far from being an “easy back route,” this is a high cliff like the front face of Half Dome in Yosemite. 

Natural selection is merely a dumb filter, something that can filter out bad designs but cannot explain the appearance of good designs. There is a very general reason why random mutations plus natural selection (a survival reward for greater fitness) cannot explain the origin of complex biological functionality. The reason is that the early stages of bringing into place such functionality would in general not yield rewards. Fragmentary implementations yield only very poor rewards or no rewards at all, and it is usually true that no reward is produced until a large fraction of an implementation is produced. So in almost all cases it is not possible to describe a pathway by which a series of random gradual changes could yield very complex functionality, with the early parts of the pathway producing a significant reward. If there is no reward in the early parts of the implementation pathway, it is gigantically improbable that nature would walk down that pathway, which would be only one of quadrillions of possible random paths.

The diagram below illustrates this point. The part in red represents the initial stages of a biological innovation, stages that are pre-functional and therefore not explained by an appeal to natural selection, which can only come into play once a functional threshold has been reached.

evolution problem

Dawkins tells us that eyes independently evolved 44 times, and scientists say that a random mutation would have to occur about 100 times for a particular biological innovation to become fixed in the gene pool. This means our orthodox Darwinist is required to believe that blind chance produced a vision system 4400 times, which is rather like believing that 4400 water-tight log cabins have been produced by random falling trees in forests (although the latter is far more likely). Given the intricacy of a vision system, vastly greater than a log cabin's, we would not expect a vision system to have arisen by chance mutations and natural selection even once in the history of our galaxy. So when Dawkins tells us that an eye can appear on the evolutionary scene “at the drop of a hat,” your fairy tale alarm should start ringing very loudly.

Considering all of its required parts (including very complex brain changes, fine-tuned proteins, an optic nerve, and intricate eye anatomy), a vision system is one of the most complex cases of organized functionality known to man. Adherents of the Darwinian “modern synthesis” have no way to account for this, for they lack any theory of organization. Darwinism is a theory of accumulation, not a theory of organization – the accumulation of random changes by mutations. As an evolutionary biologist confessed recently, “Indeed, the MS [modern synthesis] theory lacks a theory of organization that can account for the characteristic features of phenotypic evolution, such as novelty, modularity, homology, homoplasy or the origin of lineage-defining body plans.” But what do some people do when they have to explain mountainous levels of organization, and they lack a biological theory of organization? They try to fake their way through, by using verbal tricks, carefully selective prose and omissions to try to make the mountain of organization look like a mere molehill of organization.

We may compare Darwinian evolution to a man who is trying to build elaborate structures while acting under two handicaps. The first is that he has a refrigerator-sized steel block chained to his leg. The block (causing such slowness) symbolizes the fact that Darwinian evolution relies on favorable random mutations to achieve innovations, but such mutations should be so rare that waiting for them means the Darwinian evolution of complex macroscopic innovations should work no faster than a snail's pace. The second is that the man has been commanded to operate under a rule of “do 100 harmful things for every useful thing you do.” Such a rule symbolizes the fact that for every random mutation that is helpful, there are very many that are harmful. How fast should such a man be able to build useful structures? Never faster than an insanely slow pace. So if the fossil record shows something like the Cambrian Explosion, in which every major phylum of animal now existing appeared in a relatively short time, we must suspect that something much more than Darwinian evolution by random mutations and natural selection was going on.

Any convincing naturalistic attempt to explain the origin of vision and the origin of other complex biological functionality would devote a great deal of time to explaining the exceptionally fine-tuned and intricate biochemistry of life, and would also devote a great deal of time to explaining how it is that so many fine-tuned proteins came to arise in human biology. For countless proteins there is what is called a steep fitness landscape. This means that a protein can only function well in a small number of states very similar to its existing state. Explaining the origin of such proteins is a nightmare for thinkers such as Dawkins who maintain that nothing but blind chance and natural selection brought these proteins into existence. Calculations repeatedly indicate that it would take something like ten to the seventieth power tries or search attempts for nature to find a particular type of protein known to exist, and there are thousands of such proteins in the human body. The number of search attempts that nature would have had time to make in the history of the earth is some number trillions of times smaller.
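The scale of the mismatch described above can be made concrete with some back-of-the-envelope arithmetic. A minimal sketch, with the caveat that every number in it is an assumption chosen for illustration: the 10^70 figure echoes the calculations the text refers to, while the counts of organisms and trials are deliberately generous guesses of my own, not established values.

```python
# Back-of-the-envelope arithmetic for the search-space claim above.
# All specific numbers are illustrative assumptions.

TRIES_NEEDED = 10**70          # assumed tries to find one fine-tuned protein
ORGANISMS_EVER = 10**40        # assumed (generous) organisms in Earth's history
TRIALS_PER_ORGANISM = 10**3    # assumed relevant mutational trials per organism

tries_available = ORGANISMS_EVER * TRIALS_PER_ORGANISM
shortfall = TRIES_NEEDED // tries_available

def exp10(n):
    """Exponent of an exact power of ten, read off from its decimal length."""
    return len(str(n)) - 1

print(f"Tries available: about 10^{exp10(tries_available)}")   # 10^43
print(f"Shortfall factor: about 10^{exp10(shortfall)}")        # 10^27
```

Even with these generous assumptions, the available trials fall short of the assumed requirement by a factor of 10^27, which is the kind of gap the paragraph above describes.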

Does Dawkins' book have some chapters trying to explain the origin of fine-tuned proteins? To the contrary, there isn't even an entry for proteins in the index of his book, nor is there an entry for chemistry or biochemistry. We would also expect that any convincing naturalistic attempt to explain the origin of vision and the origin of other complex biological functionality would devote a great deal of time to topics such as coordination and coherence, since the marvel of biological functionality is largely the wonder of how lots of small parts can work together in such a coordinated and coherent way. But we find no entry for “coordination” or “coherence” in Dawkins' index, which also has no entry for “complexity.”

Dawkins' book fails completely in its attempts to show “easy back routes” by which Darwinian evolution could produce great wonders of complexity. He fails to provide a single example of some impressive piece of macroscopic biological functionality that has been proven to have been produced by natural selection and random mutations. No such example exists. Dawkins also fails to provide a single plausible story leading us to think that natural selection and random mutations would have been capable of producing any impressive piece of very complex macroscopic biological functionality.

In this regard Dawkins is in good company. The passage on vision biochemistry that I quoted is from page 459 of the 1119-page biochemistry textbook Lehninger Principles of Biochemistry. The book gives us a thousand pages describing the most intricate machine-like biochemistry in the human body, but does essentially nothing to explain how this functionality could have originated (apart from a short pro-forma review of Darwinian tenets that does not specifically deal with biochemistry). There is endless discussion of proteins, but when I look up “proteins, evolution of” in the book's index, I am referred to five pages that offer no substantive clarification of how fine-tuned proteins could have naturally evolved. How did we get all these thousands of fine-tuned proteins, each so fantastically unlikely to have arisen by chance? Our 1119-page biochemistry textbook has no real answer. Similarly, the 1041-page textbook Biochemistry by Lubert Stryer of Stanford University is notable for making no substantive attempt to explain the origin of any part of the wonderful biochemical machinery it discusses. Its index lists only 20 pages referring to evolution, and when I look up those pages I find only passing or incidental references to evolution. The 1041-page book mentions Darwin in only a single passing paragraph, lacks even a substantial exposition of Darwinian theory, does not mention natural selection in its index, and ignores the topics of the origin of life and the origin of proteins. That is not what we would expect if Darwinian theory were useful in explaining the origin of life or of very complex biochemical machinery.

Nor is there any real origins answer offered by this long recent review of the topic of protein evolution, which states this near its end (referring to protein folds):

It is not clear how natural selection can operate in the origin of folds or active site architecture. It is equally unclear how either micromutations or macromutations could repeatedly and reliably lead to large evolutionary transitions. What remains is a deep, tantalizing, perhaps immovable mystery.