
Our future, our universe, and other weighty topics

Friday, April 19, 2019

Motivated Reasoning of the “Cosmic Inflation” Storytellers

In 2017 Scientific American published a sharp critique of the theory of cosmic inflation originally advanced by Alan Guth (not to be confused with the more general Big Bang theory). The theory of cosmic inflation (which arose around 1980) is a kind of baroque add-on to the Big Bang theory that arose decades earlier. The Big Bang theory asserts the very general idea that the universe began suddenly in a state of incredible density, perhaps the infinite density called a singularity; and that the universe has been expanding ever since. The cosmic inflation theory makes a much more specific claim, a claim about less than one second of this expansion – that during only a small fraction of the first second of the expansion, there was a special super-fast type of expansion called exponential expansion. ("Cosmic inflation" is a very bad name for this theory, as it creates all kinds of confusion in which people confuse the verified idea of an expanding universe and the shaky idea of cosmic inflation. The term "cosmic inflation" refers not to cosmic expansion in general, but to the very specific idea that the universe's expansion was once a type of expansion -- exponential expansion -- radically faster and more dramatic than its current linear rate of expansion.) 
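The difference between exponential and linear expansion can be made concrete with a toy numerical sketch. The numbers below are purely illustrative (60 steps is chosen because inflation is often described as lasting roughly 60 "e-folds"); they are not actual cosmological parameters:

```python
# Toy comparison of exponential vs. linear growth of a "scale factor".
# Step count and growth rates are illustrative only, not real cosmology.
import math

steps = 60  # inflation is often described as roughly 60 "e-folds"

# Exponential: the scale factor multiplies by e at every step.
exponential = math.exp(steps)   # e^60 is about 1.1e26

# Linear: the scale factor grows by a fixed increment at every step.
linear = 1.0 + steps            # 61.0

print(f"exponential growth after {steps} steps: {exponential:.3e}")
print(f"linear growth after {steps} steps:      {linear}")
```

After the same number of steps, exponential growth yields a factor of about 10^26 while linear growth yields a factor of 61, which is the sense in which the hypothesized primordial expansion would have been radically faster than ordinary expansion.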

The article in Scientific American criticizing the theory of cosmic inflation was by three scientists (Anna Ijjas, Paul J. Steinhardt, Abraham Loeb), one a Harvard professor and another a Princeton professor. It was filled with very good points that should be read by anyone curious about the claims of the cosmic inflation theory. You can read the article on a Harvard web site here. Or you can go to this site by the article's authors, summarizing their critique of the cosmic inflation theory.

Recently a very long scientific paper appeared on the arXiv physics paper server, a paper with the cute title “Cosmic Inflation: Trick or Treat?” In its very first words the paper's author (Jerome Martin) misinforms us, because he refers to cosmic inflation as something that was “discovered almost 40 years ago.” Discovery is a word that should be used only for observational results in science. Cosmic inflation (the speculation that the universe underwent an instant of exponential expansion) was never discovered or observed by scientists. In fact, it is impossible that this “cosmic inflation” or exponential expansion ever could be observed. During the first 300,000 years of the universe's history, the density of matter and energy was so great that all light particles were thoroughly scattered and shuffled a million times. It is therefore physically impossible that we will ever be able to observe any unscrambled light signals from the first 300,000 years of the universe's history. So we will never be able to get observations that might verify the claim of cosmic inflation theorists that the universe underwent an instant of exponential expansion.

At the end of the paper the author claims that the cosmic inflation theory has “all of the criterions that a good scientific theory should possess.” The author gives only two examples of such things: first, the claim that the cosmic inflation theory is falsifiable, and second that “inflation has been able to make predictions.” His claim that the theory is falsifiable is not very solid. He says that the cosmic inflation theory could be falsified if it were found that the universe did not have what is called a flat geometry, but then he refers us to a version of the cosmic inflation theory that predicted a universe without such a flat geometry. So cosmic inflation theory isn't really falsifiable at all. So many papers have been published speculating about different versions of cosmic inflation theory that the theory can be made to work with any future observations. Harvard astronomer Loeb says here that the cosmic inflation theory "cannot be falsified."

It is not at all true that the cosmic inflation theory has “all of the criterions that a good scientific theory should possess,” or even most of those characteristics. Below is a list of some of the characteristics that are desirable in a good scientific theory. A theory can be good without having all of these characteristics, but the more of them a theory has, the more highly regarded it should be.

  1. The theory is potentially verifiable. While falsification has been widely discussed in connection with scientific theories, it should not be forgotten that the opposite of falsification (verification) is equally important. Every good scientific theory should be potentially verifiable, meaning that there should always be some reasonable hypothetical set of observations that might verify the theory. In the case of the cosmic inflation theory, we can imagine no such observations. The only thing that could verify the cosmic inflation theory would be if we were to look back to the first instant of the universe and observe exponential expansion occurring. But, as I previously mentioned, there is a reason why such an observation can never possibly occur, no matter how powerful future telescopes are. The reason is that the density of the very early universe was so great that all light signals from the first 300,000 years of the universe's history were hopelessly shuffled, scrambled and scattered millions of times.
  2. The theory merely requires us to believe in something very simple. A very desirable characteristic of a scientific theory is that it only requires that we believe in something very simple. An example of a theory with such a characteristic is the theory that the extinction of the dinosaurs was caused by an asteroid collision. Such a theory asks us only to believe in something very simple, merely that a big rock fell from space and hit our planet. Another example of a theory that meets this characteristic is the theory of global warming. In its most basic form, the theory asks us merely to believe in something very simple, that humans are putting more greenhouse gases in the atmosphere, and that such gases raise temperatures (as we know they do inside a greenhouse). But the cosmic inflation theory (the theory of primordial exponential expansion) does not have this simplicity characteristic. All versions of such a theory require complex special conditions in order for this cosmic inflation (exponential expansion) to begin, to last for only an instant, and then to end in less than a second so that the universe ends up with the type of expansion that it now has (linear expansion, not exponential expansion). We need merely look at the papers of the cosmic inflation theorists (all filled with complex mathematical speculations) to see that the theory falls far short of meeting this simplicity characteristic of a good scientific theory. In a recent post, the cosmic inflation pitchman Ethan Siegel tells us, "If you have an inflationary Universe that's governed by quantum physics, a Multiverse is unavoidable." What that means is that cosmic inflation carries the near-infinite baggage of requiring belief in some vast collection of universes. Of course, this is the exact opposite of the simplicity that is desirable in a good theory.
  3. There is no evidence conflicting with the theory. A characteristic of a good scientific theory is that there is no evidence conflicting with the theory. The theory of electromagnetism and the theory of plate tectonics are very good theories, and there is no evidence against them. But there are quite a few observations conflicting with the cosmic inflation theory (the theory of exponential expansion in the universe's first instant). Such observations (sometimes called CMB anomalies) are discussed in this post. The observations are mainly cases in which the cosmic background radiation has some characteristic that we would not expect to see if the cosmic inflation theory were true. A scientific paper says, “These are therefore clearly surprising, highly statistically significant anomalies — unexpected in the standard inflationary theory and the accepted cosmological model.”
  4. The theory makes precise numerical predictions that have been exactly verified to several decimal places very many times. This characteristic is one that the best theories in physics have, theories such as the theory of general relativity, the theory of quantum mechanics, and the theory of electromagnetism. For example, the theory may predict that some unmeasured quantity will be 342.2304, and scientists will measure that quantity and find that it is exactly 342.2304. Or the theory may predict that some asteroid will hit the Moon at exactly 10:30 PM EST on May 23, 2026, and it will then be found (10 days later) that the asteroid did hit the Moon at exactly 10:30 PM EST on May 23, 2026. The cosmic inflation theory does not have this characteristic of a good scientific theory. It makes no exact numerical predictions at all. Several hundred different versions of the cosmic inflation theory have been published, each of which is a different scientific model. Each of those hundreds of models can predict 1000 different things, because the numerical parameters used with the equations can be varied. So the predictions of the cosmic inflation theory are pretty much all over the map, and it is impossible to point to any case in which it made a good precise successful prediction. When advocates of the cosmic inflation theory talk about predictive success, they are talking about woolly predictions (like “the universe will be pretty flat”) rather than exact numerical predictions, and they are talking about one-shot affairs rather than cases in which predictions are repeatedly verified. Many a wrong theory can have an equal degree of predictive success. For example, a bad economic theory may predict various things, and may vaguely predict correctly that the stock market will go up next year.
  5. We continue to get observational signs that the theory is correct. A desirable characteristic of a good scientific theory is that we continue to observe signs suggesting that the theory is correct. The theory of plate tectonics has such a characteristic. Every time there is an earthquake in the “Ring of Fire” region that marks the boundaries of continental plates, that's an additional observational sign that the plate tectonics theory is correct. The theory of gravitation continues to send us observational signals every day that the theory is correct. But we do not get any observational signs from the universe that it once underwent an instant of exponential expansion, nor can we logically imagine how such signs could ever come or keep coming from such a primordial event.
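The parameter-freedom point in item 4 can be sketched numerically. The "model" below is entirely hypothetical (a toy linear function standing in for a family of models with a free parameter, not any published inflationary potential), but it shows the general problem: when a free parameter sweeps the predicted observable over a wide range, some parameter choice can be found to match whatever value is observed, so the match is not a genuine predictive success.

```python
# Toy illustration of how a model family with a free parameter can "match"
# any observation. The model here is invented, not a real inflation model.

def toy_prediction(parameter: float) -> float:
    """A stand-in for a model's predicted observable as a parameter varies."""
    return 0.9 + 0.2 * parameter  # sweeps smoothly over a wide range

# Whatever value happens to be "observed", some parameter reproduces it:
observed = 0.965  # an arbitrary example value
fitted_parameter = (observed - 0.9) / 0.2

assert abs(toy_prediction(fitted_parameter) - observed) < 1e-12
print(f"parameter {fitted_parameter:.3f} 'predicts' the observed {observed}")
```

The fit succeeds by construction, which is exactly why such after-the-fact agreement carries no evidential weight.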

So it is clear that Martin's claim that the theory of cosmic inflation has “all of the criterions that a good scientific theory should possess” is not at all true. Saying something similar to what I said above, a New Scientist article puts it this way:

But no measurement will rule out inflation entirely, because it doesn’t make specific predictions. “There is a huge space of possible inflationary theories, which makes testing the basic idea very difficult,” says Peter Coles at Cardiff University, UK. “It’s like nailing jelly to the wall.”

The tall tale of cosmic inflation (exponential expansion at the beginning of the universe) is a modern case of a tribal folktale, told by a small tribe of a few thousand cosmologists. Below is the basic piece of folklore of the cosmic inflation theory:

"At the very beginning, the universe started out with just the right conditions for it to start expanding at a super-fast exponential rate. So for the tiniest fraction of a second, the universe did expand at this explosive exponential rate. Then, BOOM, the universe suddenly switched gears, did a dramatic change, and started expanding at the much slower, linear rate that we now observe."

Why would anyone believe such a story that can never be verified? The answer is: because they have a strong motivation. The arguments given for the cosmic inflation theory are examples of what is called motivated reasoning. Motivated reasoning is reasoning that people engage in not because they have premises or evidence that demand particular conclusions, but because they have a motivation for reaching the conclusion.

The motivation for the cosmic inflation theory was that people wanted to get rid of some apparent fine-tuning in the Big Bang. At about the time the cosmic inflation theory appeared, scientists were saying that the universe's initial expansion rate was just right, and that if it had differed by as little as 1 part in 1,000,000,000,000,000,000,000,000,000,000,000,000,000, we would not have ended up with a universe that would have allowed life to exist in it. That type of extremely precise fine-tuning at the very beginning of Time bothers those who want to believe in a purposeless universe.

Saying that the universe's initial expansion rate was fine-tuned is equivalent to saying that the density was fine-tuned, for the requirement is a very precise balancing involving an expansion rate that is just right for a particular density (or, to state the same idea, a density that is just right for a particular expansion rate).  In a recent very long cosmology paper, scientist Fred Adams notes on page 41 the requirement for a very precise fine-tuning of the universe's initial density (something like 1 in 10 to the sixtieth power, which is a trillionth of a trillionth of a trillionth of a trillionth of a trillionth).  On page 42 Adams states that, "The paradigm of inflation was developed to alleviate this issue of the sensitive fine-tuning of the density parameter."  That was the motivation of the cosmic inflation theory -- to sweep under the rug or get rid of a dramatic case of fine-tuning in nature. 
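As a quick arithmetic check of the figure quoted from Adams: one part in 10 to the sixtieth power is indeed one part in a trillion multiplied by itself five times, since a trillion is 10^12 and 12 × 5 = 60.

```python
# Verify that "a trillionth of a trillionth of a trillionth of a trillionth
# of a trillionth" equals one part in 10**60.
trillion = 10 ** 12

assert trillion ** 5 == 10 ** 60
print(f"one part in {trillion ** 5:.0e}")
```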

The folklore mongers who sell cosmic inflation stories may believe that they have got rid of this fine-tuning at the beginning. But they actually haven't. They've merely “robbed Peter to pay Paul,” by getting rid of fine-tuning in one place (in regard to the universe's initial expansion rate) at the price of requiring lots of fine-tuning in lots of other places. That's because all theories of cosmic inflation themselves require enormous amounts of fine-tuning. But with a cosmic inflation theory it may be rather less noticeable, because the required fine-tuning occurs in lots of different places rather than in one place.

Judging from a 2016 cosmology paper,  the cosmic inflation theory requires not just one type of fine-tuning, but three types of fine-tuning. The paper says, “Provided one permits a reasonable amount of fine tuning (precisely three fine tunings are needed), one can get a flat enough effective potential in the Einstein frame to grant inflation whose predictions are consistent with observations.” How on Earth does it represent progress to try to get rid of one case of fine-tuning by introducing a theory that requires three cases of fine-tuning? And the estimate of three fine-tunings in the paper is probably an underestimate, as other papers I have read suggest that 7 or more precise fine-tunings are needed.

This is not theoretical progress

We may compare the cosmic inflation pitchman to some person who wants to sell someone in Manhattan a car. “Think of all the money you'll save!” says the pitchman. “You won't have to pay $40 on subways each week.” But what the pitchman fails to tell you is that when you add up the cost of the monthly car payments, the cost of car insurance, and the cost of a garage parking space (because there are so few parking spaces in Manhattan), the total cost of the car is much more than the cost of the subway. Similarly, the pitchmen of cosmic inflation theory tell us that the theory is great because it reduces fine-tuning in one place (in regard to the universe's initial expansion rate), and neglect to tell you that the total amount of fine-tuning (adding up all of the special requirements and fine-tuning needed for cosmic inflation to work) is probably far “worse” if you believe that cosmic inflation occurred.

What has been going on with the cosmic inflation theory is very similar to what went on with the supersymmetry theory, a theory physicists fruitlessly labored on for decades. Like the cosmic inflation theory, supersymmetry was motivated by a desire to sweep under the rug some fine-tuning. In the case of supersymmetry, the fine-tuning scientists wanted to get rid of was the apparent fact of the Higgs boson or Higgs field being fine-tuned very precisely ("like a pencil standing on its point" is an analogy sometimes given). An article on the supersymmetry theory discusses the fine-tuning that motivated the theory:

One logical option is that nature has chosen the initial value of the Higgs boson mass to precisely offset these quantum fluctuations, to an accuracy of one in 10^16. However, that possibility seems remote at best, because the initial value and the quantum fluctuation have nothing to do with each other. It would be akin to dropping a sharp pencil onto a table and having it land exactly upright, balanced on its point. In physics terms, the configuration of the pencil is unnatural or fine-tuned.

Similarly, a paper on an MIT server entitled "Motivation for Supersymmetry" states the following (referring to the many new types of hypothetical particles called "supersymmetric partners" imagined by the supersymmetry theory):

Thus in order to get the required low Higgs mass, the bare mass must be fine-tuned to dozens of significant places in order to precisely cancel the very large interaction terms....However, if supersymmetric partners are included, this fine-tuning is not needed.

Physicists erected the ornate theory of supersymmetry, thinking that they were explaining away this very precise fine-tuning in nature, "to dozens of significant places." But they failed to see that they were just “robbing Peter to pay Paul,” because the total amount of fine-tuning required by the supersymmetry theory (given all of its many different things that had to be just right) was as great as the fine-tuning that it tried to explain away. So there was no net lessening of fine-tuning even if the supersymmetry theory were true.
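The kind of tuning described in the quoted passages can be illustrated numerically. The figures below are invented for illustration: when two huge, nearly equal terms must cancel to leave a small residue, the offsetting term has to be specified to roughly as many significant digits as the size ratio between the big terms and the residue, which is the sense of the pencil-on-its-point analogy.

```python
# Toy illustration of fine-tuned cancellation (all numbers are invented).
# Exact decimal arithmetic is used so the cancellation is not obscured by
# floating-point rounding.
from decimal import Decimal, getcontext

getcontext().prec = 40  # plenty of precision to track the cancellation

quantum_correction = Decimal("1234567890123456.7")  # a huge made-up term
desired_residue = Decimal("1.0")                    # the small observed value

# The offsetting "bare" term must cancel the correction almost exactly:
bare_term = desired_residue - quantum_correction

# Nudge the bare term in its 17th significant digit...
perturbed = bare_term + Decimal("0.1")

# ...and the residue shifts by 10% instead of staying put:
print(bare_term + quantum_correction)   # prints 1.0
print(perturbed + quantum_correction)   # prints 1.1
```

A change in the seventeenth digit of one term produces a ten-percent change in the result, which is why a one-in-10^16 cancellation is described as tuning "to dozens of significant places" once still larger correction terms are involved.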

The MIT paper above says "many thousands" of science papers have been written about supersymmetry. Most of them spun out ornate webs of speculation, as ornate and unsubstantiated as the gossamer speculations of cosmic inflation theorists.  Supersymmetry has failed all observational tests, and now many physicists are lamenting that they wasted so many years on it. Our cosmic inflation theorists have failed to heed the lesson of the supersymmetry fiasco: that trying to explain away fine-tuning in the universe is a waste of time. 

Monday, April 15, 2019

When an Apparition Is Seen by Multiple Observers: 17 Cases

Apparitions have been seen by humans throughout history. Skeptics claim that such apparitions are just hallucinations. But there are two reasons for rejecting such a theory. The first reason is that there is an unusually high fraction of apparition sightings in which a person sees an apparition of someone (typically someone the observer did not know was in danger), and then later finds out that this person died on the same day (or the same day and hour) as the apparition was seen. We would expect such cases to be extremely rare or nonexistent if apparitions are mere hallucinations, since all such cases would require a most unlikely coincidence. But the literature on apparitions shows that it is quite common for an apparition to appear to someone on the day (or both the day and hour) of the death of the person matching the apparition. See here and here and here and here for 100 such cases.

The second reason for rejecting claims that apparitions are mere hallucinations is the fact that an apparition is quite often seen or heard by more than one person at the same time. We should not expect any such cases under a theory of apparitions being hallucinations.

I will now review some examples of cases in which an apparition was seen or heard by more than one person.  A very early case is found in the 17th century book Miscellanies by John Aubrey.  On page 82 we are told that a week after his death, an apparition of Henry Jacob appeared to Dr. Jacob, and that the apparition was also seen by his cook and maid. 

Below are some cases from Volume 2 of the classic work on apparitions, “Phantasms of the Living,” which you can read here. I will use the case numbers given in the book.

Page 174, Case #310: A Reverend Fagan, his cousin Christopher, and a Major Collis all heard the name “Fagan” called from a source they could not determine. Two of them said the voice was like the voice of Captain Clayton. The next morning a telegram arrived saying that Captain Clayton had died on the same day and hour as the voice was heard. (I won't count this case as one of my 17 cases, since it is auditory only.)

Page 178, Case #312: Gorgiana Polson reported seeing a woman whom she thought was “something unnatural” and exclaimed, “Oh, Caroline.” The woman was dressed in black silk “with a muslin 'cloud' over her head and shoulders.” At the same time, a “little nursery girl” was terrified of going into a room where she saw a similar strange figure, “in black, with white all over her head and shoulders.” Gorgiana later found that Caroline had died on the same day the apparition was seen.

Page 181, Case #314: A Mrs. Coote reported that she saw her sister-in-law Mrs. W. appear at her bedside. The same Mrs. W. reportedly appeared to Mrs. Coote's aunt, appearing as a “bright light from a dark corner of the bedroom,” who was recognized as Mrs. W. by the aunt. Also, according to Mrs. Coote, “this appearance was also made to my husband's half-sister.” It was soon found that Mrs. W. had died. According to Mrs. Coote “A comparison of dates...served to show the appearance occurred ...at the time of, or shortly thereafter, the death of the deceased.”

Page 182, Case #315: A Mr. de Guerin reported that in 1854, he saw something that “appeared like a thin white fog....after a few minutes I plainly distinguished a figure which I recognized as that of my sister Fanny.” He said “the vision seemed to disappear gradually in the same manner as it came.” He later learned that “on the same day my sister died – almost suddenly.” de Guerin immediately mailed a description of what he had seen to another sister, Mrs. Elmslie, who lived far away; “but before it reached her, I had received a letter from her, giving me an almost similar description of what she had seen the same night, adding 'I am sure dear Fanny is gone.' ” She reported that the apparition disappeared.

Page 196-197, Case #317: Violet Montgomery and Sidney Montgomery reported that in 1875 they had seen a female figure that “never touched the ground at all, but floated calmly along.” Page 197 also mentions a Mr. W.S. Soutar, who claimed that he and his brother also saw a female figure that glided without any apparent movement of the feet.

Page 213, Case #330: A James Cowley said he “saw, with all the distinctness possible to visual power” an apparition of his late wife. At the same instant his two-year-old son said, “There's mother!”

Page 213, Case #331: Charles A.W. Lett said that six weeks after the death of Captain Townes, his wife and Miss Berthon reported seeing a half-apparition of Captain Townes, consisting of only his head and shoulders. According to Lett, several other people saw the apparition, identifying it as Captain Townes; and then the apparition “gradually faded away.”

Page 235, Case #345: A Mrs. Cox was told by a nephew that he had just seen his father (Mrs. Cox's brother), who was thought to be far away in Hong Kong. Mrs. Cox told the boy this was nonsense, but then saw the same apparition of her brother. She reported that the apparition called her name three times. She soon found out that her brother had died on the same day the apparition was seen.

Page 241, Case #349: In 1845 while at college Philip Weld died in a boating accident. The president of the college immediately set out to travel to the father of Philip to deliver the bad news. Arriving the next day, he was surprised to hear the father say that the previous day he and his daughter had seen Philip walking between two persons, one wearing a black robe. “Suddenly they all seemed to me to have vanished,” said the father. Later, the father saw a portrait that he identified as one of the men who he had seen with the apparition of his son. The portrait was of a saint who had died long ago.

Page 247, Case #351: In 1882 J. Bennett and her daughter saw a man whose health they were worried about: “He passed so near that we shrank aside to make way for him.” Later “we found, in fact, that he had died about a half hour before he appeared to us.”

Page 248-249, Case #352: At quarter to 7 on July 11, 1879, Samuel Falkinburg observed his son exclaim “Grandpa!” Samuel looked up toward the ceiling and “saw the face of my father as plainly as I ever saw him in my life.” Soon thereafter he found out that his father died on July 11, 1879, at quarter to 7.

Page 253, Case #354: A girl went to live far away from her beloved aunt who had raised her for most of her childhood. One day someone other than the girl said, “Oh look there! There's your aunt in bed with Caroline!” The girl was astonished to see her aunt lying on the bed. A short time later the aunt seemed to have disappeared. Later the girl found the aunt had died, and that her last words were a remark that she could die happy because she had seen the child.

Page 604, Case 651: Benjamin Coleman was surprised to see at his bedside his son, who was believed to be far away at sea.  The figure (having a sailor's dress) vanished from Benjamin's sight. He then soon heard his servant William Ball say that William also had seen the son that day in sailor's dress.  The father later found that the son "had died that very day and hour, of dysentery, on board ship." 

Page 611, Case 658: Chatting in bed, Elizabeth and Henriette both saw a strange light, which they both said was beautiful.  Elizabeth then said it was little Mary Stanger, and that she was "floating away."  It was later learned that Mary Stanger had "died at the exact time" the two girls had seen the vision. 

On pages 40-43 of the book Death and Its Mystery by the astronomer Camille Flammarion, we have an astonishing case of an apparition seen by multiple observers. Unlike the other cases I have reported, this is an example of an apparition of a living person. According to Flammarion's account, 13 girls saw an apparition of a school teacher named Emilie Sagee, right next to her physical form, so that there was "one beside the other." "They were exactly alike, and going through the same movements," according to Flammarion, who states, "All the young girls, without exception, had seen the second form, and agreed perfectly in their description of the phenomenon." Later, according to his account, 42 pupils saw an apparition of Sagee in a school at the same time Sagee was also observed picking flowers in a garden -- as if there were two copies of Emilie Sagee. According to the pupils, the apparition "gradually vanished." Flammarion reports, "The forty-two pupils described the phenomenon in the same way." Such an observation may possibly be evidence for the idea that each of us has an "astral body" different from our physical body. The apparition observed may have been a rare sighting of such a thing appearing before death.

Flammarion reports a similar case of an apparition of the living on pages 49-50 of his book. Two observers saw a Miss Jackson warming her hands before a fire. "Suddenly, before their very eyes, she disappeared," according to Flammarion. Half an hour later, Miss Jackson entered the room and warmed her hands before the fire.

I have 17 other cases of apparitions seen by multiple observers, which I will present in a future post. 

Thursday, April 11, 2019

The Only Good Thing About the “We Are in a Computer Simulation” Theory

Early in this century, Nick Bostrom advanced an argument claiming that there is a significant chance that we are merely living in a computer simulation. This idea has received a high degree of worldwide attention that makes no sense, given the extreme weakness of Bostrom's argument for such an idea.

Bostrom imagined extraterrestrial civilizations running computer programs that somehow produce experiences such as you and I are now having, calling these "ancestor simulations." I may merely quote a brief passage from Bostrom's original paper to show the sophistry of its reasoning.

"A technologically mature 'posthuman' civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one."

This cannot be careful reasoning, because Bostrom has sloppily spoken as if an interest in running “ancestor-simulations” is equivalent to actually running them. But there are 1001 reasons why extraterrestrial civilizations might not run such “ancestor-simulations,” even if they had some interest in running them. He also does nothing to justify his claim that “at least one of the following propositions is true,” and it is certainly not clear that the third proposition is even possible, let alone that it must be true if the other two propositions are false.

Bostrom also makes the big mistake of implying that if there is one alien civilization interested in creating an “ancestor simulation,” such a civilization would now be producing countless such simulations. He suggests that if there is one such civilization, the number of simulated lives would greatly outnumber the number of real lives. This is a completely unjustified insinuation. The more often some weird non-essential project has been done, the less interested people tend to be in doing it again. If an alien civilization were able to run some ancestor simulation of the type Bostrom imagines, we have every reason to suspect that it would grow bored with such a thing after some particular number of years, and lose interest in it. Given an alien civilization that at one point in its long existence had an interest in running an ancestor simulation, there is no reason to think (given a very long lifetime for that civilization) that it would now be running such simulations. And there is also no reason to believe that it would now be running very many such simulations, so many that the number of simulated lives would outnumber the number of real lives.

Moreover, if we were to be living in a simulation, there would be no reason to believe that there are any extraterrestrial planets that might have super-advanced civilizations doing computer-generated "ancestor simulations," because a consequence of such a hypothesis is that all of our astronomical data (and all of our computer progress data and all of our video game progress data) is illusory, and that the stars and planets and computers and video games we observe are just "parts of the simulation."  So the simulation argument is like some guy who climbs up a ladder and then kicks the ladder out from under his feet. If you exist in a simulated reality, then you have zero basis for believing anything about extraterrestrials or computers outside of your simulated reality. 

A general rule of all successful arguments is: the conclusion never discredits one of the premises. Below is an argument that violates that rule:

Premise 1: My husband is a good man.
Premise 2: Good men tell the truth.
Premise 3: So when my husband said, "I'm going to flatten you!" when I told him I had thrown away his big stack of porn magazines, he must have been telling the truth. 
Conclusion: Therefore, my husband literally plans to flatten me, perhaps by renting a steamroller and running over me. 

One reason this is a bad argument is that the conclusion invalidates one of the premises (if your husband is planning to murder you, then he's not a good man).  Something similar goes on in the argument that we are living in a computer simulation, which could be stated like this:

Premise 1: We have astronomical reasons for thinking maybe there are very old extraterrestrial civilizations. 
Premise 2: Such civilizations would have incredibly powerful computers.
Premise 3: Such computers would be so powerful they could simulate our lives. 
Conclusion: We're probably living in a computer simulation run by extraterrestrials.

In this case, as in the wife's argument, the conclusion discredits one of the premises. If we are living in a simulated reality, then all of our astronomical data is just "part of the simulation," and we have no reason for thinking there are old extraterrestrial civilizations. 

An argument for the simulated universe idea was advanced by Elon Musk, who stated the following:
“The strongest argument for us being in a simulation probably is the following. Forty years ago we had pong. Like, two rectangles and a dot. That was what games were.
Now, 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously, and it's getting better every year. Soon we'll have virtual reality, augmented reality.
If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, okay, let's imagine it's 10,000 years in the future, which is nothing on the evolutionary scale.
So given that we're clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we're in base reality is one in billions.”
What Musk describes is a progression of sophistication in video game technology. But we have not one bit of evidence that any computer or video game has ever itself had the slightest iota of experience, consciousness, or life-flow. We only have evidence that biological creatures such as us can have some experience, consciousness, or life-flow. So there is no basis for thinking that some super-advanced alien civilization could ever be able to produce computers or video games that by themselves were the source of experiences like the ones we have. Making such an assumption based on an extrapolation of technical progress in video games is as fallacious as arguing that one day video games will be so realistic that their characters or creatures will leap out of the video game screen and endanger your life (kind of like in the visual below). 

Musk's witless reasoning along the lines of  "we're probably in a computer simulation because video games are getting better" has recently been repeated by a computer science expert named Rizwan Virk. Regarding the possibility that we are living in a computer simulation, Virk has deluded himself into believing that "there is plenty of evidence that points in that direction"  (there is actually no such evidence). After making a completely inappropriate appeal to quantum mechanics and Schrodinger's cat, which have no relevance to this question, Virk tries to back up the simulation argument by telling us that physicist John Wheeler made the "discovery" that everything is information. That was merely a speculation of Wheeler's, one that hasn't won much acceptance; and if it is true, it wouldn't imply we are in a computer simulation. Virk also tries to back up the simulation idea by arguing that video games have got a lot better over the years. It's the same bunk argument presented by Musk. 

Arguments that we are living in a computer simulation have very little intrinsic worth. But there is one good thing about considering such arguments seriously. If we consider seriously the possibility that we are merely living in a computer simulation, our minds may be opened to an important general possibility that may very well be true: the possibility that reality is radically different from the way it is officially portrayed.

Let us consider one way a person might think about the possibility we are living in a computer simulation. He might think like this:

We have been told that our minds are being produced by our brains, but that may not be true.

We have been told that we are merely the product of blind evolution, but that may not be true.

We have been told that the matter we see around us exists independently of our minds, but that may not be true.

Maybe instead of being just the result of a long series of incredibly improbable accidents, we are here because of the intention of some purposeful intelligence.

Maybe we're just living in a computer simulation run by extraterrestrials.

The last of these ideas is not a very viable one at all, but the preceding ideas are all well worth considering, particularly in forms outside of the “computer simulation” idea. Considering such possibilities very seriously would seem to be a step in the direction of philosophical maturity. Once someone starts reasoning along the lines above, he may climb out of the thought prison that our mainstream experts have kept us chained in for so long. But having escaped such a prison, you should explore and look for something better than the cheesy "we are in an extraterrestrial computer simulation" idea. 

Sunday, April 7, 2019

Brains Are Way Too Slow to Explain Fast Recall and Thinking

Scientists have long advanced the claim that the human brain is the storage place for memories and the source of human thinking. But such claims are speech customs of scientists rather than things they have proven. There are numerous reasons for doubting such claims. One big reason is that the proteins in synapses have an average lifetime of only a few weeks, which is only a thousandth of the length of time (50 years or more) that humans can store memories. Another reason is that neurons and synapses are way too noisy to explain very accurate human memory recall, such as when a Hamlet actor flawlessly recites 1476 lines. Another general reason can be stated as follows: the human brain is too slow to account for very fast thinking and very fast memory retrieval.

Consider the question of memory retrieval. Given a prompt such as a person's name or a very short description of a person, topic or event, humans can accurately retrieve detailed information about such a topic in one or two seconds. We see this ability constantly displayed on the long-running television series Jeopardy. On that show, contestants will be given a short prompt such as “This opera by Rossini had a disastrous premiere,” and within a second after hearing that, a contestant may click a buzzer and then a second later give an answer mentioning The Barber of Seville. Similarly, you can play a game with a well-educated person that you might call “Who Was I?” You just pick random names of actual people from the arts or history, and require the person to identify each within about two seconds. Very frequently the person will succeed. We can imagine a session of such a game, occurring in only ten seconds:

John: Marconi.
Mary: Invented the radio.
John: Magellan.
Mary: First to sail around the globe.
John: Peter Falk.
Mary: A TV actor.

We can also imagine a visual version of this game, in which you identify random pictures of any of 1000 famous people. The answers would often be just as quick.

The question is: how could a brain possibly achieve retrieval and recognition so quickly? Let us suppose that the information about some person is stored in some particular group of neurons somewhere in the brain. Finding that exact tiny storage location would be like finding a needle in a haystack, or like finding just the right index card in a swimming pool full of index cards. It would also be like opening the door of some vast library with a million volumes and instantly finding the exact volume you were looking for.

There are certain design features that a system can have that will allow for very rapid retrieval of information. One of these features is an indexing system. An indexing system requires a position notation system, in which the exact position of some piece of information can be recorded. An ordinary textbook has both of these things. The position notation system is the page numbering system. The indexing system is the index at the back of the book. But the brain has neither of these features. There is nothing in the brain like a position notation system by which the exact position of some tiny group of neurons can be identified. The brain has no neuron numbers, and a brain has no coordinate system similar to street names in a city or Cartesian coordinates in a grid. Lacking any such position notation system, the brain has no indexing system (something that requires a position notation system).
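The difference an index makes can be sketched in a few lines of Python (a toy illustration with made-up records, not a model of any neural mechanism): a structure without an index must be scanned entry by entry, while an indexed structure such as a hash table jumps straight to the stored position.

```python
# Toy illustration (hypothetical data): retrieval with and without an index.
# A list must be scanned entry by entry; a dict acts like a book's index,
# mapping a key directly to the stored information.

records = [
    ("Marconi", "Invented the radio"),
    ("Magellan", "First to sail around the globe"),
    ("Peter Falk", "A TV actor"),
]

def linear_scan(name):
    """Unindexed lookup: examine every record until a match is found."""
    for key, fact in records:
        if key == name:
            return fact
    return None

# Indexed lookup: build the index once, then each query is a direct jump.
index = dict(records)

print(linear_scan("Peter Falk"))   # found only after scanning the whole list
print(index["Peter Falk"])         # found by jumping straight to the entry
```

The dict can jump straight to an entry only because the interpreter maintains a position scheme (hash buckets) behind the scenes; the point of the paragraph above is that nothing like such a scheme has been identified in the brain.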

So how is it that humans are able to recall things instantly? It seems that the brain has nothing like the speed features that would make such a thing possible. You can't get around such a difficulty by claiming that each memory is stored everywhere in the brain. There would be two versions of such an idea. The first would be that each memory is entirely stored in every little spot of the brain. That makes no more sense than the idea of a library in which each page contains the information in every page of every book. The second version of the idea would be that each memory is broken up and scattered across the brain. But such an idea actually worsens the problem of explaining memory retrieval, as it would only be harder to retrieve a memory if it is scattered all over your brain rather than in a single little spot of your brain.

We also cannot get around this navigation problem by imagining that when you are asked a question, your brain scans all of its stored information. That doesn't correspond to what happens in our minds. For example, if someone asks me, "Who was Teddy Roosevelt?" my mind goes instantly to my memories of Teddy Roosevelt, and I don't experience little flashes of knowledge about countless other people, as if my brain were scanning all of its memories.  

When we consider the issue of decoding encoded information, we have an additional strong reason for thinking that the brain is way too slow to account for instantaneous recall of learned information.  In order for knowledge to be stored in a brain, it would have to be encoded or translated into some type of neural state. Then, when the memory is recalled, this information would have to be decoded: it would have to be translated from some stored neural state into a thought held in the mind. This requirement is the most gigantic difficulty for any claim that brains store memories. Although they typically maintain that memories are encoded and decoded in the brain, no neuroscientist has ever specified a detailed theory of how such encoding and decoding could work. Besides the huge difficulty that such a system of encoding and decoding would require a kind of "miracle of design" we would never expect for a brain to ever have naturally acquired (something a million times more complicated than the genetic code), there is the difficulty that the decoding would take quite a bit of time, a length of time greater than the time it takes to recall something. 

So suppose I have some memory of who George Patton was, stored in my brain as some kind of synapse or neural states, after that information had somehow been translated into synapse or neural states using some encoding scheme.  Then when someone asks, "Who was George Patton?" I would have to not only find this stored memory in my brain (like finding a needle in a haystack), but also translate these synapse or neural states back into an idea, so I could instantly answer, "The general in charge of the Third Army in World War II."  The time required for the decoding of the stored information would be an additional reason why instantaneous recall could never be happening if you were reading information stored in your brain.  The decoding of neurally stored memories would presumably require protein synthesis, but the synthesis of proteins requires minutes of time. 
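The two-step nature of any such retrieval can be sketched with a deliberately artificial toy in Python, where base64 stands in for the unknown neural coding scheme purely for illustration (no one claims brains use anything like it):

```python
# Toy sketch (entirely hypothetical): any "stored memory" system must both
# LOCATE the stored record and DECODE it back into a usable form, and each
# step adds latency on top of raw signal travel time.

import base64

# Hypothetical encoded store: a fact translated into an arbitrary coding scheme.
store = {
    "George Patton": base64.b64encode(
        b"The general in charge of the Third Army in World War II"
    ),
}

def recall(name):
    encoded = store[name]                      # step 1: locate the stored state
    return base64.b64decode(encoded).decode()  # step 2: decode it into an idea

print(recall("George Patton"))
```

Even in this toy, answering a question requires both a lookup and a decoding pass; the argument above is that a neural version of the decoding step would add time that instantaneous recall does not allow.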

There is another reason for doubting that the brain is fast enough to account for human mental activity. The reason is that the transmission of signals in a brain is way, way too slow to account for the very rapid speed of human thought and human memory retrieval.

Information travels about in a modern computer at a speed thousands of times faster than nerve signals travel in the human brain. If you type "speed of brain signals" into the Google search engine, you will see in large letters the number 286 miles per hour, which is a speed of 128 meters per second. This is one of many examples of dubious information which sometimes pops up in a large font at the top of the Google search results. The particular number in question is an estimate made by an anonymous person who quotes no sources, and one who merely claims that brain signals "can" travel at such a speed, not that such a speed is the average speed of brain signals. There is a huge difference between the average speed at which some distance will be traveled and the maximum speed at which part of that distance can be traveled (for example, while you may briefly drive at 40 miles per hour while traveling through Los Angeles, your average speed will be much, much less because of traffic lights). 

A more common figure you will often see quoted is that nerve signals can travel in the human brain at a rate of about 100 meters per second. But that is the maximum speed at which such a nerve signal can travel, when a nerve signal is traveling across what is called a myelinated axon. Below we see a diagram of a neuron. The axons are the tube-like parts in the diagram below.


The less sophisticated diagram below makes it clear that axons make up only part of the length that brain signals must travel.


There are two types of axons: myelinated axons and non-myelinated axons (myelinated axons having a sheath-like covering shown in blue in the diagram above). According to this article, non-myelinated axons transmit nerve signals at a slower speed of only .5-2 meters per second (roughly one meter per second). Near the end of this article is a table of measured speed of nerve signals traveling across axons in different animals; and in that table we see a variety of speeds varying between .3 meters per second (only about a foot per second) and about 100 meters per second. 

But from the mere fact that nerve signals can travel across myelinated axons at a maximum speed of about 100 meters per second, we are not at all entitled to conclude that nerve signals typically travel from one region of the brain to another at 100 meters per second. For nerve signals must also travel across dendrites and synapses, which we can see in the diagrams above. It turns out that nerve signal transmission is much slower across dendrites and synapses than across axons. To give an analogy, the axons are like a road on which you can travel fast, and the dendrites and synapses are like traffic lights or stop signs that slow down your speed.

According to neuroscientist Nikolaos C Aggelopoulos, there is an estimate of 0.5 meters per second for the speed of nerve transmission across dendrites. That is a speed 200 times slower than the nerve transmission speed commonly quoted for myelinated axons. According to Bratislav D. Stefanovic, MD, the conduction speed across dendrites is between .1 and 15 meters per second. Such a speed bump seems more important when we consider a quote by UCLA neurophysicist Mayank Mehta: "Dendrites make up more than 90 percent of neural tissue."  Given such a percentage, and such a conduction speed across dendrites, it would seem that the average transmission speed of a brain must be only a small fraction of the 100 meter-per-second transmission in axons. 

Besides this “speed bump” of the slower nerve transmission speed across dendrites, there is another “speed bump”: the slower nerve transmission speed across synapses (which you can see in the top “close up” circle of the first diagram above). There are two types of synapses: chemical synapses and electrical synapses. The parts of the brain allegedly involved in thought and memory have almost entirely chemical synapses. (The sources here and here and here and here and here refer to electrical synapses as "rare."  The neurosurgeon Jeffrey Schweitzer refers here to electrical synapses as "rare."  The paper here tells us on page 401 that electrical synapses -- also called gap junctions -- have only "been described very rarely" in the neocortex of the brain. This paper says that electrical synapses are a "small minority of synapses in the brain.")

We know of a reason why transmission of a nerve signal across chemical synapses should be relatively sluggish. When a nerve signal comes to the head of a chemical synapse, it can no longer travel across the synapse electrically. It must travel by neurotransmitter molecules diffusing across the gap of the synapse. This is much, much slower than what goes on in an axon.

Diagram of a synapse

There is a scientific term used for the delay caused when a nerve signal travels across a synapse. The delay is called the synaptic delay. According to this 1965 scientific paper, most synaptic delays are about .5 milliseconds, but there are also quite a few as long as 2 to 4 milliseconds. A more recent (and probably more reliable) estimate was made in a 2000 paper studying the prefrontal monkey cortex. That paper says, "the synaptic delay, estimated from the y-axis intercepts of the linear regressions, was 2.29" milliseconds. It is very important to realize that this synaptic delay is not the total delay caused by a nerve signal as it passes across different synapses. The synaptic delay is the delay caused each and every time that the nerve signal passes across a synapse. 

Such a delay may not seem like too much of a speed bump. But consider just how many such "synaptic delays" would have to occur for, say, a brain signal to travel from one region of the brain to another. It has been estimated that the brain contains 100 trillion synapses (a neuron may have thousands of them).  So it would seem that for a neural signal to travel from one part of the brain to another part only 5% or 10% of the brain's length away, the signal would have to endure many thousands of such "synaptic delays," requiring a total of quite a few seconds of time. 

An average male human brain has a volume of about 1300 cubic centimeters. Let's try to calculate the minimum number of synapses that would have to be sequentially traversed in order for a neural signal to travel through a volume of only 1 cubic centimeter (a distance of about .39 of an inch). 

If there are 100 trillion synapses in a brain of 1300 cubic centimeters,  then the number of synapses in this volume of 1 cubic centimeter would be roughly 100 trillion divided by 1300, which gives 77 billion. (This page gives an estimate of 418 billion synapses per cubic centimeter, but notes that estimates of synapse density vary; so let's just stick with the smaller number.)

It would be a big mistake to assume that a neural signal would have to sequentially traverse all those 77 billion synapses. To traverse the shortest path across this area, the signal would have to merely pass through a number of synapses that is roughly the cube root of the total number of synapses in this volume (the number that you would have to multiply by itself three times to get the total number of synapses in this volume).  Similarly, if we imagine a ball with 64 equally spaced connected nodes, including nodes in the center, something rather like the ball shown below,  then it is clear that the shortest path between any one node at the outer edge of the ball to another node on the opposite end of the ball would require that you traverse a number of nodes that is at least the cube root of 64, which is 4. 

So to roughly compute the shortest series of synapses that would have to be traversed for a brain signal to travel through this 1 cubic centimeter volume, we can take the cube root of 77 billion (the number that multiplied by itself three times equals 77 billion).  The cube root of 77 billion is 4254.  So it seems that to traverse the shortest path through a volume of 1 cubic centimeter containing 77 billion synapses, traveling a distance of about 1 centimeter, a neural signal would have to pass sequentially through a path containing at least 4000 different synapses (along with other neural elements such as dendrites).  

To calculate how long this traversal would take across a 1 cubic centimeter region of the brain, considering only the dominant delay factor of synaptic delays, we can simply multiply this number of 4000 by the synaptic delay (the time needed for the signal to cross a single synaptic gap). Using the smallest estimate of the synaptic delay (an estimate from 1965 of about .5 millisecond), and ignoring the more recent year 2000 estimate of 2.29 milliseconds, this gives us a total time of 4000 multiplied by .5 millisecond. This gives us a total time of two seconds (2000 milliseconds) for how long it would take a nerve signal to travel across one cubic centimeter of brain tissue. The velocity we get from this calculation is a speed of less than 1 centimeter per second (it's actually a speed of half a centimeter per second).  
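The arithmetic above can be checked with a few lines of Python (the input figures are the estimates quoted in this post, not measured values):

```python
# Back-of-envelope calculation from the text: how long would a nerve signal
# take to cross 1 cubic centimeter of brain tissue, counting only synaptic delays?
# All input figures are the estimates quoted above, not measured values.

synapses_per_cc = 100e12 / 1300     # 100 trillion synapses / 1300 cc: about 77 billion per cc
synaptic_delay_s = 0.5e-3           # smallest (1965) estimate of the synaptic delay

shortest_path = round(synapses_per_cc ** (1 / 3))   # cube root: about 4,250 synapses
crossing_time_s = shortest_path * synaptic_delay_s  # about 2 seconds
speed_cm_per_s = 1.0 / crossing_time_s              # about half a centimeter per second

print(f"{shortest_path} synapses on the shortest path")
print(f"{crossing_time_s:.1f} s to cross 1 cm  ->  {speed_cm_per_s:.2f} cm/s")
```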

Take careful note that this speed is more than 10,000 times slower than the "100 meters per second" figure that is given by some experts when they are asked about how fast a brain signal travels. Such an expert answer is very misleading, because it only cites the fastest speed that a nerve signal can travel inside the brain, while it is traveling through the fastest tiny parts of the brain (myelinated axons), not the average speed of such a brain signal as it passes through different types of brain tissue and many different synapses. It turns out that because of the "speed bump" of synaptic delays, the average speed of a nerve signal traveling through the brain should be about 20,000 times slower than "100 meters per second" -- a slowpoke speed of about half a centimeter per second. That's half the maximum speed at which a snail can move. If I had used the year 2000 estimate of the synaptic delay (2.29 milliseconds), I would have got a speed estimate for brain signals that is only about .125 centimeters per second, which is one eighth the speed of a moving snail. 

slow brain

This calculation is of the utmost relevance to the question of whether the brain is fast enough to account for extremely rapid human thinking and instantaneous memory retrieval.  Based on what I have discussed, it seems that signal transmission across regions of the brain should be very slow -- way too slow to account for very fast thinking and instantaneous recall and recognition.  

Many a human can calculate as fast as he or she can recall. For example, the Guinness world record web site tells us, "Scott Flansburg of Phoenix, Arizona, USA, correctly added a randomly selected two-digit number (38) to itself 36 times in 15 seconds without the use of a calculator on 27 April 2000 on the set of Guinness World Records in Wembley, UK."  Such speed cannot be explained as the activity of a brain in which signals literally move at a less than a snail's pace. 

To give another example, in 2004 Alexis Lemaire was able to calculate in his head the 13th root of this number:

85,877,066,894,718,045,602,549,144,850,158,599,202,771,247,748,960,878,023,151,390,314,284,284,465,842,798,373,290,242,826,571,823,153,045,030,300,932,591,615,405,929,429,773,640,895,967,991,430,381,763,526,613,357,308,674,592,650,724,521,841,103,664,923,661,204,223

In only 77 seconds, according to the BBC, Lemaire was able to state that the answer is 2396232838850303 -- the number which, when raised to the 13th power, equals the number above.  Here we have calculation speed far beyond anything that could be possible if calculation is done by a brain in which signals travel at less than a snail's pace.  
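Python's arbitrary-precision integers make it easy to sanity-check the claimed answer against the number quoted above:

```python
# Check the claimed answer with Python's arbitrary-precision integers:
# raising 2,396,232,838,850,303 to the 13th power should reproduce a
# 200-digit number beginning 85,877,... and ending ...223, as quoted above.
n = 2396232838850303 ** 13
digits = str(n)
print(len(digits))     # 200
print(digits[:5])      # 85877
print(digits[-3:])     # 223
```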

In this matter it seems our neuroscientists have acted as if they were afraid to put two and two together. They have measured the speed of brain signal transmission in axons, dendrites and synapses. But I find a curious avoidance in the neuroscience literature of the basic topic of the average time it should take a signal to travel from one region of the brain to another. It's like our neuroscientists are afraid to do the math which might lead them to the conclusion that signals cannot travel from one random brain region to another nearby region at a rate of more than an inch a second. For if they were to do such math, their claim that brains are the source of our thinking and recall would be debunked.  

Echoing part of what I have said here, a textbook says "the cumulative synaptic delay may exceed the propagation time along the axons." But why aren't scientists more explicit, by telling us that this cumulative synaptic delay will actually exceed the propagation time along the axons by a factor of more than 1000, leading to "snail's pace" brain signals? Another source vaguely tells us that "cumulative synaptic delay would affect the speed of information processing at every level of cognitive complexity" without mentioning what a crippling effect this would be if our brains were doing thinking and recall. 

I may note that whenever a neuroscientist answers a question such as "how fast do brain signals travel?" by mentioning only the fastest rate at which a brain signal can travel through the fastest little parts of the brain (through a myelinated axon), as neuroscientists typically do, the answer is either deceptive or very clumsy. It's like answering the question "how fast can you travel across Manhattan?" by citing the maximum speed limit on any Manhattan cross-street such as 42nd Street, without considering all the delays caused by traffic lights.  Synaptic delays are comparable to traffic light delays, and they are a factor that must be included when realistically considering how fast a brain signal typically travels inside the brain.  

It is interesting that both this 1979 scientific paper and this 2008 scientific paper estimate the number of synapses in the human cortex as being about a billion per cubic millimeter, which equals a trillion per cubic centimeter.  This is 10+ times greater than the 77 billion per cubic centimeter figure I was using above. The more synapses, the more speed bumps, and the slower the brain signal. If I had done the brain speed calculation specifically for cortex tissue (the supposed center of higher thought), the calculation would have come up with a brain signal speed very much slower than the  half a centimeter per second result that was reached. 
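Rerunning the same back-of-envelope method with the cortex figure of a trillion synapses per cubic centimeter (again using the smaller .5 millisecond delay estimate) makes the point concrete:

```python
# Same method as before, with the cortex estimate of a trillion synapses
# per cubic centimeter (an estimate quoted in the text, not a measured value).
synapses_per_cc = 1e12
synaptic_delay_s = 0.5e-3

shortest_path = round(synapses_per_cc ** (1 / 3))   # cube root: 10,000 synapses
crossing_time_s = shortest_path * synaptic_delay_s  # 5 seconds per centimeter
speed_cm_per_s = 1.0 / crossing_time_s
print(speed_cm_per_s)   # 0.2 cm/s -- well below the earlier half-centimeter figure
```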

To sum up,  we have several gigantic reasons for thinking that brains must be too slow to account for instantaneous recall:

(1) Finding the exact little spot where a memory was stored would be like finding a needle in a haystack, given the lack of any indexing system or position coordinate system in the brain.
(2) Decoding stored memories from encoded neural states would take additional time that would make neural memory recall much less than instantaneous.
(3) The "snail's pace" speed of brain signals (greatly slowed by synaptic delays) would prevent an instantaneous recall of memories and stored information such as humans often have. 

The slowness of the brain is one of many neuroscience reasons for believing that the brain cannot be the storage place of our memories, and cannot be the source of our thinking and consciousness.  Human mentality must be primarily a psychic or spiritual or non-biological reality rather than a neural reality. 

I can imagine various ways in which a person could try to rebut some of the argumentation in this post. Someone could simply say that we know that signals must travel very fast in a brain, because humans are able to recall things instantly or recognize things instantly. But we do not at all know that recognition or recall are actually effects produced by the brain, and we have good reasons for doubting that they are (such as the short lifetimes of synapse proteins, and the fact that the high noise levels in brains and synapses are incompatible with the fact that humans such as Hamlet actors can flawlessly recall very large bodies of memorized information). So we cannot use the speed of recognition or recall to deduce the speed of brain signals. 

Another way you could try to rebut this post would be to cite some expert who estimated how fast signals move about in a brain.  But further analysis would generally show that such an estimate was not derived from a calculation of all the low-level factors (such as synaptic delay) affecting the speed of brain signals, but was simply a calculation based on the assumption that brains must pass about signals at the speed at which humans recognize or recall things or respond to things.  We cannot use such circular reasoning or "begging the question" when considering this matter. The only intelligent way to calculate the speed of a brain signal is to do a calculation based on low-level things (such as synaptic delays) that we definitely know, rather than starting out making grand assumptions about the mind and brain that are unproven and actually discredited by the very low-level facts (such as the length of synaptic delays)  that should be examined. 

Although neuroscientists typically claim that synapses are where memories are stored in the brain, there are four ways in which the characteristics of synapses are telling us that thinking and memory is not brain-caused:

(1) Synapses show no signs of having stored information, and their main structural feature (the disorganized little blob or bag that is the synaptic knob or head) seems like pretty much the last type of structure we'd expect to see in something storing information for decades. 
(2) Synapses are unstable units undergoing spontaneous remodeling, and synapses consist of proteins with average lifetimes of only a few weeks, only a thousandth of the maximum length of time that humans store memories.
(3) Synapses are very noisy, so noisy that one expert tells us that a signal passing through a synapse "makes it across the synapse with a probability like one half, or even less," making synapses unsuitable as reliable transmitters of memory information that humans such as Wagnerian tenors can recall abundantly with 100% accuracy. Given such noise levels, which would seem to have the effect of rapidly extinguishing brain signals,  there would seem to be good reason for suspecting that it is effectively impossible for brain signals to travel more than a centimeter or an inch without vanishing or becoming mere tiny traces of their original strength. 
(4) The most common type of synapse is slow,  and although the synaptic delay in a single synapse is only about a millisecond,  when we calculate the cumulative synaptic delay we find that brain signals must be slower than a snail's pace, way too slow to explain instantaneous recall and fast thinking. 

In fact, if some designer of the human body had specifically designed something to tell us (by its characteristics) that our brains cannot be the source of our fast thinking and instantaneous memory, it is rather hard to imagine anything that would do a better job than our signal-slowing, very noisy, unstable synapses. Our synapses are telling us, by their characteristics, that thinking and memory are not brain-caused; but our neuroscientists (trapped in ideological enclaves of dogma and reigning speech customs) are not listening to what our synapses are telling us.

Postscript: I may note that you do not get a much faster estimate for the speed of brain signals if you calculate the speed from one neuron to the nearest neuron, rather than the speed through a cubic centimeter. The speed is the same snail's pace I have calculated, because the signal will always have to pass through synapses that are the dominant slowing factor. 

There is an entirely different method you could use to calculate the speed of signals inside the brain, using not estimates of the number of synapses per cubic centimeter, but instead the average distance between neurons. This paper mentions an average distance of about 26 micrometers between neurons in a rat cortex, and it says, "we believe that the parameter of 26 µm [micrometers] average distance between neurons is also a valid assumption in the human brain." I assume that by "average distance between neurons" this source means the average distance between two adjacent neurons. Below are some calculation figures that we get if we use this average distance figure, and we use a synaptic delay estimate that is about the average of the .5 millisecond and 2.29 millisecond estimates quoted above.

Average distance between neurons: 26 micrometers
This distance in centimeters: 0.0026
Synaptic delay: 1 millisecond
Time needed to cross the distance above (in seconds), considering only the synaptic delay: .001
Total distance that could be traversed by a brain signal in one second: 1000 × 0.0026 centimeters = 2.6 centimeters
Signal speed between adjacent neurons: 2.6 centimeters per second
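The arithmetic above can be checked with a short Python sketch. The 26-micrometer spacing and the 1-millisecond synaptic delay are the figures discussed above; everything else follows from unit conversion.

```python
# Estimate brain signal speed from inter-neuron spacing and synaptic delay.
# Assumptions (from the discussion above): adjacent neurons are about
# 26 micrometers apart, and each synaptic crossing costs about 1 millisecond
# (roughly the average of the .5 ms and 2.29 ms estimates).

NEURON_SPACING_UM = 26       # average distance between adjacent neurons
SYNAPTIC_DELAY_MS = 1.0      # delay per synaptic crossing

spacing_cm = NEURON_SPACING_UM / 10_000   # 26 micrometers = 0.0026 cm
delay_s = SYNAPTIC_DELAY_MS / 1_000       # 1 millisecond = 0.001 s

# Speed considering only the synaptic delay: one inter-neuron hop per delay.
speed_cm_per_s = spacing_cm / delay_s

print(f"Estimated signal speed: {speed_cm_per_s:.1f} cm/s")
```

Running this prints an estimate of about 2.6 centimeters per second, matching the figures in the table above.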

Using this method, we get a result in the same ballpark as the result calculated by my first method.  The first method found that brain signals travel at a rate of about .5 centimeters per second, and this method finds that brain signals travel at about 2.6 centimeters per second, which is about an inch per second.  Either way, this speed is way too slow to account for instantaneous recall and very rapid thinking. 

A most-realistic estimate of brain signal speed would also take into account two other factors ignored in the calculations above (and also ignored by neuroscientists when discussing the speed of brain signals):
(1) The noise in synapses, and the fact that in the cortex, signal transmission across synapses is highly unreliable. A scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower."  Considered over a large section of brain tissue, this unreliability would be equivalent to a big additional slowing factor, and might well lead to speed estimates much lower than I have made here. 
(2) Synaptic fatigue, a temporary inability of a synapse's head or vesicles to send a signal because of a depletion of neurotransmitters. It is hard to find any specific number regarding synaptic fatigue, but it could be a very large additional slowing factor.
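The first of these factors can be roughly illustrated in Python. This is only a simplifying sketch, not a claim about any particular neuroscience model: if each synaptic crossing succeeds with probability p, then on average a signal needs 1/p attempts per crossing (a geometric distribution), so the effective per-synapse delay grows by that factor. The 0.1 transmission probability is the figure quoted above; the 1-millisecond base delay is the estimate used in the earlier calculation.

```python
# Illustrative sketch of how per-synapse unreliability could act as an
# additional slowing factor. If each crossing succeeds with probability
# p_success, the expected number of attempts per crossing is 1/p_success,
# so the effective delay per synapse scales by that factor.

def effective_delay_ms(base_delay_ms: float, p_success: float) -> float:
    """Expected delay per synaptic crossing, given a success probability."""
    return base_delay_ms / p_success

base_delay = 1.0   # ~1 ms synaptic delay, as in the calculation above
p = 0.1            # transmitter-release probability quoted for the cortex

print(effective_delay_ms(base_delay, p))  # prints 10.0
```

On these assumptions, the effective delay per synapse rises tenfold, which would cut the roughly 2.6 cm/s estimate above to roughly 0.26 cm/s.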