Our future, our universe, and other weighty topics

Saturday, April 27, 2019

The Untrue "All Scientific Theories Are Falsifiable" Claim Is Merely a Rhetorical Device

On Thursday physicist Sabine Hossenfelder published a blog post pushing the dogma that all scientific theories have to be falsifiable. The post had this dogmatic title: “Yes, scientific theories have to be falsifiable. Why do we even have to talk about this?” Since Hossenfelder gives not a single reason for believing that all scientific theories must be falsifiable, we may wonder why she is confident about this claim, so confident that she asks, “Why do we even have to talk about this?” She claims that all hypotheses that are not “falsifiable through observation” are hypotheses that “belong into the realm of religion.” Rather than trying to present any reasons for believing this strange claim, she states, “That much is clear, and I doubt any scientist would disagree with that.” Such a claim is neither clear nor logical, and there are many scientists and philosophers who would disagree with it.

To disprove the idea that scientific theories have to be falsifiable, you need merely provide some examples of important scientific theories that could never be falsified. It is very easy to do this. Let's start with one of the most widely discussed and best-loved theories of modern times, the theory that life exists on some other planet. There is no way to falsify this theory, because the universe is too big.

The universe consists of billions of galaxies, and in each of these galaxies there are millions or billions of stars. Astronomers believe that planets are extremely common, and that a large fraction of all stars have planets. What would you need to do to falsify the belief that extraterrestrial life exists? You might think that you could do this in theory by launching some grand fleet of spaceships to search all planets. But such a task would be impossible. There would be too many planets to search, and it would take too long. Since nothing can travel faster than light, such a grand fleet of spaceships would require billions of years to search all of the other planets in the universe.
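The arithmetic behind this can be sketched in a few lines of Python. All the figures below are rough, illustrative assumptions (my own round numbers, not from any survey), but the conclusion is insensitive to them:

```python
# Back-of-envelope: why an exhaustive search of all planets is impossible.
# All figures are rough, illustrative assumptions.

GALAXIES = 2e12           # assumed count of galaxies in the observable universe
STARS_PER_GALAXY = 1e11   # assumed rough average of stars per galaxy
PLANETS_PER_STAR = 1      # conservative: at least one planet per star

planets = GALAXIES * STARS_PER_GALAXY * PLANETS_PER_STAR
print(f"Planets to search: ~{planets:.0e}")

# Even ignoring the number of planets, distance alone is fatal: light
# (the fastest anything can travel) needs billions of years to cross
# intergalactic distances.
radius_ly = 46e9          # rough radius of the observable universe, light-years
print(f"One-way trip to the edge, even at light speed: ~{radius_ly:.1e} years")
```

Under these assumptions there are on the order of 10^23 planets, and even a one-way trip across the observable universe at the physical speed limit takes tens of billions of years.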

Suppose that after, say, five billion years of exploration such a massive fleet of spaceships still reported no signs of extraterrestrial life. Would that falsify the theory that extraterrestrial life exists? It certainly would not. For in the billions of years that such an expedition had been operating, it would always be possible that extraterrestrial life had appeared in one of the places that had already been searched. So searching every single extraterrestrial planet in the universe for life (without finding any) would absolutely not falsify the claim that extraterrestrial life exists.

Galaxies (Credit: NASA)

Could we imagine, perhaps, that such an expedition might place cameras on every planet that it searched, to try to send back to Earth a signal allowing us to say that at this current moment none of them have life? That wouldn't work, because there is no signal that can travel faster than the speed of light. So, for example, if we got a signal from some camera that had been left on some planet 130,000,000 light-years away, and the camera showed no life, that would only prove that part of the planet did not have life 130,000,000 years ago. It would not prove that the planet does not now have life. Nor would it even prove that 130,000,000 years ago the planet had no life, for the planet might have life in some place that the camera was not observing. Besides the fact that you can't get live “showing it as it is right now” signals from planets many light-years away, there is the difficulty that you can't fill a planet with cameras, and there's all kinds of life that cannot be detected with cameras.
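The delay described above follows directly from the definition of a light-year, as this small sketch shows:

```python
# Light-travel delay: a signal from a planet D light-years away shows
# the planet as it was D years ago, never as it is "right now".

def lookback_years(distance_ly: float) -> float:
    """Years elapsed between emission and reception of a light-speed signal."""
    # One light-year of distance equals one year of delay, by definition.
    return distance_ly

# The example from the text: a camera 130,000,000 light-years away.
delay = lookback_years(130_000_000)
print(f"The image is {delay:,.0f} years out of date")
```

No camera network could ever defeat this: the farther the planet, the staler the news.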

Very clearly, the important scientific theory that life exists on other planets is a theory that is not falsifiable. This simple fact destroys the credibility of the claim that a scientific theory has to be falsifiable. There is no logical need to write anything further on the topic. But just for the sake of overkill, I will give some further examples of scientific theories that are not falsifiable.

Another example of a very popular and widely discussed theory is the theory of natural abiogenesis, which is the theory that life naturally arose from non-life. It is theoretically quite possible to get a result supporting this theory. If a lab that was simulating early Earth conditions reported that life had spontaneously arisen from chemicals, that would be evidence supporting the theory of natural abiogenesis. But it is quite impossible to falsify the theory of natural abiogenesis. Even if you did a billion years of experiments trying to produce life from lab chemicals, and none of them were successful, that still would not prove that life could not have arisen from chemicals through some incredibly unlikely, once-in-a-galaxy event.

A large class of scientific theories that cannot be falsified are those which describe realities on which our existence depends. Human biological existence has very many physical dependencies, so there ends up being quite a few theories describing realities on which our existence depends. To give one example, our existence depends on a strong nuclear force that binds together protons and neutrons in the nucleus of an atom. There is a theory corresponding to this reality, the theory that there exists a force that causes protons to be bound together in the nucleus, despite the very strong electromagnetic repulsion of their positive charges. Could we falsify this theory? No, we could not. We could only falsify it by observing that something on which our existence depends does not exist. But that could never happen.  Similarly, the theory of electromagnetism is a theory describing the basic electrical repulsion and attraction on which biological chemistry depends. We cannot falsify such a theory, since that would involve observing the nonexistence of something that is a prerequisite for our existence.

Some other fundamental theories are the kinetic theory of matter (that gas consists of small particles in motion), the cellular theory of life (that cells are the fundamental building blocks of life), and the atomic theory of matter (that atoms are the basic building blocks of matter). None of these theories can be falsified. Scientists have made countless observations confirming such realities, and there is no way that future observations will refute them. We cannot imagine any observations that would cause us to believe that gases do not consist of moving tiny particles, or any observations that would cause us to believe that living things are not made up of cells, or any observations that would cause us to believe that rocks are not made of atoms.

The common expression “you can't prove a negative” is largely correct in suggesting that very many theories cannot be disproved or falsified. For example, you can prove that someone masturbates, but cannot prove that he does not masturbate.

Instead of the principle “a scientific theory must be falsifiable,” a much better principle is “a scientific theory should be either verifiable or falsifiable.” In general, any theory that is either verifiable or falsifiable can be considered a scientific theory. The theory of extraterrestrial life is a scientific theory, because we can easily imagine some simple observational events that would verify it. The theory of natural abiogenesis is a scientific theory for the same reason: a single successful experiment could verify it.

The silly idea that a scientific theory must be falsifiable is one that was advanced by philosopher of science Karl Popper, because of ideological considerations. People such as Popper wanted to stigmatize theories that they disliked. So they advanced the strange, illogical idea that a scientific theory must be falsifiable, hoping that it would help brand certain types of theories and experimental inquiry as being unscientific.

And so, while scientists such as Joseph Rhine were piling up strong experimental evidence for the existence of ESP,  as discussed here, thinkers such as Karl Popper were trying to sell the idea that a scientific theory must be falsifiable. This was ideologically convenient. It could now be claimed that the theory that ESP exists is not scientific, since there is no way that it could be falsified (even a million negative ESP tests would not prove that ESP does not sometimes occur). This kind of effort makes no sense, for the same sword invented to kill ESP kills just as effectively the theory of extraterrestrial life, the theory of natural abiogenesis, the kinetic theory of matter, and quite a few other things that ESP-loathing scientists may prefer to believe in.

I may note that a principle that Scientific American attributes to philosopher of science Karl Popper, the claim that “pseudo-science seeks confirmations and science seeks falsifications,” is bunk and fantasy. Mainstream scientists typically spend most of their time searching for confirmations, almost no time attempting to falsify the theories they most love, and they are also extremely reluctant to examine evidence that seems to contradict such theories. So, for example, neuroscientists spend endless hours trying to confirm their dogmas about brains storing memories and brains producing thoughts, but they pay almost no attention to the many facts that argue against such claims. Modern science academia is a conformity culture with cherished dogmas, and it is a culture that punishes and stigmatizes heretics and contrarian thinkers who attempt to discredit those dogmas. This is the exact opposite of an approach centered around falsification. The fact that scientists aren't very interested in falsifying things is shown by the fact that scientists find it hard even to find a journal that will publish their negative experimental results (as discussed here), and also by the fact that when someone writes a scientific paper presenting evidence against a dogma cherished by scientists, he may find his paper retracted by a journal purely because the paper is daring to question a sacred cow (as happened in this case).

Karl Popper's essay “Science as Falsification” is a very revealing one, because it makes quite clear that Popper's claim about falsification was not at all something that came from a study of the way scientists actually behave, but was instead a claim that was contrived for the sake of creating a weapon against a few theories Popper didn't like. Popper makes clear in the paper that he was bothered by the popularity of three theories: Freud's theory of psychoanalysis, Marx's theory of economics and history, and Adler's theory of psychology. Popper reveals in his essay that he invented his theory of falsification to combat such theories. It was actually a poor weapon against Marx's theory and Freud's theory, both of which are falsifiable. Freud's theory could be falsified if people became much crazier by going to Freudian psychotherapists, and Marx's theory could be falsified if capitalist countries all became blissfully happy and Marxist countries all became miserable failures.

The whole idea of creating a theory of how science works (“science as falsification”) based on a desire to create a weapon against theories you don't like makes no sense at all. Just as Marx and Freud were dogmatic thinkers who got far more influence than they deserved, Popper was a thinker who got far more attention than he deserved. Popper was bothered that the Marxists and Freudians claimed to see confirmations that weren't really confirmations. If people are claiming confirming evidence that isn't really there, a good way to handle that is to advocate tighter standards for confirming evidence, and to show how particular claims of confirming evidence are unfounded – not to make untrue claims suggesting science is all about falsification rather than confirmation. Were we to follow Popper's bizarre claims such as “confirming evidence should not count except when...it can be presented as a serious but unsuccessful attempt to falsify the theory,” we would have to throw out a large fraction of the scientific results in textbooks.

Rather obviously untrue, the claim that scientific theories are all falsifiable is simply a rhetorical device, one that is occasionally trotted out by people as an argumentative weapon to use against theories they don't want to believe in (sometimes theories for which there is a great deal of evidence). It's a very ineffective weapon, because it's so easy to find examples of respected scientific theories that are not falsifiable. 

Tuesday, April 23, 2019

He Sketches No Path From Cells to Cognition

Judging from the title of the recent book Understanding the Brain: From Cells to Behavior to Cognition by Harvard neuroscientist John E. Dowling, you might think the author does something to explain how cognition such as thinking and memory recall can be produced by cells.  But the author doesn't do anything to explain how such a thing could occur. 

Judging from its index, Dowling's 295-page book makes no mention of the topics of thinking, abstract thinking, ideas, concepts, recall, recognition or reasoning (none of which have an index entry).  The topic of thinking and consciousness doesn't really appear until the last chapter in the book, which starts out on page 253 with this very silly statement: "Human consciousness is just about the last surviving mystery."  The worlds of cosmology, physics, psychology, history and biology are actually filled with 1001 great mysteries that humans don't understand. 

Our biologists do not even know how the simplest prokaryotic cell could have originated, and they do not have a credible account of how eukaryotic cells originated (only a very unbelievable tall tale). Our biologists do not even know what causes a cell to split into two copies of itself; they do not know what causes polypeptide chains to fold into the three-dimensional shapes of proteins; and they do not know what causes a fertilized ovum to progress to become a human baby. Our biologists do not know how humans are able to remember things for 50 years (despite rapid protein turnover in synapses), and how humans can instantly remember things they learned or experienced decades ago.  Our biologists do not have credible explanations of the origin of humans, the origin of language,  or the origin of complex biological innovations (they merely have achievement legends about some of these things). So it is most ridiculous when biologists such as Dowling say things such as "Human consciousness is just about the last surviving mystery," thereby making the modern scientist sound as if he knows a thousand times more than he actually does.  

Equally ridiculous is Dowling's assertion on page 253 that the following mysteries "have been tamed": "the mystery of the origin of the universe, the mystery of life and reproduction, the mystery of the design to be found in nature, the mysteries of time, space and gravity."  No, these mysteries have not at all been "tamed." I will have a future post on why biologists do not actually understand 90% of sex and reproduction; and I may note that gravity is so little understood that it does not even have a place in the standard model of physics.  

Dowling asserts on page 260, "Clearly our rich mental life depends on higher cortical function," but he presents no good evidence to back up this claim. We actually know that crows are quite smart, despite having very small brains that lack a neocortex. We know also that damage to the prefrontal cortex has little effect on mental abilities,  as I documented quite thoroughly in this post, which includes references to many neuroscience papers. 

The assertion quoted above is followed by an appeal to experiments of the scientist Wilder Penfield in which people recalled things after having parts of their brains electrically stimulated. Dowling states on page 262:

Stimulating particular parts of the cortex evoked visual and auditory sensations along with emotions and feelings. Clearly these experiences were evoked from within. 

But it's actually not known whether visual images arising from such stimulation are memories,  hallucinations, or simply vivid imagination. A review of 80 years of experiments on electrical stimulation of the brain uses the word “reminiscences” for accounts that may or may not be memory retrievals. The review tells us, “This remains a rare phenomenon with from 0.3% to 0.59% EBS [electrical brain stimulation] inducing reminiscences.” The review states the following:

We observed a surprisingly large variety of reminiscences covering all aspects of declarative memory. However, most were poorly detailed and only a few were episodic. This result does not support theories of a highly stable and detailed memory, as initially postulated, and still widely believed as true by the general public....Overall, only one patient reported what appeared to be a clearly detailed episodic memory for which he spontaneously specified that he had never thought about it....Overall, these results do not support Penfield's idea of a highly stable memory that can be replayed randomly by EBS. Hence, results of EBS should not, at this stage, be taken as evidence for long-term episodic memories that can sometimes be retrieved.

So the actual experimental results don't support what Dowling has insinuated, and leave us with very great doubt as to whether such reports are of something retrieved "from within" a brain.  You may realize the fallacy of thinking that recalling something during brain stimulation proves something about memory location if you consider the following: when you go to a masseuse, and have your back massaged, you may recall various memories while lying on your stomach, but that doesn't show that your memories are stored in the muscles of your back that are being massaged. 

What is quite possible is that the brain is like some reducing valve or faucet, and that the brain reduces your memory and imagination,  just as a faucet can limit a flow of water to a mere trickle.  Such a reduction may make you more likely to focus on the crummy little details of daily living.  In such a case, electrically stimulating some part of the brain might limit that reduction effect or suppression effect, increasing recall and imagination that are not at all caused by the brain. Similarly, hitting a faucet with a hammer may increase its flow of water, but the water isn't coming from inside the little faucet. 

Dowling then attempts to insinuate on page 262 that brain studies tell us that "certain neurons in the prefrontal cortex become active during the time when the monkey is remembering" where a target is. But, to the contrary, this post (which includes links to many scientific studies) summarizes the evidence that brains show no real signs of looking different or working harder when humans are thinking or recalling things.  Since almost all neurons in the brain are continually active, we should never draw a conclusion based on the mere activity of some neurons while a brain was remembering. 

Dowling then confesses on page 263 that "we are just at the beginning of understanding how neural activity...might relate to consciousness." On the next page he states, "The neural basis of human consciousness seems beyond our experimental reach for the time being."  A few pages later the book ends. It has not provided anything remotely like an explanation for how cells could yield cognition. Dowling hasn't even taken a stab at explaining such a thing.  He has a chapter entitled "From Brain to Mind," but it deals with visual perception.  Of course, there's so much more to "the mind" than just visual perception: imagination, intelligence, abstract thinking, recall, self-hood, and so forth. 

On the topic of memory, Dowling tries to drop little bits of information here and there supporting the idea that the brain is a storage place for our memories. Like almost all neuroscientists writing books on the brain, he mentions the case of patient H.M., who had trouble forming memories after suffering damage to part of his brain called the hippocampus (although he could recall memories learned before that damage). The reliance of neuroscientists on this one case is not at all scientific.  You may establish a cause and effect hypothesis if you correlate many examples of a cause and effect (such as correlating many examples of people smoking with many examples of people having lung cancer).  But it is rather ridiculous to speak (as neuroscientists often do) as if we know some memory problem in someone was caused by some problem in part of his brain. You would need many examples of such a thing before having confidence that the two were causally related.  Similarly, it would be absurd to suggest that our memories are stored in the root canals of our teeth because of the historical case of one patient who had a serious memory problem after having a root canal procedure. 

As usual, the story of HM is wrongly told. Dowling tells us that patient HM "no longer could remember events or facts for more than a few minutes." But a 14-year follow-up study of patient HM (whose memory problems started in 1953) actually tells us that HM was able to form some new memories. The study says this on page 217:

In February 1968, when shown the head on a Kennedy half-dollar, he said, correctly, that the person portrayed on the coin was President Kennedy. When we asked him whether President Kennedy was dead or alive, he answered, without hesitation, that Kennedy had been assassinated...In a similar way, he recalled various other public events, such as the death of Pope John (soon after the event), and recognized the name of one of the astronauts, but his performance in these respects was quite variable. 

Our neuroscientists keep misinforming us about patient HM because it serves their dogmatic purposes to do so.  Even if you were to prove that destruction of the hippocampus prevents the formation of new memories, that would not at all prove that memories are stored in the brain. The hippocampus could simply be kind of a springboard that helps propel sensory experience into some unknown memory storage reality outside of the brain. 

On page 222, Dowling notes, "A well-established but curious observation is that older long-term memories are more persistent in many forms of brain disease than are more recent memories." This fact is actually inconsistent with claims that memories are stored in brains.  We know that the proteins that make up synapses have average lifetimes of only a few weeks, and that the synapses themselves are subject to spontaneous remodeling that makes them unstable. Given such realities, if memories were stored in brains, the memories you would lose first would be the oldest ones, since there would be so much more time for such memories to physically deteriorate. Similarly, if words were written on leaves, the older the writing was, the smaller the chance that it would survive. 
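The protein-turnover point can be made quantitative with a simple decay model. This is only a sketch: I am assuming a mean protein lifetime of two weeks (the text says "a few weeks") and modeling survival as exponential decay:

```python
import math

# Model: if synaptic proteins have a mean lifetime of ~2 weeks (an assumed,
# illustrative figure), what fraction of the original molecules survive?

MEAN_LIFETIME_DAYS = 14.0

def surviving_fraction(years: float) -> float:
    """Fraction of original protein molecules still present after `years`,
    under simple exponential decay."""
    days = years * 365.25
    return math.exp(-days / MEAN_LIFETIME_DAYS)

for years in (1, 10, 50):
    print(f"After {years:2d} years: {surviving_fraction(years):.3e}")
```

After even one year essentially none of the original molecules remain, so a 50-year-old memory cannot be a 50-year-old physical deposit in a synapse.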

Dowling says on page 222, "It would appear that long-term memories are not permanently stored in the hippocampus but transferred elsewhere, probably various regions of the cortex."  He gives no data backing up this claim,  and there is no evidence for such memory transfer, nor does anyone have any idea of how it could work.  Because there is so much signal noise in the brain, so much noise in synapses, and so much noise and unreliability in the synapses of the cortex in particular, it is not credible that accurate memory information could move from the hippocampus to the cortex.  A scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower."  Signal transmission in the cortex would require the traversal of many synapses, and in each of these traversals there would be a low likelihood of a successful transmission of the signal. This unreliability of signal transmission in the cortex would be equivalent to signal noise even greater than we see in the bottom diagram, making it impossible for precise memories to transfer from the hippocampus to the cortex (and also making it impossible to precisely recall a detailed memory from the cortex).  
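The compounding of unreliability across sequential synapses is simple multiplication, which a few lines of Python make vivid (using the release probability of 0.1 cited above; the chain lengths are illustrative assumptions):

```python
# Probability that a signal successfully traverses a chain of unreliable
# synapses, if each transmission succeeds independently with probability p.
# p = 0.1 is the cortical release probability cited in the text.

def chain_success(p: float, n_synapses: int) -> float:
    """Probability that all n sequential synaptic transmissions succeed."""
    return p ** n_synapses

p = 0.1
for n in (1, 3, 5, 10):
    print(f"{n:2d} synapses: P(success) = {chain_success(p, n):.1e}")
```

With p = 0.1, a path of only ten synapses succeeds about once in ten billion attempts, which is why faithful transfer of detailed information across many cortical synapses is so hard to credit.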

signal noise

We know that in hemispherectomy operations in which half of a brain is removed to stop epileptic seizures, there is little damage to memory, even though half of the cortex is surgically removed.  Another reason for rejecting the idea of memories transferring from the hippocampus to the cortex is the issue of what can be called signal drowning.  Signal drowning is what happens when there are so many signals from so many sources that a particular signal is effectively drowned out. Such signal drowning would occur in a malfunctioning television which showed the signal from every cable TV channel all at once, or a malfunctioning radio which played simultaneously the music and words from every AM station at the same time. It would seem that in the cortex there would have to be exactly such signal drowning, because each neuron emits a signal very frequently (about once per second or more), and each neuron is connected directly to more than a thousand other neurons. So we can't imagine how the cortex could receive a memory transferred from the hippocampus, as such a signal would get drowned out from all the random signals from other neurons emitting so frequently.  
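The signal-drowning arithmetic is equally simple. Using the round numbers from the paragraph above (each neuron firing about once per second, each neuron with more than a thousand direct inputs):

```python
# "Signal drowning": background spikes converging on one cortical neuron
# versus a single incoming signal.  Round numbers from the text.

INPUTS_PER_NEURON = 1000   # direct presynaptic connections ("more than a thousand")
FIRING_RATE_HZ = 1.0       # spontaneous firing rate per neuron ("about once per second")

background_spikes_per_second = INPUTS_PER_NEURON * FIRING_RATE_HZ
print(f"Background spikes arriving each second: {background_spikes_per_second:.0f}")
# A single "memory transfer" spike would be one event among ~1000
# indistinguishable background events arriving in the same second.
```

A hypothetical memory signal from the hippocampus would thus arrive as one spike among roughly a thousand others in the same second, with nothing marking it out.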

Facts such as the low cognitive impact of surgical removal of half of the brain, the huge amount of noise in the brain and synapses, the unreliable transmission of synapses, and the short lifetimes of the proteins that make up synapses are examples of very important neuroscience facts (with huge implications) that neuroscientists such as Dowling avoid mentioning in their books, for they do not wish to discuss observations contrary to their dogmatic claims.

Dowling has several pages discussing what is called LTP, trying to make it sound like this short-term effect (produced by artificial methods) has some relevance to memory. Despite all of the "busy work" time neuroscientists have spent on LTP, no scientist has shown that it has any relevance to memory. We know that despite its misleading name (LTP stands for long-term potentiation), LTP is actually a very short-lived affair, almost always lasting less than a few days.  So such a thing cannot account for memories that last for 50 years. 

Like some student who knows nothing about the origin of World War I saying, "I don't exactly understand that," Dowling confesses on page 222, "Exactly how memories are stored in neurons or in neuronal circuits remains a mystery." But why would that be, if memories were actually stored in neurons? We discovered exactly how genetic information is stored in the nucleus of the cell around 1953. Is it really credible that 66 years later we would still have no understanding of how memories are stored in the brain, if they actually were stored in the brain? No, it isn't.  What is far more credible is that memories are not stored in brains, and that is exactly why no one has been able to read a memory from a bit of brain tissue in a lab, even though it is 66 years after laboratory scientists were able to read genetic information from inside cells. 

Friday, April 19, 2019

Motivated Reasoning of the “Cosmic Inflation” Storytellers

In 2017 Scientific American published a sharp critique of the theory of cosmic inflation originally advanced by Alan Guth (not to be confused with the more general Big Bang theory). The theory of cosmic inflation (which arose around 1980) is a kind of baroque add-on to the Big Bang theory that arose decades earlier. The Big Bang theory asserts the very general idea that the universe began suddenly in a state of incredible density, perhaps the infinite density called a singularity; and that the universe has been expanding ever since. The cosmic inflation theory makes a much more specific claim, a claim about less than one second of this expansion – that during only a small fraction of the first second of the expansion, there was a special super-fast type of expansion called exponential expansion. ("Cosmic inflation" is a very bad name for this theory, as it creates all kinds of confusion in which people confuse the verified idea of an expanding universe and the shaky idea of cosmic inflation. The term "cosmic inflation" refers not to cosmic expansion in general, but to the very specific idea that the universe's expansion was once a type of expansion -- exponential expansion -- radically faster and more dramatic than its current linear rate of expansion.) 
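To make the contrast between exponential and ordinary expansion concrete: inflation theorists commonly speak of the universe growing by some 60 "e-folds" (a figure I am assuming here for illustration; it is not from the article). A couple of lines of Python show what that means:

```python
import math

# Exponential growth of the cosmic scale factor during inflation: a(t)
# grows as exp(H*t).  A commonly quoted figure -- an illustrative
# assumption, not taken from the article -- is ~60 e-folds.

E_FOLDS = 60
growth_factor = math.exp(E_FOLDS)
print(f"Scale factor growth over 60 e-folds: ~{growth_factor:.1e}")
```

That is a stretching by a factor of roughly 10^26, claimed to occur within a tiny fraction of the universe's first second, which is utterly unlike the gentle expansion observed today.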

The article in Scientific American criticizing the theory of cosmic inflation was by three scientists (Anna Ijjas, Paul J. Steinhardt, Abraham Loeb), one a Harvard professor and another a Princeton professor. It was filled with very good points that should be read by anyone curious about the claims of the cosmic inflation theory.  You can read the article on a Harvard web site here. Or you can go to this site by the article's authors, summarizing their critique of the cosmic inflation theory.

Recently a very long scientific paper appeared on the arXiv physics paper server, a paper with the cute title “Cosmic Inflation: Trick or Treat?” In its very first words the paper's author (Jerome Martin) misinforms us, because he refers to cosmic inflation as something that was “discovered almost 40 years ago.” Discovery is a word that should be used only for observational results in science. Cosmic inflation (the speculation that the universe underwent an instant of exponential expansion) was never discovered or observed by scientists. In fact, it is impossible that this “cosmic inflation” or exponential expansion ever could be observed. During the first 300,000 years of the universe's history, the density of matter and energy was so great that all light particles were thoroughly scattered and shuffled a million times. It is therefore physically impossible that we ever will be able to observe any unscrambled light signals from the first 300,000 years of the universe's history. So we will never be able to get observations that might verify the claim of cosmic inflation theorists that the universe underwent an instant of exponential expansion.

At the end of the paper the author claims that the cosmic inflation theory has “all of the criterions that a good scientific theory should possess.” The author gives only two examples of such things: first, the claim that the cosmic inflation theory is falsifiable, and second that “inflation has been able to make predictions.” His claim that the theory is falsifiable is not very solid. He says that the cosmic inflation theory could be falsified if it were found that the universe did not have what is called a flat geometry, but then he refers us to a version of the cosmic inflation theory that predicted a universe without such a flat geometry. So cosmic inflation theory isn't really falsifiable at all. So many papers have been published speculating about different versions of cosmic inflation theory that the theory can be made to work with any future observations. Harvard astronomer Loeb says here the cosmic inflation theory "cannot be falsified." 

It is not at all true that the cosmic inflation theory has “all of the criterions that a good scientific theory should possess,” or even most of those characteristics. Below is a list of some of the characteristics that are desirable in a good scientific theory. You can have a good scientific theory without having all of these characteristics, but the more of these characteristics that you have, the more highly regarded your scientific theory should be.

  1. The theory is potentially verifiable. While falsification has been widely discussed in connection with scientific theories, it should not be forgotten that the opposite of falsification (verification) is equally important. Every good scientific theory should be potentially verifiable, meaning that there should always be some reasonable hypothetical set of observations that might verify the theory. In the case of the cosmic inflation theory, we can imagine no such observations. The only thing that could verify the cosmic inflation theory would be if we were to look back to the first instant of the universe and observe exponential expansion occurring. But, as I previously mentioned, there is a reason why such an observation can never possibly occur, no matter how powerful future telescopes are. The reason is that the density of the very early universe was so great that all light signals from the first 300,000 years of the universe's history were hopelessly shuffled, scrambled and scattered millions of times.
  2. The theory merely requires us to believe in something very simple. A very desirable characteristic of a scientific theory is that it only requires that we believe in something very simple. An example of a theory with such a characteristic is the theory that the extinction of the dinosaurs was caused by an asteroid collision. Such a theory asks us only to believe in something very simple, merely that a big rock fell from space and hit our planet. Another example of a theory that meets this characteristic is the theory of global warming. In its most basic form, the theory asks us merely to believe in something very simple: that humans are putting more greenhouse gases in the atmosphere, and that such gases raise temperatures (as we know they do inside a greenhouse). But the cosmic inflation theory (the theory of primordial exponential expansion) does not have this simplicity characteristic. All versions of such a theory require complex special conditions in order for this cosmic inflation (exponential expansion) to begin, to last for only an instant, and then to end in less than a second, so that the universe ends up with the type of expansion that it now has (linear expansion, not exponential expansion). We need merely look at the papers of the cosmic inflation theorists (all filled with complex mathematical speculations) to see that the theory falls far short of meeting this simplicity characteristic of a good scientific theory. In a recent post, the cosmic inflation pitchman Ethan Siegel tells us, "If you have an inflationary Universe that's governed by quantum physics, a Multiverse is unavoidable." What that means is that the cosmic inflation theory carries the near-infinite baggage of requiring belief in some vast collection of universes. Of course, this is the exact opposite of the simplicity that is desirable in a good theory.
  3. There is no evidence conflicting with the theory. A characteristic of a good scientific theory is that there is no evidence conflicting with the theory. The theory of electromagnetism and the theory of plate tectonics are very good theories, and there is no evidence against them. But there are quite a few observations conflicting with the cosmic inflation theory (the theory of exponential expansion in the universe's first instant). Such observations (sometimes called CMB anomalies) are discussed in this post. The observations are mainly cases in which the cosmic background radiation has some characteristic that we would not expect to see if the cosmic inflation theory were true. A scientific paper says, “These are therefore clearly surprising, highly statistically significant anomalies — unexpected in the standard inflationary theory and the accepted cosmological model.”
  4. The theory makes precise numerical predictions that have been exactly verified to several decimal places very many times. This characteristic is one that the best theories in physics have, theories such as the theory of general relativity, the theory of quantum mechanics, and the theory of electromagnetism. For example, the theory may predict that some unmeasured quantity will be 342.2304, and scientists will measure that quantity and find that it is exactly 342.2304. Or the theory may predict that some asteroid will hit the Moon at exactly 10:30 PM EST on May 23, 2026, and it will then be found (10 days later) that the asteroid did hit the Moon at exactly 10:30 PM EST on May 23, 2026. The cosmic inflation theory does not have this characteristic of a good scientific theory. It makes no exact numerical predictions at all. Several hundred different versions of the cosmic inflation theory have been published, each of which is a different scientific model. Each of those hundreds of models can predict 1000 different things, because the numerical parameters used with the equations can be varied. So the predictions of the cosmic inflation theory are pretty much all over the map, and it is impossible to point to any case in which it made a good, precise, successful prediction. When advocates of the cosmic inflation theory talk about predictive success, they are talking about woolly kinds of predictions (like “the universe will be pretty flat”) rather than exact numerical predictions, and they are talking about one-shot affairs rather than cases in which predictions are repeatedly verified. Many a wrong theory can have an equal degree of predictive success. For example, a bad economic theory may predict various things, and may vaguely predict correctly that the stock market will go up next year.
  5. We continue to get observational signs that the theory is correct. A desirable characteristic of a good scientific theory is that we continue to observe signs suggesting that the theory is correct. The theory of plate tectonics has such a characteristic. Every time there is an earthquake in the “Ring of Fire” region that marks the boundaries of continental plates, that's an additional observational sign that the plate tectonics theory is correct. The theory of gravitation continues to send us observational signals every day that the theory is correct. But we do not get any observational signs from the universe that it once underwent an instant of exponential expansion, nor can we logically imagine how such signs could ever come or keep coming from such a primordial event.

So it is clear that Martin's claim that the theory of cosmic inflation has “all of the criterions that a good scientific theory should possess” is not at all true. Saying something similar to what I said above, a New Scientist article puts it this way:

But no measurement will rule out inflation entirely, because it doesn’t make specific predictions. “There is a huge space of possible inflationary theories, which makes testing the basic idea very difficult,” says Peter Coles at Cardiff University, UK. “It’s like nailing jelly to the wall.”

The tall tale of cosmic inflation (exponential expansion at the beginning of the universe) is a modern case of a tribal folktale, told by a small tribe of a few thousand cosmologists. Below is the basic piece of folklore of the cosmic inflation theory:

"At the very beginning, the universe started out with just the right conditions for it to start expanding at a super-fast exponential rate. So for the tiniest fraction of a second, the universe did expand at this explosive exponential rate. Then, BOOM, the universe suddenly switched gears, did a dramatic change, and started expanding at the much slower, linear rate that we now observe."

Why would anyone believe such a story that can never be verified? The answer is: because they have a strong motivation. The arguments given for the cosmic inflation theory are examples of what is called motivated reasoning. Motivated reasoning is reasoning that people engage in not because they have premises or evidence that demand particular conclusions, but because they have a motivation for reaching the conclusion.

The motivation for the cosmic inflation theory was that people wanted to get rid of some apparent fine-tuning in the Big Bang. At about the time the cosmic inflation theory appeared, scientists were saying that the universe's initial expansion rate was just right, and that if it had differed by even 1 part in 1,000,000,000,000,000,000,000,000,000,000,000,000,000, we would not have ended up with a universe that would have allowed life to exist in it. That type of extremely precise fine-tuning at the very beginning of Time bothers those who want to believe in a purposeless universe.

Saying that the universe's initial expansion rate was fine-tuned is equivalent to saying that the density was fine-tuned, for the requirement is a very precise balancing involving an expansion rate that is just right for a particular density (or, to state the same idea, a density that is just right for a particular expansion rate).  In a recent very long cosmology paper, scientist Fred Adams notes on page 41 the requirement for a very precise fine-tuning of the universe's initial density (something like 1 in 10 to the sixtieth power, which is a trillionth of a trillionth of a trillionth of a trillionth of a trillionth).  On page 42 Adams states that, "The paradigm of inflation was developed to alleviate this issue of the sensitive fine-tuning of the density parameter."  That was the motivation of the cosmic inflation theory -- to sweep under the rug or get rid of a dramatic case of fine-tuning in nature. 
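The fine-tuning Adams describes is the textbook "flatness problem," and it is easy to sketch where the 1-in-10-to-the-sixtieth figure comes from. The following is a rough back-of-the-envelope version using the standard Friedmann cosmology; the exact exponent depends on the assumed expansion history:

```latex
% Friedmann equation, rearranged to isolate the curvature term
% (here \rho is the density, a the scale factor, H the Hubble rate):
%   H^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}
\Omega(t) - 1 = \frac{k}{a^2 H^2}
% During radiation and matter domination, a^2 H^2 shrinks as the
% universe expands, so any early deviation of \Omega from 1 keeps
% growing. For the observed \Omega \approx 1 today, the deviation at
% the earliest times must have satisfied roughly
\left|\,\Omega(t_{\mathrm{early}}) - 1\,\right| \lesssim 10^{-60}
```

Inflation was proposed to drive Ω toward 1 dynamically, so that no such special initial value need be assumed, which is exactly the motivation Adams states.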

The folklore mongers who sell cosmic inflation stories may believe that they have got rid of this fine-tuning at the beginning. But they actually haven't. They've merely “robbed Peter to pay Paul,” by getting rid of fine-tuning in one place (in regard to the universe's initial expansion rate) at the price of requiring lots of fine-tuning in lots of other places. That's because all theories of cosmic inflation themselves require enormous amounts of fine-tuning. But with a cosmic inflation theory it may be rather less noticeable, because the required fine-tuning occurs in lots of different places rather than in one place.

Judging from a 2016 cosmology paper,  the cosmic inflation theory requires not just one type of fine-tuning, but three types of fine-tuning. The paper says, “Provided one permits a reasonable amount of fine tuning (precisely three fine tunings are needed), one can get a flat enough effective potential in the Einstein frame to grant inflation whose predictions are consistent with observations.” How on Earth does it represent progress to try to get rid of one case of fine-tuning by introducing a theory that requires three cases of fine-tuning? And the estimate of three fine-tunings in the paper is probably an underestimate, as other papers I have read suggest that 7 or more precise fine-tunings are needed.

This is not theoretical progress

We may compare the cosmic inflation pitchman to some person who wants to sell someone in Manhattan a car. “Think of all the money you'll save!” says the pitchman. “You won't have to pay $40 on subways each week.” But what the pitchman fails to tell you is that when you add up the cost of the monthly car payments, the cost of car insurance, and the cost of a garage parking space (because there are so few parking spaces in Manhattan), the total cost of the car is much more than the cost of the subway. Similarly, the pitchmen of cosmic inflation theory tell us that the theory is great because it reduces fine-tuning in one place (in regard to the universe's initial expansion rate), and neglect to tell you that the total amount of fine-tuning (adding up all of the special requirements and fine-tuning needed for cosmic inflation to work) is probably far “worse” if you believe that cosmic inflation occurred.

What has been going on with the cosmic inflation theory is very similar to what went on with the supersymmetry theory, a theory on which physicists fruitlessly labored for decades. Like the cosmic inflation theory, supersymmetry was motivated by a desire to sweep some fine-tuning under the rug. In the case of supersymmetry, the fine-tuning scientists wanted to get rid of was the apparent fact of the Higgs boson or Higgs field being fine-tuned very precisely ("like a pencil standing on its point" is an analogy sometimes given). An article on the supersymmetry theory discusses the fine-tuning that motivated the theory:

One logical option is that nature has chosen the initial value of the Higgs boson mass to precisely offset these quantum fluctuations, to an accuracy of one in 10^16. However, that possibility seems remote at best, because the initial value and the quantum fluctuation have nothing to do with each other. It would be akin to dropping a sharp pencil onto a table and having it land exactly upright, balanced on its point. In physics terms, the configuration of the pencil is unnatural or fine-tuned.

Similarly, a paper on an MIT server entitled "Motivation for Supersymmetry" states the following (referring to the many new types of hypothetical particles called "supersymmetric partners" imagined by the supersymmetry theory):

Thus in order to get the required low Higgs mass, the bare mass must be fine-tuned to dozens of significant places in order to precisely cancel the very large interaction terms....However, if supersymmetric partners are included, this fine-tuning is not needed.
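The cancellation described in both quotes has a standard schematic form, which is worth writing out. This is a back-of-the-envelope sketch; the precise coefficients depend on the couplings and on the cutoff scale chosen:

```latex
% The physical Higgs mass-squared is the bare value plus quantum
% corrections that grow with the square of the cutoff scale \Lambda:
m_{H}^{2} = m_{\mathrm{bare}}^{2} + \delta m^{2},
\qquad
\delta m^{2} \sim \frac{\Lambda^{2}}{16\pi^{2}}
% With \Lambda near the Planck scale (\sim 10^{19}\ \mathrm{GeV}) and
% the observed m_H \approx 125\ \mathrm{GeV}, the two terms on the
% right must cancel to dozens of decimal places unless something
% (such as supersymmetric partner contributions) tames \delta m^{2}.
```

Supersymmetry's appeal was that the partner particles' loop contributions cancel the large Λ² term automatically, with no hand-tuned bare mass required.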

Physicists erected the ornate theory of supersymmetry, thinking that they were explaining away this very precise fine-tuning  in nature, "to dozens of significant places." But they failed to see that they were just “robbing Peter to pay Paul,” because the total amount of fine-tuning required by the supersymmetry theory (given all of its many different things that had to be just right) was as great as the fine-tuning that it tried to explain away. So there was no net lessening of fine-tuning even if the supersymmetry theory was true.

The MIT paper above says "many thousands" of science papers have been written about supersymmetry. Most of them spun out ornate webs of speculation, as ornate and unsubstantiated as the gossamer speculations of cosmic inflation theorists.  Supersymmetry has failed all observational tests, and now many physicists are lamenting that they wasted so many years on it. Our cosmic inflation theorists have failed to heed the lesson of the supersymmetry fiasco: that trying to explain away fine-tuning in the universe is a waste of time. 

Postscript: A recent scientific article makes untrue comments about the supersymmetry theory.  It amusingly claims that the theory is a "natural outgrowth of a mathematical symmetry of spacetime."  There's nothing natural about the supersymmetry theory, which is a very complex artificial collection of ad-hoc speculations.  The article tells us that the supersymmetry theory is "well established within particle physics,"  ignoring the fact that no evidence for the theory has ever appeared, and that it has failed all observational tests. This is what so many modern scientists and science writers do:  make untrue claims about the evidence status of cherished theories. 

A recent article in Scientific American says the following:

In the big “inflation debate” in Scientific American a few years ago, a key piece of the big bang paradigm was criticized by one of the theory's original proponents for having become indefensible as a scientific theory. Why? Because inflation theory relies on ad hoc contrivances to accommodate almost any data, and because its proposed physical field is not based on anything with empirical justification. 

Monday, April 15, 2019

When an Apparition Is Seen by Multiple Observers: 17 Cases

Apparitions have been reported by humans throughout history. Skeptics claim that such apparitions are just hallucinations. But there are two reasons for rejecting such a theory. The first reason is that there is an unusually high fraction of apparition sightings in which a person sees an apparition of someone (typically someone the observer did not know was in danger), and then later finds out that this person died on the same day (or the same day and hour) that the apparition was seen. We would expect such cases to be extremely rare or nonexistent if apparitions were mere hallucinations, since all such cases would require a most unlikely coincidence. But the literature on apparitions shows that it is quite common for an apparition to appear to someone on the day (or both the day and hour) of the death of the person matching the apparition. See here and here and here and here for 100 such cases.

The second reason for rejecting claims that apparitions are mere hallucinations is the fact that an apparition is quite often seen or heard by more than one person at the same time. We should not expect any such cases under a theory of apparitions being hallucinations.

I will now review some examples of cases in which an apparition was seen or heard by more than one person.  A very early case is found in the 17th century book Miscellanies by John Aubrey.  On page 82 we are told that a week after his death, an apparition of Henry Jacob appeared to Dr. Jacob, and that the apparition was also seen by his cook and maid. 

Below are some cases from Volume 2 of the classic work on apparitions, “Phantasms of the Living,” which you can read here. I will use the case numbers given in the book.

Page 174, Case #310: A reverend Fagan, his cousin Christopher, and a Major Collis all heard the name “Fagan” called from a source they could not determine. Two of them said the voice was like the voice of Captain Clayton. The next morning a telegram arrived saying that Captain Clayton had died on the same day and hour as the voice was heard. (I won't count this case as one of my 17 cases, since it is auditory only.)

Page 178, Case #312: Gorgiana Polson reported seeing a woman who she thought was “something unnatural” and exclaimed, “Oh, Caroline.” The woman was dressed in black silk “with a muslin 'cloud' over her head and shoulders.” At the same time, a “little nursery girl” was terrified of going into a room where she saw a similar strange figure, “in black, with white all over her head and shoulders.” Gorgiana later found that Caroline had died on the same day the apparition was seen.

Page 181, Case #314: A Mrs. Coote reported that she saw her sister-in-law Mrs. W. appear at her bedside. The same Mrs. W. reportedly appeared to Mrs. Coote's aunt, appearing as a “bright light from a dark corner of the bedroom,” who was recognized as Mrs. W. by the aunt. Also, according to Mrs. Coote, “this appearance was also made to my husband's half-sister.” It was soon found that Mrs. W. had died. According to Mrs. Coote “A comparison of dates...served to show the appearance occurred ...at the time of, or shortly thereafter, the death of the deceased.”

Page 182, Case #315: A Mr. de Guerin reported that in 1854, he saw something that “appeared like a thin white fog....after a few minutes I plainly distinguished a figure which I recognized as that of my sister Fanny.” He said “the vision seemed to disappear gradually in the same manner as it came.” He later learned that “on the same day my sister died – almost suddenly.” De Guerin immediately mailed a description of what he had seen to another sister, Mrs. Elmslie, who lived far away; “but before it reached her, I had received a letter from her, giving me an almost similar description of what she had seen the same night, adding 'I am sure dear Fanny is gone.' ” She reported that the apparition disappeared.

Page 196-197, Case #317: Violet Montgomery and Sidney Montgomery reported that in 1875 they had seen a female figure that “never touched the ground at all, but floated calmly along.” Page 197 also mentions a Mr. W.S. Soutar, who claimed that he and his brother also saw a female figure that glided without any apparent movement of the feet.

Page 213, Case #330: A James Cowley said he “saw, with all the distinctness possible to visual power” an apparition of his late wife. At the same instant his two-year-old son said, “There's mother!”

Page 213, Case #331: Charles A.W. Lett said that six weeks after the death of Captain Towns, his wife and Miss Berthon reported seeing a half-apparition of Captain Towns, consisting of only his head and shoulders. According to Lett, several other people saw the apparition, identifying it as Captain Towns; and then the apparition “gradually faded away.”

Page 235, Case #345: A Mrs. Cox was told by a nephew that he had just seen his father (Mrs. Cox's brother), who was thought to be far away in Hong Kong. Mrs. Cox told the boy this was nonsense, but then saw the same apparition of her brother. She reported that the apparition called her name three times. She soon found out that her brother had died on the same day the apparition was seen.

Page 241, Case #349: In 1845, while at college, Philip Weld died in a boating accident. The president of the college immediately set out to travel to Philip's father to deliver the bad news. Arriving the next day, he was surprised to hear the father say that the day before, he and his daughter had seen Philip walking between two persons, one wearing a black robe. “Suddenly they all seemed to me to have vanished,” said the father. Later, the father saw a portrait that he identified as depicting one of the men he had seen with the apparition of his son. The portrait was of a saint who had died long ago.

Page 247, Case #351: In 1882 J. Bennett and her daughter saw a man whose health they were worried about: “He passed so near that we shrank aside to make way for him.” Later “we found, in fact, that he had died about a half hour before he appeared to us.”

Page 248-249, Case #352: At quarter to 7 on July 11, 1879, Samuel Falkinburg observed his son exclaim “Grandpa!” Samuel looked up toward the ceiling and “saw the face of my father as plainly as I ever saw him in my life.” Soon thereafter he found out that his father died on July 11, 1879, at quarter to 7.

Page 253, Case #354: A girl went to live far away from her beloved aunt who had raised her for most of her childhood. One day someone other than the girl said, “Oh look there! There's your aunt in bed with Caroline!” The girl was astonished to see her aunt lying on the bed. A short time later the aunt seemed to have disappeared. Later the girl found the aunt had died, and that her last words were a remark that she could die happy because she had seen the child.

Page 604, Case 651: Benjamin Coleman was surprised to see at his bedside his son, who was believed to be far away at sea.  The figure (having a sailor's dress) vanished from Benjamin's sight. He then soon heard his servant William Ball say that William also had seen the son that day in sailor's dress.  The father later found that the son "had died that very day and hour, of dysentery, on board ship." 

Page 611, Case 658: Chatting in bed, Elizabeth and Henriette both saw a strange light, which they both said was beautiful.  Elizabeth then said it was little Mary Stanger, and that she was "floating away."  It was later learned that Mary Stanger had "died at the exact time" the two girls had seen the vision. 

On pages 40-43 of the book Death and Its Mystery by the astronomer Camille Flammarion, we have an astonishing case of an apparition seen by multiple observers. Unlike the other cases I have reported, this is an example of an apparition of a living person. According to Flammarion's account, 13 girls saw an apparition of a school teacher named Emilie Sagee, right next to her physical form, so that there was "one beside the other." "They were exactly alike, and going through the same movements," according to Flammarion, who states, "All the young girls, without exception, had seen the second form, and agreed perfectly in their description of the phenomenon." Later, according to his account, 42 pupils saw an apparition of Sagee in a school at the same time Sagee was also observed picking flowers in a garden -- as if there were two copies of Emilie Sagee. According to the pupils, the apparition "gradually vanished." Flammarion reports, "The forty-two pupils described the phenomenon in the same way." Such an observation may possibly be evidence for the idea that each of us has an "astral body" different from our physical body. The apparition observed may have been a rare sighting of such a thing appearing before death.

Flammarion reports a similar case of an apparition of the living on pages 49-50 of his book. Two observers saw a Miss Jackson warming her hands before a fire. "Suddenly, before their very eyes, she disappeared," according to Flammarion. Half an hour later, Miss Jackson entered the room and warmed her hands before the fire.

I have 17 other cases of apparitions seen by multiple observers, which are described in this post. 

Thursday, April 11, 2019

The Only Good Thing About the “We Are in a Computer Simulation” Theory

Early in the century Nick Bostrom advanced an argument claiming that there is a significant chance that we are merely living in a computer simulation. This idea has received a high degree of worldwide attention that makes no sense, given the extreme weakness of Bostrom's argument for such an idea.

Bostrom imagined extraterrestrial civilizations running computer programs that somehow produce experiences such as you and I are now having, calling these "ancestor simulations." I need merely quote a brief passage from Bostrom's original paper to show the sophistry of its reasoning.

"A technologically mature 'posthuman' civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one."

This cannot be careful reasoning, because Bostrom has sloppily spoken as if an interest in running “ancestor-simulations” is equivalent to actually running them. But there are 1001 reasons why extraterrestrial civilizations might not run such “ancestor-simulations,” even if they had some interest in running them. He also does nothing to justify his claim that “at least one of the following propositions is true,” and it is certainly not clear that the third proposition is even possible, let alone that it must be true if the other two propositions are false.

Bostrom also makes the big mistake of implying that if there is one alien civilization interested in creating an “ancestor simulation,” such a civilization would now be producing countless such simulations. He suggests that if there is one such civilization, the number of simulated lives would greatly outnumber the number of real lives. This is a completely unjustified insinuation. The more often some weird non-essential project has been done, the less interested people tend to be in doing it again. If an alien civilization were able to run some ancestor simulation of the type Bostrom imagines, we have every reason to suspect that it would grow bored with such a thing after some particular number of years, and lose interest in it. Given an alien civilization that at one point in its long existence had an interest in running an ancestor simulation, there is no reason to think (given a very long lifetime for that civilization) that it would now be running such simulations. And there is also no reason to believe that it would now be running very many such simulations, so many that the number of simulated lives would outnumber the number of real lives.
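The step from Bostrom's premises to his proposition (3) is, at bottom, arithmetic about the ratio of simulated observer-lives to real ones, and that arithmetic is entirely hostage to parameters Bostrom never justifies. A toy calculation (all numbers below are purely hypothetical, chosen only for illustration) makes the point:

```python
# Toy model of the "fraction of simulated observers" arithmetic.
# Every number used here is a hypothetical assumption, not data.

def simulated_fraction(real_lives, num_civs, sims_per_civ, lives_per_sim):
    """Fraction of all observer-lives that are simulated."""
    simulated = num_civs * sims_per_civ * lives_per_sim
    return simulated / (simulated + real_lives)

# If even one civilization runs a million rich simulations, the
# simulated lives swamp the real ones:
print(simulated_fraction(1e11, 1, 1e6, 1e10))  # ≈ 0.99999

# But if simulations are rare and small (boredom, cost, or any of
# the "1001 reasons"), the fraction is negligible:
print(simulated_fraction(1e11, 1, 1, 1e3))     # ≈ 1e-8
```

The conclusion flips from "almost certainly simulated" to "almost certainly not" purely on the assumed inputs, which is why proposition (3) cannot simply be read off from the premises about computing power.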

Moreover, if we were to be living in a simulation, there would be no reason to believe that there are any extraterrestrial planets that might have super-advanced civilizations doing computer-generated "ancestor simulations," because a consequence of such a hypothesis is that all of our astronomical data (and all of our computer progress data and all of our video game progress data) is illusory, and that the stars and planets and computers and video games we observe are just "parts of the simulation."  So the simulation argument is like some guy who climbs up a ladder and then kicks the ladder out from under his feet. If you exist in a simulated reality, then you have zero basis for believing anything about extraterrestrials or computers outside of your simulated reality. 

A general rule of all successful arguments is: the conclusion never discredits one of the premises. Below is an argument that violates that rule:

Premise 1: My husband is a good man.
Premise 2: Good men tell the truth.
Premise 3: So when my husband said, "I'm going to flatten you!" when I told him I had thrown away his big stack of porn magazines, he must have been telling the truth. 
Conclusion: Therefore, my husband literally plans to flatten me, perhaps by renting a steamroller and running over me. 

One reason this is a bad argument is that the conclusion invalidates one of the premises (if your husband is planning to murder you, then he's not a good man).  Something similar goes on in the argument that we are living in a computer simulation, which could be stated like this:

Premise 1: We have astronomical reasons for thinking maybe there are very old extraterrestrial civilizations. 
Premise 2: Such civilizations would have incredibly powerful computers.
Premise 3: Such computers would be so powerful they could simulate our lives. 
Conclusion: We're probably living in a computer simulation run by extraterrestrials.

In this case, as in the wife's argument, the conclusion discredits one of the premises. If we are living in a simulated reality, then all of our astronomical data is just "part of the simulation," and we have no reason for thinking there are old extraterrestrial civilizations. 

An argument for the simulated universe idea was advanced by Elon Musk, who stated the following:
The strongest argument for us being in a simulation probably is the following. Forty years ago we had pong. Like, two rectangles and a dot. That was what games were.
Now, 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously, and it's getting better every year. Soon we'll have virtual reality, augmented reality.
If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, okay, let's imagine it's 10,000 years in the future, which is nothing on the evolutionary scale.
So given that we're clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we're in base reality is one in billions.”
What Musk describes is a progression of sophistication in video game technology. But we have not one bit of evidence that any computer or video game has ever itself had the slightest iota of experience, consciousness, or life-flow. We only have evidence that biological creatures such as us can have some experience, consciousness, or life-flow. So there is no basis for thinking that some super-advanced alien civilization could ever be able to produce computers or video games that by themselves were the source of experiences like the ones we have. Making such an assumption based on an extrapolation of technical progress in video games is as fallacious as arguing that one day video games will be so realistic that their characters or creatures will leap out of the video game screen and endanger your life (kind of like in the visual below). 

Musk's witless reasoning along the lines of  "we're probably in a computer simulation because video games are getting better" has recently been repeated by a computer science expert named Rizwan Virk. Regarding the possibility that we are living in a computer simulation, Virk has deluded himself into believing that "there is plenty of evidence that points in that direction"  (there is actually no such evidence). After making a completely inappropriate appeal to quantum mechanics and Schrodinger's cat, which have no relevance to this question, Virk tries to back up the simulation argument by telling us that physicist John Wheeler made the "discovery" that everything is information. That was merely a speculation of Wheeler's, one that hasn't won much acceptance; and if it is true, it wouldn't imply we are in a computer simulation. Virk also tries to back up the simulation idea by arguing that video games have got a lot better over the years. It's the same bunk argument presented by Musk. 

Arguments that we are living in a computer simulation have very little intrinsic worth. But there is one good thing about considering such arguments seriously. If we consider seriously the possibility that we are merely living in a computer simulation, our minds may be opened to an important general possibility that may very well be true: the possibility that reality is radically different from the way it is officially portrayed.

Let us consider one way a person might think about the possibility we are living in a computer simulation. He might think like this:

We have been told that our minds are being produced by our brains, but that may not be true.

We have been told that we are merely the product of blind evolution, but that may not be true.

We have been told that the matter we see around us exists independently of our minds, but that may not be true.

Maybe instead of being just the result of a long series of incredibly improbable accidents, we are here because of the intention of some purposeful intelligence.

Maybe we're just living in a computer simulation run by extraterrestrials.

The last of these ideas is not a very viable one at all, but the preceding ideas are all well worth considering, particularly in forms outside of the “computer simulation” idea. Considering such possibilities very seriously would seem to be a step in the direction of philosophical maturity. Once someone starts reasoning along the lines above, he may climb out of the thought prison that our mainstream experts have kept us chained in for so long. But having escaped such a prison, you should look around for something better than the cheesy "we are in an extraterrestrial computer simulation" idea.

Sunday, April 7, 2019

Brains Are Way Too Slow to Explain Fast Recall and Thinking

Scientists have long advanced the claim that the human brain is the storage place for memories and the source of human thinking. But such claims are speech customs of scientists rather than things they have proven. There are numerous reasons for doubting such claims. One big reason is that the proteins in synapses have an average lifetime of only a few weeks, which is only a thousandth of the length of time (50 years or more) that humans can store memories. Another reason is that neurons and synapses are way too noisy to explain very accurate human memory recall, such as when a Hamlet actor flawlessly recites 1476 lines. Another general reason can be stated as follows: the human brain is too slow to account for very fast thinking and very fast memory retrieval.

Consider the question of memory retrieval. Given a prompt such as a person's name or a very short description of a person, topic or event, humans can accurately retrieve detailed information about such a topic in one or two seconds. We see this ability constantly displayed on the long-running television series Jeopardy. On that show, contestants will be given a short prompt such as “This opera by Rossini had a disastrous premiere,” and within a second after hearing it, a contestant may click a buzzer and then a second later give an answer mentioning The Barber of Seville. Similarly, you can play a game with a well-educated person that you might call “Who Was I?” You just pick random names of actual people from the arts or history, and ask the person to identify each one within about two seconds. Very frequently the person will succeed. We can imagine a session of such a game, occurring in only ten seconds:

John: Marconi.
Mary: Invented the radio.
John: Magellan.
Mary: First to sail around the globe.
John: Peter Falk.
Mary: A TV actor.

We can also imagine a visual version of this game, in which you identify random pictures of any of 1000 famous people. The answers would often be just as quick.

The question is: how could a brain possibly achieve retrieval and recognition so quickly? Let us suppose that the information about some person is stored in some particular group of neurons somewhere in the brain. Finding that exact tiny storage location would be like finding a needle in a haystack, or like finding just the right index card in a swimming pool full of index cards. It would also be like opening the door of some vast library with a million volumes and instantly finding the exact volume you were looking for.

There are certain design features that a system can have that will allow for very rapid retrieval of information. One of these features is an indexing system. An indexing system requires a position notation system, in which the exact position of some piece of information can be recorded. An ordinary textbook has both of these things. The position notation system is the page numbering system. The indexing system is the index at the back of the book. But the brain has neither of these features. There is nothing in the brain like a position notation system by which the exact position of some tiny group of neurons can be identified. The brain has no neuron numbers, and a brain has no coordinate system similar to street names in a city or Cartesian coordinates in a grid. Lacking any such position notation system, the brain has no indexing system (something that requires a position notation system).

So how is it that humans are able to recall things instantly? It seems that the brain has nothing like the speed features that would make such a thing possible. You can't get around such a difficulty by claiming that each memory is stored everywhere in the brain. There would be two versions of such an idea. The first would be that each memory is entirely stored in every little spot of the brain. That makes no more sense than the idea of a library in which each page contains the information in every page of every book. The second version of the idea would be that each memory is broken up and scattered across the brain. But such an idea actually worsens the problem of explaining memory retrieval, as it would only be harder to retrieve a memory if it is scattered all over your brain rather than in a single little spot of your brain.

We also cannot get around this navigation problem by imagining that when you are asked a question, your brain scans all of its stored information. That doesn't correspond to what happens in our minds. For example, if someone asks me, "Who was Teddy Roosevelt?" my mind goes instantly to my memories of Teddy Roosevelt, and I don't experience little flashes of knowledge about countless other people, as if my brain were scanning all of its memories.

When we consider the issue of decoding encoded information, we have an additional strong reason for thinking that the brain is way too slow to account for instantaneous recall of learned information.  In order for knowledge to be stored in a brain, it would have to be encoded or translated into some type of neural state. Then, when the memory is recalled, this information would have to be decoded: it would have to be translated from some stored neural state into a thought held in the mind. This requirement is the most gigantic difficulty for any claim that brains store memories. Although they typically maintain that memories are encoded and decoded in the brain, no neuroscientist has ever specified a detailed theory of how such encoding and decoding could work. Besides the huge difficulty that such a system of encoding and decoding would require a kind of "miracle of design" we would never expect for a brain to ever have naturally acquired (something a million times more complicated than the genetic code), there is the difficulty that the decoding would take quite a bit of time, a length of time greater than the time it takes to recall something. 

So suppose I have some memory of who George Patton was, stored in my brain as some kind of synaptic or neural state, after that information had somehow been translated into such a state using some encoding scheme. Then when someone asks, "Who was George Patton?" I would have to not only find this stored memory in my brain (like finding a needle in a haystack), but also translate these neural states back into an idea, so I could instantly answer, "The general in charge of the Third Army in World War II." The time required for the decoding of the stored information would be an additional reason why instantaneous recall could never be happening if you were reading information stored in your brain. The decoding of neurally stored memories would presumably require protein synthesis, but the synthesis of proteins requires minutes of time.

There is another reason for doubting that the brain is fast enough to account for human mental activity. The reason is that the transmission of signals in a brain is way, way too slow to account for the very rapid speed of human thought and human memory retrieval.

Information travels about in a modern computer at a speed thousands of times faster than nerve signals travel in the human brain. If you type "speed of brain signals" into the Google search engine, you will see in large letters the number 286 miles per hour, which is a speed of about 128 meters per second. This is one of many examples of dubious information that sometimes pops up in a large font at the top of the Google search results. The particular number in question is an estimate made by an anonymous person who quotes no sources, and who merely claims that brain signals "can" travel at such a speed, not that such a speed is the average speed of brain signals. There is a huge difference between the average speed at which some distance will be traveled and the maximum speed at which part of that distance can be traveled (for example, while you may briefly drive at 40 miles per hour while traveling through Los Angeles, your average speed will be much, much less because of traffic lights).

A more common figure you will often see quoted is that nerve signals can travel in the human brain at a rate of about 100 meters per second. But that is the maximum speed at which such a nerve signal can travel, when a nerve signal is traveling across what is called a myelinated axon. Below we see a diagram of a neuron. The axons are the tube-like parts in the diagram below.


The less sophisticated diagram below makes it clear that axons make up only part of the length that brain signals must travel.


There are two types of axons: myelinated axons and non-myelinated axons (myelinated axons having a sheath-like covering shown in blue in the diagram above). According to this article, non-myelinated axons transmit nerve signals at a slower speed of only .5-2 meters per second (roughly one meter per second). Near the end of this article is a table of measured speed of nerve signals traveling across axons in different animals; and in that table we see a variety of speeds varying between .3 meters per second (only about a foot per second) and about 100 meters per second. 

But from the mere fact that nerve signals can travel across myelinated axons at a maximum speed of about 100 meters per second, we are not at all entitled to conclude that nerve signals typically travel from one region of the brain to another at 100 meters per second. For nerve signals must also travel across dendrites and synapses, which we can see in the diagrams above. It turns out that nerve signal transmission is much slower across dendrites and synapses than across axons. To give an analogy, the axons are like a road on which you can travel fast, and the dendrites and synapses are like traffic lights or stop signs that slow down your speed.

According to neuroscientist Nikolaos C Aggelopoulos, there is an estimate of 0.5 meters per second for the speed of nerve transmission across dendrites. That is a speed 200 times slower than the nerve transmission speed commonly quoted for myelinated axons. According to Bratislav D. Stefanovic, MD, the conduction speed across dendrites is between .1 and 15 meters per second. Such a speed bump seems more important when we consider a quote by UCLA neurophysicist Mayank Mehta: "Dendrites make up more than 90 percent of neural tissue."  Given such a percentage, and such a conduction speed across dendrites, it would seem that the average transmission speed of a brain must be only a small fraction of the 100 meter-per-second transmission in axons. 

Besides this “speed bump” of the slower nerve transmission speed across dendrites, there is another “speed bump”: the slower nerve transmission speed across synapses (which you can see in the top “close up” circle of the first diagram above). There are two types of synapses: chemical synapses and electrical synapses. The parts of the brain allegedly involved in thought and memory have almost entirely chemical synapses. (The sources here and here and here and here and here refer to electrical synapses as "rare."  The neurosurgeon Jeffrey Schweitzer refers here to electrical synapses as "rare."  The paper here tells us on page 401 that electrical synapses -- also called gap junctions -- have only "been described very rarely" in the neocortex of the brain. This paper says that electrical synapses are a "small minority of synapses in the brain.")

We know of a reason why transmission of a nerve signal across chemical synapses should be relatively sluggish. When a nerve signal comes to the head of a chemical synapse, it can no longer travel across the synapse electrically. It must travel by neurotransmitter molecules diffusing across the gap of the synapse. This is much, much slower than what goes on in an axon.

Diagram of a synapse

There is a scientific term used for the delay caused when a nerve signal travels across a synapse. The delay is called the synaptic delay. According to this 1965 scientific paper, most synaptic delays are about .5 milliseconds, but there are also quite a few as long as 2 to 4 milliseconds. A more recent (and probably more reliable) estimate was made in a 2000 paper studying the prefrontal monkey cortex. That paper says, "the synaptic delay, estimated from the y-axis intercepts of the linear regressions, was 2.29" milliseconds. It is very important to realize that this synaptic delay is not the total delay caused by a nerve signal as it passes across different synapses. The synaptic delay is the delay caused each and every time that the nerve signal passes across a synapse. 

Such a delay may not seem like too much of a speed bump. But consider just how many such "synaptic delays" would have to occur for a brain signal to travel from one region of the brain to another. It has been estimated that the brain contains 100 trillion synapses (a single neuron may have thousands of them). So for a neural signal to travel from one part of the brain to another part only 5% or 10% of the brain's length away, the signal would have to endure many thousands of such "synaptic delays," requiring a total of quite a few seconds.

An average male human brain has a volume of about 1300 cubic centimeters. Let's try to calculate the minimum number of synapses that would have to be sequentially traversed for a neural signal to cross a volume of only 1 cubic centimeter (a cube about 1 centimeter, or .39 inch, on each side).

If there are 100 trillion synapses in a brain of 1300 cubic centimeters,  then the number of synapses in this volume of 1 cubic centimeter would be roughly 100 trillion divided by 1300, which gives 77 billion. (This page gives an estimate of 418 billion synapses per cubic centimeter, but notes that estimates of synapse density vary; so let's just stick with the smaller number.)

It would be a big mistake to assume that a neural signal would have to sequentially traverse all of those 77 billion synapses. To traverse the shortest path across this volume, the signal would merely have to pass through a number of synapses that is roughly the cube root of the total number of synapses in the volume (the number that, raised to the third power, gives that total). Similarly, if we imagine a ball with 64 equally spaced connected nodes, including nodes in the center, something rather like the ball shown below, then it is clear that the shortest path from one node at the outer edge of the ball to a node on the opposite side would require traversing a number of nodes that is at least the cube root of 64, which is 4.

So to roughly compute the shortest series of synapses that would have to be traversed for a brain signal to travel through this 1 cubic centimeter volume, we can take the cube root of 77 billion (the number that, raised to the third power, equals 77 billion). The cube root of 77 billion is about 4254. So it seems that to traverse the shortest path through a volume of 1 cubic centimeter containing 77 billion synapses, traveling a distance of about 1 centimeter, a neural signal would have to pass sequentially through a path containing at least 4000 different synapses (along with other neural elements such as dendrites).

To calculate how long this traversal would take across a 1 cubic centimeter region of the brain, considering only the dominant delay factor of synaptic delays, we can simply multiply this number of 4000 by the synaptic delay (the time needed for the signal to cross a single synaptic gap). Using the smallest estimate of the synaptic delay (the 1965 estimate of about .5 millisecond), and ignoring the more recent year 2000 estimate of 2.29 milliseconds, multiplying 4000 by .5 millisecond gives a total time of two seconds (2000 milliseconds) for a nerve signal to travel across one cubic centimeter of brain tissue. The velocity we get from this calculation is less than 1 centimeter per second (it's actually about half a centimeter per second).
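Under the assumptions just stated (100 trillion synapses in a 1300 cubic centimeter brain, and a .5 millisecond synaptic delay), the whole chain of arithmetic can be checked with a few lines of Python:

```python
# Rough estimate of brain signal speed across 1 cubic centimeter,
# considering only synaptic delays, using the figures quoted above.

total_synapses = 100e12        # estimated synapses in the whole brain
brain_volume_cc = 1300         # average male brain volume, cubic centimeters
synapses_per_cc = total_synapses / brain_volume_cc   # ~77 billion

# The shortest path through the volume crosses roughly the cube root
# of the synapses it contains.
synapses_on_path = round(synapses_per_cc ** (1 / 3))  # ~4253 (text rounds to ~4254)

synaptic_delay_s = 0.5e-3      # .5 millisecond (the 1965 estimate)
traversal_time_s = synapses_on_path * synaptic_delay_s  # ~2.1 seconds

speed_cm_per_s = 1.0 / traversal_time_s  # crossing a distance of 1 centimeter
print(f"{synapses_on_path} synapses, {traversal_time_s:.1f} s, "
      f"{speed_cm_per_s:.2f} cm/s")
```

The result is a crossing time of roughly two seconds, or about half a centimeter per second, matching the figure in the text.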

Take careful note that this speed is more than 10,000 times slower than the "100 meters per second" figure that some experts give when asked how fast a brain signal travels. Such an answer is very misleading, because it cites only the fastest speed a nerve signal can reach while traveling through the fastest tiny parts of the brain (myelinated axons), not the average speed of a brain signal as it passes through different types of brain tissue and many different synapses. It turns out that because of the "speed bump" of synaptic delays, the average speed of a nerve signal traveling through the brain should be about 20,000 times slower than "100 meters per second" -- a slowpoke speed of about half a centimeter per second. That's half the maximum speed at which a snail can move. If I had used the year 2000 estimate of the synaptic delay (2.29 milliseconds), I would have gotten a speed estimate for brain signals of only about .125 centimeters per second, which is one eighth the speed of a moving snail.

slow brain

This calculation is of the utmost relevance to the question of whether the brain is fast enough to account for extremely rapid human thinking and instantaneous memory retrieval.  Based on what I have discussed, it seems that signal transmission across regions of the brain should be very slow -- way too slow to account for very fast thinking and instantaneous recall and recognition.  

Many a human can calculate as fast as he or she can recall. For example, the Guinness world record web site tells us, "Scott Flansburg of Phoenix, Arizona, USA, correctly added a randomly selected two-digit number (38) to itself 36 times in 15 seconds without the use of a calculator on 27 April 2000 on the set of Guinness World Records in Wembley, UK."  Such speed cannot be explained as the activity of a brain in which signals literally move at a less than a snail's pace. 
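As a rough check on the pace that record implies, here is the arithmetic in Python (the reading that adding 38 to itself 36 times yields 38 × 37 = 1406 is my interpretation, not Guinness's):

```python
# Pace implied by the Guinness record quoted above.
start = 38
additions = 36
total = start + start * additions      # 38, plus 36 more additions of 38
seconds = 15
seconds_per_addition = seconds / additions   # roughly 0.42 s per addition
print(total, round(seconds_per_addition, 2))
```

Each two-digit addition had to be completed, on average, in well under half a second.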

To give another example, In 2004 Alexis Lemaire was able to calculate in his head the 13th root of this number:

85,877,066,894,718,045,602,549,144,850,158,599,202,771,247,748,960,878,023,151,390,314,284,284,465,842,798,373,290,242,826,571,823,153,045,030,300,932,591,615,405,929,429,773,640,895,967,991,430,381,763,526,613,357,308,674,592,650,724,521,841,103,664,923,661,204,223

In only 77 seconds, according to the BBC, Lemaire was able to state that the number 2396232838850303, raised to the 13th power, equals the number above. Here we have calculation speed far beyond anything that could be possible if calculation is done by a brain in which signals travel at less than a snail's pace.
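For comparison, a computer extracts an exact integer 13th root almost instantly. A generic binary-search sketch over Python's arbitrary-precision integers (not tied to Lemaire's particular number) looks like this:

```python
def integer_root(n: int, k: int) -> int:
    """Return the exact integer k-th root of n when n is a perfect
    k-th power (otherwise the smallest x with x**k >= n)."""
    lo, hi = 0, 1
    while hi ** k < n:        # find an upper bound by doubling
        hi *= 2
    while lo < hi:            # binary search for the root
        mid = (lo + hi) // 2
        if mid ** k < n:
            lo = mid + 1
        else:
            hi = mid
    return lo
```

For a perfect 13th power, the search needs only a few dozen big-integer comparisons.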

In this matter it seems our neuroscientists have acted as if they were afraid to put two and two together. They have measured the speed of brain signal transmission in axons, dendrites and synapses. But I find a curious avoidance in the neuroscience literature of the basic topic of the average time it should take a signal to travel from one region of the brain to another. It's as if our neuroscientists are afraid to do the math that might lead them to the conclusion that signals cannot travel from one brain region to another nearby region at a rate of more than an inch a second. For if they were to do such math, their claim that brains are the source of our thinking and recall would be debunked.

Echoing part of what I have said here, a textbook says "the cumulative synaptic delay may exceed the propagation time along the axons." But why aren't scientists more explicit, by telling us that this cumulative synaptic delay will actually exceed the propagation time along the axons by a factor of more than 1000, leading to "snail's pace" brain signals? Another source vaguely tells us that "cumulative synaptic delay would affect the speed of information processing at every level of cognitive complexity" without mentioning what a crippling effect this would be if our brains were doing thinking and recall. 

I may note that whenever a neuroscientist answers a question such as "how fast do brain signals travel?" by mentioning only the fastest rate at which a brain signal can travel through the fastest little parts of the brain (through a myelinated axon), as neuroscientists typically do, the answer is either deceptive or very clumsy. It's like answering the question "how fast can you travel across Manhattan?" by citing the maximum speed limit on any Manhattan cross-street such as 42nd Street, without considering all the delays caused by traffic lights. Synaptic delays are comparable to traffic light delays, and they are a factor that must be included when realistically considering how fast a brain signal typically travels inside the brain.

It is interesting that both this 1979 scientific paper and this 2008 scientific paper estimate the number of synapses in the human cortex as being about a billion per cubic millimeter, which equals a trillion per cubic centimeter. This is about 13 times greater than the 77 billion per cubic centimeter figure I was using above. The more synapses, the more speed bumps, and the slower the brain signal. If I had done the brain speed calculation specifically for cortex tissue (the supposed center of higher thought), the calculation would have come up with a brain signal speed very much slower than the half a centimeter per second result that was reached.

To sum up,  we have several gigantic reasons for thinking that brains must be too slow to account for instantaneous recall:

(1) Finding the exact little spot where a memory was stored would be like finding a needle in a haystack, given the lack of any indexing system or position coordinate system in the brain.
(2) Decoding stored memories from encoded neural states would take additional time that would make neural memory recall much less than instantaneous.
(3) The "snail's pace" speed of brain signals (greatly slowed by synaptic delays) would prevent an instantaneous recall of memories and stored information such as humans often have. 

The slowness of the brain is one of many neuroscience reasons for believing that the brain cannot be the storage place of our memories, and cannot be the source of our thinking and consciousness.  Human mentality must be primarily a psychic or spiritual or non-biological reality rather than a neural reality. 

I can imagine various ways in which a person could try to rebut some of the argumentation in this post. Someone could simply say that we know signals must travel very fast in a brain, because humans are able to recall or recognize things instantly. But we do not at all know that recognition or recall are actually effects produced by the brain, and we have good reasons for doubting that they are (such as the short lifetimes of synapse proteins, and the fact that the high noise levels in brains and synapses are incompatible with the flawless recall of very large bodies of memorized information that humans such as Hamlet actors display). So we cannot use the speed of recognition or recall to deduce the speed of brain signals.

Another way you could try to rebut this post would be to cite some expert who estimated how fast signals move about in a brain.  But further analysis would generally show that such an estimate was not derived from a calculation of all the low-level factors (such as synaptic delay) affecting the speed of brain signals, but was simply a calculation based on the assumption that brains must pass about signals at the speed at which humans recognize or recall things or respond to things.  We cannot use such circular reasoning or "begging the question" when considering this matter. The only intelligent way to calculate the speed of a brain signal is to do a calculation based on low-level things (such as synaptic delays) that we definitely know, rather than starting out making grand assumptions about the mind and brain that are unproven and actually discredited by the very low-level facts (such as the length of synaptic delays)  that should be examined. 

Although neuroscientists typically claim that synapses are where memories are stored in the brain, there are four ways in which the characteristics of synapses are telling us that thinking and memory is not brain-caused:

(1) Synapses show no signs of having stored information, and their main structural feature (the disorganized little blob or bag that is the synaptic knob or head) seems like pretty much the last type of structure we'd expect to see in something storing information for decades. 
(2) Synapses are unstable units undergoing spontaneous remodeling, and synapses consist of proteins with average lifetimes of only a few weeks, only a thousandth of the maximum length of time that humans store memories.
(3) Synapses are very noisy, so noisy that one expert tells us that a signal passing through a synapse "makes it across the synapse with a probability like one half, or even less," making synapses unsuitable as reliable transmitters of memory information that humans such as Wagnerian tenors can recall abundantly with 100% accuracy. Given such noise levels, which would seem to have the effect of rapidly extinguishing brain signals,  there would seem to be good reason for suspecting that it is effectively impossible for brain signals to travel more than a centimeter or an inch without vanishing or becoming mere tiny traces of their original strength. 
(4) The most common type of synapse is slow,  and although the synaptic delay in a single synapse is only about a millisecond,  when we calculate the cumulative synaptic delay we find that brain signals must be slower than a snail's pace, way too slow to explain instantaneous recall and fast thinking. 

In fact, if some designer of the human body had specifically designed something to tell us (by its characteristics) that our brains cannot be the source of our fast thinking and instantaneous memory, it's rather hard to imagine anything that would do a better job of telling us that than our signal-slowing, very noisy, unstable synapses.  Our synapses are telling us (by their characteristics) that thinking and memory is not brain-caused, but our neuroscientists (trapped in ideological enclaves of dogma and reigning speech customs) aren't listening to what our synapses are telling us. 

Postscript: I may note that you do not get a much faster estimate for the speed of brain signals if you calculate the speed from one neuron to the nearest neuron, rather than the speed through a cubic centimeter. The speed is the same snail's pace I have calculated, because the signal will always have to pass through synapses that are the dominant slowing factor. 

There is an entirely different method you could use to calculate the speed of signals inside the brain, using not estimates of the number of synapses per cubic centimeter, but instead the average distance between neurons. This paper mentions an average distance of about 26 micrometers between neurons in a rat cortex, and it says, "we believe that the parameter of 26 µm [micrometers] average distance between neurons is also a valid assumption in the human brain." I assume that by "average distance between neurons" this source means the average distance between two adjacent neurons. Below are some calculation figures that we get if we use this average distance figure, and we use a synaptic delay estimate that is about the average of the .5 millisecond and 2.29 millisecond estimates quoted above.

Average distance between neurons: 26 micrometers
This distance in centimeters: 0.0026
Synaptic delay in milliseconds: 1
Time needed to cross the distance above (in seconds), considering only the synaptic delay: .001
Total distance a brain signal could traverse in a second: 1000 × 0.0026 centimeters = 2.6 centimeters
Signal speed between adjacent neurons: 2.6 centimeters per second
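The arithmetic of this second method can be verified in a few lines of Python (the 26-micrometer spacing and 1-millisecond delay are the figures quoted above):

```python
# Second estimate: brain signal speed from average neuron spacing
# and synaptic delay, ignoring all other (faster) travel time.
distance_um = 26                       # average distance between adjacent neurons
distance_cm = distance_um * 1e-4       # 26 micrometers = 0.0026 centimeters
delay_s = 1e-3                         # ~1 millisecond synaptic delay
speed_cm_per_s = distance_cm / delay_s # distance covered per synaptic delay
print(speed_cm_per_s)                  # 2.6 centimeters per second
```

Since the synaptic delay dominates, the neuron-to-neuron speed is simply the spacing divided by the delay.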

Using this method, we get a result in the same ballpark as the result calculated by my first method.  The first method found that brain signals travel at a rate of about .5 centimeters per second, and this method finds that brain signals travel at about 2.6 centimeters per second, which is about an inch per second.  Either way, this speed is way too slow to account for instantaneous recall and very rapid thinking. 

A most-realistic estimate of brain signal speed would also take into account two other factors ignored in the calculations above (and also ignored by neuroscientists when discussing the speed of brain signals):
(1) The noise in synapses, and the fact that in the cortex, signal transmission across synapses is highly unreliable. A scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower."  Considered over a large section of brain tissue, this unreliability would be equivalent to a big additional slowing factor, and might well lead to speed estimates much lower than I have made here. 
(2) Synaptic fatigue, a temporary inability of the head or vesicle of a synapse to send a signal, because of a depletion of neurotransmitters. Referring to synaptic fatigue, one paper states the following:

By contrast, following neurotransmission, synaptic vesicle membranes are internalized within seconds, and the recycled synaptic vesicles can be reloaded with neurotransmitter within 1–2 minutes.

This paper mentions a much shorter "timescale of vesicle recovery" of 800 milliseconds, but even that would be a large slowing factor, making it all the more unlikely that brain signals inside the cortex can regularly travel at much more than about a centimeter per second. 
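For a rough sense of how unreliability alone would slow things, here is an illustrative Python sketch. Treating a release probability p as requiring, on average, 1/p attempts per synapse is my simplifying assumption; the 0.1 figure is the one quoted above:

```python
# Illustrative sketch: how synaptic unreliability might slow the ~2.6 cm/s
# estimate computed above. Assumes (a simplification) that a signal needs
# on average 1/p attempts to cross a synapse with release probability p.

BASE_SPEED_CM_S = 2.6        # from the spacing/synaptic-delay estimate
RELEASE_PROBABILITY = 0.1    # "can be as low as 0.1 or lower" in the cortex

attempts_per_synapse = 1.0 / RELEASE_PROBABILITY   # about 10 tries per crossing
speed_with_noise = BASE_SPEED_CM_S / attempts_per_synapse

print(round(speed_with_noise, 2))  # 0.26
```

On this simplifying assumption, unreliability alone would cut the estimate by a factor of ten, to roughly a quarter of a centimeter per second, before synaptic fatigue is even considered.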

There are two other factors I didn't mention in my original post, both tending to further slow the speed of a signal from one end of the brain to another:
(1) Tortuosity: the fact that the shortest available path between two different brain areas is typically a sinuous, snake-like path rather than a straight line (tortuosity is the technical term for this).
(2) Folding of cerebral tissue: the surface area of cerebral tissue is much larger than you might think from looking at the top of a head, and a brain signal can't travel in a straight line across these folds.   

So below is a list of all of the factors that must be considered when estimating the true speed of signals between two opposite areas of the brain:
(1) The speed of transmission through dendrites, which can be 200 or more times slower than the "100 meters per second" estimate based on transmission through axons.
(2) Synaptic delays, which end up being a huge slowing factor because so many synapses must be traversed.
(3) Synaptic unreliability or noise, the fact that a signal is often transmitted with only a 10% to 50% likelihood, a factor that is typically ignored but which has a huge impact on effective speed.
(4) Synaptic fatigue, the fact that a synapse will so often need a rest period after firing, a period that can be more than a minute.
(5) Tortuosity, the fact that nerve signals must travel through sinuous paths that are not straight lines.
(6) Folding of cortex tissue, a further slowing factor. 
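As a rough illustration of how these factors might compound, here is a Python sketch. The base speed is the 2.6 centimeters per second figure calculated above (which already reflects factors 1 and 2); every multiplier below is an assumed, order-of-magnitude placeholder of mine, not a measured value; the point is only that independent slowing factors multiply:

```python
# Illustrative sketch: compounding the remaining slowing factors.
# Every multiplier is an assumed placeholder, not a measured value;
# the point is that independent slowing factors multiply together.

base_speed_cm_s = 2.6    # from the spacing/synaptic-delay estimate above

slowing_factors = {
    "synaptic unreliability (p between 0.1 and 0.5)": 2.0,  # assumed
    "synaptic fatigue (recovery pauses)":             2.0,  # assumed
    "tortuosity (sinuous, non-straight paths)":       1.5,  # assumed
    "folding of cortex tissue":                       1.3,  # assumed
}

effective_speed = base_speed_cm_s
for name, factor in slowing_factors.items():
    effective_speed /= factor

print(round(effective_speed, 2))  # 0.33
```

Even with these deliberately mild placeholder multipliers, the effective speed falls to about a third of a centimeter per second.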

Every one of these factors is ignored by 95% of discussions of brain signal speed in the popular press. 

A 2011 LiveScience.com article is entitled "Speed of Brain Cell Chatter Clocked for the First Time." Buried within the article is a fact that corroborates what I have said in this post. The article discusses an experiment in which scientists clocked signal speed in mouse brains by tagging neurotransmitters with a fluorescent protein. The article reports, "On average, it takes about five seconds for the cells to collect up the neurotransmitters and this timeframe didn't vary much between a cell's different synapses." This is the only speed figure given by the article. Since the article was talking about a mouse brain (no larger than about two centimeters across), this is exactly the "snail's pace" signal transmission rate that I have claimed in this post, a pace of roughly one centimeter per second.