
Our future, our universe, and other weighty topics


Sunday, August 18, 2019

He Tries to Play Natural History Mr. Fix-it

The book “Lamarck's Revenge” by paleontologist Peter Ward tries to tell us that evolutionary theory is broken, but that there's a fix. Unfortunately, the fix described is no real fix at all, being hardly better than a tiny band-aid.

On page 43 the author tells us the following:

“'Nature makes no leap,' meaning that evolution took place slowly and gradually. This was Darwin's core belief. And yet that is not how the fossil record works. The fossil record shows more 'leaps' than not in species.”

On page 44 the author states the following:

“Charles Darwin, in edition after edition of his great masterpiece, railed against the fossil record. The problem was not his theory but the fossil record itself. Because of this, paleontology became an ever-greater embarrassment to the Keepers of Evolutionary Theory. By the 1940s and '50s this embarrassment only heightened. Yet data are data; it is the interpretation that changed. By the mid-twentieth century, the problem posed by fossils was so acute it could no longer be ignored: The fossil record, even with a century of collecting after Darwin, still did not support Darwinian views of how evolution took place. The greatest twentieth century paleontologist, George Gaylord Simpson, in midcentury had to admit to a reality of the fossil record: 'It remains true, as every paleontologist knows, that most new species, genera, and families, and that nearly all new categories above the level of families, appear in the record suddenly, and are not led up to by known, gradual, completely continuous transitional sequences.'”

So apparently the fossil record does not strongly support the claims of Darwin fans that new life forms appear mainly through slow, gradual evolution. Why have we not been told this important truth more often, and why have our authorities so often tried to insinuate that the opposite was true? An example is that during the Ordovician period, the number of marine animals in one classification category (Family, the category above Genus) roughly tripled (according to this scientific paper). But such an "information explosion" is dwarfed by the organization bonanza that occurred during the Cambrian Explosion, in which almost all animal phyla appeared rather suddenly -- what we may call a gigantic complexity windfall.


information explosion

Ward proposes a fix for the shortcomings of Darwinism – kind of a moldy old fix. He proposes that we go back and resurrect some of the ideas of the biologist Jean-Baptiste Lamarck, who died in 1829. Ward's zeal on this matter is immoderate, for on page 111 he refers to a critic of Darwinism and states, “The Deity he worships should be Lamarck, not God.” Many have accused evolutionary biologists of kind of putting Darwin on some pedestal of adulation, but here we seem to have this type of worshipful attitude directed towards a predecessor of Darwin.

Lamarck's most famous idea was that acquired characteristics can be inherited. He stated, "All that has been acquired, traced, or changed, in the physiology of individuals, during their life, is conserved...and transmitted to new individuals who are related to those who have undergone those changes." For more than a century, biologists told us that the biological theories of Lamarck are bunk. If the ideas of Lamarck are resurrected by biologists, this may show that biologists are very inconsistent thinkers who applaud in one decade what they condemned the previous decade.

Ward's attempt to patch up holes in Darwinism is based on epigenetics, which may offer some hope for a little bit of inheritance of acquired characteristics. In Chapter VII he discusses the huge problem of explaining the Cambrian Explosion, which he tells us is “when the major body plans of animals now on Earth appeared rapidly in the fossil record.” Ward promises us on page 111 that epigenetics can “explain away” the problem of the Cambrian Explosion. But he does nothing to fulfill this promise. On the same page he makes a statement that ends in absurdity:

The...criticism is that Darwinian mechanisms, most notably natural selection combined with slow, gene-by-gene mutations, can in no way produce at the apparent speed at which the Cambrian explosion was able to produce all the basic animal body plans in tens of millions of years or less. Yet the evidence of even faster evolutionary change is all around us. For example, the ways that weeds when invading a new environment can quickly change their shapes.”

The latter half of this statement is fatuous. The mystery of the Cambrian Explosion was how so many dramatic biological innovations (such as vision, hard shells and locomotion) and how so many new animal body plans were able to appear so quickly. There is no biological innovation at all (and no real evolution) when masses of weeds change their shapes, just as there is no evolution when flocks of birds change their shapes; and weeds aren't even animals. The weeds still have the same DNA  (the same genomes) that they had before they changed their shapes. Humans have never witnessed any type of natural biological innovation a thousandth as impressive as the Cambrian Explosion.

Ward then attempts to convince us that “four different epigenetic mechanisms presumably contributed to the great increase in both the kinds of species and the kinds of morphologies that distinguished them that together produced the Cambrian explosion” (page 113). By using the phrase “presumably contributed” he indicates he has no strong causal argument. To say something “contributed” to some effect is to make no strong causal claim at all. A million things may contribute to some effect without being anything close to an actual cause of that effect. For example, the mass of my body contributes to the overall gravitational pull that the Earth and its inhabitants exert on the moon, but it is not true that the mass of my body is even one millionth of the reason why the moon keeps orbiting the Earth. And each time you have a little charcoal barbecue outside, you are contributing to global warming, but that does not mean your activity is even a millionth of the cause of global warming. So when a scientist merely says that one thing “contributes” to something else, he is not making a strong causal claim at all. And when a scientist says that something “presumably contributed” to some effect, he is making a statement so featherweight and hesitant that it carries no real weight.

On pages 72-73 Ward has a section entitled “Summarizing Epigenetic Processes,” and that section lists three things:

  1. “Methylation,” meaning that “lengths of DNA can be deactivated.”
  2. “Modifications of gene expression...such as increasing the rate of protein production called for by the code or slowing it down or even turning it on or off.”
  3. “Reprogramming,” which he defines as some type of erasing, using the phrase “erase and erase again,” also using the phrase “reprogramming, or erasing.”

There is nothing constructive or creative in such epigenetic processes. They are simply little tweaks of existing proteins or genes, or cases of turning off or erasing parts of existing functionality. So epigenetics is useless in helping to explain how some vast burst of biological information and functional innovation (such as the Cambrian Explosion) could occur.

While trying to make it sound as if epigenetics had something to do with the Cambrian Explosion, Ward actually gives away that such an idea has not at all gained acceptance, for he states this on page 118: “From there the floodgates relating to epigenetics and the Cambrian explosion opened, yet none of this has made it into the textbooks thus far.” Of course it hasn't, because there's no substance to the idea that epigenetics can explain even a hundredth of the Cambrian Explosion. The Cambrian Explosion involved the relatively sudden appearance of almost all animal phyla, but Ward has failed to explain how epigenetics can explain the origin of even one of those phyla (or even a single appendage or organ or protein that appeared during the Cambrian Explosion). If the inheritance of acquired characteristics were true, it would do nothing to explain the appearance of novel biological innovations, never seen before, because the inheritance of acquired characteristics is the repetition of something some organisms already had, not the appearance of new traits, capabilities, or biological innovations.

So we have in Ward's book the strange situation of a book that mentions some big bleeding wounds in Darwinism, but then merely offers the tiniest band-aid as a fix, along with baseless boasts that sound like, “This fixes everything.”

A book like this seems like a complexity concealment. Everywhere living things show mountainous levels of organization, information and complexity, but you would never know that from reading Ward's book. He doesn't tell you that cells are so complex they are often compared to factories or cities. He doesn't tell you that in our bodies are more than 20,000 different types of protein molecules, each a different very complex arrangement of matter, each with the information complexity of a 50-line computer program, such proteins requiring a genome with an information complexity of 1000 books. He doesn't mention protein folding, one of the principal reasons why there is no credible explanation for the origin of proteins (the fact that functional proteins require folding, a very hard-to-achieve trick which can occur in fewer than 1 in 10^26 of the possible arrangements of the amino acids that make up a protein). In the index of the book we find no entries for "complexity," "organization," "order," or "information." Judging from its index, the book has only a passing reference to proteins, something that tells you nothing about them or how complex they are. The book doesn't even have an index entry for "cells," merely having an entry for "cellular differentiation." All that Ward tells you about the stratospheric level of order, organization and information in living things comes on page 97, where he says that "there is no really simple life," and that life is "composed of a great number of atoms arranged in intricate ways," a statement so vague and nebulous that it will go in one of the reader's ears and out the other.

Some arrangements are too complex to have appeared by chance

Wednesday, August 14, 2019

The Baloney in Salon's Story About Memory Infections

The major online web site www.salon.com (Salon) has put up a story on human memory with the very goofy-sounding title, “A protein in your brain behaves like a virus, infecting your cells with memories.” The story is speculative nonsense, and it seems to be based on a hogwash misstatement (though not one told by the writer of the story). The story tells us, “Viruses may actually be responsible for the ability to form memories,” a claim that not one in 100 neuroscientists has ever made.

The story mentions some research done by Jason Shepherd, associate professor of neurobiology at the University of Utah Medical School. Shepherd has done some experiments testing whether a protein called Arc (also known as Arc/Arg 3.1) is a crucial requirement for memory formation in mice. He does this by removing the gene for Arc in some mice, which are called “Arc knockout” mice, meaning that they don't have the gene for Arc, and therefore don't have the Arc protein that requires the Arc gene.

In the Salon.com story, we read the following claim by Shepherd: “ 'If you take the Arc gene out of mice,' Shepherd explains, 'they don’t remember anything.' ” Right next to this quote, there's a link to a 2006 scientific paper by 26 scientists, none of them Shepherd (although strangely the Salon story suggests that the paper is research by Shepherd and his colleagues). Unfortunately, the paper does not at all match the claim Shepherd has made, for the paper demonstrates very substantial memory and learning in exactly these “Arc knockout” mice that were deprived of the Arc gene and the Arc protein.

The Morris water maze test is a test of learning and memory in rodents. In the test a rodent is put at the center of a large circular tub of water, which may or may not have milk powder added to make the water opaque. Near one edge of the tub is a submerged platform that the rodent can jump on to escape the tub. A rodent can find out the location of the platform by exploring around. Once the rodent has used the platform to escape the tub, the rodent can be put back in the center of the tub, to see how well the rodent remembers the location of the little platform previously discovered. 

In the paper by the 26 scientists, 25 normal mice and 25 “Arc knockout” mice were tested on the Morris water maze. The “Arc knockout” mice remembered almost as well as the normal mice. The graphs below show the performance difference between the normal mice and the mutant “Arc knockout” mice, which was only minor. The authors state that in another version of the Morris water maze test of learning and memory, there was no difference between the performance of the “Arc knockout” mice and normal mice. We are told, “No differences were observed in the cue version of the task.”

Unimpressive “water maze” results from the paper by 26 scientists, showing no big effect from knocking out the Arc gene 

What title did the 26 scientists give their paper? They gave it the title, “Arc/Arg3.1 Is Essential for the Consolidation of Synaptic Plasticity and Memories.” But to the contrary, the Morris water maze part of the experiments shows exactly the opposite: that you can get rid of Arc/Arg3.1 in mutant “knockout” mice, and that they will still remember well, with good consolidation of memories (the test involved memory over several days, which requires consolidation of learned memories). The 26 authors have given us a very misleading title to their paper, suggesting it showed something, when it showed exactly the opposite.

A 2018 paper by 10 scientists testing exactly the same thing (whether “Arc knockout” mice do worse on the Morris water maze) found the same results as the paper by the 26 authors: that there were only minor differences in the performance of the mutant “Arc knockout” mice when they were tested with the Morris water maze test. In fact that paper specifically states, “Deletion of Arc/Arg3.1 in Adult Mice Impairs Spatial Memory but Not Learning,” which very much contradicts Shepherd's claim that “If you take the Arc gene out of mice, they don’t remember anything.”

Unimpressive “water maze” results from the 2018 paper by 10 scientists, showing no big effect from knocking out the Arc gene

So why did scientist Shepherd make the false claim that “If you take the Arc gene out of mice, they don’t remember anything,” a claim disproved by the paper by the 26 scientists (despite its misleading title contradicting its own findings), and also disproved by the 2018 paper? Shepherd has some explaining to do.

In the paper here, Shepherd and his co-authors state, “The neuronal gene Arc is essential for long-lasting information storage in the mammalian brain,” a claim that is inconsistent with the experimental results just described in two papers, which show long-lasting information retention in mice that had the Arc gene removed. In the paper Shepherd refers in a matter-of-fact manner to “information storage in the mammalian brain,” but indicates he doesn't have any real handle on how such a thing could work, by confessing that “we still lack a detailed molecular and cellular understanding of the processes involved,” which is the kind of thing people say when they don't have a good scientific basis for believing something.

We have seen here two cases of the misuse of the word "essential" in scientific papers.  I suspect that the word is being misused very frequently in biological literature, in cases where scientists say one thing is essential for something else when the second thing can exist quite substantially without the first.  Similarly, a scientific paper found that the phrase "necessary and sufficient" is being massively misused in biology papers that failed to find a "necessary and sufficient" relationship.  To say that one thing is necessary and sufficient for some other thing means that the second thing never occurs without the first, and the first thing (by itself) always produces the second.  An example of the correct use of the phrase is "permanent cessation of blood flow is both necessary and sufficient for death." But it seems biologists have massively misused this phrase, using it for cases in which no such "necessary and sufficient" relationship exists.  A preposterous example is the paper here which tells us in its title that two gut microbes (microorganisms too small to see) are "necessary and sufficient" for cognition in flies -- which is literally the claim that tiny microbes are all that a fly needs to think or understand. 
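The logic of "necessary and sufficient" discussed above can be made precise. The following sketch (using entirely hypothetical observation data, just to illustrate the definitions) checks both conditions over a set of observed (X, Y) cases:

```python
# "X is necessary for Y": Y never occurs without X.
# "X is sufficient for Y": whenever X occurs, Y occurs.
# X is "necessary and sufficient" for Y only when BOTH conditions hold.

def necessary(observations):
    """True if Y never occurs without X (every Y-case is an X-case)."""
    return all(x for (x, y) in observations if y)

def sufficient(observations):
    """True if Y always occurs when X occurs (every X-case is a Y-case)."""
    return all(y for (x, y) in observations if x)

# Hypothetical observations as (X present, Y present) pairs.
# Example pairing: X = "permanent cessation of blood flow", Y = "death".
obs = [(True, True), (False, False), (True, True), (False, False)]
print(necessary(obs), sufficient(obs))  # both conditions hold for this data

# A single case of Y occurring without X destroys necessity:
obs2 = obs + [(False, True)]
print(necessary(obs2))  # False
```

A paper claiming a "necessary and sufficient" relationship is claiming that both checks pass over all relevant cases, which is why the phrase is so easy to misuse.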

Sunday, August 11, 2019

13 Ways Experts Can Evade and Stifle Unwanted Observations

Experts sometimes portray themselves as people who do not hesitate to follow the evidence wherever it points. But in reality many a modern expert shows a strong tendency to evade and stifle evidence that conflicts with his cherished dogmas, his entrenched ideas about the way reality works. There are quite a few ways in which an expert may evade or stifle some observational result that he doesn't want to accept. Let's look at some of these ways.

Way #1: Just “File-Drawer” the Unwanted Result

The term “file drawer effect" refers to the existence of unpublished scientific studies. After running a study getting an undesired result, a scientist may decide to not even write up the study as a scientific paper. It is believed that the file drawers of scientists contain data on many studies that were never written up as papers, and never published in scientific journals.

If a scientist gets a negative result failing to show an effect he was looking for, the scientist may never write up the experiment as a paper, and he may justify this by saying that it would be hard to get the paper published, because scientific journals don't like to publish negative results. Or, if the results seem to conflict with existing dogmas, the scientist may think to himself that it would be hard to get a paper with such a result published, given existing prejudices. A study making surveys of scientists found that 63% of US psychologists surveyed and 63% of evolution researchers surveyed confessed to "not reporting studies or variables that failed to reach statistical significance (e.g. p ≤0.05) or some other desired statistical threshold." This is quite the intellectual sin, because it is just as important to report negative results as it is to report positive results.

Way #2: Prune the Data to Remove the Unwanted Result

When a scientist gets an unwanted result in an experiment or a meta-analysis of previously published papers, the scientist might try to get a better result by removing some of the data items collected, or removing some of the scientific papers included in the meta-analysis. If the scientist collected the data himself, he may arbitrarily remove any extreme data item, on the grounds that it was an “outlier.” Or he may apply some filter criteria that gets rid of certain data items. For example, if a study with 10,000 subjects is analyzing the safety of a drug, and 100 of them died not long after taking the drug, some of those hundred troubling data points might be removed by introducing some inclusion criteria which excludes most of those who died.

Or if a scientist is doing a meta-analysis of 100 studies of the effectiveness of some medical technique not popular with scientists (for example, acupuncture or homeopathy), and the result ends up showing an effectiveness for the technique, this undesired result can be modified by changing the inclusion criteria used to decide which studies will be included in the meta-analysis. Similarly, data items may be yanked out of a scientific paper, to help the paper achieve something that may be reported as "statistically significant." A study making surveys of scientists found that 38% of US psychologists surveyed and 23.9% of evolution researchers surveyed confessed to "deciding to exclude data points after first checking the impact on statistical significance."

Way #3: Keep on Collecting More Data Until You Get a Result Less Unwanted

If a scientist collects a certain amount of observational data, and he is left with an unwanted result, another technique he can use to get a more desirable result is to continue collecting more data until a more desirable result is achieved. For example, let us imagine a scientist tries to do a study showing that lots of television watching increases your chance of sudden cardiac death. Suppose the scientist gets data on 10,000 years of living of 1000 subjects, but does not find the effect he is looking for. The scientist can just keep collecting more data, such as trying to get data on 20,000 years of living of 2000 subjects. As soon as the desired result appears, the scientist can then stop collecting data, and write up his paper for publication. It is easy to see how such a technique can distort reality. A study making surveys of scientists found that 55.9% of US psychologists surveyed and 50.7% of evolution researchers surveyed confessed to "collecting more data after inspecting whether the results are statistically significant."
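The distortion produced by this "test, then collect more data, then test again" practice (often called optional stopping) can be demonstrated with a short simulation. This is only a sketch: it uses a crude |t| > 2 threshold as a stand-in for a proper p < 0.05 t-test, and the exact rates depend on the stopping rule and sample sizes chosen:

```python
import random
import statistics

def significant(data, crit=2.0):
    """Crude two-sided one-sample test: |t| > ~2 roughly approximates
    p < 0.05 for samples of 20 or more."""
    n = len(data)
    t = statistics.mean(data) / (statistics.stdev(data) / n ** 0.5)
    return abs(t) > crit

random.seed(1)
trials = 2000
fixed_hits = peeking_hits = 0
for _ in range(trials):
    # Null scenario: there is NO real effect (mean is truly zero).
    data = [random.gauss(0, 1) for _ in range(20)]
    if significant(data):
        fixed_hits += 1
    # Optional stopping: if not significant, collect 10 more subjects
    # and test again, repeating up to a ceiling of 100 subjects.
    while not significant(data) and len(data) < 100:
        data += [random.gauss(0, 1) for _ in range(10)]
    if significant(data):
        peeking_hits += 1

print(f"fixed-sample false-positive rate: {fixed_hits / trials:.3f}")
print(f"optional-stopping rate:           {peeking_hits / trials:.3f}")
```

Even though no real effect exists in either scenario, the optional-stopping rate comes out well above the fixed-sample rate, because the scientist gets many chances for noise to cross the significance threshold.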

Way #4: Just Switch the Hypothesis If the Original Hypothesis Is Not Supported by the Experiment

A very common technique is what is called HARKing, which stands for Hypothesizing After Results are Known. A scientific experiment is supposed to state a hypothesis, gather data, and then analyze whether the data supports the hypothesis. But imagine you are some scientist gathering data that does not support the original hypothesis. Rather than writing this up as a result against the original hypothesis, you can come up with some other hypothesis after the data has been collected, and then write up the scientific experiment describing it as a study that confirms the new hypothesis. You might avoid even mentioning that the experiment had failed to confirm the original hypothesis. One problem with this is that it has the effect of stifling or covering up a negative result which should have been the main news item coming from the scientific paper. The bad effects of Hypothesizing After Results Are Known are described here.

There is a general technique that can be used to prevent or discourage the four effects just described. The technique is for institutions to arrange for pre-registered studies with guaranteed publication. Under such a method, a scientist has to publish in writing (before collecting data) the hypothesis that he is testing, and specify in detail the exact experimental method that will be used, including data inclusion criteria. Then regardless of the result achieved, the result must be written up and published in a journal, in a way true to the original experimental design specification. It is widely recognized that such pre-registration and guaranteed publication would result in more reliable and robust scientific papers, with a reduction in misleading publication bias and file-drawer effects. But the scientific community has made almost no effort to use such a methodology.

A study making surveys of scientists found that 27% of US psychologists surveyed and 54% of evolution researchers surveyed confessed to "reporting an unexpected finding as having been predicted from the start." Should this cause us to suspect that evolution researchers are more prone to lie than psychology researchers?

Way #5: Mask the Undesired Result with a Paper Title Not Mentioning It

Scientists write the titles of their own papers, and such titles can often mask or hide undesired results. An example was a scientific paper describing an attempt to look for long-lived proteins in the synapses of the brain, proteins that might help to explain human memories that can last for 50 years. As discussed here, the paper found no real evidence for any such thing.

The paper found the following:

  • Studying thousands of brain proteins, the study found that virtually all proteins in brains are very short-lived, with half-lives of less than a week.
  • Table 2 of the paper gives specific half-life estimates for the most long-lasting brain proteins, and in this table only 10 out of thousands of brain proteins had half-lives of 10 days or longer.
  • Of the proteins whose half-life is estimated in Table 2, only one of them has a half-life of longer than 30 days, that protein having a half-life of only 32 days.
  • A graph in the paper indicates that none of the synapse proteins had a half-life of more than 35 days.

So what was the title of this paper finding the undesired result of no evidence for synapse proteins with a lifetime of years? It was this: “Identification of long-lived synaptic proteins by proteomic analysis of synaptosome protein turnover.” The paper thereby masked and stifled the actual observational result, that no truly long-lived synaptic proteins were found.
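The arithmetic behind this objection is simple exponential decay. Assuming the standard turnover model that half-life figures imply, even the longest-lived synaptic protein reported in the paper's Table 2 (a half-life of about 32 days) would be almost entirely replaced within a year, let alone 50 years:

```python
def fraction_remaining(days, half_life_days):
    """Fraction of the original molecules still present after `days`,
    assuming simple exponential decay with the given half-life."""
    return 0.5 ** (days / half_life_days)

# Half-life of ~32 days, the longest synaptic value in the paper's Table 2:
for years in (1, 5, 50):
    f = fraction_remaining(years * 365, 32)
    print(f"after {years:2d} year(s): {f:.3e} of the original molecules remain")
```

After one year the surviving fraction is already a tiny fraction of one percent, and after 50 years it is effectively zero, which is why short protein lifetimes pose a problem for the claim that synapses store decades-old memories.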

Way #6: Get Your Study's Press Release to Mask the Undesired Result with a Headline Not Mentioning It

How a scientific paper is reported is determined not just by the title used for the paper but the headline used in the press release announcing the paper. Such press releases often mask or hide undesired observational results. A good example is the press release for the scientific paper just discussed, which had the very misleading title, “In Mice, Long-Lasting Brain Proteins Offer Clues to How Memories Last a Lifetime.” This title masked and stifled the actual observational result, that no evidence of any synaptic proteins lasting years was found.

A scientist may claim that he has nothing to do with the headline used by the university press office when it issues a press release describing the scientist's study. But I suspect the typical situation is that such a press release is sent to the scientist before it is released, with the scientist having the opportunity to object if anything in the press release is inaccurate or misleading.

Way #7: Bury the Undesired Result Outside of the Abstract, Placing It Within a Sea of Details

If a scientist gets an undesired result, one way to kind of stifle it is to not state the result in the abstract of the paper, but to place it somewhere deep inside the paper, surrounded by paragraphs and paragraphs of fine details. Only the most diligent readers of the paper will notice the result. Since most scientific papers are hidden behind paywalls, with only the paper abstract being easily discovered through a web search, the result will be that very few people will ever read about the undesired result.

Way #8: State the Undesired Result Using Jargon That Almost No One Can Understand

Yet another way for a scientist to stifle an undesired result is to state the result using some fancy jargon or mathematical expression that almost no one will be able to understand. Such a thing is easy to do, just by using “science speak” that is like some foreign language to the average reader. For example, if the paper has found that some popular pill increases the chance of people dying, such a result can be announced by saying that the pill "produces a statistically significant prognostic detriment," a phrase a reader will be unlikely to understand. 

Way #9: Attack Someone's Methodology Producing an Undesired Result

The previous ways involved how a scientist may stifle or evade a result that came up in his own activity. But there are also ways to stifle or evade results produced by other people. Let's look at some of those.

One way is to attack the methodology of a study that produced a result you don't want to accept. For example, if an ESP experimenter got a result indicating ESP exists, you can complain about a lack of screens between the test subject and the observer. If such screens are used, you can complain that the observer and the experimenter were not in separate rooms. If they were in separate rooms, you can complain that a bit of noise might have traveled between the rooms. If the rooms were far-separated, you can complain that maybe the test subject could have peeked through a keyhole or a transom, or left a secret video camera somewhere.

Although there is nothing wrong per se in attacking the methodology of an experiment, given that many scientific studies have a very poor methodology, it should be noted that methodological criticisms often involve hypocrisy. Scientist X will often complain about a lack of some thing in Scientist Y's work, when Scientist X does not include that thing in his own experiments.

Way #10: Try to Undermine Confidence in Those Reporting the Unwanted Observations

An expert may use several different techniques to undermine confidence in an observer who has reported an unwanted observation. One commonly used technique is what is called gaslighting. It involves making one or more insinuations that the observer is suffering from some psychological problem, credibility problem or observational problem. 


shaming

Way #11: Speak As If the Observations Were Not Made

One of the most common techniques an expert may use to stifle an unwanted observational result is the technique of simply speaking as if some type of observational result was never made, without trying to discredit or even mention the undesired observations. For example, it is not uncommon for neuroscientists to say that there is no evidence that consciousness can continue after signs of brain activity have ceased, despite very strong evidence that exactly such a thing happens during many near-death experiences. To give another example, scientists often simply claim there is no evidence for paranormal phenomena, despite the very strong laboratory evidence for ESP that has been collected over many years.


Way #12: "Speculate Away" the Undesired Observations


Faced with an undesired result, a scientist will often resort to elaborate speculations designed to explain away the result.  Here are some examples: 

(1) Faced with nothing but negative results from decades of searches for extraterrestrial radio signals,  scientists have resorted to speculations (such as the zoo hypothesis) designed to allow them to continue to believe that our galaxy is filled with intelligent life. 
(2) Faced with observational results suggesting that scientists don't well understand the composition of the universe or the dynamics of stellar movements, scientists invented the speculations of dark energy and dark matter,  rather than confess their ignorance about the composition of the universe  and the dynamics of stellar movements. 
(3) Faced with evidence that the proteins in synapses are too short-lived for synapses to be a storage place for memories, scientists created very elaborate chemical speculations designed to explain away this result.
(4) Faced with undesired evidence from near-death experiences that human consciousness can keep operating after brains have shut down, scientists created various elaborate speculations designed to explain away such evidence, including speculations of secret caches of hallucinogens in the brain that have not yet been discovered. 

Way #13: Just Avoid Mentioning the Undesired Observations, and Make No Mention of Them in Textbooks or Authoritative Subject Reviews

The final way is the simplest way for experts to stifle unwanted observations. They can simply avoid mentioning the observational results in places where they should be discussed, places like textbooks and review articles written by scientists.  This way is used so massively that certain science information sources (such as the journal Nature and typical textbooks) effectively act like "filter bubbles," preventing their readers from learning about many important facts that may conflict with their worldviews. 

Wednesday, August 7, 2019

30 Reasons for Doubting the “Brains Store Memories” Dogma

The claim that memories are stored in brains is merely a speech custom of scientists, not at all something that scientists have established through observations or experiments.  This claim is not at all intuitively obvious, or even a claim suggested to us by our own bodies. When I use my hand to retrieve an apple from a table, my body does two things to tell me that my hand is involved in this retrieval. The first thing is that I can see my hand touching the apple, and the second is that I can feel my fingers touching the apple. But when I retrieve a memory such as a memory of my youth, my body does nothing at all to suggest to me that my brain has achieved this retrieval. I do not at all have a head sensation that is correlated with such a memory retrieval. 

We often read news stories (triggered by scientific papers) trying to suggest or insinuate that memories are stored in brains. But such reports are not very convincing, for they typically suffer from one or more of the many flaws discussed at great length in my posts "The Building Blocks of Bad Science Literature" and "Why So Much of Neuroscience News Is Unreliable." 

I will now discuss 30 reasons for doubting the claim that human memories are stored in the brain. In this post when I refer to memories, I mean episodic and conceptual memories, the type of memories you can recall while being motionless.  I am not referring to anything along the lines of what is sometimes called "muscle memory," such as muscular skills that you learn when you learn how to ride a bicycle. 

Reason #1: Synapses (the reputed storage place of memories) and dendritic spines are made up of proteins that are very short-lived, having average lifetimes only about a thousandth of the maximum length of time humans can retain memories.

The modern neuroscientist typically claims that synapses are where memories are stored. A synapse has no characteristic at all that should cause anyone to suspect that a memory is stored in it. If neuroscientists suggest synapses are where memories are stored, it's only because they have no other more plausible candidate in the brain for a substrate of memory.

We do know of several important characteristics that synapses have which should cause us to reject the claim that memories are stored in synapses. One is that synapses are made up of proteins that are very short-lived. The average lifetime of a synapse protein is only a few weeks. There are more than a thousand proteins used in synapses, and fewer than five have been determined to have a lifetime of more than a month. It has not been proven that any synapse protein has a lifetime of more than a year. But old people can remember things for 50 years or longer. Astonishingly, the maximum length of time that people can remember things is about 1000 times longer than the average lifetime of a synapse protein. If the proteins in synapses have short lifetimes, it seems impossible that synapses could be storing memories for 50 years. Similarly, you couldn't store your childhood recollections for 50 years if you wrote them on maple leaves that crumble away after a few months.
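The "about 1000 times" figure above is simple arithmetic. Here is a sketch of it, taking "a few weeks" as 2.5 weeks purely for illustration:

```python
# Rough arithmetic behind the "about 1000 times" claim above.
# The 2.5-week average protein lifetime is an illustrative stand-in
# for "a few weeks"; 50 years is the memory-retention span cited above.
memory_retention_weeks = 50 * 52   # 50 years expressed in weeks
protein_lifetime_weeks = 2.5       # assumed "few weeks" average lifetime
ratio = memory_retention_weeks / protein_lifetime_weeks
print(ratio)  # -> 1040.0, roughly a thousandfold mismatch
```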

Dendritic spines (structures that synapses can connect to) are made up of proteins just as short-lived as the proteins in synapses.  The 2018 study here precisely measured the lifetimes of more than 3000 brain proteins from all over the brain, and found not a single one with a lifetime of more than 75 days (figure 2 shows the average protein lifetime was only 11 days). 

Reason #2: Synapses and dendritic spines do not have anything like the lifetimes they would need to have to account for 50-year-old memories.

The previous discussion was about the lifetime of proteins inside synapses and dendritic spines. Now let us consider: how long do synapses and dendritic spines last as particular structural units?  Synapses connect to bump-like dendrite protrusions called dendritic spines. But those spines have lifetimes of less than 2 years.  

Dendritic spines last no more than about a month in the hippocampus, and less than two years in the cortex:

  • This study found that dendritic spines in the hippocampus last for only about 30 days.
  • This study found that dendritic spines in the hippocampus have a turnover of about 40% every 4 days.
  • This 2002 study found that a subgroup of dendritic spines in the cortex of mouse brains (the more long-lasting subgroup) have a half-life of only 120 days.
  • A paper on dendritic spines in the neocortex says, "Spines that appear and persist are rare."
  • While a 2009 paper tried to insinuate a link between dendritic spines and memory, its data showed how unstable dendritic spines are. Speaking of dendritic spines in the cortex, the paper found that "most daily formed spines have an average lifetime of ~1.5 days and a small fraction have an average lifetime of ~1–2 months," and told us that the fraction of dendritic spines lasting for more than a year was smaller than 1 percent.
  • A 2018 paper has a graph showing a 5-day "survival fraction" of only about 30% for dendritic spines in the cortex.
  • A 2014 paper found that only 3% of new spines in the cortex persist for more than 22 days.
  • A 2007 paper says, "Most spines that appear in adult animals are transient, and the addition of stable spines and synapses is rare."
  • A 2016 paper found a dendritic spine turnover rate in the neocortex of 4% every 2 days.
  • A 2018 paper found only about 30% of new and existing dendritic spines in the cortex remaining after 16 days (Figure 4 in the paper). 

Synapses also don't last long enough to store memories lasting decades. Below is a quote from a scientific paper:

"A quantitative value has been attached to the synaptic turnover rate by Stettler et al (2006), who examined the appearance and disappearance of axonal boutons in the intact visual cortex in monkeys...and found the turnover rate to be 7% per week which would give the average synapse a lifetime of a little over 3 months."

You can read Stettler's paper here. You can google "synaptic turnover rate" for more information. The paper here says the half-life of synapses is "from days to months." 

So we have two types of instability in regard to synapses and dendritic spines. One is the internal instability caused by rapid protein turnover inside synapses and dendritic spines, and the other is the larger structural instability caused by the fact that individual synapses (and the dendritic spines they may connect to) don't last for more than a few years. Either one of these facts is sufficient to rule out the claim that synapses or dendritic spines are storing memories for decades. Similarly, you would know that someone could not store information for decades if he wrote it on fallen maple leaves that he placed on a picnic table in his back yard. The first reason would be that the leaves would crumble after a year, and the second reason would be that the wind would blow away the leaves at least once a month.

Reason #3: Although memory formation is claimed to be caused by “synapse strengthening,” humans can form new memories instantly, far faster than the much longer time needed for synapses to strengthen.

How is it that a synapse could store a memory? Our neuroscientists never give a precise answer to this question, and never even offer an exact speculation as to precisely how synapses could possibly store an episodic experience such as, say, your experience of hitting a home run in the big ball game. All that neuroscientists do is make vague claims in this regard, typically the claim that memories are stored when synapses are strengthened. This idea makes no sense. We know that when information is stored, symbolic tokens are written. There is no proven case of any information ever being stored by an act of mere strengthening.

Moreover, we know of an exact reason why memories cannot be stored through some act of synapse strengthening. It is the fact that synapse strengthening requires a synthesis of proteins, which takes minutes. But it is not at all true that a person requires minutes to form a new memory. If someone shoots a person standing next to you, you will instantly form a permanent new memory that you will never forget. There is no neuroscience theory of the formation of memories in the brain that is consistent with the fact that humans can form permanent new memories instantly.

Reason #4: People who have hemispherectomy operations (in which half of the brain is surgically removed) seem to have relatively small loss of memories.

Hemispherectomy is a surgical procedure in which half of the brain is removed. The procedure can be performed on young children suffering from seizures, with surprisingly little negative impact. But this scientific paper also tells us on page 3 that “Although most hemispherectomies are performed on young children, adults are also operated on with remarkable success.” Very interestingly, we are told that when half of their brains are removed in these operations, “most patients, even adults, do not seem to lose their long-term memory such as episodic (autobiographic) memories.” The paper tells us that Dandy, Bell and Karnosh “stated that their patient's memory seemed unimpaired after hemispherectomy,” the removal of half of their brains. We are also told that Vining and others “were surprised by the apparent retention of memory after the removal of the left or the right hemisphere of their patients.” These facts are entirely inconsistent with claims that memories are stored in the brain.

Reason #5: Humans such as Wagnerian tenors and Hamlet actors are able to perfectly recall very large bodies of memorized information, but this should not be possible if recall occurs using synapses that transmit signals with a low reliability of only 10% to 50%.

There are certain physical facts that have gigantic implications, and it is our duty to fully contemplate the implications of such facts. For example, the fact that carbon dioxide levels in the atmosphere are rising each year is a fact with gigantic implications that we must pay attention to. One simple fact with gigantic implications is the fact that synapses inside the human cortex do not reliably transmit signals. It has been found that synapses inside the cortex transmit brain signals with a reliability of between 10% and 50%. A scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower."

This little fact has the most gigantic implications. It means that a memory cannot be transferred reliably from the sensory areas of the brain or the hippocampus to the cortex for storage, for in passing through the cortex the brain signals would all be distorted and mangled by the low signal reliability. A memory would not have to pass through only one cortex synapse before being stored in the cortex; the signal would have to pass through many synapses, each with a transmission probability of between 10% and 50%. It would seem that only a tiny trace of the original signal would be left by the time the memory was stored. Similarly, if you recalled memories using your brain, every attempted recall would suffer an equally strong signal-mangling effect. When your brain tried to pass recalled information through multiple synapses, each with a low probability of successful transmission, no more than a trace of the stored memory would be recalled.

But we know that such a thing is not happening during many cases of human memory recall. While it is possible for you to remember only a trace of some distant memory, there are very many people who recall with 100% accuracy very large bodies of memorized information. For example, most of the time an actor goes on stage to play Hamlet, he successfully recalls with 100% accuracy 1476 lines of dialog (and a similar feat occurs when Wagnerian tenors sing roles such as Siegfried and Tristan). Even greater recall is shown by some Muslims who can recall all 6000+ verses of their holy book. Such feats of recall should be impossible if we are using our brains to remember things. The information would have to pass many times through synapses with poor reliability (between 10% and 50%), which would prevent the accurate recall of very large bodies of information.
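The arithmetic behind this argument can be sketched as follows. The per-synapse probabilities come from the 10%-to-50% range quoted above; the chain length of ten synapses is an illustrative assumption, not a measurement:

```python
# Illustrative calculation: probability that a signal survives a chain
# of synapses, treating each crossing as independent. The chain length
# of 10 is an assumption for illustration.
def chain_success_probability(p_per_synapse, n_synapses):
    # Each synapse transmits with probability p; all must succeed.
    return p_per_synapse ** n_synapses

best_case = chain_success_probability(0.5, 10)   # upper end of the 10%-50% range
worst_case = chain_success_probability(0.1, 10)  # lower end of the range
print(best_case)   # ~0.001, i.e. about 0.1%
print(worst_case)  # ~1e-10
```

Even under the most charitable assumption (50% reliability), fewer than one signal in a thousand would survive a ten-synapse chain intact.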

Reason #6: A brain storage of memories would require massive amounts of information encoding, but no one has ever found any sign of encoded information in the brain other than the genetic information in DNA.

Scientists have published innumerable papers on memory that used "encoding" in their title. Encoding refers to a supposed translation effect in which information in our minds is translated so that it can be stored in our brains. But the existence of all these papers does not actually establish that any such thing as memory encoding actually occurs. In the past scientists published innumerable papers and essays on caloric, phlogiston and the interplanetary ether, none of which is now believed to exist. And scientists in recent decades have published many thousands of papers on supersymmetry, superstrings, and dark matter, none of which has been proven to exist. The vast majority of papers that use "encoding" in their title are simply papers about human memory experiences and experiments, papers that do nothing at all to show that perceptual experiences or learning are translated into neural states. The term "encoding" is often used in such papers to give a hard-science aura to research that is purely psychological. No scientist has ever provided convincing evidence that experiences or conceptual learning are actually translated or encoded in some way that allows them to be stored as neural states.

We know exactly the type of observation that would be made if an encoding of memory information were to occur in a brain. Scientists would see some type of mysterious repeated symbols in brain tissue, signs that some translation or encoding scheme was being used. But no such things have been found. If memories were stored in brains, there would be two levels of potential discovery. In the first level, scientists would merely see some mysterious encoded information that they did not understand, but which they were sure was encoded information because of a repetition of symbols. At this level scientists would be like students of ancient Egypt looking at hieroglyphics before scholars knew how to translate hieroglyphics. At a second level of discovery, scientists would actually be able to read such encoded information, and successfully translate it, so that memories could be read from brain tissue in a lab. Neither one of these levels has been reached. Scientists have not seen any sign at all of encoded memory information in the brain. But it is hard to imagine how there could not be such a sign if our memories were actually being stored in our brains.

Reason #7: No one has ever been able to read memory information from any type of brain outside of his own body.

There is encoded genetic information in the nucleus of most cells, information which is pretty much the same in every cell, and is not memory information. That information was discovered in 1953. Since 1953 our technology has grown enormously. But there is still not a single case of anyone ever reading a memory from any brain (human or animal) outside of his own body. Scientists have never been able to read a memory by scanning a living person's brain, or scanning some brain tissue. Do not be fooled by press accounts that sometimes grossly exaggerate scientific experiments, and give us headlines such as “Scientists invent mind-reading device.” Such experiments (typically trying to read neural correlates of visual perception) do not actually involve thought reading, and do not at all involve a reading of stored information in the brain.

Reason #8: In the case of Lorber's patients and the case of the French civil servant, we have cases of people with fairly normal memory function but only very small amounts of functional brain tissue.

Inside a normal brain are tiny structures called lateral ventricles that hold brain fluid. In a French civil servant's case, the ventricles had swollen up like balloons, until they filled almost all of the man's brain. When the 44-year-old man was a child, doctors had noticed the swelling, and had tried to treat it. Apparently the swelling had progressed since childhood. The man was left with what the Reuters story calls “little more than a sheet of actual brain tissue.”

But this same man, with almost no functioning brain, had been working as a French civil servant, and had his IQ tested to be 75, higher than that of a mentally retarded person. The Reuters story says: “A man with an unusually tiny brain managed to live an entirely normal life despite his condition, caused by a fluid buildup in his skull.” The case was written up in the British medical journal The Lancet (link).

In 1980 John Lorber, a British neurologist, recounted a similar case of a brain filled with fluid. “There's a young student at this university,” said Lorber, “who has an IQ of 126, has obtained a first-class honors degree in mathematics, and is socially completely normal. And yet the boy has virtually no brain.” According to Lorber, “We saw that instead of the normal 4-5 centimeter thickness of brain tissue...there was just a thin layer of mantle measuring a millimeter or so. His cranium is filled mainly with cerebrospinal fluid.” Lorber found other similar cases. Here is a link discussing his work.

According to the scientific paper published here, Lorber's findings have been confirmed by others:

"John Lorber reported that some normal adults, apparently cured of childhood hydrocephaly, had no more than 5% of the volume of normal brain tissue. While initially disbelieved, Lorber’s observations have since been independently confirmed by clinicians in France and Brazil."

Such cases show that someone can have fairly normal memory function despite losing almost all of his brain tissue. These cases provide further evidence against claims that brains store memories.

Reason #9: There is little evidence that strokes and traumatic brain injury (which occur more than a million times in the US every year) cause retrograde amnesia, and no clear evidence that memory loss in Alzheimer's or dementia correlates with neuron loss. 

If our memories were stored in our brains, what we would expect is that people would often get amnesia after a stroke or traumatic brain injury. But such a thing seems to happen only very rarely, so rarely that it is hard to reliably postulate any causal relation.  A scientific paper says, "Reports of amnesic syndrome due to unilateral stroke have appeared infrequently." The paper lists some cases which it claims are new examples, but when we read the examples we typically find only mild things such as an inability to recall a daughter's phone number. Speaking of strokes, the paper says, "There have been two reported cases of persistent amnesia following unilateral infarctions in which there were no other neurological deficits," indicating the rarity of such a thing. The paper also says that in a group of 68 patients who had brain infarctions, there were no cases of amnesia.  Talking about strokes, this paper says, "Amnesia as the main symptom of acute ischemic cerebral events is rare, mostly transient, and easily mistaken for TGA [transient global amnesia]." Serious forms of amnesia (such as losing all memories of years of experience) are very rare, as you can see by doing a Google search for "amnesia is rare." 

But what about Alzheimer's disease? Isn't Alzheimer's a "brain shriveling" or "brain shrinking" affair? Here the public has been misled by visuals that compare a normal brain and a shriveled Alzheimer's brain.  The actual evidence for greater-than-normal neuron loss in Alzheimer's disease is slim, and people with good memories sometimes have just as much neuron loss as those with Alzheimer's.  In 2017 there was a news story entitled, "New Discovery Suggests Neuron Death Does Not Kickstart Dementia." The story reported this:

"The leading theory in Alzheimer’s disease is that neuron death and nerve ending damage, which lead to memory loss, are caused by the formation of toxic protein clumps in the brain, called tau tangles and beta-amyloid plaques. But a new, small study challenges this theory, showing that the loss of neurons in brains of people with dementia is actually very small."

The news story quotes a scientist saying the following:

"Much to our surprise, in studying the fate of eight neuronal and synaptic markers in our subjects’ prefrontal cortices, we only observed very minor neuronal and synaptic losses. Our study therefore suggests that, contrary to what was believed, neuronal and synaptic loss is relatively limited in Alzheimer’s disease."

After telling us on page 35 that "there are many reports of people carefully diagnosed...as clearly having the clinical symptoms of dementia and yet showing no evidence of brain pathology,"  a book gives this quote from a neuroscientist named Robert Terry:

"Over the years, investigators have sought assiduously for lesions or tissue alterations in the Alzheimer's brain which...might at least correlate with clinical determinants of the disease severity....Despite 30 years of such efforts, clinico-pathologic correlations have been so weak or entirely lacking that determination of the proximate, let alone the ultimate, cause of Alzheimer's disease (AD) has not been possible." 

Reason #10: In cases of retrograde amnesia (a problem in recall of past memories), episodic memories older than some particular date are preserved, which is the opposite of what we would expect if memories are stored in our brains.

Retrograde amnesia (an inability to recall old memories) is rare. In almost all cases of it, a person's childhood memories are preserved.  Sometimes a person will lose only the last few years of memories, or lose only adult memories.  For example, an 80-year-old man suddenly lost memories of the last 60 years of his life; and the 33-year-old man described here lost only the last 10 years of his memories. 

The tendency of retrograde amnesia to affect younger memories more strongly than older memories is called Ribot's Law.  The Wikipedia article on it says, "Ribot's law states that following a disruptive event, patients will show a temporally graded retrograde amnesia that preferentially spares more distant memories," and also states, "A large body of research supports the predictions of Ribot's law."

We know that the proteins that make up synapses and dendritic spines have average lifetimes of only a few weeks, and that the synapses and dendritic spines themselves are subject to spontaneous remodeling that makes them unstable, giving them lifetimes of less than a few years.  Given such realities, if memories were stored in brains, the ones that you would lose first would be the oldest memories, since there would be so much more time for such memories to physically deteriorate. Similarly, if words were written on leaves, the older the writing was, the smaller the chance that it would survive.  But according to Ribot's Law, exactly the opposite is the case, and people are much more likely to lose younger memories than older memories. 

Reason #11: People can recall things instantly, but there is no way to explain how a brain could instantly navigate to just the right tiny spot where some memory was stored, which would be like instantly finding a needle in a haystack.

One of the most powerful arguments against the claim that brains store memories is what I call the navigation argument. This argument can be simply stated like this: a long-term memory cannot be stored in some particular part of the brain, because there could be no way in which your brain could ever instantly find the exact location where such a memory was stored.

Let's consider a simple case. You hear the name of a movie star. You then instantly recall what that person looks like, and see a faint image of that person in your “mind's eye.” But how could this ever happen, if the memory of that person is stored in some particular tiny part of your brain? In such a case, you would need to know or find the exact place in the brain where that memory was stored. But there would be no way for your brain to do such a thing. It would be like trying to find one particular needle in a skyscraper-sized stack of needles.

We know that indexing can make possible very fast retrieval of information such as occurs in database systems.  But the brain has no sign of having any indexing system. An indexing system can only occur if there exists an addressing system or a position notation system (for example, a book has the position notation system called page numbering, and the index at the back of the book leverages that position notation system).  Brains have neither an indexing system nor an addressing system nor the position notation system that is a prerequisite for an indexing system. There are no neuron numbers and no neural coordinate system that a brain might use to allow fast, indexed retrieval of a memory.  So there's no way to explain how instant recall could occur if you are retrieving memories from your brain. 
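To illustrate what indexing buys, and why it presupposes a position notation system, here is a minimal sketch; the "pages" and their contents are invented for illustration:

```python
# A toy "book" with page numbers (a position notation system).
# All data here is invented purely for illustration.
pages = {
    1: "synapses and dendritic spines",
    2: "protein turnover in the brain",
    3: "synapses and memory claims",
}

# Build an inverted index: word -> list of pages on which it appears.
# Note the index can only exist because pages have numbers.
index = {}
for page_number, text in pages.items():
    for word in text.split():
        index.setdefault(word, []).append(page_number)

# Indexed retrieval: a single lookup, no scan of every page.
print(index["synapses"])  # -> [1, 3]

# Without an index, retrieval means scanning every page in turn.
scan_hits = [n for n, text in pages.items() if "synapses" in text.split()]
print(scan_hits)  # -> [1, 3]
```

The indexed lookup jumps straight to the answer, but only because every entry has an address (a page number) the index can point at; that addressing scheme is exactly what the argument above says the brain lacks.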



Reason #12: The cumulative effect of synaptic delays (a delay caused by the time needed for a signal to travel across a synaptic gap) means that brains should be far too slow to account for instantaneous memory recall.

Scientists have been guilty of many misstatements about the speed of brain signals.  When such a topic is discussed, we typically hear only about the speed of signal transmission in axons, the fastest-transmitting parts of the brain. What our scientists fail to tell us about are the very serious "speed bumps" that must occur when brain signals travel, such as the much slower transmission of a signal through dendrites. One such slowing factor is what is called synaptic delay: a delay of between about 0.5 milliseconds and 2 milliseconds each time a brain signal crosses the small gap in a chemical synapse.  The problem is that a brain signal would need to pass over very many of these synaptic gaps every time it travels a centimeter.  When you do some math estimating the number of synaptic delays that would occur when a brain signal travels over a centimeter, such as I have done here, you reach the conclusion that an average brain signal should not be able to travel at a speed much faster than about 1 or 2 centimeters per second.  This means that brain signals should be far too slow to account for the instantaneous memory recall that humans very often have. 
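A back-of-envelope version of that estimate follows. The figure of 1,000 synaptic crossings per centimeter and the per-crossing delays are illustrative assumptions in the spirit of the estimate linked above, not measured values:

```python
# Illustrative back-of-envelope calculation. The 1,000 crossings per
# centimeter and the per-crossing delays are assumptions, not measurements.
def effective_speed_cm_per_s(crossings_per_cm, delay_per_crossing_s):
    # Time spent waiting at synaptic gaps over one centimeter of travel,
    # ignoring the (much faster) time spent inside axons and dendrites.
    total_delay_s = crossings_per_cm * delay_per_crossing_s
    return 1.0 / total_delay_s

print(effective_speed_cm_per_s(1000, 0.001))   # -> 1.0 cm per second (1 ms delay)
print(effective_speed_cm_per_s(1000, 0.0005))  # -> 2.0 cm per second (0.5 ms delay)
```

Under those assumptions the accumulated synaptic delays alone limit the effective signal speed to a centimeter or two per second, which is the figure cited above.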

Reason #13: Synaptic fatigue (a delay caused after a synapse has fired) should make brains far too slow to account for instantaneous memory recall.

Besides synaptic delay, which occurs each and every time a brain signal crosses the gap in a chemical synapse, there is another very serious slowing factor involving synapses: what is called synaptic fatigue.  Synaptic fatigue is a temporary inability of a synapse to transmit brain signals, occurring just after the synapse has successfully transmitted a brain signal.  What happens is that transmission depletes the synapse's available supply of neurotransmitters, preventing the same transmission from re-occurring immediately.  Just as a penis cannot keep firing sperm continually, and needs a rest period, a synapse cannot keep firing continually, and needs a rest period. This rest period has been estimated as being between a full second and many seconds. 

Such a delay must have an enormous effect on the effective speed at which brain signals can travel in a brain, and also must have a very strong negative effect on the reliability of brain signal transmission. This is another huge reason for believing that the human brain is far too slow to account for instantaneous and very accurate memory recall that humans often demonstrate. By analogy, think of how slow and unreliable your television would be if a particular pixel area on your screen had to take a rest for a second or more each time that it transmitted a few pixels.  Under such conditions, you would never get information in anything like an accurate real-time basis.  

Reason #14: No one has ever advanced a credible detailed theory of how human learned knowledge and episodic memories could be translated into neural states or synapse states.

If a brain were to store memories, it would have to do encoding.  Encoding is supposedly some translation that occurs so that a memory can be physically stored in a brain, so that it might last for years. The problem is that human memories include incredibly diverse types of things, and we have no idea how any of these things could be stored as neural states or synapse states. Consider only a few of the types of things that can be stored in a human memory:

  • Memories of daily experiences, such as what you were doing on some day
  • Facts you learned in school, such as the fact that Lincoln was shot at Ford's Theater
  • Sequences of numbers such as your social security number
  • Sequences of words, such as the dialog an actor has to recite in a play
  • Sequences of musical notes, such as the notes an opera singer has to sing
  • Abstract concepts that you have learned
  • Memories of particular non-visual sensations such as sounds, food tastes, smells, pain, and physical pleasure
  • Memories of how to do physical things, such as how to ride a bicycle
  • Memories of how you felt at emotional moments of your life
  • Rules and principles, such as “look both ways before crossing the street”
  • Memories of visual information, such as what a particular person's face looks like

How could all of these very different types of information ever be translated into neural states so that a brain could store them? No neuroscientist has ever presented a precise theory credibly explaining how such a thing could possibly occur. 

You may better appreciate this difficulty if you consider what goes on when a computer stores information. Let's imagine you speak, "I saw a bird" into a voice recognition system in your computer, and press a button to store this sentence. Here are some of the things that go on:

(1) First, sounds are converted into letters, using a voice recognition system and the scheme known as the English alphabet.
(2) Then the alphabet letters are converted into decimal numbers using the scheme known as the ASCII system (a scheme requiring a full page to state, as you can see here).
(3) Then these decimal numbers are converted to binary numbers using a decimal-to-binary conversion algorithm.
(4) Finally, the binary numbers (a sequence such as 01010111010) are written to your hard drive using a system in which particular magnetic marks stand for binary digits. 
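Steps (2) and (3) above can be sketched in a few lines of code; step (1) (voice recognition) and step (4) (magnetic marks on a platter) have no short software analog:

```python
# Steps (2) and (3) of the storage chain described above.
text = "I saw a bird"

# Step (2): letters -> decimal numbers, via the ASCII scheme.
decimal_codes = [ord(ch) for ch in text]

# Step (3): decimal numbers -> 8-bit binary numbers.
binary_codes = [format(code, "08b") for code in decimal_codes]

print(decimal_codes[:3])  # -> [73, 32, 115]  ('I', space, 's')
print(binary_codes[:3])   # -> ['01001001', '00100000', '01110011']
```

Even this tiny fragment depends on two arbitrary, agreed-upon conventions (the ASCII table and binary notation), which is the point of the argument that follows.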

To discuss what happens when you store a photo on your disk, I would have to discuss details of equal complexity, involving a whole other intricate encoding scheme that does not use the ASCII system, but some other protocol such as a very specific color-to-number protocol.  But how could a brain ever do such encoding, requiring so many complicated translation systems and protocols, when your brain presumably has nothing like the ASCII system lying around for it to use, nothing like a color-to-number protocol, and no decimal-to-binary conversion algorithm? We cannot imagine that the brain evolved such protocols 10,000 years ago, for they would have to be based on the English alphabet, which has existed (in either its current or earlier form) for less than a thousand years. 

Reason #15: The brain does not seem to have any mechanism for writing a memory.

If we are to believe that memories are stored in a brain, we would need to believe that the brain has some system for writing memories.  We know exactly how computers write information. For example, a hard disk drive contains a spinning platter, and when the computer needs to write some information, a little unit called a read-write head moves to the right position on the disk and transfers the information. 

But we know of nothing similar in the brain.  The brain seems to have no mechanism whatsoever for writing memory information to some particular place. 

Reason #16: The brain does not seem to have any mechanism for reading a memory.

On a typical portable computer, a read-write head handles both the job of reading information from a hard drive, and the job of writing information.  There is nothing like this in the brain, nor does there seem to be anything like some brain component that accomplishes the job of reading information from some part of the brain.  

So imagine that your brain has some memory information stored in one particular place. How could that information be read?  There is no component at all in the brain that moves around to different places when something is recalled.  When you remember something, there is no part of the brain (or even anything like a "gang of molecules") that moves to some specific location, reads information, and carries that information to some other place.  So how could a brain ever read a memory stored in it? Our scientists have no explanation. 

For lack of a better idea, neuroscientists often speculate that "synaptic patterns" store memories, suggesting that the arrangement of synapses might somehow be some code that specifies memories. But we might note that the brain has nothing at all like a synapse pattern reader, so such an idea cannot be correct. 

Reason #17: There is so much noise in the brain (with each neuron bombarded by signals from thousands of other neurons) that if a brain were to somehow read a stored memory, it would quickly be drowned out by all of the neural noise.

We may use the term signal drowning for what happens when there are so many signals from so many sources that a particular signal is effectively drowned out. Such signal drowning would occur in a malfunctioning television which showed the signal from every cable TV channel all at once, or a malfunctioning radio which played simultaneously the music and words from every AM station at the same time. It would seem that in the cortex there would have to be exactly such signal drowning, because each neuron emits a signal very frequently (about once per second or more), and each neuron is connected directly to more than a thousand other neurons.
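The scale of the problem can be illustrated with a toy calculation (the firing rates and connection count below are assumptions chosen for illustration, not measured values): if one neuron carries a signal and a receiving neuron also gets input from 1000 equally active neighbors, the signal is only about a thousandth of the total input.

```python
# Toy illustration of "signal drowning": one signal among inputs from
# 1000 equally active neighbors. The rates below are assumed, not measured.
signal_rate = 1.0      # spikes/second from the "signal" neuron
neighbor_rate = 1.0    # spikes/second from each of 1000 neighboring neurons
neighbors = 1000

total_input = signal_rate + neighbors * neighbor_rate
signal_fraction = signal_rate / total_input
print(round(signal_fraction, 5))  # 0.001
```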

Such a fact is a big reason for thinking your brain does not store your memories.  For lack of a better idea, our scientists speculate that long-term memories are stored in the cortex. But it seems that a brain signal representing sensory information could not even travel to the cortex to be stored, because signal drowning would very quickly drown out the signal.  Nor does it seem that a memory could be retrieved from some particular part of the brain, because such a retrieved memory could only move about as a brain signal, and before that signal could travel more than a few centimeters, signal drowning would wipe it out. 

Reason #18: The brain shows no sign of looking different or acting different or working harder when a memory is recalled.

If you are retrieving memories from your brain when you recall something, we would expect that there would be detectable signs of such brain activity.  But we cannot detect anything different in our bodies when we are working hard to remember something. When the heart works harder, we get a noticeable physical sign of this: an increased heart beat that can be noticed by checking your pulse. But when we work hard to retrieve memories (at a time such as when we take a school examination), we have no sensation of anything in our head working harder. 

Some would claim that brains do show signs of working harder during recall, but that these are subtle signs that can only be picked up by brain scanners. This is not correct.  Brain scanning studies actually provide no evidence that brains are working harder or doing anything different during human recall activity.  As discussed here, such studies often make use of dubious statistical methods, too-small sample sizes, and presentation techniques in which very small signal differences are graphically depicted (in a misleading way) to look like big signal differences. The way to get to the real truth of any brain scanning study is to search for the phrase "percent signal change." 

In this post, I cite 9 different neuroscience studies on the neural correlates of memory recall. In these studies, subjects had their brains scanned while performing recall and recognition tasks. None of the 9 studies showed more than a 1.3 percent signal change, and the average percent signal change reported was much smaller than that: only about 1 part in 200 to 1 part in 400.  Such results do not show any real evidence of brains working harder during recall, and are quite consistent with the belief that you do not retrieve memories stored in your brain when you recall things. We would expect tiny variations such as 1 part in 200 to occur because of random variations in brain activity having nothing to do with a retrieval of memories. 
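For readers unfamiliar with the metric, percent signal change is simple arithmetic. The sketch below uses made-up numbers only to show how small a change of 1 part in 200 is:

```python
# Percent signal change as commonly reported in brain scanning studies:
# 100 * (task - baseline) / baseline.
# The numbers below are hypothetical, chosen only to illustrate scale.

def percent_signal_change(task, baseline):
    return 100.0 * (task - baseline) / baseline

baseline = 1000.0   # arbitrary scanner units
task = 1005.0       # 1 part in 200 above baseline
print(percent_signal_change(task, baseline))  # 0.5
```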

Reason #19: Humans are able to recall extremely long sequences such as all of the lines of Hamlet, but the brain does not have any architecture that would seem to support sequential memorization.

An interesting feature of human memory is that it can display enormous sequential proficiency. Humans are not merely capable of recalling a large number of individual facts. Humans are capable of exactly recalling extremely long sequences of text. We have one example in the case of actors who successfully memorize all 1476 lines of the role of Hamlet, in correct sequential order. An even more dramatic case is that of Muslims who memorize every single word of their holy book, a book with more than 6000 verses (each verse being roughly as long as a line). 

But the brain seems to have no architecture that could conceivably support such an ability. Each neuron is connected to more than 1000 other neurons.  This means that for a neuron, there is no "next neuron" or "previous neuron."  So how could a sequence of 1476 lines or 6000+ verses ever be stored in a brain? We cannot imagine, for example, that individual words are stored on individual neurons, and that once the beginning neuron is found, the brain keeps traveling along a chain-like route that constantly leads to the next neuron; with each neuron connected to 1000+ other neurons, there never is a "next neuron."  Given such a neural architecture, we can imagine no way in which very long sequences could ever be stored and retrieved. 

Reason #20: There is no good evidence of any physical change in brains corresponding to an acquisition of a conceptual or episodic memory.

See the post here for a detailed discussion of some of the claims that have been made trying to persuade us that some neural mark of memory or learning has been found, and why such claims are not robust and convincing.  

Reason #21: Outside of synapses and dendritic spines there is no place in the brain that could be a suitable storage place for memories lasting 50 years.

We have seen (in Reason #1 and Reason #2) why the theory that memories are stored in synapses or dendritic spines is untenable (both are made up of proteins with an average lifetime of only 0.001 of the maximum time people can hold a memory, and neither synapses nor dendritic spines last for years).  Is there any other place in the brain that might serve as a storage place for memories that last for 50 years or more? There is not.

The most common idea for a non-synaptic storage place for memories is the idea that memories may somehow be stored in DNA.  But DNA has been exhaustively analyzed through huge multi-year projects such as the Human Genome Project and the ENCODE project, and no one has found any trace of human learned knowledge in DNA or a human episodic memory in DNA.  We are aware of a severe semantic limitation in DNA. It uses a genetic code that limits DNA to specifying groups of amino acids. No one has detected any sign in DNA of some other code suitable for storing memories, one that would have to be thousands of times more complicated than the genetic code.  If DNA stored memories, we would expect that different DNA molecules in the nucleus of cells would be very different, but they are pretty much all the same. When DNA molecules are read by a cell, this reading takes minutes, so DNA molecules cannot be the storage place of memories that we can instantly recall. 

We also cannot believe that memories are stored in microtubules. A scientific paper tells us how short-lived brain microtubules are:

"Neurons possess more stable microtubules compared to other cell types (Okabe and Hirokawa, 1988; Seitz-Tutter et al., 1988; Stepanova et al., 2003). These stable microtubules have half-lives of several hours and co-exist with dynamic microtubules with half-lives of several minutes."

Reason #22: People with dramatically higher recall of episodic memories seem to have no larger brains or brain superiority that could explain this.

What is called hyperthymesia or Highly Superior Autobiographical Memory is a rare ability of someone to remember almost all of the things that have happened to him or her during adulthood.  An article in the Guardian discussed the case of Jill Price:

"Price was the first person ever to be diagnosed with what is now known as highly superior autobiographical memory, or HSAM, a condition she shares with around 60 other known people. She can remember most of the days of her life as clearly as the rest of us remember the recent past, with a mixture of broad strokes and sharp detail. Now 51, Price remembers the day of the week for every date since 1980; she remembers what she was doing, who she was with, where she was on each of these days. She can actively recall a memory of 20 years ago as easily as a memory of two days ago, but her memories are also triggered involuntarily."

Under the hypothesis that memories are stored in brains, there is really no way to account for cases such as Jill Price, unless you imagine her with some gigantic head three or four times bigger than an ordinary head.  But Jill Price has an ordinary head that is smaller than the average male head.  No one has been able to find any dramatic difference in the brains of those with Highly Superior Autobiographical Memory.  Some studies have claimed to find some tiny little difference, but anyone checking 100 different areas in two sets of brains will always be able to find some tiny difference somewhere. The people such as Jill Price who can remember almost everything that happened to them in adulthood have brains 99% identical to people who can remember only a small fraction of what happened to them in adulthood.  Such a fact conflicts with the claim that memories are stored in brains. 

Reason #23: People with damaged brains sometimes show types of memory recall superior to that of people with ordinary brains. 

Under the theory that memories are stored in brains, we should expect that people with defective brains should be worse at remembering things.  Although this is sometimes true, there are quite a few dramatic cases of people who had defective brains but greatly superior memory.  If you do a Google search for "autistic savant," you will find a discussion of quite a few such cases. One dramatic case is that of HK, who was born after only 27 weeks, which is 13 weeks early. A scientific paper discusses HK's dramatically superior autobiographical memory:

"As can be seen in Figure 1,for dates between this first memory until his 10th year of life, HK shows a relatively steady increase in accuracy for autobiographical events. Accuracy takes a noticeable jump to near 90% in 2001 at age 11. From that point forward, HK’s recollection of autobiographical events is near perfect."

The paper also gives us insight as to what it is like to have such a memory:

"He reports that he is able to relive memories in his mind as if they just happened. HK stated that everything about his memory, including sounds, smells, and emotions, are vividly re-experienced when he remembers a particular event in time...,He stated that there is no difference in the vividness of his recollection between events that occurred when he was five and events that he experienced within the past month."

The paper tells us that the volumetric analysis “reveals significantly reduced total tissue volume in HK” and that “a volumetric analysis of subcortical structures shows general reduction in subcortical volumes in HK (1019 mL) relative to controls (1249 ± 29 mL).” So the person with this miracle memory had a brain about 20% smaller.  Similarly, the Guardian describes an interesting case of someone with a severe learning disability: "Blind and brain-damaged, Derek Paravicini is a musical marvel, able to play back any tune after one listen. "  The same astonishing ability -- to flawlessly play back an entire song heard only once -- is also possessed by another brain-damaged person, Leslie Lemke.  The wikipedia.org article on Leslie says he can play back a musical piece of "any length." 

Cases such as these (greatly superior memory and substantial brain damage) are exactly the opposite of what we would expect, if it is true that memories are stored in the brain. 

Reason #24: Humans can acquire detailed memories even when the brain has effectively shut down because the heart has stopped.  

Experimental results on the cessation of brain electrical activity after heart stoppage are summarized on page 28 of this document.  There we are told that Hossmann and Kleihues in 1973 tested 200 cats and 21 monkeys, and found that the EEG (a measure of the electrical activity in the brain) became "isoelectric" (in other words, a flat line) within 20 seconds of the stopping of the heart.  We are also told that a result of the brain flat-lining within 15 seconds was produced in 1991 with 37 dogs (Stertz et al.), with 143 cats (Hossmann, 1988), and with 10 monkeys (Steen et al., 1985).  

So under the theory that our memories are stored in our brains, we would not expect anyone to be recalling memories of what happened while their heart had stopped, except for experiences lasting only a few seconds. But, to the contrary, many people whose heart stopped report lengthy vivid experiences occurring during such a heart stoppage, what are called near-death experiences.  These often involve minutes of mental experience. 

Reason #25: If memories were stored in brains, there would need to be many hundreds of genes with the job of handling the gigantic chore of encoding memories into neural states; but no one has found even one such gene. 

If human brains were to actually be translating thoughts and sensory experiences so that they can be stored as memory traces in the brain, such a gigantic job would require a huge number of genes – probably many times more than the 500 or so genes that are used for the very simple encoding job of translating DNA nucleotide base pairs into amino acids.  But we see no sign of any such memory encoding genes in the human genome.

There is a study that claims to have found possible evidence of memory encoding genes, but its methodology is ridiculous, and involved the absurd procedure of looking for weak correlations between a set of data extracted from one group of people and another set of data retrieved from an entirely different group of people. See the end of this post for reasons we can't take the study as good evidence of anything. There is not one single gene that a scientist can point to and say, “I am sure this gene is involved in memory encoding, and I can explain exactly how it works to help translate human knowledge or experience into engrams or memory traces.” But if human memories were actually stored in brains, there would have to be many hundreds of such genes.

Reason #26: There are no drugs that can produce retrograde amnesia. 

If the brain stored memories, we would think it would be rather easy to make a drug that would produce temporary retrograde amnesia.  Such a drug could simply mess with some chemistry in the brain that would be used if the brain was retrieving memories. But there is no drug that can cause retrograde amnesia. There is no drug that will even temporarily cause a person to stop remembering knowledge he acquired in school, or to stop recognizing friends and family. 

Reason #27: There is substantial evidence that memories can long survive the decay and destruction of the brain. 

There are many cases in the literature of parapsychology suggesting that human episodic memories can long survive the decay and destruction of a brain. Such cases include cases of mental mediumship and cases of past-life memories. In mental mediumship a medium will report communication with some unseen communicant who claims to have knowledge known only to a deceased person.  The alleged communication will often include very specific and correct details that should have been unknown to the medium. 

The case of Leonora Piper was a dramatic case in which such alleged paranormal communication seemed to again and again produce very specific episodic details correctly, details that should have been unknown to the medium. The case was investigated for decades by competent investigators, who found a high degree of accuracy and no evidence of fraud.  The main investigator (Hodgson) started out as a complete skeptic, but became convinced of her authenticity after long investigation.  In Chapter 6 of the book "100 Cases for Survival After Death," particularly case #63, you can read about some very impressive examples of evidence involving Leonora Piper. In case #75 of the same book, you can read about equally impressive results with the medium Gladys Osborne Leonard.  Scientific tests of mediums in recent years suggest a real inexplicable paranormal phenomenon. 

The most prominent researcher of past-life accounts was Ian Stevenson, a professor and psychiatrist who documented very many cases of children who reported past-life memories, with the details often matching the details of previous lives that were discovered. For example, a young child might report being a particular person (in a previous life) who had particular parents living in some particular place, and having some particular type of death; and it would often be found that there was such a person with such parents living in that place who did die in such a way. 

Cases such as these do not alone disprove the claim that the brain stores memories. But such cases are a point against such a claim. If there is evidence suggesting some memories can survive the death of an individual and the decay of the brain, this hints that human memories are not stored in brains, but somehow accumulated in some way that can allow for survival of memories beyond a person's death and the decay of his brain.  

Reason #28: The time delay caused by a need to encode memories in a brain would prevent the instantaneous formation of memories that we often see in humans. 

If the brain stored memories, it would need to do some elaborate encoding in which ideas, concepts and sensations are converted or translated into permanent memory traces. Neuroscientists who believe in a brain storage of memories have almost universally assumed that such encoding occurs. But if such encoding were to occur, it would seem to require significant time, almost certainly minutes.  Electronic computers can instantly translate information from one form to another, but a biological translation would be much slower.  Referring to a general type of biological activity not related to memory, an "expert answer" web page tells us that "Transcription and translation both occur on the time scale of 1 minute for a protein of typical length," and also notes that more complicated proteins take longer, with one (titin) requiring an hour for its transcription. 

It seems that to do all of the biological work needed to translate episodic and conceptual memories into permanent neural traces, it would take a brain at least a few minutes to encode an average memory. But we know that humans can form new permanent memories instantly.  The child who places a hand on a hot burner instantly forms a new permanent memory, as does someone who sees a person next to him being murdered. We can form new memories faster than we could if our brains were storing memories using encoding. 

Reason #29: The time delay caused by a need to decode memories stored in a brain would prevent the instantaneous recall that humans routinely display.

Whenever information is stored in encoded form, that information can only be retrieved using a translation system that is the opposite of the original encoding. Such a system is called decoding.  Here is an example. When you type some words that are stored on a computer hard drive, first the letters in the words are converted into decimal numbers such as 59, then the decimal numbers are converted into binary numbers such as 111011,  and then the binary numbers are written to the hard drive. Such a process is an example of encoding. If you retrieve this information from your hard drive, the reverse occurs: first the binary numbers are read from the hard drive, then the binary numbers are converted into decimal numbers, and then the decimal numbers are converted into alphabetic characters.  This is an example of decoding. 
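The round trip just described can be sketched in a few lines of Python, using the same example of ASCII code 59 and binary 111011 (padded here to eight bits):

```python
# Toy encode/decode round trip: letters -> ASCII decimals -> binary "on disk",
# then the reverse (decoding) to recover the original characters.

def encode(text):
    return [format(ord(ch), "08b") for ch in text]    # e.g. ';' -> 59 -> '00111011'

def decode(binaries):
    return "".join(chr(int(b, 2)) for b in binaries)  # '00111011' -> 59 -> ';'

stored = encode(";")
print(stored)          # ['00111011']
print(decode(stored))  # ;
```

Note that `decode` is usable only because it applies the exact inverse of each convention `encode` used; without shared knowledge of both conventions, the stored bits are unrecoverable noise.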

Since the brain is not electronic, and has to do things with relatively slow chemical methods, it would take time for a brain to decode information that had been stored in it through encoding.  It would almost certainly take quite a few seconds (or possibly minutes) for a brain to decode information that had been stored in it using encoding. The comparable tasks of protein transcription and protein translation take a minute or more. But we do not experience such a delay. For example, if you say, "Gregory Peck," it takes me just one second to recall his face; and if you show me a photo of Kathryn Grayson,  I'll tell you her name within two seconds. 

Reason #30: Human memory information can exhibit high degrees of hierarchical organization, but the brain has no structure that could support such organization. 

Let us consider the hierarchical nature of human memory and knowledge. We can imagine a conversation between two people.

Jack: What are some of the planets in the solar system?
Jill: Jupiter, Saturn, Earth, Mars, Venus and others.
Jack: So let's take Earth. What continents belong to it?
Jill: North America, South America, Asia, and others. 
Jack: So what are some of the countries in North America?
Jill: Canada, the United States, Mexico, and others.
Jack: So what are some of the states in the United States?
Jill: New York, California, Florida and others.
Jack: And what are some of the cities in New York?
Jill: Albany, Buffalo, Syracuse, New York City and others. 
Jack: And what are some of the boroughs in New York City?
Jill: Manhattan, Queens, Brooklyn and others.
Jack: And what are some of the streets in Manhattan?
Jill: Broadway, 42nd Street, and others. 
Jack: And what are some of the buildings on 42nd Street?
Jill: Grand Central Terminal, the New York Public Library, and others. 

Jill's answers might all be given from memory by a New York City resident such as myself. Such memory recalls show a hierarchical organization of information, in which someone repeatedly demonstrates knowledge of what database professionals call parent-child relationships.  But the brain seems to have no structural features that would allow information to be stored in a way supporting such a hierarchical organization. 

Consider how hierarchical information is stored in a computer. The fanciest way to store such information is through a relational database. Relational databases have specific features (what are called primary keys and foreign keys) that support the hierarchical storage of information.  A simpler way to store hierarchical information is through a file system, which allows you to create folders or directories that are children of a parent directory or folder.  Hierarchical information can also be stored on a computer using the power of complex HTML tables or complex Microsoft Word tables. 
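A minimal sketch of such parent-child storage, mirroring a slice of the dialogue above (a plain Python dictionary standing in here for a relational database's key structure; the entries are deliberately partial):

```python
# Parent-child hierarchy: each key is a parent, each value lists some
# (not all) of its children, as in the Jack-and-Jill dialogue.
hierarchy = {
    "Solar System": ["Jupiter", "Saturn", "Earth"],
    "Earth": ["North America", "South America", "Asia"],
    "North America": ["Canada", "United States", "Mexico"],
    "United States": ["New York", "California", "Florida"],
    "New York": ["Albany", "Buffalo", "New York City"],
}

def children_of(parent):
    """Answer Jack's question: what belongs to this parent?"""
    return hierarchy.get(parent, [])

print(children_of("Earth"))  # ['North America', 'South America', 'Asia']
```

The point of the sketch is that the dictionary's explicit key-to-list structure is exactly the kind of organizational feature the post argues neurons lack.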

But the brain seems to have no such features that might facilitate the storage of hierarchical information. Neurons and synapses are distributed rather uniformly throughout the brain, and are not grouped into neuron groups or synapse groups that show any type of structural organization that might reflect a hierarchical organization of information.  A typical neuron is connected to a thousand or more other neurons. There is no way for a neuron or group of neurons to be the "parent neuron" or the "parent neuron group" of some other neuron or group of neurons.  Neither neurons nor synapses nor dendritic spines seem to show any type of physical grouping or structural organization that might allow a hierarchical storage of information. 

Conclusion

The evidence against the dogma that brains store memories seems overwhelming. But like fundamentalists continuing to advance some discredited dogma, our neuroscientists continue to spout the claim that our memories are stored in our brains.  Such a claim is a social norm of the belief community that neuroscientists belong to, and straying from that norm is a taboo in that community. 

This situation is rather like one we can imagine in a physician's office. Let's imagine a doctor who is examining a patient lying on a table. The doctor makes the following low-level observations:

(1) The doctor takes the patient's blood pressure, and finds that it is zero. 
(2) The doctor takes the patient's pulse, and finds that it is zero beats per minute.
(3) The doctor strikes the patient's knee with a hammer, and observes no movement of the leg.
(4) The doctor takes the patient's temperature, and finds that it is 70 degrees, only room temperature.
(5) The doctor tries to gently move the patient's arm, but observes that it is stiff and immobile.

Let us suppose that the doctor then declares, "This patient is fine, and he will soon be talking and walking." That would be a case of reaching a high-level conclusion that is contrary to the low-level facts collected.  Similarly, when our neuroscientists tell us that our memories are stored in our brains, they are stating a high-level conclusion that is contrary to the low-level facts neuroscientists have collected, the facts I have discussed in this post.  Through such low-level facts, the brain is speaking to us in a very loud voice, saying, "I don't store memories; that isn't my job." 

There are two reasonable alternative theories about where memories are stored. The first is that memories are stored in a non-neural and mysterious facility local to the human body.  Such a thing is often called a soul or spirit.  A soul or spirit might be completely immaterial, or it might consist of some subtle energy that humans do not yet understand. Given the fact that scientists are claiming that 70% of the universe consists of some subtle energy they do not understand and have never observed (what is called dark energy), it is unreasonable to exclude the possibility of some subtle energy associated with the human body. 

A second reasonable possibility is that our memories are not stored locally in our bodies, but exist in some mysterious consciousness infrastructure or information infrastructure that is shared by humans and possibly other life-forms.  Imagine a person playing some app on her smart-phone in which she collects her favorite photos of TV or movie stars.  If you ask her, "Where are your celebrity photos stored?" she may say, "Inside my smart phone, of course." But such photos may not at all be stored inside her phone. Instead they may be stored "somewhere in the cloud," such as on some relational database server in a distant city.  Similarly, our memories may be stored in some mysterious information infrastructure that humans are able to access.  Just as an app user may have a login allowing him to access only his data, there may be some system restricting each of us to accessing only the memories that we deposited in such an infrastructure. 

We do not at all know where our memories are stored, but the many reasons listed here argue very powerfully that our episodic and conceptual memories are not stored in our brains and cannot be stored in our brains.