
Our future, our universe, and other weighty topics

Monday, August 29, 2016

Galaxy Expert Confesses We Don't Understand How Galaxies Form

Last Friday scientists made a surprising announcement: findings about a mysterious galaxy 300 million light-years away, named Dragonfly 44. This galaxy seems to have roughly the mass of our galaxy, but emits only 1 percent of the light our galaxy emits. Astronomers stated that Dragonfly 44 is 99.99% dark matter. Dark matter is believed to be a mysterious form of matter that makes up about 27% of the universe. Ordinary matter is believed to make up only about 5% of the universe, with dark energy making up about 68%.

But such an announcement presents a great paradox. If ordinary matter makes up 5% of the universe, and dark matter and ordinary matter are mixed together throughout the universe, how could it possibly be that a particular galaxy would be 99.99% dark matter? This would seem to be as unlikely as some weird local change in the composition of the air, which is normally 78% nitrogen and 21% oxygen. Imagine if somehow the nitrogen in the air became so dominant that the air above a town became 99% nitrogen, causing all the people in the town to die of oxygen starvation. Such an event seems about as improbable as some galaxy consisting of 99.99% dark matter, when dark matter and ordinary matter are mixed throughout the universe.

After a result this surprising, we must take a step back and realize that there is really no firm basis for statements such as the claim that Dragonfly 44 is 99.99% dark matter. Dark matter has never been directly observed, and multi-year attempts to directly observe it have failed. What we see in Dragonfly 44 can most simply be described like this: a galaxy is behaving in a way that is inexplicable under our current understanding of gravity, inexplicable by a factor of 1,000. Imagine if I saw a bus floating up into the air. It would be rather presumptuous to make a statement such as: “The bus must consist of 99% antigravity material.” I should instead simply say that the bus is behaving in a way I don't understand. Similarly, rather than citing some exact dark matter figure that makes it sound as if they understand what is going on, our scientists should be candidly confessing their lack of understanding.

But that is not the way of the modern theoretical scientist. The modern theoretical scientist seems to be very prone to exaggerate his understanding, to make it look as if his understanding of some great mystery of nature is good, even when it is very poor. Here are some of the techniques typically employed to create that impression.

Ignore the unanswered questions. Asked to explain what we know about a particular topic, a modern scientist will probably go into a discussion that focuses entirely on what has been discovered, as well as what has been theorized, without mentioning what we are ignorant about. For example, a scientist asked to talk about the Big Bang will go into a discussion of why we think there was a Big Bang, and may also go into speculations about some details of the first second of the Big Bang. He will avoid mentioning that we don't understand the cause of this event.

Weave a blend of fact and speculation. Asked to explain a mystery such as the origin of life, the modern scientist is often like someone who needs a coat, but who merely has some assorted threads of fact. The scientist will then augment his threads of fact with some threads of speculation. By artfully weaving these together, and focusing only on bits and pieces here and there, the scientist may leave you with the impression that he has something like a coat, even though he may really have merely a few scattered pieces, like a collar that is half fact and half speculation, and a coat elbow that is more speculation than fact.

Clutter the answer with jargon and minutiae. Asked what we know about some mystery that is not understood, the modern scientist will very often give an answer filled with jargon and a discussion of intricate fine details – the details often being details of a speculation rather than details of fact. To the average person, this may seem very impressive, and may leave him with an impression the scientist has a deep understanding of the mystery, even when the scientist has no such thing. For example, if asked to explain how humans can remember childhood memories for 50 years, a neurologist may launch into a jargon-filled discussion of some “clustering dynamics” theory attempting to explain the persistence of human memory. It may seem impressive in all its details, until you find out that it is mere speculation.

Don't mention the problems with your explanation. Many if not most theoretical explanations have some problems associated with them, reasons for doubting such explanations. When asked about some great mystery of nature, a scientist will very often confidently discuss some theoretical explanation, but fail to make any mention of problems associated with such an explanation. For example, when asked about how memory is stored, a neuroscientist may tell you that this is caused by LTP (long-term potentiation) in synapses – but completely fail to mention that LTP is actually something that quickly decays, and generally doesn't last longer than a few weeks. Similarly, when asked about the origin of species, the modern biologist will confidently offer the Neo-Darwinian explanation that the cause was natural selection and random mutations. Our biologist will not mention that such an explanation does absolutely nothing to explain why any particular biological implementation would have made the large leap from a starting point to a “reward threshold” level of complexity and coordination (often very high) necessary for the implementation to first start yielding any survival value reward. Our biologist also will not mention that helpful random mutations are many times less common than harmful mutations.

I have seen these techniques used hundreds of times by scientists trying to depict themselves as “lords of knowledge” about matters which mankind is really very ignorant about. So I was utterly flabbergasted to read a statement by galaxy expert Pieter van Dokkum, a statement of great candor. Talking about the mysterious Dragonfly 44 galaxy just discussed, van Dokkum says, "It means we don’t understand, kind of fundamentally, how galaxy formation works."

No doubt van Dokkum's colleagues would respond by saying, “It's just gravitation,” but I would remind them that gravitation cannot explain the persistence of spiral galaxies such as the Milky Way, nor can it even explain the appearance of galaxies with such a shape. See here for more on why the persistence of spiral galaxies is so hard to explain.

A spiral galaxy (Credit: NASA)

I think van Dokkum deserves great applause for this rather rare case of explanatory candor by a scientist. This is what I want from a scientist – a candid statement of ignorance where knowledge is lacking, rather than some pretentious pedantic affectation in which the speaker pretends to understand some deep mystery he does not really understand. Let's hope we can start seeing more of this candor from our biologists and neuroscientists.

Thursday, August 25, 2016

An Analysis of the Recent Claim the Solar System Is in a “Unique Area of the Universe Just Right for Life”

Imagine if some huge extraterrestrial spaceship were to appear in a fixed position above some US city. Suppose the extraterrestrials wanted to get started communicating with us. How could they start the conversation, if their language was so different from ours that English was utterly unintelligible to them? One way would be for them to place 1836 identical small objects in a field. Every physicist would understand the meaning of this. 1836 is the ratio between the proton mass and the electron mass, a constant throughout the universe.

Or if the extraterrestrials wanted to do something similar that wouldn't require so many objects, they could place 137 identical small objects on a field. Every physicist would recognize what this meant. 137 is the number associated with a universal constant of nature known as the fine-structure constant. Its value is normally represented as 1/137 (or more exactly, 0.007297351). The behavior of stars crucially depends on the value of the fine-structure constant.

A few days ago the Daily Galaxy web site had an article involving the rather prosaic topic of the fine-structure constant. Following the Daily Galaxy's standard rule of “spice things up to the max,” the article had this sensational title: Our Solar System “Is In a Unique Place in the Universe – Just Right for Life.”

Such a title must have excited those who like to believe the egotistical idea that man is the centerpiece of the universe. But the facts cited by the Daily Galaxy story do not warrant the article's sensational title implying something special about the position of our solar system.

The article in question refers to some research published in 2012 by John Webb and his colleagues at the University of New South Wales. The relevant scientific paper can be found here. Studying the fine-structure constant (a fundamental constant generally believed not to vary in time or space), the scientists claimed to find evidence that the fine-structure constant “increases with increasing cosmological distance from Earth.”

But the variation reported was only about 1 part in 100,000. There is probably insufficient basis for thinking that a variation of only 1 part in 100,000 in the fine-structure constant would rule out the habitability of a particular region.

There are some reasons for thinking that stars like the sun could not exist if the fine-structure constant were much larger or smaller. The fine-structure constant controls the strength of electromagnetism. On page 73 of his book The Accidental Universe, Paul Davies states the following:

If gravity were very slightly weaker, or electromagnetism very slightly stronger, (or the electron slightly less massive relative to the proton), all stars would be red dwarfs. A correspondingly tiny change the other way, and they would all be blue giants.

But we see yellow stars like the sun all over the galaxy, and in many other nearby galaxies. So a space-dependent variation of 1 part in 100,000 cannot justify any claim that our solar system is in a “unique place in the universe – just right for life,” not unless you mean “place” to mean some large fraction of the universe.  The problem with such a claim is not the "just right for life" part, but the "unique" part implying some special zone of habitability in just one part of the universe.

 Bubble around a bright star (Credit: NASA)

There has been other research on the fine-structure constant that does not agree with that of Webb and his colleagues. A more recent paper (published in June 2016) found no evidence for variation in the fine-structure constant, even at the level of 3 parts per million.

So the Daily Galaxy's article title seems to be unwarranted. Another interesting result on the fine-structure constant was reported in 2016 in a scientific paper by the scientist McCullen Sandora. Sandora dealt with the “inverse fine-structure constant,” which is 1 divided by the fine-structure constant (this has been measured to be 137.036). The iron lying around our planet (needed for technical civilizations) is believed to have arisen in the cores of distant stars (stars shoot out iron when they explode as supernovae). Sandora found that for stars to produce iron, the inverse fine-structure constant must have a value of 145, give or take 50.

Sandora also found a more sensitive requirement, finding that for a planet to have plate tectonics like the Earth, the inverse fine-structure constant must be 145, give or take 9. Sandora gives some complicated reasons why such plate tectonics are a requirement for the appearance of creatures such as us. 

The latter finding puts the measured value of the inverse fine-structure constant (137.036) just barely inside the range consistent with a planet like Earth (a range between 136 and 154). This finding is consistent with the claim in this scientific paper, which says that an inverse fine structure constant “close to 137 appears to be essential for the astrophysics, chemistry and biochemistry of our universe.”

The fine-structure constant is actually derived from three other fundamental constants of nature. In Gaussian units the formula is e²/ħc, where e is the elementary charge (the charge of the proton), ħ is the reduced Planck constant, and c is the speed of light. (In SI units the formula picks up an extra factor of 1/(4πε₀).)
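For concreteness, the 137 figure can be recovered directly from the measured values of these constants. Below is a minimal sketch in Python, using the SI form of the formula and the standard CODATA values:

```python
# Computing the fine-structure constant from e, hbar, and c.
# In SI units: alpha = e^2 / (4*pi*epsilon_0*hbar*c)
import math

e = 1.602176634e-19      # elementary charge, in coulombs
hbar = 1.054571817e-34   # reduced Planck constant, in joule-seconds
c = 299792458.0          # speed of light, in meters per second
eps0 = 8.8541878128e-12  # vacuum permittivity, in farads per meter

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)      # ~0.0072973525 (about 1/137)
print(1 / alpha)  # ~137.036
```

This is why any physicist shown 137 identical objects would immediately recognize the number: it falls out of these three constants, the same everywhere in the universe.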

According to Sandora, planets just like ours (with plate tectonics) could not exist if the fine-structure constant varied by more than about 6%. If the fine-structure constant must fall in so narrow a range, then think of how fine-tuned the proton charge must be, given that the fine-structure constant depends on the square of the proton charge.

This is only one way in which the proton charge must be exquisitely fine-tuned. There is the additional fact (involving a far greater sensitivity) that planets will not hold together unless the proton charge and the electron charge match each other to many decimal places (the only difference being that the electron charge is negative). If there were not so precise a match (far more unlikely than your randomly guessing someone's Social Security number correctly), the electromagnetic force (more than a trillion trillion trillion times stronger than the gravitational force) would cause repulsion exceeding the gravity holding the planet together (as mentioned here). Experiments have shown that the proton charge and the electron charge actually differ by less than 1 part in 1,000,000,000,000,000,000. This fact is unexplained by our physicists, and is extremely surprising given that each proton has a mass 1836 times greater than that of each electron.
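The relative strength of the two forces is easy to verify. Here is a minimal sketch comparing the electric and gravitational attraction between a proton and an electron; the exact ratio depends on which pair of particles is compared, but for this pair it comes out near 10^39, comfortably "more than a trillion trillion trillion":

```python
# Ratio of electric attraction to gravitational attraction between
# a proton and an electron. Both forces fall off as 1/r^2, so the
# ratio is independent of the distance between the particles.
k = 8.9875517923e9       # Coulomb constant, N*m^2/C^2
G = 6.67430e-11          # gravitational constant, N*m^2/kg^2
e = 1.602176634e-19      # elementary charge, C
m_p = 1.67262192369e-27  # proton mass, kg
m_e = 9.1093837015e-31   # electron mass, kg

ratio = (k * e**2) / (G * m_p * m_e)
print(f"{ratio:.2e}")  # ~2.27e+39
```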

We therefore have hints some very precise fine-tuning went on here, although we have no adequate reason for thinking that it is some special blessing applying only to our local region of the universe. 

Sunday, August 21, 2016

The Thousandfold Shortfall of the Neural Reductionists

My previous three posts constitute quite an astonishing thing: a kind of neurological case for something like a human soul. Each of the posts presented an argument for the claim that your brain cannot be storing all your memories. One argument was based on the apparent impossibility of the brain ever naturally developing all of the many encoding protocols it would need to store the many different things humans store as memories. The second argument was based on the apparent impossibility of explaining how the human brain could instantly recall memories if memories are stored in particular locations in the brain, because there would be no way for the brain to know or figure out where a memory was stored. The third argument was based on the fact that humans can remember 50-year-old memories, while scientists have no plausible explanation for how the human brain could store memories for longer than a single year.

No doubt some counterarguments arose in the minds of some readers, particularly any readers prone toward neural reductionism (the idea that human mental experiences can be entirely explained by the brain). In this post I will rebut such possible counterarguments.

The most obvious counterargument that one could give would be based on the fact of strokes and Alzheimer's disease. It can be argued that such afflictions show that very long-term memories are stored in the brain. But such an objection can be easily answered.

One way of answering such an objection is to point out that the evidence on brain damage and memory problems is very mixed. The physician John Lorber documented many cases of people whose brain tissue had been largely destroyed by disease, and found that the memory and intelligence of many of them were still good. The operation called a hemispherectomy is sometimes performed on children, and involves removal of half of the brain. An article in Scientific American tells us, “Unbelievably, the surgery has no apparent effect on personality or memory.”

An even better way of answering such a counterargument is to point out that we cannot tell whether Alzheimer's patients or stroke victims have actually suffered a loss of memories. For it might be that such patients merely experience a difficulty in retrieving memories.

Imagine you are used to visiting cnn.com to get the news each morning. But one day you turn on your computer and find you can no longer access any information at cnn.com. Does this prove that the information stored at cnn.com has been lost? It certainly does not. The problem could merely be an inability for you to retrieve information at cnn.com, perhaps because of a bad internet connection. Similarly, if I write the story of my life, and place it on my bookshelf, I may one day go blind and be unable to access that information. But the information is still there on my bookshelf.

In the same vein, the memories of people with Alzheimer's may be perfectly intact, but such persons may be merely experiencing some difficulty in retrieving their memories. There are, in fact, reports of incidents called terminal lucidity, in which people suffering from memory loss or dementia suddenly regained their memories shortly before dying. Such reports tend to support the idea that memory problems such as Alzheimer's involve difficulties in retrieving memories rather than the actual destruction of memories stored in the brain. 

There is actually a way in which Alzheimer's may argue against the idea that your memories are all stored in your brain. A doctor reports the following:

One of the big challenges we face with Alzheimer's is that brain cell destruction begins years or even decades before symptoms emerge. A person whose disease process starts at age 50 might have memory loss at 75, but by the time we see the signs, the patient has lost 40 to 50 percent of their brain cells.

If your brain cells were the only place your memories were stored, why would you not notice memory loss until 40% or 50% of your brain cells were gone?

Another way of rebutting my argument about very long-term memory (and our inability to explain it) is to do an internet search for “long term memory,” and then find some biological component I didn't mention (perhaps some kind of protein), a component that is described as “playing a role in long-term memory.” You might then argue that I failed to mention that component, and that maybe it is the secret to very long-term memory.

But such an approach would be fallacious. Very confusingly, scientists use the term “long-term memory” for any memory lasting longer than a day. So anything at all that affects memories lasting longer than a day may be described as something that “plays a role” in long-term memory. In general, such things do nothing to answer the problem of very long-term memory, the problem of how memories can last for years or decades. For example, PKM protein molecules have been described as “playing a role in long-term memory.” But having lifetimes of less than a month, such molecules cannot explain how the brain could store memories for years.

Very rarely, it is suggested that maybe epigenetics can explain long-term memory. Epigenetics involves the effect of methyl molecules that can attach themselves to a DNA molecule. To use a rough analogy, we can imagine a DNA molecule as a protein recipe book, and we can imagine these methyl molecules as strips of black electrical tape that block out certain parts of that book. Another analogy is that they are like “off” switches for parts of DNA. As these methyl molecules are very simple molecules, they are quite unsuitable for information storage. Also, the methyl molecules attached to a DNA molecule are not connected to each other. So the methyl molecules involved in epigenetics lack the two characteristics of synapses that caused scientists to suspect their involvement in memory: the fact that they can have different “weights,” and the fact that they are connected to each other.

Another objection that can be made is to point out that some scientists have come up with theories trying to explain how information could be maintained long-term in synapses. Such theories are sometimes called theories of “synaptic plasticity maintenance” or “synaptic plasticity persistence,” although given that they involve attempts to show how unstable synapses could preserve information indefinitely, it might be better to call them “magic maintenance” theories. Far from debunking my objection that molecular turnover should make it impossible for synapses to maintain memories, such theories actually show that such an objection is quite weighty, and one that scientists have taken very seriously.

I can characterize such theories with an analogy. Let us imagine a young boy whose mother died a year ago. He comes to a beach where his mother used to swim, and he sees written in the wet sand a message of love.

Engaging in wishful thinking, the boy decides that the letters were written by his mother who died a year ago. When his father points out that this couldn't be, because letters in wet beach sand don't last, the boy comes up with an elaborate theory. Perhaps, the boy reasons, his mother hired some people to constantly preserve the letters she wrote in the sand. Perhaps, the boy thinks, whenever the words are overwritten by the high tide, some person hired by his mother makes sure that the letters are rewritten just as his mother wrote them.

Such a theory – basically a reality denial mechanism – would be very much like the attempts that have been made to advance theories by which synapses could preserve memories even though the protein molecules in synapses are recreated every few days. Both involve complex speculations hoping to get us to believe that some information that should be very short-lived is really very long-lived. The synapse theories in question – these “magic maintenance” theories – are not any more believable than the little boy's theory about the sand.

One such theory is a very sketchy theory of “bistable synaptic weight states,” which offers only the most fragmentary suggestion of how some special and very improbable “bistable distribution” setup might help a little – not very much, and not enough to account for memories lasting for years. But as this 2015 scientific paper states, “Despite extensive investigation, empirical evidence of a bistable distribution of two distinct synaptic weight states has not, in fact, been obtained.” The theory in question relies on an assumption of unobserved biochemical positive feedback loops, but one scientific paper notes that “modeling suggests that stochastic fluctuations of macromolecule numbers within a small volume such as a spine head are likely to destabilize steady states of biochemical positive feedback loops,” making them unsuitable for anything that can explain long-term memory.

Another such theory is a “cluster dynamics” theory advanced in this paper. The author makes a totally unwarranted ad-hoc speculative assumption. Speaking of the insertion of a new protein – where a new protein would appear when a brain cell replaces a protein, as it constantly does – the paper says, “The primary effect of this implementation is that the insertion probability at a site with many neighbors (within a cluster or on its boundary) is orders of magnitude higher than for a site with a small number of neighbors.” There is no reason for thinking that there should be any difference in these probabilities. He then attempts to show that this imagined effect can work to preserve the information in synapses, by giving us a “simplest possible case” example of a square shape that is preserved on a grid, despite turnover of its parts. But such an effect would not work in cases much more complicated than this “simplest possible” case, and would not work to preserve information more complicated than a square.

I tried creating a spreadsheet that would use this type of effect. After a great deal of trial and error, I was able to find an “intensity of neighbors” formula that would set things up so that the lower set of numbers in the photo below (marked Generation 2) recreates the data in the first set of numbers (marked Generation 1). The formula used to create Generation 2 used the same idea as the “cluster dynamics” theory – the value of an item in the second generation depends on how high the nearby values in the earlier generation are. When I tried using some square-shaped data in the Generation 1 area, the effect worked okay, and the Generation 2 area looked like the Generation 1 area.

But as soon as I tried using some data in the Generation 1 area that was not square-shaped (such as X-shaped data or E-shaped data), the effect no longer worked. Here are the results when you try some E-shaped data. The resulting Generation 2 does not look like an E-shape.

Simple as this experiment was, it captures the fatal flaw of this “cluster dynamics” theory. While it might work to explain a preservation of the simplest possible data (square shapes), it will not work to preserve any data more complicated, such as E-shapes and X-shapes. Of course, our memories involve memories of things infinitely more complicated than an X shape.
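A simple version of this experiment can be run in code. To be clear, the following is not the model from the “cluster dynamics” paper; it is a stand-in neighbor-count update rule (the rule from Conway's Game of Life) that illustrates the same fatal flaw: a solid square block is a fixed point of neighbor-based regeneration, while an E-shaped pattern is not.

```python
# A neighbor-count update rule on a grid of "on" cells, used as an
# analogy for neighbor-dependent regeneration of synaptic components.
# (This is Conway's Game of Life rule, not the paper's actual model.)

def step(cells):
    """One update: a site is on in the next generation if it has exactly
    3 on-neighbors, or if it is already on and has 2 on-neighbors."""
    candidates = {(x + dx, y + dy) for (x, y) in cells
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    def on_neighbors(site):
        x, y = site
        return sum((x + dx, y + dy) in cells
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))
    return {site for site in candidates
            if on_neighbors(site) == 3
            or (site in cells and on_neighbors(site) == 2)}

square = {(0, 0), (0, 1), (1, 0), (1, 1)}  # a solid 2x2 block
e_shape = {(0, 0), (0, 1), (0, 2), (0, 3), (0, 4),          # spine of an "E"
           (1, 0), (2, 0), (1, 2), (2, 2), (1, 4), (2, 4)}  # three arms

print(step(square) == square)    # True: the square regenerates itself
print(step(e_shape) == e_shape)  # False: the E-shape decays
```

The square survives because every one of its cells has exactly three on-neighbors; the tips of the E-shape's arms have only one on-neighbor each, so they are lost on the very first update.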

All of these “synaptic plasticity maintenance” theories have one thing in common: they have already been ruled out by experiments with long-term potentiation (LTP) and synapses. Such experiments have shown that LTP very rapidly degrades, and that synapses do not magically maintain their information states over long periods.

Another objection that could be made is one along these lines: sure, scientists don't have an explanation now for very long-term memory, but they'll find out someday how the brain is storing it. My answer to this is: no, they won't, because there are good reasons for thinking there is no place where it could be stored in the brain. The only candidates for places where very long-term memory could be stored in the brain are: synapses, DNA in nerve cells, protein molecules surrounding the DNA, other nerve cell proteins, or what is called the perineuronal net. My previous post made a good case that none of these are plausible storage places for very long-term memory. If you've ruled out all the places where something could be in some unit of space, then you have to conclude it's not in that space. For example, if you're looking for your car keys and you've ruled out all the places they might be in your bedroom, you should conclude they are not in your bedroom (rather than telling yourself: I'll find them some day hidden in my bedroom).

What is very astonishing is the difference between the longest memory that can be plausibly explained by neuroscience, and the longest memory that humans can hold. As discussed in my previous post, the best “candidate explanation” for memory is LTP (long-term potentiation), but that decays in a matter of days. The protein molecules in synapses also last no longer than two weeks. Even if you were to claim that LTP is a good mechanism for memory (something quite questionable, since it doesn't correlate highly with memory), then about the longest memory that science can currently explain (without resorting to baroque speculation) is a memory lasting about two weeks. Because humans can hold memories for 50 years, they are displaying a memory capacity about 1000 times longer than what can be plausibly explained by neuroscience. The explanatory gap here is like the gap between Jupiter and Saturn. How long will it be until we wake up, smell the coffee, and realize that to explain our minds we must go beyond the brain?
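The arithmetic behind the "about 1000 times" figure is simple:

```python
# The thousandfold shortfall: the longest memories humans hold (~50 years)
# versus the ~2-week persistence of LTP and of synaptic proteins.
days_remembered = 50 * 365   # a 50-year-old memory, in days
days_explainable = 14        # roughly the persistence of LTP, in days

ratio = days_remembered / days_explainable
print(round(ratio))  # on the order of a thousandfold gap
```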

Wednesday, August 17, 2016

There Is No Plausible Explanation For How Brains Can Store Very Long-Term Memory

Neurologists like to assume that all your memories are stored in your brain. But there are actually quite a few reasons for doubting this unproven assumption, including the research of scientists such as Karl Lashley and John Lorber. Their research showed that minds can be astonishingly functional even when large parts of the brain are destroyed, either through disease or deliberate surgical removal. Lorber documented 600 cases of people with heavy brain damage (mostly due to hydrocephalus), and found that half of them had above-average intelligence. Children with severe brain problems sometimes undergo an operation called a hemispherectomy, in which half of the brain is removed. An article in Scientific American tells us, “Unbelievably, the surgery has no apparent effect on personality or memory.” In Figure 1 of this paper, we see a brain X-ray of a person with very little brain tissue, who is described as "normal." Apparently he got along fairly well with very little brain.

Given such very astonishing anomalies, we should give serious consideration to all arguments against the claim that your brain is storing all your memories. In my previous two posts, I presented two such arguments. One argument was based on the apparent impossibility of there naturally developing all of the many encoding protocols the brain would need to store the many different things humans store as memories. The second argument I gave was based on the apparent impossibility of explaining how the human brain could ever be able to instantly recall memories, if memories are stored in particular locations in the brain, because there would be no way for the brain to figure out where in the brain a memory was stored.

In this post I will give a third argument against the claim that your brain stores all your memories. The argument can be summarized as follows: there is no plausible mechanism by which the human brain could store very long-term memories such as 50-year-old memories. None of the neurological memory theories we have can explain memories that persist for more than a year.

The Fact That Humans Can Remember Things for 50 Years

First, let's look at the basic fact of extreme long-term memory storage. It is a fact that humans can recall memories from 50 years ago. Some people have tried to suggest that perhaps human memory doesn't work for such a long time, and that remembering very old memories can be explained by the idea of what is called “rehearsal.” The idea is that perhaps a 60-year-old's recall is really just a remembering of previous recollections that he had at an earlier age. So perhaps, this idea goes, when you are 60 you are just remembering what you remembered of your childhood at 50, and at 50 you were just remembering what you remembered of your childhood at age 40, and so forth.

But such an idea has been disproved by experiments. A scientific study by Bahrick showed that “large portions of the originally acquired information remain accessible for over 50 years in spite of the fact the information is not used or rehearsed.” The same researcher tested a large number of subjects to find out how well they could recall the faces of high school classmates, and found very substantial recall even with a group that had graduated 47 years ago. Bahrick reported the following:

Subjects are able to identify about 90% of the names and faces of their classmates at graduation. The visual information is retained virtually unimpaired for at least 35 years... Free-recall probability does not diminish over 50 yr for names of classmates assigned to one or more of the Relationship Categories A through F.

I know for a fact that memories can persist for 50 years, without rehearsal. Recently I was trying to recall all kinds of details from my childhood, and recalled the names of persons I hadn't thought about for decades, as well as a Christmas incident I hadn't thought of for 50 years (I confirmed my recollection by asking my older brother about it). Digging through my memories, I was able to recall the colors (gold and purple) of a gym uniform I wore, something I haven't thought about (nor seen in a photograph) for some 47 years. Upon looking through a list of old children's shows from the 1960's, I saw the title “Lippy the Lion and Hardy Har Har,” which ran from 1962 to 1963 (and was not syndicated in repeats, to the best of my knowledge). I then immediately sang part of the melody of the very catchy theme song, which I hadn't heard in 53 years. I then looked up a clip on youtube.com, and verified that my recall was exactly correct. This proves that a 53-year-old memory can be instantly recalled.

So in trying to explain human memory, we need a theory that can explain human memories persisting for 50 years. Very confusingly, scientists use the term “long-term memory” for any memory lasting longer than an hour, which is very unfortunate because almost everything you will find on the internet (searching for “long term memory”) does not actually explain very long-term memory, such as memories lasting for 50 years.

Why LTP and Synapse Plasticity Cannot Explain Very Long-Term Memory

Now let's look at neuroscientists' theories of memory. Quora.com is an “expert answer” website which claims to give “the best answer to any question.” One of its web pages asks the question, “How are memories stored and retrieved in the human brain?” The top answer (the one with the most upvotes) is by Paul King, a computational neuroscientist. King very dogmatically gives us the following answer:

At the most basic level, memories are stored as microscopic chemical changes at the connection points between neurons in the brain... As information flows through the neural circuits and networks of the brain, the activity of the neurons causes the connection points to become stronger or weaker in response. The strengthening and weakening of the synapses (synaptic plasticity) is how the brain stores information. The mechanism behind this is called “long-term potentiation” or “LTP.”

But there is actually no proof that any information is being stored when synapses are strengthened. From the mere fact that synapses may be strengthened when learning occurs, we are not entitled to deduce that information is being stored in synapses, for we also see blood vessels in the leg strengthen after repeated exercise, and that does not involve information storage. In order to actually prove that a synapse is storing information, you would need to do an experiment such as having one scientist store a symbol in an animal's brain (by training), and then have another scientist (unaware of what symbol had been stored) read that symbol from some synapses in the animal's brain, correctly identifying the symbol. No such experiment has ever been done.

The evidence does not even clearly indicate that LTP correlates with memory, as the following scientist's summary of experimental results indicates (a summary utterly inconsistent with the claim that LTP is a general mechanism explaining memory).

What this means is that LTP and memory have been dissociated from each other in almost every conceivable fashion. LTP can be decreased and memory enhanced. Hippocampus-dependent memory deficits can occur with no discernable effect on LTP...There will be no direct quantitative or even qualitative relationship between LTP measured experimentally and memory measured experimentally—that is already abundantly clear from the available literature...The most damning observations probably are those examples where LTP is completely lost and there is no effect on hippocampus-dependent memory formation.

A scientific paper states this about LTP:

Based on the data reviewed here, it does not appear that the induction of LTP is a necessary or sufficient condition for the storage of new memories.

What is misleadingly called “long-term potentiation” or LTP is a not-very-long-lasting effect by which certain types of high-frequency stimulation (such as stimulation by electrodes) produce an increase in synaptic strength. A synapse is a junction between nerve cells, including a tiny gap that neurotransmitter molecules cross. The evidence that LTP even occurs when people remember things is not very strong, and in 1999 a scientist stated (after decades of research on LTP) the following:

[Scientists] have never been able to see it and actually correlate it with learning and memory. In other words, they've never been able to train an animal, look inside the brain, and see evidence that LTP occurred.

Since then a few studies have claimed to find evidence that LTP occurred during learning. But there is actually an insuperable problem in the idea that long-term potentiation could explain very long-term memories. The problem is that so-called long-term potentiation is actually a very short-term phenomenon. Speaking of long-term potentiation (LTP), and using the term “decays to baseline levels” (which means “disappears”), a scientific paper says the following:

Potentiation almost always decays to baseline levels within a week. These results suggest that while LTP is long-lasting, it does not correspond to the time course of a typical long-term memory. It is recognized that many memories do not last a life-time, but taking this point into consideration, we would then have to propose that LTP is only involved in the storage of short-term to intermediate memories. Again, we would be at a loss for a brain mechanism for the storage of a long-term memory.

A more recent scientific paper (published in 2013) says something similar, although it tells us even more strongly that so-called long-term potentiation (LTP) is really a very short-term affair. For it tells us that “in general LTP decays back to baseline within a few hours.” “Decays back to baseline” means the same as “vanishes.”

Another 2013 paper agrees that so-called long-term potentiation is really very short-lived:

LTP always decays and usually does so rapidly. Its rate of decay is measured in hours or days (for review, see Abraham 2003). Even with extended “training,” a decay to baseline levels is observed within days to a week.


So evidently long-term potentiation cannot be the foundation or mechanism of long-term memories. This is the conclusion reached by the paper just quoted, which says this about long-term potentiation (LTP):

In summary, if synaptic LTP is the mechanism of associative learning—and more generally, of memory—then it is disappointing that its properties explain neither the basic properties of associative learning nor the essential property of a memory mechanism. This dual failure contrasts instructively with the success of the hypothesis that DNA is the physical realization of the gene.

But what about synaptic plasticity, previously mentioned in my quote from the neuroscientist King? Since King claimed that LTP is the mechanism behind synaptic plasticity, and LTP cannot explain any memory lasting longer than a year, synaptic plasticity will not work to explain very long-term memories either.

Why Synapses Cannot Explain Very Long-Term Memory

Long-term memory cannot be stored in synapses, because synapses don't last long enough. Below is a quote from a scientific paper:

A quantitative value has been attached to the synaptic turnover rate by Stettler et al (2006), who examined the appearance and disappearance of axonal boutons in the intact visual cortex in monkeys... and found the turnover rate to be 7% per week which would give the average synapse a lifetime of a little over 3 months.

You can read Stettler's paper here.
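The lifetime figure in that quote follows from simple arithmetic: with a constant turnover rate, the average lifetime is just the reciprocal of the weekly rate. A minimal sketch of the calculation, assuming the 7%-per-week figure from Stettler et al:

```python
# Average synapse lifetime implied by a constant turnover rate.
# Assumption: 7% of synapses are replaced each week (Stettler et al. 2006).
weekly_turnover = 0.07

mean_lifetime_weeks = 1 / weekly_turnover           # reciprocal of the rate
mean_lifetime_months = mean_lifetime_weeks / 4.345  # average weeks per month

print(round(mean_lifetime_weeks, 1))   # 14.3 weeks
print(round(mean_lifetime_months, 1))  # 3.3 months -- "a little over 3 months"
```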

You can google for “synaptic turnover rate” for more information. We cannot believe that synapses can store memories for 50 years if synapses only have an average lifetime of about 3 months. Furthermore, it is known that the proteins existing between the two knobs of a synapse (the very proteins involved in synapse strengthening) are very short-lived, having average lifetimes of no more than a few days. A graduate student studying memory states it like this:

It’s long been thought that memories are maintained by the strengthening of synapses, but we know that the proteins involved in that strengthening are very unstable. They turn over on the scale of hours to, at most, a few days.

A scientific paper states the same thing:

Experience-dependent behavioral memories can last a lifetime, whereas even a long-lived protein or mRNA molecule has a half-life of around 24 hrs. Thus, the constituent molecules that subserve the maintenance of a memory will have completely turned over, i.e. have been broken down and resynthesized, over the course of about 1 week.

The paper cited above also states this (page 6):

The mutually opposing effects of LTP and LTD further add to the eventual disappearance of the memory maintained in the form of synaptic strengths. Successive events of LTP and LTD, occurring in diverse and unrelated contexts, counteract and overwrite each other and will, as time goes by, tend to obliterate old patterns of synaptic weights, covering them with layers of new ones. Once again, we are led to the conclusion that the pattern of synaptic strengths cannot be relied upon to preserve, for instance, childhood memories.

When you think about synapses, visualize the edge of a seashore. Just as writing in the sand is a completely unstable way to store information, synapses are a completely unstable place to hold long-term information. The proteins in synapses are turning over very rapidly (lasting no longer than about a week), and the entire synapse is replaced every few months.
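The instability of those synaptic proteins can be put in numbers with a rough half-life calculation. A sketch, assuming simple exponential decay and the roughly 24-hour half-life quoted above:

```python
# Fraction of today's synaptic protein molecules still present after some days,
# assuming a 24-hour half-life and simple exponential decay.
HALF_LIFE_DAYS = 1.0

def fraction_remaining(days: float, half_life: float = HALF_LIFE_DAYS) -> float:
    return 0.5 ** (days / half_life)

print(fraction_remaining(7))   # 0.0078125 -- under 1% survives a week
print(fraction_remaining(30))  # ~9.3e-10 -- essentially nothing after a month
```

So on these assumptions, virtually none of the protein molecules doing the "storing" today would still exist a month from now.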

In November 2014 UCLA professor David Glanzman and his colleagues published a scientific paper reporting their research results. The authors said, “These results challenge the idea that stable synapses store long-term memories.” Scientific American published an article on this research, an article entitled “Memories May Not Live in Neuron's Synapses.” Glanzman stated, “Long-term memory is not stored at the synapse,” thereby contradicting decades of statements by neuroscientists who have dogmatically made unwarranted claims that long-term memory is stored in synapses.

Why Very Long-Term Memories Cannot Be Stored in the Cell Nucleus

His research has led Glanzman to a radical new idea: that memories are stored not in synapses but in the nerve cell nucleus. In fact, in this TED talk Glanzman dogmatically declares this doctrine. At 15:34 in the talk, Glanzman says, “memories are stored in the cell nucleus – it is stored as changes in chromatin.” This is not at all what neuroscientists have been telling us for the past 20 years, and few other neuroscientists have supported such an idea.

We should be extremely suspicious and skeptical whenever scientists suddenly start giving some new answer to a fundamental question, an answer completely different from the answer they have been dogmatically declaring for years. For example, if scientists were to suddenly start telling us that galaxies are held together not by gravity (as they've been telling us for decades) but by, say, “dark energy pulsations,” we should be extremely skeptical that the new explanation is correct. In this case, there are very good reasons why Glanzman's recently hatched answer to where long-term memories are stored cannot be right.

Chromatin is a term meaning DNA and surrounding histone protein molecules. Histone molecules are not suitable for storing very long-term memories because they are too short-lived. A scientific paper tells us that the half-life of histones in the brain is only about 223 days, meaning that every 223 days half of the histone molecules will be replaced.
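To see how unstable a 223-day half-life is on the timescale of human memory, consider this back-of-the-envelope sketch (simple exponential decay assumed):

```python
# Fraction of the original histone molecules surviving 50 years,
# assuming a 223-day half-life and simple exponential decay.
half_life_days = 223
days = 50 * 365.25  # 50 years

half_lives_elapsed = days / half_life_days
fraction_left = 0.5 ** half_lives_elapsed

print(round(half_lives_elapsed, 1))  # ~81.9 half-lives elapse in 50 years
print(fraction_left)                 # ~2e-25 -- effectively zero survives
```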

So histone molecules are not a stable platform for storing very long-term memories. But what about DNA? The DNA molecule is stable. But there are several reasons why your DNA molecules cannot be storing your memories. The first reason is that your DNA molecules are already used for another purpose – the storing of genetic information used in making proteins. DNA molecules are like a book that already has its pages printed, not a book with empty pages that you can fill. The second reason is that DNA molecules use a bare-bones chemical language (nucleotide triplets specifying amino acids) quite unsuitable for writing all the different types of human memories. The idea that somewhere your DNA holds a memory of your childhood summer vacations (expressed in such a language) is laughable.

The third reason is that the DNA of humans has been exhaustively analyzed by multi-year projects such as the Human Genome Project and the ENCODE project, as well as by various companies that specialize in the personal analysis of the DNA of individual humans. Despite all of this investigation and analysis, no one has found any trace whatsoever of any type of real human memory (long-term or short-term) being stored in DNA. If you do a Google search for “can DNA store memories,” you will see various articles (most of them loosely worded, speculative, and exaggerated) that discuss genetic effects (such as gene expression) that are not the same as the actual storage of a human memory. Such articles are typically written by people using the word “memories” in a very loose sense, not actually referring to memories in the precise sense of a recollection.

The fourth reason is that there is no known bodily mechanism by which lots of new information can be written to the storage area inside a DNA molecule.

To completely defeat the idea that your memories may be stored in your DNA, I will merely remind the reader that DNA molecules are not read by brains – they are read by cells. It takes about 1 minute for a cell to read only the small part of the DNA needed to make a single protein (and DNA has recipes for thousands of proteins). If your memories were stored in DNA, it would take you hours to remember things that you can actually recall instantly. Thinking that DNA can store memories is like thinking that your refrigerator can cook a steak.
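The speed argument above can be made concrete with some rough arithmetic. In the sketch below, the one-minute-per-protein read time comes from the paragraph above, while the number of gene-length segments a complex memory might occupy is a purely hypothetical illustrative figure:

```python
# Rough recall time if remembering required transcribing stored DNA.
# The per-gene read time (~1 minute) is from the text above; the number of
# gene-length segments per complex memory is a purely hypothetical figure.
minutes_per_gene = 1
genes_per_memory = 200  # hypothetical illustrative value

recall_minutes = minutes_per_gene * genes_per_memory
print(recall_minutes / 60)  # over 3 hours, versus near-instant actual recall
```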

But couldn't very long-term memories just be stored in some unknown part of a neuron? No, because the proteins that make up neurons have short lifetimes. A scientist explains the timescales:

Protein half-lives in the cell range from about 2 minutes to about 20 hours, and half-lives of proteins typically are in the 2- to 4-hour time range. Okay, you say, that's fine for proteins, but what about "stable" things like the plasma membrane and the cytoskeleton? Neuronal membrane phospholipids turn over with half-lives in the minutes-to-hours range as well. The vast majority of actin microfilaments in dendritic spines of hippocampal pyramidal neurons turn over with astonishing rapidity—the average turnover time for an actin microfilament in a dendritic spine is 44 seconds...As a first approximation, the entirety of the functional components of your whole CNS [central nervous system] have been broken down and resynthesized over a 2-month time span. This should scare you. Your apparent stability as an individual is a perceptual illusion.

When It Comes to Explaining Very Long-Term Memory, Our Neuroscientists Are in Disarray

So how can we summarize the current state of scientific thought on how long-term memory is stored? The word that comes to mind is: disarray. In this matter our scientists are flailing about, wobbling this way and that, but they aren't getting anywhere in terms of presenting a plausible answer as to how very long-term memory can be stored in the brain. Our scientists have done nothing to plausibly solve the permanence problem – the problem that very long-term memories cannot be explained by invoking transient “shifting sands” mechanisms such as LTP, which lasts much less than a year, or neurons, whose functional components are rebuilt every two months due to protein turnover. On this matter our scientists have merely presented explanatory facades – theories that do not hold up to scrutiny, like some movie studio facade that you can see is a fake when you walk around and look behind it, finding no rooms behind the front.


Another sign of this disarray is a 2013 scientific paper with the title “Very long-term memories may be stored in the pattern of holes in the perineuronal net.” After basically explaining in its first paragraph why current theories of long-term memory do not work and are not plausible, the author goes on to suggest a wildly imaginative and absurdly ornate speculation: that perhaps the brain is a kind of giant 3D punchcard, storing information the way data used to be stored on the old 2D punchcards used by IBM electronic machinery in the 1970s. The author provides no good evidence for this wacky speculation, mainly discussing imaginary experiments that would lend support to it. The very appearance of such a paper is another sign that scientists currently have no good explanation for very long-term memory. I may note that IBM punchcards only worked because they were read by IBM punchcard-reader machines. In order for the brain to work as a giant 3D punchcard, we would have to imagine a brain-reader machine that is nowhere to be found in the human body. There has never existed such a thing as a punchcard that can read itself.

Often the modern neuroscientist will engage in pretentious talk which makes it sound as if there is some understanding of how very long-term memory storage can occur. But just occasionally we will get a little candor from our neuroscientists, such as when neuroscientist Sakina Palida admitted in 2015, “Up to this point, we still don’t understand how we maintain memories in our brains for up to our entire lifetimes.”


For the reasons given above, there is no plausible mechanism by which brains such as ours could be storing memories lasting longer than a year. There are only a few possible physical candidates for things that might store very long-term memory in our brain, and as we have seen, none of them are plausible candidates for a storage of very long-term memory.

So given this explanatory failure and the proven fact that human memories can last for 50 years, we must reject neural reductionism, the idea that human mental experiences can be fully explained by the brain. We must postulate that very long-term memory involves some mysterious reality that transcends the human brain – presumably some soul reality or spiritual reality. The reasons for this rejection include not just the matter discussed in this post, but the equally weighty reasons given in my two previous posts: the fact that the storage of all memories in the human brain would involve insuperable encoding problems (as discussed here), and the “instant retrieval” problem (discussed here) – that there is no way to explain how your brain could know where to find a particular stored memory if the memory was stored in your brain.

The fact that our neuroscientists claim to have theories as to how very long-term memories could be stored does not mean that any such theory is tenable. Imagine if you lived on a planet on which consciousness and long-term memory were due to a soul, and the first time scientists dissected a brain, they found that it was filled with sawdust. No doubt such scientists would get busy inventing clever theories purporting to explain how sawdust can generate consciousness and long-term memories.

I may note that memories stretching back 50 years are inexplicable not merely from a neurological standpoint but also from a Darwinian standpoint. As I will argue in another post, from the standpoint of survival of the fittest and natural selection, there is no reason why any primate organism should ever need to remember anything for longer than about a year or two (it would work just fine to simply keep remembering the previous year's memories). I may note that according to a Wikipedia article, the average life span in the Bronze Age was only 26 years. There is no reason why natural selection (prior to the Bronze Age) would have equipped us to remember things for a length of time twice the average Bronze Age life span, and it is not plausible that very long-term memories are a recent evolutionary development.

Some objections can be made against my claim that very long-term memories cannot be stored in the brain. One such objection involves the fact of memory impairment in Alzheimer's disease and stroke. This objection is very easily answered, and I will do so in my next post.

Postscript: I forgot to mention capacity considerations that give another reason for ruling out DNA as a storage place for human memory. It has been estimated that 1 gigabyte of memory (1000 megabytes) can store about 3000 books. The entire storage capacity of a DNA molecule is only about 750 megabytes. But the memory savant Kim Peek was able to remember the entire contents of 10,000 books (in addition to countless other things). Even if we assume 250 megabytes of free storage available in a DNA molecule, it wouldn't be a tenth of what is needed to store human memories, and would probably be less than a hundredth.
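The capacity comparison in this postscript is easy to check with a little arithmetic, using the figures given above (about 3,000 books per gigabyte, roughly 750 megabytes of total DNA capacity, and 10,000 books memorized):

```python
# Capacity check: could DNA hold a memory savant's recall of 10,000 books?
books_per_gigabyte = 3000
megabytes_per_book = 1000 / books_per_gigabyte  # ~0.33 MB per book

books_remembered = 10_000  # Kim Peek
storage_needed_mb = books_remembered * megabytes_per_book

free_dna_mb = 250  # the generous free-storage assumption from the text

print(round(storage_needed_mb))         # ~3333 MB needed for the books alone
print(free_dna_mb / storage_needed_mb)  # 0.075 -- less than a tenth of need
```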