
Our future, our universe, and other weighty topics

Tuesday, July 30, 2019

Integrated Information Theory Is an Explanatory Flop

Neuroscientists have no credible explanations for the most important mental phenomena, such as consciousness and memory. All that scientists have in this regard are some mangy speculations that don't hold up to scrutiny. I have often written about the deficiencies of neuroscientists' theories of memory. I will now look at the poor quality of one of the theories put forth as a “theory that might explain consciousness”: what is called integrated information theory.

Before discussing the theory, I must discuss the general problem with any theory postulating that consciousness or human mentality is a product of the brain. The problem becomes apparent when we consider that a brain is something material or physical, while consciousness or mentality is something immaterial or mental. There seems nothing wrong with the general idea of a physical cause producing a physical effect, and we know of many examples of physical causes producing physical effects (such as tsunamis producing flooding or lightning producing electrocution). There also seems nothing wrong with the general idea of a mental cause producing a mental effect (for example, the knowledge that you will soon die may produce the mental effect of anxiety). But there does seem something wrong with the idea of a physical cause producing a mental phenomenon such as consciousness, imagination, or understanding. To many a philosopher, the idea that some arrangement of cells might cause an idea to pop up seems no more reasonable than the idea that some arrangement of thoughts in your mind might conjure up a physical object.

Now I will discuss integrated information theory. The theory is presented in this paper, and summarized in this wikipedia.org article. The visual below (taken from the wikipedia.org article) presents the reasoning at the core of integrated information theory, mirroring what is presented in the paper. In the left column is a list of “axioms” about consciousness, an axiom being something that is self-evidently true. In the visual, these “axioms” are described as “essential properties of every experience.” The first one is evident: that “consciousness exists.”

integrated information theory
From the wikipedia.org article

But others in this list of “axioms” are not so evident at all. The second “axiom” is that “Consciousness is structured: each experience is composed of phenomenological distinctions.” Most experiences do consist of multiple aspects, but it is quite possible to have an experience that is not structured and is not composed of multiple aspects. Lying motionless on a bed with my eyes closed, I can think of nothing but the blackness of outer space (or think of absolutely nothing at all). There is nothing structured about that, and it doesn't consist of multiple aspects. On the right side of the visual are what are called “postulates.” These are described as “properties that physical systems (elements in a state) must have to account for an experience.”

What is going on here is just a big example of begging the question, of assuming what was supposed to be proved. The creator of this scheme has simply taken it for granted that some physical system with some set of properties can give rise to a conscious experience. Before looking at such a set of properties, we should reject the underlying assumption. It actually seems that there is no physical system that can account for conscious experiences. We can think of no reason why neurons or any group of neurons should give rise to conscious experience or self-hood.

I can give an analogy for the type of reasoning going on here. Suppose someone were to try to explain rock levitation by making a list of the properties that a rock levitation incantation would need to succeed. The person might start listing properties such as (1) a specification of which rock to levitate; (2) a specification of how high the rock should be levitated; (3) an appeal to some deity or to spirits of the dead. He might then claim that his rock levitation incantation met all of these properties, and that this explains how he was able to levitate a rock. But before giving much scrutiny to this “set of properties that a rock levitation incantation would need to have,” we should veto such a list at the very beginning, saying, “It's not fair to present such a list unless you have first proven that rocks can be levitated by incantations.” And similarly, to the person who would start listing “properties that physical systems (elements in a state) must have to account for an experience,” we should not at all concede the possibility of such a thing, but demand first that someone show why we should believe that a physical system could ever account for a conscious experience.

I won't go too much into the details of the “postulates” in the second column of the visual, other than to note that they are as doubtful as some of the axioms in the first column from which these postulates supposedly derive. The paper consists largely of specialized jargon and doubtful mathematics, perhaps to provide some imposing sounds and sparkles to impress the easily impressed.

Integrated information theory claims that all systems with integrated information have some level of consciousness, and that the brain has high consciousness because it has lots of integrated information in the form of stored memories. But there is actually no evidence that integrated information exists in the brain. The claim that memories are stored in brains is simply a speech custom of scientists, a dogma they keep stating without any proof. There are extremely strong reasons for thinking that this dogma cannot be true. They include the following:

  1. Synapses (the supposed storage place of memories) are made of proteins with an average lifetime of only a few weeks, which is only about a thousandth of the maximum length of time that humans can remember things.
  2. There is so much noise in synapses and neurons that accurate recall of large amounts of information should be impossible if you remembered by reading things from your brain.
  3. There is no credible theory of how a brain could instantly retrieve a memory, such as when you instantly recall information about someone after merely hearing their name. Finding such a memory instantly in the brain would be like instantly finding a needle in a haystack. 
  4. No one has discovered any actual example of learned knowledge or episodic memories in any bit of brain tissue.
  5. Brains seem to have neither a mechanism for writing a memory nor a mechanism for reading a memory.
  6. No one has conceived of a detailed theory explaining how human knowledge could ever be translated into neural states so that it could be stored in brains.
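The arithmetic behind the first point is easy to check. A rough sketch in Python (the two-week protein lifetime and 70-year memory span are ballpark assumptions, not measured figures):

```python
# Ballpark comparison of synaptic protein lifetime with the longest
# span over which humans demonstrably retain memories.
weeks_per_year = 52
protein_lifetime_weeks = 2                 # assumed: "a few weeks"
max_memory_span_years = 70                 # assumed: childhood to old age
max_memory_span_weeks = max_memory_span_years * weeks_per_year  # 3640

ratio = max_memory_span_weeks / protein_lifetime_weeks
print(f"Memory span is roughly {ratio:.0f} times the protein lifetime")
```

With these illustrative numbers the ratio comes out near two thousand, so the "thousandth" figure above is the right order of magnitude.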

We know there is genome information in each neuron, but that is not integrated information. It's just massively redundant information, with each DNA molecule in a neuron duplicating the same information. As for the claim of integrated information theory that all systems with integrated information are partially conscious, such a claim has absurd implications, such as the implication that your thermostat must be kind of conscious or that your smartphone must be kind of like a person.

Another defect of integrated information theory is that it makes no sense to try to present a "theory of consciousness" in isolation, because the thing that needs to be explained is human mentality in all its aspects, and consciousness is only one of those aspects.  Given all the many aspects and capabilities of the human mind (including memory, imagination, volition, emotion, abstract reasoning, understanding, self-hood, and many others),  trying to explain the human mind with a mere "theory of consciousness" is rather like advancing a theory of the earth's origin which merely explains the earth's shape rather than also explaining the earth's mass, position and composition.  

I can imagine a computer exercise that helps to illustrate why there is no sense at all in the idea that integrated information produces consciousness. Let us imagine that you have a website rather like wikipedia. Suppose the website consists of 10 million pages, each of which has text scanned in from an old encyclopedia. Now, you might want to make this information more integrated. So you might write a computer program that scans through all these web pages, creating hyperlinks that add the integration. After the program finishes running on all these pages, there would be many millions of hyperlinks integrating the information. So, for example, whenever a reader came to a page with a title of “World War II,” all of the people names, place names, and weapon names would appear as hyperlinks that a reader can click to go to a page discussing that particular person, place or weapon. And in each of those articles there would be a link back to the general article on World War II.

Now, imagine that after testing this program, you run it on all of your 10 million web pages, creating millions of hyperlinks and vastly increasing the amount of integrated information. According to integrated information theory, you would then have made your website more conscious than it was before, because now the information is a lot more integrated. But that's nonsensical. There is not the slightest reason to suppose that your website would be any more conscious than before. And if you had a massive website with a trillion pages, and you ran such a program to create a quadrillion hyperlinks, creating a vast mountain of integrated information, there would still not be the slightest reason to think that this vast leap in integrated information had made your website the slightest bit more conscious.
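The hyperlink-adding program described above can be sketched in a few lines. This is a toy illustration (the page titles and the bracket "link" syntax are invented), but it shows how mechanical such "integration" is:

```python
# Toy sketch of the hyperlink-adding program: pages are just
# title -> text, and we "integrate" the information by turning
# every occurrence of another page's title into a link marker.
pages = {
    "World War II": "The war involved Winston Churchill and the Spitfire.",
    "Winston Churchill": "Churchill led Britain during World War II.",
    "Spitfire": "The Spitfire was a fighter used in World War II.",
}

def add_links(pages):
    linked = {}
    for title, text in pages.items():
        for other in pages:
            if other != title and other in text:
                text = text.replace(other, f"[{other}]")  # mark as a hyperlink
        linked[title] = text
    return linked

linked = add_links(pages)
print(linked["Winston Churchill"])
```

Running the same loop over 10 million pages instead of three changes the quantity of integrated information, not its nature; nothing in the process looks like a candidate for producing consciousness.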

Scott Aaronson (not to be confused with the cartoonist Scott Adams) has stated the following about integrated information theory (IIT):

"In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly 'conscious' at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data.  Moreover, IIT predicts not merely that these systems are 'slightly' conscious (which would be fine), but that they can be unboundedly more conscious than humans are."

Information and integrated information are things that can be produced by conscious agents. But neither information nor integrated information does anything to explain why consciousness exists. Similarly, people make art, but art does nothing to explain why people exist.

You may object to what I have stated about memory and the brain, pointing out that a few days ago there was an article in the esteemed journal Nature which stated the following:

"Researchers know that memories are encoded in the mammalian brain when the strength of the connection between neurons increases. That connection strength is determined by the amount of a particular type of receptor found at the synapse."

This sounds very confident, but when we read the quote in context, we should lose all confidence in the claim. Here is the full quote:

"Researchers know that memories are encoded in the mammalian brain when the strength of the connection between neurons increases. That connection strength is determined by the amount of a particular type of receptor found at the synapse. Known as AMPA receptors, the presence of these structures must be maintained for a memory to remain intact. 'The problem, Hardt says, 'is that none of these receptors are stable. They are moved in and out of the synapse constantly and turn over in hours or days.' "

The latter part of the quote should cause us to lose all confidence in the first part of the quote.  Given such instability in synapses, they cannot be the storage place for memories that last for decades.  The author of the article is a science journalist, and science journalists have a history of uncritically regurgitating dubious claims by theorists and researchers. 

Rather than making the factually inaccurate claim that "researchers know that memories are encoded in the mammalian brain when the strength of the connection between neurons increases," along with the claim that this occurs at synapses, our science journalist should have noted that this very month the journal Nature published a scientific paper disputing such a claim. The paper stated the following: "There are nonetheless both theoretical arguments and experimental data against the idea that long-term memory resides in synapses with learning-altered weights," and also, "Whether or not memory is necessarily stored at synapses is still unclear."

Friday, July 26, 2019

A Strange Double Standard of Mainstream Academia

The object called 'Oumuamua is an unusual object that entered the solar system in 2017. Two Harvard scientists (Abraham Loeb and Shmuel Bialy) have written a paper speculating that the object might have been a probe designed by extraterrestrials. This paper has triggered much coverage in the popular press.

One unusual thing cited about 'Oumuamua is its shape. We have been repeatedly told that the object was cigar-shaped, and press coverage has repeatedly shown a visual of a cigar-shaped object. But that visual is not an actual photo; it is a speculative “artist's visualization.” No actual photographs have been taken of 'Oumuamua. In a recent Scientific American article, Loeb says this:

"We do not have a photo of ‘Oumuamua, but its brightness owing to reflected sunlight varied by a factor of 10 as it rotated periodically every eight hours. This implies that ‘Oumuamua has an extreme elongated shape with its length at least five to 10 times larger than its projected width."

So 'Oumuamua may be only five times longer than its width, which would make it merely pickle-shaped rather than cigar-shaped. A pickle-shaped object is not particularly strange, and many asteroids and comets have such a shape. 

There are some reasons for doubting that 'Oumuamua was sent by extraterrestrials. The first is that  'Oumuamua has a tumbling motion, traveling in an end-over-end manner like a potato tumbling down some stairs.  It seems that no one would design a spacecraft with such a motion, since it would prevent the spacecraft from traveling in a particular direction using a rocket thrust.  The second reason is that 'Oumuamua came nowhere close to Earth.  The third is that the object did not transmit radio signals. Scientists searched for radio signals from 'Oumuamua, and detected none. Some University of Maryland scientists recently concluded that 'Oumuamua was not an interstellar spacecraft. 

So there has been this back and forth between scientists about whether an object resembling an asteroid was a product of deliberate design, and such activity was regarded by all as being very scientific. 


Another place in which scientists look for design is in deep-space radio waves.  Radio waves are naturally produced by quite a few astronomical bodies or large masses of gas floating about in space. But there is a group of scientists called SETI scientists who scan radio waves from space, looking for one that is the product of design.  The idea is to look for a radio signal that was deliberately sent by an extraterrestrial civilization. 

Recently these SETI scientists reported a discouraging result. The "Breakthrough Listen" project reported that it had spent years searching 1,327 nearby stars for extraterrestrial radio signals, but had not found any that could be classified as designed radio signals deliberately sent by intelligent beings. These and all other efforts to find deliberately designed extraterrestrial radio signals have failed. But nonetheless the search for design in radio waves from deep space is regarded by all as being very scientific, and the results were written up in a regular science journal.


Such deep-space efforts have been futile. But there's another place where we can look for evidence of design, a place not far away. We can look for evidence of design in the very proteins and cells that make up our body.  This at first seems like a very promising line of inquiry. Our bodies are made up of more than 20,000 types of protein molecules. Our biologists have yet to give us a credible explanation for the origin of any of these protein molecules, each of which is as complex as a 60-line computer subroutine.  

A scientific paper entitled "Gene duplication and the origin of novel proteins" (written by an orthodox biologist) almost confesses this shortfall, for it states  (talking about the origin of proteins) "cases where  we can reconstruct with any confidence the evolutionary steps involved in the functional diversification are relatively few," and tells us that "positive Darwinian selection" (the core of evolutionary explanations) "is thought to be relatively rare at the molecular level." In the same vein, this long recent review of the topic of protein evolution states this near its end (referring to protein folds that are crucial to the functional performance of protein molecules):

"It is not clear how natural selection can operate in the origin of folds or active site architecture. It is equally unclear how either micromutations or macromutations could repeatedly and reliably lead to large evolutionary transitions. What remains is a deep, tantalizing, perhaps immovable mystery."

Similarly, many of the cells in our bodies are of fantastic complexity, being so complex that they have been compared to factories or small cities.  Explanations for the origins of such things will not be found in the works of Darwin, who had no inkling of the complexity of protein molecules and cells, and who incorrectly regarded cells as being almost featureless little blobs.  Showing rare candor about the explanatory shortcomings of modern biology, a biology professor confesses the following:  "The processes underlying evolutionary innovation are, however, remarkably poorly understood, which leaves us at a surprising conundrum: while biologists have made great progress over the past century and a half in understanding how existing traits diversify, we have made relatively little progress in understanding how novel traits come into being in the first place." Stomping upon the cherished legend that such novel traits are explained by natural selection, the same professor tells us, "It is difficult to see how selection could have played a role in the origin of novel traits and functions." 

But as soon as it is suggested that we should search for or ponder evidence of design in our own cells and proteins,  a loud howl of protest comes from many of our scientists, who say that we can't do that because it's not scientific. 

You will notice the absurd hypocrisy and double standard that is involved here. How can it be very scientific to look for evidence of design in deep-space radio waves and deep-space objects resembling asteroids, but unscientific to be looking for evidence of design in our own biology?

Someone might argue that it is scientific to look for design in deep-space objects or deep-space radio waves, but not scientific to look for design in the cells and proteins of our bodies, on the grounds that design in something like deep-space radio waves could only come from a natural source (extraterrestrial civilizations), while design found in cells or proteins could only come from a supernatural source. This argument is not valid, for three reasons:

(1) It is just as possible that a designed radio message might come from a supernatural source as from a natural source. Before the SETI search for extraterrestrial radio signals, it was proposed that a deity might communicate with humans through radio messages, and the idea was the plot of a Hollywood movie. Believers in what is called EVP actually maintain that we can pick up radio messages from the deceased on the Other Side. So searching for mysterious unexplained radio signals is not necessarily a "natural causes only" type of activity.
(2) It is just as possible that design discovered in cells or proteins might have a natural source as a supernatural source. Designed cells or designed proteins might have come from extraterrestrial visitors that arrived here long ago. Or they might have come from some currently unfathomable top-down information principle or organizational principle that could ultimately be classified as natural, centuries from now when it is understood.
(3) We must reject the whole idea of judging whether a research inquiry is scientific based on the cause of an effect.  The cause of an effect has nothing to do with whether it is or is not scientific to research the effect. 

It is clear that there is no consistent intellectual principle that can bless as scientific a search for design in deep-space radio waves or deep-space asteroids, but condemn as unscientific a search for design in the cells and proteins of our bodies. If you bless as scientific a search for design in deep-space radio waves or deep-space objects, but condemn as unscientific a search for design in the cells and proteins of our bodies, you are simply guilty of double-standards hypocrisy, like someone who says that it's okay if billionaires have three-week vacations, but it's not okay if factory workers take such vacations. 

Below is a video of Jeannie C. Riley singing "Harper Valley PTA," which is the best song I know of on the topic of double standards and hypocrisy:

Monday, July 22, 2019

There Is No Good Evidence for a Neural Hallmark of Conceptual Learning or Memory Storage

If memories were stored in brains, we would expect that when a person learned something, there would be some type of physical change in the brain that could be observed, although it might be a tiny subtle thing that was hard to identify.  We can call such a thing a neural hallmark of memory. But no neural hallmark of conceptual or episodic memory has ever been observed.  

Let us imagine two different experimental subjects, either animals or humans. Imagine that the brains of both are thoroughly scanned, in an attempt to determine the exact state of their brains.  Then imagine the first subject was immobilized in a black silent room for ten hours, and the second subject experienced an intense learning experience for ten hours.  If memories are stored in brains, it should be possible to detect some change in the brain that the second subject had that the first did not. 

No test like this has ever produced good evidence of a neural hallmark of conceptual learning or knowledge acquisition. But there have been some experiments similar to the one described above, and some have claimed to find evidence of memory formation in differences in what are called dendritic spines.

Dendritic spines are little bumps that protrude out of dendrites in the brain. We see below a schematic visual depicting 24 dendritic spines:

Visual cropped from this paper

The idea that long-term memories are stored in dendritic spines is untenable, for two reasons. The first is that it is known that the proteins that make up dendritic spines are very short-lived, having average lifetimes of only a few weeks.  So there is nothing stable inside a dendritic spine. The second reason is that dendritic spines themselves are unstable, and they generally last for much less than two years. 

Synapses often protrude out of bump-like structures on dendrites called dendritic spines, but those spines are themselves short-lived: they last no more than about a month in the hippocampus, and less than two years in the cortex. This study found that dendritic spines in the hippocampus last for only about 30 days. This study found that dendritic spines in the hippocampus have a turnover of about 40% every 4 days. This study found that a subgroup of dendritic spines in the cortex of mouse brains (the more long-lasting subgroup) have a half-life of only 120 days. The wikipedia article on dendritic spines says, "Spine number is very variable and spines come and go; in a matter of hours, 10-20% of spines can spontaneously appear or disappear on the pyramidal cells of the cerebral cortex." A paper on dendritic spines in the neocortex says, "Spines that appear and persist are rare." While a 2009 paper tried to insinuate a link between dendritic spines and memory, its data showed how unstable dendritic spines are. Speaking of dendritic spines in the cortex, the paper found that "most daily formed spines have an average lifetime of ~1.5 days and a small fraction have an average lifetime of ~1–2 months," and told us that the fraction of dendritic spines lasting for more than a year was less than 1 percent. A 2018 paper has a graph showing a 5-day "survival fraction" of only about 30% for dendritic spines in the cortex. A 2014 paper found that only 3% of new spines in the cortex persist for more than 22 days.
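To see how damaging these turnover figures are, combine them into a survival estimate. A rough sketch, assuming (purely for illustration) exponential decay at the most generous half-life quoted above and a memory retained for 50 years:

```python
# If even the longest-lived cortical spines have a ~120-day half-life,
# what fraction would survive the decades that human memories last?
half_life_days = 120                 # the most generous figure cited above
memory_span_days = 50 * 365          # assumed: a 50-year-old memory

half_lives_elapsed = memory_span_days / half_life_days
surviving_fraction = 0.5 ** half_lives_elapsed
print(f"About {half_lives_elapsed:.0f} half-lives elapse; "
      f"surviving fraction is about {surviving_fraction:.0e}")
```

Roughly 150 half-lives elapse, leaving a surviving fraction that is effectively zero, which is the quantitative form of the point being made here.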

In the light of such facts, it is pretty ridiculous to be looking for signs of a physical hallmark of long-term memory by looking at dendritic spines, which typically have lifetimes only a thousandth as long as the maximum length of time that humans can remember things. But nonetheless some scientists have attempted to do such a thing. A few scientists have presented scientific papers showing "before and after" photos of dendritic spines, papers that try to insinuate that some physical hallmark of learning can be seen.  No one should be persuaded by such experiments,  which have glaring methodological flaws. 

Here is what we will read in a typical scientific paper describing such an experiment:

(1) We are told that there were two sets of rodents, one group that did not engage in learning, and another group that did engage in learning. 
(2) We are shown two photos of dendritic spines, microscopic little bumps that protrude from neural components called dendrites.  Each photo will show about ten of these dendritic spine bumps. One photo will be from the learning group, and one from the control group.
(3) Some captions on the photos will suggest that the learning group has more dendritic spines than the control group. 
(4) There will be some graph suggesting the same thing. 

You may realize such papers are very flawed after you consider the question: how did the scientists select these particular dendritic spines out of many millions or billions of dendritic spines in the brain of the animal being studied?  A huge number of dendritic spines appear and disappear every day in the brain of every human and rodent. So it could not at all be true that the scientists scanned the brains of their subjects, and found some special little group of ten or twenty dendritic spines that were the only ones that had changed. 

What has actually happened in such experiments is that the scientists have simply randomly or arbitrarily selected a group of about ten or a hundred dendritic spines out of millions or billions they could have selected.  Perhaps this was done in a truly random way, using some random selection technique, or perhaps the scientists scanned many dendritic spines looking for a set that would show the alleged "memory storage" effect they were trying to show.  It's usually hard to tell from the way the papers are worded how the tiny set of "study spines" was selected.  Usually there will be no explanation of why this tiny set of about 10 or 100 dendritic spines is being photographed or carefully studied, rather than any 10 or 100 others of millions or billions of dendritic spines that could have been chosen.  In one paper we are told that the set of dendritic spines photographed was a "representative sample."  But we have no idea whether such a sample is truly representative, any more than we would know that ten randomly selected New York residents on the street are "representative samples" of New York residents. 

I can give an analogy explaining how bogus such a methodology is. Imagine your hypothesis was that your memories are stored in the flowers of Central Park. You might do an experiment with this protocol: (1) you photograph some groups of Central Park flowers the first week of April; (2) you learn a lot during the second week of April; (3) you go back to Central Park to photograph the same groups of flowers the third week of April. Looking for a group of flowers that would support your hypothesis, you would probably have little trouble finding a few flowers that had grown nicely during the second week of April. You could then publish "before and after" photos of such flowers to try to back up your claim that your memories are being stored in flowers. This would, of course, be a completely bunk methodology. You would have no reason for suspecting that your memories were actually stored in the particular group of flowers you had photographed.
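The cherry-picking problem in the flower analogy can even be simulated. In this toy sketch (all numbers invented), a million "spines" grow or shrink purely at random, with no learning signal anywhere, yet a searcher free to choose what to photograph can always produce ten that grew:

```python
import random

random.seed(0)

# Toy model: 1,000,000 "spines", each independently growing or
# shrinking at random between two observation dates. No learning
# signal exists anywhere in this data.
n_spines = 1_000_000
changes = [random.gauss(0, 1) for _ in range(n_spines)]

# A researcher free to choose which 10 spines to photograph can
# simply pick the 10 that grew the most.
top_ten = sorted(changes, reverse=True)[:10]
print(f"All ten 'selected' spines grew: {all(c > 0 for c in top_ten)}")
```

The "before and after" photos prove nothing unless the selection procedure is specified in advance and shown to be genuinely random.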

Similarly, the scientists who perform experiments like the one I have described have no reason for believing that any memories acquired during testing are stored in the tiny group of dendritic spines they are showing in their papers.  

There are a few studies suggesting a bit of a neural hallmark during muscular exertion activities (such as maze training), but such studies may be showing a kind of "muscle memory" effect that should not be confused with a physical sign of conceptual learning or knowledge acquisition. My leg muscles grow if I do enough walking, but that does not show that memories are stored in my legs. We know that nerves are connected to muscles, so even if a brain is not storing learned knowledge, we might expect that parts of the brain related to muscle activity might bulk up a tiny bit during novel muscle activities. But if such a "muscle memory" exists, it is not a real memory, for the hallmark of a real memory is that it can be retrieved by a motionless person.

In 1979 a scientific paper by Huttenlocher reached these conclusions:
  1. Synaptic density was constant throughout adult life (age 16 to 72 years), with a density of about 1100 million synapses per cubic millimeter.
  2. There was only a slight decrease in old age, with density decreasing to about 900 million synapses per cubic millimeter.
  3. Synaptic density increased during infancy, reaching a maximum at age 1-2 years which was about 50% above the adult mean.
So according to the paper, the density of synapses sharply decreases as you grow up, in contrast to claims that learning or knowledge acquisition produces synapse strengthening. 
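The "sharply decreases" claim follows directly from the figures quoted above. A quick check of the arithmetic:

```python
# Arithmetic implied by the 1979 figures quoted above
# (densities in millions of synapses per cubic millimeter).
adult_density = 1100
infant_peak = adult_density * 1.5      # "about 50% above the adult mean"
old_age_density = 900

drop_growing_up = (infant_peak - adult_density) / infant_peak
drop_in_old_age = (adult_density - old_age_density) / adult_density
print(f"Decline from infancy peak to adulthood: {drop_growing_up:.0%}")
print(f"Decline from adulthood to old age: {drop_in_old_age:.0%}")
```

So on these figures, synaptic density falls by about a third between early childhood and adulthood, the opposite of what one would expect if learning steadily added or strengthened synapses.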

Some have claimed that a hallmark of knowledge acquisition can be found in London taxi drivers. To become a London cab driver, you have to memorize a great deal of geographical information. A study followed London cab drivers for 4 years, taking MRI scans of their brains.

But the study did not find that such cab drivers have bigger brains, or brains more dense with synapses. The study has been misrepresented in some leading press organs. The National Geographic misreported the findings in a post entitled “The Bigger Brains of London Cab Drivers.” Scientific American also inaccurately told us, “Taxi Drivers' Brains Grow to Navigate London's Streets.” 

But when we actually look at a scientific paper stating the results, the paper says no such thing. The study found no notable difference outside of the hippocampus, a tiny region of the brain. Even in that area, the study says “the analysis revealed no difference in the overall volume of the hippocampi between taxi drivers and controls.” The study's unremarkable results are shown in the graph below. 

The anterior part of the left half of the hippocampus was about 20% smaller for taxi drivers (roughly 80 versus 100 for controls), but the posterior part of the right half of the hippocampus was slightly larger (about 77 versus 67).  Overall, the hippocampus of the taxi drivers was about the same as for the controls who were not taxi drivers, as we can see from the graph above, in which the dark bars have about the same area as the lighter bars. So clearly the paper provides no support for the claim that these London cab drivers had bigger brains, or brains more dense with synapses.

In this case, the carelessness of our major science news media is remarkable. They've created a “London cab drivers have bigger brains” myth that is not accurate.  The supposedly bigger part (an area about the size of a jelly bean) is only about 1/500 of the size of the brain. Give me any two randomly chosen sets of people, and the freedom to compare 500 parts of their brains, and I will probably be able to find some little part that differs in size by 25% or more, purely because of random variation. This is not strong evidence for anything.
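The multiple-comparisons point can be illustrated with a quick simulation. This is my own sketch with invented numbers (500 hypothetical brain regions, 16 people per group, region sizes averaging 100 units with a person-to-person standard deviation of 20), not data from any study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_people, n_regions = 200, 16, 500
hits = 0  # trials in which at least one region differs by >= 25%
for _ in range(n_trials):
    # Mean region size for each of two groups of 16 random people,
    # where every "region" is pure noise with no real group difference
    a = rng.normal(100, 20, size=(n_people, n_regions)).mean(axis=0)
    b = rng.normal(100, 20, size=(n_people, n_regions)).mean(axis=0)
    ratio = a / b
    if np.any((ratio >= 1.25) | (ratio <= 0.80)):
        hits += 1
print(f"trials with a >=25% chance difference in some region: {hits}/{n_trials}")
```

With these made-up but plausible noise levels, more than half the simulated comparisons turn up at least one "region" differing by 25% or more, even though no real difference exists anywhere.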

Another scientific study claimed to find evidence that people called pandits who had memorized Sanskrit scriptures had brains different from normal people.  In a Scientific American story, we have a claim which initially sounds (to the casual reader) like evidence that memorization changes the brain. A scientist says, "Numerous regions in the brains of the pandits were dramatically larger than those of controls, with over 10 percent more grey matter across both cerebral hemispheres, and substantial increases in cortical thickness." But when we take a close look at the study, we find no robust evidence for any brain change caused by memorization. 

The study is what is called a whole-brain study. This means that the authors had complete freedom to check hundreds of tiny regions of the brain, looking for any differences between their 20 pandits who memorized scriptures, and a group of 20 controls.  The problem with that is that a scientist may simply find deviations that we would expect to exist by chance in tiny little brain regions, and then cite these as evidence of a brain effect of memorization.   Note that the scientist did not claim that the brains of the pandits who memorized scriptures had 10 percent more grey matter than ordinary people. He merely claimed that in "numerous regions" there was a 10% difference.  If I take 20 random people, scan their brains, and compare them to 20 other random people whose brains I scanned, I will (purely by chance) probably be able to find quite a few little regions in which the first group had more grey matter than the second (as well as quite a few regions in which the first group had less gray matter).  In the paper, we read that these pandits who memorized scriptures "showed less GM [grey matter] than controls in a large cluster (62% of subcortical template GM) encompassing the more anterior portions of the hippocampus bilaterally and bilateral regions of the amygdala, caudate, nucleus accumbens, putamen and, thalamus."  So the study found in some regions of their brains, these pandits who memorized scriptures had less gray matter than ordinary people, and that in other regions of their brains, they had more grey matter.  That is basically what we would expect to find by chance, and provides no good evidence for anything. 

On page 23, a technical paper tells us how many subjects are needed when doing this kind of "whole brain" study using brain scanning:

"With a plausible population correlation of 0.5, a 1000-voxel whole-brain analysis would require 83 subjects to achieve 80% power. A sample size of 83 is five times greater than the average used in the studies we surveyed: collecting this much data in an fMRI experiment is an enormous expense that is not attempted by any except a few major collaborative networks."

How many subjects did the whole-brain analysis study of Sanskrit pandits use? Only 21.  That is only about one-fourth of the subjects needed for a moderately convincing result with this approach.  The results are therefore not robust evidence for anything.  See here for a fuller explanation of why the Sanskrit pandits study provides no good evidence of a neural effect of memorization.
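The quoted 83-subject figure can be roughly reproduced with a standard power calculation. I am assuming (the excerpt does not say) that the paper used a Fisher z-transform analysis of the correlation with a Bonferroni correction across the 1000 voxels:

```python
from math import atanh, ceil
from statistics import NormalDist

r = 0.5              # assumed population correlation
alpha = 0.05 / 1000  # two-sided alpha, Bonferroni-corrected for 1000 voxels
power = 0.80

nd = NormalDist()
z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value of the corrected test
z_beta = nd.inv_cdf(power)           # quantile giving the desired power
c = atanh(r)                         # Fisher z-transform of r

# Standard sample-size formula for testing a correlation via Fisher's z
n = ceil(((z_alpha + z_beta) / c) ** 2 + 3)
print(n)  # 83
```

Under those assumptions the arithmetic lands on 83 subjects, matching the paper's figure; the 21-subject pandit study falls far short of it.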

A certain class of studies called environmental enrichment studies compares groups of animals, one raised in an environment that is not stimulating, and another raised in an environment that is stimulating (such as one with toys and exercise wheels for rats).  It is claimed that the animals raised in the "enriched environment" may have slightly denser or larger areas in certain parts of the brain. But we have no idea whether this is caused by simply increased muscular activity rather than anything pertaining to memory. A review of the topic says, "Several studies have even suggested that physical activity is the sole contributor to the neurogenic and neurotropic effects of environmental enrichment."

As for the claim so often made that "neurons that fire together wire together," a dogma originated by Hebb, there is no robust evidence for this tenet of neuroscience that synapses which are used more often become stronger than synapses that are not used.  To get good evidence for this claim, a neuroscientist would need two things: (1) some method of measuring how often synapses fire, and (2) some method of measuring how much they strengthen.  But unlike the situation with a car (which comes with an odometer that allows you to know precisely how much it has been used), there is no way of monitoring precisely in vivo how often a neuron or synapse is firing; and it is also extraordinarily difficult to tell whether or not a synapse has strengthened during some period of time. So we have no adequate way of testing the Hebbian dogma that synapses strengthen when used more often. And since we could never tell whether a synapse strengthening was caused by mere physical activity rather than memory storage (something similar to muscles growing after greater physical activity), a claimed example of synapse strengthening could not be cited as a neural hallmark of conceptual learning.

A small number of studies have claimed to show evidence of synapse strengthening after learning, but fail to do so in a convincing manner. Such studies typically involve tracking a small number of synapses in some animal.  But there are billions of synapses in every mammal, and we have no way of knowing whether the small number of synapses studied had any connection with a learning experience.  We can compare such studies to a study trying to prove that flowers wilt in Central Park when the Yankees lose, by showing us pictures of a few flowers that wilted.  To prove the idea of synapse strengthening during learning, you would need to compare the total synapse strength of one group of subjects that had learning and a control group of subjects that did not. But we have no method allowing a scientist to measure the total strength of synapses in an organism.

The 2019 study here is the latest example of an unconvincing study trying to show some evidence of memories being stored in a brain.  There are two big reasons why the study shows nothing of the sort:
(1) The study uses a technique in which animals are trained to fear some stimulus, and are then subjected to a brain "cell reactivation" that can be roughly described as a brain zapping.  The animals supposedly froze more often when this brain zapping happened, and the study interpreted this behavior as evidence of an artificially produced recall of a fear memory. But such a technique does nothing to show that a memory is being recalled, because it is well known that there are many parts of a mouse brain that will cause freezing behavior when artificially stimulated.  The freezing behavior is probably a result of the strange stimulus, and not actual evidence of memory recall.  If you were walking along, you would also freeze if someone turned on a brain-zapping chip implanted in your brain.
(2) The study uses sample sizes so small that there is a very high chance of a false alarm.  The number of animals per study group was only 10 to 12. But 15 animals per study group is the minimum needed for a modestly convincing result, and a neuroscientist has stated that to get even a statistical power of 0.5, animal studies should use at least 31 animals per study group.
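To see why 10 to 12 animals per group is such a weak design, here is a rough Monte Carlo sketch (my own illustration, not from the study). It assumes a true group difference of half a standard deviation, roughly the effect size implied by the quoted "31 animals for a power of 0.5" claim:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_rate(n, d, t_crit, trials=4000):
    """Fraction of simulated experiments in which a two-sample t-test
    on n animals per group detects a true effect of d standard
    deviations (|t| > t_crit, the two-sided 5% critical value)."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(d, 1.0, n)
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        t = (b.mean() - a.mean()) / (pooled_sd * np.sqrt(2 / n))
        if abs(t) > t_crit:
            hits += 1
    return hits / trials

# Critical values (df = 2n - 2) taken from a standard t-table
p12 = detection_rate(12, 0.5, t_crit=2.074)  # df = 22
p31 = detection_rate(31, 0.5, t_crit=2.000)  # df = 60
print(f"chance of detecting the effect, n=12 per group: {p12:.2f}")
print(f"chance of detecting the effect, n=31 per group: {p31:.2f}")
```

Under these assumptions, an experiment with 12 animals per group detects the effect only about a fifth of the time, while 31 per group gets close to the quoted 0.5 power; a literature built from the smaller design will be dominated by chance results.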

The second problem is one that is epidemic in modern neuroscience.  Neuroscientists are well aware that the sample sizes typically used in neuroscience studies (the number of animals per study group) are so low that there must be a very high chance of false alarms in very many or most of their experimental studies; but they continue year after year producing such unreliable studies.  There is a "publication quota" expectation that provides a strong incentive for such professional malpractice. 

In considering matters such as these, I like to remember a particular rule:

The rule of well-funded and highly motivated research communities: almost any large well-funded research community eagerly desiring to prove some particular claim can be expected to  occasionally produce superficially persuasive evidence in support of such a claim, even if the claim is untrue.  

We can consider an example of this rule, one involving astrology: the claim that the stars and planets exert a mysterious occult influence on the destiny of humans. Let us imagine that instead of a mere handful of poorly funded astrology researchers in the United States, there were instead 10,000 or more very well-funded astrology researchers, with billions of dollars in research grants to use to support their belief in astrology, by doing things like crunching statistics in various ways with computers.  We would then occasionally read press stories presenting superficially persuasive evidence for astrology.  Such evidence probably would not stand up well to very close scrutiny, but it would be sufficient to give some talking points to astrology supporters.

Similarly, if there was a large community of 10,000 ardent fairy researchers who were funded with billions of dollars, we would probably occasionally see superficially persuasive papers offering evidence for fairies. For example, with such an army of researchers, and so much money to spend, there might be occasional infrared heat signature studies suggesting anomalous little blobs of heat floating about that might be interpreted as fairies.  The researchers would be helped by the research rule that says, "Torture the data sufficiently, and it will confess to almost anything." 

And so it is for the 10,000 or more US neuroscientists funded with billions of dollars of research money (more than 5 billion dollars each year, according to this site).  Such scientists are able to occasionally produce studies providing superficially persuasive evidence for the dogmas the neuroscientists want to believe in, such as the idea that there is a physical hallmark of conceptual learning in the brain. Such evidence does not hold up well to very close scrutiny, but it is at least sufficient to provide some talking points for the neuroscientists.  Such evidence is actually no greater than the evidence we would expect to be produced for an untrue claim, given the "rule of well-funded and highly motivated research communities" cited above. 

Thursday, July 18, 2019

Experts Stumble Within Overconfidence Communities

Overconfidence is a huge factor causing errors in fields such as science, politics, government and the military. Some people define overconfidence as if it only pertained to the future. But since the English language lacks any good word meaning specifically “having too high an opinion of what you know or what your skills are,” it seems appropriate to define overconfidence very broadly, as something that may involve both the present and the future. We can define overconfidence as “having too high an opinion of the skills or accomplishments of yourself or others, or the chance of future success of yourself or others.”

By doing a Google search for “overconfidence” you can find various interesting treatments of the topic. But such treatments often simply look at an individual mind, and ignore the social aspects of overconfidence. A large amount of all overconfidence is something that arises in a social context. One of the main reasons why people become overconfident is that they become part of what can be called an overconfidence community. We can define an overconfidence community as a group of people that overestimate their own knowledge or overestimate the accomplishments of people in the group or overestimate the likelihood of future success of the group or some people in it.

It is easy to come up with historical examples of overconfidence communities. In the early 1940's Hitler and the Nazis built an overconfidence community in which it was believed there would be a high likelihood of success when Germany engaged in very risky military undertakings such as the invasion of the Soviet Union. Anyone in the spring of 1941 realistically assessing the odds of Germany succeeding in an invasion of the Soviet Union would have been very cautious or pessimistic, on the grounds that the Soviet Union had a much higher population, a much bigger country, a much bigger army and far more tanks (the Soviet Union had more than 14,000 combat-ready tanks, and Germany less than 4000). But in the community of the Nazi leadership and German army leadership, a belief took hold that victory was very likely. Germany ended up losing the war.

There was an overconfidence community created when the United States invaded Iraq in 2003, with many predicting a short military involvement not costing very much in dollars or casualties. This overconfidence was epitomized by the “Mission Accomplished” banner raised on an aircraft carrier President George W. Bush visited shortly after the war began. The invasion led to long years of chaos in the country with endless terrorist bombings and much of the country being taken over by the ISIS terrorist group. US costs (counting interest payments) were in the trillions, with 4000+ dead and tens of thousands wounded. The “Mission Accomplished” banner is now a symbol of overconfidence and hubris.

Overconfidence communities can consist of a group of investors. One such overconfidence community existed just before the stock market crash of 1929. In the summer of 1929, people were thinking that investing in the stock market was a sure-fire financial move. A similar overconfidence community developed involving investors in high-tech stocks in the late 1990's. But then in 2000 there was a great crash in the value of high-tech stocks. Another overconfidence community consisted of investors in real estate and mortgage-backed securities around 2005 and 2006. The community was very frequently warned that a huge “housing bubble” had developed, but the investors paid little or no attention. Then housing prices plummeted after 2006, leading to the Great Recession of 2008.

In the world of organized religion, there are quite a few overconfidence communities. Of course, if you happen to believe that your organized religion and its leaders or scriptures are divinely guided or divinely inspired, you then will not think that your particular community is an overconfidence community. But since there are many organized religions with conflicting teachings which cannot all be right, even the adherent of a traditional organized religion will concede that some other religious communities are overconfidence communities that have too high an opinion of their own state of knowledge.

In the world of politics, there can be overconfidence communities. A prominent example is the group of political pundits who predicted that Donald Trump had no chance of winning the Republican nomination in 2016, and who then predicted after he won that nomination that he had no real chance of winning the presidency.  

And even in the world of scientific academia, there are overconfidence communities: communities that have improperly raised their own "Mission Accomplished" banners. One very large overconfidence community is the community which maintains that scientists have figured out the origin of biological organisms. The explanatory pretensions of this community are mountainous, but its members offer only a tissue-thin explanation to back up their main claim: the idea that incredibly organized biological innovations are explained by random mutations (blind chance variations) and what they call natural selection (the mere fact that fit organisms reproduce more).  Another overconfidence community is the one that maintains that all the astonishing wonders of the human mind can be explained by mere brain activity. Those in this community have no remotely persuasive explanations of how noisy neurons and synapses with very frequent protein turnover can explain things such as accurate instant retrieval of 50-year-old memories, human imagination, and minds that create deep philosophical thoughts. But the community proclaims its “neurons explain it all” dogma as if it were fact, and ignores a large body of evidence conflicting with such a dogma.

Another example of an overconfidence community in academia is one that predicted for decades that a speculative theory called supersymmetry would by our time have been confirmed by the activity of the Large Hadron Collider. That gigantic machine has been running for years, and has found no evidence the theory is true.  Then there is the SETI overconfidence community, which since about 1965 has been speaking as if we are not long away from receiving radio signals from extraterrestrials. Pitchmen of this community (people such as Carl Sagan) spoke so confidently that around 1973 I thought extraterrestrial radio signals would be discovered by the year 2000. 

Part of the way in which overconfidence communities preserve overconfidence is by shielding their members from facts that might shake their confidence.  For example, it was recently reported that a Russian man with a university degree in engineering (who worked for decades as an engineer, and reported no difficulties) had been found to have lived his life with only half a brain. This news matched a previous scientific paper's finding that someone with half a brain had above-average intelligence. But this news item (tending to disrupt confidence in the "brains make minds" dogma) was not reported on any of the major science news sites beloved by various overconfidence communities.  Such sites also fail to report objectively on evidence of paranormal phenomena.

One of the key factors fueling the growth of an overconfidence community is what is known as social proof. Social proof is when the likelihood of someone adopting a belief or doing something becomes proportional to how many other people adopted that belief or did that thing. If we were to write a kind of equation for social proof, it would be something like this:

Social proof of belief or action (s) = number of people believing that or doing that (x) multiplied by the average prestige of such people (y) multiplied by how much such people are like yourself (z).

If lots of people adopt a belief or do something, there will be a larger amount of social proof. If some of those people are famous or popular or prestigious or influential, there will be a larger amount of social proof. If some or lots of those people are like yourself, there will be a larger amount of social proof. So, for example, we might not be influenced if told that most Mongolians water their lawns every week, but if we live on Long Island, and we hear that most Long Island residents water their lawns every week, we may well start doing such a thing.
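The informal equation above can be written as a toy function. The numbers below are purely illustrative, with prestige and similarity scored on a made-up 0-to-1 scale:

```python
def social_proof(n_people, avg_prestige, similarity):
    """s = x * y * z, per the article's informal formula.
    Prestige and similarity use an invented 0-to-1 scale."""
    return n_people * avg_prestige * similarity

# Three high-prestige professors much like yourself...
niche_theory = social_proof(3, 0.9, 0.9)
# ...can rival a hundred low-prestige, dissimilar believers.
mass_belief = social_proof(100, 0.1, 0.25)
print(round(niche_theory, 2), round(mass_belief, 2))  # 2.43 2.5
```

The toy arithmetic shows how a tiny academic clique can generate as much social proof as a large crowd, which is the point made below about far-fetched theories spreading among a few prestigious professors.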

Given these factors, it is rather easy to see how overconfidence communities can get started in the academic world, even when the communities are rather tiny. A physics professor may advance some far-fetched theory, and get a few supporters among other physics professors. Each of these few professors has high prestige, since our society holds physics professors in high esteem. If you are another physics professor, you may be drawn into the overconfidence community, which will already have two of the three “social proof” factors in its favor – because the few adherents are just like you, and are high-prestige people. So even with only a few believers, it may be possible for the overconfidence community to get started. The more people who start believing in the idea, the more of a “social proof” snowball effect is created.

When you belong to an overconfidence community, it can cast a spell on you, and make you accept bad reasoning you would never accept if you were outside of the community. Once you leave the community, there can be a kind of “the scales fall from your eyes” effect, and you can ask yourself: what was I thinking when I believed that?  In the future, as it becomes ever more clear that the members of overconfidence communities in academia are making unsound claims, and pretending to know things they don't actually know, there will be many people who drift out of such overconfidence communities, and experience “the scales fall from your eyes” moments. And in such moments the questions they will ask will be something like “what the hell was I thinking?” or “how could I have believed in something so unbelievable?”

Sunday, July 14, 2019

Exhibit A Suggesting Scientists Don't Know How a Brain Could Retrieve a Memory

Most neuroscientists claim that memories are retrieved by some action of the brain. But they have no coherent credible theories as to how this could happen. As Exhibit A in support of this claim, I refer you to this page on the ResearchGate.net site. The page (dating from 2015) simply asks the question, “How are memories retrieved in the brain?” 

I read the page carefully, hoping for clarification on the seemingly insurmountable problem of explaining how a brain could instantly retrieve a memory.  One aspect of this problem is what I call the navigation problem. This is the problem that if a memory were stored on some exact tiny spot in the brain, there would seem to be no way for a brain to instantly find that spot. For that to occur would be like someone instantly finding a needle in a gigantic haystack, or like someone instantly finding just the right book in a vast library in which books were shelved in random positions. Neurons are not addressable, and have no neuron numbers or neuron addresses. So, for example, we cannot imagine that the brain instantly finds your memory image of Marilyn Monroe (when you hear her name) because the brain knows that such information is stored at neural location #235355235.  There are no such "neural addresses" in the brain.
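The navigation problem can be made concrete with a computing analogy (my own illustration): a store with addresses, like a keyed index, retrieves an item in one step, while a store with no addressing scheme forces an exhaustive scan of everything.

```python
def linear_search(items, target):
    """With no addressing scheme, retrieval must examine items one
    by one, like hunting through a randomly shelved library."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            return steps
    return steps

library = [f"book-{i}" for i in range(1_000_000)]
scan_steps = linear_search(library, "book-999999")
print(scan_steps)  # 1000000 examinations to find the last book

# An addressed store (a keyed index) jumps straight to the item:
index = {title: title for title in library}
print(index["book-999999"])  # one keyed lookup, no scan
```

The dictionary lookup is fast only because every item has a key known in advance; the article's point is that nothing like such keys or addresses has been identified among neurons.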

Then there is also the fact that the brain seems to have nothing like a read mechanism by which some small group of neurons are given special attention. The hard disk of a computer has a read/write head, but there's nothing like that in the brain. Then there is the fact that if memory information were encoded into neural states, the brain would have to decode that encoded information; but such a decoding would seem to require time that would prevent instantaneous recall. 

There are 68 "expert answers" on the ResearchGate.net page, but only 1 of them is rated as a “popular answer” by the site. This one “popular answer" is simply a link to a speculative paper discussing a weird and not-currently-popular “holographic” theory of memory. The paper says this at its beginning:

"Yet, all attempts to describe human memory as a hologram have failed up to now. Hence, the holographic brain hypothesis is simply ignored in neuroscience textbooks. Probably, my attempt will fail too."

So the most popular answer on the page is one in which the author does not sound like she has much confidence in her claims.  I will now review some of the answers on the page, going from its top to its bottom. One of the first answers (given by one Richard Traub) states the following:

"I have no doubts at all that the mechanisms controlling retrieval are a collaborating team of high-level cognitive, affective and motorically-influence & influencing subsystems that encode and identify specific circumstantially relevant goals and, in terms of which, index circumstantially relevant sensory circuit assemblies - the computed output of which causes a top-down activation of the decided-upon distributed assemblies and consequent re-representation of their own combined output in the earlier event of analyzing, perceiving and mapping the "live" stimuli upon which a particular episodic memory was originally founded."

This gobbledygook may sound impressive, until we realize that there is no real theory behind it at all, except for the idea that when a memory is recalled the brain is replaying the sensory experience that caused the memory to form. That does nothing to explain how the brain could find the exact location of the correct tiny spot where a memory was stored. We also know that the basic idea of this "replaying sensory experience when you learned something" theory is not correct. When someone asks me how many states are in the United States, my mind does not play back the sensory experience I had when I first learned that the United States consists of 50 states. And if someone asks me who killed Abraham Lincoln, I do not play back the sensory experience of my fifth-grade teacher telling me that John Wilkes Booth did this act.

The next answer is from an Engineering PhD named Mells who admits, “I don't have an immediate answer to the retrieving of 'old' data.” This is followed by a humble answer by Simon Penny that offers no theory or explanation. We then have an answer by Salman Zubedat, who works at a neuroscience lab, but merely refers to speculative theories of neural memory storage, without mentioning any theory or explanation for memory retrieval.

We then have an answer from Herwig Lange, a neuroscientist who says nothing in answer to the question “How are memories retrieved in the brain?” other than, “He who finds out wins the next Nobel-prize.” We can interpret this as a confession that neuroscientists do not currently understand how a brain could retrieve a memory.

We then have an answer from physiologist Sutarmo Vincentius Setiadji who states the following:

"New experience firstly stimulates some components of sensory organs. These organs then stimulates some neural circuits in the higher parts of nervous system and then stimulate one or more primary sensory cortex/cortices. These stimulations go to uniassociation cortex or also directly to multi association cortex. From there through entorhinal cortex go to the hippocampus for being processed for several times. After that from hippocampus will be sent back through entorhinal cortex to multi association cortex as long term memory."

This is not an explanation as to how a brain could find the exact location at which a memory is stored, nor is it an explanation of how any reading effect could occur from such a location. The account above basically amounts to saying that sensory experience causes electricity to start traveling around between different parts of the brain, but that doesn't explain memory retrieval. Moreover, memory retrieval very often occurs without sensory experience being the start of things. For example, I may start randomly recalling my mother, without seeing anything that caused such a memory retrieval.

Then we have Ursula Ehfield offering the not-very-clear poetics below:

"As to my view it is a huge concert of waves and oscillations involved. (EEG are very important as a 'global player'!) Any neurotransmitter population plays its own concert. Sometime the piano starts, sometime the violin, sometimes the trumpets, which ever is resonating first."

We then have a rather long argument back and forth between Ehfield, Setiadji, and Graeme Smith. Within that argument we do not get any real explanation for how a brain could retrieve a memory.

We then have an answer from psychology PhD John S. Antrobus who tells us that memories are not retrieved but activated, and then merely says, “After that, the answer could fill a book.” Later on in the page John gets more detailed, although he isn't really telling us anything. He states the following:

"Lexical recognition of a word is accomplished by the activation of the neurons that represent that word, and the suppression of all others. Auditory word recognition requires another network. The 'meanings,' syntax, motor networks of speech, typing, etc., pictorial representation, values, and other features are accomplished by reciprocal circuits, largely in the prefrontal cortex. Any of these may play a part in activating the lexical representation, and may modify the largely 'bottom-up' recognition network. All, and in different ways, are part of the 'memory' of the lexical representation in that they are able to activate [or] suppress the activation of the neurons that represent a particular word."

This long statement really says nothing at all other than vaguely claiming that some type of activation is going on, that some type of circuits are involved, and that some neurons represent a word. But how could neurons conceivably represent a word? We can imagine no combination of neurons that would represent the word “freedom” or “religion” or “eternity” or “proficiency.” And if there were such a set of neurons somehow storing the meaning of a word, how could my brain instantly find that exact tiny set of neurons, as soon as I heard that word? Our author provides no answers.

Since electricity passes around between different parts of the brain, the brain can be conceived rather loosely as a huge collection of circuits. But we do nothing to explain instant memory retrieval by saying it “is accomplished by circuits,” just as we don't explain something going on in your smartphone or computer by vaguely saying that it is “accomplished by circuits” or “accomplished by components” or “accomplished by electrons.”

We then have Paul Michael Guinther PhD state this: “Asking how the brain in some way causes the retrieval of memories involves a lot of metaphor ...and therefore isn't really answerable in any kind of scientifically meaningful way.” This is followed by a long answer from a “Deleted Profile” user who doesn't cast much light on this question.

We then have additional comments by Graeme Smith, who has no credentials in this area. He states the following:

"Output comes in the form of release of neurotransmitters, and in some rare cases actual electric contact between cells. ...A great amount of the storage and retrieval of memories has to be thought of as interpretation of the signals to retrieve the sense of them despite the processing steps taken at the same time as the storage steps. ...Different areas of the Cortex are interpreted differently especially the modal areas which take the inputs from specific sensory zones, and analyse it according to the mode of sensory input those sensory zones respond to. At different stages in the memory different areas in the cortex are activated resulting in processing of different types of outputs. The architecture of the brain, and micro-structure of the tissues, act together to guide information of a specific type through processes of a particular type, to other areas of the brain forming networks that process the information in a pattern that is similar across the brain."

This very much sounds like the talk of someone who does not understand how a brain could retrieve a memory, and who is just tossing around a few vague phrases, hoping that it sounds like something resembling understanding.

We then have an answer from the authority Dorian Aur of the Department of Comparative Medicine at Stanford University. Aur claims, “Fragments of memory are written inside neurons and synapses within molecular structure.” There is no real evidence that this is true. There is no evidence of any information-writing capability in the brain, nothing like the write head of a disk drive. No one has a coherent detailed theory as to how learned knowledge or episodic memories could be translated into neural states. We know that the proteins that make up synapses are very short-lived, having average lifetimes of only a few weeks. There is no workable theory as to how a brain could store memories lasting for decades.

Aur then states, “These structures vibrate and generate a broad electromagnetic spectrum,” and says, “In computational terms, meaningful fragments of information which are stored inside the structure are read out.” This isn't really saying anything. Aur doesn't give an answer to the question or an explanation, other than claiming memories are written and read.

There then follows a long passage by Graeme Smith, who talks about electrical signals crossing synaptic gaps, and the artificially produced effect called LTP. This provides nothing to clarify how a brain could instantly find a memory and load it into your mind so that you think of that memory.

So this finishes my look at the 68 answers the experts have given to the basic question, “How Are Memories Retrieved in the Brain?” I have quoted all the best answers. The answers run to a total of about 8000 words. But the experts provide no real insight as to how a brain could instantly retrieve a memory. The authors toss around their erudition in various ways without any answers to the basic questions. There seems to be an awful lot of “just faking it” kind of verbiage, the type of empty phraseology and gobbledygook that people use when trying to persuade you that they understand something that they don't. None of the main problems are answered, and some of the main problems are not even mentioned.

None of the authors offers anything like a theory as to how instant memory retrieval could occur. The authors speak as if they were completely ignorant of this difficulty. None of them seems to be aware that explaining how a brain could find a memory instantly is 1000 times harder than explaining how a brain could find a memory if it has hours or days to scan through memories stored in it.
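To put this point in computing terms (purely as an analogy from the world of computers, not a claim about actual brain mechanisms, and using invented names and data), instant retrieval is easy to explain when a system has an explicit indexing or addressing scheme, and hard to explain without one. A minimal sketch of the contrast:

```python
import time

# Hypothetical illustration: a store of a million "memories."
# A Python dict maintains an explicit addressing scheme (a hash table),
# so lookup by key is effectively instant.
memories = {f"person_{i}": f"facts about person {i}" for i in range(1_000_000)}

def indexed_recall(key):
    """Retrieval when an addressing scheme exists: go straight to the item."""
    return memories.get(key)

def scanning_recall(key):
    """Retrieval with no addressing scheme: examine items one by one."""
    for stored_key, stored_value in memories.items():
        if stored_key == key:
            return stored_value
    return None

start = time.perf_counter()
indexed_recall("person_999999")
indexed_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
scanning_recall("person_999999")
scan_ms = (time.perf_counter() - start) * 1000

print(f"indexed lookup: {indexed_ms:.4f} ms, exhaustive scan: {scan_ms:.4f} ms")
```

The indexed lookup is fast only because the dictionary maintains an explicit position-finding mechanism behind the scenes; the point of the analogy is that no one has identified any comparable addressing mechanism in the brain that could explain instant recall.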

None of the authors suggests anything like a theory as to how a brain could read encoded information stored in it, translating that into a thought. The authors don't even offer a suggestion as to some neural effect that could act like a read mechanism.

Quite a few of the people offering answers are guilty of what we may call jargon bluffing, which is what goes on when someone offers dense, jargon-laden prose trying to make you think that he understands something he does not at all understand.  Below is an imaginary example of this type of prose, which sounds like quite a few of the answers that are found on this page:

"The remembrance phenomenon is produced by a complex symphony of interlocking reciprocal causal factors.  Neurotransmitters, circuits, specific synapses and highly specialized neural components all play specific roles in the intricate functionality.  We are beginning to unravel the mechanistic specificity that evokes what we experience as a distinct recollection event. A distinct repertoire of biochemical events involving diverse interconnected cells may elicit a vivid reminiscence." 

A statement like this is just bluffing, and neither shows any understanding of how a brain could retrieve a memory, nor does it describe any real theory as to how such a thing could occur.  We should not be impressed at all by this type of empty verbiage, which is found all over the place on the web page I am discussing. 

But two or three of the writers have spoken honestly and candidly by confessing (in one way or another) that they do not understand how a brain could retrieve a memory. We get some similar candor in the recent book Why Only Us? Language and Evolution by the leading linguist Noam Chomsky and Professor Robert C. Berwick. Here is an excerpt (pages 50-51):

"The very first thing that any computer scientist would want to know about a computer is how it writes to memory and reads from memory....Yet we do not really know how this most foundational element of computation is implemented in the brain."

The complete lack of any workable theory for how memory recall can occur so quickly is admitted by neuroscientist David Eagleman, who states the following:

"Memory retrieval is even more mysterious than storage. When I ask if you know Alex Ritchie, the answer is immediately obvious to you, and there is no good theory to explain how memory retrieval can happen so quickly." 

I offer this ResearchGate.net web page as Exhibit A that modern neuroscientists have no understanding at all as to how a brain could instantly retrieve a memory. The lack of any credible theory of how instantaneous memory retrieval could occur is one of the major reasons for rejecting the claim that the brain stores memories. Many other such reasons are discussed at this site. 

On the "How Stuff Works" site, which often has pretentious and dogmatic answers in which writers pretend to understand things they don't understand, we have a five-page answer with the title "How Human Memory Works." The author is the neuroscientist Richard C. Mohs.  Engaging in speculation for which he provides no references, evidence or citations, Mohs states the following:

"Each part of the memory of what a 'pen' is comes from a different region of the brain. The entire image of 'pen' is actively reconstructed by the brain from many different areas. Neurologists are only beginning to understand how the parts are reassembled into a coherent whole."

The last sentence gives away that Mohs is not describing established knowledge here, and we have no actual evidence that such a claim is true. If Mohs' speculation were true, it would make memory retrieval many times harder to explain -- for explaining instantaneous retrieval from "many different areas" would be harder than explaining instantaneous retrieval from a single area.

At the end of the first page, Mohs promises, "On the next page, you'll learn how encoding works and the brain activity involved in retrieving a memory."  But the pages that follow do nothing to explain such things. The page entitled "Memory encoding" does nothing to explain how a brain could possibly translate conceptual knowledge or episodic memories into neural states, a gigantic unsolved difficulty as great as the difficulty of explaining memory recall. No neuroscientist has a coherent detailed theory on this matter, and Mohs certainly does not state one.  On the page entitled "Memory Retrieval," Mohs writes about 500 words that do nothing to explain how such a thing could occur in the brain. He presents no theory and no speculation, and says nothing specific about the brain. 

Our neuroscientists do not understand how a brain could encode a memory, do not understand how a brain could instantly retrieve a memory, and do not understand how a brain could store a memory for decades despite rapid protein turnover in synapses which should prevent stored memories from lasting in synapses for even as long as a month.  If a neuroscientist ever gives you the impression he understands such things, it's a sham or a bluff, or a case of self-deception in which someone has deluded himself into thinking he understands something he doesn't. 

This morning I had a good example of the type of memory recall that is inexplicable through any theory of neural recall. Turning to the TCM channel, I saw for the first time the 1951 movie The People Against O'Hara. The movie was already at its midpoint, and there was a scene in which a witness delivered testimony. Looking at the actor, I instantly identified him as the little-known actor William Campbell. But I had only seen William Campbell in two other movies or TV shows, those being two Star Trek episodes made in 1967. I was able to instantly identify him even though he looked 16 years younger than I had ever seen him, and even though I should by all rights have forgotten his name (a name I cannot recall thinking about, reading about or hearing about in the thirty years prior to today). No one will ever be able to explain how such instantaneous recollection of very obscure memories (not accessed in decades) could occur if memories are stored in the brain.

Modern biology word-cloud