
Our future, our universe, and other weighty topics


Friday, March 4, 2022

Quanta Magazine Draws the Wrong Conclusion About Neural Noise and Memories

The online science journalism site Quanta Magazine (www.quantamagazine.org) is a graphically slick affair, but its articles are often defective, because its writers so often reverently extol, hype, or misrepresent dubious studies by scientists, and almost never seem to apply critical judgment when analyzing such questionable work products. Again and again the site runs articles on the deepest topics of physics, biology, and the human mind, containing dubious or presumptuous statements about the fundamental nature of reality, life, mind, or the universe, written by quite youthful-looking writers. When I look up the biographies of such writers on the site, I sometimes find they lack any stated relevant science education. Sometimes the authors have relevant PhDs, but in quite a few cases the biographies of these young adults list only a relevant bachelor's degree, or not even that. Many years of independent study on a topic can make up for not having studied that topic deeply in college, but when I see a youthful-looking face, I tend to doubt that many such years of independent study have occurred. 

A recent example of a defective article in Quanta Magazine was one entitled "New Map of Meaning in the Brain Changes Ideas About Memory." The article is a credulous treatment of some very dubious neuroscience studies guilty of the same type of Questionable Research Practices found in a large fraction of all experimental neuroscience papers these days. In the article we read repeated claims that some kind of neural representations have been found. Representation is when one thing represents or stands for another, according to some system of symbols or tokens. All claims of representations in the brain outside of DNA are groundless. We know that the DNA in brain cells (and all other cells) has a form of representation, because certain groups of nucleotide base pairs in DNA stand for or represent particular amino acids, under the representation system known as the genetic code. Other than this, there is no robust evidence of any type of representation anywhere in the human body. There is zero evidence that there exists any such thing as a "map of meaning in the brain." 

The Quanta article mentioned above refers to this neuroscience paper guilty of the usual Questionable Research Practices so prevalent in the dysfunctional realm that is modern experimental neuroscience. The paper ("A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain") makes this groundless claim: "The brain represents object and action categories within a continuous semantic space." The paper gives no robust evidence for any such representations.  Among the defects of the paper are these:

  • Insufficient sample size. The paper describes some brain scanning of a mere five subjects who watched movies.  Fifteen subjects per study group is the minimum for a modestly persuasive experimental result for studies of this type. A sample size calculation (something necessary for any experimental paper of this type to be taken seriously) would have revealed the shortfall, but no such calculation was done.  According to Table 7 of the paper here, a correlation experiment reporting the effect sizes reported by this paper should have used at least 27 subjects. 
  • No control group was used, quite the serious defect for a study like this. For each of the brain-scanned subjects watching movies, there should have been an equal number of brain-scanned control subjects who were not watching movies. 
  • The study used no blinding protocol (something necessary for any experimental paper of this type to be taken seriously), and neither the word "blind" nor "blinding" appears in the paper. 
  • The study was not pre-registered. With no exact hypothesis registered before data collection, and no advance specification of exactly how the data would be gathered and analyzed, the authors were free to apply any type of analysis they wished after gathering the data, and free to improvise how they gathered it, in a "fishing expedition" maximizing the chance they would report finding whatever they hoped to find, possibly after slicing and dicing the data until they got the desired result. 
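The sample-size shortfall is easy to quantify. Here is my own back-of-envelope sketch (not a calculation from the paper), using the standard Fisher-z approximation for the number of subjects needed to detect a Pearson correlation r with 80% power at a two-sided alpha of 0.05:

```python
import math

def n_for_correlation(r):
    """Approximate subjects needed to detect Pearson correlation r
    (two-sided alpha = 0.05, power = 0.80, Fisher-z approximation)."""
    z_alpha = 1.96          # normal quantile for two-sided alpha = 0.05
    z_beta = 0.84           # normal quantile for power = 0.80
    c = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z transform of r
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

for r in (0.3, 0.5, 0.7):
    print(f"r = {r}: about {n_for_correlation(r)} subjects needed")
```

Even for a large correlation of r = 0.7 the formula calls for about 14 subjects, nearly three times the five the paper used; a medium correlation of r = 0.3 would require around 85.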
The same Questionable Research Practices are found in two other scientific papers referenced by the same Quanta Magazine article: the paper "A network linking scene perception and spatial memory systems in posterior cerebral cortex," which has the problems listed above except that its study group sizes were 14, 13 and 6 rather than 5; and the paper "Natural speech reveals the semantic maps that tile human cerebral cortex," whose main study group had only 7 subjects. The same defects are found in a paper discussed last month in a misleading Science Daily article entitled "Key brain mechanisms for organizing memories in time." That paper ("Hippocampal ensembles represent sequential relationships among an extended sequence of nonspatial events") used only 5 mice. The paper relied on machine learning, which neuroscientists are using carelessly, failing to realize that machine learning has an extremely large potential for finding causally irrelevant, meaningless correlations. An expert on computers and software says, "Perhaps the biggest issue with current machine learning trends, however, is our flawed tendency to interpret or describe the patterns captured in models as causative rather than correlations of unknown veracity, accuracy or impact." A full discussion of how neuroscientists are abusing machine learning will require a separate post. 
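The expert's point about meaningless correlations is easy to demonstrate: with a small sample and many measured variables, pure noise reliably yields impressive-looking correlations. A minimal sketch with made-up numbers (nothing here is from the papers' data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 12     # a small sample, typical of neuroimaging studies
n_features = 1000   # e.g. recorded channels or voxels -- all pure noise
X = rng.standard_normal((n_subjects, n_features))
y = rng.standard_normal(n_subjects)   # a random "behavioral" score

# Correlate every noise feature with the noise target and keep the best.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
print(f"largest |r| found in pure noise: {np.abs(r).max():.2f}")
```

With a thousand features to search through, a "striking" correlation is virtually guaranteed even though nothing real is being measured, which is exactly the false-alarm risk when flexible analyses are run on small samples.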

There is zero robust evidence for any such things as semantic maps in the human brain, and zero robust evidence for any kind of representations in the brain other than the "nucleotide base pair combinations representing amino acids" representations found in the nucleus of neurons and all other cells. 

In Quanta Magazine what routinely happens is that some article will discuss some shoddy-science research guilty of Questionable Research Practices, and we will see photos showing some scientist involved in the research, with a big proud grin on his or her face.  Were such photos to show facial expressions matching the quality of the research practices involved, we would see people holding their heads in shame. 

scientist puff piece

A community with very strong ideological motivations, the neuroscientist research community resembles some ideologically-motivated research community dedicated to proving that the ghosts of deceased animals linger in the clouds.  What would happen if such a ghosts-in-the-clouds research community were to follow a very rigorous set of research standards? Conceivably they might get some good evidence in support of their cherished belief that animal ghosts live in the clouds. But what would be the main results if such a community of researchers had bad research practices, and failed to follow rigorous standards? Then what they would mainly produce in their literature are false alarms, such as photos of clouds looking a little like animals.  Similarly, what should we expect to mainly get from a community of neuroscience researchers who are very eager to prove claims that brains produce minds, when such researchers routinely act as if they had almost no experimental research standards, and routinely act as if they were oblivious to sensible rules for producing reliable results?  We should mainly expect to get false alarms. That is mainly what is showing up in the neuroscience articles appearing in Quanta Magazine and other science news sites: mainly false alarms. 

In the 2020 study "Sample size evolution in neuroimaging research: An evaluation of highly-cited studies (1990-2012) and of latest practices (2017-2018) in high-impact journals" by Denes Szucs and John P. A. Ioannidis, we have some facts that illuminate the typically dismal quality of experimental neuroimaging studies. Analyzing more than 1000 papers, the study found the following:
  • that the median reported sample size for neuroimaging studies is a mere 12 (even though, according to section 4.1 of the paper, a sample size more like 33 or 34 is typically needed for a study to be robust);
  • that even though "publishers and funders should require pre-study power calculations necessitating the specification of effect sizes," almost none of the papers had such pre-study power calculations (in which you calculate the number of subjects needed to detect a robust effect).
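The power side of the same arithmetic shows why a median sample of 12 is so inadequate. Here is my own rough sketch (a Fisher-z normal approximation, assuming a sizable true correlation of r = 0.5 and a two-sided alpha of 0.05; not a calculation from the Szucs and Ioannidis paper):

```python
import math

def approx_power(r, n):
    """Approximate power to detect Pearson correlation r at sample size n
    (two-sided alpha = 0.05, Fisher-z normal approximation)."""
    z_alpha = 1.96
    shift = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    # Normal CDF via erf; the tiny lower-tail contribution is ignored.
    return 0.5 * (1 + math.erf((shift - z_alpha) / math.sqrt(2)))

print(f"power at n = 12: {approx_power(0.5, 12):.2f}")   # well under 50%
print(f"power at n = 34: {approx_power(0.5, 34):.2f}")   # above the 80% target
```

At the median sample size of 12, a real correlation of this size would be missed more often than found; at the recommended 33 or 34 subjects, power finally clears the conventional 80% threshold.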

Another defective article on neuroscience recently appearing in Quanta Magazine is one entitled "Neural Noise Shows the Uncertainty of Our Memories." The article refers us to the paper "Joint representation of working memory and uncertainty in human cortex." In that paper we find the same Questionable Research Practices listed above: no blinding protocol, no pre-registration of a hypothesis and techniques to be followed, too-small sample sizes (only 11 participants for Experiment 1), and the only control group being a way-too-small group of three subjects. Fifteen subjects per study group (including controls) is the minimum for a modestly reliable experimental result for studies of this type. 

The Quanta Magazine article suggests the laughable idea that neurons use Bayesian mathematics or probabilistic calculations in recognition or memory.  The young author of the article cites some very relevant problems with the related research, but apparently fails to be alarmed by the giant red flags involved, like someone cheerfully driving along while red lights are flashing on his car dashboard.  The article states this:

"Still, 'one thing to realize is that the actual correlations are very low,' said Paul Bays, a neuroscientist at the University of Cambridge who also studies visual working memory. Compared to the visual cortex, fMRI scans are very coarse-grained: Each data point in a scan represents the activity of thousands, perhaps even millions of neurons. Given the limitations of the technology, it’s notable that the researchers were able to make the kinds of observations in this study at all.  Hsin-Hung Li, a postdoctoral researcher in Curtis’ laboratory at NYU, used a brain scanner to measure the neural activity associated with a working memory, then assessed the research subject’s uncertainty about the memory.  'We are using a very noisy measurement to tease apart a very tiny thing,' said Hsin-Hung Li, a postdoctoral researcher at NYU and first author of the new paper."

What the Quanta Magazine writer should have done is to recognize that these confessions are indications that no reliable evidence has been found, and that what we have is some gossamer-thin false alarm results that are being "teased out" by researchers eager to find some particular result, like some eager ghost-in-the-clouds believer enthusiastically scanning the heavens for faint traces of ghosts in the sky. 

The title of the Quanta Magazine article is inaccurate. Neural noise does not show that human memories are uncertain. There is a great abundance of very severe noise all over the place in the brain. But that does nothing to show that human memories are unreliable.  To the contrary, it is a fact of human experience that humans can memorize with perfect accuracy very long bodies of information.  This is shown every time a stage actor plays the very long role of Hamlet (a role of 1480 lines) without committing an error, and is also shown every time a Wagnerian tenor sings the very long role of Siegfried (a role of 6000+ words) without committing an error.  No such feats should be possible if such persons are retrieving memories stored in brains, given how much noise exists in the brain. 

Brains are extremely noisy. Many neurons fire at unpredictable intervals, just as maple leaves fall from a tree in autumn at unpredictable intervals. One scientific paper tells us, "Neuronal variability (both in and across trials) can exhibit statistical characteristics (such as the mean and variance) that match those of random processes." Another scientific paper tells us, "Neural activity in the mammalian brain is notoriously variable/noisy over time." Another paper tells us, "We have confirmed that synaptic transmission at excitatory synapses is generally quite unreliable, with failure rates usually in excess of 0.5 [50%]." A paper tells us that there are two problems in synaptic transmission: (1) the low likelihood of a signal transmitting across a synapse, and (2) a randomness in the strength of the signal that is transmitted, if such a transmission occurs. As the paper puts it (using more technical language than I just used):

"The probability of vesicle release is known to be generally low (0.1 to 0.4) from in vitro studies in some vertebrate and invertebrate systems (Stevens, 1994). This unreliability is further compounded by the trial-to-trial variability in the amplitude of the post-synaptic response to a vesicular release." 

The 2010 paper "The low synaptic release probability in vivo" by Borst is devoted to the question of the chance that a synapse will transmit a signal it receives. It tells us, "A precise estimate of the in vivo release probability is difficult," but that "it can be expected to be closer to 0.1 than to the previous estimates of around 0.5." Slide 20 of the 2019 PowerPoint presentation here has a graph showing that this release probability is often around 0.1 or 0.2, and the same slide mentions 0.3 as a typical release probability. 

Another paper concurs by also saying that there are two problems (unreliable synaptic transmission and a randomness in the signal strength when the transmission occurs):

"On average most synapses respond to only less than half of the presynaptic spikes, and if they respond, the amplitude of the postsynaptic current varies. This high degree of unreliability has been puzzling as it impairs information transmission."

It would be almost impossible to overestimate the significance of such facts, which would have the greatest effects all over the place if brains were the storage place of human memories. Such large levels of neural noise would not just prevent learned information from being accurately stored; they would also prevent learned information from being accurately retrieved. Imagine a computer so unreliable that each time you saved a file, each character (a, b, c, and so forth) was saved with a likelihood of less than 50%; and so unreliable that when it read a stored file, each character was displayed on your screen with a likelihood of less than 50%. With such a computer you would never be able to store and retrieve information reliably. We can say the same thing about a brain with as much noise as the neural noise discussed above.  
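The arithmetic behind that computer analogy is stark. If each character must survive both a write and a read, each step succeeding with probability p, the chance of getting an N-character text back intact is p raised to the power 2N. A toy calculation with illustrative numbers of my own choosing:

```python
def recall_probability(p, n_chars):
    """Chance of storing and then retrieving an n_chars-long text
    perfectly, when each character survives each step with probability p."""
    return p ** (2 * n_chars)

# Even one 40-character sentence is effectively unrecoverable at p = 0.5 ...
print(recall_probability(0.5, 40))       # on the order of 1e-24
# ... and a Hamlet-sized text (tens of thousands of characters) underflows
# to zero in double-precision arithmetic.
print(recall_probability(0.5, 60_000))   # 0.0
```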


signal noise


But the fact is: despite all this neural noise, many humans are able to memorize with perfect accuracy very large bodies of information. This is shown not just by Hamlet actors (who perfectly recite 1480 lines in one evening) and Wagnerian tenors who perform similar feats, but also by Islamic scholars who can recite perfectly every line of their long holy book of more than 6000 lines, and by Christian scholars who have performed similar feats of memorization (an example being Tom Meyer, who memorized twenty books of the Bible). Akira Haraguchi was able to recite correctly from memory 100,000 digits of pi in 16 hours, in a filmed public exhibition. Many equally astounding cases are discussed here and here, such as the case of the mathematician Euler, who could recite every line of Virgil's Aeneid (a work of 9883 lines). 

There is a correct conclusion to draw from the fact of extremely abundant neural noise. That conclusion is that the brain is not (and cannot possibly be) the storage place of human memories. Such a conclusion is consistent with all well-established facts known about the brain, such as: the very short lifetime of proteins in the brain (about 1000 times shorter than the maximum length of time that old people can remember things); the rapid turnover and high instability of dendritic spines; the failure of scientists to ever find the slightest bit of stored memory information when examining neural tissue; the existence of good and sometimes above-average intelligence in some people whose brains had been almost entirely replaced by watery fluid (such as the hydrocephalus patients of John Lorber); the lack of any indexing system, coordinate system, or position notation system in the brain that might help to explain the wonder of instant memory recall; the good persistence of learned memories after surgical removal of half a brain to treat severe seizures; the ability of many "savant" subjects (such as Kim Peek and Derek Paravicini) with severe brain damage to perform astounding wonders of memory recall; the fact of very vivid and lucid human experience and memory formation in near-death experiences occurring after the electrical shutdown of the brain following cardiac arrest; and the complete lack of anything in the brain that can credibly explain a neural writing of complex learned information, a neural reading of such information, or its instant retrieval. 

If our neuroscientists were to stop wasting so much time on poorly designed experiments failing to follow good research practices,  and were they to deeply delve into the massive evidence for anomalous mental phenomena utterly beyond any neural explanation (a topic they are extremely negligent in studying),  our neuroscientists might find themselves on a path that might lead to some good alternate non-neural theories to explain the basic wonders of human mental phenomena such as understanding, self-hood and memory. 

Today we have another defective story about memory in Quanta Magazine, one with the untrue headline "Scientists Watch a Memory Form in a Living Brain." The story refers to a study that has inadequate sample sizes (such as 5 and 11), no blinding protocol, and no pre-registration; but at least it used control groups. Fish were supposedly taught something, and then their synapses were observed. The result did not even match neuroscientist teachings that memories form from synapse strengthening. We read this:

"The researchers imaged the pallium before and after the fish learned, and analyzed the changes in synapse strength and location. Contrary to expectation, the synaptic strengths in the pallium remained about the same regardless of whether the fish learned anything. Instead, in the fish that learned, the synapses were pruned from some areas of the pallium — producing an effect 'like cutting a bonsai tree,' Fraser said — and replanted in others."

There is zero justification for using the term "replanted" here. Very probably, what was being observed was simply random fluctuations in synapse numbers, the type of fluctuations that would have occurred even if nothing had been learned, and even if an organism had been sleeping.  Referring to TFC (tail-flick conditioning, a type of learning), the paper tells us that learning made no difference in the total number of synapses  (using L to mean learners, PL to mean partial learners, NL for non-learners): 

"When the total number of synapses before versus after TFC was compared for L, PL, and controls (CS only, US only, and NS) over the entire pallium, no significant difference was found. A modest decrease (∼10%) was found for NL (Fig. 4A; P > 0.05 for L, PL, and controls; P < 0.05 for NL, Wilcoxon test). Additionally, synapse numbers did not differ significantly between L, NL, PL, and control fish at either time point (SI Appendix, Fig. S5; P > 0.05, Kruskal–Wallis test). Thus, our data indicate that TFC learning is not associated with a significant change in the total number of synapses in the pallium."

To their credit, the paper authors make no unwarranted claim in their paper title or abstract. The paper title is "Regional synapse gain and loss accompany memory formation in larval zebrafish." Yes, synapses are gained and lost when you form a memory -- and also when you don't form a memory. Synapse gain and loss is continual throughout the brain, regardless of memory formation, largely because synapse proteins last only about two weeks or less, and because individual synapses don't last longer than a year or a few years. Deplorably, Quanta Magazine has reported this result (which does nothing to show that memories are formed in brains) under the groundless headline "Scientists Watch a Memory Form in a Living Brain." 
