
Our future, our universe, and other weighty topics

Saturday, December 4, 2021

Fragment Follies of the Guessing Glue Guys

There were many misleading statements and phrases that helped build the mighty cultural empire of materialist Darwinism, and such duplicity continues to this day. Consider the example of a recent press announcement of the supposed discovery of a Homo naledi skull. No skull was actually discovered. All that was discovered were small bone fragments that some scientists claimed might be from the skull of the same individual (without any good warrant for such a claim). 

A typical credulous and misleading press announcement of this discovery was an article entitled "First Intact Skull of Tiny Human Ancestor Homo Naledi Discovered." We do not at all know that Homo naledi was a human ancestor, and the small bone fragments in question do not at all make up a skull (which is defined as the framework of bones enclosing the brain). 

A story in the Daily Mail also inaccurately describes these bone fragments as a "skull."  But the story includes a large picture that shows that no such skull was found. We see a speculative skull that looks perhaps 90% black clay, and maybe about 10% old bone material.  There are hundreds of different speculative skulls one could have made from the black clay that would have fit with the skull fragments.  So we have no idea whether the skull fragments belonged to an individual with a skull having the shape shown in the photo. We don't even know whether the skull fragments came from a single organism, or even a single species. The scientific paper does not claim any likelihood that the fragments are from a single individual; it merely mentions this as a "hypothesis," claiming the results are "consistent" with such a hypothesis. 

In the Daily Mail article (and in a press release) we hear a paleontologist give a very weak justification for the claim that the fragments all came from the same skull: "There were no replicating parts as we pieced the skull back together and many of the fragments refit, indicating they all came from one individual child."  That is not anything like a robust justification for the claim that the fragments are from a single individual.  At the end of this post I will explain why.  And no one "pieced the skull back together," as only small bone fragments were found. 

How did the press get the idea that these small bone fragments constitute a skull? It is because of a misleading university press release that encouraged them to believe this. That press release, issued by the University of Witwatersrand, includes a very misleading visual. 

The press release is here (the original source server is now acting sluggishly, but when I wrote this post it was behaving okay).  It begins by saying, "Meet Leti, a Homo naledi child discovered in the Rising Star Cave System."  What a flight of fancy, calling the discovery of a few tiny bone fragments the discovery of a child. The press release inaccurately describes the small bone fragments as a "skull." 

Near the top of the press release,  I saw a very misleading visual (the sluggish, erratic performance of the page may prevent you from seeing it). It was artwork showing an almost-full fossil skull. But nothing like such a skull was discovered. All that was discovered were fragments that were maybe a tenth the size of the skull shown in the artwork.  This is like someone who has merely built half of a foundation of a house trying to mislead you into thinking that he built almost a whole house. 

The perhaps 90% black molding clay skull-shaped mockup shown in the Daily Mail story is only palm-sized. So how can someone claim that small fragments of a monkey-sized skull belong to a human ancestor? Through the speculation that there was a small child corresponding to the sparse fragments.  It seems more likely that we are seeing some fragments from one or more organisms vastly different from a human, and we have no warrant for believing we are seeing a single skull of a human ancestor. 

What can we call such overenthusiastic duplicity? Maybe we should call it "cheating for Charles," since it is all about trying to back up the explanatory ideas of Charles Darwin. A dubious fossil claim was recently debunked.  The fossil of some four-limbed animal was hailed as a four-limbed snake, and claimed as evidence that snakes had descended from lizards. Further analysis showed that the fossil was that of a lizard, not a snake at all. 

One of the country's leading science publications recently published an article on how a prominent researcher is under a cloud of suspicion for allegedly faking data. The researcher was identified as an ecologist. But a search for his papers on Google Scholar shows that the researcher has published quite a few papers trying to prove claims of evolutionary biology.  Apparently the well-known science publication did not want its readers to be disturbed by suspicions that an evolutionary biologist may be faking things. So rather than referring to the person as an "ecologist and evolutionary biologist," he was merely referred to as an ecologist. 

To make skulls or skeletons that are mostly plaster or some similar modeling material, paleontologists or their helpers often resort to gluing pieces of bone together, frequently making wild guesses about which fragments "fit." Even when not making such mostly-plaster skulls or skeletons, paleontologists often glue bone fragments together. A large fraction of such activity should be regarded as cheating.  Glue is often deceptively used to make it look like a big fragment was discovered when only smaller fragments were discovered.  Paleontologists freely admit to engaging in this shady practice of gluing together fragments. Do a Google search for "paleontologist gluing together fragments" and you will find many confessions.  Paleontologists often use resin-like substances to bind fragments together, substances rather like cement, quite capable of joining fragments that do not naturally fit. 

At one site we read that people are using superglues to bind together fossil fragments, and that "a lot of people use sodium bicarbonate or baking soda to help bulk out the superglue as a fast-drying gap filler that has the same strong bonding properties of the superglue alone." We can imagine how that would work. You might take two bone fragments that do not naturally fit together, apply some gap-filling white-colored baking soda mixed with superglue, and then scrape off some of that filler in a kind of molding manner, to make what looks like a single bone, when the original bone fragments don't really fit together. Very convenient, but with a very high chance of creating visually deceptive results. 

What probably goes on very often is that paleontologists or their helpers are binding together fragments that never were actually close to each other in the same organism. The only honest way for a paleontologist to bind together fragments would be to use some binding substance that had a very non-white color (such as bright red or black), so that anyone could see all the spots where glue-binding occurred. Instead paleontologists typically use binding materials that have a bone-like color (such as baking soda mixed with superglue), which works better for disguising the fragment-gluing, and works better for fooling people into thinking that multiple fragments were found as a single fragment. 

We should not imagine that our lofty professors do the bulk of the messy work of gluing fragments. It is probably done largely by the less accomplished, such as assistants and students. The problem is that when you have bone fragments that might be from any of a large number of species and from any of many individuals from a particular species, there is usually a large element of speculation involved in the decision to glue two pieces of bone together. Such a decision involves presuming that the bones long ago fit together in a single individual.  But such a presumption is often a wild guess without any sound justification. 

paleontology problem

Short of gluing bone fragments with a binder in a bright, distinctive color (or in black), allowing anyone to see where filler was used, the only honest way for a paleontologist to present fragments is to simply arrange the fragments on a flat surface, without doing any gluing. That is what occurred for the famous Lucy fragments, but it is not what usually occurs. 

Superglue was invented in 1942, but long before that there was shellac. A few coats of shellac will encase something in a rock-like translucent resin, as anyone who has used shellac on a wooden floor knows.  Many old fossil displays were produced by putting together fragments (possibly from different organisms), gluing them in a not very stable way, and then coating the result in shellac to create a stable structure. 

A Scientific American article is entitled "How Fake Fossils Pervert Paleontology." Its subtitle is "A nebulous trade in forged and illegal fossils is an ever-growing headache for paleontologists." We hear about poor people in distant lands who first heard that you can get lots of cash by finding a good fossil, and who then started to make fake fossils in hopes of getting lots of money.  

It seems that many of the fossils in museums (particularly in China) may be such fake fossils. Even in countries like the United States, a significant fraction of the "fossil exhibits" in museums are outrageous fakes. Museums (often directed by professors) often display structures that countless visitors think are fossils, but which are merely fiberglass or plaster artworks inspired by some data obtained from studying fossils.  One large natural history museum confesses that only 85% of the specimens in one of its exhibit halls are actual fossils. When we consider that a large fraction of all fossils in natural history museums were produced by people speculatively gluing together fragments, we should wonder whether most of the fossils in natural history museums are in some sense fakes.  

We should regard as impeached most fossil displays that involved gluing in their construction, and should disqualify them as evidence. Whenever you see a fossil that involved gluing, there should be great doubt that any particular organism had those exact bones, unless the display is accompanied by a description telling us that all of the fragments were found in the same very small area equal to about the length of the organism.  Such descriptions are typically lacking.  Paleontologists or their helpers may gather bone fragments from an area much larger than the length of an organism, and then glue and mortar those fragments together, trying to pass them off as the skeleton of a single organism.   For example,  fragments supposedly coming from a 2-meter tall organism may be found scattered over 10 meters or 20 meters.  Typically a press release will fail to mention how wide and long and deep was the area of gathering, and we will merely be told something like "they were found at the same place." The longer and wider and deeper the area of gathering, the less plausible are claims that the fragments came from a single organism.  

We can imagine an extremely careful protocol that could be used to determine the exact distance between found bone fragments. On a flat surface, a pole might be planted at the center of the gathering area, with an attached tape measure, and with a North and South point marked at the pole. Whenever a bone fragment was gathered, its position would be noted by measuring the distance from the pole, and also the degrees (between 0 and 360) of the position, using those North and South points.  The bone fragment would be put in a plastic bag that included that tape measurement with the degrees estimate.  Using such careful measurements, it could be possible to estimate the exact distance between found bone fragments.  I doubt that so careful a technique is used by most paleontologists or their helpers.  I would imagine it's usually just "gather up bone fragments here and there, and say they were found at the same spot."  But unless the greatest care is taken to measure the distance between found bone fragments, there can be no reliable claims that they all came from the same individual. And what if you are gathering fragments in some cave or ditch where such a method as I described is not practical? Then you probably won't bother trying to keep track of the exact distance between bone fragments. 
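The distance arithmetic implied by such a protocol is simple. As a hypothetical sketch (the function name and units are my own illustration, not anything from a paleontology source), the straight-line distance between two find-spots recorded as pole-distance plus compass bearing follows from the law of cosines:

```python
import math

# Sketch of the distance calculation for the measurement protocol described
# above: each fragment's find-spot is recorded as a tape-measure distance
# from a central pole plus a compass bearing (0-360 degrees). The straight-
# line distance between two find-spots then follows from the law of cosines.

def fragment_distance(r1, bearing1, r2, bearing2):
    """Distance between two find-spots given polar coordinates from the pole.

    r1, r2            -- tape-measure distances from the pole (meters)
    bearing1, bearing2 -- compass bearings in degrees (0 to 360)
    """
    angle = math.radians(bearing1 - bearing2)
    return math.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * math.cos(angle))

# Two fragments 3 m and 4 m from the pole, bearings 90 degrees apart:
print(round(fragment_distance(3.0, 0.0, 4.0, 90.0), 2))  # 5.0
```

With every fragment bagged alongside its (distance, bearing) pair, a complete table of inter-fragment distances could be computed afterward, making claims of "found at the same spot" checkable rather than a matter of trust.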

I referred above to the weakness of claiming that 28 small fragments are fragments of a single individual, on the basis that "there were no replicating parts." Computing the probability of a replication of parts among 28 small bone fragments is a mathematical problem similar to the famous birthday problem: the problem of calculating the likelihood of two matching birthdays in a group of a certain number of people. Mathematicians have studied that problem thoroughly, and have determined that (surprisingly) once there are 23 or more people in the group, the probability of at least one matching birthday exceeds 50 percent. 

Now, if the average size of the bone fragments were greater than 1/365 of a skeleton, a positional match among 28 fragments would be even more likely than a birthday match among 28 people. With smaller fragments averaging perhaps 1/500 of a skeleton (as is apparently the case with the recently reported Homo naledi fragments), the chance of any particular pair of fragments matching is lower, and the chance of at least one positional match among 28 random fragments from different individuals is only roughly even.  The absence of a positional match is therefore entirely unsurprising, and does nothing to establish a likelihood here of the fragments being from a single individual. 
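The birthday-style arithmetic here can be checked directly. Below is a deliberately naive model of my own (each fragment independently occupies one of n equally likely skeletal positions, which real anatomy does not satisfy), used only to illustrate the collision probabilities involved:

```python
# Birthday-problem arithmetic: probability that at least two of k fragments
# land on the same one of n equally likely skeletal "positions."
# Illustrative model only -- positions are treated as uniform and independent.

def p_at_least_one_match(k, n):
    """Probability of at least one positional collision among k fragments."""
    p_no_match = 1.0
    for i in range(k):
        p_no_match *= (n - i) / n
    return 1.0 - p_no_match

# Classic birthday problem: 23 people, 365 days -> just over 50 percent.
print(round(p_at_least_one_match(23, 365), 3))  # 0.507

# 28 fragments averaging 1/500 of a skeleton -> only roughly even odds.
print(round(p_at_least_one_match(28, 500), 3))  # 0.537
```

Since even fragments from completely different individuals would produce no positional match nearly half the time under this model, "no replicating parts" carries very little evidential weight.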

We must also consider another factor: the large possibility of researchers discarding (or reclassifying) matching fragments. Let us imagine a paleontologist trying to position each of 29 small bone fragments within a speculative plaster skull or skeleton.  He wants to claim that these are all fragments from a single individual. But suppose that two of the fragments seem to be from the same position (for example, two bone fragments looking like part of the right pinky finger).  What would we expect the paleontologist to do? We would expect the paleontologist to simply put aside the inconvenient match, and write up a paper telling us about 28 bone fragments, none of which matched (or to conveniently describe one of the matching fragments as coming from some other part of the body, so that there is no match).  This is another reason why a lack of matches in some small set of bone fragments does nothing to establish a likelihood that the bone fragments are all from a single individual.

For decades we were told that the bone fragments of the famous Lucy fossil find were all from the same individual. But in 2015 scientific analysis suggested that one of the bones was from an entirely different species, a baboon. In an article discussing this, an expert states, "The co-mingling of skeletons is quite common in the archaeological record and it can often be difficult to separate out different elements if multiple bodies are mixed together." We can only wonder how many other parts of Lucy are not from the same individual. 

The famous Lucy fossil fragments were first discovered by Donald C. Johanson, who describes finding the first fragments (with graduate student Tom Gray) on pages 16-18 of his book Lucy: The Beginnings of Humankind. In the excerpt below the careful reader will find a good reason for doubting claims that Lucy is a skeleton of a single individual:

"That afternoon everyone in camp was in the gully, sectioning off the site and preparing for a massive collecting job that ultimately took three weeks. When it was done, we had recovered several hundred pieces of bone (many of them fragments) representing about forty percent of the skeleton of a single individual."

I can find no description of how long and wide and deep was the size of the area scoured during these three weeks to get these Lucy bone fragments, either in this book or in the original scientific paper reporting the find. But from the fact that it took "everyone in camp" three weeks of work to gather the fragments, we may presume that they were gathered over a rather large area, which is not at all what we would expect if the fragments came from a single individual. 

Acting in an astonishingly credulous manner, science writers seem to almost never bother to ask the crucial question of "how long and wide and deep was the gathering area" when hearing some claim that fossil bone fragments came from a single individual. They seem to have zero interest in such a question, which is crucial to testing the reliability of claims that fossil fragments came from a single individual.  

The lead researcher of the latest Homo naledi paper is the co-author of a book I mentioned in 2019, one entitled "Almost Human: The Astonishing Tale of Homo naledi and the Discovery That Changed Our Human Story." On page 193 of their book, the authors give us an estimate of the brain size of the Homo naledi organisms corresponding to the fossils they found: about 560 cubic centimeters for a male, and about 450 cubic centimeters for a female. The average size of a male human brain is about 1350 cubic centimeters. So far from being "almost human," Homo naledi had a brain only about 41% of the size of the modern human brain. Judging from brain size, we shouldn't even consider Homo naledi as half-human. So why was the title "Almost Human" used? 

Recent coverage of very small fossil fragment finds has a "grasping at straws" sound to it, along with the same old misleading words we hear so often in coverage of paleontology activity. A LiveScience.com story refers to "fossils" and "bones." Covering the same story, Phys.org also uses the terms "fossils" and "bones" while telling us that none of the 3800 items found was longer than 4 centimeters, about the length of a peanut.  At least there's no glue fakery. On the basis of extremely dubious DNA analysis, in which only tiny fragments of DNA were analyzed, some scientists drew unwarranted conclusions about what organisms these tiny fragments of bone came from.  The DNA fragments must have been tiny, because the half-life of DNA is only 521 years. Tiny fragments of bone plus tiny fragments of DNA do not add up to reliable conclusions. 
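The 521-year half-life figure implies how little intact DNA can remain after long ages. Here is an illustrative calculation (the function name is mine; the exponential-decay formula is the standard one underlying any half-life figure):

```python
# Standard exponential-decay arithmetic for the 521-year DNA half-life
# figure cited above: the fraction of intact DNA bonds remaining after
# t years is 0.5 ** (t / half_life).

def dna_fraction_remaining(years, half_life=521.0):
    """Fraction of DNA bonds still intact after the given number of years."""
    return 0.5 ** (years / half_life)

# After one half-life, half remains; after 10,000 years, only about
# a millionth of the bonds survive.
print(dna_fraction_remaining(521))              # 0.5
print(f"{dna_fraction_remaining(10_000):.2e}")  # 1.67e-06
```

At the ages claimed for many hominid fossils, this arithmetic means only minute scraps of a genome could possibly survive, which is why conclusions drawn from such DNA analyses deserve heavy skepticism.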

Very recently we had quite a confession in a biology paper whose primary author is Tom Roschinger, a Caltech scientist in the Division of Biology and Biological Engineering. The paper begins by saying, "Biological systems have evolved to amazingly complex states, yet we do not understand in general how evolution operates to generate increasing genetic and functional complexity." The statement stands in defiance of long-standing groundless boasts that scientists have figured out how complex biological things originated. Funny how confessions like this tend to come more often from people who have studied engineering. 

Tuesday, November 30, 2021

The Bigelow Prize Essays on the Best Evidence for Life After Death

The Bigelow Institute for Consciousness Studies (funded by the very wealthy businessman Robert Bigelow) recently published a series of prize-winning essays attempting to answer the question of what is the best evidence for life after death. Most of the results are very impressive. Because there were very large money prizes to be won ($500,000 for first place and $300,000 for second place), the contestants seemed to go the extra mile, typically writing essays that are more like small books than mere essays.  These long essays are loaded with references to original sources, which allow you to investigate some of the discussed topics further. The best thing is that you can very conveniently read all of the prize-winning essays by going to the page here.  Each essay must be read as a .pdf file. 

The first prize essay by Jeffrey Mishlove covers a very wide variety of topics such as near-death experiences,  what are called "after-death experiences," evidence for reincarnation, mental mediumship, physical mediumship, and ESP.  There are many links to video clips.  I never knew it was possible to embed video clips in .pdf files. Many of the clips are of people who Mishlove personally interviewed.  I like the technique of including many video clips of eyewitnesses and people who personally experienced the paranormal.  Skeptics sometimes try to portray those who report the paranormal as being unhinged or kooky, but the truth is that people who report the paranormal tend very strongly to be ordinary people who talk in ordinary tones and expressions. You can find that out by playing some of the clips in Mishlove's long essay. 

Mishlove has a notable eyewitness account of his own to report: before learning of his uncle's death, Mishlove had a very vivid dream in which it was as if his uncle was talking to him at length. Concerned, he wrote his mother asking about his uncle's health, and got a phone call two days later saying that his uncle had died on the same day as Mishlove's dream. 

The second prize essay by Pim van Lommel MD discusses mainly near-death experiences, a topic he has researched for many years. He reports observing in 1969 a near-death experience in a man undergoing cardiac arrest, at a time when the term "near-death experience" hadn't even been coined.  On page 33 of his essay we read an experience personally related to him by someone else, an experience of interest to people such as me who photograph orbs. Here is the account he received:

"At the end of 2000 my eldest son died by suicide. From the police report estimating time of death, I learned it was about the same time a curious thing happened to me. I saw a sphere of light enter my window and then enter my forehead whereupon I was suddenly ‘awareness without a body', in a place profoundly lit, with a sense of spacious depth but without any features. I felt profound warmth and love, felt my son was ok, and heard the words, 'There is nothing wrong and there has never been anything wrong.' Somehow, I knew this to be true with every cell of my body even though he had been suffering with pain for years. The next day I learned of his death."

The third-prize essay by psychologist Leo Ruickbie shows a lot of scholarship on relevant observational topics such as apparition sightings and clairvoyance. I certainly must credit Ruickbie (a psychology PhD) as having done more to study the paranormal than 99% of all psychology PhDs, who as a whole are very unscholarly on this topic.  

We then have eleven "second tier" essays by contestants who each seem to have won $50,000. The first of these is by Julie Beischel, a PhD who for many years has tested modern-day mediums, apparently finding that some of them stood up very well to rigorous testing protocols. The second "second tier" essay is an essay by Stephen E. Braude that is less powerful than most of the other essays in terms of presenting evidence, and seems mainly devoted to questions such as what qualifies as good evidence. 

The third $50,000 winner essay is by philosopher Bernardo Kastrup, whose title indicates that he will provide "a Rational, Empirical case for postmortem survival based solely on mainstream science." I object to the implied idea that we should perhaps not pay attention to that which is outside of so-called "mainstream science." Good reliable observations appearing in large numbers should be respected and discussed and studied, regardless of whether they have been declared taboo by people calling themselves "mainstream scientists," who have a very bad habit of declaring reports of many varieties of human experience to be "out of bounds" (mainly because they conflict with the unproven dogmas of such scientists). 

Kastrup's essay is cluttered up with some references to not-very-convincing neuroscience studies that I suspect used Questionable Research Practices (as a large fraction of neuroscience experiments do these days).  He has some idea that brain function is decreased when you take so-called mind-expanding drugs, but his case for this rests on marginal brain scanning studies that are probably of low statistical power, and that probably fail to show robust evidence for a strong effect. Kastrup's case is largely philosophical, and his essay would have been far stronger if he had not started out by deciding to admit only evidence that would be classified as "mainstream science." 

But there is a way to make a case for life after death without leaving so-called "mainstream science."  Following the latter part of the essay here, that way is to concentrate on low-level details of the brain, and to show all of the ways in which the brain is unsuitable for explaining the main mental phenomena of humans, such as instantaneous acquisition of new memories, instantaneous memory retrieval, very fast thinking, very fast recognition, understanding, insight and so forth.  This approach involves a very careful study of things such as the very short lifetimes of brain proteins, the low stability of synapses and dendritic spines, the lack of any actual read or write mechanism in the brain, the absence of any evidence that brains do any such thing as converting learned information to neural states or synapse states, the variety of very high noise levels in the brain which should prevent fast accurate recall, the lack of any coordinate system or indexing system in the brain that would allow instant memory retrieval, the unreliability of synaptic transmission in the cortex, and so forth. The approach leads to a life after death conclusion, because if our brains cannot explain our minds and our memories, there is no reason to think that such things should perish when our brains perish.  But Kastrup fails to take that approach, and his essay ends up seeming weaker than most of the other essays. 

The fourth $50,000 winner essay by Elizabeth G. Krohn is mainly an eyewitness account of remarkable paranormal experience, which includes an account of an apparent gain of psychic functioning after being struck by a lightning bolt. It makes interesting reading, and its autobiographical approach is a nice change of pace from the other essays. 

The fifth $50,000 winner essay by Jeffrey Long MD is by one of the leading researchers of near-death experiences, and is well worth reading. The sixth $50,000 winner essay by Michael Nahm is a discussion of an extremely wide variety of evidence relevant to life after death, one that shows considerable depth of study and scholarship. 

The seventh $50,000 winner essay by Sharon Hewitt Rawlette has some fascinating accounts. On page 7 we read this example of the very common phenomenon called deathbed visions:

"For instance, in one case, a five-year-old girl named Lalani who was dying of leukemia began to talk about her interactions with someone named 'George,' whom no one else in the room could see. Everyone thought she was imagining things until one night when her grandmother went through a photo album with her. Her grandmother turned to a page Lalani had never seen before, and Lalani suddenly exclaimed, 'There’s George!' The man in the photograph was the grandmother’s own godfather, who had died when the grandmother was herself only five years old. Though Lalani had no normal way of knowing it (according to the family), his name was indeed George." 

We read this very interesting account on page 32:

"In another case cataloged by Rivas, Dirven, and Smit, this one investigated by Dr. Melvin Morse and Paul Perry, Olga Gearhardt of San Diego, California, was receiving a heart transplant. Her whole family had gathered at the hospital during her surgery, except for her son-in-law, who had a phobia of hospitals. At 2:15am, the new heart would not beat properly and then stopped completely. The resuscitation process took hours, but finally her new heart was persuaded to function properly. Meanwhile, the son-in-law, at home, woke up at 2:15am to see Olga standing at the foot of his bed. She was so lifelike that he thought it was actually her, that her plans must have changed and, instead of getting surgery, she had come to his house. He asked her how she was doing, and she told him, 'I am fine, I’m going to be all right. There is nothing for any of you to worry about.' When she disappeared, he got up and wrote down the time and what she had said. The next morning, when Olga came out of surgery, she mentioned 'the strange dream' she’d had, which appears to have been a near-death experience. She not only had the experience of being out of her body watching the doctors operate, but she went to her family in the waiting room and tried to communicate with them. Unable to get through, she then decided to go to her son-in-law at his home, where 'she was sure she had stood at the foot of her son-in-law’s bed and told him that everything was going to be all right.' ”

The eighth $50,000 winner essay is by a man who devotes himself to describing the career of a Brazilian medium named Chico Xavier, who authored some 450 books, which he claimed were mainly produced with the help of discarnate spirits. A shorter summary of this case can be found here. I haven't studied the case, so I won't comment on it. 

The ninth $50,000 winner essay is a very strange and speculative essay by Nicolas Rouleau, who like Kastrup includes some not-very-convincing brain studies trying to support his ideas.  There's a lot of talk about electromagnetism and geomagnetic fields that I don't find very convincing. On page 37 of his essay he refers to the "electromagnetic forces that give rise to experience and thought," an idea that is as implausible as the claim that brains give rise to experience and thought. 

I recommend skipping the tenth $50,000 winner essay, one by David Rousseau and Julie Billingham. It features a bunch of boldface principles that are sometimes doubtful or untrue. One is the untrue statement that "a hypothesis cannot be proven, we can only argue for its plausibility." Another is the claim that "naturalistic things are the things that science can study," a principle that has long been used as an excuse for not studying the paranormal or the seemingly supernatural. If scientists were to follow such a principle, they would not study global warming, which is presumably an effect produced not by natural causes but by artificial human causes; and scientists would also not be looking for radio signals from extraterrestrials, things which would be of  artificial origin rather than a natural origin.  You won't learn much of anything in this essay that isn't discussed in the other essays. 

Finally in the list of $50,000 winner essays we have a very cleverly conceived  essay by Michael Tymn. Tymn imagines a kind of courtroom battle between two sides, one that believes in life after death, and one that does not. We read the prosecution case in favor of life after death, in which various witnesses are called, all actual people who wrote about the paranormal. Tymn makes up the questions that are asked, but all of the answers given seem to be quotations from the published works of the witnesses.  The overall result is a very readable presentation of evidence for life after death. 

On the same page we have "honorable mention" winners that each won a $20,000 prize.  One of these is an essay claiming to provide "definitive proof" of reincarnation, based on a single case.  The case fails to do that. We have similarities between statements of a child and the career or death of a pilot dying in World War II. The similarities could be the result of coincidence, or of some kind of paranormal knowledge acquisition (clairvoyance or ESP) that does not involve reincarnation, such as telepathy between a living person and a person living in some afterlife realm. A very scholarly and thorough entry in the "honorable mention" category is the almost-book-length essay of Walter Meyer zu Erpen, covering an extremely wide variety of topics (and including some very interesting personal observations).  

Of the $20,000 prize winners, perhaps the best is an essay by hospice doctor Christopher Kerr, who has long studied end-of-life visions and dreams in the dying. Thus far I have only been able to find rather skimpy papers or articles by Kerr on this topic, but here we have a long essay with much data, case examples and links to video interviews. Kerr reports this:

"Shortly before death, the dying have dreams and visions of their predeceased loved ones, scenes of vivid and meaningful reunions that testify to an inexplicably rich and transformative inner life. The phenomenon includes a lived, felt, often lucid experiential reality whereby those loved and 'lost' return to the dying in ways that cannot be explained by memory alone. Children and parents sometimes lost decades earlier come back to put patients back together and help them transition peacefully. At the precise moment we associate with darkness, loss, physical decline, and sadness, their presence helps the dying achieve peace, comfort, and forgiveness, which suggests an existence beyond our bodily form."

Kerr uses the phrase "shortly before death," but I have well-documented reasons to suspect that something along such lines may sometimes get started a year or more before a person's death. Reporting on a study of 1500+ patients in Buffalo, New York, Kerr states this: "The data confirmed that the vast majority of dying patients, shortly before death, have these comforting dreams and visions that most commonly summon predeceased loved ones."   

William Blake book cover
A cover art illustration by William Blake

Friday, November 26, 2021

Don't Be Impressed by a Consensus That Is Enforced or Sociogenic

At the New Atlantis web site we have a very long essay by M. Anthony Mills on the topic of scientific consensus. The title is "Manufacturing Consensus" and the subtitle is "Science needs conformity — but not the kind it has right now."  We see a picture at the top of a herd of sheep all heading in the same direction. But don't be fooled by the photo, the title and the subtitle, which are all rather like quarterback pump fakes (when a quarterback pretends to throw in one direction, but then throws in another). In the essay Mills speaks instead as if he wants you to be one of the unquestioning sheep, one of a herd of meek minions who blindly believe as they are told to believe by science professors. 

Mills's purpose seems to be to encourage a credulous "believe-it-all" attitude towards science professors and their belief traditions. Mills seems to want us to trust our science professors as much as a Soviet commissar trusted his politburo. And just as Soviet commissars very much believed that Marxist party orthodoxy should be enforced, Mills tells us multiple times that scientist belief traditions should be enforced.

Using "false dilemma" reasoning, in which the reader is offered two radically opposed alternatives as if they were the only choices, Mills paints a choice between two extremes. The first extreme he describes like this:

"According to one influential view, consensus should play no role in science. This is because, so the argument goes, science is fundamentally about questioning orthodoxy and elite beliefs, basing knowledge instead on evidence that is equally available to all. At the moment when science becomes consensus, it ceases to be science."

This is a "straw man" portrayal. Critics of unjustified claims of a scientific consensus do not typically claim "consensus should play no role in science." Everyone agrees that there is a consensus about things such as the existence of cells and the high temperature of the sun, and no one has a problem with there being a consensus about such well-established facts. But critics of unjustified claims of a scientific consensus may reasonably claim that (1) some of the claims made about a scientific consensus are untrue because such a consensus (defined as a unanimity of opinion) does not really exist, or (2) some of the claims made about a scientific consensus are inappropriate because the underlying belief is untrue. 

After constructing and then knocking down this "straw man" of ultra-rebellious thinking in which all established facts are put in doubt, Mills proceeds to advocate what he seems to think is the proper way things should occur in the world of science. He uses a kind of reasoning very similar to that of Catholic authorities during the Protestant Reformation, who argued that only those sufficiently trained by the Catholic Church could criticize what the Catholic Church was doing. Mills seems to advocate a world in which only those trained within science academia's belief traditions can criticize the claims of such traditions. He does so by stating:

"In order to participate in or contribute to established science — much less to criticize or overthrow it — one has to have been trained in the relevant scientific fields. That is to say, one has to have been brought up in a particular scientific tradition, whether geocentric or heliocentric astronomy, or classical or relativistic physics."

This statement is very wrong.  It is not necessary for a person to have been trained and "brought up in a particular scientific tradition" in order to criticize the statements of scientists in that tradition. Any person studying by himself for a sufficient length of time can gain enough knowledge to make good and worthwhile criticisms of the statements of scientists in many scientific fields.  Also, a person can contribute to science without being "brought up in a particular scientific tradition."  There are many ways in which citizen scientists can and have contributed to established science.

A very important point that Mills fails to realize is that you don't need to thoroughly understand a theory in order to make solid, important rebuttals of it. Suppose someone hands me some bizarre 80-page document advancing an elaborate ancient astronaut theory that includes (among its many complicated claims) the claim that extraterrestrials are living in tall castles on the moon. I don't have to understand all or most of this theory to refute it. I need merely point out that astronomers have thoroughly mapped the moon, and have found no such high castles on it. Similarly, I don't need to know much about the complicated theoretical speculations of what is called cosmic inflation theory to point out that this theory of something strange happening near the first instant of the universe is not an empirically well-founded theory. I need merely learn that cosmologists say that throughout the first few hundred thousand years of the universe's history matter and energy were so dense that no observations from such a period will ever be possible. Armed with that one fact, I know that this cosmic inflation theory can never possibly be verified.  

We then have a statement which is characterized by an overawed naivete. Mills seems to suggest that scientists receiving training should trustingly accept triumphal stories handed down by their elders, rather like Catholics reverently accepting stories of the wondrous deeds of the medieval saints. He states this:

"To be initiated into a tradition, one has to first submit to the authority of its bearers — as an apprentice does to the master craftsman — and to the institutions that sustain the tradition. In the natural sciences, the bearers of tradition are usually exemplary figures from the past, such as Newton, Einstein, Darwin, or Lavoisier, whose stories are passed down by teachers and textbooks."

This is not the way anyone should study nature. We should learn about nature by studying observations, always questioning belief dogmas and whether observational claims are robust, always asking whether some claim about nature is a belief mandated by facts and observations, or merely some speech custom or belief tradition of overconfident authorities. Sacred lore and kneeling to tradition may have their place in religion, but they have no proper place in the world of science. Whenever scientists are taught through a kind of process in which "stories are passed down" like sacred lore, and revered "not-to-be-questioned" scientists are put on pedestals, it is a sign that scientific academia has gone off track. 

We may wonder whether the visual below might be a good representation of the ideas of Mills on how scientists should be trained:

bad way to train scientists

Very confusingly defined in multiple ways, "consensus" is a word that some leading dictionaries define as an agreed opinion among a group of people. The first definition of "consensus" in the Merriam-Webster dictionary is "general agreement: unanimity."  Mills fails to see that some of the most important claimed examples of scientific consensus are cases where we do not have good evidence that a consensus (defined as a unanimous opinion) actually exists. The only way to reliably tell whether a consensus exists among scientists is to take a secret ballot vote, and no such votes of scientists are taken. So, for example, we do not know whether even 90% of scientists believe in Darwinism or in claims that minds come mainly from brains.  

But there are quite a few of what we may call cases of a socially enforced reputed consensus. That is where there is a general expectation that a scientist is supposed to support some idea (or at least not contradict it), or else face career trouble. Mills seems to approve of the idea of a socially enforced consensus. First he states, "This is why consensus is so vital to science — and why the institutions of science not only can and do but should use their authority to enforce it." This is only one of 21 times Mills uses the word "authority" in his essay. 

Enforcing a consensus

The idea of an enforced consensus is self-contradictory. According to the Merriam-Webster definition, "consensus" means agreement or unanimity. When people agree on something, there is no need to enforce opinions. The instant someone talks about enforcing a consensus, it is an indication that no real consensus exists. 

Mills then gives us a rather long defense of using a scientific consensus as an excuse for placing questions "out of bounds" and for "gatekeeping," creating a kind of protected country club from which unwelcome opposing observations and arguments are kept out, like black applicants to 1950s country clubs. Mills approvingly cites the tendency of modern scientists to ignore a host of arguments and observations that conflict with their belief dogmas. Such behavior, so unworthy of a first-class scientist or scholar, is an appalling refusal to study observational reports that conflict with your beliefs and count as evidence against what you claim is true. Referring to the literature of critics and contrarians as "venues," he writes this:

"What is striking, however, is that the arguments presented in these venues are almost never refuted by mainstream scientists. They may be publicly denounced, but without elaborate argumentation in professional journals. Most of the time, they are simply ignored....Science could never advance if it had to re-establish every past theory, counter every objection, or refute every crank."

The term "crank" merely means someone who irritates you. The claim that scientists should be excused from countering objections to their theories because they lack the time is one of the silliest arguments made by scientists too lazy to respond to such objections, and it is here repeated by Mills. Scientists nowadays waste enormous amounts of time and federal dollars cranking out poorly reproducible results and poorly designed experiments described in papers that typically get almost no readership, with a large fraction of the papers being highly speculative, mostly unintelligible, or devoted to topics so specialized and obscure that they are of no real value. The idea that scientists are too busy to respond to arguments and evidence against their theories is absurd. Were scientists to stop wasting so much time, they would have plenty of time to respond to such arguments and evidence.

There then follows a bad argument by Mills that we must accept the teachings of scientists across the board because we are "dependent" on such teachings. It involves another silly argument about time, one that seems to say that we can't challenge any dogmas of scientists because we don't have time to study the relevant topics. That isn't true at all. For example, anyone can read up on a Saturday on the very slight reasoning Darwin used to advance his very simple theory of so-called "natural selection," and anyone can read up on a Sunday on some powerful reasons for rejecting such claims. 

Scientific theories can be cast into doubt or effectively refuted by someone who has not spent much time studying such theories. I will give one of hundreds of possible examples. Consider the case of out-of-body experiences, which are very widely reported by humans. Prevailing neuroscientist theory holds that the mind is purely the product of the brain, and prevailing evolutionary theory holds that the collective reality of human minds is purely the product of some brain-related random mutations in the past. But suppose a scholar collects many reliable accounts of people who report traveling out of their bodies and observing them from above; and suppose he finds that in many of these cases the person had no appreciable brain activity; and suppose he gathers evidence that quite a few of these people discovered things that should have been impossible for them to know if such experiences were mere hallucinations. Now we have evidence that would seem to cast very great doubt on two major scientific theories: both the theory that mind is purely the product of the brain, and the theory holding that the mentality of the human species is purely the product of some brain-related random mutations in the past. To produce such evidence, it was not even necessary to study closely the theories cast into doubt by the evidence. I could give countless other examples of how independent research by non-scientists can cast doubt on the claims of hallowed scientific theories, without requiring great study of the intricacies of such theories. Such examples help show that we are not at all "dependents" who must meekly accept scientific theories we have not become experts on, like little children forced to believe whatever their parents tell them.  

Later Mills mentions "the modern evolutionary synthesis" (in other words, Neo-Darwinism), making the very dubious claim that it is a "consensus."  He is apparently unaware that there is a large respectable scholarly body of literature opposing Neo-Darwinism, and that nearly a thousand scientists have signed their names indicating their agreement to the following statement:

"We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged."

Any secret ballot of scientists would show that a significant fraction disbelieve or doubt Darwinism. I doubt Mills realizes that a claim of a consensus is a powerful rhetorical weapon, and that long before there is anything like a consensus, believers in some scientific idea may start claiming all over the place that there is a consensus in support of it. This is a powerful form of psychological intimidation, a sneaky way by which bad ideas can start to go viral and rise to the top. Advocates of some theory will claim that all or almost all of the smart people are moving or have moved to embrace the theory, although that may not at all be true. The "everybody moving in one direction" impression may largely be an illusion fostered by those who know the persuasive power of portraying a consensus that does not yet exist. "Bandwagon and herd-effect your way to the top" has been the secret by which many a bad theory becomes dominant. 

Mills cites "the modern evolutionary synthesis" as an example of something "that scientific institutions can and should enforce." Here he sounds like a Soviet commissar telling us that Marxist dogma must be enforced. The commissars believed it was right for Marxist dogma to be enforced, because they regarded it as a form of science, what they called "economic science" or "scientific communism."

Towards the end, Mills mentions how scientists were wrong when they claimed in 2020 that there was a consensus that COVID-19 had purely natural origins. But he seems to have learned no sound lesson from such a failure, and seems to mention it merely as a lure to attract readers interested in hearing ideas different from his own, like someone baiting a mouse trap with cheese.

The collapse of the claim that there was a scientific consensus on COVID-19 origins came about largely through the work of non-scientists such as journalists and people in non-scientific fields (such as Jamie Metzl) who kept writing articles giving us facts opposing such a consensus. Here we have an important reality contrary to what Mills has suggested: that science matters should be left to the scientists because they are too complicated for others to understand, a suggestion Mills makes when he says, "In order to participate in or contribute to established science — much less to criticize or overthrow it — one has to have been trained in the relevant scientific fields." Mills has failed to draw the right lesson from what happened here: that non-scientists may have something very valuable to contribute to scientific debates. The "leave science to the scientists" thinking that Mills seems to advocate makes no more sense than "leave war to the generals."

Mills tries to portray the inaccurate 2020 claims of a COVID-19 origins consensus as some freak aberration. But there was nothing very out-of-the-ordinary about such a thing. It was just another example of the very long-standing tendency of many science professors to jump to conclusions, to claim to know things that they do not actually know, to avoid studying evidence that conflicts with the conclusions they have reached, to claim they understand the origin of something before achieving almost any of the prerequisites needed to credibly make such a claim, to meekly bow to the authority of their peers or predecessors, and to unfairly characterize reasonable critics as unreasonable extremists. To see a table listing many parallels between COVID-19 origins groupthink and human origins groupthink, see this post.  

In the last paragraph, Mills returns to his apparent recommendation that we be ruled by the belief traditions of professors, telling us that "science’s integrity must be protected by enforcing consensus." When people say such things, they are using the secondary definition of "integrity," which is "the state of being whole and undivided." Similarly, medieval authorities argued that "the integrity of the Church must be protected" when trying to justify abominations such as the Inquisition and the Albigensian Crusade. The idea that a consensus should be enforced is a medieval-sounding and Soviet-sounding idea that has no place in properly functioning science. Science by itself is a morally neutral activity with no intrinsic moral component, and in the past hundred years many scientists in the Soviet Union, Maoist China, imperial Japan and also the United States have had a history of entanglement (direct or indirect) with brutality and oppression, often funded by governments. So you may find Mills saying that Darwinism needs to be enforced almost as scary as Mike Flynn saying that America should have only one religion. 

Mills would do well to study the chart below, and to ask himself whether he is encouraging some of the dysfunctional items on the right end, while discouraging some of the good things on the left end. 

good science practice versus academia reality

You should tend not to be impressed by a claim of a consensus whenever such a consensus is enforced or sociogenic. Imagine I query fifty architects for their opinions about a new building in some city, in fifty separate emails, each addressed to just one architect, and all fifty give me the same answer about whether the building was well-designed. Such a consensus is impressive because it involves no social conformity effect and no enforcement. But it is not impressive when 100% of the graduates of some teaching program claim to believe in a dogma that everyone who went through that program was strongly pressured into believing. For example, 100% of the graduates of biblical fundamentalist theology schools may believe in biblical fundamentalism, but that does nothing to show that biblical fundamentalism is true. And 100% of the graduates of some neuroscience master's degree program may believe the brain is the sole cause of the mind, but that does nothing to show that such a belief is true; for within such a program anyone who rejected such a dogma would have been treated like an outcast or a leper. 

Let's consider things generically, letting "Theory X" stand for any number of theories. Suppose there arises a Theory X which starts to gain enough acceptance that professional training programs appear to indoctrinate people in Theory X, making them Theory X experts. It may be that a large fraction or most of the population rejects Theory X as unbelievable. But it may be that a small number of people are particularly inclined to believe in Theory X. It will usually be only such people who sign up for some long Theory X training program to produce Theory X experts. In that program these trainees are constantly pressured to maintain their belief in Theory X, and it is made clear that anyone who rejects Theory X will be scorned and ostracized. Later, when the training is finished, a new group of Theory X experts appears. Upon getting jobs as Theory X experts, the social pressure continues, with all of their fellow Theory X experts pressuring them to keep espousing the tenets of Theory X. The "old guard" keeps the new guys in conformity with Theory X. 

Suppose then a poll is taken indicating that 100% or 95% of Theory X experts believe in Theory X. Should we be the least bit impressed by this consensus or near-consensus? Certainly not, because it is merely a sociogenic effect.  When a training program acts like a cookie cutter to produce uniformity in the trainees,  that does nothing to establish the likelihood that the training program's tenets are correct.

Monday, November 22, 2021

A Hailed "Breakthrough" May Be the Same Old Schlock

Behold the modern beast that is online science news reporting.  To understand it, we must understand its financial dynamics. Major websites have learned the following fundamental formula, which applies to any web page containing ads:

                         Page Views = Cash Income

The reason for this is that major websites make money from online ads. So the more people view a web page giving some hyped-up science story, the more money the website makes. This means that science reporting sites have a tremendous financial incentive to hype and exaggerate science stories. If they have a link saying, “Borderline results from new neuron study,” they may make only two dollars from that story. But if they have a story saying, “Astonishing breakthrough unveils the brain secret of memory,” they may make five hundred dollars from that story. With such a situation, it is no wonder that the hyping and exaggeration of scientific research is at epidemic levels.

There is a rule of thumb you can follow as a general guideline: the more ads you see on a page reporting scientific results, the more suspicious you should be that you are reading some hyped-up clickbait. Using that rule we should be very suspicious indeed of a recent story on www.statnews.com, one with the untrue title "Using optogenetics, scientists pinpoint the location and timing of memory formation in mice." The page devoted to this story has no fewer than four online ads. 

Here is the opening of the story, which ends in an unjustified claim and describes a method for testing memory in mice that the author fails to recognize as very defective:

"A mouse finds itself in a box it’s seen before; inside, its white walls are bright and clean. Then, a door opens. On the other side, a dark chamber awaits. The mouse should be afraid. Stepping into the shadows means certain shock — 50 hertz to the paws, a zap the animal was unfortunate enough to have experienced just the day before. But when the door slides open this time, there is no freezing, no added caution. The mouse walks right in. ZAP. The memory of this place, of this shock, of these bad feelings had been erased overnight by a team of neuroscientists at four leading research institutions in Japan using lasers, a virus, and a fluorescent protein normally produced in the body of sea anemones. Their work, published Thursday in Science, pinpoints for the first time the precise timing and location of minute brain changes that underlie the formation and consolidation of new memories."

We read above a description of scientists using one of the silliest methods in neuroscience: trying to determine whether a rodent is feeling fear based on a subjective and unreliable judgment about "freezing behavior," in which an animal supposedly stops moving because it is afraid. Trying to determine fear in an animal by judging "freezing behavior" is unreliable, and makes no sense. Many times in my life I have suddenly come upon a house mouse, causing me or someone else to shriek, and I never once saw a mouse freeze. Instead, mice seem invariably to flee rather than freeze. So what sense does it make to assume that the degree of non-movement ("freezing") of a rodent should be interpreted as a measurement of fear? Moreover, judgments of the degree of "freezing behavior" in mice are subjective and unreliable. 

Fear causes a sudden increase in heart rate in rodents, so measuring a rodent's heart rate is a simple and reliable way of measuring its fear. A scientific study showed that the heart rate of a rodent instantly shoots up from 500 beats per minute to 700 beats per minute when the rodent is subjected to a fear-inducing stimulus such as an air puff or a shaking platform. But rodent heart rate measurements seem never to be used in neuroscience rodent experiments about memory. Why do researchers rely on unreliable judgments of "freezing behavior" rather than the far more reliable measurement of heart rate when determining whether fear is produced by recall? 

The Stat News story (written by a writer with a bachelor's degree in biology and a master's degree in journalism) is a typical example of "gee whiz" science journalism in which dubious triumphal claims are uncritically parroted or amplified.  The author repeats a legend that Steve Ramirez implanted a memory in a mouse. Read here for a 2016 post I wrote debunking such a claim reported in a 2013 paper. 

The Stat News story then repeats another groundless legend spread by neuroscientists, the legend that study of so-called "long-term potentiation" has shed some light on how memories are created. We read this:

"One way that scientists think that happens is through something called long-term potentiation. All the sights and sounds and smells and emotions associated with a given experience cause certain neurons to fire. And when they do, it leads to enduring changes in those cells and in cells nearby — they sprout protrusions that help transmit electrical signals and make more connections with nearby neurons."

What is misleadingly called “long-term potentiation” or LTP is a not-very-long-lasting effect by which certain types of high-frequency stimulation (such as stimulation by electrodes) produce an increase in synaptic strength. The problem is that so-called long-term potentiation is actually a very short-term phenomenon. A 2013 paper states that so-called long-term potentiation is really very short-lived:

"LTP always decays and usually does so rapidly. Its rate of decay is measured in hours or days (for review, see Abraham 2003). Even with extended 'training,' a decay to baseline levels is observed within days to a week."

So-called long-term potentiation is no more long-term than a suntan. The use of the term "long-term potentiation" for such an effect is deceptive, particularly when it is suggested that so-called "long-term potentiation" might have something to do with explaining memories that can last for 50 years or longer. 

The new Japanese study is all based on equating so-called "long-term potentiation" with memory, and since this so-called "long-term potentiation" is very short-lived, we can be rather sure that nothing has been done to explain how brains could form or store permanent memories.  Besides this and the reliance on unreliable "freezing behavior" judgments mentioned above, there are other reasons why we should suspect the new Japanese study is just another example of the same old schlock, yet another example of Questionable Research Practices by neuroscientists. 

The link here goes to the paper, but it is hidden behind a paywall, and I can't find a preprint on the main biology preprint server. But the page does have two links I can use, including one allowing you to download Materials and Methods information. When I use that link I see figures such as Figure S5 (which tells us some of the sample sizes used for the experiments, numbers such as 6 mice, 8 mice and 10 mice) and Figure S10 (which mentions a sample size of only 6 mice). These sample sizes are way too small for a reliable result to be claimed. As a general rule, we should suspect that a neuroscience experiment has produced a mere false alarm whenever it fails to use at least 15 animals per study group. Some of the study group sizes were less than half the size that should be used for a reliable result. 

It is well known that it is very easy to get misleading false alarm results whenever too-small sample sizes are used. Here's an example. Suppose there are twenty researchers who each ask five friends to predict whether a flipped coin will be heads or tails. There is nearly a 50 percent chance that at least one of them, purely by chance, will be able to report that all five of his friends correctly predicted the results. But that's just a false alarm effect. Change the experiment so that each researcher asks fifteen friends to predict the coin flips, and very probably not a single researcher will report that all of them guessed correctly. 
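The arithmetic behind this coin-flip example can be checked with a short simulation. This is only an illustrative sketch (the function name, trial count, and random seed are my own choices, not from any study):

```python
import random

def false_alarm_rate(n_researchers, n_friends, n_trials=20000, seed=0):
    """Estimate the chance that at least one of n_researchers finds that
    ALL of his n_friends correctly guess a fair coin flip by luck alone."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        for _ in range(n_researchers):
            # each friend has a 50% chance of guessing a fair flip correctly
            if all(rng.random() < 0.5 for _ in range(n_friends)):
                hits += 1
                break  # one all-correct report is enough for a false alarm
    return hits / n_trials

print(false_alarm_rate(20, 5))   # roughly 0.47
print(false_alarm_rate(20, 15))  # well under 0.01
```

The exact probabilities agree: each researcher gets an all-correct result with probability (1/2)^5 = 1/32, so the chance that at least one of twenty does is 1 − (31/32)^20, about 47%; with fifteen friends per researcher it falls to roughly 0.06%.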

There is a standard way for a serious researcher to calculate whether he has used a sample size sufficient to get a result that is very probably not a false alarm: doing what is called a sample size calculation. When I use the "MDAR Reproducibility Checklist" link on the page giving the abstract of the Japanese researchers' paper, I see this confession:

"No statistical method was used to determine sample size. Sample sizes are similar for those used in the field."

Oops, our researchers failed to calculate how large a sample size they should use. That's like a rocket engineer failing to calculate whether his moon rocket will land on the moon or crash into it. Given the kind of sample sizes mentioned in one of their documents (6 mice, 8 mice and 10 mice), we should assume the Japanese researchers used way too small a sample size to get a moderately persuasive result. Their claim that their sample sizes are "similar for those used in the field" should do nothing to restore our trust. In experimental rodent research there has for a very long time been a gigantic failure of experimenters to use adequate sample sizes, with inadequate sample sizes being more the rule than the exception. 

A neuroscience researcher who confesses to failing to calculate how large a sample size should be used is like a man standing in public with his pants pulled down. Such researchers know that good research practice mandates calculating how large a sample size is needed to show a robust result, and then using at least that sample size. The failure of most experimental neuroscience researchers to follow such a practice is a huge ongoing scandal of modern neuroscience. 

The paper by the Japanese researchers begins by making an unbelievable claim, stating, "Episodic memory is initially encoded in the hippocampus and later transferred to other brain regions for long-term storage." To the contrary, data in the leading scientific paper on this topic indicates that no such thing is true. I refer to the paper "Memory Outcome after Selective Amygdalohippocampectomy: A Study in 140 Patients with Temporal Lobe Epilepsy." That paper gives memory scores for 140 patients who almost all had the hippocampus removed to stop seizures.  Using the term "en bloc" which means "in its entirety" and the term "resected" which means "cut out," the paper states, "The hippocampus and the parahippocampal gyrus were usually resected en bloc."  The paper refers us to another paper  describing the surgeries, and that paper tells us that hippocampectomy (surgical removal of the hippocampus) was performed in almost all of the patients. 

The "Memory Outcome after Selective Amygdalohippocampectomy" paper does not use the word "amnesia" to describe the results. That paper gives memory scores showing only a modest decline in memory performance. The paper states, "Nonverbal memory performance is slightly impaired preoperatively in both groups, with no apparent worsening attributable to surgery." In fact, Table 3 of the paper informs us that a lack of any significant change in memory performance after removal of the hippocampus was far more common than a decline in memory performance, and that a substantial number of the patients improved their memory performance after their hippocampus was removed. 

Consequently there is no scientific warrant for the Japanese researchers' claim that "episodic memory is initially encoded in the hippocampus." We know that humans can typically remember pretty well after removal of the hippocampus. And when scientists make claims about memories being encoded in the brain, they are bluffing. No one has any understanding of how learned information could be encoded into neural states or synapse states. The job of neurally encoding all of the different things people learn would require an army of genes dedicated to such a task, and there is no sign that such genes exist. A neural encoding of memories would leave fingerprints of representation all over the place in the brain, and no such fingerprints exist. The complete lack of any substantial and credible theory of neural memory encoding and the lack of evidence for a neural encoding of memories are two of the biggest reasons for rejecting all claims that memories are stored in brains. Read here for many others. 


Why did the scientists make this strange claim that "episodic memory is initially encoded in the hippocampus and later transferred to other brain regions for long-term storage"? Is it because anyone has seen a memory moving from one part of the brain to another? Not at all. No one has actually observed a memory stored in a brain, nor has anyone seen a memory moving around in the brain. 

The reason why neuroscientists make such a claim is mainly that internally the hippocampus is extremely unstable. In the hippocampus the things called dendritic spines (hypothesized to be related to memory) last for only about 30 days. Almost nowhere in the brain is there more internal instability and dendritic spine turnover. Not wishing to claim that memories are stored permanently in a region of such very high instability, neuroscientists have resorted to telling the ridiculous tall tale that memories first appear in the hippocampus and then magically migrate to the cortex. Such a tale is futile, because the cortex is almost as unstable as the hippocampus. A study found that the half-life of dendritic spines in the cortex is only 120 days. So the tall tale of memories moving from the hippocampus to the cortex is a kind of "out of the fire into the frying pan" story. Humans can reliably remember things for fifty years, so you don't solve the instability problem by imagining that memories migrate to a cortex where dendritic spines and synapses don't last for years. As for the proteins that make up such dendritic spines and synapses, they have average lifetimes of only a few weeks or less. Such numbers indicate that the brain cannot be the storage place of memories that can last for 50 years.
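The instability argument can be put in rough quantitative terms. A back-of-the-envelope sketch, assuming simple exponential turnover (purely for illustration) and using the 120-day cortical half-life figure cited above:

```python
# If cortical dendritic spines turn over with a 120-day half-life
# (the figure cited in the text), what fraction of the original
# spines would survive the 50 years over which humans can reliably
# remember things? Assumes simple exponential decay.

half_life_days = 120
years = 50
days = years * 365

half_lives_elapsed = days / half_life_days        # about 152 half-lives
surviving_fraction = 0.5 ** half_lives_elapsed

print(half_lives_elapsed)     # about 152 half-lives in 50 years
print(surviving_fraction)     # a vanishingly small fraction (about 1e-46)
```

Under these assumptions, essentially none of the original spines would remain after 50 years, which is the point of the instability objection.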