Wednesday, May 29, 2019

Astonishing Accounts of ESP

There are two types of evidence for extrasensory perception (ESP) or telepathy. One type consists of scientific experiments. This evidence is very strong: such experiments have very often produced results far above what would be expected by chance (see here and here, and the table below, for some examples). The other type is anecdotal evidence, consisting simply of the accounts of people who seem to have experienced extrasensory perception.

Louisa Rhine collected thousands of such accounts, some of which can be read in the fascinating book The Gift by psychologist Sally Rhine Feather. Before Louisa Rhine's activity, a world-famous astronomer (Camille Flammarion) compiled many reports he received of extrasensory perception. Let us now look at some of the reports of ESP recounted by Flammarion in his superb book The Unknown. The accounts are taken from Chapter VI of that book, which can be read here.

A Dr. Texte, cited by Flammarion on page 246, reported that a woman he put into a hypnotic trance “followed a conversation during which I expressed myself only mentally,” and that she “answered the questions which I addressed to her in this manner.” In the pages preceding and following this, Flammarion describes quite a few cases of people who seemed to show ESP while under hypnosis.

There seems to have been poor follow-up on this promising lead: the possibility that ESP may be much stronger in people in hypnotic trances. We do know that in “ganzfeld” laboratory experiments involving people deprived of normal sensory inputs, subjects consistently score far above chance, guessing with an accuracy of between 32 percent and 37 percent, much higher than the 25 percent expected by chance. Such a result suggests that altered states may increase ESP.


Ganzfeld results reported here (page 135), expected hit rate of 25%
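To get a sense of how far above chance such ganzfeld hit rates are, below is a minimal sketch of the standard binomial calculation, assuming a hypothetical run of 1,000 sessions (the actual studies pooled varying numbers of sessions, so the session count here is an illustrative assumption):

```python
# A minimal sketch: how surprising is a 32% hit rate when 25% is
# expected by chance? The 1,000-session count is a hypothetical
# illustration, not a figure from the studies cited above.
from scipy.stats import binom

n_sessions = 1000        # hypothetical number of ganzfeld sessions
chance_rate = 0.25       # one target among four choices, so 25% by chance
observed_hits = int(n_sessions * 0.32)   # a 32% hit rate

# Probability of doing at least this well by pure guessing
p_value = binom.sf(observed_hits - 1, n_sessions, chance_rate)
print(f"hits: {observed_hits}/{n_sessions}, p = {p_value:.1e}")   # ~1.6e-07
```

Under these assumptions, even the low end of the reported range would be a millions-to-one fluke if only chance were operating.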

There have also been some experiments suggesting that ESP works more powerfully in dreams. The idea that ESP may be much stronger for people in a hypnotic trance is consistent with such results, and more effort should be made to test it.

On pages 261-262 Flammarion discusses an ESP experiment he performed himself with a person famed as a mind reader. He states the following:

I willed that Ninof should go and take a photograph, which was lying by the side of several others at the end of the salon, and then carry it to a gentleman whom I did not know, and whom I selected simply as being the sixth person seated among thirty spectators. This mental order was executed exactly, and without hesitation.

On page 266 Flammarion quotes a letter from a man talking about his wife. The man stated, “Very often one of us gives verbal expression to an opinion or an idea exactly at the same moment when the other was about to express it in the same terms.” On pages 266-267, 269, 270, and 275 Flammarion quotes five examples of what seems to be a rather common event: someone experiencing a strange feeling of fear or dread at the same moment that a parent, child, or spouse unexpectedly faced danger, injury, or death at a distant location. In several of these cases the person struck by the feeling was compelled to rush home in the middle of the day, even though his plan had been to work or travel away from home.

On page 271 Flammarion quotes a doctor who said that he was suddenly struck by the feeling that a particular one of his patients would soon appear. He went to his window, and a few minutes later exactly that patient arrived. On pages 278-279 Flammarion quotes a remarkable story of a man who suffered a toothache at night. Unable to sleep, he spent hours in bed thinking both of the dentist he must see after sleeping, and of an article he was planning to write about the surgical treatment of stomach cancer. The following day the dentist told the patient that he had repeatedly dreamed of him the previous night, dreaming that he (the dentist) had stomach cancer, and that the patient was going to perform surgery to cure him.

On page 293 Flammarion tells the astonishing tale of a young boy named Ludovic, who seemed to be able to instantly solve almost any problem that was read from a book by his mother. For example, one problem stated the diameter of the earth in kilometers, and the distance (in earth diameters) from the earth to the sun, and then asked: what is the distance from the earth to the sun in leagues? (A league is 5556 meters.) The boy was able to answer instantly and correctly when this question and many equally difficult questions were asked. It was for a while suspected that the boy was some super-genius, but after a while it became rather clear that the boy was reading the mind of his mother, who had before her in print the answers to all the hard questions he was being asked. The story reminds us of the two cases described here, in which modern children seemed to be able to read the mind of one of their parents.
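As a check on the kind of arithmetic such a problem requires, here is a minimal sketch using modern approximate values; the book's exact figures are not given in the text, so these numbers are assumptions for illustration only:

```python
# Worked version of a Ludovic-style problem, using modern approximate
# values (assumptions for illustration; the book's figures may differ).
earth_diameter_km = 12_742            # mean diameter of the earth
sun_distance_in_diameters = 11_740    # earth-sun distance in earth diameters
league_km = 5.556                     # 1 league = 5556 meters, as in the text

distance_km = earth_diameter_km * sun_distance_in_diameters
distance_leagues = distance_km / league_km
print(f"{distance_leagues:,.0f} leagues")   # about 26,900,000 leagues
```

Doing such a multi-step conversion instantly, in one's head, is the sort of feat that made observers suspect the boy was a prodigy before the mind-reading explanation emerged.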


A page from Flammarion's book, giving the results of ESP tests

Another book containing accounts of ESP is Phantasms of the Living by Gurney, Myers and Podmore. The book mainly consists of accounts of apparition sightings, but it also discusses many examples of ESP. For example, on page 246 of Volume I, we hear an account by Charles Curtis, who reported that a young girl suddenly said, "Davie's drowned." It was soon learned that Davie had indeed drowned at about the same time, at a location 40 miles away. On page 259 there is a similar account of someone who declared that a certain person had died, with the news soon arriving that exactly that person had died far away at the same time. On page 260 we have another case, of a woman who stated that her brother was drowning far away, only to receive word that same day that the brother had drowned at about the same time she made the unusual claim.

I have several times experienced incidents very strongly suggestive of ESP. When I was young I once did a test with my sister, in which one person would think of an object somewhere in the house, and the other person would try to guess that object. The guesser could only ask questions with a "yes" or "no" answer, and as soon as there was a single "no" answer a round was considered a failure. Including the basement, the house had four floors. There were at least ten consecutive successful rounds in which all the answers were "yes," with the correct object being guessed. This involved roughly 50 or 60 consecutive questions in which every single question was answered "yes." After each round the guesser was switched, so it couldn't have just been a case of my sister always saying "yes." The odds of something like this occurring by chance are less than 1 in a quintillion. After we were scolded by an older sister for being enthusiastic about the result, we never retried the experiment.
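Here is a minimal sketch of the naive calculation behind that figure, assuming each answer independently had a 50 percent chance of being "yes" (real question-asking is not a perfect coin flip, so this is only a rough model):

```python
# Naive odds of 60 consecutive "yes" answers, modeling each answer
# as an independent 50/50 event (a rough simplifying assumption).
p_yes = 0.5
n_questions = 60
p_all_yes = p_yes ** n_questions
print(f"1 in {1 / p_all_yes:,.0f}")
# 1 in 1,152,921,504,606,846,976 -- roughly 1 in 1.15 quintillion
```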

In another case my daughter was leaving to go outside, and I thought of saying to her, "see you later, alligator," a phrase I cannot recall ever having spoken to her before. But I decided not to say that, thinking she might be offended by being called an alligator. So I simply said, "see you later" in a normal voice. My daughter immediately replied by saying, "see you later, alligator," an expression I had never heard her use before.

Several years ago I was at the Queens Zoo in New York City with my two daughters when they were teenagers. We were looking at a puma, which we could see distantly, far behind a plastic barrier. Suddenly (oddly enough) I had a recollection of a zoo visit about ten years earlier, when I saw a gorilla just behind a plastic barrier at the zoo at Busch Gardens in Florida. About three seconds later (before I said anything), my younger daughter said, “Do you remember that gorilla we saw close-up in Busch Gardens?” I was flabbergasted. It was as if telepathy were going on. The incident seems all the more amazing when you consider that teenagers live very much in the present or the near future, and virtually never talk about not-very-dramatic things that happened 10 years ago. There was nothing in our field of view that might have caused both of us to have that recollection at the same time. On that day's visit to the Queens Zoo we hadn't seen a gorilla, nor had we seen any animal near a plastic barrier.

Speaking of telepathy or ESP, Flammarion states on page 304, “The action of one human being on another, from a distance, is a scientific fact; it is as certain as the existence of Paris, of Napoleon, of oxygen or of Sirius.” On page 309 he emphatically ends his chapter (which gave many examples of evidence for ESP) by saying this: What is certain is: THAT TELEPATHY CAN AND OUGHT TO BE HENCEFORTH CONSIDERED BY SCIENCE AS AN INCONTESTABLE REALITY.

Postscript: Below is an astonishing case told by Flammarion on pages 11-12 of his book Death and Its Mystery:

On June 27, 1894, at about nine o'clock in the morning, Dr. Gallet, then a student of medicine in Lyons, was studying in his room, in company with a fellow-student, Dr. Varay, for the first examination for the degree of doctor, and was very much absorbed in his work, when he was irresistibly distracted from it by a sentence that obsessed him, the repetition, in his inner consciousness, of the words, "Monsieur Casimir Perier was elected President of the republic by four hundred and fifty-one votes." The student wrote this sentence upon a sheet of paper which he handed to his companion, complaining of the obsession. Varay read it, shrugged his shoulders, and when his friend insisted that he believed it to be a real premonition, asked him, harshly enough, to let him work undisturbed...That day the election was held at Versailles, at two o'clock. Presently, while the students from Lyons were refreshing themselves upon the terrace of a cafe, newsboys passed, and shouted: "Monsieur Casimir Perier elected President of the republic by four hundred and fifty-one votes!"

Saturday, May 25, 2019

Where Adams Goes Wrong on Cosmic Fine-tuning

This year the scientist Fred C. Adams published on the physics preprint server a massive paper on the topic of cosmic fine-tuning (a topic I have often discussed on this blog). The paper of more than 200 pages (entitled "The Degree of Fine Tuning in our Universe -- and Others") describes many cases in which our existence depends on some number in nature having (against all odds) a value allowing the universe to be compatible with the existence of life. There are several important ways in which the paper goes wrong or creates inappropriate impressions. I list them below.

Problem #1: Using habitability as the criterion for judging fine-tuning, rather than “something as good as what we have.”

Part of the case for cosmic fine-tuning involves the existence of stars. It turns out that an improbable set of coincidences has to occur for a universe to have any stars at all, and a far more improbable set of coincidences has to occur for a universe to have very stable, bright, long-lived stars like our sun.

Adams repeatedly attempts to convince readers that the universe could have had fundamental constants different from what we have, and that the universe would still have been habitable because some type of star might have existed. When reasoning like this, he is using an inappropriate rule-of-thumb for judging fine-tuning. Below are two possible rules-of-thumb when judging fine-tuning in a universe:

Rule #1: Consider how unlikely it would be that a random universe would have conditions as suitable for the appearance and long-term survival of life as we have in our universe.

Rule #2: Consider only how unlikely it would be that a random universe would have conditions allowing some type of life.

Rule #1 is the appropriate rule to use when considering the issue of cosmic fine-tuning. But Adams seems to be operating under a rule-of-thumb such as Rule #2. For example, he tries to show that a universe significantly different from ours might have allowed red dwarf stars to exist. But such a possibility is irrelevant. The relevant consideration is: how unlikely is it that we would have ended up with a situation as fortunate as the one in our universe, in which stars like the sun (more suitable for supporting life than red-dwarf stars) exist? Since "red dwarfs are far more variable and violent than their more stable, larger cousins" such as sun-like stars (according to this source), we should be considering the fine-tuning needed to get stars like our sun, not just any type of star such as a red dwarf.

I can give an analogy. Imagine I saw a log cabin in the woods. If I am judging whether this structure is the result of chance or design, I should be considering how unlikely it would be that something as good as this might arise by chance. You could make all kinds of arguments trying to show that a log structure much worse than the one observed would not be too unlikely (for example, arguing that a few falling trees could make a primitive rain shelter). But such arguments are irrelevant. The relevant thing to consider is: how unlikely would it be that a structure as good as the one observed would appear by chance? Similarly, living in a universe that allows an opportunity for humans to continue living on this planet for billions of years with stable solar radiation, the relevant consideration is: how unlikely is it that a universe as physically fortunate as ours would exist by chance? Discussions about how microbes might exist in a very different universe (or intelligent creatures living precariously near unstable suns) are irrelevant, because such universes do not involve physical conditions as good as the ones we have.

Problem #2: Charts that create the wrong impression because of a “camera near the needle hole” and a logarithmic scale.

On page 29 of the paper Adams gives us a chart showing some fine-tuning needed for the ratio between what is called the fine-structure constant and the strong nuclear force. We see the following diagram, using a logarithmic scale that exaggerates the relative size of the shaded region. If the ratio had been outside the shaded region, stars could not have existed in our universe.


This doesn't seem like that lucky a coincidence, until you consider that creating a chart like this is like trying to make a needle hole look big by putting your camera right next to the needle hole. We know of no theoretical reason why the ratio described in this chart could not have been anywhere between .000000000000000000000000000000000000000000001 and 1,000,000,000,000,000,000,000,000,000,000,000. So by using such a narrow scale, the chart gives us the wrong idea. In a less misleading chart that used a vastly bigger overall scale, we would see this shaded region as merely a tiny point, occupying less than a millionth of the total area of the chart. Then we would realize that a fantastically improbable coincidence is required for nature to have threaded this needle hole. Adams also uses a logarithmic scale for Figure 5, to make another such tiny "needle hole" (which must be threaded for life to exist) look relatively big.


Needle holes can look big when your eye is right next to them
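Here is a minimal sketch of the scale point made above, assuming for illustration a life-permitting window one order of magnitude wide sitting inside the huge range just described (the window width is an assumption; the actual shaded region in Adams' figure differs):

```python
# How big the same life-permitting window looks on a linear axis
# versus a logarithmic axis. The window (0.3x to 3x some actual value)
# is an assumed illustrative width, not taken from Adams' paper.
import math

lo, hi = 1e-45, 1e33          # full range the ratio might in principle have taken
win_lo, win_hi = 0.3, 3.0     # hypothetical one-decade life-permitting window

linear_fraction = (win_hi - win_lo) / (hi - lo)
log_fraction = (math.log10(win_hi) - math.log10(win_lo)) / (math.log10(hi) - math.log10(lo))

print(f"share of a linear axis: {linear_fraction:.1e}")   # ~2.7e-33
print(f"share of a log axis:    {log_fraction:.3f}")      # ~0.013
```

On a logarithmic chart the window fills about a hundredth of the axis; on a linear chart covering the same range, it would be an invisibly small point.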

Problem #3: An under-estimation of the strong force sensitivity.

On page 30 of the paper, Adams argues that the strong nuclear force (the strong coupling constant) isn't terribly fine-tuned, and might vary by as much as a factor of 1000 without preventing life. He refuses to accept what quite a few scientists have pointed out: that it would be very damaging to the habitability of the universe if the strong nuclear force were only a few percent stronger. Quite a few scientists have pointed out that in such a case the diproton (a nucleus consisting of two protons and no neutrons) would be stable, and that would drastically affect the nature of stars. Adams' attempt to dismiss such reasoning falls flat. He claims on page 31 that if the diproton existed, it would cause only a “modest decrease in the operating temperatures of stellar cores," but then tells us that this would be a fifteen-fold change in such temperatures (from about 15 million degrees to about one million), which is hardly modest.

Adams ignores the fact that small changes in the strong nuclear force would probably rule out the lucky carbon resonances that are necessary for large amounts of carbon to exist. Also, Adams ignores the consideration that if the strong nuclear force had been much stronger, the early universe's hydrogen would have been converted into helium, leaving no hydrogen to eventually allow the existence of water. In this paper, two physicists state, “we show that provided the increase in strong force coupling constant is less than about 50% substantial amounts of hydrogen remain.” What that suggests is that if the strong force had been more than 50% greater, the universe's habitability would have been greatly damaged, and life probably would have been impossible. That means the strong nuclear force is 1000 times more sensitive and fine-tuned than Adams has estimated.

Problem #4: An under-estimation of the fine structure constant's sensitivity and sensitivity of the quark mass.

On page 140 of the paper, Adams suggests that the fine structure constant (related to the strength of the electromagnetic force) isn't terribly fine-tuned, and might vary by as much as a factor of 10,000 without preventing life. His previous discussion of the sensitivity of the fine structure constant involved merely a discussion of how a change in the constant would affect the origin of elements in the Big Bang. But there are other reasons for thinking that the fine structure constant is very fine-tuned, reasons Adams hasn't paid attention to. A stellar process called the triple-alpha process is necessary for large amounts of both carbon and oxygen to be formed in the universe. In their paper “Viability of Carbon-Based Life as a Function of the Light Quark Mass,” Epelbaum and others state that the “formation of carbon and oxygen in our Universe would survive a change” of about 2% in the quark mass or about 2% in the fine-structure constant, but that “beyond such relatively small changes, the anthropic principle appears necessary at this time to explain the observed reaction rate of the triple-alpha process.” This is a sensitivity more than 10,000 times greater than Adams estimates, a case of what we may call very precise fine-tuning. On page 140, Adams gives estimates of the biological sensitivity of the quark masses, but they ignore the consideration just mentioned, and under-estimate the sensitivity of these parameters.

Problem #5: The misleading table that tries to make radio fine-tuning seem more precise than examples of cosmic fine-tuning.

On page 140 Adams gives us the table below:



The range listed in the third column represents what Adams thinks is the maximum multiplier that could be applied to these parameters without ruling out life in our universe. One problem is that some of the ranges listed are way too large, first because Adams is frequently being over-generous in estimating by how much such things could vary without worsening our universe's physical habitability (for reasons I previously discussed), and second because Adams is using the wrong rule for judging fine-tuning, considering “universes that allow life” when he should be considering “universes as habitable and physically favorable as ours.”

Another problem is that the arrangement of the table suggests that the parameters discussed are much less fine-tuned than a radio that is set to just the right radio station, but most of the fundamental constants in the table are actually far more fine-tuned than such a radio. To clarify this matter, we must consider the matter of possibility spaces in these cases. A possibility space is the range of possible values that a parameter might have. One example of a possibility space is the possible ages of humans, which is between 0 and about 120. For an AM radio the possibility space is between 535 and 1605 kilohertz.

What are the possibility spaces for the fundamental constants? For the constants involving one of the four fundamental forces (the gravitational constant, the fine-structure constant, the weak coupling constant and the strong coupling constant), we know that the four fundamental forces differ by about 40 orders of magnitude in their strength. The strong nuclear force is about 10,000,000,000,000,000,000,000,000,000,000,000,000,000 times stronger than the gravitational force. So a reasonable estimate of the possibility space for each of these constants is to assume that any one of them might have had a value up to 10^40 times larger or smaller than its actual value.

So the possibility space involving the four fundamental coupling constants is something like 1,000,000,000,000,000,000,000,000,000,000,000,000 times larger than the possibility space involving an AM radio. So, for example, even if the strong coupling constant could have varied by a factor of 1000 (as Adams claims) and still have allowed for life, for it to have such a value would be a case of fine-tuning more than 1,000,000,000,000 times greater than an AM radio randomly set on just the right frequency. For the range of values between .001 and 1000 times the actual value of the strong nuclear force is just the tiniest fraction of a possibility space in which the strong nuclear force can vary by a factor of 10^40. It's the same situation for the gravitational constant and the fine-structure constant (involving the electromagnetic force). Even if we go by Adams' severe under-estimations of the biological sensitivity of these constants, this is still fine-tuning trillions of times more unlikely to occur by chance than a radio being set on just the right station by chance, because of the gigantic possibility space in which fundamental forces might vary by 40 orders of magnitude.
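A minimal sketch of that comparison, granting Adams' factor-of-1000 window for the strong coupling constant and assuming a 10-kilohertz AM channel width (the channel width is an illustrative assumption):

```python
# Compare the "lucky fraction" for a randomly tuned AM radio with the
# lucky fraction for a coupling constant, granting Adams' generous
# factor-of-1000 life-permitting window. The 10 kHz channel width is
# an assumed illustrative value.
am_band_khz = 1605 - 535      # the AM band: 535-1605 kHz
am_channel_khz = 10           # assumed width of one station's channel
radio_fraction = am_channel_khz / am_band_khz

constant_space = 1e40         # coupling constants span ~40 orders of magnitude
life_window = 1e3             # Adams' claimed tolerable variation (factor of 1000)
constant_fraction = life_window / constant_space

print(f"radio lucky fraction:    {radio_fraction:.4f}")     # ~0.0093
print(f"constant lucky fraction: {constant_fraction:.0e}")  # 1e-37
print(f"how much luckier the constant must be: {radio_fraction / constant_fraction:.1e}")  # ~9e+34
```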

A similar situation exists in regard to what Adams calls on page 140 the vacuum energy scale. This refers to the density of energy in ordinary outer space, such as interstellar space. This density is believed to be extremely small but nonzero. Adams estimates that it could have been 10 orders of magnitude larger without preventing our universe's habitability. But physicists know of very strong reasons for thinking that this density should actually be 10^60 or 10^120 times greater than it is (it has to do with all of the virtual particles that quantum field theory says a vacuum should be packed with). So for the value of the vacuum energy density to be as low as it is would seem to require a coincidence with a likelihood of less than 1 in 10^50. Similarly, if a random number generator is programmed to pick a random number between 1 and 10^60, with an equal probability of any number in that range being chosen, there is only a microscopic chance (about 1 in 10^50) of the number being 10^10 or smaller.

Adams' table has been cleverly arranged to give us the impression that the fundamental constants are less fine-tuned than a radio set on the right station, but the opposite is true. The main fundamental constants are trillions of times more fine-tuned than a radio set on the right station. The reason is partially because the possibility space involving such constants is more than a billion quadrillion times larger than the small possibility space involving what station a radio might be tuned to.

Problem #6: Omitting the best cases from his summary table.

Another huge shortcoming of Adams' paper is that he has omitted some of the biggest cases of cosmic fine-tuning from his summary table on page 140. One of the biggest cases of fine-tuning involves the universe's initial expansion rate. Scientists say that at the very beginning of the Big Bang, the universe's expansion rate was fine-tuned to more than 1 part in 10^50, so that the universe's density was very precisely equal to what is called the critical density. If the expansion rate had not been so precisely fine-tuned, galaxies would never have formed. Adams admits this on pages 40-41 of his paper. There is an unproven theory designed to explain away this fine-tuning, in the sense of imagining some other circumstances that might have explained it. But regardless of that, such a case of fine-tuning should be included in any summary table listing the universe's fine-tuning (particularly since the theory designed to explain away the fine-tuning of the universe's expansion rate, called the cosmic inflation theory, is a theory that has many fine-tuning requirements of its own, and would not actually reduce the universe's overall fine-tuning even if it were true). So why do we not see this case in Adams' summary table entitled "Range of Parameter Values for Habitable Universe"?

Adams' summary table also makes no mention of the fine-tuning involving the Higgs mass or the Higgs boson, what is called "the hierarchy problem." This is a case of fine-tuning that so bothered particle physicists that many of them spent decades creating speculative theories such as supersymmetry designed to explain away this fine-tuning, which they sometimes said was so precise it was like a pencil balanced on its head.  Referring to this matter, this paper says, "in order to get the required low Higgs mass, the bare mass must be fine-tuned to dozens of significant places." This is clearly one of the biggest cases of cosmic fine-tuning, but Adams has conveniently omitted it from his summary table. 

Then there is the case of the universe's initial entropy, another case of very precise fine-tuning that Adams has also ignored in his summary table. Cosmologists such as Roger Penrose have stated that for the universe to have the relatively low entropy it now has, the entropy at the time of the Big Bang must have been fantastically small, completely at odds with what we would expect by chance. Only universes starting out in an incredibly low entropy state can end up forming galaxies and yielding life. As I discuss here, in a recent book Penrose suggested that the initial entropy conditions were so improbable that it would be more likely that the Earth and all of its organisms would have suddenly formed from a chance collision of particles from outer space. This gigantic case of very precise cosmic fine-tuning is not mentioned in Adams' summary table.

Then there is the case of the most precise fine-tuning known locally in nature, the precise equality of the absolute value of the proton charge and the absolute value of the electron charge. Each proton in the universe has a mass 1836 times greater than the mass of each electron. From this fact, you might guess that the electric charge on each proton is much greater than the electric charge on each electron. But instead the absolute value of the electric charge on each proton is very precisely the same as the absolute value of the electric charge on each electron (absolute value means the value not considering the sign, which is positive for protons and negative for electrons). A scientific experimental study determined that the absolute value of the proton charge differs by less than one part in 1,000,000,000,000,000,000 from the absolute value of the electron charge.

This is a coincidence we would expect to find in fewer than 1 in 1,000,000,000,000,000,000 random universes, and it is a case of extremely precise fine-tuning that is absolutely necessary for our existence.  Since the electromagnetic force (one of the four fundamental forces) is roughly 10 to the thirty-seventh power times stronger than the force of gravity that holds planets and stars together, a very slight difference between the absolute value of the proton charge and the absolute value of the electron charge would create an electrical imbalance that would prevent stars and planets from holding together by gravity (as discussed here).  Similarly, a slight difference between the absolute value of the proton charge and the absolute value of the electron charge would prevent organic chemistry in a hundred different ways. Why has Adams failed to mention in his summary table (or anywhere in his paper) so precise a case of biologically necessary fine-tuning? 
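A back-of-the-envelope sketch of why even a tiny charge imbalance matters: for two hypothetical 1-kilogram lumps of hydrogen, we can ask how large a proton/electron charge mismatch would let electrostatic repulsion rival their gravitational attraction. This is a standard order-of-magnitude exercise; the setup is illustrative and is not taken from Adams' paper or the cited study:

```python
# Order-of-magnitude check: the fractional proton/electron charge
# mismatch at which electrostatic repulsion between two 1 kg lumps
# of hydrogen would equal their gravitational attraction.
k = 8.99e9        # Coulomb constant, N*m^2/C^2
G = 6.674e-11     # gravitational constant, N*m^2/kg^2
e = 1.602e-19     # elementary charge, C
m_H = 1.67e-27    # mass of a hydrogen atom, kg

mass = 1.0                    # kg of hydrogen in each lump
n_atoms = mass / m_H          # about 6e26 atoms per lump
# With a fractional mismatch eps, each lump carries net charge
# Q = n_atoms * eps * e. Setting k*Q^2 = G*mass^2 (the separation r
# cancels on both sides) and solving for eps:
eps = (mass * (G / k) ** 0.5) / (n_atoms * e)
print(f"break-even imbalance: {eps:.1e}")   # ~9e-19, i.e. ~1 part in 10^18
```

The break-even point lands right around one part in 10^18, which is why a mismatch much larger than the experimental limit quoted above would let electrical forces overwhelm gravity.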

Clearly, Adams has left out from his summary table most of the best cases of cosmic fine-tuning. His table is like some table entitled "Famous Yankee Hitters" designed to make us think that the New York Yankees haven't had very good hitters, a table that conveniently omits the cases of Babe Ruth, Lou Gehrig, Joe DiMaggio, and Mickey Mantle.

Below is a table that will serve as a corrective for Adams' misleading table.  I will list some fundamental constants or parameters in the first column. The second column gives a rough idea of the size of the possibility space regarding the particular item in the first column. The third column tells us whether the constant, parameter or situation is more unlikely than 1 chance in 10,000 to be in the right range, purely by chance. The fourth column tells us whether the constant, parameter or situation is more unlikely than 1 chance in a billion to be in the right range, purely by chance.   For the cosmic parameters "in the right range" means "as suitable for long-lasting intelligent life as the item is in our universe." The "Yes" answers follow from various sensitivity estimates in this post, in Adams' paper, and in the existing literature on this topic (which includes these items).  For simplicity I'll skip several items of cosmic fine tuning such as those involving quark masses and the electron/proton mass ratio. 


Parameter, Constant, or Situation | Size of Possibility Space | More Unlikely Than 1 in 10,000 for Item to Be in Right Range, by Chance? | More Unlikely Than 1 in 1,000,000,000 for Item to Be in Right Range, by Chance?
Strong nuclear coupling constant | 10^40 (difference between weakest fundamental force and strongest one) | Yes | Yes
Gravitational coupling constant | 10^40 (difference between weakest fundamental force and strongest one) | Yes | Yes
Electromagnetic coupling constant | 10^40 (difference between weakest fundamental force and strongest one) | Yes | Yes
Ratio of absolute value of proton charge to absolute value of electron charge | .000000001 to 1,000,000,000 | Yes | Yes
Ratio of universe's initial density to critical density (related to initial expansion rate) | 1/10^40 to 10^40 | Yes | Yes
Initial cosmic entropy level | Between 0 and some incredibly large number | Yes | Yes
Vacuum energy density | Between 0 and 10^60 or 10^120 times its current value, as suggested by quantum field theory | Yes | Yes
AM radio tuned to your favorite AM station | 535-1605 kilohertz | No | No
FM radio tuned to your favorite FM station | 88-108 megahertz | No | No

The truth is that each of these cases of cosmic fine-tuning is trillions of times more unlikely to have occurred by chance than a random radio being exactly tuned to your favorite AM station or FM station. To imagine the overall likelihood of all of these cases of fine-tuning happening accidentally, we might imagine a blindfolded archer standing at the center of a 180-meter circle who successfully hits more than 7 archery targets randomly positioned around the circle's circumference.


Postscript: In June 2019 some scientists published a paper entitled "An update on fine-tunings in the triple alpha process." They end by stating "a relatively small ~0.5% shift" in the light quark mass "would eliminate carbon-oxygen based life from the universe." This is a case of very precise fine-tuning, contrary to the claims of Adams, who tries to make it look like a parameter that could have differed by three orders of magnitude (a factor of about 1000 times).  The scientists also state that "such life could possibly persist up to ~7.5% shifts" in the electromagnetic coupling constant (equivalent to the fine structure constant).   This is also a case of precise fine-tuning, contrary to the claims of Adams, who tries to make the fine structure constant look like a parameter that could have differed by four orders of magnitude (a factor of about 10,000 times). 

In his table on page 140 Adams has told us that the electron/proton mass ratio isn't very sensitive to changes. But in his book The Particle at the End of the Universe, pages 145 to 146, physicist Sean Carroll states the following:

 “The size of atoms...is determined by...the mass of the electron. If that mass were less, atoms would be a lot larger. .. If the mass of the electron changed just a little bit, we would have things like 'molecules' and 'chemistry', but the specific rules that we know in the real world would change in important ways...Complicated molecules like DNA or proteins or living cells would be messed up beyond repair. To bring it home: Change the mass of the electron just a little bit, and all life would instantly end.”  

Tuesday, May 21, 2019

Wishful Thinking Often Drives Science Activity

A recent article in The Atlantic is entitled "A Waste of 1000 Research Papers." The article is about how scientists wrote a thousand research papers trying to suggest that genes such as SLC6A4 were a partial cause of depression, one of the leading mental afflictions.  The article tells us, "But a new study—the biggest and most comprehensive of its kind yet—shows that this seemingly sturdy mountain of research is actually a house of cards, built on nonexistent foundations." 

Using data from large groups of volunteers -- between 62,000 and 443,000 people -- the scientific study attempted to determine whether the genes previously linked to depression (such as SLC6A4 and 5-HTTLPR) were any more common in people who had depression. "We didn't find a smidge of evidence," says Matthew Keller, the scientist who led the study. "How on Earth could we have spent 20 years and hundreds of millions of dollars studying pure noise?" asks Keller, suggesting that hundreds of millions of dollars had been spent trying to show a genetic correlation (between genes such as SLC6A4 and depression) that didn't actually exist.

The article refers us to a blog post on the Keller study, one that comments on how scientists had tried to build up the gene 5-HTTLPR as a causal factor of depression. Below is a quote from the blog post by Scott Alexander:

"What bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We 'figured out' how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot."

How is it that so many scientists came up with an answer so wrong? One reason is that they used sample sizes that were too small. The article in The Atlantic explains it like this:

"When geneticists finally gained the power to cost-efficiently analyze entire genomes, they realized that most disorders and diseases are influenced by thousands of genes, each of which has a tiny effect. To reliably detect these miniscule effects, you need to compare hundreds of thousands of volunteers. By contrast, the candidate-gene studies of the 2000s looked at an average of 345 people! They couldn’t possibly have found effects as large as they did, using samples as small as they had. Those results must have been flukes—mirages produced by a lack of statistical power. "

Is this type of problem limited to the study of genes? Not at all. The "lack of statistical power" problem (pretty much the same as the "too small sample sizes" problem) is rampant and epidemic in modern neuroscience. Today's neuroscientists very frequently produce studies with way-too-low statistical power, studies in which there is a very high chance of a false alarm, because too-small sample sizes were used. 

It is well known that at least 15 animals per study group should be used to get a moderately convincing result. But very often neuroscience studies will use only about 8 animals per study group. If you use only 8 animals per study group, there's a very high chance you'll get a false alarm, in which the result is due merely to chance variations rather than a real effect in nature.  In fact, in her post "Why Most Published Neuroscience Studies Are False," neuroscientist Kelly Zalocusky suggests that neuroscientists really should be using 31 animals per study group to get a not-very-strong statistical power of .5, and 60 animals per study group to get a fairly strong statistical power of .8.  
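Below is a minimal sketch that roughly reproduces those power figures, assuming a "medium" effect size (Cohen's d = 0.5) and a two-sided alpha of 0.05; the effect size is an assumption, and Zalocusky's exact inputs may differ:

```python
# Statistical power of a two-sample t-test at several group sizes,
# assuming a medium effect size (Cohen's d = 0.5) and alpha = 0.05.
# Illustrative assumptions that roughly match the figures quoted above.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (8, 31, 60):
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"{n:2d} animals per group -> power = {power:.2f}")
# 8 -> ~0.14, 31 -> ~0.49, 60 -> ~0.77
```

With only 8 animals per group, a real medium-sized effect would be detected less than one time in six, and a large share of "positive" findings at that group size will be false alarms.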


This is the same “too small sample size” problem (discussed here) that plagues very many or most neuroscience experiments involving animals. Neuroscientists have known about this problem for many years, but year after year they continue in their errant ways, foisting upon the public too-small-sample-size studies with low statistical power that don't prove anything because of a high chance of false alarms.  Such studies may claim to provide evidence that brains are producing thinking or storing memories, but the evidence is "junk science" stuff that does not stand up to critical scrutiny. 

So part of the explanation for why the "depression gene" scientists were so wrong is that they used too-small sample sizes, producing results with way-too-low statistical power. But there's another big reason why they were so wrong: their research activity was driven by wishful thinking.  Showing that genes drive behavior or mental states has always been one of the major items on the wish list of scientists who favor reductionist materialism.  

If you're an orthodox Darwinist, you're pretty much locked in to the idea that genes control everything. Darwinists believe that a progression from ape-like ancestors to humans occurred solely because of changes in DNA caused by random mutations. So if you think that some difference in DNA is the sole explanation for the difference between humans and apes, you've pretty much boxed yourself into the silly idea that every difference between a human and an ape boils down to some gene difference or DNA difference. I call the idea silly because the genes that make up DNA basically specify proteins, and no one has a coherent idea of how a protein could cause a mental state such as sadness, imagination, spirituality or curiosity, nor a coherent idea of how a three-dimensional body plan could ever be stored in DNA (which is written in a "bare bones" language limited to very low-level chemical information, such as the amino acids that make up a polypeptide chain, the mere beginning of a three-dimensional protein molecule).

So the scientists wanted very much to believe there were "genes for depression," and genes for almost every other human mental characteristic; and they let their belief desires drive their research activities. 

Does this happen rarely in the world of science? No, it happens all the time. Very much of modern scientific activity is driven by wishful thinking. It is easy to come up with examples that remind us of this "depression genes" misadventure:

  • Countless “low statistical power” neuroscience papers have been written trying to suggest that LTP has something to do with memory, an idea which makes no sense given that LTP is an artificially produced effect produced by high-frequency stimulation, and given that LTP typically lasts only hours or days, in contrast to human memories that can persist for 50 years.
  • Countless “low statistical power” neuroscience papers have been written trying to suggest that synapses or dendritic spines are involved in memory storage, an idea which makes no sense given that synapses and dendritic spines are made up of proteins with lifetimes of only a few weeks or less, and that neither synapses nor dendritic spines last for even a tenth of the time that humans can remember things (50 years or more).
  • Countless “low statistical power” brain imaging studies have tried to show neural correlates of thinking or recall, but they typically show that such activities do not cause more than a 1% change in signal strength, consistent with random variations that we would see even if brains do not produce thinking or memory recall.
  • Having a desire to reconcile the mutually incompatible theories of quantum mechanics and general relativity, physicists wrote thousands of papers filled with ornate speculations about something called string theory, a theory for which no evidence has ever been produced.
  • Faced with biological organisms having functionally complex systems far more complex than any found in the most complex human inventions, and wishing fervently to avoid any hypothesis of design, scientists engaged in countless speculations about how such innovations might have been produced by random mutations of genomes, ignoring not merely the mathematical improbability of such "miracle of luck" events happening so many times, but also the fact that genomes do not specify body plans (contrary to the "DNA is a blueprint for your body" myth advanced to support such speculations).  
  • Faced with an undesired case of very strong fine-tuning involving the Higgs boson or Higgs field, scientists wrote more than 1000 papers speculating about a theory called supersymmetry which tries to explain away this fine-tuning; but the theory has failed all experimental tests at the Large Hadron Collider.  Many of the same scientists want many billions more for the construction of a new and more powerful particle collider, so that their fervent wishes on this matter can be fulfilled. 
  • Faced with an undesired result that the universe's expansion rate at the time of the Big Bang was apparently fine-tuned to more than 1 part in 1,000,000,000,000,000,000,000, scientists wrote more than a thousand speculative “cosmic inflation” cosmology papers trying to explain away this thing they didn't want to believe in, by imagining a never-observed instant in which the universe expanded at an exponential rate (with the speculations often veering into multiverse speculations about other unseen universes). 
  • According to this paper, scientists have written some 15,000 papers on the topic of "neural coding," something that scientists want to believe in because they want to believe the brain is like a computer (which uses coding).  But there is not any real evidence that the brain uses any type of coding other than the genetic code used by all cells, and no one has been able to discover any neural code used for either transmitting information in the brain or storing information in the brain.  Referring to spikes in brain electricity, this paper concludes "the view that spikes are messages is generally not tenable," contrary to the speculations of thousands of neuroscience papers. 
  • Scientists have had no luck in trying to create a living thing in experiments simulating the early Earth, and have failed to create even a single protein molecule in such experiments. But because scientists really, really wish to find extraterrestrial life somewhere (to prove to themselves that the origin of life is easy to naturally occur), scientists want billions of dollars for a long-shot ice-drilling space mission looking for life on Jupiter's moon Europa.
  • When the Large Hadron Collider produced some results that might have been preliminary signs of some new particle, physicists wrote more than 500 papers speculating about the interesting new particle they wanted to believe in, only to find the new particle was a false alarm. 
  • Scientists ran many very expensive experiments attempting to look for dark matter, something that has never been observed, but which scientists devoutly hoped to find, largely to prove to themselves that their knowledge of large-scale cosmic structure is not so small. 

Quite a few of these cases seem to be "imaginary edifices" in which the main controlling factor is what a scientist wants to believe or does not want to believe. 


NASA visual of a long-shot mission to look for life on Europa

The article in The Atlantic states the following, painting a portrait of a science world with very serious problems: 

“ 'We’re told that science self-corrects, but what the candidate gene literature demonstrates is that it often self-corrects very slowly, and very wastefully, even when the writing has been on the wall for a very long time,' Munafo adds. Many fields of science, from psychology to cancer biology, have been dealing with similar problems: Entire lines of research may be based on faulty results. The reasons for this so-called “reproducibility crisis” are manifold. Sometimes, researchers futz with their data until they get something interesting, or retrofit their questions to match their answers. Other times, they selectively publish positive results while sweeping negative ones under the rug, creating a false impression of building evidence."

In the world of science academia, bad ideas arise and often hang around way too long, even after the evidence has turned against them, because what is really driving things is what scientists want to believe or do not want to believe. Wishful thinking is in the driver's seat, and the result is quite a few houses of cards, quite a few castles in the air, quite a few imaginary edifices that are often sold as "science knowledge."

Friday, May 17, 2019

Big Brain Holes Didn't Seem to Disrupt Their Minds

The claim that the human mind is produced by the human brain has always been a speech custom of scientists, rather than an idea that has been established by observations. No one has any idea of how neurons might produce human mental phenomena such as abstract thinking and imagination. Contrary to the predictions of the idea that brains make minds, there are a huge number of case histories showing that human minds suffer surprisingly little damage when massive brain injury or loss of brain tissue occurs. I have published three long posts (here, here, and here) citing many such cases, including cases of epilepsy patients who had little loss of intelligence or memory after they lost half of their brains in a hemispherectomy operation to stop seizures, and patients who had above-average or near-average intelligence despite loss of most of their brains. I will now cite some additional cases of minds little affected by huge brain damage, cases I have not mentioned before.

The cases I will discuss are mainly referred to as abscesses. An abscess is an area of the brain that has experienced necrosis (cell death) because of infection or injury. A medical source refers to an abscess as “an area of necrosis,” and another medical source defines necrosis as “the death of body tissue.” If you do a Google image search for “abscess,” you will see that a brain abscess generally appears as a dark patch in a brain scan. It is roughly correct to refer to an abscess as a brain hole, although the hole is a filled hole, filled mainly with pus, dead cells and fluid. An image of an abscess is below.


The two cases in the quoted paragraph below are reported on page 78 of the book From the Unconscious to the Conscious by physician Gustave Geley. You can read the book here. Astonishingly, Geley refers in the first sentence to a man who lived a year “without any mental disturbance” despite a great big brain abscess that left him with “a brain reduced to pulp”:

"M. Edmond Perrier brought before the French Academy of Sciences at the session of December 22nd, 1913, the case observed by Dr R. Robinson; of a man who lived a year, nearly without pain, and without any mental disturbance, with a brain reduced to pulp by a huge purulent abscess. In July, 1914, Dr Hallopeau reported to the Surgical Society an operation at the Necker Hospital, the patient being a young girl who had fallen out of a carriage on the Metropolitan Railway. After trephining, it was observed that a considerable portion of cerebral substance had been reduced literally to pulp. The wound was cleansed, drained, and closed, and the patient completely recovered."

The following report (quite contrary to current dogmas about brains) was made in a Paris newspaper of a session of the Academy of Sciences on March 24, 1917, and is quoted by Geley on page 79 of his book:

"He mentions that his first patient, the soldier Louis R , to-day a gardener near Paris, in spite of the loss of a very large part of his left cerebral hemisphere (cortex, white substance, central nuclei, etc.), continues to develop intellectually as a normal subject,in despite of the lesions and the removal of convolutions considered as the seat of essential functions. From this typical case, and nine analogous cases by the same operator, known to the Academy, Dr Guepin says that it may now safely be concluded:
(i). That the partial amputation of the brain in man is possible, relatively easy, and saves certain wounded men whom received theory would regard
as condemned to certain death, or to incurable infirmities.
(2). That these patients seem not in any way
to feel the loss of such a cerebral region."

On page 80 of Geley's book we have the following astonishing case involving an abscess in the brain. We are told the boy had “full use of his intellectual faculties” despite a huge brain abscess and a detachment “which amounted to real decapitation”:

"The first case refers to a boy of 12 to 14 years
of age, who died in full use of his intellectual faculties
although the encephalic mass was completely detached
from the bulb, in a condition which amounted to real
decapitation. What must have been the stupefaction
of the operators at the autopsy, when, on opening
the cranial cavity, they found the meninges heavily
charged with blood, and a large abscess involving
nearly the whole cerebellum, part of the brain and
the protuberance. Nevertheless the patient, shortly
before, was known to have been actively thinking.
They must necessarily have wondered how this could
possibly have come about. The boy complained of
violent headache, his temperature was not below
39 °C. (io2.2°F.) ; the only marked symptoms
being dilatation of the pupils, intolerance of light,
and great cutaneous hyperesthesia. Diagnosed as
meningo-encephalitis."

On page 81 we learn of the following equally astonishing case involving a patient who “thought as do other men” despite having three large brain abscesses, each as large as a tangerine:

"A third case, coming from the same clinic, is
that of a young agricultural labourer, 18 years of
age. The post mortem revealed three communicating
abscesses, each as large as a tangerine orange,
occupying the posterior portion of both cerebral
hemispheres, and part of the cerebellum. In spite
of these the patient thought as do other men, so
much so that one day he asked for leave to settle
his private affairs. He died on re-entering the
hospital."

These cases are quite consistent with more modern cases reported in recent decades, cases in which we also see very little loss of function despite massive brain damage. A 2015 scientific paper looked at 162 cases of surgery to treat brain abscess, in which parts of the brain undergo the cell death known as necrosis, often being replaced with a yellowish pus. The article contains quite a few photos of people with holes in their brains caused by the abscesses, holes in their brains of various sizes. The paper says that “complete resolution of abscess with complete recovery of preoperative neuro-deficit was seen in 80.86%” of the patients, and that only about 6% of the patients suffered a major functional deficit, even though 22% of the patients had multiple brain abscesses, and 30% of the abscesses occurred in the frontal lobe (claimed to be the center of higher thought). 

Interestingly, the long review article on 162 brain abscesses treated by brain surgery makes no mention at all of amnesia or other memory effects, other than telling us that “there was short-term memory loss in 5 cases.” If our memories really are stored in our brains, how come none of these 162 cases of brain abscess seems to have shown any effect at all on permanent memories?

Similarly, a scientific paper on 100 brain abscess cases (in which one fourth of the patients had multiple brain abscesses) makes no mention of any specific memory effect or thinking effect. It tells us that most of the patients had “neurological focal deficits,” but that's a vague term that doesn't tell us whether intellect or memory was affected. (A wikipedia.org article says that such a term refers to "impairments of nerve, spinal cord, or brain function that affects a specific region of the body, e.g. weakness in the left arm, the right leg, paresis, or plegia.") The paper tells us that after treatment “80 (83.3%) were cured, eight (8.3%) died (five of them were in coma at admission), seven had a relapse of the abscess,” without mentioning any permanent loss of memory or mental function in anyone.

Another paper discusses thousands of cases of brain abscesses, without mentioning any specific thinking effects or memory effects.  Another paper refers to 49 brain abscess patients, and tells us that "the frontal lobe was the most common site," referring to the place that is claimed to be a "seat of thought" in the brain. But rather than mentioning any great intellectual damage caused by these brain holes, the paper says that 39 of the patients “recovered fully or had minimal incapacity,” and that five died.

In 1994 Simon Lewis was in his car when it was struck by a van traveling at 75 miles per hour. The crash killed Lewis' wife, and “destroyed a third of his right hemisphere,” according to this press account. Lewis remained in a coma for 31 days, and then awoke. Now, many years later, according to the press account, “he actually has an IQ as high as the one he had before the crash.” In 1997, according to the press account, Lewis had an IQ of 151, about 50% higher than the average IQ of 100. How could someone be so smart with such heavy brain damage, if our brains are really the source of our minds?

These cases are merely a small part of the evidence that large brain damage very often produces only very small effects on mind and memory. The three posts here and here and here give many other cases along the same lines, some suggesting even more dramatically that a large fraction of the brain (often as much as 50%, and sometimes as much as 80%) can be lost or removed without causing much memory loss or preventing fairly normal mental function. The facts of neuroscience do not match the dogmas of neuroscientists, who make unwarranted “brains store memories” and “brains make minds” claims that conflict with facts such as these: medical case histories of heavy brain damage with little mind damage; the short lifetimes of the proteins that make up synapses; the low signal-transmission reliability of noisy synapses; and the failure of scientists to detect any sign of encoded information (other than DNA gene information) in brains.

A study published in December 2018 attempted to draw a link between neural parameters (such as cortical thickness and neuron size) and intelligence. The study failed to present any convincing evidence for such a thing. The study involved only a few dozen subjects, and the neurons analyzed were a few dozen arbitrarily chosen neurons. Given the freedom to make 100 comparisons chosen as you wish from a mass of data, you can produce weak correlations suggesting whatever hypothesis you favor. An example of the very weak correlations in the paper is Figure 2D, which attempts to show a correlation between cortical thickness and IQ. But if you click on the "see more" link, you will see the correlation measure (R squared) is only .15, which is basically no real evidence of a correlation (as explained here), particularly with a sample size so small. When there is good evidence for a correlation, you have an R squared such as .5 or .7. Similarly weak correlations (with an average R squared of only .19) are presented in 4 other graphs.
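Below is a minimal simulation of that multiple-comparisons point, assuming 100 comparisons on purely random data with 30 subjects each (the sample size and comparison count are illustrative assumptions):

```python
# With freedom to try 100 comparisons on pure noise, weak
# "correlations" like R^2 = .15 show up routinely. The 30 subjects
# and 100 comparisons are assumed illustrative values.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_comparisons = 30, 100

outcome = rng.normal(size=n_subjects)                      # e.g. IQ scores (pure noise)
predictors = rng.normal(size=(n_comparisons, n_subjects))  # e.g. neural parameters

r_squared = np.array([np.corrcoef(p, outcome)[0, 1] ** 2 for p in predictors])
print(f"best R^2 out of {n_comparisons}: {r_squared.max():.2f}")
print(f"comparisons with R^2 >= .15: {(r_squared >= 0.15).sum()}")
```

Even with no real effect anywhere in the data, a handful of the 100 comparisons will typically clear the R squared level of .15 reported in the paper.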

But there is in the paper evidence that conflicts with the whole idea that brains produce minds. That evidence is found in Table 1, which lists the IQ scores of people with serious brain tumors requiring surgery. The IQ tests were taken shortly before the surgery, and tell us about the intelligence of people whose brains had been ravaged by tumors. Here are the IQ scores of the people with brain tumors: 88, 119, 88, 107, 125, 84, 110, 97, 77, 83, 102, 99, 82, and 114. This gives us an average of 98, which is only very slightly smaller than the average IQ of 100. The figures are not what we would expect given the claim that the brain produces intelligence, and they are consistent with the hypothesis that brain tumors do not have a large effect on intelligence. Similar results are found in this paper, in which 49 brain tumor patients were found (in pre-surgical IQ tests) to have an average IQ of 95.4. We can easily account for the slightly-below-average scores by simply assuming that such brain tumor patients would often have visual perception problems, muscular coordination problems, psychological distress, and head pain, all of which would tend to slightly decrease scores on pencil-and-paper IQ tests without there being any actual decrease in intelligence.
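As a quick check on the arithmetic, the listed pre-surgery scores do average to about 98:

```python
# Average of the pre-surgery IQ scores listed in Table 1 (as quoted above).
scores = [88, 119, 88, 107, 125, 84, 110, 97, 77, 83, 102, 99, 82, 114]
print(round(sum(scores) / len(scores), 1))   # 98.2
```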