
Our future, our universe, and other weighty topics


Saturday, September 29, 2018

Computer Tests Shed Light on the Chance of Getting Historical ESP Lab Results

A classic work in the field of parapsychology was the 1940 book Extra-Sensory Perception After Sixty Years by J. G. Pratt and Joseph B. Rhine (a professor of psychology), along with Smith, Stuart and Greenwood. The book summarized sixty years of experimental research into extrasensory perception (ESP). Since the book is readily available online (at this URL), and is one of the chief pieces of evidence in support of ESP, no one who has not read it has any business dismissing the evidence for ESP; yet almost all who dismiss that evidence have never read the book.

The book supplies about as good a body of laboratory evidence for ESP as you could ever hope for. But it has one shortcoming: the results are not presented quite as clearly as they could be. The main evidence is reported in tables with columns marked “Dev.” and “C.R.” Someone unfamiliar with statistics reading these tables may not be able to tell how dramatic the results were as evidence. Sometimes the results list a standard deviation in a column marked S.D., but most non-scientists cannot tell the difference between a deviation of 2 standard deviations and a deviation of 30 standard deviations (the former is weak evidence, and the latter is very strong evidence).

In the year 2018 there is a way to show how dramatic the results in this book are. The method is to run computer simulations that perform random guessing. For example, consider a result reported in Extra-Sensory Perception After Sixty Years: the result presented in Table 11, that in 1939 Pratt and Woodruff did 60,000 trials in which the number of correct guesses exceeded the chance expectation by 489. Using a computer program I wrote, I can run 100,000 simulated runs that each involve 60,000 guesses, and see whether any of these runs produces a chance result as impressive as the one Pratt and Woodruff got.

Below is a table comparing the Pratt and Woodruff result with the computer simulations I ran using a program I wrote (the text of which is at the end of this post).


Experimenter(s): Pratt and Woodruff
Year: 1939
Source: Extra-Sensory Perception After Sixty Years, Table 11 (URL). See also this URL.
Number of trials: 60,000
Special test conditions: “Two experimenters, independent recording, sensory cues excluded, official record sheets, triple checked.” Opaque screens used between subject and experimenter.
Probability of a random guess being correct: 1 in 5
Number of correct guesses above the number expected by chance: 489
Number of computer runs, each consisting of 60,000 trials guessing a number between 1 and 5: 100,000
Maximum number of correct guesses above the chance expectation, in any of these runs: 426
Average guess in these trials: 2.999984948333333
Number of runs matching or beating the human experimental result: 0
Program arguments (use code below to reproduce): 100000 60000 5

So in my computer experiments there were 100,000 runs that each consisted of 60,000 trials (the same number as in the historical ESP experiment described above). In the run that was most successful out of the 100,000, there were 426 more correct guesses than would occur on average by chance. But this best random result out of 100,000 was much less impressive than the actual experimental tests involving human subjects guessing, for in those actual tests involving humans there were 489 more correct guesses than we would expect by chance.

You can reproduce this result by compiling the code at the bottom of this post in a Java compiler, and using “100000 60000 5” as the program arguments.
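The “C.R.” (critical ratio) column in the book's tables is simply the deviation above chance divided by the binomial standard deviation √(n·p·(1−p)). As a sanity check on the table above, a few lines of Java (a sketch of my own, not from the book's authors; the class and method names are mine) reproduce the critical ratio of about 4.99 that Table 11 reports for this experiment:

```java
public class CriticalRatio {
    // Critical ratio = deviation above chance / binomial standard deviation.
    static double criticalRatio(long trials, double pSuccess, long deviation) {
        double sd = Math.sqrt(trials * pSuccess * (1.0 - pSuccess));
        return deviation / sd;
    }

    public static void main(String[] args) {
        // Pratt-Woodruff: 60,000 trials, 1-in-5 chance, 489 hits above chance
        double cr = criticalRatio(60000, 0.2, 489);
        System.out.printf("Critical ratio = %.2f%n", cr); // about 4.99
    }
}
```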

In row 4 of Table 5 of the book we have data that corresponds to the top rows of the table below. The data is for ESP experiments in which the probability of guessing correctly was 1 in 5. The experiments were done between 1934 and 1939, by a variety of experimenters including Rhine.


Experimenter(s): Rhine and others
Year: 1934 to 1939
Source: Extra-Sensory Perception After Sixty Years, Table 5, Row 4 (URL)
Number of trials: 2,757,854
Probability of a random guess being correct: 1 in 5
Number of correct guesses above the number expected by chance: 52,720
Number of computer runs, each consisting of 2,757,854 trials guessing a number between 1 and 5: 10,000
Maximum number of correct guesses above the chance expectation, in any of these runs: 2,113
Average guess in these trials: 2.999995953592902
Number of runs matching or beating the human experimental result: 0
Program arguments (use code below to reproduce): 10000 2757854 5


So in my computer experiments there were 10,000 runs that each consisted of 2,757,854 random trials (the same number as in the historical ESP tests mentioned above). In the run that was most successful out of the 10,000, there were 2,113 more correct guesses than would occur on average by chance. But this best random result out of 10,000 was very much less impressive than the actual experimental tests involving human subjects guessing, for in those actual tests involving humans there were 52,720 more correct guesses than we would expect by chance.

You can reproduce this result by compiling the code at the bottom of this post in a Java compiler, and using “10000 2757854 5” as the program arguments, although when I did this using the NetBeans Java compiler, it took 23 minutes for the program to finish.

Zener cards used in ESP experiments

I can only wonder how many runs I would have to do in excess of 10,000 to get a random result as good as the result produced by actual human guessers. Given the very large gap between the 2113 number reported above and the 52,720 number given above, I think I would have to let the computer run for so long that it produced millions or billions or trillions of runs. I would probably die before the computer simulated result reached an excess above chance as great as 52,720.
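A rough answer can be computed. Under the normal approximation, the deviation of 52,720 corresponds to a z-score of about 79, and the chance of a single random run doing that well is closely approximated by the standard asymptotic tail bound φ(z)/z. That probability underflows an ordinary double, so the sketch below (my own, with names of my own choosing) works in base-10 logarithms; it indicates that on the order of 10^1370 runs would be needed before expecting even one chance success:

```java
public class ExpectedRuns {
    // log10 of the upper tail P(Z >= z), using the asymptotic bound phi(z)/z,
    // which is essentially exact for large z. Computed in logarithms because
    // the probability itself underflows a double for z this large.
    static double log10TailProbability(double z) {
        double logPhi = -z * z / 2.0 - 0.5 * Math.log(2.0 * Math.PI);
        return (logPhi - Math.log(z)) / Math.log(10.0);
    }

    public static void main(String[] args) {
        double sd = Math.sqrt(2757854.0 * 0.2 * 0.8); // about 664
        double z = 52720.0 / sd;                      // about 79.4
        // Expected runs needed = 1 / tail probability, about 10^1370
        System.out.printf("z = %.1f; expected runs needed = about 10^%.0f%n",
                z, -log10TailProbability(z));
    }
}
```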

For the next comparison I will use a test Rhine made with Hubert Pearce, who produced astonishing results under ESP tests. I will cite only the results produced under the strict condition of an opaque screen between the subject (Pearce) and the experimenter.


Experimenter(s): Rhine
Year: 1937
Source: http://www.sacred-texts.com/psi/esp/esp14.htm (URL)
Number of trials: 600
Special test conditions: Screen between experimenter and subject
Probability of a random guess being correct: 1 in 5
Number of correct guesses above the number expected by chance: 95
Number of computer runs, each consisting of 600 trials guessing a number between 1 and 5: 1,000,000
Maximum number of correct guesses above the chance expectation, in any of these runs: 50
Average guess in these trials: 3.0000910866666666
Number of runs matching or beating the human experimental result: 0
Program arguments (use code below to reproduce): 1000000 600 5

So in my computer experiments there were 1,000,000 runs that each consisted of 600 random trials (the same number as in the historical ESP experiment mentioned above). In the run that was most successful out of the one million runs, there were 50 more correct guesses than would occur on average by chance. But this best random result out of these million runs was much less impressive than the actual experimental tests involving a human subject guessing, for in those actual tests there were 95 more correct guesses than we would expect by chance.

ESP research didn't stop in the 1930s. There have been many ESP experiments in more recent decades. Some of the more successful have been tests using the ganzfeld sensory deprivation technique. In 2010 Storm and Tressoldi published a meta-analysis of ganzfeld ESP experiments in a scientific journal. The analysis summarized the results of 63 studies in which there were four possible answers. The studies involved 4442 trials and 1326 hits (correct answers), an accuracy rate of 29.9%, much higher than the 25% rate expected by chance. The table below compares this result with the result obtained in a computer test.

Experimenter(s): Various
Year: 1992 – 2008
Source: “Meta-Analysis of Free-Response Studies, 1992–2008: Assessing the Noise Reduction Model in Parapsychology” by Lance Storm, Patrizio Tressoldi and Lorenzo Di Risio, page 475 (http://deanradin.com/evidence/Storm2010MetaFreeResp.pdf)
Number of trials: 4,442
Special test conditions: Sensory deprivation of subjects
Probability of a random guess being correct: 1 in 4
Number of correct guesses above the number expected by chance: 215
Number of computer runs, each consisting of 4442 trials guessing a number between 1 and 4: 100,000
Maximum number of correct guesses above the chance expectation, in any of these runs: 134
Average guess in these trials: 2.5000297208464657
Number of runs matching or beating the human experimental result: 0
Program arguments (use code below to reproduce): 100000 4442 4


So in my computer experiments there were 100,000 runs that each consisted of 4442 random trials (the same number as in the set of ESP experiments discussed above). In the run that was most successful out of the 100,000, there were 134 more correct guesses than would occur on average by chance. But this best random result out of 100,000 was much less impressive than the actual experimental tests involving human subjects guessing, for in those actual tests involving humans there were 215 more correct guesses than we would expect by chance.

You can reproduce this result by compiling the code at the bottom of this post in a Java compiler, and using “100000 4442 4” as the program arguments.
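For comparison, the same deviation-over-standard-deviation calculation can be run over all four experimental results discussed above in one pass (a sketch of my own using the figures from the tables above; it prints critical ratios of roughly 5.0, 79.4, 9.7 and 7.4):

```java
public class AllCriticalRatios {
    public static void main(String[] args) {
        // {trials, probability of a correct guess, deviation above chance}
        double[][] results = {
            {60000,   0.20, 489},    // Pratt-Woodruff, 1939
            {2757854, 0.20, 52720},  // Rhine and others, 1934-1939
            {600,     0.20, 95},     // Rhine with Pearce, 1937
            {4442,    0.25, 215},    // Ganzfeld meta-analysis, 1992-2008
        };
        for (double[] r : results) {
            // Binomial standard deviation: sqrt(n * p * (1 - p))
            double sd = Math.sqrt(r[0] * r[1] * (1.0 - r[1]));
            System.out.printf("n=%7.0f  deviation=%5.0f  critical ratio=%5.1f%n",
                    r[0], r[2], r[2] / sd);
        }
    }
}
```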

These computer simulations help show that the experimental evidence for extrasensory perception (ESP) is overwhelming. In not one of the more than 1,200,000 simulations did the computer guessing produce a result anywhere near as high as was obtained using human subjects. Three of the four experimental results involved special precautions that should have excluded any reasonable possibility of cheating. In the book Extra-Sensory Perception After Sixty Years the authors address and debunk all of the common objections made against laboratory research into ESP.

Very many scientists reject the evidence for ESP, even though the evidence for ESP is vastly stronger than the evidence for some of the things that scientists believe in. Here is an example. Scientists tell us in a matter-of-fact manner that the Higgs Boson exists. But the wikipedia.org article on the Higgs Boson tells us that it was established with experimental evidence that had a statistical significance of merely 5.9 sigma, which corresponds to a probability of about 1 in 588 million of occurring by chance. That's an experimental result not nearly as strong as the Rhine result mentioned above.

Another book you can read online (at this URL) is the "Handbook of Tests in Parapsychology" by Betty Humphrey. We read on page 42 of that book that a Critical Ratio of 5.0 corresponds to a probability of 1 in 3,384,000. The Pratt-Woodruff experiment described above is listed in Extra-Sensory Perception After Sixty Years (Table 11) as having such a Critical Ratio of 5.0 (4.99 to be precise). The Humphrey book tells us that a Critical Ratio of 6 corresponds to a probability of 1 in 1,000,000,000. The Rhine set of 2,757,854 trials (discussed above) is listed in Table 5 of Extra-Sensory Perception After Sixty Years as having an enormous Critical Ratio of 79. If a Critical Ratio of 5 corresponds to a probability of 1 in 3,384,000, and a Critical Ratio of 6 to a probability of 1 in 1,000,000,000, you can get an idea of the "never by chance in the history of the universe" probability that a Critical Ratio of 79 corresponds to. Such a result is vastly more impressive than the 5.9 sigma result cited for the Higgs Boson. In the Riess ESP test discussed here, a test in which the subject and the experimenter were in separate buildings, a young woman achieved a phenomenal 73 percent accuracy rate (making 1850 guesses that should only have been 20 percent accurate by chance). The Critical Ratio for that experiment was 53.
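These tabulated conversions can be checked numerically: a Critical Ratio is just a z-score, and the one-tailed probability P(Z ≥ z) can be obtained by integrating the standard normal density over the tail. The sketch below (my own; crude midpoint integration, not code from any of the books cited) gives about 1 in 3.5 million for a Critical Ratio of 5 and about 1 in 1 billion for a Critical Ratio of 6, close to the figures in Humphrey's table:

```java
public class TailOdds {
    // One-tailed P(Z >= z), by midpoint integration of the standard normal
    // density from z out to z + 12 (the remainder beyond that point is
    // negligible at these z values).
    static double upperTail(double z) {
        double h = 1e-5, sum = 0;
        long steps = (long) (12 / h);
        for (long i = 0; i < steps; i++) {
            double x = z + (i + 0.5) * h;
            sum += Math.exp(-x * x / 2);
        }
        return sum * h / Math.sqrt(2 * Math.PI);
    }

    public static void main(String[] args) {
        System.out.printf("CR 5.0: about 1 in %,.0f%n", 1 / upperTail(5.0));
        System.out.printf("CR 6.0: about 1 in %,.0f%n", 1 / upperTail(6.0));
    }
}
```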

Inexplicably, many a physicist believes in the Higgs Boson, but not in ESP, even though the experimental evidence is incomparably stronger for ESP. The evidence for ESP includes other experimental results even stronger than the ones mentioned here (see the table at the end of this post for examples), along with a vast amount of anecdotal evidence in which people sensed or had thoughts of things that had not been revealed by their senses.

Below is the simple Java code I used for these experiments. You can run these experiments (using the program arguments listed above) by compiling this code with a Java compiler (I used the NetBeans IDE).

package randomnumbertrials;

import java.util.Random;

/**
 *
 * @author Mark
 */
public class RandomNumberTrials {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {

        if (args.length < 3)
        {
            System.out.println("Must supply 3 program arguments:");
            System.out.println("Number of runs, number of trials per run, max guess per trial");
            return;
        }
        long numberOfRuns = Long.parseLong(args[0]);
        long numberOfTrials = Long.parseLong(args[1]);
        int maxGuessPerTrial = Integer.parseInt(args[2]);

        Random rand = new Random();
        long maxNumberOfSuccesses = 0;
        long totalTrials = 0;
        double resultTotal = 0;
        for (long i = 0; i < numberOfRuns; i++)
        {
            long numberOfSuccesses = 0;
            for (long j = 0; j < numberOfTrials; j++)
            {
                // One trial: a random "target" and a random "guess," both
                // between 1 and maxGuessPerTrial; a match counts as a success.
                int randomInt = getRandomNumber(rand, maxGuessPerTrial);
                int guess = getRandomNumber(rand, maxGuessPerTrial);
                if (randomInt == guess)
                    numberOfSuccesses++;
                totalTrials++;
                resultTotal += guess;
            }
            if (numberOfSuccesses > maxNumberOfSuccesses)
                maxNumberOfSuccesses = numberOfSuccesses;
        }
        System.out.println("Ran " + numberOfRuns + " runs, each with " + numberOfTrials + " trials.");
        System.out.println("Highest number of successes: " + maxNumberOfSuccesses);
        // Floating-point division, so the expected number of successes is not
        // truncated when numberOfTrials is not a multiple of maxGuessPerTrial.
        double averageExpectedResult = (double) numberOfTrials / maxGuessPerTrial;
        double deviation = maxNumberOfSuccesses - averageExpectedResult;
        System.out.println("Highest deviation above expected result: " + deviation);
        double averageResult = resultTotal / totalTrials;
        System.out.println("Average result = " + averageResult);
    }

    // Returns a uniformly random number between 1 and highestNum (inclusive).
    static public int getRandomNumber(Random rand, int highestNum)
    {
        return rand.nextInt(highestNum) + 1;
    }
}

Postscript: I have had many personal experiences strongly suggestive of extrasensory perception. I'll give just one example, not at all the most impressive one. I once asked one of my daughters to fill in the statement "Why is there ___ rather than ___?" I was thinking of the classic philosophical question "Why is there something rather than nothing?" but also thought of the question, "Why is there war rather than peace?" My daughter answered, "Why is there fighting rather than peace?" I asked her to ask my wife the question over the phone, and my wife said that there was too much noise where she was, and she needed a peaceful place to think about it. She later answered, "Why is there so much death rather than life?" which is pretty much the same as "Why is there war rather than peace?" I then asked the question by email to my other daughter, and she answered, "Why is there war rather than peace?"  You can try this with your friends. 

Tuesday, September 25, 2018

Barash's Poor Logic on Cosmic Fine-Tuning

Our universe seems to be incredibly fine-tuned to allow the existence of biological organisms such as ourselves. Against all odds, the fundamental constants have values that allow the existence of long-lived stars, planets and living beings. Make minor changes in any of a dozen places in the universe's fundamental constants and laws, and observers such as us would be impossible.

An example (one of many discussed here) is the exact numerical equality of the absolute value of the proton charge and the electron charge. Given that each proton has a mass 1836 times greater than the mass of each electron, we would not at all expect these two fundamental particles to have electric charges that are exactly equal or exactly opposite. But according to modern science the electric charge of each electron in the universe is the exact opposite of the electric charge of each proton in the universe. The equality has been proven to be an exact match to at least 18 decimal places. We would not expect a coincidence like this to occur in even 1 in a trillion random universes. The scientist Greenstein has stated that if this coincidence did not exist, planets could not hold together, because the electromagnetic repulsion between particles in a planet would totally overwhelm the gravity that holds the planet together (electromagnetism being a fundamental force more than a trillion trillion trillion times stronger than gravitation).


In Aeon magazine we recently had an evolutionary biologist named David Barash do his best to sweep under the rug the gigantic reality of cosmic fine-tuning. His “Anthropic Arrogance” essay is a grab bag of points that do not add up to any forceful objection to claims such as, “Our universe shows life-favoring characteristics so fantastically improbable that we should suspect a grand purpose behind its physical reality.”

Barash tries to raise doubt about the topic by quoting Einstein's statement “What really interests me is whether God had any choice in the creation of the world?” He states the following:

Note that Einstein was asking if the deep laws of physics might have in fact fixed the various physical constants of the Universe as the only values that they could possibly have, given the nature of reality, rather than having been ordained for some ultimate end – notably, us. At present, we simply don’t know whether the way the world works is the only way it could; in short, whether currently identified laws and physical constants are somehow bound together, according to physical law, irrespective of whether human beings – or anything else – eventuated.

But it is not at all correct to claim that “we don't know whether the way the world works is the only way it could.” We do know exactly such a thing. We know, for example, that each proton has a mass 1836 times greater than that of each electron, and that the electric charge of each proton is the exact opposite of that of each electron. There is no a priori reason why such numbers could not have been totally different. And so it is with all of the fundamental constants of the universe. The hope occasionally expressed by physicists (that they might one day have a super-theory that explains all the fundamental constants and laws) is just a fantasy hope, kind of like some child saying, “One day I hope to own a marble mountain-top castle in Spain.” There does not exist any theory showing why any of the fundamental constants could not have had a vastly different value. In the unlikely event that physicists ever produce such a “this explains it all” theory, then appealing to such a thing may have some force; but until then, appealing to such a highly improbable possibility has no force.

Barash then resorts to a quite ridiculous argument sometimes made, that the universe isn't so fine-tuned for life because most of it is inhospitable to life. He says, “The stark truth is that nearly all of it is incompatible with life – at least our carbon-based, water-dependent version of it.” True, since the majority of the universe is just empty space. But anyone familiar with gravitation will know that you can't have a life-bearing planet without most of a solar system or galaxy being empty space. For example, if the space of the solar system mostly consisted of planets, the mass of such planets would exert so much gravitational force that the atmosphere of the earth would be pulled into space, and no one could breathe (not to mention that you'd be pulled out into outer space whenever you walked outside of your door).

Barash then resorts to a completely fallacious “extremely improbable things are very common” argument often made when materialists discuss cosmic fine-tuning. He points out that if you shuffle a set of cards, the chance of getting that exact sequence is something like 1 in 10 to the sixtieth power. Similarly, he reasons, if you strike a golf ball, there are trillions of different positions where the golf ball could end up, each extremely unlikely. Barash states, “For us to marvel at the fact of our existing (in a Universe that permits that existence) is comparable to a golf ball being amazed at the fact that it ended up wherever it did.”

This type of reasoning is completely erroneous, for it commits the fallacy known as false analogy. The fallacy of false analogy is committed when you draw an analogy between two things that aren't similar. The reason why the average shuffled deck of cards is not comparable to a fine-tuned universe is that a fine-tuned universe is something that resembles a product of design, but a random deck of shuffled cards does not resemble a product of design. The reason why a randomly landed golf-ball is not comparable to a fine-tuned universe is that a fine-tuned universe is something that resembles a product of design, but a randomly landed golf ball does not resemble a product of design. It is therefore erroneous to claim that, “For us to marvel at the fact of our existing (in a Universe that permits that existence) is comparable to a golf ball being amazed at the fact that it ended up wherever it did” – for in the first case there is something that resembles design, and in the second case there isn't.

Barash continues the same witless reasoning by talking about the improbability of one particular sperm uniting with one particular egg to produce a baby. It's the same “extremely improbable things are very common” bad reasoning. He points out that since there are 150 million sperm in a man's ejaculation, it's very improbable that any one sperm would unite with an egg. This is also a false analogy, because all of those sperm are identical, so the uniting of one particular sperm with an egg does not resemble a product of design or even something terribly lucky. So it's a false analogy to compare such a thing with a fine-tuned universe that seems to resemble a product of design and has all kinds of “lucky coincidences” all over the place.

Below is a conversation that illustrates the fallacious nature of the type of reasoning Barash uses in this case:

Son: Bye, Mom. I'm going to Las Vegas, and I will gamble my college fund at the roulette table, continuing to bet all my winnings until I become a billionaire.
Mom: That's crazy – you're all but certain to lose it all.
Son: But Mom, haven't you heard that very improbable things often happen? Why, if I shuffle this deck of cards, the chance of getting that particular sequence of cards is one in a gazillion. So my chance of winning the billion isn't so low.
Mom: You silly goose! Only run-of-the-mill, humdrum improbable things happen all the time. Extremely lucky random events don't happen often.

The son's reasoning is entirely fallacious. While improbable outcomes that are not lucky and do not resemble the product of design happen all the time, it is extremely rare and unlikely for a random outcome to resemble a product of design. So his chance of winning the billion is every bit as low as his mother thinks.

Barash then asks two rhetorical questions about an asteroid collision millions of years ago, and I may note that such questions do nothing to advance his case.

Barash then appeals to the possibility of the multiverse as an explanation for cosmic fine-tuning, the idea that there are a large number of other universes. The fallacy of such an appeal is discussed in detail in this post, in which I give six reasons why such an appeal is fallacious. The best reason for rejecting the multiverse as an explanation for cosmic fine-tuning is the simple fact that you do not increase the likelihood of any one random trial being successful if you increase the number of random trials. For example, your chance of winning a million dollars in a weekend at Las Vegas is exactly the same regardless of whether or not there are an infinity of universes filled with gamblers who gamble at casinos. So whether or not there are a vast number of other universes has no effect on the probability of our universe being accidentally habitable. If there are a sufficient number of improbable coincidences, adding up very forcibly to an appearance of design, we should suspect such design if we think there is only one universe; and we should suspect such design with exactly the same force if we think there are many other universes.


Bad reasoning about your chances at Las Vegas

Barash then has a long paragraph building on the statement, “Shanks suggests that the multiverse hypothesis ‘does to the anthropic Universe what Copernicus’s heliocentric hypothesis did to the cosmological vision of the Earth as a fixed centre of the Universe’.” In the paragraph he drops the names of Galileo, Kepler and Copernicus. But it's not an appropriate comparison, because the conclusions of Galileo, Kepler, and Copernicus were based on observations, and there are zero observations of any other universe. What we have going on here is the same rhetorical trick that I discuss in my post “When Scientific Theorists Use 'Prestige by Association' Ploys.” Barash is trying to give some credibility to the groundless notion of the multiverse by trying to draw a very strained association between the multiverse and the hallowed scientific names of Galileo, Kepler and Copernicus. We shouldn't be fooled by such a maneuver.

Barash then appeals to the possibility of extraterrestrial life-forms that can get by on conditions much worse than we have. But this possibility does nothing to weaken the case for cosmic fine-tuning. I can give an analogy to explain why. If I come to a log cabin in the woods, I may reason that it's too improbable that such a house could have appeared by a chance arrangement of falling logs, and that the house is probably the product of design. If you were there with me, you might say, “That's not true because an organism could have just used a lucky tree hollow as its home.” But that does nothing to defeat my argument. Similarly, if it's incredibly improbable that long-lived stable sun-like stars could exist in a random universe (and it is for the reasons discussed here), the existence of conditions that allow such stars strengthens the case for cosmic fine-tuning, regardless of whether some organism could barely get by living on planets revolving around stars that are less favorable for life, such as a star that periodically zaps its planets with high doses of radiation.

Barash then refers us to Lee Smolin's groundless speculation that attempted to combine the idea of natural selection with some weird speculation that collapsing black holes spit out baby universes. This wildly imaginative theory, known as the theory of cosmological natural selection, has not been widely accepted by physicists. We know of no evidence at all that black holes spit out new universes. And since universes don't have genes, and don't mate with other universes, it is preposterous for Smolin to be claiming that natural selection might come into play on the level of universes. Even if it were true that black holes did spawn child universes, this would do nothing to explain the fine-tuned characteristics of our universe, for the same reason that natural selection on planet Earth does not explain the appearance of very complex visible biological innovations (contrary to the claims of those like Barash).

The reason is the same in both cases: the fact that natural selection cannot occur in regard to some particular innovation until after that innovation appears. We cannot explain the appearance of something like a vision system in organisms by saying that such an innovation improved their survival and reproduction rate, because such an improvement (the same as a degree of natural selection) would not occur until after such a biological innovation first appeared; and a consequence that follows something is never the cause of that thing. For similar reasons, natural selection could not be the cause of some universe being fine-tuned. The idea of yanking natural selection from the biological world and trying to fit it into the vastly different world of cosmology makes no more sense than trying to apply Freudian psychology to a discussion of colliding subatomic particles.

Next in Barash's essay he reminds us of the very surprising ending of Carl Sagan's novel Contact, in which scientists, after computing pi (the ratio of a circle's circumference to its diameter) to many additional digits, found a gigantic circle embedded within the digits of pi. In Sagan's novel this discovery is treated as proof that the universe had a designer. This ending was omitted from the movie of Contact. I'm not sure why Barash is bringing this up. Perhaps he is trying to suggest that scientists finding something suggesting the universe is designed belongs only in fiction. But it suggests that this very influential scientist (Sagan) was not too averse to such a possibility, so it does nothing to help the case Barash is trying to make.

In his last paragraph Barash builds on his previous attempts to associate the idea of cosmic fine-tuning with a claim that humans are the center of the universe, or that the universe was designed for humans. But there is no necessary association between the two, and few people promoting the idea of cosmic fine-tuning claim that the universe was designed for humans specifically, preferring the more general idea that the universe may have been designed for intelligent observers. So in this regard Barash is attacking a straw man. Someone can believe that the universe was designed for life, and that there are numerous different types of intelligent life forms scattered across the universe. You can believe that without believing that humans are unique, and without believing that humans are the most advanced biological organisms, and without believing that humans are the centerpiece of the universe.

So you may have Copernican-style objections about humans being the centerpiece of the universe, but that does nothing to defeat or discredit the idea of cosmic fine-tuning. Whether the universe was fine-tuned for living observers, and whether man is the center of the universe or the most advanced organism in the universe are two entirely different questions. The title of Barash's essay is “Anthropic Arrogance.” But there is nothing arrogant at all about noticing a long series of extremely lucky coincidences and favorable facets of the universe's fundamental constants and laws, and suspecting that more than mere chance is involved.

After noticing the fallacy-ridden reasoning of evolutionary biologist Barash on this topic of the universe's fine-tuned physics, we should ask: in what other places have evolutionary biologists got away with fallacious reasoning? We should then go back and scrutinize their more doubtful statements, such as claims that vastly complex functional systems such as vision systems (more complex than a smartphone) can be explained by saying that they appeared because of accumulations of random mutations, a kind of “stuff piles up” explanation as vacuous as the explanation of “stuff happens.”

Friday, September 21, 2018

He Could Not Speak Until They Took Out Half of His Brain

Our neuroscientists have the bad habit of frequently spouting dogmas that have not been established by observations. We have all heard these dogmas stated hundreds of times, such as when neuroscientists claim that memories are stored in brains, and that our minds are produced by our brains. There are actually many observations and facts that contradict such dogmas, such as the fact that many people report their minds and memories still working during a near death experience in which their brains shut down electrically (as the brain does soon after cardiac arrest).

One of the most dramatic types of observations conflicting with neuroscience dogmas is the fact that memory and intelligence are well preserved after the operation called hemispherectomy. Hemispherectomy is the surgical removal of half of the brain. It is performed on children who suffer from severe and frequent epileptic seizures.

In a scientific paper “Discrepancy Between Cerebral Structure and Cognitive Functioning,” we are told that when half of their brains are removed in these operations, “most patients, even adults, do not seem to lose their long-term memory such as episodic (autobiographic) memories.” The paper tells us that Dandy, Bell and Karnosh “stated that their patient's memory seemed unimpaired after hemispherectomy,” the removal of half of their brains. We are also told that Vining and others “were surprised by the apparent retention of memory after the removal of the left or the right hemisphere of their patients.”

On page 59 of the book The Biological Mind, the author states the following:

A group of surgeons at Johns Hopkins Medical School performed fifty-eight hemispherectomy operations on children over a thirty-year period. "We were awed," they wrote later of their experiences, "by the apparent retention of memory after removal of half of the brain, either half, and by the retention of the child's personality and sense of humor." 

In the paper "Neurocognitive outcome after pediatric epilepsy surgery" by Elisabeth M. S. Sherman, we have some discussion of the effects on children of temporal lobectomy (removal of the temporal lobe of the brain) and hemispherectomy, surgically removing half of the brain to stop seizures. We are told this:

After temporal lobectomy, children show few changes in verbal or nonverbal intelligence....Cognitive levels in many children do not appear to be altered significantly by hemispherectomy. Several researchers have also noted increases in the intellectual functioning of some children following this procedure....Explanations for the lack of decline in intellectual function following hemispherectomy have not been well elucidated. 

Referring to a study by Gilliam, the paper states that of 21 children who had parts of their brains removed to treat epilepsy, including 10 who had surgery to remove part of the frontal lobe, "none of the patients with extra-temporal resections had reductions in IQ post-operatively," and that two of the children with frontal lobe resections had "an increase in IQ greater than 10 points following surgery." 

The paper here gives precise before and after IQ scores for more than 50 children who had half of their brains removed in a hemispherectomy operation in the United States.  For one set of 31 patients, the IQ went down by an average of only 5 points. For another set of 15 patients, the IQ went down less than 1 point. For another set of 7 patients the IQ went up by 6 points. 

The paper here (in Figure 4) describes IQ outcomes for 41 children who had half of their brains removed in hemispherectomy operations in Freiburg, Germany. For the vast majority of children, the IQ was about the same after the operation. The number of children who had increased IQs after the operation was greater than the number who had decreased IQs. 

Referring to these kinds of surgeries to remove huge amounts of brain tissue, the paper “Verbal memory after epilepsy surgery in childhood” states, “Group-wise, average normed scores on verbal memory tests were higher after epilepsy surgery than before, corroborating earlier reports.”

Some try to explain these results as some kind of special ability of the child brain to recover. But there are similar results even for adult patients. The page here mentions 41 adult patients who had a hemispherectomy. It says, “Forty-one patients underwent additional formal IQ testing postsurgery, and the investigators observed overall stability or improvement in these patients,” and notes that “significant functional impairment has been rare.”

Of these cases of successful hemispherectomy, perhaps none is more astonishing than a case of a boy named Alex who did not start speaking until the left half of his brain was removed. A scientific paper describing the case says that Alex “failed to develop speech throughout early boyhood.” He could apparently say only one word (“mumma”) before his operation to cure epilepsy seizures. But then following a hemispherectomy (also called a hemidecortication) in which half of his brain was removed at age 8.5, “and withdrawal of anticonvulsants when he was more than 9 years old, Alex suddenly began to acquire speech.” We are told, “His most recent scores on tests of receptive and expressive language place him at an age equivalent of 8–10 years,” and that by age 10 he could “converse with copious and appropriate speech, involving some fairly long words.” Astonishingly, the boy who could not speak with a full brain could speak well after half of his brain was removed. The half of the brain removed was the left half – the very half that scientists tell us is the half that has more to do with language than the right half. 

Cases like this make a mockery of scientists' claims to understand the human brain. When scientists discuss scientific knowledge relating to memory, they almost never discuss the most relevant thing they could discuss: the cases of high brain function after hemispherectomy operations in which half of the brain is removed. Instead the scientists cherry-pick information, and describe a few experiments and facts carefully selected to support their dogmas, such as the dogma that brains store memories, and that brains make minds. They also fail to discuss the extremely relevant research of John Lorber, who documented many cases of high-functioning humans who had lost almost all of their brains due to hydrocephalus.


cherry picking

A scientist discussing memory will typically refer us to experiments involving rodents. Such experiments are almost always studies with low statistical power, because the experimenter failed to use at least 15 animals per study group, the minimum needed for a moderately reliable result with a low risk of a false alarm. There will typically be some graph showing some measurement of what is called freezing behavior, which occurs when a rodent stops moving. The experimenter will claim that this shows something was going on in regard to memory, although it probably shows no such thing, because all measurements of a rodent's degree of freezing are subjective judgments in which an experimenter's bias might have influenced things. There will often be claims that a fear memory was regenerated by electrically zapping some part of the brain where the experimenter thought the memory was stored. Such claims have little force, because it is known that there are many parts of a rodent's brain that will cause a rodent to stop moving when such parts are electrically stimulated. And, of course, rodent experiments prove nothing about human memory, because humans are not rodents.
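To make concrete why small group sizes matter, here is a minimal sketch (with purely illustrative numbers, not drawn from any study discussed above) that approximates the statistical power of a two-group comparison using a standard normal approximation:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sample_power(effect_size, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample test at alpha = 0.05.
    effect_size is measured in standard deviations (Cohen's d);
    this is a normal approximation, not an exact t-test calculation."""
    z = effect_size * math.sqrt(n_per_group / 2.0) - z_crit
    return normal_cdf(z)

# Even for a fairly large true effect (0.8 standard deviations),
# power climbs steeply as the number of animals per group grows.
for n in (5, 8, 15, 30):
    print(n, round(two_sample_power(0.8, n), 2))
```

With only 5 to 8 animals per group, such a study will miss a real effect more often than it detects one, which is the sense in which small-n experiments are unreliable.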

When a scientist discusses memory research, he will typically discuss the case of patient HM, a patient who was bad at forming new memories after damage to the tiny brain region called the hippocampus. Again and again, writers will speak as if this proves the hippocampus is crucial to memory. It certainly does not. The same very rare problem in forming new memories cropped up (as reported here) in a man who underwent a dental operation (a root canal). The man had no brain damage, but after the root canal he was reportedly unable to form new memories. Such cases are baffling, and the fact that they can occur with or without brain damage tells us no clear tale about whether the hippocampus is crucial for memory. The hemispherectomy cases suggest that it is not, for each patient who had a hemispherectomy lost one of their two hippocampi, and overall there was little permanent effect on the ability to form new memories.

A scientific paper tells us that “lesions of the rodent hippocampus do not produce reliable anterograde amnesia for context fear,” meaning rodents with a damaged hippocampus can still produce new memories. The paper also tells us, “These data suggest that the hippocampus is normally involved in context conditioning but is not necessary for learning to occur.” So it seems that the main claim that neuroscientists cite to persuade us that they have some understanding of a neural basis for memory (the claim that the hippocampus is “crucial” for memory) is really a factoid that is not actually well established. 

Postscript: The case of patient HM has been cited innumerable times by those eager to suggest that memories are brain based. Such persons usually tell us that patient HM was someone unable to form any new memories.  But a 14-year follow-up study of patient HM (whose memory problems started in 1953) actually tells us that HM was able to form some new memories. The study says this on page 217:

In February 1968, when shown the head on a Kennedy half-dollar, he said, correctly, that the person portrayed on the coin was President Kennedy. When asked whether President Kennedy was dead or alive, he answered, without hesitation, that Kennedy had been assassinated...In a similar way, he recalled various other public events, such as the death of Pope John (soon after the event), and recognized the name of one of the astronauts, but his performance in these respects was quite variable.

The study also says that patient HM was able to learn a maze (although learning it only very slowly),  and was able eventually to walk the maze three times in a row without error. 

Monday, September 17, 2018

When Scientific Theorists Use “Prestige by Association” Ploys

There is a persuasion technique that we can call “prestige by association.” The technique is used by someone who tries to give himself extra prestige by associating himself with someone or something more respected, famous, rich, successful or admired. Below are some examples:

  1. A person who runs a not-very-reputable investment company will be careful to collect any pictures he can get of himself with well-known or respected people, and to display such pictures as framed photos visible to anyone who comes to his office.
  2. A president or political candidate who dodged military service will favor “photo ops” in which he is seen side-by-side with military heroes.
  3. A person who had some remote connection with a respected institution will make much mention of this thin connection. For example, a person who merely graduated from a community college, but who later wrote a short article in some magazine published by Harvard may frequently refer to himself as a “Harvard-published author.”
  4. An author publishing some nonsensical claims in a book may tell us that the book now resides in the Library of Congress, a claim that may impress people who do not realize that anyone can submit a book to the Library of Congress, as long as the book is at least 50 pages long.
  5. If your friend knows someone who knows movie star Jennifer Lawrence, your friend may try to trumpet this connection, almost making you think that he is one of Jennifer Lawrence's inner circle – even if Jennifer would never be able to recognize him.

The same “prestige by association” tricks are often used by scientific theorists trying to build up the reputation of some doubtful scientific theory. The strategy is to make your dubious scientific theory sound more credible by trying to establish some association or mental link with some other scientific theory that has more prestige. Below are some examples of how this “prestige by association” trick was used by various theorists:

  1. When some theorists advanced the extremely dubious idea that human behavior is strongly influenced or largely controlled by genes, they christened this implausible theory “behavioral genetics,” thereby trying to borrow some of the prestige of the well-established science known as genetics.
  2. When theorist Gerald Edelman advanced a complex speculative theory of the brain, he labeled it “Neural Darwinism,” trying to get some prestige by association. But the theory bears little resemblance to anything taught by Darwin.
  3. When theoretical physicist Lee Smolin advanced some weird theory that new universes are being formed when black holes collapse, he called this theory “cosmological natural selection,” trying to get some prestige by association from the biological theory of natural selection (even though there is scarcely any resemblance between his non-biological theory and doctrines about natural selection in biology).
  4. Theorists trying to bolster the prestige of the Darwinian theory of the origin of species by natural selection will often discuss that theory while discussing very prestigious theories such as special relativity and the theory of electromagnetism that make precise numerical predictions that have been exactly verified. The aim is to leave the reader with the impression that Darwinism is in the same class with such exact mathematical theories. No mention will be made of the fact that Darwinism, unlike such theories, does not make exact numerical predictions that have been verified.
  5. The theory of cosmic inflation is a separate theory from the Big Bang theory. The Big Bang theory maintains that the universe arose from an incredibly dense state about 13.8 billion years ago, possibly from a state of infinite density. The cosmic inflation theory is a theory that the universe underwent a super-short phase of exponential expansion during a tiny fraction of its first second. The evidence for the Big Bang theory is pretty good, but there is no good evidence for the cosmic inflation theory. Repeatedly the proponents of the cosmic inflation theory have tried to play “prestige by association” tricks by trying to get people to conflate the fairly well-established Big Bang theory with the purely speculative cosmic inflation theory. They do this by describing the cosmic inflation theory as “the modern version of the Big Bang theory” or “the current version of the Big Bang theory.” Such claims are inaccurate, as the Big Bang theory and the cosmic inflation theory are two separate theories, and evidence establishing the first is not evidence establishing the second. Similarly, proponents of the cosmic inflation theory may refer to it as “the inflationary Big Bang theory,” trying to give their speculative cosmic inflation theory some of the credibility of the Big Bang theory.

Recently the press office of Stanford University has given us another example of scientific theorists playing “prestige by association” games. We have an article in which the author tries to give the completely groundless and entirely speculative “string theory landscape” theory some “prestige by association” by linking it to the Big Bang theory. The article states, “The latest draft of the scientific story of genesis is called the String Theory Landscape,” as if the “string theory landscape” theory was some version of the Big Bang theory. This is entirely false. The “string theory landscape” theory is not a theory of the origin of the universe, and is not a version of the Big Bang theory.

The article also tries to give a much-needed prestige boost to string theory by trying to link it or associate it with the cosmic inflation theory. There are two reasons why this attempt is absurd. The first is that it's a case of trying to bolster the prestige of one empirically groundless theory by associating it with another empirically groundless theory. For just as there is no evidence for string theory, there is no evidence for the cosmic inflation theory (not to be confused with the more general Big Bang theory). So if you're a string theorist trying to bolster your prestige by associating your theory with cosmic inflation theory, it's kind of like some Bigfoot theorist trying to bolster his prestige by joining forces with a Loch Ness monster theorist.

Another reason the attempt to bolster the prestige of string theory by associating it with cosmic inflation theory is laughable is that the two theories are unrelated. Cosmic inflation theorists have churned out nearly 1000 different papers giving versions of cosmic inflation theory, and virtually none of these papers ever relied on the assumptions of string theory. String theorists have churned out more than 1000 different papers giving versions of string theory, and virtually none of these relied on the assumptions of cosmic inflation theory. A New Scientist story talks about a study that “suggests it may be difficult to reconcile string theory with the widely accepted theory of inflation.” It quotes a Princeton scientist saying, “I think the fact that it is difficult to combine inflation and string theory is very interesting.”

The Stanford University article gives us this bad reasoning:

This diversity, they say, is key to explaining certain baffling features of our universe, like the fact that several parameters in physics and cosmology appear to be curiously fine-tuned for life forms like us to exist. Perhaps the most glaring example is the cosmological constant, which relates to a universal repulsive force that is pushing space-time apart. Physicists have struggled to explain why the tiny value of this constant just happens to lie within the narrow band that allows stars and planets to form and biological life to evolve. But if there are innumerable universes, each with differing laws of physics, then it should not be surprising that we inhabit one where the cosmological constant is small – if things were any different, we could not exist to marvel at the coincidence.

But actually, it should be incredibly surprising that we inhabit a universe where some vastly improbable set of coincidences occurred, because you don't change the likelihood of such coincidences occurring in any one of those universes by imagining other universes. The likelihood of something very improbable happening in any one random trial does not change by increasing the number of trials (for example, your probability of winning a million dollars at a Las Vegas casino is not increased the slightest if there are a trillion universes filled with casinos). So imagining the “innumerable universes” other than ours is pointless. What's happened is that our string theorists have made an elementary error in logic, confusing the likelihood of “some universe” being fine-tuned for life with the likelihood of “our universe” being fine-tuned for life.
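The distinction drawn above, between the chance that some universe is fine-tuned and the chance that our particular universe is, can be illustrated with a short Monte Carlo sketch (the probability and the ensemble size here are purely illustrative numbers, not physical estimates):

```python
import random

random.seed(0)  # reproducible illustration

P_LIFE = 0.002      # illustrative chance that one random "universe" permits life
N_UNIVERSES = 1000  # illustrative number of universes in the imagined ensemble
RUNS = 5000         # Monte Carlo repetitions

ours_count = 0  # runs where the designated "our" universe permits life
some_count = 0  # runs where at least one universe in the ensemble permits life

for _ in range(RUNS):
    outcomes = [random.random() < P_LIFE for _ in range(N_UNIVERSES)]
    if outcomes[0]:          # universe #0 plays the role of "our" universe
        ours_count += 1
    if any(outcomes):
        some_count += 1

# The chance for "our" designated universe stays near P_LIFE no matter how
# large N_UNIVERSES is, while the chance that "some" universe permits life
# climbs toward 1 as the ensemble grows.
print("P(our universe permits life)  ~", ours_count / RUNS)
print("P(some universe permits life) ~", some_count / RUNS)
```

Raising N_UNIVERSES raises only the second probability; the first remains pinned near P_LIFE, which is exactly the distinction the paragraph above draws between "some universe" and "our universe."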

Such elementary errors in logic are not rare among PhDs, such as when biologists suggest that natural selection is the cause of complex biological innovations, even though there is no natural selection effect related to a biological innovation (no “survival of the fittest” effect) until after that biological innovation appears. This is the elementary reasoning error of maintaining that a consequence of an effect is the cause of that effect, which is kind of like reasoning that the thundercloud appeared because the city street got soaked by the thunderstorm. We must always remember that a man who has a PhD is just as prone to commit elementary reasoning errors as someone who does not have a PhD. Our different tribes of scientists (such as the string theory tribe) are belief communities just as much as the Amish and Sikhs and Scientologists are belief communities, and in many a belief community bad reasoning can be so normalized that it may be hard for a member of the community to see how large a logic error he may have committed.

String theory maintains there are 10 or more dimensions of space. This idea has flunked a recent observational test. A recent headline reports, “University of Chicago astronomers found no evidence for extra spatial dimensions to the universe based on the gravitational wave data.”


LIGO gravitational wave detector

Postscript: Stanford University has now completed its goofy five-part exposition of the Fake Physics of the "string theory landscape."  Its more laughable parts include:

(1) Scientist Andrei Linde babbling about infinite copies of you in the multiverse. 
(2) A scientist named Dimopoulos making the silly claim that the "richness" of string theory "tells you that there are many universes."
(3) The claim that the "string theory landscape" with 10 to the five hundredth power universes "elegantly explains why the universe appears to be so eerily fine-tuned for life."  The theory doesn't do that, and if you look up the definition of "elegant" you will see that in a scientific context it means a solution that is simple; but nothing could be less simple than imagining 10 to the five hundredth power universes. 

There's a lesson you should derive from the third example (which repeats an untruth discussed here). It is that when scientists say something about one of their theories, the exact opposite may be true. So a theory described as "brilliant" by a scientist may be very stupid; and a theory described as "proven" by a scientist may be groundless and inconsistent with observations. 

Real physics involves equations, and there is no way to ever write an equation that yields another universe or a multiverse.  You never get another universe doing real physics calculations. Multiverse fantasists may try to impress us by writing papers with equations, but none of their equations ever yields another universe after the equal sign.