Saturday, January 28, 2017

Neural Correlation Studies Often Lie With Colors

A very interesting question is whether there are particular parts of the brain that are strongly associated with particular facets of human mental functionality. Are there, for example, regions of the brain that work much harder when you learn something, or remember something, or feel something? The idea that there are such areas is a hypothesis called localization.

Some claim that this idea of localization is supported by brain imaging studies, and that quite a few such studies tell us about the neural correlates of conscious experiences. In a typical study of this type, people will have their brains scanned by some instrument such as an MRI machine. Then scientists will look for particular parts of the brain that show more activity (such as increased blood flow) while some particular type of mental activity is occurring.

But there are reasons for thinking that such studies tell us very little. For one thing, brain imaging studies on the neural correlates of consciousness typically involve only small numbers of participants (often fewer than 25). Making generalizations from such small samples is dubious.

Also, claims that particular regions of the brain show larger activity during certain mental activities are typically not well-replicated in followup studies. A book by a cognitive scientist states this (pages 174-175):

The empirical literature on brain correlates of emotion is wildly inconsistent, with every part of the brain showing some activity correlated with some aspect of emotional behavior. Those experiments that do report a few limited areas are usually in conflict with each other....There is little consensus about what is the actual role of a particular region. It is likely that the entire brain operates in a coordinated fashion, complexly interconnected, so that much of the research on individual components is misleading and inconclusive.

There have been statistical critiques of brain imaging studies. One critique found a common statistical error that “inflates correlations.” The paper stated, “The underlying problems described here appear to be common in fMRI research of many kinds—not just in studies of emotion, personality, and social cognition.”

Another critique of neuroimaging found a “double dipping” statistical error that was very common. New Scientist reported a software problem, saying “Thousands of fMRI brain studies in doubt due to software flaws.”

Considering the question of “How Much of the Neuroimaging Literature Should We Discard?” a PhD and lab director states, “Personally I’d say I don’t really believe about 95% of what gets published...I think claims of 'selective' activation are almost without exception completely baseless.”

Another huge reason for being skeptical about brain imaging studies is that such studies very often use very misleading visual presentations, creating false impressions. What very frequently goes on is something like this. A series of brain scans will show a very small difference between brain activity in different parts of the brain – typically only 1%. A Stanford scientific paper on fMRI uses this 1% figure, making an exception only for small parts of the brain (“visual and auditory cortices”) associated with seeing and hearing. The paper makes this generalization: “While cognitive effects give signal changes on the order of 1% (and larger in the visual and auditory cortices), signal variations of over 10% may arise from motion and other artifacts in the data.” In other words, people moving their heads (and other misleading signals) may create the impression that there is a higher variation in the non-sensory parts of the brain, but the real variation in the signal changes is only something like 1%. A similar generalization is made in this scientific discussion, where we are told the following:

For example, most cognitive experiments should show maximal contrasts of about 1% (except in visual cortex),  hence, if estimates for a single subject are much larger than that, then the estimates are likely to be bad. Poor estimates can arise from head motion, or sporadic breathing patterns by the subject, or sometimes from a poor design matrix that is ill-conditioned.
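To make concrete what a "1% signal change" amounts to, here is a minimal arithmetic sketch in Python; the baseline and task numbers are invented purely for illustration and are not taken from any of the papers cited above.

```python
# Percent signal change: the task-vs-baseline difference divided by the baseline.
# All numbers here are made up for illustration.
baseline_signal = 1000.0   # arbitrary scanner units in a region during rest
task_signal = 1010.0       # signal in the same region during the mental task

percent_change = 100.0 * (task_signal - baseline_signal) / baseline_signal
print(f"cognitive effect: {percent_change:.1f}% signal change")        # -> 1.0%

# For comparison, an artifact of the size the Stanford paper warns about:
artifact_signal = 1100.0   # e.g., a spike caused by head motion
artifact_change = 100.0 * (artifact_signal - baseline_signal) / baseline_signal
print(f"motion-sized artifact: {artifact_change:.1f}% signal change")  # -> 10.0%
```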

But again and again studies on neural correlations will produce a visual which grossly exaggerates this very small difference of 1%, making it look like a great big difference. This is lying with colors.
 
To explain why this is highly deceptive, let's consider some examples of displaying visual information: an honest presentation and a misleading presentation. Imagine you have a home for sale, and you are preparing a web page or brochure that describes your house's selling points. One of the key factors in selling houses is the quality of the local school district. Anyone with children will want to buy a house in a neighborhood with better schools.

Imagine you've got the average school scores for your home's school district and the surrounding districts, and the data looks like this:

District             Average Reading Score
District 1           80
District 2           80
District 3           80
District 4 (yours)   81
District 5           80
District 6           80
District 7           80
District 8           80

Now imagine you wanted to present a school district map highlighting the higher score of your home's school district. To present such information honestly, you would have to follow this rule: the differences in color shade should be proportional to the differences in the data.
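As a minimal sketch of this rule (using matplotlib, and assuming the hypothetical districts and scores from the table above), the script below shades each district in proportion to its actual score on the full 0-100 scale, rather than singling one district out with a saturated highlight color:

```python
# Shade each district in proportion to its score, so that an 80-vs-81
# difference produces only a barely perceptible difference in shade.
import matplotlib.pyplot as plt

scores = {f"District {i}": 80 for i in range(1, 9)}
scores["District 4"] = 81   # your home's district

fig, ax = plt.subplots(figsize=(8, 2))
for i, (name, score) in enumerate(scores.items()):
    shade = 1.0 - score / 100.0                      # darker = higher score
    ax.bar(i, 1, color=(shade, shade, shade), edgecolor="black")
    ax.text(i, -0.2, name.split()[-1], ha="center")  # label with district number
ax.set_title("Shades proportional to the actual scores (80 vs. 81)")
ax.axis("off")
plt.show()
```

With the shades anchored to the full scale, District 4 comes out only one percent darker than its neighbors, which is exactly what the data warrants.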

So if you were to present an honest school district map with color-coded school scores, it would have to look something like this:


This would be an honest map. It honestly indicates only a very slight difference between the school scores in your home's school district and the nearby districts. But you might be tempted to present the data differently. You might present it like the map below.



This map would be better from the standpoint of selling your house, since it would leave someone with the impression that your house's school district has much higher scores than the nearby districts. But given the actual data showing only a 1% difference, it would be utterly misleading to present a map like this. The map would incorrectly give someone the idea that your home's school district had scores maybe 30% better than surrounding districts. Presenting a map like this would be an example of lying with colors.

It is exactly such lying with colors that goes on again and again in brain imaging studies on the neural correlates of consciousness. Again and again, such studies will show visuals that depict differences of only 1% or less between blood flow in different regions of the brain. But such regions will be shown as red regions in brain images, with all of the other areas having a grayish “black and white” color. When you see such an image, you inevitably get the impression that the highlighted part of the brain has much higher activity than other regions. But such a conclusion is not what the data is showing.

So, for example, a study finding 1% higher brain activity in a region near the corpus callosum (under some activity that we may call Activity X) might release a very misleading image looking like the image below, in which the area of 1% greater activity is colored in red.



But such an image is lying with colors. If there is only a 1% greater activity in this region, an honest diagram would look like the one below.




With this diagram, the same region shown in red in the first diagram is shown as only 1% darker. You can't actually tell by looking at the diagram which region has the 1% greater activity when Activity X occurs. But that's no problem. The diagram above leaves the reader with the correct story: none of the brain regions differ in activity by more than 1% when Activity X occurs. Contrast this with the first image, which creates the very misleading idea that one part of the brain is much more active than the others when Activity X occurs.

You might complain that with such a visual, you cannot tell which regions have the slightly greater activity. But there are various ways to highlight particular regions of a brain visual, such as circling, pointing arrows, outlining, and so forth. For example, the following shows a region of high activity without misleading the viewer by creating the impression of much higher activity:


The misleading diagrams of brain imaging studies seem all the more appalling when you consider that the images in such studies are typically the only thing that laymen use to form an opinion about localization in the brain. The text of brain imaging studies is typically written in thick jargon that only a neuroscientist can understand. Faced with such hard-to-understand jargon and unclear writing, laymen reading these studies typically form their opinions based on the visuals. When such visuals deceive us by lying with colors (as they so often do), the whole study ends up creating misleading ideas.

At 4:47 in this online course, we are told that the blood-flow changes in the brain are quite small when observed with MRI, between 0.1% and 5% (based on the two sources quoted above, the higher values seem to be found only in the auditory cortex or the visual cortex). The speaker in this course tells us that if we were to just watch a movie of the fMRI scans, we wouldn't be able to notice the changes between different brain regions. But the visuals in neural correlation studies misrepresent this minimal change, making it look like a great big change. These cases of lying with colors give readers a very misleading impression that brain activity involving thinking and memory recall is very localized, and tightly correlated with mental activity.

How closely your brain is correlated with your mind is important from a philosophical standpoint. If different parts of our brains surge dramatically with blood when we think or recall memories, that is a point favoring the idea that your mind is just a product of your brain. But if different parts of your brain look pretty much the same when you think or recall memories, that's a point in favor of the idea that your mind and memories may involve something much more than your brain, perhaps some soul or some higher cosmic consciousness infrastructure. The actual data from brain imaging is the second of these cases: different parts of the brain show about the same activity when thinking or memory recall occurs. But by doing neural correlation studies that visually make tiny changes look like huge changes, our neuroscientists almost seem to be trying to fool us into thinking that a very different thing is going on, that particular parts of our brain light up dramatically when our higher mental functions are engaged. To such neuroscientists we should say: visually represent your own data honestly, and stop lying with colors.

Tuesday, January 24, 2017

With Overwhelming Likelihood, a Random Universe Would Be Lifeless and Light-Less

Shortly after publishing an essay on the topic of cosmic fine-tuning that had some of the worst reasoning I have ever read on the topic (which I discuss here), the Nautilus web site is out with another essay on this weighty topic that I have often discussed on this blog. The new essay by scientist Fred Adams is entitled “The Not-So-Fine Tuning of the Universe.” Adams pulls some misleading tricks to try to make you believe his very wrong conclusion that “our universe does not seem to be particularly fine-tuned.”

Here are the main fallacies Adams is guilty of:

  • The “ant near the needle hole” fallacy of visually representing something incredibly unlikely to make it look as if it is likely
  • The fallacy of considering only whether any type of star could allow life, while ignoring the equally important likelihood of getting stars as suitable for life's evolution as our own sun
  • The fallacy of weighing only less sensitive requirements for the existence of stars, while ignoring a vastly more sensitive requirement that makes the existence of stars incredibly unlikely in random universes
  • The fallacy of ignoring the universe's most dramatic cases of cosmic fine-tuning, and focusing only on less dramatic cases

Adams' scientific specialty is stars. He gives us a graph that plots possible strengths of the electromagnetic force and the gravitational force in hypothetical possible universes. A shaded portion taking up a fairly large part of the graph is described as an area “consistent with life.” The graph makes clear that stars require an unlikely balance between the gravitational force and the electromagnetic force, but from looking at the graph you might think that such a thing wasn't all that unlikely. 

Adams here is guilty of a fallacy that we might call the fallacy of the “ant near the needle hole.” Consider an ant that somehow wanders into your sewing kit. If it were smart enough to talk, the ant might look at the eye of a needle in your sewing kit, and say, “Wow, that's a big needle hole!” Such an observation will only be made from a perspective a few millimeters away from the needle's eye.


Similarly, Adams has given us a graph in which his “camera” is placed a few millimeters from the needle hole that must be threaded for stars to exist. He has graphed a parameter space in which two fundamental constants vary by only a factor of a few. But physicists routinely deal with differences of 40 orders of magnitude (10,000,000,000,000,000,000,000,000,000,000,000,000,000), which, for example, is roughly the difference between the strength of the strong nuclear force and the gravitational force. So if we are imagining a parameter space of alternate universes, we must imagine a parameter space vastly larger than the relatively microscopic parameter space Adams has graphed. Rather than just visualizing the small changes in the fundamental constants Adams graphs, we should imagine that any of them could vary by a trillion times or a quadrillion times or a quintillion times.
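To put a rough number on this, here is a toy calculation under illustrative assumptions of my own: suppose the star-permitting band spans a factor of about ten around the actual value of each of the two constants (a generous reading of Adams' shaded region), while each constant could plausibly have ranged over 40 orders of magnitude. Then the star-permitting fraction of the two-dimensional parameter space is of order

$$\left(\frac{10}{10^{40}}\right)^{2} = 10^{-78}.$$

Even if the permitted band were a million times wider than assumed here, the fraction would remain fantastically small.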

Given such a parameter space, a realistic visual representation of the chance of a random universe having parameters allowing stars to exist would be one like the visual below, which shows a tiny needle hole somewhere in the Grand Canyon. It is therefore correct to say that with overwhelming likelihood, a random universe would not have stars. Since stars are necessary for both light and life to exist, it is correct to say that with overwhelming likelihood, a random universe would be lifeless and light-less.


 

The misleading nature of Adams' graph is discussed on page 40 of this excellent scientific paper by physicist Luke Barnes, who concludes (contrary to Adams) that “the existence of stable stars is indeed a fine-tuned property of our universe.”

The second fallacy Adams commits is the fallacy of merely considering the likelihood of any stars at all in relation to cosmic fine-tuning, when he should also be considering the likelihood of getting stars as suitable for life as our own sun.

Among the stars in our universe are short-lived blue stars, long-lived yellow stars like our sun, and much less bright “red dwarf” stars that are very long-lived. It could be that life exists on planets around red dwarf stars, but it is almost universally recognized that life is much less likely to arise on planets revolving around such stars. There are two main reasons, discussed fully here. One is that since red dwarf stars are much dimmer, a planet would have to be fairly close to a red dwarf star for life to exist on the planet; and at such closer distances the planet would be subjected to very troublesome tidal effects that might make it uninhabitable. The second reason is that red dwarf stars are more unstable than stars like our sun; as a wikipedia.org article says, “Red dwarfs are far more variable and violent than their more stable, larger cousins,” such as our sun. Such variability would make a planet near a red dwarf star much more likely to get zapped by crippling radiation.

So it's kind of like this: yellow stars like our sun are good for the evolution of life, but red dwarf stars are not so good (what we may call borderline possibilities). But when considering how much cosmic fine-tuning our universe has, we should consider the odds of getting the best thing we have, not just the odds of getting some “just barely works” borderline possibility. In fact, the requirements for sun-like stars are much more stringent than those for red dwarf stars. The physicist Paul Davies says this on page 73 of The Accidental Universe:

If gravity were very slightly weaker, or electromagnetism very slightly stronger (or the electron slightly less massive relative to the proton), all stars would be red dwarfs. A correspondingly tiny change the other way, and they would all be blue giants.

So we can put it this way: it is incredibly unlikely that a random universe would have any stars, and super-incredibly unlikely that a random universe would have sun-like stars. Clearly we should pay attention to both of these probabilities when judging how fine-tuned the universe is.

I can give an analogy. Suppose you walk deeply into the wooded wilderness of a national park with your friend, and come across a log cabin. You may say, “That must have been fine-tuned” or “That must have been designed.” Now your friend may say, “Not so, because trees might have fallen in such a way to provide you with some type of shelter from the rain.” This is fallacious, because the relevant thing to consider is the most fine-tuned thing you see, not some other less suitable thing that luck might have given you. And similarly, when considering fine-tuning in regard to stars, we should be noting that the requirements of the most suitable types of stars (stars like our sun) are much, much more stringent than the requirements of “some type of stars.” Adams ignores these more stringent requirements.

The third fallacy Adams commits is the fallacy of considering only some of the less stringent requirements for stars, while ignoring the most stringent requirement. The most stringent requirement is that the proton charge exactly balance the electron charge. This requirement has been noted by the astronomer Greenstein, who pointed out that no stars could exist if the proton charge did not exactly match the electron charge.

If there was a very small difference between the electron charge and the proton charge, you would either have (1) an electrical imbalance between particles which would completely overwhelm gravity, making it impossible for stars to hold together, or (2) an electrical imbalance between particles that would completely preclude the possibility of the thermonuclear reactions we observe in stars.

In our universe each proton has a mass 1836 times larger than each electron, but the charge of the proton exactly matches the charge of the electron to at least eighteen decimal places, as measured here (the only difference being that the proton has a positive charge and the electron has a negative charge). Stars could not possibly exist if this precise fine-tuning did not exist. Adams has simply ignored this ultra-stringent requirement, focusing on less stringent requirements. Were he to consider it, he might realize that stars are trillions of times less likely to exist in a random universe than he imagines.

I may note that this is an entirely different requirement from the one previously considered. So for a random universe to have stars, it must not only “thread the needle” involving the balance of the gravitational and electromagnetic forces (the balance that Adams has considered), but also “thread the microscopic needle” of having the proton charge exactly match the electron charge. So it is as if the arrow of the blind archer must hit not just one very distant bulls-eye for stars to exist, but two very distant bulls-eyes.
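A simple way to see why meeting both requirements is far harder than meeting either one: if, as argued above, the two requirements are independent, then the probabilities multiply,

$$P(\text{stars}) \approx P(\text{gravity--electromagnetism balance}) \times P(\text{charge match}),$$

so if each factor is already tiny, their product is tinier still.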

We are then doubly justified in saying: with overwhelming likelihood, a random universe would be both lifeless and light-less.

The fourth fallacy that Adams commits is the fallacy of ignoring the universe's most dramatic cases of fine-tuning, and focusing only on less dramatic cases. The three most dramatic cases of cosmic fine-tuning all seemingly involve fine-tuning more precise than 1 part in 1,000,000,000,000,000,000,000,000. They are:

  • the exact match of the absolute magnitude of the proton charge and the electron charge, to more than 18 decimal places
  • the fine-tuning of the vacuum energy density, discussed here, by which we have a cosmological constant more than 10^50 times smaller than the amount predicted by quantum field theory (such as we would have if opposing parameters of nature accidentally canceled out each other to more than fifty decimal places)
  • the fine-tuning of the universe's initial expansion rate (in which the universe's initial critical density matched the actual density to something like 1 part in 10^50).

Which of these does Adams discuss in his Nautilus essay? None of them. Of course, he does not want to discuss such things as they would obliterate his claim that “our universe does not seem to be particularly fine-tuned.” 

Adams is very well aware of the cosmological constant problem (also known as the vacuum density problem and the “vacuum catastrophe” problem), because he discusses it at length in a scientific paper he co-authored. There he gives us some reasoning that is as off-the-mark as his insinuations about the likelihood of accidental universes having stars.

The issue in regard to the cosmological constant is that quantum field theory predicts the cosmological constant should be 10^60 or 10^120 times larger than the value we observe. This prediction (which you can find discussion of by doing a Google search for “worst prediction in the history of physics”) is that the vacuum of space should be super-dense – much denser than steel. But the actual vacuum of space has very little energy or density – it's almost empty.

We know that life could never exist if the vacuum was anything like that predicted by quantum field theory. Obviously you can't have life if the space between a star and a planet is thicker than steel – light cannot even travel through that. But an interesting question is: by how much could the cosmological constant differ from its current value and still allow life to exist?

Adams concludes that the cosmological constant could be up to 10^30 times larger and still allow life to exist. This is almost certainly a far-too-generous estimate, and other analyses have found much greater sensitivity. He uses this estimate to support a conclusion in the paper that “the universe is not overly fine-tuned.” But he should be reaching exactly the opposite conclusion from these facts. If the cosmological constant is supposed to be 10^60 or 10^120 times larger than the value we observe because of quantum considerations, and a value 10^30 times larger than the observed value would have prevented life, then how much luck did we need in this regard to have a habitable universe? The answer is: luck with a probability of about 1 part in 10^30 or 1 part in 10^90. Adams should have reached the conclusion that the universe is astonishingly fine-tuned, in a way that less than 1 universe in a billion trillion would have by chance.
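Spelling that arithmetic out (a back-of-the-envelope restatement of the numbers above, treating the possible values as spread uniformly up to the predicted scale), the life-permitting fraction of the naturally predicted range is roughly

$$\frac{10^{30}\,\Lambda_{\text{obs}}}{10^{60}\,\Lambda_{\text{obs}}} = 10^{-30} \qquad\text{to}\qquad \frac{10^{30}\,\Lambda_{\text{obs}}}{10^{120}\,\Lambda_{\text{obs}}} = 10^{-90}.$$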

Adams also gives some misinformation about the fine-tuning issue involving nuclear resonances and the triple-alpha process, a process by which stars produce energy. He claims that this fine-tuning issue “goes away,” because there's some particular way in which an alternate physics could allow carbon to exist. He relies on the same fallacious reasoning he uses in this scientific paper. The fine-tuning issue involving the triple-alpha process and resonances is that the physics of the universe must be fine-tuned for both carbon and oxygen to exist in abundant quantities, as they do in our universe. But in his paragraph claiming a way to make this fine-tuning “go away,” he does not discuss oxygen. And on page 26 of his paper, he says, “This set of simulations does not include nuclear reactions that produce oxygen, neon, and heavier elements.” So he cannot truthfully claim to have made this fine-tuning issue “go away.” The difficulty is explaining how a random universe could have abundant amounts of both oxygen and carbon, not just carbon.

This fine-tuning requirement is correctly stated in a 2014 scientific paper which tells us on page 16 that in order to have abundant quantities of oxygen and carbon, you need the quark masses to be within 2 to 3 percent of their current values, and the fine-structure constant to be within 2.5% of its current value. You could therefore say nature has to hit two different “holes in one,” and these aren't the only “holes in one” nature has to hit in order to end up with intelligent life; these two are different from the two other “holes in one” I discussed above in connection with stars.

Another bit of sloppy thinking Adams gives in his Nautilus essay comes when he attempts to explain away a fine-tuning of the strong nuclear force by claiming that if some alternate physics were true, "The longest-lived stars could shine with a power output roughly comparable to the sun for up to 1 billion years, perhaps long enough for biological evolution to take place." This is laughable, since the evolution of intelligent life on Earth is believed to have required roughly 3.5 billion years; and obviously a universe in which stars like ours can burn brightly for 10 billion years is greatly preferable to one in which they can only burn for 1 billion years. Again, I may note that you do not explain away a more favorable case of fine-tuning by imagining some much less favorable situation requiring less fine-tuning.

In his Nautilus essay, Adams misreads what nature is telling us, and his conclusion that “our universe does not seem to be particularly fine-tuned” is very much at odds with both the facts and the statements of numerous other scientists with a variety of philosophical standpoints, who have again and again stated the opposite.

Postscript: I may note that, based purely on Adams' graph plotting the electromagnetic force versus the gravitational force and a life-compatible region, and the fact that the potential parameter space of random universes is more than a billion trillion times larger than the parameter space he has graphed, we should conclude that the chance of stars in a random universe is less than 1 in a billion trillion (less than 1 in 1,000,000,000,000,000,000,000). The requirement of the proton charge matching the electron charge is simply a second reason for drawing the same conclusion.

Friday, January 20, 2017

Astonishing Paranormal Accounts in the Recent CIA Document Dump

This week the CIA released 13 million pages of documents. The documents include some jaw-dropping accounts dealing with psychic phenomena.

You can search for the documents using this URL. Seeing a news report saying that there was an interesting test report regarding the famous psychic Uri Geller, I typed his name into the search box. I got this document describing some 1973 tests done with him. Near its beginning, the document states, “As a result of Geller's success in this experimental period, we consider that he has demonstrated his paranormal perceptual ability in a convincing and unambiguous manner.”

Geller was placed in a double-walled shielded room used for EEG research, one with an outer and inner door. Some drawings were made and one by one taped on the outer wall. Geller's successes were remarkable. In one example, the drawing taped on the wall was of 24 grapes. Geller drew a picture that also had 24 grapes.  Below are the test drawing and Geller's drawing.  You could hardly ask for a more exact match.



In another test on August 8th, the target picture was of a flying sea gull. Geller almost immediately said he saw a flying swan. 

The CIA document dump also includes many documents on the STARGATE project, a long-running government project which for many years did experiments on remote viewing, a psychic ability to gain information about remote locations through paranormal means. An overview document is here. Among the STARGATE documents is a 439-page document entitled "Anomalous Mental Phenomena." This document begins by saying, "The conferees agree to provide $2,000,000 for the STARGATE program as proposed by the Senate." The ultimate proof that this long-running program did indeed get evidence of paranormal abilities is simply the fact that it was well-funded for many years. You don't get funding like that year after year unless you have some impressive results you can brag about in annual budget reviews. On pages 159 to 184 of this document is a paper giving some good evidence for the reality of remote viewing.

Another interesting paper in the CIA documents can be found here. It discusses a Chinese practice called qigong, and claims that 20 million people were practicing it by the 1980's. (The Washington Post describes qigong as "a 5000-year old Eastern healing art.") The paper discusses some “qigong masters.” A man named Zhang Baosheng is described as having the ability to read sealed envelopes, remove small insects from sealed bottles, burn cloth with a touch of his fingers, and write a message on a paper sealed in a box. The paper asserts: “Countless tests were performed, all under tightly controlled conditions, and their results published without a single attempted fraud.”

Another paper discusses efforts in China during the 1980's to find children with “extraordinary functions of the human body” (EFHB). We are told “many hundreds of children with EFHB were found throughout the nation.” A test of ESP in Beijing reported that 40 to 63 percent of children around age 10 were found to have EFHB “to some extent.”

There is mention of the same Zhang Baosheng who figured prominently in the previously mentioned paper. The report says he could “perform incredible miracles.” We are told, “Zhang caused objects, such as someone's photo identification card or personal name stamp to move to another room which had not been entered, or caused a torn personal letter to be restored to a single piece.” The paper also claims that Zhang could remove from sealed bottles (in a paranormal manner) things such as insects or small pills.

The paper also says that another “qigong master” named Yan Xing was associated with many paranormal healings, and “performed various transformations of the physical characteristics of samples at a distance of several meters,” also doing the same thing at a distance of 2000 kilometers.

Another 100-page document discusses Chinese paranormal research, while also advancing some interesting philosophical ideas. On page 11 the document says:

There were a number of experiments called “psychokinesis” experiments where the subject with paranormal abilities would turn the hand of watches, bend metal, break matches, pluck off twigs...as well as cause exposure of sealed unexposed film, and “spontaneous combustion” of some flammable materials with the wave of a hand, without touching the materials in the experiment. Successes in these experiments followed one after another.
 
The document notes there was a repressive backlash: “The research in many areas and in many units was forced to stop.”

You have to take the word of the Chinese about such dramatic mind-over-matter paranormal phenomena. But it's a different story in regard to extrasensory perception (ESP), which has been extensively studied in the Western world for more than 100 years, with very convincing results being repeatedly achieved under controlled laboratory conditions (as discussed here and here). You would not know about this from the statements of many Western scientists who are in denial about such a phenomenon, and who have erected within their little cultural tribes a senseless social taboo against acknowledging the existence of ESP, mainly because it conflicts with their incorrect underlying assumptions about the relation of the brain and the Mind. 

One point frequently made against ESP is "we don't understand how it would work." The person who makes such a point is typically throwing a stone from a glass house, as he typically believes in lots of things (such as the Big Bang, the origin of life from chemicals, the 50-year storage of memories in brains, the instantaneous recall of memories in brains, and the very phenomenon of consciousness) which he cannot explain with any description involving plausible details of the underlying causes and mechanics.  For ages people observed earthquakes and infectious diseases without any understanding of their causes (plate tectonics and microorganisms); and it would have been rather silly if people had pretended during such ages that such things didn't exist. 

Postscript: Another interesting document in the CIA document dump is this one. The document discusses Kirlian's photography of a person named A. Krivoroltov, who was supposedly able to heal by the technique of "laying on hands." We are told, "A hundred fold repeated rumor is that somehow the Krasnodar experimentalists have been able to photograph a mysterious luminescence (maybe even a sign of holiness, a halo) that a person is suppose to emit." Kirlian measured the electrical resistance of Krivoroltov's hands, and found it to be 3 to 5 times the normal value.

Monday, January 16, 2017

The Professor's Bad Reasoning About "Bad Odds"

Our universe seems to be astonishingly fine-tuned to allow the existence of life. For example, if the absolute value of the proton charge and the electron charge were not exactly the same, gravitation would not be enough to hold planets together (since the electromagnetic force is roughly a trillion trillion trillion times stronger than the gravitational force, even a relatively tiny difference in the proton charge and the electron charge would cause repulsive effects exceeding the attractive effects of gravitation in large bodies such as planets, preventing their existence, as mentioned by Greenstein here). We know of no specific reason why such an equality between the proton charge and the electron charge should occur, and given that each proton has a mass 1836 times larger than each electron, it seems quite the amazing coincidence that the electron charge and the proton charge match exactly (experiments have shown they match to twenty decimal places, the only difference being that the signs are opposite). 
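A rough back-of-the-envelope sketch of Greenstein's point (my own estimate, treating bulk matter as hydrogen-like): if each atom carried a fractional charge imbalance $\delta$ (in units of the elementary charge $e$), then for two bulk bodies of $N$ atoms each,

$$\frac{F_{\text{electric}}}{F_{\text{gravity}}} \approx \frac{k_e (\delta e)^2 N^2}{G m_p^2 N^2} = \delta^2\,\frac{k_e e^2}{G m_p^2} \approx \delta^2 \times 10^{36},$$

so net electrical repulsion would overwhelm gravitational attraction once $\delta$ exceeded roughly $10^{-18}$ – a mismatch of about one part in a billion billion.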

An even greater coincidence occurs in regard to the vacuum energy density, as discussed here. Straightforward calculations tell us that the quantum contributions to the vacuum should cause empty space to be extremely densely packed with mass-energy, but no such thing occurs. Somehow we have a cosmological constant or vacuum energy density more than 10^60 times smaller than what quantum physics predicts, possibly because of an exact cancelling out of opposing effects. If it were not for such a coincidence, life would be totally impossible (no more possible than the evolution of life inside the sun). As discussed here, there are many similar coincidences involving the strong nuclear force, particle masses, the gravitational constant, and nuclear resonances.

The visual below helps to illustrate the difference between the ratios of uninhabitable, barely habitable, moderately habitable, and abundantly habitable universes, although the actual ratios are almost certainly vastly greater than illustrated in this schematic visual.  I discuss these categories here, and argue that our universe must be in the rarest category (the "abundantly habitable" category shown in green).


[Image: habitable universes]


Many have suggested that this cosmic fine-tuning suggests the likelihood of a cosmic fine-tuner. But Princeton philosophy professor Hans Halvorson disagrees. He came out yesterday with an essay entitled, “The Cosmos’ Fine-Tuning Does Not Imply a Fine-Tuner.”

Below is some of Halvorson's reasoning.

An analogy here might be apt. Suppose that you’re captured by an alien race whose intentions are unclear, and they make you play Russian roulette. Then suppose that you win, and survive the game. If you are convinced by the fine-tuning argument, then you might be tempted to conclude that your captors wanted you to live. But imagine that you discover the revolver had five of six chambers loaded, and you just happened to pull then [sic] trigger on the one empty chamber. The discovery of this second fact doesn’t confirm the benevolence of your captors. It disconfirms it. The most rational conclusion is that your captors were hostile, but you got lucky. Similarly, the fine-tuning argument rests on an interesting discovery of physical cosmology that the odds were strongly stacked against life. But if God exists, then the odds didn’t have to be stacked this way. These bad odds could themselves be taken as evidence against the existence of God.

Halvorson's reasoning in this essay is as careless as his proofreading.

The claim that “the odds didn't have to be stacked this way” is in error, if we are talking about the ratio between possible habitable universes and possible non-habitable universes. Regardless of whether any deity exists, it will inevitably be true that the class of all possible habitable universes is vastly outnumbered by the class of all possible universes. It is in general true that the number of ways in which one can arrange things so that a desirable functional end is achieved is vastly smaller than the number of ways in which you can arrange things so that no particular functional end is achieved.

For example, if I have a garage full of atoms, the number of ways in which I can arrange those atoms so that no functional end is achieved is always going to be at least 1,000,000,000,000,000 times larger than the total number of ways in which the atoms can be arranged so that a working motor vehicle is created. Similarly, the total number of ways in which the physics and constants of a universe can be arranged so that nothing special will ever happen is always going to be vastly greater than the total number of ways in which the physics and constants of a universe can be arranged so that the very many physical requirements of life (such as stable long-burning stars, stable planets, and stable atoms) are met.
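A toy illustration of why this ratio is always so lopsided: if a machine works only when, say, 60 distinct parts are assembled in one particular order, then at most one ordering out of

$$60! \approx 8\times10^{81}$$

is functional, so the non-functional arrangements outnumber the functional ones by a factor of roughly $10^{81}$ even in this very modest example.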

So the odds in which “nothing-special” uninhabitable possible universes vastly outnumber possible habitable universes do indeed “have to be stacked this way,” contrary to what the professor claims. Such an odds ratio is simply a logical necessity, and not at all something that can be “taken as evidence against the existence of God.” The fact that possible uninhabitable universes vastly outnumber possible habitable universes is no more evidence against the existence of God than the fact that the set of all numbers is vastly greater than the set of all numbers with consecutive digits.

Now let's look at the professor's analogy about the gun. Here he gives us a case of false analogy, because the situation he's describing bears no resemblance to any claim that anyone is actually making. The professor imagines a person who is forced to fire a six-shooter loaded with five bullets at himself. Here are the characteristics of a person who is forced to fire such a six-shooter at himself, and then survives:

Characteristic 1: The person faces a danger point, with a strong likelihood of a disastrous result.
Characteristic 2: The person fares well from this danger point, purely as a matter of luck.

Such characteristics bear no resemblance to the assumptions of a person believing that the universe was deliberately fine-tuned for life. Suppose you imagine that a benevolent deity deliberately created the universe so that it would be habitable for life. Under such a scenario, there never is any danger point in which there is a likelihood of a disastrous result. So Characteristic 1 does not hold true. There is also no point at all in which a favorable result occurs, purely as a matter of luck. If you think that the universe was set up deliberately so that it would be habitable, you do not believe this was a matter of luck. So Characteristic 2 also does not hold true.

So with his Russian roulette analogy, Halvorson is imagining some situation that bears no resemblance to the claims made by those who think the universe was deliberately fine-tuned. Halvorson has therefore committed the fallacy of false analogy.

I can imagine using reasoning similar to Halvorson's in other situations. Suppose someone built a nice home for you to live in. You could note that the set of all possible unlivable or dangerous ways to arrange the bricks, nails, boards, and pipes is much greater than the set of all arrangements making a nice livable house; and, reasoning like Halvorson, you could conclude from these “bad odds” either that no one built the house or that someone evil built it. Such reasoning would be as erroneous as Halvorson's.

Thursday, January 12, 2017

Dubious Facets of the SETI Sales Pitch

SETI is the scientific search for extraterrestrial intelligence, primarily by using radio telescopes. When I went to the homepage of the SETI Institute (SETI.org) on January 2, 2017, the first thing I saw was a great big red “Donate Now” button. I have a rule about donating to organizations: I insist that they show no signs of being anything less than completely straightforward and candid. But does the SETI Institute meet such a criterion?

The crucial question you should consider before donating to the SETI Institute is: what are the chances that success will be achieved, and that some radio signals will be discovered from extraterrestrial civilizations? That depends on what the chances are of intelligent life existing elsewhere in our galaxy. The SETI Institute has a FAQ page, and one of the “frequently asked questions” is “Why do we think that life is out there?” This is the entire answer given in the FAQ.

Over the last half-century, scientists have developed a theory of cosmic evolution that predicts that life is a natural phenomenon likely to develop on planets with suitable environmental conditions. Scientific evidence shows that life arose on Earth relatively quickly (only 100 million years after life was even possible), suggesting that life will occur on any planets that have the requisite characteristics, such as liquid oceans (either on the surface or underground). With the recent discovery that the majority of stars have planets – the number of potential habitats for life has been greatly expanded.
In addition, exploration of our own solar system and analysis of the composition of other systems suggest that the chemical building blocks of life – such as amino acids – are naturally produced and very widespread. There are several hundred billion other stars in our Galaxy, and more than 100 billion other galaxies in the part of the universe we can see. It would be extraordinary if we were the only thinking beings in all these vast realms.

This answer is questionable. Let's start with the first sentence: “Over the last half-century, scientists have developed a theory of cosmic evolution that predicts that life is a natural phenomenon likely to develop on planets with suitable environmental conditions.” This strongly implies that during the past 50 years there has been some revolution in our understanding of the likelihood of life developing on a random planet the right distance from a star. But no such thing has occurred. Thinking on this matter is pretty much as it was 50 years ago.

Far from having any theory that predicts that life is likely to arise whenever there are suitable environmental conditions, we still have a set of facts that seem to suggest the opposite. We know that even the most primitive life would require self-replicating molecules, a genetic code that acts like a complex system of symbolic representations, and proteins that seem fantastically improbable to have arisen by chance. Then there's the difficulty of accounting for the origin of the very complex machinery in cells. The facts we have discovered are still quite consistent with the idea that the origin of life would be unlikely to occur on one planet in a trillion, because of the unlikelihood of these things all occurring because of lucky chemical accidents.

The second sentence in the FAQ answer is: “Scientific evidence shows that life arose on Earth relatively quickly (only 100 million years after life was even possible), suggesting that life will occur on any planets that have the requisite characteristics, such as liquid oceans (either on the surface or underground).” SETI enthusiasts have been making this claim for decades, but it is very dubious indeed, relying on two assumptions: (1) that the earth's oceans appeared about a billion years after the earth formed; (2) that life also originated about a billion years after the earth formed.

Our planet is 4.6 billion years old, and claims are made that there are geological signs of life dating back to 3.5 billion years. But such claims are doubtful, as they rely on what are called stromatolites, unusual-looking geological features which some claim were formed by bacteria. We see no cells or biological structures in the oldest stromatolites. The claim that very old stromatolites (older than 3 billion years) are signs of ancient life relies on a rather complicated and debatable line of reasoning. It's quite possible that they are not signs of early life, and that there are alternate geological explanations. This scientific paper says the evidence for life older than 2.5 billion years is “meager and difficult to read.”

Moreover, as discussed here, many scientists think that the earth's oceans are almost as old as the earth itself, having been brought here by comet bombardments. If that assumption is true, there may have been as much as a billion years between the time when life first had a chance to arise on our planet, and the time that it first did arise. If the shaky claims about the oldest stromatolites are in error, there may have been as much as 1.5 billion years between the time when life first had a chance to arise on our planet, and the time that it first did arise. So the claim long made by SETI enthusiasts that life arose here on our planet “almost at the first opportunity” is quite doubtful.

Even if it were true that life on Earth arose 100 million years after it first had the opportunity to arise, this would not be a strong reason for thinking that life in the universe is common. Consider this case. You open your new pastry shop one day, and within an hour someone comes in trying to order a pizza. The chance of this happening is very low. You would be mistaken if you reasoned that the chance of such a thing happening must have been high, or else it would not have occurred within the first hour of your shop being open. You are not entitled to draw such conclusions based on the timing of a single occurrence.

The article here states the following by MIT professor Joshua Winn (referring to this scientific paper):

"There is a commonly heard argument that life must be common or else it would not have arisen so quickly after the surface of the Earth cooled," Winn said. "This argument seems persuasive on its face, but Spiegel and Turner have shown it doesn't stand up to a rigorous statistical examination — with a sample of only one life-bearing planet, one cannot even get a ballpark estimate of the abundance of life in the universe."
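The sketch below illustrates that statistical point in the spirit of such an analysis (it is not Spiegel and Turner's actual model): even if abiogenesis is intrinsically rare, observers only find themselves on planets where it happened early enough to leave time for intelligence to evolve, so an "early" origin is weak evidence that life is common. The exponential waiting-time model and the 3.5-billion-year cutoff are simplifying assumptions of mine.

```python
# For an exponential waiting time with the given rate (expected origins per
# billion years), compute P(life arises within 0.1 Gyr | it arises within
# 3.5 Gyr), i.e. the chance of an "early" origin given that observers exist.
import math

WINDOW_GYR = 3.5   # assumed latest origin time still leaving room for observers
EARLY_GYR = 0.1    # "life arose within ~100 million years"

def p_early_given_observers(rate):
    p_early = 1.0 - math.exp(-rate * EARLY_GYR)
    p_window = 1.0 - math.exp(-rate * WINDOW_GYR)
    return p_early / p_window

for rate in (1e-6, 1e-3, 1.0, 10.0):
    print(f"rate {rate:8.1e} per Gyr -> "
          f"P(early origin | observers) = {p_early_given_observers(rate):.3f}")
```

Even for a rate so low that life would almost never arise on any given planet, the conditional probability of an early origin is still about 3 percent, so a single early origin cannot establish that life is common.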

The next claim in the SETI FAQ answer is: “In addition, exploration of our own solar system and analysis of the composition of other systems suggest that the chemical building blocks of life – such as amino acids – are naturally produced and very widespread.” This is true, but ignores the fact that you can't estimate the probability of something complex arising merely from the availability of building blocks. A big auto parts store may have all the ingredients for a car, but the chance of such ingredients forming into a car when a tornado passes by is presumably very, very low.

The same type of misleading talk is served up by a recent book on SETI by astrobiologist David Grinspoon. On page 339 of his book Earth in Human Hands, he states this:

Much that we have learned in over a half century of space exploration seems to tell us that life and complexity are bound to be anything but rare. The basic ingredients and conditions that facilitated the origin and evolution of life here seem to be widespread throughout the universe.

The first sentence is not at all true, and does not follow from the second sentence. Everything we have learned from space exploration is still completely consistent with the hypothesis that the appearance of life is a virtually miraculous event that we would not expect to occur on more than 1 planet in a trillion. You cannot draw conclusions about the likelihood of great functional complexity arising from the mere availability of ingredients. There may be all the ingredients for a car in a large auto parts store, but that certainly does not allow us to conclude that it is likely that one day such ingredients will randomly assemble into a car.

It is true that there may be some kind of purposeful cosmic teleology that assures life is common in the universe, but our SETI experts seem to never appeal to such a possibility, relying instead on a dubious kind of "the ingredients are there, so it will happen" reasoning.

When asking for donations, SETI experts can gain an advantage if they make it look like searches for extraterrestrial intelligence are a fairly new undertaking. People are more likely to donate to a promising new project than to some longstanding project that has failed so far. So if I tell you there is a promising new cancer drug called thorsmixadine, and that I need some money to fund initial research on it, you will be much more likely to donate to such a project than if I had told you, “They've spent 400 million dollars researching this drug, with no positive results; but I want to spend even more money, so can you please help?”

In testimony before the US Congress in 2014, the leading SETI scientist Seth Shostak made these claims, giving the impression that SETI was some fairly fresh project:

We have only begun to search...The fact that we haven’t found anything means nothing. It’s like looking for megafauna in Africa and giving up after you have only examined one city block.

Such statements were misleading. By the year 2014 SETI had been going on for decades, and scientists had checked many thousands of stars for extraterrestrial radio signals. For example, there was Project Phoenix in the 1990s, consisting of 2600 hours of observations using the world's largest radio telescope. A similar project was the SERENDIP project. And when Shostak gave his testimony in mid-2014, most of the work reported in this scientific paper had been done. The paper describes a negative search for radio signals coming from 9293 stars, consisting of 19,000 hours of observations carried out between May 2009 and December 2015.

An appropriate theme song for SETI would not be the Carpenters' hit We've Only Just Begun but a song with the same tune but different lyrics:

We haven't just begun...to search
So many stars we've checked
But we keep getting nothing at all
We haven't just begun

If our SETI scientists were to be more candid and frank, they would put away their “it's almost a sure thing” talk and their “we've only just begun” talk. They would instead give a very candid pitch for donations like the one below. It would probably raise less money, but at least it would be forthright.

[Image: SETI donation pitch]
 

Sunday, January 8, 2017

Pretzel Logic of the Multiverse Fantasists

Physicists who speculate about the multiverse (a vast collection of universes) are very good at math, but their essays are often shockingly poor at logic. An example is a recent essay by string theorist Tasneem Zehra Husain.

Husain states this early in the essay: “The same process that created our universe can also bring those other possibilities to life, creating an infinity of other universes where everything that can occur, does.” But the origin of our universe in the supremely mysterious Big Bang is a total mystery. No physicist has any understanding whatsoever of a natural or physical process that can create a universe, or that did create ours. There is no evidence for any physical universe beyond our own. As for the notion of a universe-creating process creating not just one other universe but an infinity of them, that's just runaway speculation, no more substantial than speculating about an infinity of unicorn kingdoms.

Husain spends most of her essay talking about how scientists “feel about the multiverse.” I guess that may be a good way to fill up a long essay when you have no evidence to back up your central claim (the idea of a multiverse, that there are countless other universes). Husain offers this reason for believing in the multiverse:

The multiverse explains how the constants in our equations acquire the values they do, without invoking either randomness or conscious design. If there are vast numbers of universes, embodying all possible laws of physics, we measure the values we do because that’s where our universe lies on the landscape. There’s no deeper explanation. That’s it. That’s the answer.

I give in this essay six reasons why the multiverse idea is quite worthless for explaining the fitness of our universe. To explain something means to discuss one or more causal factors that caused it or made it likely. You do not do any such thing if you say (as multiverse theorists do) “There are an infinity of universes, and our universe just got lucky.” It is also 100% superfluous to imagine the other universes in such a case, because you can just as easily say “Our universe just got lucky” when imagining that no more than one universe exists. You do not increase the likelihood of our particular universe getting lucky by imagining other universes (and, more generally, you do not increase the likelihood of success on any one particular random trial by imagining an increased number of random trials). So the fine-tuning of our universe (including its physical constants) does not provide any rationale for believing in a multiverse. The “why was our universe so lucky?” question looms with equal weight, regardless of whether there is or is not a multiverse.
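The point that extra trials do not help any one particular trial can be checked with a toy simulation; the “luckiness” probability below is an arbitrary made-up number used only for illustration.

```python
# Simulate ensembles of random "universes" and check how often the first
# one -- standing in for our particular universe -- turns out lucky.
# Imagining a bigger ensemble does not change that per-universe chance.
import random

P_LUCKY = 0.001   # arbitrary illustrative chance that a random universe is habitable
RUNS = 20_000     # number of simulated ensembles per ensemble size

def our_universe_lucky(num_universes):
    outcomes = [random.random() < P_LUCKY for _ in range(num_universes)]
    return outcomes[0]   # only the fate of "our" universe matters

for n in (1, 10, 100):
    hits = sum(our_universe_lucky(n) for _ in range(RUNS))
    print(f"ensemble of {n:3d} universes: 'our' universe lucky in "
          f"{hits / RUNS:.3%} of runs (expected {P_LUCKY:.3%})")
```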

When used to explain cosmic fine-tuning, the multiverse idea therefore pulls quite the astonishing trick: it brings in infinite baggage, but with zero explanatory value. Imagine if some theorist were trying to explain the features of one rabbit by theorizing that there are an infinite number of rabbits. But suppose that such a theory did actually nothing to explain the features of that one rabbit. That would be kind of the ultimate “epic fail,” rather like some President paying all the money in his country to buy some contraption that didn't even work. Such is the epic fail of the multiverse theorist in this regard. 

Referring to some other thinker, Husain states the following:

The multiverse, he says, “could open up extremely satisfying, gratifying, and mind-opening possibilities.” Of all the pro-multiverse arguments I heard, this is the one that appeals to me the most.

Your reasoning is in very bad shape indeed if your best argument for the existence of something is that it opens up “satisfying” or “gratifying” possibilities. That reeks of some kind of wish-fulfillment fantasizing rather than hard thinking.

Husain also gives us this cringe-inducing howler: “Logically speaking, an infinity of universes is simpler than a single universe would be—there is less to explain.” No, very obviously an infinity of universes is infinitely less simple than a single universe, because it involves infinitely more to explain.

When our physicists give us this type of “black is white, squares are round” type of talk, this type of Orwellian doublespeak, I wonder whether we have ended up in some strange reality in which words may be used in exactly the opposite of their dictionary meaning. Faced with such absurdity, it is helpful to have a few reality checks such as the ones below.

  • There is no evidence for any physical universe beyond our own, nor can we imagine any observations that would ever give us such evidence (anything we might observe would be part of our universe, not some other universe).
  • The idea that there are many universes beyond our own does nothing to explain the life-favorable characteristics of our universe.
  • The "cosmic inflation" theory of the exponential expansion of the universe during part of its first instant is not well supported by evidence, and does not intrinsically require any universe beyond our own.
  • We have no scientific understanding of what caused the beginning of our universe, nor is there any physical understanding of why the universe is so fine-tuned.
  • There is no evidence at all for string theory. String theory has thus far been pretty much a 35-year waste of time, the biggest flop in the history of modern physics. String theory is based on another theory called supersymmetry, which is rapidly dying, because experimental results from the Large Hadron Collider have all but closed the door on it (as discussed here).

From reading an essay such as Husain's, you might get the impression that the multiverse is some hot topic that is dominating the papers of theoretical physicists. But it isn't.

Below is a diagram from a scientific workshop. The line at the bottom represents the fraction of papers that have been written about the multiverse.

As we can see, it seems there are very few scientific papers actually being written about the multiverse.



There is a way for you to make your own graph similar to this one, using the technique below; a rough script sketch automating these steps follows the list.
  1. Go to the arXiv server for scientific papers, where copies of all physics and cosmology papers have been posted for the past twenty years.
  2. Click on the Advanced search link, taking you to this page.
  3. Type in a search topic, and limit the results to a particular year.
  4. Note how many papers appear in the search results, and add that number to a row on a spreadsheet.
  5. Graph the results.
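For readers who would rather script this than click through the advanced-search form, below is a sketch using arXiv's public query API; the submittedDate range filter, the max_results=0 trick, and the opensearch:totalResults field are my assumptions about that API's behavior, and the search terms are only examples.

```python
# Count arXiv papers whose abstracts mention a term, per submission year,
# via the public API at export.arxiv.org.
import re
import urllib.parse
import urllib.request

API = "http://export.arxiv.org/api/query"

def count_papers(term, year):
    query = f'abs:"{term}" AND submittedDate:[{year}01010000 TO {year}12312359]'
    url = API + "?" + urllib.parse.urlencode({"search_query": query,
                                              "max_results": 0})
    with urllib.request.urlopen(url) as resp:
        feed = resp.read().decode("utf-8")
    match = re.search(r"totalResults[^>]*>(\d+)<", feed)  # Atom feed hit count
    return int(match.group(1)) if match else 0

if __name__ == "__main__":
    for term in ["string theory", "inflation", "multiverse", "SUSY"]:
        counts = {year: count_papers(term, year) for year in range(2013, 2017)}
        print(f"{term:15s} {counts}")
```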
Below is a graph I made using this technique. The number of papers on the supersymmetry theory (also called SUSY) should probably be twice as high, because I searched using only SUSY as a search string, without using “supersymmetry” as a search string. 

 

My graph above is consistent with the first graph. What we see is that the number of scientific papers written about a multiverse is only a tiny fraction of the papers written about other speculative topics such as string theory, cosmic inflation theory, and supersymmetry theory.

Here are some numbers from recent years.




                          2013   2014   2015   2016
String theory papers       365    419    377    295
Inflation theory papers    276    465    402    339
Multiverse papers            8     10     16     16
SUSY papers                105    104    114     71

Why are there so few papers written about the multiverse? Is it because physicists don't like to speculate? No, they love to speculate, as shown by the 1000+ speculative papers listed in the table above (string theory, cosmic inflation theory, and SUSY are all extremely speculative theories).

The reason so few papers have been written about the multiverse is pretty much that there's no factual basis on which to write a multiverse paper. There's no “there” there.

Don't let Husain fool you. The multiverse is just a “castle in the sky” that a few fantasist physicists are building, from a few gossamer threads of speculation. 

Postscript: See Peter Woit's post about "Fake Physics," with some relevant comments.