
Our future, our universe, and other weighty topics


Friday, September 30, 2016

How Could the Brain Store Long Sequences When Neurons Have No Linear Structure?

Scientists have never had any observational basis for making dogmatic statements asserting that long-term human memories are all stored in the brain. No scientist has ever done anything like reading a particular memory from particular neurons in a brain, nor has a scientist ever done anything like causing a particular memory in a human brain to be evoked when some part of the brain is electrically stimulated. There have been a few recent claims to have evoked memories in mice by optical stimulation of the brains of mice, but such claims are not well founded (for reasons discussed here). Nonetheless, neurologists and writers on the brain very commonly commit the intellectual sin of dogmatically proclaiming unfounded claims about the human brain, asserting that all human mental activity is due to brain activity. But other people have long suspected that the human mind is something more than just the brain, and humans have long tried to gather evidence to support such a claim.

One way of trying to substantiate the claim that the mind is more than the brain is to investigate anomalous psychic phenomena. If it can be found that the human mind is capable of paranormal experiences or talents that should be impossible for a brain to produce, that is evidence that the mind is something more than just the brain. Investigations into the paranormal have been highly successful in showing things we cannot explain by brain activity. Experimenters such as Professor Joseph Rhine accumulated very convincing evidence for extrasensory perception (ESP), as discussed here. Solid evidence has been gathered for near-death experiences.

Such evidence helps to substantiate the idea that there is something like a human soul that transcends the brain. But there is a very different way of trying to support the idea that the mind is more than a product of brain activity. This way involves looking not into anomalous activity of the human mind, but instead involves an analysis of perfectly normal, everyday activity of the human mind. The approach is as follows:
  1. Make a list of all the capabilities of the human mind, capabilities such as recall of childhood memories, and instant retrieval of a memory.
  2. Examine the physical capabilities of the human brain, and consider whether it is plausible to maintain that each of the capabilities of the human mind could be achieved by something such as the human brain.
  3. Whenever a shortfall is found – whenever we find some capability of the human mind that cannot be plausibly explained by the brain – mention this as evidence that the human mind involves something more than just the human brain.
In three previous posts, I put this strategy to work, and presented three arguments for believing that human memories cannot all be stored in the human brain. One argument was based on the apparent impossibility of the brain ever naturally developing all of the many encoding protocols it would need to store the many different things humans store as memories. The second argument was based on the apparent impossibility of explaining how the human brain could ever be able to instantly recall memories, if memories are stored in particular locations in the brain, because there would be no way for the brain to know or figure out where a memory was stored. The third argument was based on the fact that humans can remember 50-year-old memories, but scientists have no plausible explanation for how the human brain can store memories for longer than a single year (partially because of rapid molecular turnover).

There is still another argument that can be made along these lines, another argument based on a capability of the human mind that we cannot plausibly explain in terms of the human brain. This argument I will call the insertable sequential memories argument. I can concisely state the argument as follows: humans have the ability to store memories involving very long sequences, with an easy ability for insertion; but the human brain has no structural capability that can support any such ability, so some of our memories or ideas are probably stored or created in some mental reality that transcends the human brain.

First, let us consider the human ability to remember memories involving long sequences. An average person shows this ability by being able to remember the words and notes of many different songs. Each song is a particular sequence that must be remembered. You might think to yourself, “I don't know the words of many songs,” but if I were to have you browse through a list of the 500 most famous songs in history, you would find that you remembered the words and notes of lots of them. Of course, on stage you might see something like a concert in which a singer sings from memory for 80 minutes. There are opera singers who have memorized various different roles in the operatic repertory, which may add up to many hours of words and notes they have memorized. In his prime years as an opera singer, Placido Domingo had at least 20 hours of opera roles he could sing upon demand (and would sometimes fill in for a sick singer, singing one of the many roles he knew by heart). Similarly, there are a number of Muslims who have memorized the entire Quran, a book of some 114 chapters or surahs.

We take this kind of sequential recall for granted, because this is the way we have always experienced memory working. Similarly, if we were able to exactly remember every word in each book we had read, we would take that for granted, and think it nothing special. But we must ask: does the brain have any type of structure that might allow such sequential recall to occur, if memories were all stored in the brain?

There is no such thing in the brain. The brain consists of billions of neurons, each of which is connected to many other neurons. And that is precisely why the brain does not seem to have the right type of arrangement to allow for sequential recall of long sequences of information. There is no “next” for a particular neuron. Neurons are not arranged in any type of chain-like structure that might support a recall of sequential memories.

Consider the physical way in which a book allows for a sequencing of information. Words are arranged in a linear order on a particular page. Also, the page order and binding of a book imposes a sequence on its contents, a sequential ordering we would not see if the pages were in a jumbled heap.

Or consider the DNA molecule. The DNA molecule is structured in a way that allows for a sequential storage of information. A DNA molecule is rather like an incredibly long and thin rope, in which a sequence of letters is written on the rope. Within the genetic code used by DNA, there is a particular sequence that acts as a “stop” signal, rather like a period in a sentence. With this physical structure, DNA allows for both a sequential storage of information and a demarcation of information.
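The two properties just described for DNA – a physically fixed read order and a built-in “stop” marker – can be made concrete with a small illustrative sketch. The codon-reading loop below is a toy stand-in, not a model of real transcription, though TAA, TAG, and TGA really are DNA's three stop codons:

```python
# Illustrative sketch of sequential storage in a linear medium like DNA.
# The strand's physical order fixes the read order, and a stop codon
# marks where the sequence ends -- two properties a jumbled heap lacks.
def read_gene(strand):
    """Read 3-letter codons in order until a stop codon is reached."""
    codons = []
    for i in range(0, len(strand), 3):
        codon = strand[i:i + 3]
        if codon in ("TAA", "TAG", "TGA"):  # the three real stop codons
            break
        codons.append(codon)
    return codons

print(read_gene("ATGGCCGTATAAGGG"))  # ['ATG', 'GCC', 'GTA'] -- reading halts at TAA
```

The point of the sketch is simply that both sequencing and demarcation fall out of the medium's linear physical structure, with no extra bookkeeping needed.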

But consider the human brain. There seem to be no physical characteristics that might allow for sequential storage of information. It seems that if you try to store information in the brain, it should be like tossing a set of alphabet blocks onto a giant heap in a junkyard.

I could schematically depict a set of neurons with a visual like the one below. The little circles represent individual neurons. The diagram greatly understates the number of connections between individual neurons.



Consider the recall of sequential information. Imagine you are trying to recall a series of words. We might imagine that individual parts of the sequence are stored in individual neurons – perhaps something a little like the schematic visual below.




But how could you recall the sequence in its correct order? Nerve cells are scattered throughout three-dimensional space, with each neuron having many connections to other neurons. If information is stored in nerve cells, there would seem to be no way for a sequence to be stored in a way that would allow a sequential recall involving a long series, such as happens when an actor playing Hamlet recalls all of his many lines in the correct order. We can't imagine the brain simply going from one neuron to the “next” neuron to retrieve a sequence of information. This is because neurons don't exist in chains in which a particular neuron has a “next” neuron. Each neuron is connected to many other neurons.

Below we see a map of Dupont Circle in Washington D.C.


Once your car gets on Dupont Circle, there is no “next” place to go. You've reached an interchange in which there are 10 roads feeding out of the circular interchange. Similarly, in the photo below we see neurons. Each one of the parts coming out of the nerve cell is a path that can be traversed from this nerve cell. Such an arrangement should not offer any support for storing a sequence of information such as the lines in a play or the notes in a song. There's no “next” route leading from one neuron to the next neuron. Every neuron is like Dupont Circle, except that there are even more paths leading out of the neuron.


Now let's consider another aspect of human sequential memory: the fact that it is insertable. What this means is that when we memorize a sequence, it is very easy to insert a new item anywhere in the sequence.

You can demonstrate this by trying a test such as this. Most Americans know the beginning of the song “America the Beautiful”:

O beautiful, for spacious skies
For amber waves of grain

But what if we try to recall this sequential data with an insertion? Try memorizing the following variation, and see how long it takes to recall it while looking away from this page:

O beautiful, for spacious skies
For amber waves of tasty grain

This task is extremely easy. We have no problem inserting an item in the middle of something we have memorized. Similarly, if you (like most Americans) have memorized the famous line “we hold these truths to be self-evident: that all men are created equal,” then it is very easy to change that memory into something a little different, such as: “we hold these truths to be self-evident: that all men are created pretty much equal.”

So evidently humans can remember sequences, and it is quite easy for us to make an insertion anywhere in the sequence. But from a neurological standpoint, such a thing should be impossible. For let us imagine that there is some sequence of neurons that stores a sequential memory, ignoring the difficulty that a neuron has no “next” and so neurons do not seem to be suitable for storing a sequence. Then let us imagine we are inserting an item somewhere in the middle of this sequence. This should be as physically difficult as inserting a new steel link in the middle of a steel chain (or in the middle of a chain link fence). We cannot imagine that a new neuron gets created (in the middle of the sequence) to store the new word or words that are being inserted, because new neurons are not created for each new item inserted into our memories.
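The kind of easy insertion described above is exactly what a chain-like data structure provides. Here is a minimal sketch of a singly linked list, where each node has precisely the “next” pointer that the argument says neurons lack; mid-sequence insertion becomes a two-step splice rather than anything like forging a new link into a steel chain:

```python
# Minimal singly linked list: each node knows exactly one "next" node.
# Insertion mid-sequence is trivial precisely because of that pointer --
# the structural feature this post argues neurons do not have.
class Node:
    def __init__(self, word, next=None):
        self.word = word
        self.next = next

def insert_after(node, word):
    """Splice a new word into the chain right after the given node."""
    node.next = Node(word, node.next)

def to_list(head):
    out = []
    while head:
        out.append(head.word)
        head = head.next
    return out

# "amber waves of grain" -> insert "tasty" before "grain"
head = Node("amber", Node("waves", Node("of", Node("grain"))))
insert_after(head.next.next, "tasty")
print(to_list(head))  # ['amber', 'waves', 'of', 'tasty', 'grain']
```

Note that the insertion touches only one existing node; everything downstream is untouched. A storage medium without an unambiguous “next” relation offers no analogous operation.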

There is still one other problem in postulating that sequential memory is stored in the brain. This is the “end of sequence” problem. Try recalling the song “America the Beautiful.” Typically your recall will end nice and neatly with the line “From sea to shining sea.” When we remember a sequence of items such as the words in a song, we remember only up until the end point of the sequence. But how could we possibly do that, if memories are stored in neurons? Given the organization of neurons in the brain, storing a memory in neurons would seem to be like tossing a bucket of letter blocks onto a mountainous heap of blocks containing letters, words and image fragments – not at all an arrangement that allows you to tell where the end point of a particular sequential memory is to be found. Or if we imagine instead an analogy of writing on a few vines in a thick Amazon-type jungle of very densely packed vines, the “end of sequence” problem is still there.

In short, if long-term memory were stored in the brain, it seems we shouldn't be able to remember long sequential memories, we should not be able to remember where such sequences end, and we shouldn't be able to easily insert new items anywhere in such a sequence – all things we are actually very capable of doing.

The points discussed here constitute another argument for believing that our long-term memories are not actually stored in the brain, but are stored in some greater mental reality that is merely related to the brain. An old-fashioned word for such a reality is “soul,” although it may be just as appropriate to use some other term. The argument discussed here is only one basis for postulating such a reality. Another argument is the reason discussed here: given the rapid molecular turnover in the brain (under which molecules are replaced every few weeks), there is no plausible and well-founded explanation for how the brain can be storing very long-term memories such as memories that last for 50 years. So we have very good reason for thinking that human memory involves something much more than just the brain.

There are many anomalies suggesting a similar conclusion, such as this astonishing account of a woman with only half a brain. It seems she can instantly state the day of week of any date you select in the past 18 years, without using any calculation. She also plays computer solitaire unusually fast. Such an anomaly is reminiscent of the research of John Lorber, who found many cases of almost normal functionality in people who had lost large fractions of their brains due to disease.

In the field of cosmology, scientists have long recognized that known reality (regular matter and regular energy) is completely inadequate to account for the behavior of galaxies. So cosmologists have resorted to saying that there are completely mysterious things that play a large role in galactic dynamics: dark energy and dark matter. No one has any idea what these things are. It would seem that when it comes to explaining the human mind, we are in a rather similar situation. The reality that we know of (the brain) is quite insufficient to explain both extraordinary psychic experiences and even the ordinary reality of human memory. So we need to postulate something more than just the brain. If we followed the naming convention of the cosmologists, with their dark matter and dark energy, we might call this non-biological reality “the dark brain,” which would be a novel term for an idea that can be described with an old-fashioned term: a human soul.

Monday, September 26, 2016

The Origin of Language Is Inexplicable Under Prevailing Assumptions

Best-selling author Tom Wolfe (author of The Right Stuff) has written a very good new book called The Kingdom of Speech. The book discusses a great explanatory failure: the failure of theorists to present a plausible explanation of how language originated. Because of Wolfe's superb writing style, the book is much more readable than the typical dry discussions of this topic. Wolfe starts out by mentioning a scientific paper entitled “The Mystery of Language Evolution,” written by leading linguist Noam Chomsky and seven other experts. The paper concludes that after long decades of work trying to illuminate the origin of language, “the most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever.”

Explaining the origin of language involves the same type of severe difficulties involved in explaining the origin of vision and the origin of flying. In all three of these cases there is the difficulty in explaining the origin of functionality which requires a high degree of complexity and coordination before any survival value reward can be produced. In the case of vision, before any survival value reward can be produced, there needs to be a highly complex setup consisting of a preliminary eye, an optic nerve, and very substantial brain changes needed to meaningfully process visual input. In the case of flying, before any survival value reward can be produced, there needs to be a highly complex setup consisting of wings (or a wing-like appendage), and very substantial brain changes needed for either flying or gliding to occur. In the case of language, before any survival value reward can be produced there needs to be a highly complex setup consisting of changes in the vicinity of the mouth, changes in the brain needed to articulate speech, and changes in the brain needed to process speech spoken by others.

We cannot explain the reaching of any of these thresholds by invoking Darwin's idea of natural selection or survival of the fittest, because there would be no survival benefit until these functional thresholds were achieved. For example, natural selection would presumably not have rewarded some preliminary version of the larynx and pharynx which would have allowed people to speak only as crudely as if they had their mouths filled with rocks.

In the case of the origin of language, there are no fewer than four different things that need to be explained:
  1. How could changes in the vicinity of the mouth have occurred, in order to enable human speech, presumably before such changes were rewarded by offering an increased survival value?
  2. How could changes in the brain have occurred, in order to enable meaningful articulation of speech, presumably before such changes were rewarded by offering an increased survival value?
  3. How could changes in the brain have occurred, in order to enable meaningful interpretation of speech spoken by others, presumably before such changes were rewarded by offering an increased survival value?
  4. How could any language have come into existence, when it seems that the only way to establish a language (including verbs, grammar, adjectives and adverbs) would be if some language already existed?
The fourth question may seem unfair, but it will not after we try a little thought experiment. Let us imagine that some extraterrestrial visitors were to somehow transport 10,000 humans to some “zoo planet” for the purpose of keeping the humans as pets or zoo creatures. Let us also imagine that there is a strange side effect of this transport process – it accidentally wipes out the language memory of the humans transported to the “zoo planet.” Now 10,000 humans find themselves on this planet without any language. Could they then create a new language for themselves?

It seems all but impossible that they could. One might at first think that someone could start setting up such a language by issuing a command like this:

Damn, we lost our language! We have to fix this. Let's do this. We'll appoint Joe as the “language guru.” He will figure out some type of language for us to use, including the rules. We'll set up classes, and everyone will have to follow Joe's teachings. So if Joe picks up a rock and says, “vorko,” that means “vorko” is the new word for “rock.” And if Joe points to the sun and says, “zonzin,” then “zonzin” is the new word for “sun.”

But, of course, issuing such a command would be impossible – because these 10,000 people would all have lost their language, and at first would know not one single word to speak. It seems that there would be no way to set up a new language, if some language did not already exist. We cannot imagine that some government would help to enable or enforce the new language – because you can't have a government unless there is language in the first place. Show me a people with no language, and I'll show you anarchy.

It would seem to stretch credulity to imagine even a fragmentary type of language developing under such circumstances. To explain why 10,000 people without language might come to use the same word for “rock,” we can imagine that there might be some forceful person determined to get everyone to use some particular word for “rock.” He might go from person to person, saying his word for “rock,” and making an “I will punch you” fist if the people he met didn't use the word he used for “rock.” But people would forget the word a while after he left. So it's hard to imagine everyone starting to use the same word for “rock.” And it's basically impossible to imagine how countless words such as “gently,” “mind,” “future,” “spirit,” “yesterday,” “skill,” “think” or thousands of other verbs, adjectives, and adverbs would ever get started, or how rules of grammar would ever get started, such as how to speak in the future tense and the past tense. You might be able to teach someone a few nouns by holding something or pointing to it, but that doesn't work for verbs, adverbs, and adjectives, and it doesn't even work for a large fraction of all nouns (such as “leadership”) referring to immaterial things.

This thought experiment is relevant because if we are to imagine language getting started many thousands of years ago, the same “how could a language ever become established” dilemma would exist as would exist for these 10,000 people. Even if we imagine a human with all the right brain modifications and all the right larynx and pharynx modifications, it seems impossible to explain how a full-fledged language ever got started.

The Wikipedia article on the origin of language starts out by saying “there is no consensus on the ultimate origin or age of human language.” It does mention a few theories, some of them just goofy speculations, such as the idea that language developed as a form of “grooming,” or that language developed so that mothers could quiet the babies they had placed on the ground. Such theories merely suggest a “why” for the origin of language, but fail to address the “how” problem discussed above. Another laughable language theory was Darwin's idea that humans developed language by imitating bird song, an idea so weak that almost no one seems to advance it today.


A full-fledged and convincing natural theory of the origin of language would require a hypothetical account so detailed that it would pretty much have to read like a novel. It would tell a detailed story of one way in which a real language could have developed. 

It's easy to imagine the first chapter in such a novel. Maybe you might have a bunch of savages gathered around a water hole, and one of the savages might suggest a word for “water,” with the others repeating the word that was suggested. But it would seem to be impossible to write the remaining chapters in this novel, in such a way that might plausibly explain the origin of a language, so that the last chapter would end up with people speaking in grammatical sentences such as “I think that may be a woolly mammoth I see on the far horizon, so let's go get our spears so we can hunt it.” And even the first chapter just imagined wouldn't be plausible – for how could humans get larynxes just right for speaking, brain areas right for creating words, and brain areas for interpreting speech, all before language had ever been used? So the first chapter might have to start with a human with a preliminary larynx (who talked like someone with his mouth filled with rocks) trying to get language started, stumbling about with a brain that wasn't right for creating language, trying to teach a word to some other savages whose brains were not right for interpreting language. How a first chapter like that could lead to the last chapter seems impossible to imagine.

The failure of theorists to explain the origin of human language is one of only many ways in which explanatory naturalism fails to explain the human mind and human behavior. As discussed here, there are many facets of the human mind that cannot be explained through the mere idea of natural selection – mainly because they are facets of the human mind that did not give humans an increased survival value. Some of these facets were discussed by Alfred Russel Wallace, the co-founder of the theory of natural selection. Speaking of the law of natural selection, Wallace said the following: “Those faculties which enable us to transcend time and space, and to realize the wonderful conceptions of mathematics and philosophy, or which give us an intense yearning for abstract truth...are evidently essential to the perfect development of man as a spiritual being, but are utterly inconceivable as having been produced through the action of a law which looks only, and can look only, to the immediate material welfare of the individual or the race.”

Despite their pretentious claims of explanatory skill, our dogmatic professors cannot plausibly explain the origins of human mental abilities, human speech, or refined human feelings. Human origins are still profoundly mysterious. Given all these gigantic question marks and the failures of prevailing thinking, the door should be open to discussion of a very wide spectrum of theoretical ideas (including extraterrestrial intervention), rather than just the small range of possibilities we see in academic circles, where thinkers seem to be handcuffed by conformist taboos. The abysmal failure of modern academics to explain the origin of language may be a strong hint that they are on the wrong explanatory track, are making the wrong assumptions, and are prisoners of “inside the box” thinking.

Thursday, September 22, 2016

Trying to Shoot Down Cosmic Fine-Tuning, She Fires Only Blanks

There exist numerous cases of what look like very strong fine-tuning in our universe. Both fundamental constants and natural laws are arranged in a way that allows for us to exist. It seems that the probability of all of these favorable conditions existing by chance is incredibly low. It has been argued that the probability of you existing in a universe as fine-tuned as ours is like the chance of you surviving a firing squad (having 10 or more soldiers firing their rifles at you at close range). If you survived a firing squad, it is argued, you should assume there was some purpose involved in this, and that it wasn't just a lucky accident.
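The firing-squad analogy can be put in rough numbers. The per-rifle miss probability below is purely an assumed figure for illustration; the point is only how fast the survival odds collapse when every shot must miss independently:

```python
# Back-of-the-envelope version of the firing-squad analogy.
# p_miss is an assumed illustrative figure, not a measured one.
p_miss = 0.05                    # assume each rifle misses 5% of the time
soldiers = 10
p_survive = p_miss ** soldiers   # all ten rifles must miss independently
print(f"{p_survive:.2e}")        # 9.77e-14 -- about 1 in 10 trillion
```

Odds of that size are the intuition behind the argument: a survivor would be justified in suspecting the outcome was arranged rather than accidental.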

But physicist Sabine Hossenfelder disagrees. She has a recent post in which she attempts to debunk what she calls “the myth that our universe is 'finetuned for life.'” Her attempt, however, is a complete failure.

She starts out by giving a very general armchair argument:

The general argument against the success of anthropic selection is that all evidence for the finetuning of our theories explores only a tiny space of all possible combinations of parameters. A typical argument for finetuning goes like this: If parameter X was only a tiny bit larger or smaller than the observed value, then atoms couldn’t exist or all stars would collapse or something similarly detrimental to the formation of large molecules. Hence, parameter X must have a certain value to high precision. However, these arguments for finetuning – of which there exist many – don’t take into account simultaneous changes in several parameters and are therefore inconclusive.

This is not actually correct, as quite a few scientific papers about cosmic fine-tuning and the biological sensitivity of various fundamental constants do actually take into account the effects of simultaneous changes of more than one parameter. An example is the diagram below from a recent article by physicist Luke Barnes, in which he allows us to view simultaneous changes in the strong nuclear force and the fine-structure constant, showing only a tiny area that is compatible with living creatures such as ours.

[Figure: parameter-space diagram of cosmic fine-tuning]

Here's another example, a chart I created. The graph plots two different constants, the proton charge and the electron charge. Unless their absolute values match exactly (as they do in our universe to at least 18 decimal places), planets cannot hold together. You could actually expand this chart to be the size of a house, and it would still be appropriate to draw the green line as thin as it is here.

[Figure: proton charge fine-tuning chart]


Then Hossenfelder claims to have found “counterexamples” that weaken the case for cosmic fine-tuning. But she's shooting blanks – her counterexamples are all duds.

The first “counterexample” she cites is a 2006 paper called A Universe Without Weak Interactions. She says that this paper describes a “universe that seems capable of complex chemistry and yet has fundamental particles entirely different from our own.” The paper actually talks about a universe without a weak nuclear force, one of the four fundamental forces of the universe. But the weak nuclear force has never been a very important part of arguments that the universe is fine-tuned. There are strong reasons for believing that all of the other three fundamental forces of nature (the strong nuclear force, electromagnetism, and gravitation) are very fine-tuned, but no one has claimed that the weak nuclear force is very fine-tuned.

Moreover, the 2006 paper she refers to was emphatically refuted by a later paper from that same year, a paper entitled “Problems in a Weakless Universe.” The paper concluded the following:

We point out, however, that on closer examination the proposed "weakless" universe strongly inhibits the development of life in several different ways. One of the most critical barriers is that a weakless universe is unlikely to produce enough oxygen to support life. Since oxygen is an essential element in both water, the universal solvent needed for life, and in each of the four bases forming the DNA code for known living beings, we strongly question the hypothesis that a universe without weak interactions could generate life.

So Hossenfelder's first “counterexample” doesn't do anything to undermine the case for cosmic fine-tuning. Her second “counterexample” is no better. She cites Abraham Loeb's paper “The Habitable Epoch of the Early Universe.” In that paper Loeb imagined that in the early universe, the cosmological constant (or the energy density of space) might have for a while been sufficient to bathe the whole universe with a warmth suitable for life. He imagined the following scenario:
  1. The Big Bang occurs 13 billion years ago.
  2. After about 400,000 years the universe cools enough for atoms to form.
  3. About 10 million years later, planets and stars form.
  4. For a few million years it is warm enough for life to exist, because of the cosmological constant, which fills all of space with a pleasant warmth.
  5. Microbial life forms appear during this relatively brief period.
  6. A few million years later, the continued expansion of the universe causes the universe to cool sufficiently so that the cosmological constant is no longer sufficient to keep space warm.
  7. Any microbial life that may have arisen from the warmth of the cosmological constant then dies, as temperatures fall far below freezing.
But this scenario is nothing like an alternate way for the universe to be life-friendly, because this “habitable epoch of the early universe” lasts way too short a time (as the expansion of the universe quickly causes those early warm temperatures to fade away, being replaced by deadly cold). It actually lasts (under Loeb's assumptions, as discussed here) only about two million years, as Loeb admits by saying that this epoch would last only “a few Myr,” using an abbreviation for megayears (a million years). Since there were supposedly billions of years (thousands of millions of years) between the appearance of earthly microbes and the appearance of man, it is clear that two million years is not long enough for any intelligent life to evolve. It is almost certainly not a sufficient time for any life at all to develop. So the possibility discussed by Loeb is irrelevant.

When discussing whether the universe is fine-tuned to allow for intelligent life, we don't care whether there might have been some brief two-million year window in the very early universe (a short-lived period of warmth) that might have allowed mere microbes to appear before being wiped out when the universe becomes super-cold again. We care about the possibility of intelligent life appearing. In fact, it is overwhelmingly likely that the higher radiation and asteroid concentrations in any early universe would not even allow microbes to have appeared in the early universe.

Loeb's paper (discussing something both extremely improbable and irrelevant) did nothing to upset the idea that the cosmological constant is fine-tuned. The fact that the cosmological constant is enormously fine-tuned is reaffirmed by a recent scientific paper noting that the cosmological constant is 10^123 times smaller than its “natural value,” and that “there is no satisfactory solution yet for this problem.” The paper's graphs suggest that there would be no observers if the cosmological constant were a few hundred times smaller or larger. Given a natural value so vastly larger, having a cosmological constant within this narrow range is like hitting the exact center of a target 1000 yards distant.
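To get a rough sense of the scale of this fine-tuning, here is a back-of-the-envelope sketch in Python. The figures are assumptions for illustration only: the “natural value” is taken as 10^123 times the observed cosmological constant (per the cited paper), and the life-permitting window is taken as 300 times the observed value (one reading of “a few hundred times”).

```python
import math

# Back-of-the-envelope illustration (assumed figures, for scale only):
# the "natural value" of the cosmological constant is ~10^123 times the
# observed value, and the life-permitting window is taken as ~300 times
# the observed value ("a few hundred times").
natural_over_observed = 1e123   # assumed ratio from the cited paper
window_width = 300.0            # assumed width of the life-permitting window

# Fraction of the natural range that permits observers, on a linear scale
log10_fraction = math.log10(window_width) - math.log10(natural_over_observed)
print(f"life-permitting fraction ~ 10^{log10_fraction:.0f} of the natural range")
```

On these assumptions the life-permitting fraction comes out at around one part in 10^121, which is the sense in which the target-center analogy is meant.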

So Hossenfelder's second “counterexample” is a dud that does nothing to undermine the case for cosmic fine-tuning. Her third “counterexample” is no better. She cites a 2016 paper by Adams and Grohs discussing the “triple alpha process,” a case in which nuclear physics has to be just right in order for carbon to be produced, because of a resonance that must be fine-tuned. The authors imagine other universes that might not require this particular type of resonance fine-tuning.

Hossenfelder claims that this paper is a “demonstration that a chemistry complex enough to support life can arise under circumstances that are not anything like the ones we experience today.” That's wrong, because the paper actually describes universes very much like ours, but with small changes in fundamental constants. Moreover, the paper does not actually describe an alternate universe with a chemistry complex enough to support life, because it fails to describe a universe in which oxygen is produced in sufficient quantities. This is made clear on page 26 of the paper, where the authors say, “This set of simulations does not include nuclear reactions that produce oxygen, neon, and heavier elements.”

It has long been recognized that the fundamental constants have to be just right for nuclear reactions in stars to produce large quantities of both oxygen and carbon, both of which are requirements for life. By failing to discuss alternate reactions in which stars produce oxygen, the paper of Adams and Grohs does nothing to undermine that fine-tuning requirement. The fine-tuning requirement is stated in a 2014 scientific paper, which tells us on page 16 that in order to have abundant quantities of oxygen and carbon, the quark masses must be within 2 to 3 percent of their current values, and the fine-structure constant must be within 2.5% of its current value. You could therefore say nature has to hit two different “holes in one,” and these aren't the only “holes in one” nature has to hit in order to end up with intelligent life.

In short, all of Hossenfelder's “counterexamples” are empty duds. She has done nothing whatsoever to weaken the case that the universe is fine-tuned. Far from being a “myth” as she claims, the finding that the universe is incredibly fine-tuned for life is one of the fundamental achievements of the past 50 years of physics, and is something that has been acknowledged by many physicists and cosmologists.

As all of her “counterexamples” are failures, Hossenfelder doesn't actually succeed in providing even a single example of an alternate universe as life-friendly as ours. Even if she were to provide such a thing, it would do nothing to discredit the claim that our universe is fine-tuned for life. Arguments about cosmic fine-tuning never claim that there is only one possible universe consistent with life; they merely claim that it is incredibly improbable that any particular random universe would be compatible with the appearance of intelligent life. There are many possible universes that would allow intelligent life to appear, but that set is almost infinitely smaller than the set of all possible universes, making it almost infinitely improbable that any particular random universe would accidentally meet the many requirements for intelligent life. You do not damage such reasoning in the least by showing a few other possible universes that might allow intelligent life to appear.

It's rather like this. A man may point to a car, point out its fitness for a purpose, and say, “Wow, that sure is fine-tuned” or “that sure wasn't produced by some set of accidents.” You do not at all discredit such reasoning by demonstrating that there are other possible cars with a very different appearance.

Sunday, September 18, 2016

Some Errors in His Hardcore Skepticism

I recently read a book entitled Skeptic by Michael Shermer, a collection of his essays that were published in Scientific American. Shermer is not a scientist, but his essays do serve as a kind of ideological comfort food for a certain type of materialistic mindset. In this book Shermer takes a jaundiced view toward all claims of paranormal phenomena. But the book is not a powerful debunking of the paranormal, for a simple reason: Shermer almost always follows the approach of simply ignoring all of the better evidence for the paranormal. He's like a writer trying to prove that the New York Yankees have never produced good hitters, who tries to prove it by discussing pretty much only Yankee hitters who hit under .250.

I'll give some examples. On page 99 Shermer delves into the subject of mediums. He accuses medium John Edward of cheating, saying, “It's a trick.” But he provides nothing to substantiate this claim, other than speculation. He fails to mention that Edward was tested by scientist Gary Schwartz, using controlled conditions. Schwartz did not find cheating, but found impressive paranormal-seeming results (the paper is here). And what about other mediums in history who achieved very impressive results under scientific investigation? These include Daniel Dunglas Home (who passed scientific tests of paranormal ability conducted by the world-class scientist William Crookes), Leonora Piper (who passed with flying colors long investigations by psychologist William James and his colleagues), and Indridi Indridason (who produced spectacular paranormal effects in a controlled laboratory setting, while being investigated by some of Iceland's top scientists, as discussed here). We hear no word of these in Shermer's book.

Shermer twice mentions near-death experiences, but does absolutely nothing to debunk them. He fails to mention any specific case of a near-death experience. He makes no mention of the phenomenon of veridical near-death experiences, cases like the Pam Reynolds case, in which a person who should have been absolutely unconscious correctly reported details of his or her operation while reportedly floating out of the body. Shermer does give us on page 106 this weird logic-mangling non sequitur:

The December 2001 issue of Lancet published a Dutch study in which of 344 cardiac patients resuscitated from clinical death, 12 percent reported near-death experiences, where they had an out-of-body experience and saw a light at the end of a tunnel. Some even described speaking to dead relatives....These studies are only the latest to deliver blows against the belief that mind and spirit are separate from brain and body.

So when you float out of your body that's evidence against the belief that mind and spirit are separate from brain and body? No, it's evidence for such a belief.

Shermer's book makes occasional mention of UFOs, although Shermer fails to mention any of the more spectacular cases, and fails to discuss any of the better evidence for UFOs. On page 54 he makes the very weird calculation (without any justification) that superstring theory is seven times more probable than UFOs. This makes no sense, because we have very abundant observational evidence for paranormal sky phenomena such as UFOs, but no evidence at all for superstring theory.

Shermer's claim to be a skeptic is very doubtful. The skeptics of ancient Greece were those who were skeptical about all claims of knowledge, and cynical about all claims to authority on knowledge. But Shermer is gullible when it comes to groundless ornate speculations such as superstring theory, probably just because such speculations are popular among some scientists. On page 22 Shermer reveals himself to be a true devotee. Using the term “shaman” to mean something like “high priest,” Shermer says: “We show deference to our leaders, pay respect to our elders, and follow the dictates of our shamans; since this is the Age of Science, it is scientism's shamans who command our veneration.” Veneration? Would any real skeptic ever gush in such a fawning way, acting so worshipfully toward a human authority?
 
In discussing ESP (extra-sensory perception), Shermer absolutely owed his readers a full discussion of the evidence gathered under controlled lab conditions by Professor Joseph Rhine at Duke University. As I discuss here, this is “smoking gun” evidence for ESP. But here (on page 103) is all that Shermer has to say about Rhine's research: “In the twentieth century, psi periodically found its way into serious academic research programs, from Joseph Rhine's Duke University experiments in the 1920's to Daryl Bem's Cornell University research in the 1990's.” That's all. He fails to even mention that Rhine's experiments were successful. They were, in fact, a most spectacular success, repeatedly showing extremely dramatic evidence for ESP, such as results with a chance probability of only 1 in 10 trillion (see below for a more specific discussion).

Shermer does mention later ganzfeld ESP experiments in the late twentieth century that also showed dramatic evidence for ESP – but he merely cites some arch-skeptic who criticized them, and never mentions the numerical results of the experiments. Again, Shermer keeps his readers in the dark, hiding from them one of the best examples of evidence for the paranormal. In a long series of ganzfeld ESP experiments conducted by quite a few scientists over the years, in which the expected chance success rate was 25%, subjects scored with an accuracy of about 32%, something virtually impossible to occur by chance.
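As a rough check on that claim, here is a sketch of the chance probability using the normal approximation to the binomial distribution. The session count N below is a purely hypothetical figure for illustration, since the exact number of trials is not given here; the 25% chance rate and 32% hit rate are from the text above.

```python
import math

# Rough check: how improbable is a 32% hit rate when 25% is expected?
# N is a hypothetical session count (illustrative only); the conclusion
# is insensitive to the exact figure.
N = 3000            # assumed number of ganzfeld sessions
p0 = 0.25           # chance expectation (1 in 4)
observed = 0.32     # reported overall hit rate

# Normal approximation to the binomial: z-score of the observed rate
z = (observed - p0) / math.sqrt(p0 * (1 - p0) / N)

# Gaussian tail bound: P(rate >= observed | chance) < exp(-z^2 / 2)
log10_p_bound = -z * z / 2 / math.log(10)
print(f"z ~ {z:.1f} standard deviations; chance probability < 10^{log10_p_bound:.0f}")
```

Even with this modest assumed N, the result is more than eight standard deviations above chance, a probability far below one in ten quadrillion; with a larger real session count it only shrinks further.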

Shermer does give us (on page 104) some lame armchair reasoning against ESP: “Until psi proponents can explain how thoughts generated by neurons in the sender's brain can pass through the skull and into the brain of the receiver, skepticism is the appropriate response, as it was for evolution sans natural selection, and continental drift without plate tectonics.” This is fallacious reasoning for three reasons.

First, evidence for ESP is evidence that the human mind is something larger than neurons, so we don't have to explain ESP using the assumption that the mind is only the product of neurons. Second, it is wrong to claim that we should be skeptical about things that are observed but not explained. For thousands of years, humans observed plagues before understanding that they were caused by microbes; and for thousands of years humans observed earthquakes before understanding that they are caused by plate tectonics. Humans should absolutely not have been skeptical about plagues and earthquakes before they understood what caused them. Third, it is particularly ridiculous to say that we should have been skeptical about continental drift before the twentieth-century discovery of plate tectonics, because continental drift (which has occurred throughout history) has always been the correct explanation for why Africa and South America fit together.

The rule Shermer is suggesting – that “you can't believe in something until you understand the cause” – is a fallacious one not actually followed by scientists, who believe in the Big Bang despite having no explanation for it. Equally wrong is the principle that Shermer states on page 53 (in a chapter entitled “Baloney Detection”), in which he claims that we can detect baloney or nonsense by asking: “Has the claimant provided a different explanation for the observed phenomenon, or is it strictly a process of denying the existing explanation?” Shermer claims the second of these two is “unacceptable in science,” but that claim is itself pure baloney.

There is no reason why anyone criticizing an existing explanation must be forced to give his own explanation that is better. It is, in fact, a perfectly sound and fair technique to argue that a proposed or popular explanation for something is wrong, and that we simply do not know what the explanation is. It is, for example, completely fair for a defense attorney to discredit a district attorney's claim that the defense attorney's client is guilty of murder, without offering an alternate suspect; and it would be quite absurd to argue “you can't discredit my claim that your client is guilty of murder unless you show who committed the murder.” To give another example, if I criticize a physics “theory of everything,” it is absurd to say that I am not entitled to do that unless I advance my own physics “theory of everything” as an alternative.

On page 258 of his book, Shermer gives us a glaring misstatement. He says:

Either people can read other people's minds (or ESP cards) or they can't. Science has unequivocally demonstrated that they can't.

To the contrary, Professor Joseph Rhine provided conclusive evidence for ESP using ESP cards, such as the test with Hubert Pearce in which Pearce scored 27 standard deviations above the expected chance result, getting 3746 successes out of 10,300 trials in controlled laboratory tests in which the expected chance result was only 2575 successes. The chance of that is less than 1 in 10 trillion. Rhine's colleague Pratt got similar results with the same subject. See here for details. In another test discussed here, a contemporary of Rhine's (Riess, a skeptical CUNY professor) did a remote test in which the subject scored a 73% accuracy rate while guessing 1850 cards. The chance of that has been estimated as 1 in 10 to the 700th power. What Shermer has told us here about ESP is the exact opposite of the truth.
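The arithmetic behind these probability figures can be checked with the normal approximation to the binomial distribution. Here is a minimal sketch in Python, using the numbers stated above; the 1-in-5 chance rate used for the Riess test is my assumption based on standard five-symbol ESP cards, as it is not stated above.

```python
import math

def binomial_z_and_log10_bound(successes, trials, p):
    """Normal-approximation z-score for a binomial count, plus a log10
    upper bound on the one-sided chance probability, using the Gaussian
    tail bound P(Z >= z) < exp(-z^2/2) / (z * sqrt(2*pi))."""
    mean = trials * p
    sd = math.sqrt(trials * p * (1 - p))
    z = (successes - mean) / sd
    log10_p = (-z * z / 2 - math.log(z * math.sqrt(2 * math.pi))) / math.log(10)
    return z, log10_p

# Pearce tests as described above: 3746 hits in 10,300 trials,
# where 2575 hits (a 1-in-4 rate) were expected by chance.
z1, lp1 = binomial_z_and_log10_bound(3746, 10300, 0.25)

# Riess test: 73% of 1850 cards; a 1-in-5 chance rate is assumed here
# (standard five-symbol ESP cards), as the exact rate is not stated above.
z2, lp2 = binomial_z_and_log10_bound(round(0.73 * 1850), 1850, 0.20)

print(f"Pearce: {z1:.0f} standard deviations; chance < 10^{lp1:.0f}")
print(f"Riess:  {z2:.0f} standard deviations; chance < 10^{lp2:.0f}")
```

The Pearce result comes out at about 27 standard deviations, a chance probability far smaller than 1 in 10 trillion, and the Riess result lands in the neighborhood of the 1-in-10^700 estimate quoted above.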

The same misinformation is provided by another Scientific American columnist, John Horgan. Ignoring his own earlier account of a laboratory ESP experiment that achieved a 75% hit rate when the expected chance rate was only 20%, Horgan informs us on page 104 of his book Rational Mysticism that “psi has never been convincingly demonstrated in the laboratory.” This is absolutely false, for the reasons given above.

Shermer's closed mind on the paranormal is shown in a recent Scientific American column, where he claims that “not even in principle” can the paranormal be used to explain “hitherto unsolved mysteries.” Consider how absurd such a principle is: it is the principle that things we don't understand (the paranormal) can never explain things we don't understand (unsolved mysteries). By stating that we cannot even “in principle” use the paranormal, Shermer has made it clear that no observations could ever persuade him of something paranormal. Apparently he would not believe in the paranormal even if a city-sized spaceship appeared over his head. There is a phrase to describe this type of “no evidence could ever persuade me” attitude: bullheaded entrenched intransigence. It's a very unscientific attitude, since being scientific means (among other things) being always open to new evidence that might overturn previous assumptions. So why are Shermer's essays being published in a magazine called Scientific American?
