
Our future, our universe, and other weighty topics


Tuesday, August 28, 2018

“Nobel's Razor” as a Rule of Thumb for Judging Science Claims

There are quite a few problems afflicting scientific research. One problem is what is called medical ghost-writing, in which pharmaceutical companies hire people to write up a study and then recruit doctors to sign their names to the study, even if those doctors had little or no involvement in it. A wikipedia.org article on the topic says, "A 2009 New York Times article estimated that 11% of New England Journal of Medicine articles, 8% of JAMA, Lancet and PLoS Medicine articles, 5% of Annals of Internal Medicine articles and 2% of Nature Medicine were ghost written."

Another problem is that press releases issued by universities, colleges, and various other institutions frequently announce scientific research in ways that include exaggerations, unwarranted claims, or outright falsehoods. A scientific paper reached the following conclusions, indicating a huge hype and exaggeration crisis both among the authors of scientific papers and among the media outlets that report on such papers:

Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered too strong for their strength of causal inference....Fifty-eight percent of media articles were found to have inaccurately reported the question, results, intervention, or population of the academic study.

Another huge problem involves what is called the Replication Crisis: the fact that a very large fraction of scientific research results are never successfully replicated. The problem was highlighted in a widely cited 2005 paper by John Ioannidis entitled “Why Most Published Research Findings Are False.”


Although physics is often regarded as a “harder” and more reliable form of science, there is still a great deal of wobbly speculation in the world of cosmology and theoretical physics. An example was the recent paper by three cosmologists claiming to have found evidence of something called “Hawking points” in the cosmic background radiation, which they interpreted as supporting a cyclical theory of the universe that almost no one but them believes in. A cosmologist combing the cosmic background radiation for the faintest traces of something he wants to believe in may be compared to someone who checks his toast with a magnifying glass every day, and eventually reports something that he thinks looks a little like the face of Jesus.



What rule can you use to distinguish between solid, well-established science on the one hand and hype and speculation on the other? Can we simply use the rule of “trust something if you read it in one of the top science publications like Science or Nature or Scientific American, but maybe be skeptical if you read about it in a publication or web site of lesser stature”? No, this principle does not work at all. Nowadays the most respected science publications often contain quite a few misleading headlines proclaiming as discoveries dubious research results that do not at all qualify as discoveries. For example, in the leading science journals we very often find neuroscience experiments done with too few test animals, such as only 7 (15 test animals per study group is the minimum for a moderately reliable result).
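To illustrate why such small study groups are a problem, here is a rough simulation sketch. It is my own illustration, not taken from any of the papers in question, and the effect size it assumes is purely illustrative; the point is simply that with 7 animals per group a real effect is missed much more often than with 15 per group, so a literature built on such small groups will contain a much higher proportion of flukes.

```python
# A minimal sketch (my own illustration, assumed effect size) of how statistical
# power changes with study group size. We simulate many two-group experiments
# and count how often a standard t-test detects a genuine difference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effect_size = 1.0      # assumed difference between groups, in standard deviations
trials = 10_000        # number of simulated experiments per sample size

def detection_rate(n_per_group):
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(control, treated).pvalue < 0.05:
            hits += 1
    return hits / trials

for n in (7, 15):
    print(f"{n:2d} animals per group: real effect detected in ~{detection_rate(n):.0%} of experiments")
# With 7 per group the real effect is missed most of the time, which means a
# published "positive" result from such studies is far more likely to be a
# statistical fluke than one from an adequately sized study.
```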

It is also not at all true that you can rely on the truth of a science headline that you read in the New York Times, since the writers at this publication almost never show signs of critically scrutinizing dubious claims by scientists and university press releases. Nor is it true that you can count on a research result that is directly stated in the title of a scientific paper. Scientists frequently give their papers dubious titles announcing results they have not proven. Nor is it true that you can count on a result announced by a distinguished college or university such as MIT. Nowadays the press offices of colleges and universities are notorious for the dubious hype of their press releases, and this problem is not at all confined to less prestigious academic institutions. In this post and in the series of posts I have labeled “overblown hype” you will find many examples to back up the claims I have made in this paragraph.

Can we perhaps distinguish between solid science and unproven speculation by following the principle “trust in things that most scientists believe”? No, because of some of the reasons discussed in this post and this post. Unfortunately, communities of experts can become ideological enclaves, and in such enclaves it is all too easy for a majority to reach an opinion that is not well established, once that opinion becomes “all the rage” in that community.

There is actually no reliable process in place for determining what doctrines are believed in by a majority of scientists. It is quite unreliable to try to gauge the opinion of scientists by analyzing scientific papers, because a scientific paper may repeat standard shibboleths to increase its chance of getting published, and it's hard to tell how much the authors believe in such customary utterances. An opinion poll of scientists is a more reliable way of measuring their opinions. But most such polls suffer from defects, such as offering too little choice, and not offering an option of “I don't know” or “I'm uncertain about this.” Some opinion polls of scientists also require them to publicly assert their opinions to their superiors, which is not a reliable way of measuring private opinion.

The most reliable way to measure opinions on a topic is a secret ballot. But there is no process in place for measuring the opinions of scientists through a secret ballot. Furthermore, common opinions regarding a consensus of scientists are based entirely on impressions gathered from US and European scientists. A truly global measure of scientific opinion (including all the scientists in India and China) might hold many surprises. In light of all these difficulties, it is not a particularly reliable or useful guideline to try to distinguish between strong science and less reliable science claims by relying on common opinions about which things most scientists believe in or don't believe in.

But I can think of one simple “rule of thumb” principle that is pretty good for distinguishing between solid topnotch science on the one hand and weaker claims on the other hand. The principle is what I call “Nobel's Razor.” This is simply the principle: if some science claim has won a Nobel Prize, regard it as topnotch “Grade A” science, but if no one has ever won a Nobel Prize for establishing the claim, regard it as something less than topnotch, well-established science.

The Nobel Prize committees award annual prizes in physics, chemistry, and physiology or medicine. Over the years, the Nobel committees have been extremely good about awarding prizes only for very solid scientific results (with a handful of exceptions). The committees “wait for the dust to settle,” almost always declining to give a prize for any research result until its solidity has been established over a period of several years. For example, Penzias and Wilson discovered the cosmic background radiation in the mid-1960s, but had to wait until 1978 before getting their richly deserved Nobel Prize in Physics.

There are some interesting examples of things that are claimed to be examples of established science, but which have never won any Nobel prizes. One such example is that no one has ever won a Nobel prize for any work establishing Darwin's theory of evolution by natural selection, nor any work helping to establish Neo-Darwinism. This is a great embarrassment to Darwin enthusiasts. It is true that Darwin died before the Nobel prizes were established. But we may ask: if Darwinism is really a topnotch scientific result, why has there been no Nobel Prize for any type of research work done to establish such a theory?

Other interesting examples of widely repeated claims by scientists that have no Nobel prizes in their favor are the opinion that the human mind is a product of the brain, and the opinion that human memories are stored in our brains. No one has ever won a Nobel prize for research helping to establish such ideas. The link here shows the scientists who won the Nobel Prize in Physiology or Medicine, and what research they did to win the prize. None of the prizes are for work involving memory, consciousness, or the relationship of the brain and the mind.

You can see here a list of all people who have won Nobel prizes in physics. No one has ever won for any research on dark matter, dark energy (the 2011 prize was given for the observation of the universe's accelerating expansion, not for dark energy as an explanation of that acceleration), the multiverse, or the “cosmic inflation” claim that the universe underwent an instant of exponential expansion (not to be confused with the more general theory of the Big Bang).

Do all these omissions mean that this “Nobel's Razor” principle is not a good way of distinguishing between topnotch, well-proven science on the one hand and lesser claims that are not very well proven on the other? No, such omissions help to establish the solidity of the “Nobel's Razor” rule of thumb, and help to show that the Nobel committees have been excellent about giving awards only to results that are well established by observations or experiments. When people press you to believe in some science claim that is not topnotch science demonstrated by observations or experiments, you can ask such persons, "Why should I believe in that when no one ever won a Nobel Prize for establishing it?" Such a question will not be an effective reply to assertions about global warming, however, seeing that the IPCC was awarded the Nobel Peace Prize in 2007.

Friday, August 24, 2018

More Poor Answers at the "Ask Philosophers" Site

The “Ask Philosophers” website (www.askphilosophers.org) is a site that consists of questions submitted by the public, with answers given by philosophers. No doubt there is much wisdom to be found at this site, although I found quite a few answers that were poor or illogical – such as the ones listed in my previous post. Below are some more examples.

Question 464 is an excellent and concise question: “Is it more probable that a universe that looks designed is created by a designer than by random natural forces?” In reply to this question, Stanford philosopher Mark Crimmins gives a long answer that is poor indeed. He tries to argue that it is hard to calculate precisely how improbable it might be that a universe was designed, no matter what characteristics it had. Using the term “designy-ness” apparently to mean “resembling something designed,” Crimmins then states, “the mere 'designy-ness' of our universe is not by itself a good reason for confidence that it was designed.”

This doesn't make sense. If we find ourselves in a garden that appears to be designed, with 50 neat, even rows of flowers, that certainly is a good reason for confidence that the garden was designed. If we find ourselves in a structure that appears to be designed, with nice even walls, nice even floors and a nice convenient roof, that certainly is a good reason for confidence that such a structure was designed. And if we find ourselves against enormous odds in a universe with many laws favoring our existence, and with many fundamental constants that have just the right values allowing us to exist, this “designy-ness” would seem to be a good reason for confidence that such a universe was designed. If you wish to escape such a conclusion, your only hope would be to specify some plausible theory as to how a universe might have acquired such favorable characteristics by chance or by natural factors. It is illogical to argue, as Crimmins has, that the appearance of design in a universe is no basis for confidence that it is designed. I may note that confidence (which may be defined as thinking something is likely true) has lower evidence requirements than certainty.

As for his “it's too hard to make an exact calculation of the probability” type of reasoning, anyone can defeat that by giving some simple examples. If I come to your backyard, and see a house of cards on the back porch, I can have great confidence that such a thing is the product of design rather than chance, even though I cannot calculate precisely how unlikely it might be that someone might throw a deck of cards into the air, and for a house of cards to then appear. And if I see a log cabin house in the woods, I can have very great confidence that such a thing is a product of design, even though I cannot exactly calculate how improbable it might be that falling trees in the woods would randomly form into a log cabin.

In question 24743, someone asks the question, “How can a certain bunch of atoms be more self aware than another bunch?” The question is a very good one. We can imagine a shoe box that has exactly the same element abundances as the human brain, with the same number of grams of carbon, the same number of grams of oxygen, and so forth. How could a human brain with the same abundances of elements produce consciousness, when the atoms in the shoe box do not? We can't plausibly answer the question by saying that there is some particular arrangement of the atoms that produces self-awareness.

Let us imagine some machine that rearranges the atoms of a human brain every ten minutes, producing a different combination of positions for those atoms each time. It seems to make no sense to think that the machine might run for a million years without producing any self-awareness, and that then some particular combination of these atoms would suddenly produce self-awareness.

The answer to this question given by philosopher Stephen Maitzen is a poor one. He merely says, “There's good evidence that the answer has to do with whether a given bunch of atoms composes a being that possesses a complex network of neurons.” There is no such evidence. No one has the slightest idea of how neurons or a network of neurons could produce self-awareness. If you try to suggest that somehow the fact of all of the atoms being connected produces self-awareness, we can point out that according to such reasoning the connected atoms in a crystal lattice should be self-aware, or the densely packed and connected vines in the Amazon forest should be self-aware.

A good answer to question 24743 is to say that there is no obvious reason why one set of atoms in a brain would be more self-aware than any other set of atoms with the same abundances of elements, and that such a thing is one of many reasons for thinking that our self-awareness does not come from our brains, but from some deeper reality, probably a spiritual reality.

In question 4922, someone asks about the anthropic principle, asking whether it is a tautology, or “is there something more substantive behind it.” The anthropic principle (sometimes defined as the principle that the universe must have characteristics that allow observers to exist in it) is a principle that was invoked after scientists discovered more and more cases of cosmic fine-tuning, cases in which our universe has immensely improbable characteristics necessary for living beings to exist in it. You can find many examples of these cases of cosmic fine-tuning by doing a Google search using either the phrase “anthropic principle” or “cosmic fine-tuning,” or by reading this post or this post.

The answer given to question 4922 by philosopher Nicholas D. Smith is a poor one. Smith says the anthropic principle “strikes me as neither a tautology nor as something that has anything 'more substantive behind it.' " Whether we can derive any principle like the anthropic principle from the many cases of cosmic fine-tuning is debatable, but clearly there is something enormously substantive that has triggered discussions of the anthropic principle. That something is the fact of cosmic fine-tuning. If our universe has many cases of having just the right characteristics, characteristics fantastically unlikely for a random universe to have, that philosophically is a very big deal, and one of the most important things scientists have ever discovered – not something that can be dismissed as lacking in substance.

Against all odds, our universe got many "royal flushes"

In question 40, someone asks a classic philosophical question: “Why does anything exist?” The questioner says, “Wouldn't it be more believable if nothing existed?” The answer to this question given by philosopher Jay L. Garfield is a poor one. After suggesting that the questioner read a book by Wittgenstein (the last thing anyone should do for insight on such a matter), Garfield merely suggests that the question “might not really be a real question at all.” That's hardly a decent answer to such a question.

An intelligent response to the question of “why is there something rather than nothing” would be one that acknowledged why the question is an extremely natural one and a very substantive question indeed. It is indeed baffling why anything exists. Imagining a counter-factual, we can imagine a universe with no matter, no energy, no minds, and no God. In fact, such a state of existence would be the simplest possible state of existence. And we are tempted to regard such a simplest-possible state of existence as being the most plausible state of existence imaginable, for if there were eternal nothingness there would be zero problems of explaining why reality is the way it is. We can kind of get a hint as to a possible solution to the problem of existence, that it might be solved by supposing an ultimate reality the existence of which was necessary rather than contingent. But with our limited minds, we probably cannot figure out a full and final answer as to why there is something rather than nothing. We have strong reason to suspect, however, that if you fully understood why there is something rather than nothing, you would have the answer to many other age-old questions.

In question 3363, a person very intelligently states the following:

When I think about the organic lump of brain in my head understanding the universe, or anything at all, it seems absurdly unlikely. That lump of tissue seems to me more like a pancreas than a super-computer, and I have a hard time understanding how organic tissue is able to reach conclusions about the universe or existence.

We get an answer from philosopher Allen Stairs, but only a poor one. Stairs claims, “Neuroscientists will be able to tell you in a good deal of detail why the brain is better suited to computing than the pancreas is.” This statement implies that neuroscientists have some idea of how it is that a brain can think or create ideas or generate understanding of abstract concepts. They have no such thing. As discussed here, no neuroscientist has ever given a remotely persuasive explanation as to how a brain could understand anything or generate an idea or engage in abstract reasoning. A good answer to question 3363 would have commended the person raising the question, saying that he has raised a very good point that has still not been answered, and has at least brought attention to an important shortcoming of modern neuroscience. Philosophically the point raised by question 3363 is a very important one. The lack of any coherent understanding as to how neurons could produce mental phenomena such as consciousness, understanding and ideas is one of the major reasons for rejecting the idea that the mind is purely or mainly the product of the brain. Many other reasons are discussed at this site.

In question 4165 a person raises the topic of near-death experiences, and asks whether philosophy has an opinion on this type of experience. The answer we get from Allen Stairs is a poor one. He attempts to argue that “it's not clear that it would do much to support the idea that the mind is separate from the body,” even if someone reported floating out of his body and seeing some information that was taped to the top of a tall object, information he should have been unable to see from an operating table. This opinion makes no sense. Such evidence would indeed do much to support the idea that the mind is separate from the body. This type of evidence has already been gathered; see here for some dramatic cases similar to what the questioner discussed (verified information that someone acquired during a near-death experience, even though it should have been impossible for him to have acquired such information through normal sensory experience). 

Stairs states the following to try to support his strange claim that repeated reports of people floating out of their bodies do not support the idea that the mind is separate from the brain:

How would that work? Does the bodiless mind have eyes? How did the interaction between whatever was up there on top of that tall object and the disembodied mind work? How did the information get stored? How did the mind reconnect with the patient's brain? The point isn't that the mind must be embodied. The point is that a case like this would only amount to good evidence for minds separate from bodies if that idea gave us a good explanation for the case. As it stands, it's not clear that it gives us much of an explanation at all, let alone the best one.

Stairs seems to be appealing here to a kind of principle that something isn't an explanation if it raises unanswered questions. That is not a sound principle at all, and in general in the history of science we find that important explanations usually raise many unanswered questions. For example, if we were to explain the rotation speeds of stars around the center of the galaxy by the explanation of dark matter, as many astrophysicists like to do, that raises quite a few unanswered questions, such as what type of particle dark matter is made up of, and how dark matter interacts with ordinary matter.

As for Stairs' insinuation that postulating a mind or soul separate from the body is “not much of an explanation at all,” that's not at all true. By postulating such a thing, it would seem that we can explain many things all at once. By postulating a soul as a repository of our memories, we can explain why people are able to remember things for 50 years, despite the very rapid protein turnover in synapses, which should prevent brains from storing memories for longer than a few weeks. By postulating a soul as a repository of our memories, we can explain why humans are able to instantly recall old and obscure memories, something that cannot be plausibly explained with the idea that memories are stored in brains (which creates a most severe “how could a brain instantly find a needle in a haystack” problem discussed here). By postulating a soul as the source of our intelligence, we can explain the fact (discussed here) that epileptic children who have hemispherectomy operations (the surgical removal of half of their brains) suffer only slight decreases in IQ, or none at all. By postulating a soul, we can explain how humans score at a 32 percent accuracy on ganzfeld ESP tests in which the expected chance result is only 25 percent (ESP being quite compatible with the idea of a soul). And by postulating a soul apart from our body, we can explain why so many people have near-death experiences in which they report their consciousness moving out of their bodies. Far from being “not much of an explanation at all,” as Stairs suggests, postulating a soul separate from the body would seem to let us explain quite a few things in one fell swoop.

Monday, August 20, 2018

Some Poor Answers at the "Ask Philosophers" Site

The “Ask Philosophers” website (www.askphilosophers.org) is a site that consists of questions submitted by the public, with answers given by philosophers. No doubt there is much wisdom to be found at this site, although I found some answers that were poor or illogical. Below are some examples.

In Question 27225 someone asks the excellent question “if Order and Reason are a part of Nature” or if “this is simply how humans view things and try to make sense of things.”

Philosopher Peter S. Fosl answers this question by saying this:

For myself, I think the traditions of philosophical skepticism have raised serious doubts about whether or not this question can be finally answered. It seems, given the apparent lessons of those traditions, that it is wisest to suspend judgment on the question but nevertheless to keep inquiring and to remain open to the chance that we might figure it out.

Given what we know about the fine-tuning of the universe's fundamental constants and the laws of nature, this answer is a poor one. We live in a universe with astonishing order and fine-tuning. To give one example of many, each proton in the universe has the same mass, a particular mass 1836 times greater than the mass of each electron. Despite this mass difference, the absolute value of the electrical charge of each proton is precisely the same (to more than fifteen decimal places) as the absolute value of the electrical charge of each electron. Were it not for this “coincidence,” which we would not expect in even 1 in a trillion random universes, life could not exist in our universe, for (as discussed by the astronomer Greenstein) the electromagnetic repulsion between particles would be so great that planets would not be able to hold together. As we live in a universe that has many such “coincidences” necessary for our existence, the wise way to answer question 27225 is to say that order and reason seem to be abundantly manifest in our universe.
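To give a rough sense of how exact this charge match must be, here is a back-of-the-envelope calculation. It is my own illustrative sketch (not taken from Greenstein), and it simply asks how large a proton/electron charge mismatch could be before the electrical repulsion between two "neutral" hydrogen atoms would rival their gravitational attraction.

```python
# A back-of-the-envelope sketch (my own illustration) of why the proton/electron
# charge match must be so exact. If the two charges differed by a fraction delta,
# every "neutral" hydrogen atom would carry a net charge of delta * e, and bulk
# matter would repel itself electrically. How small must delta be before that
# repulsion stays below gravity?
import math

k  = 8.988e9     # Coulomb constant, N m^2 / C^2
G  = 6.674e-11   # gravitational constant, N m^2 / kg^2
e  = 1.602e-19   # elementary charge, C
mH = 1.67e-27    # mass of a hydrogen atom, kg

# For two hydrogen atoms each carrying a residual charge delta * e:
#   F_electric / F_gravity = delta^2 * (k * e^2) / (G * mH^2)
ratio_for_full_charge = (k * e**2) / (G * mH**2)
print(f"k e^2 / (G mH^2) ~ {ratio_for_full_charge:.1e}")                      # ~1.2e36

critical_delta = 1 / math.sqrt(ratio_for_full_charge)
print(f"mismatch at which repulsion equals gravity ~ {critical_delta:.1e}")   # ~9e-19
# A mismatch much larger than about one part in a billion billion would make the
# electrical repulsion between atoms overwhelm their gravity, which is the kind
# of consideration behind Greenstein's point about planets not holding together.
```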


Galaxy NGC 1398 (Credit: NASA)

In question 3435 someone asks the following:

I really don't understand what the big deal is with the apparent 'fine tuning' of the constants of the universe, or even if 'fine tuning' is even apparent! The conditions have to be just right for life to emerge, sure, but so what? Conditions have to be just right for many things in the universe to occur, but we don't always suspect an outside agent as responsible.

The following answer is given by philosopher Jonathan Westphal:

Suppose human life is extremely improbable. What does that show? Alas, again the answer is, absolutely nothing at all. The improbable sometimes happens, although, of course, not very often! We should thank heaven that it did!

This answer is a poor one. We use probability all the time to reach conclusions about what happened and who was responsible for it. The more improbable something is, the more justified we may be in judging that something more than mere chance was involved. You do not justify ignoring an appearance of intention or design by invoking a principle that “the improbable sometimes happens.” Such a point is easily dismissed by pointing out that something that serves a favorable functional purpose virtually never happens by chance.

If human life appeared despite enormous odds against it (such as the odds of throwing a pack of cards into the air and it forming by chance into a house of cards), that would seem to be an extremely important clue to the nature of reality, and not at all something that should be dismissed as something that means “absolutely nothing at all.” If you were walking in the woods, and saw a garden with 40 long neat rows of flowers, with an equal space between each row, you would be absolutely justified in assuming that some design and purpose led to this arrangement; and you would chuckle at the very bad judgment of anyone who claimed the arrangement had occurred by chance, on the grounds that “the improbable sometimes happens.”

In Question 221, a person asks the following:

I heard about the analogy of a computer and the mind, but I'm fuzzy about the connection. Please help!

We then get an answer from Peter Lipton that includes the following:

What makes the analogy attractive is the thought that mental states might also be functional states. Thus the same kind of thought might be 'run' on or 'realized' in different physical states on different occasions, just as the same program might be run on different types of computer hardware. One attraction of this idea is that it seems to capture the intuition that mental states are not simply identifiable with lumps of matter, while avoiding any suggestion that they are spooky non-physical stuff.

This answer is a poor one. It seems to encourage the very erroneous idea that the mind is like a computer by arguing that software (a computer program) is somehow like thought. A thought is vastly different from software. Rather than trying to argue for the mind being like a computer, the answer should have stressed that the two are drastically different. A computer is a physical thing, but a mind is a non-physical thing. A mind has life experiences, thoughts, feelings, and ideas, none of which a computer has. So it makes no sense to say the mind is like a computer. It is not like any computer that we know of. Also, Lipton erroneously suggests we should avoid thinking of mental states as non-physical, which makes no sense, because mental states are non-physical.

Question 317 is this question:

How do thoughts exist in our brains? How are they stored? Is this a chemical or electrical process?

The answer provided by Louise Antony is a poor one. She states, “The most plausible proposal about what kinds of states these might be is, in my view, the view that says that thoughts are actually sentences in a 'language of thought', expressed by means of some kind of neurological code, on analogy with the 'machine language' employed by computers at the most basic level.” This idea is not plausible at all, and there is no evidence for it. Antony gives no neuroscience facts to support it.

The idea that our thoughts could be stored using some neurological code involves a host of problems. One problem (discussed at length here) is that there is no place in the brain that could serve as a plausible site where memories could be stored for decades. The leading theory of memory storage in the brain is that memories are stored in synapses. But that theory is completely implausible, for we know that the proteins that make up synapses have average lifetimes of less than two weeks. Another problem (discussed here) is that we can imagine no plausible scheme by which our thoughts could ever be translated into information that could be stored in the brain using some neurological code. A study of how computers store information will show that such a thing involves all kinds of sophisticated translation systems, such as the scheme by which letters are converted into numbers (the ASCII system), and another scheme by which such numbers are converted from decimal to binary. Such translation is easy for a computer, but it is all but inconceivable that such translations could be going on in our brains, which never got anything like the ASCII system contrived by human designers. Then there is the huge "instantly finding the needle in the haystack" problem (discussed here): we know of no way in which a brain could ever instantly retrieve memories if they were stored in brains, the brain lacking any of the things we have in computers that allow for fast information retrieval (things such as indexing, sorting, and hashing). If there were a “neurological code” by which the brain stored information, we would have discovered it already; but no such thing has been discovered.
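For comparison, here is a minimal sketch of the two pieces of machinery a computer needs for instant recall: an agreed encoding scheme and an index. The example data are invented, and nothing in the sketch is claimed to exist in the brain; the point is that each piece of machinery is explicit and deliberately designed.

```python
# A minimal sketch (my own illustration, invented example data) of what instant
# retrieval requires in a computer: first an encoding scheme, then an index.

# 1. Encoding: the sentence is translated into numbers via the ASCII table.
fact = "Mary has a black cat"
ascii_codes = [ord(ch) for ch in fact]     # e.g. 'M' -> 77
print(ascii_codes[:5])                     # [77, 97, 114, 121, 32]

# 2. Indexing: a hash table maps a retrieval cue straight to the stored record,
#    so lookup takes one step rather than a search through every stored item.
memory_store = {}                          # hypothetical store, for illustration only
memory_store["Mary's cat"] = fact          # write: cue -> encoded record
print(memory_store["Mary's cat"])          # read: instant, via hashing of the cue

# Without an agreed encoding scheme and without any index, the only alternative
# is a brute-force scan of every stored record -- the "needle in a haystack"
# problem described above.
```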

Far from being “the most plausible proposal,” the possibility mentioned by Antony is a very implausible proposal, and an idea that no one has ever been able to sketch out in any detailed and credible way. Difficulties such as those I have mentioned should have been mentioned in Antony's answer, and she should have said that because of such reasons, we do not know that our memories or thoughts are stored in our brains, and do not know that our thoughts exist in our brains. Our thoughts may exist as part of our souls rather than our brains, or our thoughts may have a non-local existence apart from our body, just as the number pi exists independently of any circle.

In Question 2354, someone asks, “Is telepathy possible or is this just a magician's trick?” We get a poor answer from Allen Stairs. Very inconsistently, he says, “I suspect that it is not possible,” but then mentions ESP experiments using the Ganzfeld protocol in which “receivers are able to pick the correct target at a rate significantly above chance.” He doesn't mention the numbers, but in the Ganzfeld experiments the average success rate is about 32% (as discussed here), compared to a rate of only 25% that someone would get by chance. Given such overwhelming evidence for ESP, why would anyone say that telepathy “is not possible”? Later Stairs says “while there is some evidence on behalf of telepathy, it's very far from making a strong case.” But why would anyone claim on the one hand that something “is not possible,” and on the other that “there is some evidence” for it? That makes no sense. The evidence for telepathy and other forms of psi is extremely strong. The Ganzfeld experiments would by themselves be adequate evidence for ESP, and there are many other experiments (such as those done by Joseph Rhine with Hubert Pearce) in which the success rate was so high that it constitutes overwhelming evidence for telepathy, very much making exactly the strong case that Stairs denies.
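A rough calculation shows why a 32 percent hit rate against a 25 percent chance expectation is so significant. The sketch below is my own illustration; the total of 3,000 sessions is an assumed figure chosen only to show the arithmetic, not a count taken from any particular meta-analysis.

```python
# A rough sketch (my own illustration) of the statistical weight of a 32% hit
# rate when chance predicts 25%. The session count is an ASSUMED figure.
import math

p0 = 0.25           # chance hit rate for a one-in-four guess
p_observed = 0.32   # approximate ganzfeld hit rate quoted above
n = 3000            # assumed number of sessions, for illustration only

standard_error = math.sqrt(p0 * (1 - p0) / n)
z = (p_observed - p0) / standard_error
print(f"z-score ~ {z:.1f}")     # roughly 9 standard deviations above chance

# Under the chance hypothesis, a deviation of that many standard errors is
# astronomically unlikely; the larger the true session count, the more extreme
# the result becomes.
```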

In Question 5176 someone asks, “Is it a common view among philosophers that human beings are simply biological computers?” Eddy Nahmias answers by telling us, “There are few substance dualists (who think the mind is a non-physical entity).” This is not accurate. There are many philosophers who think the mind is a non-physical entity.

Question 24702 asks the following:

Assuming that the multiverse account of the universe is true -- and every possible reality is being simultaneously played out in an infinite number of parallel universes -- am I logically forced into accepting a nihilistic outlook on life? Or is it still possible to accept the truth of the multiverse account and still rationally believe that the pursuit of life goals is both meaningful and valuable, despite the fact that every possible outcome -- or potential reality -- is unfolding somewhere in another parallel universe?

In response to this question, philosopher Stephen Maitzen gives a poor answer. He states the following:

The beings very similar to you who inhabit other universes are at best "counterparts" of you, which leaves open the question "What will you do with your life?" It may be well and good if one of your counterparts works hard to achieve wisdom, promote justice, or whatever, in some other universe. But his/her hard work isn't yours and doesn't occur in your universe.

Instead of this lame answer, Maitzen should have pointed out the lack of any empirical basis for believing in any universe other than our own. He should have asked the questioner: why are you assuming the truth of an infinitely extravagant claim for which there is no evidence?

In my next post I will discuss some additional examples of poor answers given at the "Ask Philosophers" web site. 

Postscript: Today's "Question of the Day" on the "Ask Philosophers" site ends by asking, "What then, prevents any layman from calling himself a philosopher a priori and considering himself equal to you?" The answer by philosopher Allen Stairs rings with superiority and elitism. He says this:

You may still wonder: what does it actually take to make someone a bona fide philosopher?...People who have several publications in respectable philosophy journals would count, for example. So would people with PhDs in philosophy who have positions in philosophy departments at accredited universities. Such folk are paradigm cases of philosophers (at least, in the early 21st century in the west.) People recognized as philosophers by paradigm-case philosophers will count. People similar to paradigm-case philosophers are candidates for being counted as philosophers; the stronger the similarity the stronger the case. 

This is a poor answer. A good answer to the question, "What then, prevents any layman from calling himself a philosopher a priori and considering himself equal to you?" is: nothing at all.  To philosophize is the birthright of every human, and anyone who thinks deeply on any complex topic has every right to call himself a philosopher. Philosophers should strive for humility, rather than mounting some high horse and calling themselves "paradigm-case philosophers."  There is no reason why we should be more inclined to accept or reject a philosophical argument of a PhD than to accept or reject the philosophical argument of a butcher, a baker or a candlestick maker.  All are equal on the battlefield of philosophical argumentation. 

Thursday, August 16, 2018

How Could a Mere Microtrace of DMT Produce a Near-Death Experience?

Near-death experiences (NDEs) first came to public attention in the 1970s with the publication of Raymond Moody's book Life After Life. Patching together elements from different accounts, Moody described an archetypal near-death experience, while noting that most accounts include only some of the elements in that archetype. The archetypal NDE included elements such as a sensation of floating out of the body, feelings of peace and joy, a life review that occurs very quickly or in some altered type of time, a passage through a tunnel, an encounter with a being of light, and seeing deceased relatives. Since Moody's original book, near-death experiences have been the subject of extensive scientific study, with very many accounts published and collected by other authors.

A recent scientific paper hints at a natural explanation for near-death experiences, by suggesting that they may be caused by the hallucinogenic drug DMT. The paper was entitled “DMT Models the Near-Death Experience.” The hypothesis hinted at (that DMT may be an explanation for near-death experiences) is hardly credible. 99% of the people who have had near-death experiences never used the drug DMT. While some have suggested that there may be the faintest traces of this drug in the human body, the facts I cite at the end of this post will clarify that the human body does not have even a thousandth of the amount of DMT necessary to create some type of unusual mental experience.

The paper authors (Timmermann and others) made use of what is called the Greyson scale. Created by near-death experience researcher Bruce Greyson, the Greyson scale asks 16 questions about reported aspects of a near-death experience. For each question a person can give an answer of 0 to mean “no,” 1 to mean something moderately extraordinary, or 2 to mean something highly extraordinary. For example, question 12 is “Did you feel separated from your body?” The three possible answers are 0 to mean “No,” 1 to mean “I lost awareness of my body,” and 2 to mean “I clearly left my body and existed outside of it.” The maximum score on the Greyson scale is therefore 32 (16 questions worth at most 2 points each).
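For readers unfamiliar with the scale, the scoring is simple addition, as the little sketch below shows (the example answers in it are invented purely to illustrate the arithmetic).

```python
# A minimal sketch (my own illustration, invented answers) of how a Greyson
# scale total is tallied: 16 questions, each answered 0 ("no"), 1 (a moderate
# form of the experience), or 2 (the strong form), summed into one score.
from typing import Sequence

def greyson_score(answers: Sequence[int]) -> int:
    """Sum 16 item scores, each of which must be 0, 1, or 2."""
    if len(answers) != 16 or any(a not in (0, 1, 2) for a in answers):
        raise ValueError("expected 16 answers, each 0, 1, or 2")
    return sum(answers)

example_answers = [2, 1, 0, 2, 2, 1, 0, 0, 1, 2, 0, 2, 1, 0, 0, 1]   # invented
print(greyson_score(example_answers))    # 15
print(greyson_score([2] * 16))           # 32, the maximum possible score
```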

Timmermann and his colleagues injected 13 subjects with the drug DMT. That is fewer than the 15 subjects per study group recommended for reliable data with a low chance of false alarms. Timmermann and his colleagues then asked the Greyson scale questions of these subjects, and compared their answers to those of 13 randomly selected people who reported having had near-death experiences.

There's something rather suspect in the latter part of this methodology. There has already been much data collected in regard to the Greyson scale scores of those who experience near-death experiences, so why not use some of that data (which involves large numbers of people), rather than randomly selecting only 13 people who experienced near-death experiences?

Figure 4 of the study compares the Greyson scale answers of the people who experienced near-death experiences with the answers of those who received the DMT injections. We see quite a discrepancy in several areas. The main differences are below:

  • The people who had NDE's had a much higher tendency to report separation from the body than the DMT-injected people.
  • The people who had NDE's had a much higher tendency to report altered time perception than the DMT-injected people.
  • The people who had near death experiences had a much higher tendency to report encountering spirits of the deceased or other spirits than the DMT-injected people.
  • The people who had near death experiences had a higher tendency to report speeded-up thoughts than the DMT-injected people.
  • The people who had near death experiences had a much higher tendency to report "understanding everything" than the DMT-injected people.
  • The people who had near death experiences had a much higher tendency to report encountering a border or point of no return than the DMT-injected people.
  • The people who had near death experiences had a much higher tendency to report having precognitive visions than the DMT-injected people.
  • The people who had near death experiences had a much higher tendency to report experiencing ESP than the DMT-injected people.
  • The people who had near death experiences had a much higher tendency to report experiencing a life-review than the DMT-injected people.

In light of these differences, it is clear that the authors of this study should not have used “DMT Models the Near-Death Experience” as the title of their paper, and should not have claimed “DMT Induces Near-Death Type Experiences” in their paper.

In order for us to judge how closely these DMT-induced experiences resembled near-death experiences, we would need to have first-hand accounts of the experiences from the people who had the experiences. It would have been very easy for a scientific study to have produced such accounts. Timmermann and his colleagues could have simply had each of his subjects write (or recite into a tape recorder) a 500-word or 1000-word account describing their experiences. Then those accounts could have been included verbatim as appendixes in the scientific paper. If that had been done, then we would be able to judge how closely the experiences resembled near-death experiences, not only by looking for points of similarity between DMT experiences and near-death experiences, but also by looking at things that happened during the DMT experiences that did not happen in the near-death experiences. But since Timmermann and his colleagues have not included any such accounts in their paper, we have no way of accurately judging how closely the DMT experiences resemble near-death experiences. It could be that the DMT experiences were filled with all kinds of things that never happen in near-death experiences.

In fact, when we read previously published accounts of DMT experiences, such as those given here, we have reason to believe that DMT produces all kinds of weird stuff that doesn't show up in near-death experiences. The page includes all kinds of random hallucinatory stuff such as someone seeing his house unbuilding itself, someone reporting that there were slinky toys everywhere, someone reporting his computer looking sad, someone reporting his friends melting, and someone being swallowed by a being like an octopus.


It seems likely that not a single one of the 13 subjects had an overall experience closely resembling a near-death experience, given that the Timmermann paper does not publish a single account written by the subjects who were injected with DMT. If any one of the subjects had reported such an experience, it seems that the authors would have included such a compelling account in their paper, which might have persuaded many people of the very thing the authors were trying to provide evidence for.

The Timmermann paper says that it got subjects “by word of mouth,” so we may guess that the subjects were students of the professors. Such persons might have been people more likely to give positive answers when given the Greyson scale questions after their DMT experience, particularly if they knew that one of their professors was hoping to get enough positive answers to publish a paper saying DMT experiences are like near-death experiences. A better approach might have been to have advertised for subjects online.

A question of great relevance here is: is there any reason to believe that the human brain might be able to produce or release enough DMT to produce an extraordinary experience that was something like a hallucinogenic experience or near-death experience? DMT was found in the brains of rats, but only 20 nanograms per kilogram (as mentioned here). To get a mystical or hallucinatory or extraordinary experience in a human, you need about 20 milligrams of DMT, an amount a million times larger than 20 nanograms.

The question of DMT in the brain was clarified by David E. Nichols in a paper he authored in the Journal of Psychopharmacology. Speaking of DMT (also known as N,N-dimethyltryptamine), Nichols says, “It is clear that very minute concentrations of N,N-dimethyltryptamine have been detected in the brain, but they are not sufficient to produce psychoactive effects.” Addressing speculations that DMT is produced by the tiny pea-sized pineal gland in the brain, Nichols points out that the main purpose of the pineal gland is to produce melatonin, and that the gland produces only about 30 micrograms of melatonin per day. But the pineal gland would need to produce about 20 milligrams of DMT (about 660 times more than 30 micrograms) to produce a mystical or hallucinatory experience. “The rational scientist will recognize that it is simply impossible for the pineal gland to accomplish such a heroic biochemical feat,” says Nichols. In the article in which he states that, it is noted that “DMT is rapidly broken down by monoamine oxidase (MAO) and there is no evidence that the drug can naturally accumulate within the brain.” Strassman attempted to detect DMT in the brains of 10 human corpses, but was not able to find any.
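The dose comparisons above are simple unit arithmetic, which the following sketch (my own check of the numbers quoted above) makes explicit.

```python
# Simple unit arithmetic (my own check, not from the cited papers) behind the
# dose comparisons above: 20 milligrams versus the trace quantities reported.
milligram = 1e-3    # grams
microgram = 1e-6
nanogram  = 1e-9

psychoactive_dose = 20 * milligram    # roughly what a DMT experience requires
rat_brain_trace   = 20 * nanogram     # per kilogram, the level reported in rat brains
daily_melatonin   = 30 * microgram    # the pineal gland's daily melatonin output

print(f"{psychoactive_dose / rat_brain_trace:,.0f}")   # 1,000,000 -- "a million times larger"
print(f"{psychoactive_dose / daily_melatonin:,.0f}")   # ~667 -- "about 660 times more"
```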

Given these facts, it is quite absurd to suggest that near-death experiences are being produced by DMT in the brain. Judging from the amount of DMT in rat brains, we have about 100,000 times too little DMT in our bodies for DMT-produced experiences to appear. Web pages speculating that near-death experiences may be produced by DMT will typically tell us that DMT has been detected in rat brains, failing to tell us how much DMT was detected (merely trace levels 100,000 times too small to produce any remarkable mental experiences).

Postscript:  You can google for "DMT experiences" to get many accounts of what it's like to take DMT. When people take DMT, it is extremely common for them to quickly experience a kind of runaway, kaleidoscopic imagery with all kinds of fantastic bizarre details. For example, you might see something like this in front of you:



People taking DMT report encounters with an extremely wide variety of strange beings, that may include elves, reptiles, spiders, robots, jellyfish, extraterrestrials, spiritual figures, or an octopus. So you might see something like this:



We may contrast such exotic dream-like imagery with the 20+ "veridical" near-death experiences described here, in which people having near-death experiences reported no kind of weird psychedelic imagery, but just ordinary down-to-earth details of medical efforts to revive them (while they were unconscious)  or parts of a hospital they hadn't seen yet, details that were subsequently confirmed. There's a world of difference between such accounts and DMT trips. 

Another reason for rejecting a DMT explanation and all other brain chemistry explanations for near-death experiences is that near-death experiences have repeatedly been reported during cardiac arrests, after the brain has temporarily shut down (as it does within 20 seconds after the heart stops). Your brain cannot be tripping out on DMT when the brain's electrical activity has shut down.

Post-Postscript: See this video for a very detailed technical discussion by David E. Nichols PhD on why the idea of DMT existing in the pineal gland is a fantasy.  Although micro-traces of DMT have been found in cerebrospinal fluid, there is zero evidence that DMT exists in the human brain. At 25:08 in the Nichols video, he displays a slide saying, "There is no evidence to suggest that DMT can be accumulated within the brain or within neurons at significant concentrations; such inferences are either not supported by direct experimental evidence or are based on flawed experiments." 

Post-Post-Postscript:  A study discussed in 2022 logged the experiences of many DMT users. An article on the study states this:

"Michael also suggested the findings poked holes in a common myth which posits dying feels like a psychedelic trip because the human brain releases DMT upon death. If anything, he felt the trips his team chronicled had less in common with near-death experiences than they did with typical recounting of alien encounters and abductions."

Sunday, August 12, 2018

Two Reasons the Synapse Theory of Memory Storage Is Untenable

How is it that humans can remember things for decades? For decades neuroscientists have been offering an answer: that memories are stored when synapses are strengthened. But this idea has never made any sense. There are two gigantic reasons why it cannot be correct.

The first reason has to do with how long humans can remember things. People in their sixties or seventies can reliably remember things that they saw 50 or more years ago, even if nothing happened to refresh those memories in the intervening years. I have a long file where I have noted many cases in which I remember very clearly things I haven't thought about, seen or heard about in four or five decades, memories that no sensory experiences or thoughts ever refreshed. I have checked the accuracy of very many of these memories by using resources such as Google and youtube.com (where all kinds of clips from 1960's TV shows and commercials are preserved). A recent example was when I remembered a distinctive characteristic of the “Clutch Cargo” animated TV show (circa 1960) that I hadn't watched or thought about in 50 years, merely after seeing a picture of Clutch Cargo's head. The characteristic I remembered was the incredibly poor animation, in which only the mouths moved. Using youtube.com, I confirmed that my 50-year-old recollection was correct. A scientific study by Bahrick showed that “large portions of the originally acquired information remain accessible for over 50 years in spite of the fact the information is not used or rehearsed.”

Is this reality that people can remember things for 50 years compatible with the idea that memories are stored by a strengthening of synapses? Synapse strengthening occurs when proteins are added to a synapse, just as muscles are strengthened when additional proteins are added to a muscle. But we know that the proteins in synapses are very short-lived. The average lifetime of a synapse protein is less than a week. But humans can reliably remember things for 50 years, even information they haven't reviewed in decades. Remarkably, the length of time that people can reliably remember things is more than 1000 times longer than the average lifetime of a synapse protein.

The latest and greatest research on the lifetime of synapse proteins is the June 2018 paper “Local and global influences on protein turnover in neurons and glia.” The paper starts out by noting that one earlier study, from 2010, found that the average half-life of brain proteins was about 9 days, and that a 2013 study found the average half-life was about 5 days. The paper then notes in Figure 3 that the average half-life of a synapse protein is only about 5 days, and that all of the main categories of brain proteins (nuclear, mitochondrial, and so forth) have half-lives of less than 20 days.
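A quick calculation (my own illustration, using the five-day half-life figure just cited) shows what such rapid turnover implies over the span of a 50-year memory.

```python
# A small calculation (my own illustration) of what a 5-day half-life implies
# over 50 years. If a synaptic state depended on the particular protein
# molecules present when a memory formed, essentially none of those molecules
# would remain half a century later.
import math

half_life_days = 5
days = 50 * 365.25

half_lives_elapsed = days / half_life_days
print(f"half-lives elapsed in 50 years: {half_lives_elapsed:.0f}")   # ~3,650

# Fraction of the original molecules still present after 50 years:
log10_fraction = half_lives_elapsed * math.log10(0.5)
print(f"surviving fraction ~ 10^{log10_fraction:.0f}")               # ~10^-1100, effectively zero
```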

Consequently, it is absurd to maintain that long-term memory results from synapse strengthening. If synapse strengthening were the mechanism of memory storage, we wouldn't be able to remember things for more than a few weeks. We can compare the synapse to the wet sand at the edge of a seashore, which is an area where words can be written for a few hours, but where long term storage of information is impossible.

It may be noted that scientists have absolutely not discovered any effect by which synapses undergo any type of strengthening lasting years. Every single type of synapse strengthening ever observed is always a short-term effect not lasting for years.

There is another equally gigantic reason why it is absurd to maintain that memories are stored through synapse strengthening. The reason is that it is, in general, wrong to try to explain information storage by appealing to a mere process of strengthening. Strengthening is not storage. We know of many ways in which information can be stored, and none of them are cases of strengthening.

Below are some examples:

  1. People can store information by writing using a paper and pen. This does not involve strengthening.
  2. People can store information by using a typewriter to type on paper. This does not involve strengthening.
  3. People can store information by drawing pictures or making paintings. This does not involve strengthening.
  4. People can store information by taking photographs, either by using digital cameras, or old-fashioned film cameras. In neither case is strengthening involved.
  5. People can store information by using tape recorders. This does not involve strengthening.
  6. People can store information by using computers. This does not involve strengthening.

So basically every case in which we are sure information is being stored does not involve strengthening. What sense, then, does it make to claim that memory could be stored in synapses through strengthening?

In all of the cases above, information is stored in the same way. Some unit capable of making a particular type of impression or mark (physically visible or perhaps merely magnetic) moves over or strikes a surface, and a series of impressions or marks are made on the surface. Such a thing is not at all a process of strengthening.

Consider a simple example. You have a friend named Mary, and you one day learn that Mary has a black cat. Now let us try to imagine this knowledge being stored as a strengthening of synapses. There is no way we can imagine such knowledge being stored by a strengthening of synapses. If you happened to have stored in your brain the knowledge that Mary has a black cat, it could conceivably be that a strengthening of synapses might allow you to more quickly remember that Mary has a black cat. But there is no way that the fact of Mary having a black cat could be stored in your brain through a strengthening of synapses.

Every protein molecule of a particular type has exactly the same chemical contents – for example, every rhodopsin molecule has the same chemical contents. Unlike nucleic acids, which can store strings of information of indefinite length, a protein molecule cannot store arbitrary lengths of information. So we cannot imagine that there is some particular tweak of protein molecules added to a synapse (when the synapse is strengthened) that would allow information to be stored such as the fact that Mary has a black cat.

In his Nautilus post “Here's Why Most Neuroscientists Are Wrong About the Brain,” C. R. Gallistel (a professor of psychology and cognitive neuroscience) points out the absurdity of thinking that mere changes in synapse strengths could store the complex information humans remember. Gallistel writes the following:

It does not make sense to say that something stores information but cannot store numbers. Neuroscientists have not come to terms with this truth. I have repeatedly asked roomfuls of my colleagues, first, whether they believe that the brain stores information by changing synaptic connections—they all say, yes—and then how the brain might store a number in an altered pattern of synaptic connections. They are stumped, or refuse to answer....When I asked how one could store numbers in synapses, several became angry or diverted the discussion with questions like, “What’s a number?”

What Gallistel describes sounds dysfunctional: a pretentious neuroscientist community that claims to understand how memory can be stored in a brain, but cannot give anything like a plausible answer to basic questions such as “How could a number be stored in a brain?” or “How could a series of words be stored in a brain?” or “How could a remembered image be stored in a brain?” Anyone who cannot suggest plausible detailed answers to such questions has no business claiming to understand how a brain could store a memory, and also has no business claiming that a brain does store episodic or conceptual memories.

Gallistel suggests a radically different idea: that a memory is stored in a brain as a series of binary numbers. There is no evidence that this is true, and we have strong reasons for thinking that it cannot be true. One reason is that there is no place in the brain suitable for storing binary numbers, partly because nothing in the brain is digital; everything is organic. Another reason is that there is no plausible physiology by which a brain could write or read binary numbers. Another reason is that we cannot account for how a brain could possibly convert words and images into binary numbers. A computer does this through numerical conversion subroutines and a character table called the ASCII code. Neither numerical conversion subroutines nor the ASCII code is available for use within the brain.
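To make the contrast concrete, here is a minimal Python sketch (my own illustration, not anything proposed by Gallistel or by neuroscientists) of the conversion machinery a computer relies on: helper routines I have named text_to_binary and binary_to_text map each character of a sentence to its ASCII code and back. No counterpart to this encoding and decoding apparatus has ever been identified in the brain.

```python
# Minimal sketch (illustration only) of how a computer stores a sentence as
# binary digits, using the ASCII character table -- machinery with no known
# counterpart in the brain.

def text_to_binary(text):
    """Convert each character to its 8-bit ASCII code, written in binary."""
    return ' '.join(format(ord(ch), '08b') for ch in text)

def binary_to_text(bits):
    """Reverse the conversion: read each 8-bit chunk back as a character."""
    return ''.join(chr(int(chunk, 2)) for chunk in bits.split())

encoded = text_to_binary("Mary has a black cat")
print(encoded)                  # 01001101 01100001 01110010 ...
print(binary_to_text(encoded))  # Mary has a black cat
```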

In short, the prevailing theory of memory storage advanced by neuroscientists is untenable. Why do they advance this theory? Because they have no better story to tell us. There is actually no theory of memory storage in the brain that can stand up to prolonged critical scrutiny. As discussed at length here, there is no part of the brain that is a plausible candidate for a place where 50-year-old memories could be stored. As discussed here, there is no part of the brain that acts like a write mechanism or a read mechanism for stored memories.

What our neuroscientists should be doing is telling us, “We have no workable theory as to how a brain could store and instantly retrieve memories.” But rather than admit to such a lack of knowledge, our neuroscientists continue to profess the untenable synapse theory of memory. For they want at all costs to keep us away from a very plausible idea they abhor: that episodic and conceptual memory is a spiritual effect (a capability of the human soul) rather than a neural effect.

Many think that there is an exact match between the assertions of scientists and observations. But this is not correct. The diagram below shows something like the real situation. Claims such as the claim that memories are stored in synapses are part of the blue area, along with many dogmatic and overconfident pronouncements such as string theory, multiverse speculations and evolutionary psychology. The idea that memory is an aspect of the human soul rather than the brain is supported not only by many observations in the green area of the diagram (observations that a typical scientist would not dispute), but also by many observations in the red area (such as the massive evidence for psychic phenomena). See the posts at this site for a discussion of very many of these observations.


[Diagram: scientist overconfidence]
Do not be fooled by the small number of scientific papers that claim to have found evidence for an engram or memory trace. As discussed here, I examined about 10 such papers, and found that almost all of them have the same defect: the number of animals tested was way below the standard of 15 animals per study group, meaning low statistical power and a very high chance of a false alarm. Besides relying on subjective judgments of animal “freezing” behavior, the papers all deal with small animals, and tell us nothing about human memory.
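To see why such tiny study groups are unreliable, here is a rough Python simulation (my own sketch, using assumed effect sizes and noise levels rather than numbers from any of the papers) of how often a simple two-group comparison detects a genuinely large effect with 7, 15, or 30 animals per group.

```python
# Rough illustration (assumed numbers, not data from the engram papers):
# how often a two-group comparison detects a real effect of moderately
# large size (0.8 standard deviations) at p < 0.05.
import numpy as np
from scipy import stats

def detection_rate(n_per_group, true_effect=0.8, trials=5000, alpha=0.05):
    """Fraction of simulated experiments reaching p < alpha (statistical power)."""
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_per_group)   # e.g. freezing scores, in SD units
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(control, treated)
        hits += p_value < alpha
    return hits / trials

for n in (7, 15, 30):
    print(f"{n} animals per group -> power ~ {detection_rate(n):.2f}")
# With only ~7 animals per group, even a fairly large effect is detected
# well under half the time, so a single "positive" result carries little weight.
```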

I can give a baseball analogy for the theory that episodic and conceptual memories are stored in the brain. We can compare such a theory to a batter at the plate. If such a theory includes a plausible explanation of how human experiences and concepts could be stored as neural states, overcoming the extremely grave encoding problem discussed here, we can say the theory at least made contact with the pitched ball. If such a theory can credibly explain how memories could be written to the brain, we can say such a theory has reached first base. If such a theory can explain how a stored memory could last for 50 years, despite the very rapid protein turnover in brains and synapses, we can say such a theory has reached second base. If such a theory can explain how humans can so often instantly remember obscure things they learned or experienced decades ago, overcoming the seemingly insurmountable "finding the needle in a haystack" problem discussed here, we can say such a theory has reached third base. If such a theory were to be confirmed by someone actually extracting learned information from a dead brain, we could say such a theory had reached home plate and scored a run. But by this analogy the theory of conceptual and episodic memory storage in the brain has never even made contact with the ball, let alone reached first base. For none of these things has been accomplished.

Postscript: The 2017 paper "On the research of time past: the hunt for the substrate of memory" was written by some leading neuroscientists. It states on page 9 that "synaptic weight changes can now be excluded as a means of information storage." The paper thereby disavows the main theory about memory storage that scientists have been pushing for the past few decades, the very theory I have rebutted in this post. The paper then suggests, in a rather vacillating and tentative fashion, a variety of alternate possibilities regarding the storage of memory in the brain, without singling out any of them as the most plausible. See this post for why none of the alternate possibilities mentioned is a very credible idea.

Wednesday, August 8, 2018

Why Crummy Research Can Get Highly Hailed

The following thing happens again and again in the world of science and science reporting.
  1. Someone will publish a paper providing feeble or faulty evidence for something that the scientific community really wants to believe in.
  2. The flaws of the paper will be overlooked, and the paper will be hailed as a great research advance.
  3. Innumerable web sites will hail the feeble research, often hyping it and making it sound as if it were proof of something.
  4. Countless other scientific papers in future years will cite the faulty paper, meaning that there will be a gigantic ripple effect in which weak science gets perpetuated.

We might call this last effect “the afterlife of bad science.” Flawed and faulty research will enjoy a long afterlife, reverberating for years after its publication, particularly if the research matches the expectations and worldview of the scientific community.

We see an example of this in the case of a 2002 paper that was entitled “Molecular Evolution of FOXP2, a Gene Involved in Speech and Language.” The 2002 paper said, “We show that the FOXP2 gene contains changes in amino acid coding and a pattern of nucleotide polymorphism, which strongly suggest that this gene has been the target of selection during recent human evolution.”

The study rang the bell of those favoring orthodox explanations of the origin of man and the origin of human language. Since FOXP2 was being called a “language gene,” the study suggested that natural selection might have helped to spread this language gene, and that natural selection may have had a role in the origin of language. In the following 16 years, the study was cited more than 1000 times by other scientific papers.

With that many citations, we can say that the study won superstar status. But no such thing should have occurred, for the study had a glaring defect. According to an article in Nature, “It was based on the genomes of only 20 individuals,” and “has never been repeated.” The authors had no business drawing conclusions about the natural selection of a gene from such a small sample size.

Now, according to the journal Nature, a study using a much larger data set has overthrown the 2002 study. We read the following:

They found that the signal that had looked like a selective sweep in the 2002 study was probably a statistical artefact caused by lumping Africans together with Eurasians and other populations. With more — and more varied — genomes to study, the team was able to look for a selective sweep in FOXP2, separately, in Africans and non-Africans — but found no evidence in either.

No selective sweep means no evidence of natural selection acting on the FOXP2 gene, which wipes out the genetic evidence that had been offered for a Darwinian explanation of the origin of language. But what could be more predictable? Whenever you do a scientific study with a very small sample size, there is always a very large chance of a false alarm.
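The actual tests for selective sweeps are more elaborate, but the general danger of lumping genetically different populations can be shown with a toy Python example of my own (a classic population-genetics artifact known as the Wahlund effect, not the method used in the Nature study): two populations, each perfectly in equilibrium on its own, look wildly out of equilibrium when pooled.

```python
# Toy example (my own, not the Nature study's method) of a spurious signal
# created by lumping two populations with different allele frequencies:
# each population is in Hardy-Weinberg equilibrium on its own, but the
# pooled sample appears wildly out of equilibrium (the Wahlund effect).
import numpy as np
from scipy import stats

def genotype_counts(p, n):
    """Expected counts of genotypes (AA, Aa, aa) for allele frequency p."""
    return np.array([p**2, 2*p*(1-p), (1-p)**2]) * n

def hwe_pvalue(counts):
    """Chi-square test of observed genotype counts against equilibrium."""
    n = counts.sum()
    p = (2*counts[0] + counts[1]) / (2*n)    # estimated allele frequency
    expected = genotype_counts(p, n)
    chi2 = ((counts - expected)**2 / expected).sum()
    return stats.chi2.sf(chi2, df=1)

pop_a = genotype_counts(0.9, 5000)    # allele common in population A
pop_b = genotype_counts(0.2, 5000)    # allele rare in population B

print("Population A alone:", hwe_pvalue(pop_a))         # p = 1.0, no signal
print("Population B alone:", hwe_pvalue(pop_b))         # p = 1.0, no signal
print("Lumped together:  ", hwe_pvalue(pop_a + pop_b))  # p ~ 0, spurious "signal"
```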

The story here is a familiar one: a feeble study gets highly hailed because it fits the expectations and beliefs of the scientific community. Just such a thing goes on over and over again in the field of neuroscience. Every year multiple studies are published that are trumpeted as evidence for neural memory storage in animals, as evidence for an engram. Usually such studies suffer from exactly the same problem as the 2002 FOXP2 study: too small a sample size. As discussed here, these “engram” studies typically suffer from even smaller sample sizes than the 2002 FOXP2 study, and may involve fewer than 10 animals per study group. With such a small sample size, the chance of a false alarm is very high. Similarly, almost all brain imaging studies involve sample sizes so small that they have little statistical power, and do not provide good evidence of anything.

But the scientific community continues to cite these underpowered studies over and over again, and the popular press trumpets such feeble studies as if they were good evidence of something. Critics don't seem to get anywhere complaining about “underpowered studies” and “studies of low statistical power.” Perhaps it's time to start calling these too-little-data studies examples of crummy research. “Crummy” is an informal word for something of poor quality, often because it is too small – as in, “You must be kidding if you think I'm going to live with you in your crummy studio apartment.”



Using the term “crummy research” for studies that draw conclusions or inferences based on too little data, we may say that a large fraction of the research in neuroscience and evolutionary biology is crummy research -- research such as animal studies that used too few animals, brain scans that used too few subjects, or natural history studies based on too few fossils or on DNA fragments that are too small. Evolutionary biologists have a bad habit of drawing conclusions from DNA data that is too fragmentary. The problem is that DNA has a half-life of only about 521 years, meaning that every 521 years half of a DNA sample decays away. So when an evolutionary biologist draws conclusions about the relation between humans and some hominid species that lived more than 200,000 years ago, such conclusions are based on only tiny fragments of DNA recovered from such very old specimens – only the tiniest fraction of the full DNA. Anyone who draws firm conclusions from such fragmentary data would seem to be giving us an example of crummy research.
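As a back-of-the-envelope illustration (my own arithmetic, taking the roughly 521-year half-life figure at face value), the fraction of original DNA surviving after t years is 0.5 raised to the power t/521, which collapses to practically nothing over the timescales evolutionary biologists are discussing:

```python
# Back-of-the-envelope arithmetic (my own), using the ~521-year half-life
# figure cited above: what fraction of the original DNA remains after t years?
def surviving_fraction(years, half_life=521.0):
    return 0.5 ** (years / half_life)

for age in (10_000, 50_000, 200_000):
    print(f"After {age:,} years: {surviving_fraction(age):.3e} of the DNA remains")
# After 200,000 years the surviving fraction is on the order of 10**-116 --
# effectively zero, which is why only short, damaged fragments are recovered.
```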

Some orthodox Darwinists probably gnashed their teeth when they read the new study showing no evidence of natural selection in the FOXP2 gene. For this gene was thought to be one of the few genes that showed evidence of such a thing. In the 2018 book Who We Are and How We Got Here by David Reich, a professor of genetics at Harvard Medical School, the author makes this revealing confession on page 9: “The sad truth is that it is possible to count on the fingers of two hands the examples like FOXP2 of mutations that increased in frequency in human ancestors under the pressure of natural selection and whose functions we partly understand.” 

Judging from this statement, there are merely 10 or fewer cases where we know of some mutation that increased in the human population because of natural selection. And now that FOXP2 is no longer part of this tiny set, the number is apparently 9 or fewer. But humans have something like 20,000 genes. If only 9 or fewer of these genes seem to have been promoted by natural selection, isn't this something that overwhelmingly contradicts the claim that human origins can be explained by natural selection? If such a claim were true, we would expect to find thousands of genes that had been promoted by natural selection. But the scientific paper “The Genomic Rate of Adaptive Evolution” tells us “there is little evidence of widespread adaptive evolution in our own species."

In the study here, an initial analysis found 154 positively selected genes in the human genome -- genes that seemed to show signs of being promoted by natural selection. But when the authors applied the Bonferroni correction (a standard statistical adjustment that tightens the significance threshold when thousands of genes are tested at once), they were left with only 2 genes in the human genome showing signs of positive selection (promotion by natural selection). That's only 1 gene in 10,000. Call it the faintest whisper of a trace -- hardly something inspiring confidence in claims that we are mainly the product of natural selection. The 2014 study here finds a similar result, saying, "Our overall estimate of the fraction of fixed adaptive substitutions (α) in the human lineage is very low, approximately 0.2%, which is consistent with previous studies." That's only about 1 in 500 fixed genetic changes showing signs of promotion by natural selection.
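For readers unfamiliar with it, the short Python sketch below (using simulated p-values, not the study's actual data) shows why such a correction matters: when roughly 20,000 genes are tested at once, chance alone hands you about a thousand genes with p < 0.05, so the Bonferroni procedure divides the significance threshold by the number of tests, and only the strongest signals survive.

```python
# Illustration with simulated p-values (not the study's data) of how a
# Bonferroni correction shrinks a list of "significant" genes.
import numpy as np

rng = np.random.default_rng(1)
n_genes = 20_000

# Pretend only 3 genes carry a genuine signal (tiny p-values);
# the other 19,997 p-values are pure noise, uniform on [0, 1].
p_values = np.concatenate([rng.uniform(1e-9, 1e-7, 3),
                           rng.uniform(0.0, 1.0, n_genes - 3)])

alpha = 0.05
naive_hits = int(np.sum(p_values < alpha))                  # uncorrected threshold
bonferroni_hits = int(np.sum(p_values < alpha / n_genes))   # corrected threshold

print("Genes passing p < 0.05 with no correction:", naive_hits)              # ~1,000, mostly noise
print("Genes passing the Bonferroni-corrected threshold:", bonferroni_hits)  # only the real few
```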