
Our future, our universe, and other weighty topics

Wednesday, June 21, 2017

Shermer's Faulty Case Against an Afterlife

Scientific American columnist Michael Shermer has a new column entitled “Why the 'You' in an Afterlife Wouldn't Really Be You.” Arguing against both spiritual concepts of an afterlife and technological concepts of a “digital afterlife,” Shermer attempts to give three arguments against the possibility of an afterlife.

His first argument is without merit, as it depends on an unproven assumption. Shermer argues:

First, there is the assumption that our identity is located in our memories, which are presumed to be permanently recorded in the brain: if they could be copied and pasted into a computer or duplicated and implanted into a resurrected body or soul, we would be restored. But that is not how memory works. Memory is not like a DVR that can play back the past on a screen in your mind. Memory is a continually edited and fluid process that utterly depends on the neurons in your brain being functional.

This argument is not valid against any of these three ideas of an afterlife: (1) the idea that you have some immaterial soul that will survive death in a kind of natural way, without any divine intervention required; (2) the Christian idea that the dead will be physically resurrected by some divine agent; (3) the idea that minds may be uploaded into computers in the future, providing people with a digital afterlife. The first of these ideas does not depend on the idea that memories are permanently recorded in the brain. A person believing in a soul may believe that memories are stored largely in some soul, and may deny the claim that memory depends on neurons (a claim scientists haven't proven). The argument also does not debunk the idea of a physical resurrection of the dead. In such a case the neurons of individuals would presumably be recreated. Nor does the argument debunk the idea of uploading minds into a computer. If our memories now depend on neurons (and there are reasons for doubting that), that is merely a current dependence, one that could in theory be overcome if some new type of computer could be created that could store the equivalent of human neural states.

In his second argument, Shermer attempts to stretch out one of the arguments made against mind uploading, and turn that into an argument against believing in any type of afterlife. Mind uploading is the idea that in the future it will be possible for people to transfer their consciousness into a computer. The idea is that we will be able to scan the human brain, and somehow figure out some neural pattern or synaptic pattern that uniquely identifies an individual. Then, it is argued, it will be possible to recreate this pattern in some super-computer. It is claimed that this could be a method of getting a digital afterlife. Some futurists claim that if someone were to have his brain scanned and then have his neural patterns transferred to a computer, that person could discard his body, and continue to live on indefinitely within a computer.

Here is how Shermer presents his second argument:

Second, there is the supposition that copying your brain's connectome—the diagram of its neural connections—uploading it into a computer (as some scientists suggest) or resurrecting your physical self in an afterlife (as many religions envision) will result in you waking up as if from a long sleep either in a lab or in heaven. But a copy of your memories, your mind or even your soul is not you. It is a copy of you, no different than a twin, and no twin looks at his or her sibling and thinks, “There I am.” Neither duplication nor resurrection can instantiate you in another plane of existence.

Minus the clumsy twin reference, this argument has considerable force against the idea of a digital afterlife through mind uploading. If I were to scan your brain and recreate your neural pattern inside a computer, that seems to be making a copy of your consciousness rather than a transfer of your consciousness, for two different reasons. The first is that the pattern would be recreated in a totally different medium (moved from a biological platform to an electronic platform). The second is that the mind upload seems to leave open the possibility of your biological body continuing to exist after the upload has completed, and that would seem to be the creation of a copy of your consciousness rather than a transfer.

But the copying argument has much less force (and perhaps no force) when used against the idea of a physical resurrection of the dead. In that hypothetical possibility, there is no transfer to a different medium. A physically resurrected body would be the same (or mostly the same) as the body that existed before someone died. Also, a physical resurrection would presumably only occur for people who had already died. So there would be no issue that both a source and a target (or copy) could exist at the same time.

The copying argument has no force at all against the idea of a soul that continues to exist after a person's body dies. In such a case, presumably there would be no copy at all made of anything. A person who believes that the soul survives death does not tend to believe that the soul suddenly appears at the moment of death, suddenly having a copy of consciousness and memory that was stored in the brain. Such a person will instead tend to believe that the soul was a crucial component of the human mind all along (or perhaps was equivalent to the mind), and that such a soul simply continues to exist when a person dies.

Imagine a person who has worn a heavy lead coat and a dark-glass fishbowl over his head all his life. Under the soul concept, we may regard death as being rather like a person shedding that heavy lead coat and dark-glass fishbowl, with the heavy lead coat and dark-glass fishbowl being the restrictions of movement and perception associated with a human bodily existence. Under such a concept, there is no copying at all, but more like a kind of jettisoning, rather like a soaring rocket jettisoning a no-longer-needed fuel tank, with the body being what is jettisoned. Such a concept of the survival of the soul is completely free from difficulties involving copying, because no copying at all is assumed.

Shermer's third argument has no force against any concept of an afterlife other than a digital afterlife. Using the odd term “POVself,” he argues as follows:

If you died, there is no known mechanism by which your POVself would be transported from your brain into a computer (or a resurrected body). A POV depends entirely on the continuity of self from one moment to the next, even if that continuity is broken by sleep or anesthesia. Death is a permanent break in continuity, and your personal POV cannot be moved from your brain into some other medium, here or in the hereafter.

Such reasoning based on continuity might have some force against the concept of a digital afterlife by means of mind uploading, but probably not very much force as Shermer has stated it. For he's used the phrase “there is no known mechanism,” which is hardly going to discourage futurists who will claim that such a mechanism will be invented in the future. The reasoning here also has no force against either the possibility of a physical resurrection of the dead or the idea of a soul surviving death. A Christian believing that the dead will be physically resurrected will believe that this is done through divine agency, so it is futile to argue that such a thing cannot occur because there is “no known mechanism” by which it would occur. The person believing in a soul that survives death need not believe that any transfer, copying or moving occurs to allow a person to survive death. Such a person will tend to believe that the essence of a person – what makes you you – has already resided in your soul all along, not in your brain; and such a person is not required to believe that anything is transferred from the brain to the soul when a person dies. Under the idea of a soul, there is no break in continuity when a person survives death.

Neuroscientists have always followed the principle: explain every conceivable mental activity as being something caused by the brain. But what we must remember is that Nature never told us that all our memories are stored in brains, or that our thoughts are generated by brains. It was neuroscientists who told us that, not Nature. For example, we have no understanding of how 50-year-old memories can be stored in brains, given all the rapid molecular turnover that occurs in brains. Below is a quote from a recent scientific press release, citing a comment by a neuroscientist.

Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.

Such a quote (which has a “grasping at straws” sound to it) gives a very strong impression that neuroscientists have no real basis for confidently claiming that long-term memories are stored in the brain. The type of evidence neuroscientists cite for their claims is often weak evidence that doesn't hold up to critical scrutiny, such as dubious brain scanning studies which typically take minor 1% differences in brain activity, and try to make them look like compelling signs of what the brain is doing, when they are no such thing.

The idea behind a soul or spirit can be summarized as follows. You have a soul or spirit that is not at all a brain thing. You also have a brain, which serves largely for the purpose of localizing or constricting your mental activity, making sure that it stays chained to your body. The main purposes of the brain are things like control of autonomic functions, response to tactile stimuli, coordination of muscle movement, the coordination of speech, and the handling of sensory and auditory stimuli. There may also be some brain function relating to storing what we may call “muscle memories,” which we use for performing particular physical tasks. These are all things related to living and surviving as a corporeal being. But things such as abstract thinking, conceptual memory and long-term memory may be functions of a human soul. So when you die you may lose those things that you needed to continue walking about as a fleshly being, but may still have (as part of your soul) those things that were never necessary for such an existence (no cave-man needed to form abstract ideas, think philosophical thoughts, or remember his experiences as an 8-year-old).

This idea may seem old-fashioned to some, but a strong argument can be made that it is compelled by fairly recent evidence, and that in such a sense it is not at all old-fashioned. It is only in recent years that we have discovered what a very high degree of molecular turnover occurs in the brain, which makes it so hard to maintain that 50-year-old memories are stored in the brain, as discussed here. It is only in the past 50 years that we have had research such as John Lorber's, showing astonishingly high mental functioning in patients who had large fractions of their brains (or most of their brains) destroyed by disease. It is only in the past 130 years that experimental laboratory evidence has been repeatedly produced for phenomena such as ESP, which cannot be accounted for as a brain effect. It is only in recent decades that we have had reports of near-death experiences, in which many observers have reported floating out of their bodies, often verifying details of their medical resuscitation attempts that they should have been unable to observe while they were unconscious. Such observations are quite compatible with the idea of a soul that survives death, and may force such a conclusion on us.

Shermer seems rather to be thinking of the idea of the soul as something kind of like a USB flash drive to back up the brain. But those who have postulated a soul have more often supposed it as a crucial component of human mental functioning, not some optional accessory. Under the concept of a soul that accounts for a large fraction or most of human mental functioning, there is no requirement for any sudden copying from the brain to occur for someone to survive death; and there is also continuity, as death merely means discarding what you don't need to survive beyond death. As none of Shermer's three arguments damages such a concept, Shermer has not succeeded in closing the door to an afterlife.

We know not what lies at the end of the misty bridge

Saturday, June 17, 2017

Nonsense in NatGeo's “Year Million”

The National Geographic channel has a new TV series entitled “Year Million.” A recent episode in the series was entitled “Never Say Die,” and was about the prospects of human immortality. The episode was long on slick visuals, and short on intelligence. The episode presented the ideas that we will greatly extend human longevity by using nanobots and genetic engineering, and that later people will upload themselves into computers.

The episode presented a simplistic view of biological life, one that is purely mechanistic, epitomized by a person who said, "We are still biologically hindered by the squishy computer in our heads.” The assumptions seemed to be that biological life is merely some mechanical phenomenon that humans can completely master through technologies such as nanobots and genetic engineering. These assumptions are quite dubious because of all the profound mysteries involving biological life.

It is generally admitted that we do not understand the origin of life. It is generally admitted that we do not understand the mystery of protein-folding (why proteins conveniently form into three-dimensional shapes needed for our existence). It is generally admitted that we do not understand the mysteries of morphogenesis, how it is that a fertilized egg is able to progress to become a baby who then grows into a full-grown human. We also don't really understand the origin of species, as the theory of evolution by natural selection is merely a theory of accumulation rather than what we actually need to explain the mountainous degrees of coordination, structural fine-tuning, and functional coherence in our bodies: a theory of organization explaining the arrival of the fittest, not just the survival of the fittest. As discussed here, we also don't really understand where it is that the body plans of organisms are stored, because DNA is a minimal “bare bones” language unsuitable for describing three-dimensional body plan blueprints; so while DNA might be storing a list of chemical ingredients, it does not actually seem to have a specification of our body plans.

We therefore have every reason to suspect that biological life involves some mysterious forces or factors very far beyond our understanding. If that is the case, then the prospect for vastly increasing the human lifespan through injecting humans with tiny robots called nanobots and tinkering with DNA may be much less bright than many think.

But there is some reasonable hope that genetic engineering or nanobots might extend the human lifespan, so this part of the TV show wasn't nonsensical. The show did become nonsensical when it discussed the very dubious concept of mind uploading in a completely uncritical manner. Among the reasons why mind uploading is such a dubious concept is that we have no understanding of how a brain could even be storing memories that last for decades, given all the rapid molecular turnover and protein turnover that occurs in brains; and we have no idea of how brains can generate abstract ideas. Mind uploading cannot possibly work unless some reductionist model of consciousness and memory is correct; and there are very good reasons (discussed here and in this series of posts) for doubting that any such model is correct.

The show quoted neuroscientist Michael Graziano, who made this very dogmatic proclamation:

A lot has to be done before we can figure out whether we can take a pattern of connectivity from a brain and upload it or copy it. It's going to happen; I'm positive of that. Everything's moving in that direction.

Graziano's certainty on this matter is laughable. There is no basis for believing that minds can ever be uploaded. There is no progress at all in mind uploading. Instead of “everything's moving in that direction,” the situation is actually, “nothing's moving in that direction.” And why is Graziano saying he's sure it's going to happen, just after saying, “A lot has to be done before we can figure out whether we can take a pattern of connectivity from a brain and upload it or copy it”? Those two thoughts contradict each other.

The show then quotes string theorist Michio Kaku, who nowadays seems to have his head popping up on every science or futurism documentary put on television. Kaku states, "Believe it or not, we can actually upload memories, and record memories in mice.” This statement is false. The actual claim that has been made by certain mouse researchers (researchers in optogenetics) is that memories in mice can be blocked or activated. Such claims are not well founded, and are based on a few doubtful studies using a dubious methodology. The studies typically report weak levels of statistical significance. See here for a discussion of the flaws in some of these studies.

Making claims about mind uploading, the show's narrator says this:

Once we shed our physical bodies, we will exist only as a computer copy of our brains. It's hard to imagine exactly, but basically you'd be digital information stored on a server.

The TV show assures us that mind uploading is the key to eternal life. The narrator says this:

But when we start talking about immortality, that's a whole other can of worms. If we really are serious about getting to forever-land, we'll have to replace our carbon based human bodies and upload ourselves to a super-computer.

The show then tells us this existence will be like some afterlife heaven. We are told:

We're talking about everlasting life in a digital paradise....All of society will live digitally in a virtual world called the metaverse that's a thousand times more intense than the one you live in now.

There are quite a few reasons why such a thing is very unlikely to happen. First, the brain does not store digital information, so the mind is very likely not something that can be uploaded into a computer. Second, there is not the slightest evidence that the brain uses any type of readable code to store information. DNA uses the genetic code, something we can understand and read. But we have zero knowledge of anything like a brain code that we can use to read information from the brain. Nor can we plausibly imagine how such a code could have originated, as it would have to be something almost infinitely harder to explain than the genetic code (the origin of which is very hard to explain).
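The contrast drawn above is that the genetic code is readable in a way no known “brain code” is: a simple lookup table maps each three-letter DNA codon to an amino acid. A minimal sketch of that readability (showing only four of the standard table's 64 entries, purely for illustration):

```python
# A small excerpt of the standard genetic code: each three-letter codon
# maps to one amino acid (or a stop signal). The full table has 64 entries;
# only four are shown here.
CODON_TABLE = {
    "ATG": "Met",   # methionine (also the start codon)
    "TTT": "Phe",   # phenylalanine
    "TGG": "Trp",   # tryptophan
    "TAA": "Stop",  # stop signal
}

def translate(dna):
    """Read a DNA sequence three letters at a time via the codon table."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

print(translate("ATGTTTTGGTAA"))  # ['Met', 'Phe', 'Trp', 'Stop']
```

Nothing remotely like this lookup table has ever been identified for reading memories out of neural tissue, which is the point being made.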

Third, you could never electronically capture the exact state of a particular person's brain, even if you tried to use microscopic nanobots to do such a thing. Imagine trying to exactly map every synapse and neuron in a brain with nanobots. Each of the brain-mapping nanobots would have to somehow be aware of its own exact position in the brain, so it can record the exact position of each neuron and brain connection it encounters. So if a nanobot comes to a neuron that is 1.334526 centimeters from the left edge of the skull, and 2.734538 centimeters from the right edge of the skull, and 5.292343 centimeters from the back of the skull, then those exact coordinates must be recorded. But how can a microscopic nanobot do that? You can't supply a microscopic nanobot with a little GPS system allowing it to tell its position. So it would seem that nanobots are completely unsuitable for any such job as mapping the exact physical microscopic structure of an organ with billions of cells and synapses packed together in a small space.

Then there's the duplication problem. Imagine if there was a machine that could scan your brain, and then upload your mind to a computer. Even if that process was done perfectly, the computer would not have your mind. It would instead have a copy of your mind. You may realize this just by considering that if this uploading process didn't kill you, and the computer with your “mind upload” existed at the same time as you, there wouldn't be two you's. There would be one you, and a copy of you living in the computer. If you then died after this upload, it would not at all be true that you had survived death by the fact that a copy of your mind was in the computer.

None of these difficulties are discussed on the “Year Million” TV show. The show presents mind uploading as a sure thing, ignoring all the reasons for thinking that it can't ever happen. Not only did the TV show present mind uploading as something that will likely happen; the TV show assured us that mind uploading is the key to true immortality – on the grounds that once you are living in a computer server, you are guaranteed to live forever.

This is nonsensical, because the programs running on computer servers sometimes crash and stop working; the programs running on computer servers are sometimes deliberately halted; and the computer servers themselves sometimes crash, lose power, or are turned off. If you were living in a computer server, there is no reason why you should be confident that you will be around for more than a century or two. 

After the father and daughter mind-uploaded

The next episode of the “Year Million” TV series continued to be largely about mind uploading, speaking as if this extremely doubtful idea was on solid ground. There was also a pitch for the idea that we are already living inside computers. The “Year Million” TV series is on track to be the silliest documentary series ever put on about the human future.

Given our very low state of knowledge about the most basic riddles of biology, memory and consciousness, fantasies of mind uploading are rather like the fantasies of some boy who knows almost nothing about the contents of the sun, but who fantasizes that he will one day reorganize the sun into a shape and color more pleasing to him.

Tuesday, June 13, 2017

Seven Levels of Mentality, and Why Brains Can't Explain the Last One

Can we account for all of man's mental activities by assuming that they can all be explained by the brain? When considering this question, it may make sense to distinguish between different levels of human mentality, with each level being more difficult to explain than the previous level. For it might be that our brains are sufficient to explain one level, but insufficient to explain a more advanced level.

Below are seven levels of mentality we can distinguish.

Level 1: Let us imagine a very young baby lying in a crib near a window. A red bird flies and lands on the windowsill. The baby looks out at the bird, and perceives the bird's color. But there is no recognition, and the baby feels neither fear nor wonder at the sight of the bird. There is no thought or emotion, but merely sensory perception.

Level 2: Let us imagine the same event happens. The baby is lying in its crib, and a red bird flies and lands on the windowsill near the crib. But let us suppose the baby is a little older, and now something a little more advanced happens. The baby feels an emotion of wonder or delight. But there is still no recognition, and no memory is involved.

Level 3: Let us imagine the same event happens. The baby is lying in its crib, and a red bird flies and lands on the windowsill near the crib. Let us imagine this baby is now almost a toddler. The baby feels an emotion of delight or wonder at seeing the bird, but now some memory is involved. Without thinking any words, the child has a vague feeling of recollection, the feeling that he has seen such a bird before.

Level 4: Now let us imagine the child can walk, and he sees the red bird out in his back yard. When he sees the red bird, not only is there a feeling of delight, and a recollection, but also a use of language. Now the child points to the bird, and says, “I see birdy.”

Level 5: Now let us imagine the child is seven years old. Now he sees the red bird in his backyard, and the sight produces not just emotion and language, but also abstract thinking. The child makes an observation, “Wow, that sure is a beautiful bird,” or perhaps asks a simple question such as “I wonder whether birds like that get tired when they fly.”

Level 6: Now let us imagine the child is now thirteen years old, and is old enough to engage in philosophical thinking. So when he sees the red bird, he asks an advanced question such as “I wonder how it is that birds or any other species ever started to exist,” or “Is it morally right for us humans to be putting birds in cages?”

Level 7: Now let us imagine the person is no longer a child, but is now sixty years old. He can write in depth about many advanced philosophical questions; he can feel many refined emotions; and he is quite capable of writing an entire book about birds. Moreover, he can instantly recall memories that happened five decades ago. So perhaps he remembers the day 50 years ago when he first saw a beautiful red bird in his backyard.

Now, which one of these levels can we explain by assuming brain activity? Some philosophers would deny that even Level 1 can be explained by brain activity. As rudimentary as Level 1 is, it is still an example of Mind, and some philosophers say that we cannot explain how Mind can arise from mere matter. But perhaps they are wrong, and perhaps we can explain Level 1 by assuming that it involves merely sensory perception, which parts of the brain might explain. Conceivably we might also explain Level 2 by imagining that some kind of hormone or chemical produces a feeling of wonder or delight.

But we cannot explain Level 7 through brain processes. For we cannot explain any way in which the brain could store memories for decades; we cannot explain how brains could instantly recall distant memories; and we cannot explain how a brain could generate abstract thoughts.

Consider the storage of memories. It has been proven through the work of scientists such as Bahrick that humans can store memories fairly reliably for more than 50 years. In order for there to be a workable theory of how brains store memories, a neuroscientist would need to plausibly explain how human memories could be stored in a brain for 50 years. No scientist has done any such thing.

The main theory for how the brain stores memories is the idea that our synapses store memories. But there is a reason why this theory does not work. As discussed here, synapses are subject to very strong molecular turnover and structural turnover which should prevent them from storing memories for longer than a year. The proteins in synapses have an average lifetime of only about a week.  As a scientific paper puts it, "Thus, the constituent molecules that subserve the maintenance of a memory will have completely turned over, i.e. have been broken down and resynthesized, over the course of about 1 week."
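To see how severe a one-week average molecular lifetime is for the synaptic theory, one can do a back-of-the-envelope calculation. Assuming simple exponential turnover (an idealization; real turnover rates vary by protein), the fraction of the original molecules still present after a given time falls off very fast:

```python
import math

def fraction_remaining(days, mean_lifetime_days=7.0):
    """Fraction of the original molecules still present after `days`,
    assuming simple exponential turnover with the given mean lifetime."""
    return math.exp(-days / mean_lifetime_days)

print(round(fraction_remaining(7), 3))  # 0.368 -- after one mean lifetime
print(fraction_remaining(365))          # roughly 2e-23 -- after one year
```

On this idealized model, after a single year essentially none of the original synaptic proteins remain, let alone after 50 years, which is the difficulty the paragraph above describes.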

Not only do neuroscientists fail at explaining how very long-term memories can be stored in the brain; they also fail to explain human memory retrieval. The two main problems in explaining human memory retrieval are these: (1) the problem of explaining how a human being could find the exact location where a memory might be stored in the brain; (2) the problem of explaining how we are able to recall obscure and old memories with such blazing speed.

If human beings took 30 minutes to retrieve a memory, the difficulty would not be so great. We might imagine that a brain simply scans all of our memories, looking for the right one, like a person flipping through the pages of a book looking for a topic. But given the instantaneous memory recall of distant, trivial memories shown on TV shows such as Jeopardy, it is obviously not true that you read through all your memories until you find something.

How are computers able to retrieve information so quickly? Through indexing, which requires sorting. But there is zero evidence that the brain does any type of physical sort or physical indexing. The brain does not have the type of architecture to support physical sorting or indexing. So our memories cannot be indexed in the way that computer information is indexed.
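The contrast drawn above between scanning and indexed lookup can be sketched as follows (a minimal illustration of the computing concept, not a claim about neural mechanisms):

```python
# Linear scan: examine every record in turn until a match is found.
def linear_lookup(records, key):
    for i, record in enumerate(records):
        if record == key:
            return i
    return None

# Indexed lookup: a pre-built hash-table index jumps straight to the
# stored location without examining the other records.
def build_index(records):
    return {record: i for i, record in enumerate(records)}

records = ["apple", "bird", "cloud", "daisy"]
index = build_index(records)

print(linear_lookup(records, "cloud"))  # 2 -- found by scanning
print(index["cloud"])                   # 2 -- found by direct lookup
```

The indexed lookup is fast precisely because the index was built ahead of time, which is the kind of physical pre-organization the paragraph argues has never been observed in the brain.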

In short, there is no viable explanation as to how a brain could be able to find so quickly a spot in the brain where a memory was stored. Nor does it work to claim that memories are stored everywhere in the brain. The idea that every memory is stored everywhere in the brain is nonsensical.

Although neuroscientists are very bad at explaining how a brain could store or instantly retrieve memories lasting for decades, neuroscientists are even worse at explaining how brains could possibly be generating ideas and abstract thought.

If you go to this page on the “Expert answers” site quora.com, you will be treated to some answers illustrating the utter failure of modern neuroscience on how a brain could possibly generate new ideas. The page gives some answers to the question, “How does our brain form new ideas?”

The first answer by Tanush Jagdish begins by saying, “We don't know,” but then suggests “synaptogenesis,” the creation of new synapses. This is not a plausible idea. Some text such as “tall blue cold triangle” can instantly create an idea in your head that you have never had before, and you certainly did not have to wait for new synapses in your brain to form before you had such an idea.

The second answer by Jeff Nosanov is a circuitous answer that explains nothing, while claiming incorrectly that “ideas cannot be made to happen.” Nosanov ends by saying “that was not a physiological explanation.”  Then the page has some answers by non-scientists which are not illuminating.

A similar page on the “Ask science” sub-reddit at www.reddit.com offers equally little illumination. A Google search for “how the brain generates ideas” will result in a vast wasteland of results failing to offer a single neuroscience study offering any real illumination on this topic. One of the items you will get is a Harvard news story with the title “How the brain builds new thoughts.” But the story is discussing some research that does nothing to explain such a thing – just another one of those oh-so-dubious brain scanning studies in which some scientists scan brains and find trivial differences in blood flow (see here on why such studies are typically of little value).

Essentially the only idea that neuroscientists have to explain a brain creating ideas is the idea of combination: that you create a complex idea by combining simpler ideas. This does not explain the miracle of abstract thought. Let us imagine a savage who experiences 100 cold days, and who then reaches the abstract concept of coldness. This idea is not reached from any type of combination – it is reached by abstraction. Similarly, a person who sees 100 other humans may reach the abstract idea of a human being. But that idea is not reached by combination.

Computers offer not the slightest clue as to how abstract thinking can occur, because no computer has ever had a concept, an idea, or an abstract thought, something that requires a conscious mind. Don't be fooled by the type of computer program called an idea generator. Such programs are typically just programs for combining words into novel combinations. Until a human reads the output of such a program, an idea is not actually created. 

In short, there is no plausible brain explanation for how humans could be storing memories for 50 years; there is no plausible brain explanation for how humans can instantly retrieve distant memories; and there is no plausible brain explanation for how humans can even create abstract ideas. See here and here for additional support for these claims. It is sometimes argued that we need to postulate something like a human soul or spirit (something beyond the brain) to account for paranormal phenomena such as ESP and near-death experiences. It may also be argued very forcefully that we need to postulate something like a human soul or spirit to account for certain types of normal mental functioning that we observe every day.

Postscript: Just after finishing this post, I read this new scientific article, which quotes a neuroscientist speculating about memory storage. The article says:

Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates. 

High-dimensional cavities? Cavities are holes, not information storage media. I think the quote bolsters my claim that scientists do not have any plausible explanation of how brains can be storing memories for 50 years.

Friday, June 9, 2017

Biomedical Blunders Blow Billions

In the new book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hopes, and Wastes Billions by Richard Harris, we are told this: “Misleading animal studies have led to billions of dollars worth of wasted effort and dead ends in the search for drugs.” There is a rather acidic visual on the front cover of the Rigor Mortis book. We see a toe tag tied around the letter “I” in the title, like the toe tag they tie around the toes of corpses. In the “Name” slot of the toe tag, we see: “Biomedical Research.”

That visual is an exaggeration, since biomedical research isn't dead. But judging from the book, there are serious problems in the field. It seems that a very large fraction of research studies cannot be replicated. The problem was highlighted in a widely cited 2005 paper by John Ioannidis entitled “Why Most Published Research Findings Are False.” A scientist named C. Glenn Begley and his colleagues tried to reproduce 53 published studies described as “ground-breaking.” He asked the scientists who wrote the papers to help, by providing the exact materials used to produce the published results. Begley and his colleagues were only able to reproduce 6 of the 53 experiments.

In 2011 Bayer reported similar results. They tried to reproduce 67 medical studies, and were only able to reproduce them 25 percent of the time. On page 14 of the book by Harris, we are told that one expert estimates that 28 billion dollars a year is spent on untrustworthy papers.

Part of the problem is a culture that provides high rewards for splashy results that can be called “ground-breaking,” but which makes it rather hard for a biologist to get a paper published if the paper reports a failure to replicate a previous study. Another part of the problem is insufficient attention to methodology and precise mathematics. One expert quoted on page 172 says that in the current culture of biomedical research, it “pays to be first” but “it doesn't necessarily pay to be right.” The expert laments, “It actually pays to be sloppy and just cut corners and get there first,” noting that this is “really wrong.”

On page 96 we learn about a problem with misidentified cell lines, in which experiments are done assuming some series of cells are from one type of organism when they are actually from some other type of organism. We read:

A 2007 study estimated that between 18 and 36 percent of all cell experiments use misidentified cell lines. That adds up to tens of thousands of studies, costing billions of dollars....Sometimes, even the species isn't correct. Nelson-Rees found a “mongoose” cell line was actually human and determined that two “hamster” cell lines were from marmosets and humans, respectively. “Have the Marx Brothers taken over the cell-culture labs?” Roland Nardone asked in a 2008 paper bemoaning this state of affairs.

On page 203 of the Harris book, an expert laments that “we haven't trained a lot of our biologists to think mathematically or to understand or analyze data.” On the same page we are told that there are no standards in whole genome sequencing, that there are no standards in searching for mutations in genomes, and that in searching for mutations in genomes, “Nobody does it the same way.”

A ProPublica article is entitled, “When Evidence Says No, But Doctors Say Yes.” Apparently there is a problem of some doctors recommending procedures that aren't backed up by evidence. Below is a quote from the article:

In a 2013 study, a dozen doctors from around the country examined all 363 articles published in The New England Journal of Medicine over a decade — 2001 through 2010 — that tested a current clinical practice, from the use of antibiotics to treat people with persistent Lyme disease symptoms (didn’t help) to the use of specialized sponges for preventing infections in patients having colorectal surgery (caused more infections). Their results, published in the Mayo Clinic Proceedings, found 146 studies that proved or strongly suggested that a current standard practice either had no benefit at all or was inferior to the practice it replaced; 138 articles supported the efficacy of an existing practice, and the remaining 79 were deemed inconclusive.
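The tallies in that passage can be sanity-checked with a line of arithmetic:

```python
# Consistency check on the Mayo Clinic Proceedings tallies quoted above:
# reversals + reaffirmations + inconclusive should equal the 363 articles.
reversals, reaffirmed, inconclusive = 146, 138, 79
total_articles = reversals + reaffirmed + inconclusive
reversal_rate = reversals / total_articles
print(total_articles, round(reversal_rate * 100))  # 363 articles, ~40% reversed
```

In other words, roughly 40 percent of the standard practices that were put to the test turned out to be useless or inferior.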

Another huge problem in contemporary medical practice involves doctors who invest in fantastically expensive equipment, and who then give advice that may be biased by their desire to pay off the cost of such a machine (or profit from its use).

[Image: unnecessary medical care]

It seems that there is a huge amount of unnecessary medical treatment being done. A New Yorker article by a doctor states the following:

In just a single year, the researchers reported, twenty-five to forty-two per cent of Medicare patients received at least one of the twenty-six useless tests and treatments....The Institute of Medicine issued a report stating that waste accounted for thirty per cent of health-care spending, or some seven hundred and fifty billion dollars a year, which was more than our nation’s entire budget for K-12 education....Millions of people are receiving drugs that aren’t helping them, operations that aren’t going to make them better, and scans and tests that do nothing beneficial for them, and often cause harm.

If you are asked to take some expensive test or undergo some expensive medical procedure, the following are good questions to ask your doctor:

(1) Is the course of treatment or testing you are recommending considered a standard practice or "best practice" for patients with my set of circumstances? 
(2) If you were teaching a room full of medical students, would you recommend this exact treatment or testing for someone with my set of circumstances?

Look for a firm, confident answer of "Yes," rather than a weaker answer such as "Doctors often do this."

During earlier times, people had complete faith in words spoken by anyone wearing the black outfit of the priest. Today we have somehow been socially conditioned to regard anyone with a white coat as a totally reliable source of information. Perhaps both forms of “color confidence” involve too much uncritical trust.

Postscript:  An article in the Guardian states the following:

More than 70% of the researchers who took part in a recent study published in Nature have tried and failed to replicate another scientist’s experiment. Another study found that at least 50% of life science research cannot be replicated.

A Nature article states this: "Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature." This smells like scientists placing overconfident faith in their fellow scientists.

Monday, June 5, 2017

Book Looks at the Universe's Many Royal Flushes

The fascinating question of cosmic fine-tuning has been given a very impressive and comprehensive treatment in the recent book A Fortunate Universe: Life in a Finely Tuned Cosmos by astrophysics professor Geraint F. Lewis and astronomy postdoctoral researcher Luke A. Barnes. The book looks at the many ways in which our universe seems to have improbably “hit the jackpot” or “won the lotto,” having a series of incredibly lucky breaks that were necessary for our eventual existence. On page 29 the authors describe the thesis of a fine-tuned universe like this: “The claim is that small changes in the free parameters of the laws of nature as we know them have dramatic, uncompensated, and detrimental effects on the ability of the universe to support the complexity needed for physical life forms.” 

One case involves the existence of abundant carbon and oxygen in the universe, two elements that must exist in abundance for life to exist. Carbon and oxygen didn't exist in the early history of the universe, which was almost entirely hydrogen and helium. Carbon and oxygen were formed gradually by stars. A great deal of luck is required for any universe to be able to produce either carbon or oxygen; and for a universe to produce abundant amounts of both carbon and oxygen, some fantastically improbable strokes of luck are required. The authors note this on pages 118-119 (referring to the strong nuclear force that binds protons and neutrons in the nucleus of the atom):

If we nudge the strength of the strong force upwards by just 0.4 per cent, stars produce a wealth of carbon, but the route to oxygen is cut off. While we have the central element to support carbon-based life, the result is a universe in which there will be very little water. Decreasing the strength of the strong force by a similar 0.4 per cent has the opposite effect: all carbon is rapidly transformed into oxygen, providing the universe with plenty of water, but leaving it devoid of carbon.

Protons and neutrons are made up of smaller particles called quarks. A proton is made of two up quarks and one down quark, while a neutron is made of two down quarks and one up quark. On pages 50 to 51 of the Fortunate Universe book, we are told some reasons why a life-bearing universe requires that these quark particles have masses not too far from the masses they have. If the down quark were about 70 times more massive or the up quark were about 130 times more massive, there would be only one element, and complex chemistry would be impossible. More sensitively, if the up quark were more than six times more massive, protons could not exist, and there would be no atoms. But later we learn of a much more sensitive requirement demanding that the quark masses be almost exactly as they are in order for the universe to be hospitable for life.

Reiterating the conclusions of this scientific paper, the book also notes on page 120 that we would not have a universe with abundant carbon and oxygen if the quark masses were different by much more than a few per cent. The book notes how improbable such a case of “hitting the distant bulls-eye” was:

And remember from last chapter that because the quarks are already “absurdly light” in the words of physicist Leonard Susskind, a range of mass that is a small percentage of their value in our Universe corresponds to a tiny fraction of their possible range. It is about one part in a million relative to the Higgs field, which gives them their mass. It is about one part in 10²³ relative to the Planck mass!

On page 75 of the book we have a diagram that is basically the same as the diagram below from an article by one of the authors. The author shows that if you assign random values to the strong nuclear force and a fundamental constant called the fine structure constant, then only a very tiny fraction of those values will allow carbon-based life. Because the graph uses a logarithmic scale, it visually exaggerates the size of the tiny white rectangle. If you were to program a computer to assign random numbers for these two (the strong nuclear force and the fine structure constant) between 0 and 1000, less than one in a million times would the numbers end up within the tiny white rectangle.
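The one-in-a-million figure is easy to check. The exact dimensions of the white rectangle in the figure are not given here, so the sketch below assumes, purely for illustration, a 0.5-by-1.0 rectangle inside the 1000-by-1000 parameter space:

```python
import random

def hit_probability(width, height, span=1000.0):
    # Chance that two independent uniform draws on [0, span] both land
    # inside a width-by-height rectangle: just the ratio of the areas.
    return (width * height) / (span * span)

p = hit_probability(0.5, 1.0)  # 5e-07, i.e. less than one in a million

# A quick Monte Carlo check with the same (assumed) rectangle:
rng = random.Random(1)
trials = 1_000_000
hits = 0
for _ in range(trials):
    x, y = rng.uniform(0, 1000), rng.uniform(0, 1000)
    if x < 0.5 and y < 1.0:
        hits += 1
# With p = 5e-07 we expect about 0.5 hits per million trials.
```

Whatever the rectangle's true dimensions, the point stands: the hit probability is just the ratio of the rectangle's area to the whole parameter space, and a tiny rectangle means a tiny probability.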

[Image: cosmic fine-tuning diagram]

From Barnes's article here

On page 63 the book has a discussion of fine-tuning involving the mass of the Higgs boson:

Life requires a value not too much different to what we observe. There must be an as yet unknown mechanism that slices off the contributions from the quantum vacuum, reducing it down to the observed value. This slicing has to be done precisely, not too much and not so little as to destabilize the rest of particles. This is a cut as fine as one part in 10¹⁶...This problem – known as the hierarchy problem – keeps particle physicists awake at night.

On page 111 of the book we are told about some ways in which the strong nuclear force is sensitive to changes:

A small decrease in the strength of the strong force by about 8 per cent would render deuterium unstable. A proton can no longer stick to a neutron, and the first nuclear reaction in stars is in danger of falling apart. An increase of 12 per cent binds the diproton – a proton can stick to another proton. This gives stars a short cut, an easy way to burn fuel. If the diproton were suddenly bound within the Sun, it would burn hydrogen at a phenomenal rate, exhausting its fuel in mere moments.

[Image: So many royal flushes]

Then there is the cosmological constant problem, the case of fine-tuning discussed here. It's the issue that quantum field theory predicts that ordinary space should be very densely packed with quantum energy, making it even denser than steel. But somehow we live in a universe that has only the tiniest sliver of the vacuum density that it should have. Below is what page 162 of the book has to say about this:

Maybe there is a mechanism at work here, a mechanism that we clearly don't yet understand, which trims the energy in the quantum vacuum; so, while it is intrinsically very large, the value we observe, the value that influences the expansion of the universe, appears to be much, much smaller. But this would have to be a very precise razor, trimming off 10¹²⁰ but leaving the apparently tiny amount that we observe....But what if this mechanism for suppressing the influence of the cosmic vacuum energy was not so efficient, removing the effect of 10¹¹⁹ rather than 10¹²⁰, so there would be ten times the vacuum energy density we actually measure? Remember, such vacuum energy accelerates the expansion faster and faster, emptying out the Universe, cutting off the possibility of stars, planets, and eventually people.

Towards the end of the book, the authors discuss some common objections made to minimize the importance of such conclusions. One objection goes like this: improbable things happen all the time (for example, there was only 1 chance in a billion that you would have the 9-digit Social Security number that you have). The objection is easily dismissed on these grounds: improbable things do happen all the time, but improbable lucky things do not happen all the time. The cases of cosmic fine-tuning are not merely improbable things happening, but incredibly improbable lucky things happening; and it is not at all true that incredibly improbable lucky things happen all the time.

Another objection appeals to the existence of a multiverse: maybe there are an infinity of universes, and in such a case the odds of one of them being successful might be good. This is not a sound objection because it merely increases the number of random trials; and increasing the number of random trials does nothing to increase the chance of any one random trial succeeding. If you drive into Las Vegas and drive out with $50,000,000 in your car, that's an astonishing piece of luck; and it's no less astonishing if there are an infinity of such lucky winners scattered across an infinity of universes in which an infinity of different things happen. Adding a multiverse does not increase the odds of lucky events in any one particular universe such as ours.
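The point about random trials can be illustrated with a small simulation. The per-universe probability used here is hypothetical; the lesson is that adding universes raises the chance that some universe in the ensemble gets lucky, while the chance for any one designated universe – ours – stays exactly where it was:

```python
import random

def trial_rates(p, n_universes, runs, seed=0):
    """Estimate (a) how often OUR designated universe (trial #0) is
    life-permitting and (b) how often at least one universe in the
    ensemble is, given a per-universe probability p."""
    rng = random.Random(seed)
    ours = any_hit = 0
    for _ in range(runs):
        outcomes = [rng.random() < p for _ in range(n_universes)]
        ours += outcomes[0]
        any_hit += any(outcomes)
    return ours / runs, any_hit / runs

p = 0.01  # hypothetical per-universe chance of being life-permitting
solo_ours, solo_any = trial_rates(p, 1, 20_000)
multi_ours, multi_any = trial_rates(p, 200, 20_000)
# multi_any climbs toward 1, but multi_ours stays at about 0.01
```

Multiplying the trials changes the ensemble-wide statistics, not the odds faced by any particular trial.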

In so many different ways (physics, cosmology, biology) the universe seems to scream at us in a thundering voice: “Purpose and non-randomness!” But all this falls on the deaf ears of many experts in academia who keep summarizing things by telling us, “It's all just randomness.”

Thursday, June 1, 2017

Future Risks Cloud the Issue of When to Take Social Security Benefits

For Americans approaching retirement age, an important decision is when to apply to the Social Security administration to start getting Social Security retirement benefits. A person can apply as early as age 62, and as late as age 70. The longer you wait, the larger your monthly check will be. For each year that you delay benefits, your monthly check will increase by roughly 7.5 percent. So a person with modest lifetime earnings might choose between getting 1000 dollars a month at age 62, 1335 a month at age 66, or 1783 a month at age 70. If you made more money during your working years, the monthly paycheck at each age might be larger.
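The illustrative numbers above follow from compounding the rough 7.5 percent annual increase, as a few lines of arithmetic show:

```python
def delayed_benefit(base_monthly, years_delayed, annual_increase=0.075):
    # Monthly check after delaying, using the rough 7.5%-per-year figure above.
    return base_monthly * (1 + annual_increase) ** years_delayed

at_62 = 1000.0
at_66 = delayed_benefit(at_62, 4)  # about $1,335 a month
at_70 = delayed_benefit(at_62, 8)  # about $1,783 a month
```

(The real Social Security formula uses separate early-filing reductions and delayed-retirement credits; 7.5 percent per year is only a rough average for illustration.)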

Some experts tell us the decision about when to take benefits is simple. They offer the simple formula: delay applying for benefits as long as you don't need the money now. If a person followed this advice, he would ask himself at age 62, “Do I need the Social Security money this year?” If the answer was negative, he would delay applying for benefits until the next year. And he would ask the same question when he was 63, 64, and so forth. He might delay filing for Social Security benefits until he was 70. Or, if he found that he needed the Social Security check at age 67, he might apply for benefits in that year.

But this “delay applying for benefits as long as you don't need the money now” is an oversimplification. The decision on when to apply for Social Security benefits is actually a very complex one. As we will see, all of the following factors are relevant to your decision:

  • Whether or not you need the Social Security check during any year between age 62 and 70
  • Your current heart rate
  • The ages at which your parents died
  • Whether you have a family history of cancer or heart disease
  • Whether you are a smoker or obese
  • Whether you are working after the age of 62
  • How much money the US government currently owes
  • Whether you think medical science will invent some anti-aging treatment you can afford
  • How high US budget deficits will be in the coming years
  • How likely it is that the US government will cancel Social Security or trim its benefits

One tool to help make the decision about when to take Social Security benefits is what is called a break-even analysis. There are online tools that allow you to do this. For example, this site has a tool that asks for your full retirement age, your Social Security benefit at age 62, your Social Security benefit at age 66, and your Social Security benefit at age 70. You can get these numbers from your yearly Social Security statement. The site will give you a “break-even analysis” graph like the one below.

What does the graph show you in this case? It shows you that there is a “break-even age” at about age 79. Theoretically, if you expect to live longer than this age, you will get more money from the government by delaying taking your Social Security benefits, rather than taking them at age 62.
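The crossover arithmetic behind such a chart is simple enough to reproduce. Using the earlier illustrative figures ($1,000 a month at 62 versus $1,783 a month at 70), a sketch:

```python
def break_even_age(monthly_early, start_early, monthly_late, start_late, horizon=100):
    # Return the first age at which cumulative dollars from the later filing
    # date overtake cumulative dollars from the earlier filing date.
    for age in range(start_late, horizon + 1):
        early_total = monthly_early * 12 * (age - start_early)
        late_total = monthly_late * 12 * (age - start_late)
        if late_total >= early_total:
            return age
    return None

age = break_even_age(1000, 62, 1783, 70)
print(age)  # 81 with these particular inputs
```

The exact break-even age shifts with the benefit amounts entered, which is why different charts can land anywhere around 79 to 81.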

So, clearly the simple slogan “delay applying for benefits as long as you don't need the money now” is an oversimplification. A better slogan might be “delay applying for benefits if you think you will live beyond the break-even age” (although I will discuss some reasons why even that slogan is on rather wobbly ground).

How would you tell whether you should expect to live beyond the break-even age? You might go to some actuarial tables, and find the expected life expectancy for a person of your age. The table here (on the Social Security Administration web site) will give you a rough idea.

But there are other things to consider. If you are a smoker or obese, your life expectancy may be shorter than the number listed on the table. If you have a family history of heart disease or cancer, your life expectancy may be shorter than the number listed on the table. If one of your parents died of natural causes before the age of 80, your life expectancy may be shorter than the number listed on the table. If your resting heart rate is above 80 beats per minute, your life expectancy may be shorter than the number listed on a life expectancy table. But if you happen to think that some anti-aging treatment will be invented in the next two decades, and you think that you will be able to afford it, perhaps you will make a higher estimate of your life expectancy.

So matters have gotten quite complicated. Then there is a whole other can of worms to consider: whether the US government will in the future cancel Social Security or cut back on its benefits.

When giving financial advice on when to take Social Security benefits, almost every book and web site naively assumes that we can be confident that the US government will continue to pay benefits at the current rates, adjusted for inflation. For example, in her book How to Make Your Money Last: The Indispensable Retirement Guide, Jane Bryant Quinn gives us this cheerful assurance on page 72:

But benefits won't drop....Social Security keeps millions of older people out of poverty and saves millions of adult children from having to support mom and dad. No president, no Senate, no Congress will let those voters down.

But there is little basis for being so cheerful. The House of Representatives just passed a bill (endorsed by the President) that plans to make drastic cuts in health care. President Trump has released a budget that proposes sharp cuts in Social Security disability insurance payments. While the latter is not a cut in Social Security retirement payments, it is so “in the vicinity” that it should raise questions about whether the government will be trimming Social Security retirement payments in the future. We should also consider that the United States national debt continues to spiral out of control, having reached nearly 20 trillion dollars – roughly double what it was a decade ago. The deeper the national debt, the more likely it is that the US government will trim future Social Security retirement benefits.

So whenever you see one of those break-even analysis charts, it might be better to visualize the chart like this. The question marks indicate the uncertainty that future payments will be made far into the future.

There are several ways in which the government could effectively cut Social Security benefits in a roundabout, sneaky way, without directly cutting anyone's paycheck. The government might suddenly raise the retirement age. Someone delaying benefits until he reached 66 might one day find that this 66 had become 67 or 68; and someone delaying benefits until he reached 70 might one day find that this 70 had become 71 or 72.

The government might also modify tax laws so that more of Social Security benefits are taxed. This might be done on either the federal or the state level. Or the government could introduce an assets test for Social Security benefits. So if you had a house worth $200,000 or more, you might find yourself no longer eligible to receive benefits.

Still another underhanded way for the government to shrink benefits would be for the federal government to be miserly in its cost-of-living adjustments, which are supposed to make Social Security payments keep up with inflation. The government already seems to be taking that route. Below are the miserly Social Security cost-of-living adjustments for the past 7 years. 

Another sneaky way for the government to reduce benefits would be to privatize Social Security. If your future monthly Social Security checks were replaced by some stock fund that you owned, it would be hard to tell whether the fund was worth much less than the checks it replaced.

There is also a very important question we must ask about conventional advice about when to take Social Security benefits. The question is: is such advice based on a faulty investment principle?

When people suggest that you should delay as long as possible before applying for Social Security benefits, they seem to be assuming this type of principle: maximize your chance of getting the most amount of money. Is that a good general principle for people of all ages? It may not be. Consider a person with $20,000 in savings. That person can maximize his chance of making the most amount of money by investing his money in stock options – a risky investment with a potential for very high returns. But stock options are so risky that the standard advice is: never invest in stock options if you need the money you are investing.

“Maximize your chance of making the most amount of money” may actually be a poor principle for an older person to follow. And it seems particularly dubious to recommend following this principle so that you can get some extra monetary bonanza that will gradually accrue very late in life – in your eighties or nineties. By then you may be in such a poor state of health that you won't be able to enjoy your extra money as much as you would if you were younger. For example, you probably won't be using your extra money to go on some tiring vacation when you are 85 or 90.

A better financial principle may be this: minimize your chance of getting the least amount of money. A person following this principle may come to a completely different decision about when to take Social Security benefits than if he were following the principle of “maximize your chance of getting the most amount of money.”

Let us consider an imaginary scenario of a man named Joe who has just turned 62 and has $150,000 in savings. He retires at age 62, and decides to delay taking his Social Security benefits until age 70, thinking to himself: “I'll get a bigger monthly check if I wait until 70 to file for benefits.” Joe spends his $150,000 covering his expenses over the next eight years. At age 70, he has run out of money; but he applies for Social Security benefits. A few months later, something horrible and unexpected happens: the Social Security program is canceled by a heartless administration. Joe is now left with nothing. He would have avoided this problem entirely if he had taken Social Security at 62. He could have used his Social Security payments to cover his expenses, and would still have been left with $150,000 at age 70.

If we imagine that Joe dies at age 70, leaving one adult son but no wife (she died earlier), it is a similar situation, except that things are bad for Joe's son rather than for Joe. Instead of getting a $150,000 inheritance, Joe's son is left with nothing.

Joe might have suffered such a disaster if he greedily followed the “young man's principle” of “maximize your chance of getting the most amount of money.” But if Joe had more sensibly followed the principle of “minimize your chance of getting the least amount of money,” he would have avoided this disaster altogether, by taking his Social Security benefits when he was 62.
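Joe's two outcomes can be tabulated directly. All figures are the hypothetical ones from the scenario above, and the age-62 benefit is assumed (purely for illustration) to exactly cover his expenses:

```python
def savings_at_70(savings, annual_expenses, annual_benefit, files_at_62):
    # Walk through ages 62-69; any shortfall between expenses and benefits
    # comes out of savings. Hypothetical figures throughout.
    for _ in range(8):
        income = annual_benefit if files_at_62 else 0
        savings -= max(annual_expenses - income, 0)
    return savings

# Joe: $150,000 saved, $18,750/year expenses ($150,000 spread over 8 years).
waits_until_70 = savings_at_70(150_000, 18_750, 18_750, files_at_62=False)  # 0
files_at_62 = savings_at_70(150_000, 18_750, 18_750, files_at_62=True)      # 150000
```

If benefits are then cut at 70, the waiting strategy leaves Joe with nothing, while filing at 62 leaves his $150,000 intact – the “minimize your chance of getting the least amount of money” outcome.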

When some financial adviser advises you to wait to retire until 66, what he is really saying is that it will work out well financially for you if you work until 66 and then start taking Social Security benefits at that time. But such advice is not the same as a recommendation that you should wait to take Social Security benefits until age 66 if you stop working at age 62. If you have more than $80,000 in savings, it can make more sense to start taking Social Security benefits at age 62, because you will get a monthly check earlier, and also you might be able to make a lot more in capital appreciation or interest from your savings than you would if you tapped those savings to fund your retirement between the ages of 62 and 66. There may also be an additional tax benefit from taking Social Security benefits at age 62 if you have a spouse who is still working. This is because Social Security benefits are taxed less than 401K withdrawals or non-Roth IRA withdrawals.

I am not a financial adviser, so I cannot make any recommendation on when you should take Social Security benefits. I will merely note that in light of the considerations discussed here, taking Social Security at the earliest possible age (particularly if you are single or if you have decided to stop working at age 62) may be wiser than is generally acknowledged. Be aware that a person advising you to wait until 70 to apply for Social Security benefits may be operating under some principle of “maximize your chance of getting the most amount of money,” which may be a good principle for adventurous young people, but may be an unwise principle for people in retirement or approaching retirement. The quite different principle of “minimize your chance of getting the least amount of money” may lead to a completely different decision on this matter.

Nerdy Postscript: Please ignore this postscript if you never use spreadsheets for personal finance decisions.

There is a way for you to make your own break-even analysis, one that may be more accurate than one done with an online calculator. You can set up a spreadsheet like the one shown below. This allows you to fine-tune the inputs (which in this case are the columns on the left). The simplest approach is to type into the second, third, and fourth columns the numbers your Social Security statement gives for how much you will get if you retire at 62, 66, and 70, respectively. You can use the SUM() function to create the columns on the right, which provide cumulative totals for the annual benefits listed on the left.

The advantage of creating such a spreadsheet is that you can fine-tune the calculations, making them more realistic for your specific case. For example, in the top left corner you might create a formula that takes into account the fact that if you take benefits at 62, you may be earning additional interest on money you have already saved – interest you would not be earning if you instead drew down those savings while waiting until 66 to file.

In your spreadsheet you can then create a graph of the last three columns. The graph will look like one of the break-even charts shown above. But it may give you a more realistic calculation, because you have fine-tuned the inputs.

Below is a spreadsheet graph created from the three columns at right. It graphs cumulative money received under three options: taking Social Security at age 62 (red), taking Social Security at age 66 (yellow), and taking Social Security benefits at age 70 (green), varying over a 28-year time span from age 62 to age 90, with the break-even age occurring at about age 81. This spreadsheet assumes the case of someone not working after age 62. 

If you print out such a graph, it would be a good idea to pencil in a bunch of question marks in the upper right corner of the graph. That will remind you that the longer you go into the future, the more doubtful are predictions about money you will receive from Social Security, because of the very substantial chance that benefits will be reduced in the future.