
Our future, our universe, and other weighty topics


Tuesday, November 29, 2016

It's More Likely That Everything Is in Consciousness Than That Consciousness Is in Everything

In academic circles it is commonly assumed that consciousness is generated by the brain. But some philosophers have been dissatisfied with this idea. Why would some particular arrangement of matter cause Mind (a totally different type of thing) to emerge from that matter? To many that seems no more plausible than the idea that some particular arrangement of crystals in a rock might cause the rock to gush out blood.

One alternate idea is to assume that we have something like a soul, and that our mental experiences are produced not just by matter (the brain), but also by something that is itself spiritual. Another alternate idea is the radical notion known as panpsychism. Panpsychism is basically the idea that consciousness is in everything.

A panpsychist might believe that every piece of matter is in some sense conscious. At first this may seem to help in explaining how brains can produce consciousness. If every neuron is a little bit conscious, it might explain why billions of cells in the brain can produce consciousness.

But it does not seem that there is any evidence that little pieces of matter are conscious. Electrons seem to behave with complete predictability according to the laws of electromagnetism, not as if they were semi-conscious little things with wills of their own. And oxygen molecules do not act as if they had some interest in their surroundings. When you enter an empty room, you do not feel a gush of wind coming your way, as you might feel if the air molecules were interested in seeing who was entering the room.

Panpsychism seems rather inconsistent with theism. Consider the moon. The idea that the moon is conscious may have a certain appeal. But imagine all those rocks on the moon's surface, lying there for billions of years. Think of what torture it would be if such rocks were actually conscious – they would have to suffer billions of years of absolute boredom, just sitting there with nothing to observe or experience. It would be hard to think of a reason why any deity would wish to endow such rocks with consciousness, to suffer such billion-year boredom and stagnation. And what about all the rocks and little pieces of matter underneath the surface of the moon and other planets, which would have the dullest experiences imaginable for billions of years?

I'm not sure we would want to believe that all the little pieces of matter around us are conscious. Do you want to believe that every time you walk on the autumn leaves, you are crushing conscious beings underneath your foot? And if you accepted panpsychism, it would seem that every time you bake some cookies, you would have to worry about slowly torturing the poor conscious little cookies.

panpsychism

An alternative to panpsychism, and something equally as radical, is the philosophical doctrine of idealism. Rather than maintaining that consciousness is in everything, an idealist maintains that everything is in consciousness.

An idealist maintains that the only things that exist are different types of minds, and that material things only exist in the sense that they are elements within the mental experiences of minds. Perhaps the best way to explain this idea to the modern person is to consider what goes on in a video game. Suppose you are playing some Star Wars video game in which you are trying to blow up the Death Star. Now to what extent does this Death Star exist? It has no physical existence outside of the game world. The Death Star exists as a shared perception, something that is seen by all of the players of the video game under certain conditions. Similarly, an idealist may believe that Earth's moon has no physical existence outside of minds. According to such a person, our moon only exists as a shared perception within the mental experience of humans. An idealist thinks that if somehow all minds were to be destroyed, there would be no more moon.

So the idealist believes that the sole reality of physical things we perceive is their reality inside our minds. To such a person, the history of the universe is simply a history of mental experiences; and there was no state of the universe in which matter existed before minds.

There is no way to prove this philosophical doctrine, but there are no observations that we could ever have that would disprove this idea. Think about it. Every single observation or measurement we can make can be boiled down to a human experience. If you see a rock, that's a human experience. If you weigh the rock, that's also a human experience. If you measure the rock with a measuring rod, or determine its chemical content using a mass spectrometer, that's also a human experience. There is no way to verify that the rock exists outside of human experience.

Any credible theory of idealism requires a belief in some higher agent that acts to assure that there are certain consistencies in human experiences. But since idealism ends up removing quite a few dilemmas and difficulties in the “first matter, then mind” story of the universe, idealism ends up being at least as credible as any other philosophical worldview. A surprisingly compelling case for idealism was made in the 18th century by the Anglo-Irish philosopher George Berkeley.

Today idealism is rather unfashionable, but in certain circles it is fashionable to speculate that we are just items in some computer simulation created by extraterrestrials. But the underlying concept is quite similar – that the things we perceive do not exist independently of our experiences, and that there is some external reality guaranteeing that we have certain common perceptions (such as the perception of the moon when we look up at night), rather than each of us having totally unique mental experiences.

But if you maintain that we are participants in some computer simulation crafted by extraterrestrials, you haven't removed any explanatory difficulties. The vexing problem is how Mind can arise from matter, a totally different type of thing. With idealism such as that advanced by Berkeley, that problem is removed, for you end up with the doctrine that there are only minds. But with some extraterrestrial simulation theory, the explanatory problem becomes twice as bad. For the theory maintains that biological matter gave rise to one type of mind (extraterrestrial minds), and that such minds then produced electronic matter that gives rise to our minds. With that theory, you have two types of “Mind from matter” difficulties.

Both panpsychism and idealism are rather radical philosophical doctrines, and we are not forced to choose between the two. But if I had to choose between the belief that consciousness is in everything (panpsychism), and the belief that everything is in consciousness (idealism), I would choose the second of these. I don't care to believe that my cookies are suffering when I bake them.

Friday, November 25, 2016

How Not To Do a Meta-Analysis

I have no idea whether the esoteric practice known as homeopathy has any medical effectiveness, and this will certainly not be a post intended to persuade you to use such a practice. I will be examining instead the unfairness, methodological blunders and sanctimonious hypocrisy of an official committee convened to convince you that homeopathy is unworthy of any attention. Committees such as this are part of the reality filter of materialists, the various things they use to try to cancel out, disparage, or divert attention from the large number of human observations inconsistent with their worldview.
 
reality filter
 
Issuing a 2015 meta-analysis report on homeopathy, the committee called itself the Homeopathy Working Committee, and was sponsored by or part of the National Health and Medical Research Council (NHMRC), an Australian body. The committee consisted of 6 professors and one person who was identified merely as a consumer.

On page 14 of the committee's report, the committee makes this confession: “NHMRC did not consider observational studies, individual experiences and testimonials, case series and reports, or research that was not done using standard methods.” What kind of unfairness is that? Using such a rule, if a committee investigating a medical technique were to receive a million letters saying the technique produced instantaneous and permanent cures of dire maladies, the committee would just discard all such letters and not let them influence its conclusion.

The committee limited itself to scientific studies, but it did not consider all of the scientific studies on homeopathy. Instead, the committee chose to disregard the vast majority of scientific studies on homeopathy, and consider only a small subset of those studies. This is made clear by a Smithsonian article on the committee's report, which says, “After assessing more than 1,800 studies on homeopathy, Australia’s National Health and Medical Research Council was only able to find 225 that were rigorous enough to analyze.” But what was going on was actually this: the committee cherry-picked 225 studies out of more than 1800, claiming that only these should be allowed to influence its conclusions. So it based its findings on only about 12 percent of the total number of scientific studies on the topic it was examining, excluding 88% of the studies. I have never heard of any meta-analysis that excluded anything close to such a high percentage of the studies it was supposed to be analyzing.

The committee claimed to have used quality standards, standards that relatively few of the studies met. What were these standards? Below is a quote from the committee's report.

The overview considered only studies with these features: the health outcomes to be measured were defined in advance; the way to measure the effects of treatment on these outcomes was planned in advance; and the results were then measured at specified times (prospectively designed studies); and the study compared a group of people who were given homeopathic treatment with a similar group of people who were not given homeopathic treatment (controlled studies).

It is not at all true that medicine or science has these criteria as standards that are followed by all or even most studies. A control group is a set of subjects who are not exposed to the thing being tested, providing a baseline against which the treated group can be compared. A large fraction of all scientific studies and medical studies do not use control groups, for various reasons: controls are often not practical to implement, too expensive to implement, or not needed because it is clear what the result will be in the case of zero influence. This scientific paper says the following about control groups:

The proportion of studies that have control groups in the ten research domains considered range from 3.3% to 42.8%... Across domains, a mere 78 out of the 710 studies (11%) had control groups in pre-test posttest designs.

It also is extremely common for medical and scientific research to report findings that the study was not designed to look for. Saying that a study must only report what it was designed to measure is a piece of sanctimonious rubbish, rather like claiming that good students must only get ancient history answers by reading the original ancient texts, rather than looking up the answers on the Internet. Under such a rule, we would for example ignore very clear findings that homeopathy was effective in reducing arthritis pain, if the study was designed to look for whether homeopathy was effective in reducing headaches. I have never heard of any meta-analysis excluding studies based on whether they reported unexpected findings the study was not designed to look for. This seems to be an aberrant, non-standard selection rule.

So what we have here is a committee using a double standard. It has declared that scientific studies will not be considered unless some particularly fussy standard is met, a standard that a large fraction of highly-regarded scientific studies do not meet. It's like the door guard of the country club saying “Ivy League graduates only” to dark-skinned people trying to get in, even though he knows he just admitted some white people who don't even have college degrees.

The statement below from the committee's report also is a sign of double standards and cherry-picking.

For 14 health conditions (Table 1), some studies reported that homeopathy was more effective than placebo, but these studies were not reliable. They were not good quality (well designed and well done), or they had too few participants, or both. To be confident that the reported health benefits were not just due to chance or the placebo effect, they would need to be confirmed by other well-designed studies with adequate numbers of participants.

On page 35 we learn that the actual participant size requirement used by the committee was a minimum of 150 participants (studies with fewer participants were ignored). So if there had been 500 studies each showing that between 110 and 149 patients were instantly cured of terminal cancer, such studies would all have been excluded and ignored. How silly is that? For comparison, a meta-analysis on stuttering treatments excluded only studies with fewer than 3 participants; a meta-analysis on diabetes excluded only studies with fewer than 25 participants; and a cardiology meta-analysis included studies with as few as 62 participants.

I very frequently read about scientific studies that used only a small number of participants (30 or fewer), studies that get lots of coverage in the media after being published in scientific journals. So invoking “too few participants” as an exclusion criterion (based on a requirement of at least 150 participants) is another example of a double standard being used by the committee. And once a committee has declared the right to ignore any study that does not meet the vague, arbitrary, subjective requirement of being “good quality (well designed and well done),” it has printed itself a permission slip to ignore any evidence it doesn't want to accept.

Below is a page from a statistician's presentation on whether or not small sample sizes should be excluded when doing a meta-analysis of medical studies. The recommendation is the opposite of what the homeopathy study committee did.

Similarly, the “Handbook of Biological Statistics” site says, “You shouldn't use sample size as a criterion for including or excluding studies,” when doing a meta-analysis.

In the case of homeopathy, it's particularly dubious to be excluding small studies with fewer than 150 participants. Only a small fraction of the population believes in the effectiveness of homeopathy. It is entirely possible that because of some “mind over body” effect or placebo effect, homeopathy is actually effective for those who believe in it, but ineffective for those who don't believe in it. So we are very interested in whether it is effective for small groups such as a small group that believes in homeopathy. But we cannot learn that if a committee is arbitrarily excluding all studies with fewer than 150 participants.

No doubt if we were to examine the scientific papers of the professors on the committee, we would find many that had the same issues of small participant size, no control groups, or reported effects that the study was not designed to show (or we would find these professors had authored meta-analyses that included studies failing one or more of these exclusion criteria). So it is hypocrisy for such a committee to be using such things as exclusion criteria.

Apparently the committee used some type of scoring system to rate studies on homeopathy. One of the subjective criteria was “risk of bias.” We can guess how that probably worked: the work of any researcher judged to be supportive of homeopathy would be assigned a crippling “risk of bias” score making it unlikely his study would be considered by the committee. But what were the scores of the excluded studies, and what were the scores of the studies that were judged to be worthy of consideration? The committee did not tell us. It kept everything secret. The report does not give us the names of any of the excluded studies, does not give us URLs for any of them, and does not give us the scores of any of the excluded studies (nor does it give the names, the URLs or the scores of any of the studies that met the committee's criteria). So we have no way to check on the committee's judgments. The committee has worked in secret, so that we cannot track down specific examples of how arbitrary and subjective it has been.

There is a set of guidelines for conducting a medical meta-analysis, a set of guidelines called PRISMA that has been endorsed by 174 medical journals. One of the items of the PRISMA guidelines is #19: “Present data on risk of bias of each study and, if available, any outcome level assessment.” This standard dictates that any subjective “risk of bias” scores used to exclude studies must be made public, not kept secret. The NHMRC committee has flouted that guideline. The committee has also ignored item 12 of the PRISMA guidelines, which states, “Describe methods used for assessing risk of bias of individual studies.” The NHMRC committee has done nothing to describe how it assessed risk of bias. Nowhere do the PRISMA guidelines recommend excluding studies from a meta-analysis because of small sample size or whether the reported effects match the effects the study was designed to show, two aberrant criteria used by the NHMRC committee.

It has been recommended by a professional that whenever a meta-analysis uses a scoring system to exclude scientific studies on the topic being considered, the meta-analysis should give two different results: one in which the scoring system is used, and another in which all of the studies are included. That way readers could do a sensitivity analysis, seeing how much the conclusion of the meta-analysis depends on the exclusion criteria. But no such thing was done by the committee. It has secretively kept its readers in the dark, revealing only the results obtained given all of its dubious exclusions.
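
To make the idea concrete, below is a minimal sketch of such a sensitivity analysis in Python. The effect sizes, standard errors, and sample sizes are invented for illustration (the committee published no per-study data), and the pooling is a simple fixed-effect inverse-variance average; the point is only to show how reporting both results exposes the influence of an exclusion rule such as a 150-participant minimum.

```python
import math

# Hypothetical per-study data: (effect size, standard error, participants).
# These numbers are invented for illustration only.
studies = [
    (0.40, 0.20, 60),
    (0.35, 0.25, 40),
    (0.10, 0.10, 300),
    (0.05, 0.08, 500),
    (0.50, 0.30, 30),
]

def pooled_effect(data):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / (se ** 2) for _, se, _ in data]
    total = sum(weights)
    estimate = sum(w * eff for w, (eff, _, _) in zip(weights, data)) / total
    return estimate, math.sqrt(1.0 / total)

# Result using every study...
all_est, all_se = pooled_effect(studies)
# ...versus the result after a hypothetical 150-participant exclusion rule.
included = [s for s in studies if s[2] >= 150]
sub_est, sub_se = pooled_effect(included)

print(f"all {len(studies)} studies: effect = {all_est:.3f} +/- {all_se:.3f}")
print(f"n >= 150 only ({len(included)} studies): effect = {sub_est:.3f} +/- {sub_se:.3f}")
```

Publishing both numbers would let readers see exactly how much the conclusion depends on the exclusion rules; publishing only the second keeps readers in the dark.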

After doing all of this cherry-picking based on double standards and subjective judgments, the committee reaches the conclusion that homeopathy is not more effective than a placebo. But even if such a thing were true, would that make homeopathy worthless for everybody? Not necessarily.

Here's the story on placebos. Placebos have repeatedly been shown to be surprisingly effective for certain conditions. A hundred years ago, your doctor might have given you a placebo by just handing you a bottle of sugar pills. But nowadays you get your medicine in labeled plastic containers at the pharmacy, and people can look up on the Internet anything that is on the label. So a doctor can't just write a prescription for sugar pills without the patient being able to find out it's a placebo. But if a patient thinks some particular thing will work – homeopathy, acupuncture, holding a rabbit's foot, praying, or meditation – that might act as a placebo with powerful beneficial effects. 

We therefore cannot dismiss something as being medically ineffective by merely saying it's no better than a placebo. Imagine there's a patient who doesn't trust pills, but who tends to believe in things like homeopathy. Under some conditions and for certain types of patients, homeopathy might help, even if what's going on is purely a “mind over body” type of placebo effect, rather than anything having to do with what is inside some homeopathic treatment.

If there are “mind over body” effects by which health can be affected by whether someone believes in a treatment, such effects are extremely important both from a medical and a philosophical standpoint, since they might be an indicator that orthodox materialist assumptions about the mind are fundamentally wrong. Anyone trying to suppress evidence of such effects through slanted analysis shenanigans has committed a grave error.

Based on all the defects and problems in this committee's report, we should have no confidence in its conclusion that homeopathy is no more effective than placebos; and even if such a conclusion were true it would not show that homeopathy is medically ineffective (since placebos can have powerful medical effects). The fact that 1800 studies have been done on homeopathy should raise our suspicions that at least some small subgroup is benefiting from the technique. It doesn't take 1800 studies to show that something is worthless – one or two will suffice.

Whether homeopathy has any medical effectiveness is an unresolved question, but about one thing I am certain. The committee's report is an egregious example of secretiveness, double standards, overzealous exclusions, guidelines violations, and sanctimonious hypocrisy. Using the same type of methodological mistakes, you could probably create a meta-analysis concluding that smoking doesn't cause lung cancer; but you would mislead people if you did that.

Postscript: Today's New York Times criticizes "the cult of randomized controlled trials" and points out the case of those who say the evidence for the effectiveness of flossing is weak, because there aren't enough randomized controlled trials showing it works. That, of course, makes no sense, as we have abundant anecdotal evidence that flossing is effective -- just as we have abundant evidence that parachutes work, despite zero randomized controlled trials showing their effectiveness. 

Postscript: A meta-analysis was recently published on the effectiveness of homeopathy in livestock. The meta-analysis avoided the outrageous exclusion problems discussed above; for example, it didn't exclude studies based on sample size. The meta-analysis concluded, "In a considerable number of studies, a significant higher efficacy was recorded for homeopathic remedies than for a control group." Specifically it concluded that "Twenty-eight trials were in favour of homeopathy, with 26 trials showing a significantly higher efficacy in comparison to a control group, whereas 22 showed no medicinal effect."  What is astonishing is that this result favoring homeopathy has been reported in The Scientist magazine with the headline, "Homeopathy does not help livestock."  That's the opposite of what the meta-analysis actually found.  

Monday, November 21, 2016

The Errant Experts Who Cry "Impossible!"

The controversial EmDrive device offers a hope of a revolutionary new method of space travel that could greatly shorten trips to the Moon and Mars. It's a propellantless propulsion device. Even after sizable evidence had accumulated that the device did indeed work, various scientists declared that it is “impossible,” that the device could not work. Now (as discussed here) a new peer-reviewed scientific paper has found that the EmDrive device does indeed work, providing a significant thrust. 

An EmDrive device

How could so many physicists have got it wrong? An example was Caltech physicist Sean Carroll, who had this to say last year about the EmDrive:

The more recent “news” is not actually about warp drive at all. It’s about propellantless space drives — which are, if anything, even less believable than the warp drives....The “propellantless” stuff, on the other hand, just says “Laws of physics? Screw em.” ….What they’re proposing is very much like saying that you can sit in your car and start it moving by pushing on the steering wheel....There is no reason whatsoever why these claims should be given the slightest bit of credence, even by complete non-experts.

Carroll's blunder is an example of a type of misfire we have seen again and again. Once upon a time, scientists discovered a few assorted facts about nature, a few pieces of the vast cosmic puzzle. Soon thereafter, the heads of our scientists started to swell like balloons. They started depicting themselves as great lords of knowledge, who had learned enough to issue pronouncements such as This is the way that nature always behaves or That type of behavior is forbidden by nature. Such pronouncements are usually just bombast and bluster, and are often no more reliable than the pronouncements of sectarian theologians.

Here is what is typically going on when a scientist says something is impossible:
  1. A scientist assumes that some particular unproven assumption (call it assumption X) is true, largely because such an assumption is popular among his colleagues.
  2. The scientist then considers some proposed phenomenon (call it phenomenon Y), and judges that such a phenomenon cannot be occurring if assumption X is true.
  3. The scientist then declares that phenomenon Y is impossible.
Such a declaration of impossibility is usually dubious, because it relies on unproven assumptions. If such assumptions are wrong, the “impossible” phenomenon may be perfectly possible.

An example of such a declaration of impossibility occurs when a scientist declares telepathy or extrasensory perception (ESP) to be impossible (despite compelling laboratory evidence for its existence). Below is what is going on:
  1. The scientist assumes that a particular unproven assumption is true, the assumption that the human mind is purely the result of brain activity, largely because this assumption is popular among scientists.
  2. The scientist then considers the possibility of telepathy, and judges this to be impossible if human minds are purely the result of brains, based on the difficulty of signals passing out of the brain and traveling through the skull.
  3. The scientist then declares that telepathy is impossible.
But the reasoning is invalid because the first assumption is not only unproven but actually extremely dubious. We are not entitled to conclude with any confidence that our minds are purely the product of brains, because (as discussed here) we have no understanding of how 50-year-old memories could possibly be stored in brains, which are subject to such high structural and molecular turnover that memories should not be able to last for more than a year. Nor do we have any understanding of how brains could possibly achieve the instant recall of obscure memories that our minds display. We are told this involves molecular actions, but complex molecular reactions occur way too slowly to account for obscure memories that are retrieved in half a second (as we see happening on television shows such as Jeopardy). It takes a minute for cellular molecules to transcribe a single protein from the data in DNA, so how many minutes would it take to read some memory stored in brain molecules? Then there's this problem: how on earth could a brain ever know where some particular memory was stored inside it? Don't tell me your brain searches through your neurons, because that would take way too long to account for obscure memories that are retrieved by quiz show contestants in half a second.

In short, claims of the impossibility of telepathy rest on utterly dubious dogmas about the source of the human mind, dogmas that no scientist could be justified in proclaiming until he can give a detailed plausible story of how our brains might be able to store memories for 50 years, and also instantly retrieve them (something our scientists are nowhere close to doing).
A particularly errant creature is the scientist who assures us that there is “no need” for him to examine evidence in favor of some phenomenon that he has declared to be impossible, on the grounds that it conflicts with some cherished unproven assumption of the scientist. By such strange pretzel logic, such a scientist may try to print himself a permission slip for ignoring the very evidence that he should have evaluated before even making the unproven assumption.

Let's look at some examples of things that might be regarded as “scientifically impossible,” but which are actually no such thing. An example is the levitation of a material object or a human body. Science has merely established that there must be a downward gravitational force acting on a body positioned on a planet. But science in no way guarantees that there will not be an upward force (material or immaterial) acting underneath such a body, pushing it up with a force exceeding the downward force of gravity. So levitation is not actually impossible, and could be achieved through various means. Of course, humans are actually levitated in some tornadoes.

Other examples of things sometimes said to be scientifically impossible, but not actually so, include things such as sudden cures, mind-over-matter, apparitions, and life after death. No such things violate any known law of nature that can be specified, so they are not impossible. Arguments attempting to show the impossibility of such things always boil down to: I presume that nature works in this particular way; but if nature does work in such a way, that thing could not occur; therefore, that thing is impossible. Such reasoning has little weight, because our understanding of how nature works is paltry and fragmentary. Contrary to the overconfident bluster of so many scientists, we have a very good understanding of neither life nor Mind; and we don't even understand most of the material substance in the universe. How can we claim to understand nature well, when something as simple as the vacuum of space is a great thorn in the side of the modern physicist, having a nature completely different from that predicted by theoretical quantum considerations? And if we don't understand something as simple as a vacuum, what pretentious hubris is it for us to act as if we understand such infinitely more complicated things as biological life and Mind?

Postscript: We can put down cosmologist Ethan Siegel as another scientist who just won't accept the EmDrive results (based on what he says here). 

Thursday, November 17, 2016

Buried Memories of the Distant World: A Science Fiction Story

After the Council started to receive reports of Helena's strange psychiatric practices, it demanded that she face a board of inquiry concerning those practices.

“We have heard that you are engaging in unorthodox techniques that have not been approved,” said the Chairman. “We are told that you have some weird practice in which you claim to retrieve buried memories from the minds of men and women. Please describe your technique so we can judge whether it is sound.”

“It all started when some people started to complain about recurring dreams,” said Helena. “I was told by three people that they kept having the same strange dreams again and again. I wondered whether these dreams might be a sign of memories buried deep inside their minds. So I started to use a method to try to extract or elicit memories buried deep in their minds.”

“And what was that technique?” asked the Chairman.

“It is a technique that I call hypnotic regression,” explained Helena. “I use various techniques to put someone into a state of deep relaxation. Slowly the patient enters into a strange type of consciousness that is neither normal waking consciousness nor sleep, but kind of a strange mental state somewhat in between the two. I have found this is a useful technique for extracting hidden memories buried deep in the mind.”

“So what happens when a person enters this state of 'hypnotic regression,' as you call it?” asked the Chairman.

“I start asking them questions about what happened long ago, before the earliest memories they could normally remember,” said Helena. “It is really quite amazing, because when I start asking these questions, my patients will tell me very strange stories. They all tell the same type of stories – they are amazingly similar.”

“Amazingly similar?” said the Chairman. “So tell me a typical story that one of your patients would tell.”

“Typically a patient will describe a kind of 'past life' before the patient's current life,” explained Helena. “But what is very strange is that the patient will say this 'past life' didn't take place on our planet. The patient will say the 'past life' took place on some other planet out in space.”

“Past lives on another planet?” said the Chairman. “Obviously these patients are just hallucinating or confabulating, just making things up. Maybe this 'hypnotic regression' state encourages them to fantasize.”

“Conceivably,” said Helena. “But what is strange is how often my patients will tell the same weird tale. Here is the story I will hear them tell again and again. They will say that they once lived on a distant planet, and signed up to be something called 'astronauts.' They will say that they boarded some space ship to travel from the distant planet, and that then they were kind of frozen.”

“Frozen?” said the Chairman. “Well, that's a silly detail.”

“But as the patients tell the story, it doesn't sound quite so silly,” explained Helena. “They explain it like this. They signed up for a mission to travel from one solar system to another. But the distance was so great that the trip took many decades, too long for a human lifetime. So the travelers on the ship – these 'astronauts' – had to be kind of frozen so that they would sleep throughout the long voyage between the stars. The patients tell me it wasn't really freezing, just kind of a lowering of the body temperature much lower than normal.”


“So how many people have told you this strange story while under 'hypnotic regression' as you call it?” asked the Chairman.

“A total of 12 people,” said Helena.

“And what other stories of 'past lives' do they tell?” asked the Chairman.

“Pretty much only this story,” said Helena.

“That's an astonishing coincidence,” said the Chairman.

“But perhaps it's not a coincidence,” said Helena. “I have a theory to explain these stories. It's a very far-out and fantastic theory, but I'd like to explain it.”

“Go ahead,” said the Chairman.

“My theory goes like this,” said Helena. “Perhaps all of my patients really were born on some other planet. Perhaps on that other planet they built a spaceship designed to travel from one star to another. The planned voyage may have been a voyage lasting many decades, so they may have needed to put the people on the spaceship into almost a kind of deep freeze in which they were unconscious. But maybe that lowering of the body temperature may have caused a loss of memory. So when the people on the spaceship finally reached the distant planet, and somehow woke up, they may have lost all memories of their lives on the planet they were born on. Except that the memories were still buried deep in their minds. And maybe the planet they traveled to is our own planet.”

“How did you hatch that crazy idea?” asked the Chairman. “How could such people have got mixed up with people like you or me?”

“According to my theory, all of us adults here in the Village are these 'astronauts' from another planet,” explained Helena. “Think about it. You and I are two of the 40 adults living in this Village, and we and our children are the only people on this planet. That raises the question: how did we get here? Do you know how you got here?”

“I must have been raised by my parents here on this planet,” said the Chairman. “It's just that I don't remember them, or anything about my youth.”

“Exactly,” said Helena, “and none of us remember our parents or anything about our youth. Our parents aren't around. But hasn't it ever occurred to you how unlikely such a coincidence is, that all of our parents would have died when we were young? If the chance of your parents dying when you were young is maybe 1 in 2, then the chance of 40 people all having their parents die when they are young is maybe 1 in 2 to the fortieth power – too unlikely a thing to have happened.”

“What are you suggesting?” asked the Chairman.

“I am suggesting that all 40 adults in our Village – the entire adult population of this planet – were actually born on another planet,” said Helena. “We bravely signed up to be space voyagers. When we got on the spaceship, they did some kind of temperature-lowering thing – almost a freezing – that caused us to sleep through the long space voyage. Then our ship reached this planet, and we woke up. But the long sleep caused us to lose our memories. So we couldn't remember our home planet. We just got busy building ourselves some shelters, which became the Village we live in now. We still have our memories of our home planet, but those memories are buried deep in our minds. Those memories can only be brought out through hypnotic regression like I use on my patients.”

“My, my, that's a fascinating theory,” said the Chairman. He consulted with the other members of the Council. They then announced their decision to Helena.

“We have decided that your silly theory is a disruptive superstition that we must ban as a damnable heresy,” said the Chairman. “You are henceforward forbidden from engaging in this practice you call 'hypnotic regression.' And you are forbidden from teaching this crazy theory of yours that we were all born on another planet.”

“I submit loyally to the will of the Council,” said Helena sadly.

Before she left, the Chairman had one last question.

“Oh, just out of curiosity,” said the Chairman, “did your patients have any name for that distant planet they said they were born on?”

“Yes,” said Helena. “They called it planet Earth.”

Sunday, November 13, 2016

Epigenetics Cannot Fix the “Too-Slow Mutations” Problem

Recently in Aeon magazine there was an article entitled “Unified Theory of Evolution” by biologist Michael Skinner. The article starts out by pointing out some problems in Neo-Darwinism, the idea that natural selection and random mutations explain changes in species or the origin of species. The article says this:

One problem with Darwin’s theory is that, while species do evolve more adaptive traits (called phenotypes by biologists), the rate of random DNA sequence mutation turns out to be too slow to explain many of the changes observed...Genetic mutation rates for complex organisms such as humans are dramatically lower than the frequency of change for a host of traits, from adjustments in metabolism to resistance to disease. The rapid emergence of trait variety is difficult to explain just through classic genetics and neo-Darwinian theory.... And the problems with Darwin’s theory extend out of evolutionary science into other areas of biology and biomedicine. For instance, if genetic inheritance determines our traits, then why do identical twins with the same genes generally have different types of diseases? And why do just a low percentage (often less than 1 per cent) of those with many specific diseases share a common genetic mutation? If the rate of mutation is random and steady, then why have many diseases increased more than 10-fold in frequency in only a couple decades? How is it that hundreds of environmental contaminants can alter disease onset, but not DNA sequences? In evolution and biomedicine, the rates of phenotypic trait divergence is far more rapid than the rate of genetic variation and mutation – but why?

As interesting as these examples are, they are merely the tip of the iceberg if you are talking about cases in which biological functionality arises or appears too quickly to be accounted for by assuming random mutations. The main case of such a thing is the Cambrian Explosion, the sudden explosion of fossils in the fossil record about 550 million years ago, with a large fraction of the existing phyla suddenly appearing. Instead of a slow progression in which more complex things appear gradually over a span of hundreds of millions of years, we see in the fossil record many dramatic new types of animals suddenly appearing.

The other main case of functionality appearing too quickly to be accounted for by random mutations is the relatively sudden appearance of the human intellect. The human population about 1 million years ago was very small. This article tells us that 1.2 million years ago there were fewer than 30,000 people in the population. The predicted number of new mutations arising in a population is proportional to the population size, which means the smaller the population, the lower the number of new mutations in the population. So when you have a very small population size, the predicted supply of mutations is very low. But about 100,000 or 200,000 years ago humanity seems to have suddenly undergone a dramatic increase in brain power and intellectual functionality. Such a thing is hard to plausibly explain by mutations, given the very low number of mutations that should have occurred in such a small population.
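
To illustrate the scaling, here is a back-of-the-envelope sketch in Python. The figure of roughly 70 de novo mutations per person per generation is a commonly cited ballpark, used here purely as an assumption:

```python
# Sketch: the supply of new mutations scales directly with population size.
# The ~70 de novo mutations per person per generation is a commonly cited
# ballpark figure; treat it as an assumption, not a measured constant.
DE_NOVO_PER_PERSON = 70

def new_mutations_per_generation(population_size):
    """Total new mutations entering the population each generation."""
    return population_size * DE_NOVO_PER_PERSON

for n in (30_000, 300_000, 3_000_000):
    print(f"population {n:>9,}: ~{new_mutations_per_generation(n):,} new mutations per generation")
```

Whatever the exact numbers, the direction of the scaling is the point: a population 100 times smaller supplies 100 times fewer new mutations for selection to work with.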

But Skinner tries to suggest there is something that might help fix this “too-slow mutations” problem in Neo-Darwinism. The thing he suggests is epigenetics. But this suggestion is mainly misguided. Epigenetics cannot do the job, because it is merely a kind of “thumbs up or thumbs down” type of system relating to existing functionality, not something for originating new functionality.

Skinner defines epigenetics as “the molecular factors that regulate how DNA functions and what genes are turned on or off, independent of the DNA sequence itself.” One of the things he mentions is DNA methylation, “in which molecular components called methyl groups (made of methane) attach to DNA, turning genes on or off, and regulating the level of gene expression.” Gene expression means whether or not a particular gene is used in the body.

The problem, however, with epigenetics is that it does not consist of detailed instructions or even structural information. Epigenetics is basically just a bunch of “on/off” switches relating to information in DNA.

Here is an analogy. Imagine there is a shelf of library books at a public library. A librarian might use colored stickers to encourage readers to read some books, and avoid other books. So she might put a little “green check” sticker on the spines of some books, and a little “red X” sticker on the spines of other books. The “green check” sticker would recommend a particular book, while the “red X” sticker would recommend that you avoid it.


Perhaps such stickers would have a great effect on which books were taken out by library patrons. Such stickers are similar to what is going on with epigenetics. Just as the “red X” sticker would instruct a reader to avoid a particular book, an epigenetic molecule or molecules may act like a flag telling the body not to use a particular gene.

But these little “green check” and “red X” markers would not explain any sudden burst of information that seemed to appear in too short a time. For example, suppose there was a big earthquake at 10:00 AM, and then at 11:00 AM there appeared a book on the library shelf telling all about this earthquake, describing every detail of it and its effects. We could not at all explain this “information too fast” paradox by giving any type of explanation involving these little “green check” and “red X” stickers.

Similarly, epigenetics may explain why some existing functionality is or is not used by a species, but it does nothing to explain how that functionality could have appeared so fast in the first place. Epigenetics is making some valuable and interesting additions to our biological knowledge, but it does nothing to solve the problem of biological information appearing way too quickly to be accounted for by assuming random mutations.

Another analogy we can use for epigenetics is what programmers call “commenting out code.” Given some software system such as a smartphone app, it is often easy for a programmer to turn off particular features. You can do what programmers call “commenting out” to turn off particular parts of the software. So the following is a quite plausible conversation between a manager and a programmer:

Manager: Wow, the app looks much different now. Some of the buttons that used to be there are no longer there, and two of the tabs have disappeared. How did you do that so quickly?
Programmer: It was easy. I just “commented out” some of the code.

Such “commenting out” of features is similar to gene expression modification produced by epigenetics, in which there's a “let's not use this gene” type of thing going on. But the following is a conversation that would never happen.

Manager: Wow, the app looks much different now. I see there's now some buttons that lead you to new pages the app never had before, which do stuff that the app could never do before. How did you do that so fast?
Programmer: It was easy. I just “commented out” some of the code.

The programmer would be lying if he said this, because you cannot produce new functionality by commenting out code. Similarly, some new biological functionality cannot be explained merely by postulating some epigenetic switch that causes some existing gene not to be expressed. That's like commenting out code, which subtracts functionality rather than adding it.
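
To make the analogy concrete, here is a minimal sketch in Python, with purely hypothetical feature functions, showing why commenting out can only subtract behavior:

```python
# A toy app with two features. "Commenting out" a call disables a feature,
# which is analogous to an epigenetic switch silencing an existing gene.
def show_search_button():
    print("search button rendered")

def show_share_button():
    print("share button rendered")

def render_app():
    show_search_button()
    # show_share_button()   # commented out: this feature instantly vanishes

render_app()

# No amount of commenting out will ever ADD a third button. New behavior
# requires writing new code, just as new biological functionality requires
# new information, not merely the silencing of existing genes.
```

The switch is fast and cheap, which is why epigenetic changes can happen quickly; but everything the switch controls must already exist.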

I can give Skinner credit for raising some interesting questions, but he does little to answer them. The problem remains that biological information has appeared way too rapidly for us to plausibly explain it by random mutations.

For every case in which random mutations produce a beneficial effect, there are many cases in which they produce a harmful effect. Long-running experiments exposing fruit flies to high levels of mutation-causing radiation have not produced any new species or viable structural benefits, but have produced only harm. We have so far zero cases of species that have been proven to have arisen from random mutations, and we also have zero cases of major biological systems or appendages that have been proven to have arisen from random mutations. So why do our scientists keep telling us that 1001 wonderful biological innovations were produced by random mutations?

It's rather like this. Imagine Rob Jones and his family get wonderful surprise gifts on their doorstep every Christmas, left by an anonymous giver. Now suppose there is someone on their street named Mr. Random. Mr. Random behaves like this: (1) if you invite him into your home, he makes random keystrokes on whatever computer document you were writing; (2) if you eat at his house, he'll give you probably-harmful soup made from random stuff he got from random spots in his house and backyard, including his bathroom and garage; (3) if you knock on his door, and ask Mr. Random for a cup of sugar, he'll give you some random white substance, maybe sugar or maybe plaster powder or rat poison. Now imagine how silly it would be if Rob Jones were to look on those fine Christmas gifts on his doorstep, and say to himself: Let me guess who left these – it must have been Mr. Random!

Wednesday, November 9, 2016

The Best Planet to Colonize in Case of an Apocalypse Is...Earth

All of those who regard the 2016 presidential election as one of the great disasters of modern times may take slight consolation in the thought that there are much bigger disasters we could have suffered. Our planet could have been hit by a comet or an asteroid. A solar flare could have caused an electromagnetic pulse effect that could have wiped out all our electricity. The Yellowstone Park super-volcano could have erupted, burying much of North America in ash. Or a nuclear war could have started.

There are some who argue along the following lines:

We're in a cosmic shooting gallery. A comet or an asteroid could hit us at any time. Then there's the threat of nuclear war, not to mention the eventual ruinous effects of global warming. How can we protect ourselves from the risk of extinction posed by such hazards? We must go to Mars! The sooner we get started on Mars colonization, the better.

But there are some reasons for doubting that Mars colonization is our best bet to avoid the threat of extinction. One problem is the risk of a Mars landing failing. This risk seems very large in light of the fact that the European Space Agency spent many millions on a Mars lander that recently crashed on Mars, resulting in a total loss of the mission. We never see movies with a plot like this:

An asteroid is discovered in space, heading for collision with our planet. The world rushes to assemble a Mars spaceship. Heroic astronauts set out for the long voyage to Mars, which they hope to colonize. When they try to land, things don't go right, and their lander crashes and burns.

But such an outcome is a distinct possibility. And what about the radiation hazard, both on Mars and during the flight to Mars? Space is filled with deadly cosmic rays, and it is very hard to build a spaceship that fully protects against such radiation. By the time astronauts get to Mars, they might have damaged brains, with the disastrous effects described in my science fiction story Mars Peril. Another possibility is that by the time the astronauts got to Mars, the radiation during the voyage over may have caused harmful mutations. Such mutations might show up as birth defects in the first generation of children born on Mars.

Then there is the fact that once astronauts got to Mars, they might still suffer great hazard from radiation. This is because the very thin atmosphere of Mars does a poor job of shielding the surface from radiation.

If we are faced with an apocalyptic threat, it would seem there is a better option than rushing to colonize Mars. The better option is to stay right here on Earth, and build underground “Earth colonies” capable of surviving any type of disaster on the surface of our planet.

It's easy to imagine a type of structure that would work well and be fairly easy to build. The procedure could go something like this:
  1. Create a rectangular hole in the ground 30 meters deep and 20 meters wide, dumping all of the dug dirt on the sides of this hole.
  2. Drop at the bottom of this hole a steel structure about 20 meters wide.
  3. Add on top of the structure 1 or more excavation chutes allowing access to the surface.
  4. Add some solar panels that could work rather like the periscopes of submarines, capable of being withdrawn deep below the ground during times of surface upheaval, or pushed above the ground when the air above the shelter is relatively calm.
  5. Dump all of the excavated dirt on top of the steel structure.
  6. Then clear off some dirt corresponding to the top of the excavation chute and the top of the periscope-style solar panels.
If loaded with sufficient water and food, and oil and generators, such a structure could provide shelter for a decade or more, even on a planet that was being pulverized by a comet, an asteroid, or a nuclear war. The building of such structures could be facilitated by digging robots, relatively easy to make.

Of course, such structures would be too hard to make to save a large fraction of humanity in case of an apocalyptic event. But they would serve well to preserve a small fraction of the human race to ride out the years of environmental hell caused by the apocalypse. In most cases of apocalyptic events, the destructive surface conditions will only last several years before things start to slowly normalize.

Given the radiation problem on Mars, it might be necessary to build underground Mars bases to protect Martian colonists from cosmic rays. When you go to the wikipedia.com article on “Colonization of Mars,” you immediately see a drawing of a proposed Mars base that is largely underground. But if you're going to be building underground structures, why not just build them here on Earth? Don't answer, “Because you could use fancy hydroponic technology to grow crops underground,” because the same technology could be used in underground shelters on Earth. For the cost of one Mars mission moving 40 colonists to Mars, you could probably build underground shelters for 100,000 humans. 

You might think that people would go crazy living underground, but it is easy to imagine some tricks that could be used to make things tolerable. For example, we can imagine a large central room with a dome-shaped ceiling. Using projections, lighting tricks, and some vegetation, such a room could be made to simulate being outdoors during various times of day and various seasons, providing a somewhat outdoorsy ambiance to people sheltering underground.  

I'll admit that underground shelters on Earth have zero glamour, which makes them different from the high glamour of a Mars colonization mission. But in terms of bang-for-the-buck, terrestrial underground shelters beat shelters on Mars hands down.

Another idea for coping with an apocalypse (without going to Mars) is the idea of recolonization stations that I discuss here. This is the idea of putting up specially designed space stations intended to be occupied for a decade or more, with the inhabitants of the station then returning to our planet, using escape capsules built into the station. This would not be as cost-effective as underground shelters, but would probably still be much less expensive than trying to colonize Mars.

A recolonization station 

Saturday, November 5, 2016

How Could 50-Year-Old Memories Be Stored in the Shifting Sands of Synapses?

I may compare the mighty fortress of materialism to a castle that is under siege. The walls that protect the castle are being breached by a wide variety of findings and observations, such as findings about cosmic fine-tuning, and observations such as near-death experiences. So how does a dedicated defender of this materialist paradigm try to defend his besieged castle? He builds walls to prop up the castle's defenses. But sometimes these walls are built of pure speculation, and therefore are no more solid than walls built of gossamer, the stuff that spider webs are built of.

Some examples can be found in the world of neurological theory. One breach in the wall around the castle of materialism is the fact of rapid molecular turnover in synapses. The most popular theory about the storage of memories in the brain has been that memories are stored as what are called synaptic weights. But those weights are built from protein molecules, and those molecules have short lifetimes of only a few weeks. This paper finds that synaptic proteins turn over at a rate of 0.7% per hour, which works out to roughly 17% per day. Such turnover (which involves the replacement of protein molecules at a rapid rate) should quickly clear any information stored in synapses.
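
A quick calculation shows how fast that turnover should wash out stored information. Here is a sketch in Python, assuming (as a simplification) that turnover behaves like simple exponential decay at the paper's 0.7%-per-hour figure:

```python
import math

RATE_PER_HOUR = 0.007  # 0.7% of synaptic protein replaced per hour (per the paper)

def remaining(hours):
    """Fraction of the original protein molecules still present after the
    given number of hours, assuming simple exponential turnover."""
    return (1 - RATE_PER_HOUR) ** hours

half_life_hours = math.log(0.5) / math.log(1 - RATE_PER_HOUR)
print(f"replaced in one day: ~{(1 - remaining(24)) * 100:.0f}%")
print(f"half-life of the original molecules: ~{half_life_hours / 24:.1f} days")
print(f"original molecules left after one month: {remaining(24 * 30) * 100:.1f}%")
```

Under this assumption the original molecules have a half-life of only about four days, and less than one percent of them survive a single month, never mind 50 years.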

Based on what we know about molecular turnover, it would seem that the brain should not be able to store memories for longer than a month. But instead humans can remember things well for 50 years or longer. How can that possibly be, if memories are stored in synapses?

The issue is a crucial one, because if your brain is incapable of storing your very long-term memories, it means they must exist elsewhere, presumably in some place (such as a human soul or some spiritual reality or infrastructure) incompatible with materialistic assumptions.

There are a few theories designed to overcome this molecular turnover difficulty, theories that are called synaptic plasticity maintenance theories. But these theories are super-speculative affairs that strain credulity in many ways. One idea is the idea of bistable or bimodal synapses. The speculation goes something like this:

Maybe there is some set of molecules that acts like an “on/off switch,” in the sense that there can be two different states. So when you learn something, maybe some molecules that act like an on/off switch get switched on. Then when some of those molecules start to disappear, because of rapid molecular turnover, maybe there's some kind of “feedback mechanism” that switches the set of molecules back to its original state. So it's kind of like a little square on an SAT test form that is penciled in, and then, after the rain washes away the pencil mark, something acts to restore the little pencil mark that was filled in.

This ornate speculation is extremely unbelievable. There is also no good evidence that it is true, and the evidence actually stands against such an idea. Below is a quote from a scientific paper:

Despite extensive investigation, empirical evidence of a bistable distribution of two distinct synaptic weight states has not, in fact, been obtained....In addition, a demonstration of synaptic bistability would require not only finding two distinct synaptic strength states but also finding that a set of different protocols for LTP induction (e.g., different patterns of stimuli, or localized application of pharmacological agents) commonly switched synaptic weights between the same two stable states. Such a demonstration has not been attempted. In addition, modeling suggests that stochastic fluctuations of macromolecule numbers within a small volume such as a spine head are likely to destabilize steady states of biochemical positive feedback loops, causing randomly timed state switches.... The weight distribution of Song et al... is based on measurements of several hundred excitatory postsynaptic potential (EPSP) amplitudes and appears to particularly disfavor the bimodal hypothesis.
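
The last point in that quote, that random fluctuations in small numbers of molecules destabilize positive feedback loops, is easy to illustrate with a toy simulation. Below is a minimal sketch in Python; the reaction rates are invented for illustration and are not drawn from any real synapse:

```python
import random

random.seed(1)

# Toy positive-feedback switch: molecule X stimulates its own production.
# All rates here are invented for illustration; nothing is measured biology.
B0, B1, K, D = 0.5, 10.0, 10.0, 0.5   # basal birth, feedback strength, threshold, death

def birth_rate(x):
    return B0 + B1 * x * x / (K * K + x * x)   # basal plus autocatalytic term

def gillespie(x, t_end):
    """Exact stochastic simulation of the birth-death switch."""
    t, trace = 0.0, []
    while t < t_end:
        b, d = birth_rate(x), D * x
        t += random.expovariate(b + d)          # waiting time to the next event
        x += 1 if random.random() < b / (b + d) else -1
        trace.append((t, x))
    return trace

# Deterministically this system has a low state (x around 2) and a high state
# (x around 14). With so few molecules, noise can flip it between the two at
# randomly timed moments -- the destabilization the quoted paper describes.
trace = gillespie(x=2, t_end=2000)
step = max(1, len(trace) // 20)
for t, x in trace[::step]:
    print(f"t = {t:7.1f}   x = {x:3d}   state = {'HIGH' if x > 5 else 'low'}")
```

A switch that flips at random times is the opposite of a stable 50-year memory store.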

What the above paragraph is basically saying is that there is no good evidence for the idea of bistable or bimodal synapses, and that there is good evidence for rejecting it. There is an additional reason for rejecting the idea, one not mentioned in the quote above: complex information like human memories could never be biologically stored by a mechanism offering nothing more than binary “on/off” states, which is all that the bistable or bimodal synapses idea provides.

Take the simple example of a visual image you see and remember. Each pixel or dot that makes up the image has a color. A color could be biologically represented by some synapse strength varying across, say, 16 or 256 levels, but it could not be biologically represented by a setup consisting of mere “on/off” switches. Computers can store information in binary form, but no plausible mechanism has been described by which a natural biological system could store memories in binary form.

So the ultra-speculative idea of bistable or bimodal synapses is a bust as an explanation for how your brain might be able to store memories for 50 years despite rapid molecular turnover. Another major attempt to explain such a thing is a cluster theory of synaptic stability. The theory was presented in this paper. The speculative idea is that maybe molecular turnover is much more likely to occur where information has already been stored.

Imagine a very simple case of a 10 by 10 grid of 100 cells, in which some information is stored. Imagine a black square stored in the center of this grid, formed by 64 black cells (an 8 by 8 block). Then imagine the grid cells being randomly overwritten by molecular turnover. But suppose that instead of these replacements occurring randomly at all positions, they occur much more often near cells that are already black. Then the information might decay less rapidly.
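Here is a minimal sketch of that idea (my own toy version, not the actual model from the paper), in which turnover deletes occupied cells at random and re-insertion is heavily biased toward empty cells with many occupied neighbors:

```python
import random

SIZE = 10

def neighbors(grid, r, c):
    # Count occupied cells among the (up to 8) neighbors of (r, c).
    return sum(grid[r + dr][c + dc]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr or dc) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE)

def turnover_step(grid, bias=100.0):
    # Turnover: delete one occupied cell at random.
    occupied = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c]]
    r, c = random.choice(occupied)
    grid[r][c] = 0
    # Re-insertion, weighted toward cells with many occupied neighbors
    # (mimicking the paper's "orders of magnitude" bias assumption).
    empty = [(r, c) for r in range(SIZE) for c in range(SIZE) if not grid[r][c]]
    weights = [1 + bias * neighbors(grid, r, c) for r, c in empty]
    r, c = random.choices(empty, weights=weights)[0]
    grid[r][c] = 1

# Store an 8 by 8 square in the middle of the grid and run 1000 steps.
grid = [[1 if 1 <= r <= 8 and 1 <= c <= 8 else 0 for c in range(SIZE)]
        for r in range(SIZE)]
original = [row[:] for row in grid]
for _ in range(1000):
    turnover_step(grid)
overlap = sum(grid[r][c] and original[r][c]
              for r in range(SIZE) for c in range(SIZE))
print(f"cells of the original square still black: {overlap} of 64")
```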

That doesn't sound too unreasonable. But when we take a look at the details of the paper, we find that it makes quite an outrageous assumption. It states, “The primary effect of this implementation is that the insertion probability at a site with many neighbors (within a cluster or on its boundary) is orders of magnitude higher than for a site with a small number of neighbors.” The phrase “orders of magnitude” means something like 100 times, 1000 times, or 10,000 times. To assume that is to “cook the books” in a quite ridiculous way, like some gambler assuming that he will be 1000 times more successful than the gambler next to him at the roulette table. There is no reason why we would see such gigantic discrepancies in the positions where molecular turnover occurs.

The paper presents simulations intended to show that, under these absurdly implausible assumptions, information could be preserved despite molecular turnover. The simulations show a little 7 by 7 grid evolving over 1000 time steps. Information is preserved over 1000 time steps, but not over 2000 time steps.

There are three problems here: (1) the assumption that molecular turnover occurs “orders of magnitude” more often near existing storage receptors is absurdly unrealistic and biased; (2) the simulation period is too short, leaving the author able to claim merely that such a “meta-stable network” could last as long as a year (we actually need something that would store information for 50 years); (3) the information tested in the simulation is too simple, being the simplest possible shape (a square), the kind that works best under such a simulation, rather than a more complicated shape that would tend not to be preserved as well.

If a similar simulation were attempted with a more complicated shape, such as a Y shape or a P shape, the information would not be well-preserved.  Of course, what we actually store in our memories is vastly more complicated than such test shapes.  
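Readers who want to test that expectation can re-seed the toy grid above with a thinner shape; a thin shape has more boundary per cell, so the neighbor-count bias gives it less protection:

```python
# Replace the solid square with a rough one-cell-wide "Y" (a hypothetical
# test shape, not one from the paper), then rerun the turnover loop above.
y_cells = {(0, 2), (1, 3), (0, 6), (1, 5)} | {(r, 4) for r in range(2, 10)}
grid = [[1 if (r, c) in y_cells else 0 for c in range(SIZE)]
        for r in range(SIZE)]
```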

There is another reason why these synaptic plasticity maintenance theories are futile. Even if you were somehow to explain how information could be preserved despite rapid molecular turnover in synapses, you would still have the problem of instability on a larger structural level.

We are told that what are called dendritic spines are the storehouses of synaptic strengths. A person who believes memories are stored in synapses will sometimes think of these dendritic spines as being rather like the words of a paragraph, each word storing part of our memories.



[Image: the little "leaves" shown in the picture are dendritic spines]

But how long do these dendritic spines last? In the hippocampus of the brain, they last for less than two months. In the cortex, they last for less than two years. This study found that dendritic spines in the hippocampus last for only about 30 days. This study found that dendritic spines in the cortex of mouse brains have a half-life of only 120 days.
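It is worth pausing on what a 120-day half-life implies. The arithmetic below is plain exponential decay, assuming the measured half-life stays constant over time:

```python
# Fraction of today's spines surviving after a given span, given a
# 120-day half-life (the figure from the mouse cortex study above).
HALF_LIFE_DAYS = 120

for years in (2, 50):
    surviving = 0.5 ** (years * 365 / HALF_LIFE_DAYS)
    print(f"after {years} years: {surviving:.2e} of spines survive")

# Output:
# after 2 years: 1.47e-02 of spines survive   (about 1.5%)
# after 50 years: 1.65e-46 of spines survive  (effectively none)
```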

So it's rather like this. Imagine you are driving on the highway, and you see a car filled with papers, with papers blowing out of the car's open windows at a constant rate. That is like the information loss that would seem to be caused by rapid molecular turnover. Then suppose you watch the car crash into a tree and burst into flames. That's like the loss of information caused by the short lifetimes of dendritic spines. Now suppose someone theorizes that maybe someone in the car wrote down the information on all the papers blowing out the windows. That utterly contrived and implausible theory is like the synaptic plasticity maintenance theories I have described. But such theoretical ingenuity is futile, because it cannot explain how the information could be preserved after the car crashes and burns. Similarly, the synaptic plasticity maintenance theories are futile because they cannot explain how we could have memories lasting 50 years despite dendritic spines that last no longer than two years.

Our scientists need to wake up and smell the coffee of their own research findings about the brain. Such findings imply that 50-year-old memories cannot be stored in synapses, and (as discussed here) there is no other plausible place where the brain could store them. Our minds must therefore be something much more than just the product of our brains. My 50-year-old memories may be stored in a soul, or in some other mysterious reality of consciousness, but they cannot be stored in my synapses, which are not a stable platform for permanent information storage.

Postscript: I mentioned here the short lifetimes of dendritic spines, but didn't mention that synapses themselves don't last very long. Below is a quote from a scientific paper:

A quantitative value has been attached to the synaptic turnover rate by Stettler et al (2006), who examined the appearance and disappearance of axonal boutons in the intact visual cortex in monkeys.. and found the turnover rate to be 7% per week which would give the average synapse a lifetime of a little over 3 months.
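The quoted lifetime is simple arithmetic: assuming a constant turnover rate, a loss of 7% per week implies a mean synapse lifetime of about 1/0.07 weeks:

```python
# Mean lifetime implied by a constant 7%-per-week turnover rate.
weeks = 1 / 0.07
print(f"{weeks:.1f} weeks, i.e. about {weeks / 4.345:.1f} months")
# 14.3 weeks, i.e. about 3.3 months
```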

After writing this post, I found the important scientific paper "The demise of the synapse as the locus of memory: A looming paradigm shift?" by Patrick C. Trettenbrein. This scientist makes quite a few points similar to those in this post, and then makes this astonishing confession: "To sum up, it can be said that when it comes to answering the question of how information is carried forward in time in the brain we remain largely clueless."