Wednesday, March 30, 2016

Are We All Blindfolded Lab Rats In Their Big Gene Gamble?

The book Altered Genes, Twisted Truth by Steven M. Druker is an extremely thorough look at the potential hazards involved in genetically modified organisms (GMOs). The book has been endorsed by more than 10 PhD's, some of whom are biologists. Here is a rather terrifying passage from page 192 of the book:

Accordingly, several experts believe that these engineered microbes posed a major risk. Elaine Ingham, who, as an Oregon State professor, participated in the research that discovered those lethal effects, points out that because K. planticola are in the root system of all terrestrial plants, it is justified to think that the commercial release of the engineered strain would have endangered plants on a broad scale – and, in the most extreme outcome, could have destroyed all plant life on an entire continent, or even on the entire earth...Another scientist who thinks that a colossal threat was created is the renowned Canadian geneticist and ecologist David Suzuki. As he puts it, “The genetically engineered Klebsiella could have ended all plant life on the continent.”

For years scientists have been creating genetically modified organisms that have been introduced into our food supply. We are told that such GMO food products are safe. But there are reasons for thinking great hazards may be involved.

One reason is that there is no way to determine that a GMO is safe merely by testing it in the lab. This is because such organisms are released into the environment, which has far too many variables for any scientist to keep track of. An organism that may seem safe when tested under lab conditions may turn out to be a killer or cancer-causer when introduced into an ecological system that has way too many variables and unknowns to ever be properly simulated in the lab.

A second reason for doubting the safety of GMO's is that every genetically modified organism is its own particular case, and safety successes in the past never guarantee the safety of new GMO projects. Every single new GMO is a new piece of technology that may have risks. It's rather like this: I may mix together 40 different combinations of chemicals without any problem; but it is still perfectly possible that the forty-first combination of chemicals that I try may cause an explosion (or lethal gas) that kills me. A similar situation holds for GMO's. Claims that a “40-year success record proves that GMO's are safe” are not valid, because every new type of GMO is a new unproven piece of technology that might blow up in our faces.

A third reason for doubting the safety of GMO's is that we don't understand enough about life to be very confident about the safety of genetic engineering. Many scientists have a bad habit of exaggerating human knowledge about biology, often advancing an unwarranted triumphalist narrative making it sound as if they have godlike insight into the inner workings of biology. The truth is very different. We do not understand at all the origin of life, and do not understand even basic issues such as morphogenesis, how a fertilized ovum is able to progress into a newborn baby. We do not even understand where the body plans of humans come from, as I discuss in my post The Gigantic Missing Link of Biological Life. Contrary to common claims that DNA is some blueprint for the human body, the “language” used by DNA seems to be a “bare bones” semantically-minimal language entirely incapable of expressing anything like a three-dimensional arrangement of parts. It's a language suitable mainly just for creating lists of chemicals. The Human Genome Project, which was supposed to offer great insight into life, has revealed a baffling sea of complexity which scientists are pondering with little insight, rather like historians who scratched their heads about Egyptian hieroglyphics before the Rosetta Stone was discovered.

Chapter 11 of Druker's book is an excellent comparison of the difficulties of changing computer code you don't understand with the difficulties of genetic engineering. Pointing out “the inescapable risks of altering complex information systems,” Druker says quite correctly that DNA is information of high complexity and low comprehensibility. By messing with that code, we are rather like newly-hired programmers who start making changes in some very old legacy system with 5 million lines of code they don't understand, without understanding the ramifications of their changes. A golden rule of programming is: don't screw around with legacy code you don't understand. But genetic engineering requires that scientists do just that. Also, DNA is not written in anything like readable high-level programming code in a language such as Java. Its intelligibility level is much closer to binary code – streams of 1's and 0's that you cannot understand by reading. Most programmers know that trying to edit binary code is a “Russian roulette” type of business.
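To see how treacherous blind binary editing is, here is a minimal sketch (my own illustration, not an example from Druker's book). It flips a single bit in the 8 bytes that encode the number 1.0, without “understanding” the IEEE 754 format those bytes use:

```python
# A toy illustration of a "blind edit" to binary data you cannot read
# (my own sketch, not Druker's example).
import struct

blob = bytearray(struct.pack("<d", 1.0))  # 8 bytes encoding the number 1.0
blob[6] ^= 0x10                           # flip one bit, with no idea what it means

print(struct.unpack("<d", bytes(blob))[0])  # prints 0.5 -- the value silently halved
```

One flipped bit halved the value, with no warning of any kind. A blind edit to a genome could similarly change meaning without any visible sign that something went wrong.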

Druker argues that given such a situation, we should not even be using the term “genetic engineering,” since engineering is what goes on when you have a mastery of what you are building or changing. A more accurate term, he argues, is bio-hacking.

Druker urges a ban on all GMO's, but there's a much less drastic step we can take: the step of labeling all foods with GMO's, so that consumers can choose not to consume them if they are concerned about their safety. Labeling of GMO foods is supported by 93% of all Americans, and is required practice in many countries. With labeling of GMO's, you can choose not to be a blindfolded lab rat in the big gene gamble.

But many scientists stand in opposition to GMO labeling. Why? In many cases this is because they have a financial interest in the spread of genetically modified organisms. Many are employed directly by biotechnology firms that sell GMO products. Many other biologists take consulting money (directly or indirectly) from biotechnology corporations. The vested interests of biologists or biochemists may be clouded in various ways. For example, a biologist may be paid for writing on some web site that gets half of its money from biotechnology corporations, money given to spread the message that GMO's are safe. Or a biochemist may work for a research unit at a university that is partially funded by contributions or contracts from biotechnology corporations.

Given the many ways in which biologists and biochemists have financial interests in GMO's, we absolutely cannot trust any “expert consensus” that GMO's are safe. We should not expect objective opinions from people who have vested interests or financial interests in the matters on which they are stating opinions.

The reasoning used by opponents of GMO labeling is often ridiculous. One common charge is that it is “anti-science” to ask for GMO labeling. That's laughable. Genetically modified food products aren't science – they are technological products that we have the option to consume or not consume. It is no more “anti-science” to avoid consuming GMO's than it is anti-science not to buy some particular computer product (a type of computer is not “science” any more than a GMO, although both make use of scientific knowledge).

It is also ridiculous to argue that it is “anti-science” to avoid GMO's because some small committee of the American Association for the Advancement of Science said they are safe in 2012. An opinion doesn't become science because some committee of scientists voices it. Science is the total body of facts collected by scientists. That body of facts does indeed warrant concern about the safety of GMO's.

Another argument made is that we must support GMO's because they will help feed starving people abroad. But such an argument does nothing to argue against GMO labeling. A safer way of increasing food supplies is to reduce meat consumption (the grain needed for a meat-centered meal is typically enough to make six meals that don't use meat).

A study published in 2012 found that a genetically modified crop and a herbicide it was engineered to be grown with caused severe organ damage and hormonal disruption in rats fed over a two-year period. Eventual consequences for some of the rats included tumors. Published in a peer-reviewed scientific journal, the study was carried out by a team led by Professor Gilles-Eric Séralini. A kind of intellectual lynch mob quickly formed, led by pro-GMO interests, which caused the paper to be retracted on the flimsy basis that it was inconclusive (using the same criteria, we would have to throw out a third or a quarter of all scientific papers). The incident was a great black mark on contemporary bio-science, and seems like a very troubling attempt at a cover-up. Various ridiculous justifications were given, including untrue claims that Séralini's team had used the wrong type of rat. After a long delay another scientific journal published the study. See here for other information about the study.

Such a study does not show that some GMO food you are eating is unsafe. But it does show GMO advocates are not telling us the truth when they claim there is no reason for thinking that genetically modified organisms may be risky.

Saturday, March 26, 2016

Data-Dredging and Small-Sampling Their Way to Borderline Baloney

Yesterday the news media were discussing a PLOS ONE scientific paper entitled “Why Do You Believe in God? Relationships between Religious Belief, Analytic Thinking, Mentalizing and Moral Concern.” Most of the media uncritically regurgitated the silly press release on this study, and failed to notice any of its very large deficiencies, which a casual inspection of the scientific paper would have revealed.

The paper in question was written by authors who make very clear near the beginning of their paper that they are in the “you have religious beliefs because you are stupid” camp. They make this clear by stating in the second paragraph of their paper that “analytic thinking discourages religious and spiritual beliefs,” after giving no support for such a statement other than references to other papers with problems similar to those in their own paper. The authors then attempt to support this unwarranted and outrageously over-general claim by presenting their own little experiments. But wait a second – what happened to not reaching a conclusion until you discuss your experiments? The standard procedure in a paper is to first suggest a hypothesis (with a degree of impartiality, treating it merely as a possibility), then to discuss an experiment testing the hypothesis, and then to discuss whether the evidence supports the hypothesis. When scientists talk like “we believe in this doctrine, and here are some experiments we did that support our belief,” we should suspect a very clear experimental bias that calls their results into question.

What are the problems with this study? The first problem is that the scientists did experiments that used a ridiculously small sample size. Their first study involved only 236 persons, and a similar sample size was used for their other studies. It is preposterous to try to make any claims whatsoever about a relation between religious belief and analytic abilities based on such a ridiculously small sample size. The smaller the sample, the more likely it is that spurious correlations will be found. There is no excuse for such a small sample size, given the fact that the studies were merely multiple-choice surveys, and that nowadays almost any ace programmer can build a website easily capable of doing such studies in a way that might get many thousands of respondents.
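A quick simulation (my own sketch, nothing taken from the paper itself) shows how easily pure noise yields sizable correlations at small sample sizes:

```python
# How often does pure, unrelated noise produce a correlation of |r| > 0.3
# at different sample sizes? (My own sketch, not code from the paper.)
import math, random

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
for n in (20, 200, 2000):
    hits = 0
    for _ in range(2000):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [random.gauss(0, 1) for _ in range(n)]  # independent of x by construction
        if abs(pearson_r(x, y)) > 0.3:
            hits += 1
    print(f"n = {n:>4}: |r| > 0.3 in {hits / 2000:.1%} of pure-noise trials")
# Typically about 19% at n = 20, and essentially 0% at n = 200 and n = 2000
```

The same noise that almost never fakes a sizable correlation in a large sample fakes one about a fifth of the time in a small one.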

The second problem with the study is the very small correlation reported. The study found a negative correlation between religious belief and analytic ability that is reported as having a P-value of “<.05” or less than .05. Sound impressive? It isn't. The P-value is quite a slippery concept. It is commonly thought of as the chance of you getting the result when there is no actual correlation. But according to an article in Nature, such an idea is mistaken. Here is an excerpt from that article:

According to one widely used calculation, a P value of 0.01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of 0.05 raises that chance to at least 29%.

So the P value of “less than 0.05” reported in the “Why Do You Believe in God” study means very little, with a chance of at least 29% that a false alarm is involved. And the chance of a false alarm could easily be more than 50%, given the experimental bias the study shows by very quickly announcing near its beginning the claim that “analytic thinking discourages religious and spiritual beliefs” before providing any data to support the claim.
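For readers who want to check the arithmetic, here is a sketch of the kind of calculation that I believe lies behind the Nature figures: the Sellke–Berger bound, which converts a p-value into a minimum false-alarm probability under even prior odds that the effect is real:

```python
# The Sellke-Berger "-e * p * ln(p)" bound (my sketch of what I believe is the
# calculation behind the Nature article's figures): -e*p*ln(p) is the smallest
# factor by which the data can shift the odds toward the null hypothesis.
import math

def min_false_alarm_prob(p, prior_odds_of_real_effect=1.0):
    odds_shift = -math.e * p * math.log(p)              # best case for the effect
    posterior_odds_null = odds_shift / prior_odds_of_real_effect
    return posterior_odds_null / (1 + posterior_odds_null)

for p in (0.05, 0.01):
    print(f"p = {p}: false-alarm probability at least {min_false_alarm_prob(p):.0%}")
# p = 0.05 -> at least 29%; p = 0.01 -> at least 11%, matching the quoted figures
```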

The “Why Do You Believe in God” study shows strong signs of what is called p-hacking or data-dredging. This is when scientists gather data and then slice it and dice it through various tortuous statistical manipulations, data-fishing until they can finally claim to have discovered something “statistically significant” (usually ending up with some borderline result which means little). The hallmark of data-dredging is that the study veers into claiming correlations that it wasn't explicitly designed to look for, and that seems to be what is going on in this study (a study that announces no clear objectives at its beginning, unlike a typical paper announcing the study was designed to test some particular hypothesis).

The study in question, and similar studies, are misrepresented by Professor Richard Boyatzis, who makes the following doubly-misleading statement:

A stream of research in cognitive psychology has shown and claims that people who have faith (i.e., are religious or spiritual) are not as smart as others. Our studies confirmed that statistical relationship.

Given that Boyatzis's study has a ridiculously small sample size, and a very unimpressive borderline P-value, it very clearly does not confirm anything. As for Boyatzis's claim that studies have shown that people who have faith “are not as smart as others,” it is as bogus as his claim about his own study.

Let's look at the biggest such study. It was a meta-analysis described in this 2013 Daily Mail story, which has a misleading headline that doesn't match the actual results of the study: “Those with higher intelligence are less likely to believe in God claims new review of 63 scientific studies stretching back decades.” But when we look at the main graph in the study, we find quite a different story.
IQ (left) compared to % of population that are atheist

The graph shows some blue dots representing countries. The graph indicates atheism is very rare among people with low intelligence. But what about people with normal intelligence or slightly higher-than-normal intelligence? Among those countries with an IQ of about 100 (between 95 and 105), the great majority (about two-thirds) show up in the graph as having atheism rates of 40% or less. Among those countries with above-average intelligence (IQ of 105 or more), 80% show up as having atheism rates of 30% or less.

This graph clearly shows that Boyatzis's claim that “people who have faith (i.e., are religious or spiritual) are not as smart as others” is a smug falsehood. Such a claim could only be justified if the chart above showed a strong predominance of atheism among groups of average or above-average intelligence. The graph shows the opposite of that. The correct way to describe the graph above is as follows: atheism is extremely uncommon among people of low intelligence, and fairly uncommon among both people of average intelligence and people of above-average intelligence. In fact, in the graph above the country with the highest percentage of atheists has a below-average IQ, and the country with the highest average intelligence has only about 12% atheists.

Boyatzis's “not as smart as others” claim has the same low credibility as claims made by people who say (based on small differences in IQ scores) that “black people are not as smart as white people.” Slight statistical differences do not warrant such sweeping generalizations, which hardly make sense when a white person has no idea whether the black person to his left or right is smarter than he is. The tactics used here are similar to those used by white supremacists: get some data-dredging tiny-sample study finding some dubious borderline result, and then shamelessly exaggerate it to the hilt, claiming it proves that “those types of people are dumber.”

It is basically a waste of time to try to justify either belief or disbelief by searching for slim correlations between belief and intelligence. Imagine someone who tried to decide how to vote in November by exhaustively analyzing the intelligence of Democrats and Republicans, trying to figure out which was smarter. This would be a gigantic waste of time, particularly so if the results were borderline. How to vote should be based on issues, not some kind of “go with the slightly smarter crowd” reasoning. Similarly, belief or non-belief in spiritual realities should be based on whether there are good reasons for such belief or non-belief. If 95% of smart people do not believe in something, you should still believe in it if you have convincing reasons for believing in it – particularly if such reasons are subtle or complicated reasons that those 95% have little knowledge of.

It is interesting to note that the Pearce-Rhine experiments on ESP had an overall p-value of 10^-22 (or .0000000000000000000001), as reported in the last column of the table in this link. Here is a result that is more than 100,000,000,000,000,000 times more compelling than the unimpressive p-value of about .05 reported by this “Why Do You Believe in God?” study. But the same people who will groundlessly reject the Pearce-Rhine study (because of ideological reasons) will warmly embrace the study with the paltry p-value of only about .05 (again, for ideological reasons). Can we imagine a stranger double standard?

Postscript: Back in the old days, studies like this might have a thin veneer of scientific objectivity. But the press release for the study makes clear that the researchers "agree with the New Atheists," which heightens our suspicions that we have here some agenda-driven research created mainly to be used as an ideological battering ram.

Wednesday, March 23, 2016

Mainstream Media Muddled Multiverse Mishmash

I have no problem with the title of the recent BBC article on the multiverse. It is “Why there might be many more universes beside our own.” Sure, there might be – anything's possible. My problem is with the subtitle of the article, which is: “The idea of parallel universes may seem bizarre, but physics has found all sorts of reasons why they should exist.” This is not accurate, and the article fails to provide any reasons why “parallel universes” should exist.

The first reason the article gives for thinking that there might be multiple universes is that our universe might have an infinite amount of matter, and if so, there might be duplication – such as some other galaxy exactly like our galaxy in all respects (including a double of you). But that's not really imagining another universe – it's imagining an infinite universe with large-scale duplication. There is no possibility that we could ever verify that such a large-scale duplication is likely, because we have no prospect of ever verifying that the universe does have an infinite amount of matter. Whatever observations we make in a finite span of time, they will never be able to justify a conclusion that the amount of matter in the universe is infinite. Even if you verified the existence of a trillion trillion trillion trillion galaxies, you would have no basis for assuming that there was more than a trillion trillion trillion trillion trillion galaxies. So the very idea of a universe with infinite matter is not provable, and not scientific. It certainly does not count as a “physics reason” for believing in a multiverse.

The article then describes the cosmic inflation theory, which is actually a large group of theories with a few similarities. Exaggerating the case for this theory (as many scientists do whenever discussing a theory that fits their ideological inclinations), the article claims that the match between the observed variations in the cosmic background radiation and those predicted by the inflationary theory is “almost unbelievably good,” suggesting that the theory is correct.

But that's misleading. Cosmic inflation theory (not to be confused with the more general Big Bang theory) started out around 1980, when we already knew pretty much how small the variations in the cosmic background radiation were. During the past 35 years (as our knowledge of such variations has slightly increased) physicists have continually fiddled with different varieties of the cosmic inflation theory, trying to get some version that matches observations. Most versions of the cosmic inflation theory do not match observations, and any that do are those that have been fiddled with and tweaked to try to match observations as those observations came in. Given such a situation, the evidence value of such a match between “prediction” and observation is minimal. We do not at all have a situation where the theory predicted something very surprising, which was much later confirmed to be exactly true.

In fact, there are many problems with the cosmic inflation theory, such as its excessive requirements for fine-tuning (the theory was created to help get rid of some fine-tuning, but may require more fine-tuning than it gets rid of). Far from predicting the cosmic background radiation variations exactly, as the BBC article suggests, the theory actually is hard to reconcile with one of those variations – the feature known as the Cold Spot. A cosmologist quoted here puts it this way:

[The inflationary model] predicts that today’s universe should appear uniform at the largest scales in all directions. That uniformity should also characterize the distribution of fluctuations at the largest scales. But these anomalies, which Planck confirmed, such as the cold spot, suggest that this isn’t the case… This is very strange. And I think that if there really is anything to this, you have to question how that fits in with inflation…. It’s really puzzling.

The BBC article then tries to make the leap from the general idea of the cosmic inflation theory to a particular variation of that theory called the eternal inflation theory, which imagines many bubble universes. This eternal inflation theory is neither verified nor well supported, so such an idea does not qualify as a “physics reason” for believing in a multiverse. We have zero evidence for the existence of any other “bubble universe” outside of our own (or any type of universe outside our own).

The BBC article then attempts to segue from an inflationary multiverse (the idea of lots of little bubble universes) to what is called the string theory landscape – the completely speculative idea that there are many different universes which each have a different version of string theory physics. There is no factual basis for such a leap, as there is currently no evidence that string theory is a correct theory.

 Strange galaxy in an alternate universe 

The article implies that such an idea of a multiverse consisting of many universes may be helpful in explaining the fine-tuned features of our universe. But it isn't. This is because one does not increase the likelihood of success on any one trial by increasing the number of trials. If a universe as fine-tuned as ours is a zillion-to-one shot, it's still exactly the same zillion-to-one shot if you assume there are a zillion other universes. Increasing the number of universes may increase the chance that some universe may be accidentally habitable, but we are not interested in the probability of some universe being accidentally habitable. We are interested in the probability of our universe being accidentally habitable. That probability is not increased by even 1 percent by assuming other universes.
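The point can be checked with toy numbers (both figures below are invented stand-ins for the “zillion-to-one shot”):

```python
# Toy numbers only: p and N are invented stand-ins for the "zillion-to-one shot."
import math

p = 1e-9     # chance that any single random universe is fine-tuned for life
N = 10**9    # assumed number of universes in the multiverse

p_some = -math.expm1(N * math.log1p(-p))  # P(at least one habitable) = 1 - (1 - p)**N
p_this = p                                # P(*this* universe habitable) is still just p

print(f"some universe habitable: {p_some:.3f}")   # ~0.632, and it grows with N
print(f"this universe habitable: {p_this:.0e}")   # 1e-09, unchanged by N
```

Adding universes drives the first number toward certainty while leaving the second untouched, which is exactly the distinction the multiverse argument glosses over.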

As physicist V. Palonen states in this scientific paper:

The overall result is that, because multiverse hypotheses do not predict the fine-tuning for this universe any better than a single universe hypothesis, the multiverse hypotheses fail as explanations for cosmic fine-tuning. Conversely, the fine-tuning data does not support the multiverse hypotheses.

Next the BBC article touches on Lee Smolin's theory of cosmological natural selection. The article describes it like this:

This suggested to Smolin that a black hole could become a Big Bang, spawning an entire new universe within itself. If that is so, then the new universe might have slightly different physical properties from the one that made the black hole. This is like the random genetic mutations that mean baby organisms are different from their parents.

The theory in question is a heap of crazy speculations. We have no basis for concluding that black hole collapses lead to new universes, nor is there any basis for concluding that such a new universe would have different laws and physical constants from our universe. Even if such a theory were true, it would not at all explain our universe's fine-tuning, contrary to Smolin's claims. This is because in order for you to have a universe in which black holes are forming in the first place, a universe needs to already have an incredible amount of fine-tuning – the fine-tuning needed for atoms and for stars (the predecessors of black holes) to exist. There are specific reasons why we should not expect stars to exist in even one in a million universes with random laws and fundamental constants. You can't explain the universe's fine-tuning if you start out with a universe that is fine-tuned in the first place. See here for some other reasons for rejecting Smolin's theory.
 
The BBC article then touches on M-theory, a version of string theory. This is also speculation for which there is no evidence. Its main purpose currently seems to be to create busy work for mathematicians who can play around with speculative equations because they can't think of something more productive to do.

Finally, as if building up to some muddling climax of confusion, the BBC article touches on the “many worlds” theory of parallel universes. This is the crazy idea that all physically possible realities are actualized – so there must be some universe in which your dog is the ruler of America. Thankfully there is not the slightest evidence for this morally ruinous theory, which tells us that any absurdity you can imagine is just as real as the news you watch on television (an idea that is a perfect prescription for moral indifference).

In short, the BBC article fails to provide any substantiation for its subtitle claim: “The idea of parallel universes may seem bizarre, but physics has found all sorts of reasons why they should exist.” The article fails to produce a single physics reason why they “should exist.” The article tacitly admits such a thing near its end, when it says “these ideas lie on the border of physics and metaphysics” and then attempts to rank the plausibility of multiverse scenarios “in the absence of any evidence.” Yes, we finally have a confession that we are in the realm of groundless speculation – drifting off into metaphysics “in the absence of any evidence.” So why did the article appear in the Science section? 

Postscript: See page 39 of this paper for a discussion of how only a certain small range of the fundamental constants will allow for the existence of stars.  If we assume that the gravitational force can have any value between its current value and a value as strong as the strong nuclear force, then "the stable star-permitting region" of parameter space occupies less than a trillionth of a trillionth of a trillionth of the total parameter space. 

Saturday, March 19, 2016

They Keep Knocking Their Heads on the Same “Origin of Life” Wall

In my earlier post They Keep Feeding Us “Explanation Is Near” Baloney, I discussed how science writers have the extremely bad habit of writing stories that suggest that scientists are on the brink of understanding some long-standing riddle, even when the evidence suggests they are light-years away from understanding such a problem. We had an example of such a piece of science writing in a recent Quanta article entitled “In Warm, Greasy Puddles, the Spark of Life?” The article discusses the work of a biochemist named David Deamer. The article states, “Over the past few years, Deamer has expanded his membrane-first approach into a comprehensive vision for how life emerged.”

But that is baloney. The word “comprehensive” means “complete.” So if we had a complete vision for how life originated, we would understand how a self-replicating molecule was able to originate against all odds. We would understand how the complicated genetic code originated. We would understand how the first cells originated. But none of our scientists understand such things. All of our scientists are light-years away from having any such thing as a “comprehensive vision for how life emerged.”

That Deamer does not have any such thing as a “comprehensive vision for how life emerged” is suggested by the following initial question asked by the interviewer (and the answer given):

QUANTA MAGAZINE: What have been the biggest accomplishments of researchers seeking to understand life’s origins? What questions remain to be solved?

DAVID DEAMER: We have really made progress since the 1950s. We have figured out that the first life originated at least 3.5 billion years ago, and my guess is that primitive life probably emerged as early as 4 billion years ago. We also know that certain meteorites contain the basic components of life. But we still don’t know how the first polymers were put together.

Asked to list “the biggest accomplishments of researchers seeking to understand life’s origins,” Deamer fails to list anything very substantive. He talks about some finding about when life first emerged. The important question is not when it emerged, but how it emerged. As for the claim that meteorites contain “the basic components of life,” that isn't correct if we use the term “basic components of life” in the most common way. What we generally think of as “the basic components of life” are self-reproducing molecules and cells, and meteorites have no such thing. Of course, you can shrink your scale further and talk about something much simpler, some little chemical fragment, and call that a “basic component of life,” but that's no more meaningful than claiming that the basic components of glass windows (silicon atoms) are found in sand on the seashore, which doesn't explain how the windows came into existence.

By admitting that “we still don’t know how the first polymers were put together,” Deamer shows that he does not have any such thing as a “comprehensive vision for how life emerged.”

The traditional concept of life's earthly origin is that of a primordial soup, a warm concentration of ingredients in which life might have originated. This idea has always been extremely inadequate, because it doesn't explain how those chemicals combined into a self-reproducing molecule. What does Deamer add to this? Based on the diagram he supplies, his main addition seems to be a kind of sandwich underneath the soup, a sandwich made up of greasy semi-permeable layers. Below is a diagram similar to the one in the Quanta article, but a little easier to understand. 

 

Does adding this little “sandwich” underneath the “primordial soup” help things out a great deal? Not really. The problem of the origin of life is a problem of explaining what we can call a functionality explosion. According to the common thinking, suddenly a disorganized sea of chemicals transformed into the machinery of a self-reproducing molecule, which was followed by the machinery of a cell, which somehow used the highly organized system of symbolic representations known as the genetic code. How do you explain that? Adding a sandwich underneath the soup doesn't get you to that explanation. That's because it is not merely a problem of concentration (which this “sandwich under the soup” might help you to explain) but a problem of organization, a vastly improbable coordination in which complicated machinery somehow arises. It's rather like a chemist mixing some chemicals in a beaker and somehow ending up with some cool self-reproducing nanobots at the bottom of his beaker.

Deamer's approach to trying to illuminate the origin of life question is an example of a particular approach that we may call the “special environment” approach. If you try this approach, you concentrate on trying to describe some unusual local environment that might explain the origin of life. Scientists have been taking this approach for 50 years, and it seems to be a futile exercise, a case of knocking their heads against the wall. Given the enormous difficulties of explaining the origin of life, there is no reason to suspect that they can be overcome by any approach centered on just imagining some favorable environment. If I'm trying to explain an event that is like typing monkeys producing a Shakespeare sonnet, there's no way to do that by imagining some special monkey room in which such a thing might have been likely.

Here is a totally unorthodox approach that might have a better chance of success. Imagine you are some ambitious young scientist hell-bent on explaining the origin of life sometime in your career. You might do worse than to follow the approach described below.

First, spend a few years researching anomalous effects involving water. Pay no attention to the “don't look into that, because it's impossible” skeptics. Thoroughly and impartially investigate any and all claims involving inexplicable effects involving water.

Then spend a few years researching anomalous effects involving atoms or molecules. Pay no attention to the “don't look into that, because it's impossible” skeptics. Thoroughly and impartially investigate any and all claims involving inexplicable effects involving atoms or molecules.

Then spend a few years researching anomalous effects involving energy. Pay no attention to the “don't look into that, because it's impossible” skeptics. Thoroughly and impartially investigate any and all claims involving inexplicable effects involving energy.

Then spend a few years researching anomalous effects involving communication, coordination, or coincidence. Pay no attention to the “don't look into that, because it's impossible” skeptics. Thoroughly and impartially investigate any and all claims involving inexplicable examples of communication, coordination, or coincidence.

Then finally, come back to consider the question of the origin of life. See whether some of the things that you have learned from these investigations may help to shed light upon the origin of life. It could be that these “mini-miracles” you have investigated may help to explain the “major miracle” of the origin of life.

Because of the enormous difficulties of explaining the origin of life, such an approach probably would not work. But I suspect that it would have a greater chance of success than the futile “special environment” approach our scientists have been pursuing for decades with very little success.

Tuesday, March 15, 2016

He Wants to Upload His Way to Multiple-Body Immortality

The runaway fantasies of mind-uploading enthusiasts are documented in a recent BBC article on the topic, an article that also helps to reveal the logic shortfalls of these fervent apostles of what has been called “the rapture of the nerds.”

We are first introduced to a Russian who says modestly, “Within the next 30 years I am going to make sure that we can all live forever,” and who says that he is “100% confident it will happen.” Is he talking about some immortality potion or pill? No, he is talking about the idea that you will be able to copy the essence of your brain to a computer or robot, and that this will give you immortality.

There are quite a few reasons why such a thing won't happen. First, the brain does not store digital information, so the mind is very likely not something that can be uploaded into a computer. Second, there is not the slightest evidence that the brain uses any type of readable code to store information. DNA uses the genetic code, something we can understand and read. But we have zero knowledge of anything like a brain code that we can use to read information from the brain. Nor can we plausibly imagine how such a code could have originated, as it would have to be something almost infinitely harder to explain than the genetic code (the origin of which is very hard to explain).

Third, you could never capture the exact state of a particular person's brain even if you tried to use microscopic nanobots to do such a thing. There have been some reasonable projections about what nanobots can do. For example, we can imagine nanobots that clear artery blockages that are starting to form. You just inject into the arteries a nanobot that is able to detect such a blockage, and then have the nanobot do something to reduce the blockage, perhaps by just releasing a tiny droplet of some chemical. That wouldn't be too hard for a little nanobot only about the size of a cell. But imagine trying to map a brain with nanobots. Each of the brain-mapping nanobots would have to somehow be aware of its own exact position in the brain, so it can record the exact position of each neuron and brain connection it encounters. So if a nanobot comes to a neuron that is 1.334526 centimeters from the left edge of the skull, and 2.734538 centimeters from the right edge of the skull, and 5.292343 centimeters from the back of the skull, then those exact coordinates must be recorded. But how can a microscopic nanobot do that? You can't supply a microscopic nanobot with a little GPS system allowing it to tell its position. So it would seem that nanobots are completely unsuitable for any such job as mapping the exact physical microscopic structure of an organ with billions of cells packed together in a small space.

Then there's the duplication problem. Imagine if there was a machine that could scan your brain, and then upload your mind to a robot. Even if that process was done perfectly, the robot would not have your mind. It would instead have a copy of your mind. You may realize this just by considering that if this uploading process didn't kill you, and the robot existed at the same time as you, there wouldn't be two you's. There would be one you, and a copy of you living in the robot. If you then died after this upload, it would not at all be true that you had survived death by the fact that this robot existed.

This has been pointed out many times to mind-uploading theorists, and it seems to have gone in one ear and out the other without ever making contact with their brains. For example, the previously quoted Russian makes this statement: "For the next few centuries I envision having multiple bodies, one somewhere in space, another hologram-like, my consciousness just moving from one to another." Apparently he thinks that he will be able to move from one body to another by just making sure that no more than one of his electronic incarnations is active at the same time.

Imagine it: you upload your mind to robot X and robot Y. Your body then dies. Can you now “switch your mind from robot X to robot Y” just by turning off robot X and turning on robot Y, or “switch your mind from robot Y to robot X” just by turning off robot Y and turning on robot X? Of course not. In reality, both robot X and robot Y will at best be copies of you. Your life is not extended beyond the death of your body because either robot X exists or robot Y exists. And it certainly won't be a case of you “switching bodies” because one machine is turned off and another is turned on.

The greedy fantasies of the mind uploading theorists are amazing. If you want to hope for a technological extension of your life, why not just hopefully imagine that you will be able to take some kind of youth serum that will extend your life – or that you will be able to have your brain transplanted into a robot body that might last centuries? No, our mind upload theorists must imagine for themselves something even more extravagant – the ability to switch their minds between different bodies. It's kind of like a person who is not satisfied with just believing that he will go to heaven, but also wants to believe that once he gets to heaven he will rule as the king of heaven.

Let me tell a little science fiction story. Once there was a planet on which most of a continent was covered by a densely packed jungle. The trees of this jungle had long thin branches and vines that could connect with many other trees. A particular tree might have branches and vines that stretched out for hundreds of meters, connecting with hundreds of other trees. But there was something very remarkable about this jungle. When most arrangements of these trees and branches and vines existed, the huge jungle was just an ordinary jungle, no more conscious than a stone. But when there got to be a sufficiently great density of these trees and branches and vines, the jungle became a self-conscious mind, and was capable of judgment, analysis, insight, imagination and self-conscious experience.

The story isn't very believable, is it? Why should a jungle become self-conscious merely because there was some particular arrangement of trees and branches and vines? But such a story is just like the story that our neurologists ask us to believe. We are told that we have consciousness merely because of a particular arrangement of densely packed nerve cells and dendrites connecting different cells – an arrangement quite like that of the jungle just described, except that instead of trees it is nerve cells, and instead of branches it is dendrites, and instead of vines it is axons. 

Jungle thickets and neural thickets

That is a tale that makes no sense, and the only reason it is accepted is that we don't have some other account of consciousness that does make sense. But one day we will learn of such an account, a completely convincing story of how our consciousness arose. When we finally learn such a story, I suspect it will make perfect sense. At that time I suspect we will look back on our current ideas of the origin of consciousness (and related notions such as uploading human minds into computers) as being laughable fairy tales.

Friday, March 11, 2016

Your Stumbling Path as a Universe Creator

Let us imagine that you are a divine omnipotent being and you have just decided to create a universe to keep yourself company. This may seem like an impious or impertinent line of thought, but it is actually one that may shed some light on an important philosophical issue. So at the risk of committing blasphemy or some other spiritual sin, let us pursue this thought experiment.

Given this thought assignment, your first thought might be that you would need to create some universe that starts out in a simple state, and then progresses to a more and more orderly state, one in which life can gradually develop. So you start things off by creating a gigantic disorganized burst of matter and energy. But before long you find that things are not turning out right. The newly created matter and energy is not progressing in the right way. Things are not getting more orderly.

So you cancel this attempt at creating a universe, causing your creation to vanish. You resolve to plan things out more carefully. First you figure out some laws that will cause newly created matter and energy to progress into ever-more-orderly forms. This may take quite a bit of time. Then you figure out that there are physical constants that must be set up just right. After setting up such things correctly, you then create a new universe, one that will follow these laws.

You wait a long time, and at first things seem to be going okay. Your newly created universe is very slowly becoming more orderly. But eventually it dawns on you: this is going to take almost forever before things get interesting. So you ask yourself: what can I do to speed things up?

Eventually you realize: you don't have to create a universe in which order very gradually evolves over eons. You can create a universe that starts out as a highly orderly universe.

So again, you cancel the universe you created, causing it to vanish. Everything is now blackness and void once again. You wonder: how can I “cut to the chase” by creating a universe that starts out in a highly orderly state? Eventually you realize: you can just create a planet full of life, and even a planet that has intelligent creatures on it.

There is no reason, you realize, why a newly created planet has to have a “fresh born planet” look to it. You can create a planet in any state you can imagine. You can instantly create a planet that looks a thousand years old, a million years old, or five billion years old. The older-looking planets simply require more details for you to fill in. But that's no problem, since it is easy for your vast superhuman mind to quickly churn out as many background details as you need.

So you create such a planet, and a whole universe of stars and planets surrounding that planet. You observe your handiwork with satisfaction, focusing on the first planet created. On that planet you have created a race of intelligent beings. They have minds big enough to form a civilization and create cities. This is going to get interesting real soon, you think to yourself.

But things don't progress as quickly as you would like. For what happens is that these newly created beings have blank minds. Since you have just brought them into existence, and forgot to give them any memories, they start out completely empty-minded. They don't even know how to build a fire.

Again, you think to yourself sadly: this is going to take too long before things get interesting. But then suddenly you have a brilliant idea: why not create people whose minds are already filled with memories? There is, you realize, absolutely no rule that a freshly created person has to have a blank mind. You can create a person who starts out with any set of memories you can imagine.

You suddenly realize: you can instantly create a planet that is in any state of civilization you can imagine. The trick is to create people who start out living with all kinds of memories in their minds. Such memories, you realize, do not actually have to correspond to previous experiences the person lived.

You realize that if you want to create a planet starting out in a state just like Earth was in on January 1, 1950, or any other date, you can do so. You can just create people whose minds are already filled up with memories. Such people can be right in the middle of some task. For example, you can begin the planet's history with lots of people in their cars, driving down some road, and convinced that they have already lived 30 years, even though they were just created an instant ago. In the same first instant of the planet's history, there can be all kinds of other people whose lives just suddenly start, with their heads filled with memories.

So now you get rid of the previous universe you created, causing it to vanish. Once again everything is darkness and void. You decide on a plan to create a universe that will instantly begin in a highly ordered state. From the very first instant there will be all kinds of planets with all kinds of civilized and active beings, in various different states of existence. On the first day of this universe’s existence, none of these people will suspect that today was the first day they ever lived, and that the memories of their previous days were just memories that they started out with on the first day of the universe’s creation. 

 
So poof, you create such a universe. This is great, you think. No need to wait around. There are countless planets for you to observe, most of which are in highly ordered states, with cities packed with people, and cars and trains riding about, and all kinds of fascinating activity. Now you are happy. You finally got things right.

This has been an interesting thought experiment, but it has been more than just an idle exercise. There is a very interesting point behind this thought experiment. The point is: we do not know how old our universe is. The entire universe could have been created (by a deity or an extraterrestrial simulator of universes) x number of years ago, where x is any number between 1 year and 13 billion years. The fact that you may have memories of having lived for, say, 50 years does not prove you have actually lived for 50 years. The entire visible universe could have been created 20 years ago, and on the first day of your life, you may have started out with decades of memories in your mind, memories that were just planted in your mind (and the mind of countless others) on the first day of our universe's existence.

We cannot be certain that all of the people we read about in the history books actually lived. Real human history (that which humans have actually experienced) may not stretch back longer than 50 years or 500 years or some other shocking number. The fact that we have been given various hints or clues suggesting that our universe or actual human experience is a certain number of years old does not prove that the universe or actual human experience is not some tiny fraction of such a number – a hundredth or a thousandth. 

Of course, it is far more likely than not that you have lived as long as you think you have. But the idea that the universe was created fairly recently is an interesting possibility. 

Imagine a father gives a child named Susan a story to read. The story tells the tale of a man named John who was born 22 years ago. The father asks Susan to determine the age of John. Then there might be a conversation like this:

Father: So tell me, Susan, how old is John?
Susan (after re-reading the story): John is exactly 22.
Father: Are you sure of that?
Susan: Yes, I'm quite certain of that. It clearly says he was born 22 years ago.
Father: Well, you're wrong. The correct answer is: John is only two hours old. Because that's when I wrote this story involving John.

We may be making the same kind of mistake as Susan. We live in a universe that seems to have within it a kind of “background story” that it is something like 13 billion years old. But that whole universe, including this “background story,” may have been created much more recently.
  

Monday, March 7, 2016

Gene-Spliced Super-Geniuses Are Not on the Horizon

A recent Nautilus article by Stephen Hsu has the title “Super-Intelligent Humans Are Coming.” Hsu gives us some reasoning that tries to justify the claim that we will soon be able to genetically engineer humans to have an IQ of 1000. First he guesses that there might be thousands of gene variations that can have a slight effect (positive or negative) on intelligence. Then he reasons as follows:

Given that there are many thousands of potential positive variants, the implication is clear: If a human being could be engineered to have the positive version of each causal variant, they might exhibit cognitive ability which is roughly 100 standard deviations above average. This corresponds to more than 1,000 IQ points.

But there are quite a few problems with such reasoning. First, we have no idea whether having so many genetic variations would be feasible as a way of getting to a super-genius. It could be that if you have more than, say, 100 of them, it has some terrible side effect that would mess up a person's brain or body. Second, we have no idea whether there is a law of diminishing returns that would kick in once you had tried to give someone more than 100 of these genetic variations.

There are all kinds of situations in which having one type of thing may increase some parameter by one percent, but having, say, 50 of those things does not increase that parameter by anything like 50 percent (because of a “law of diminishing returns” effect). For example, if you buy one smoke detector for your house, it may increase your life expectancy by one percent. But buying 50 smoke detectors does not increase your life expectancy by 50 percent. And while bringing a second pencil to the test center may increase your SAT score by an average of 1 percent, bringing 20 pencils will not increase your SAT score by anything close to 20 percent.
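A toy model makes the contrast concrete (the numbers are invented purely for illustration, not estimates of any real genetic effect):

```python
# Invented numbers, purely to illustrate diminishing returns vs. linear extrapolation.
import math

def linear_gain(k, per_variant_sd=0.1):
    # Hsu-style extrapolation: every added variant contributes the same increment
    return k * per_variant_sd

def saturating_gain(k, ceiling_sd=5.0, rate=0.02):
    # Alternative: gains approach a ceiling instead of growing without bound
    return ceiling_sd * (1 - math.exp(-rate * k))

for k in (10, 100, 1000):
    print(f"{k:>4} variants: linear {linear_gain(k):6.1f} SD, "
          f"saturating {saturating_gain(k):4.2f} SD")
#   10 variants: linear    1.0 SD, saturating 0.91 SD
#  100 variants: linear   10.0 SD, saturating 4.32 SD
# 1000 variants: linear  100.0 SD, saturating 5.00 SD
```

If anything like the saturating curve holds biologically, stacking up thousands of variants buys almost nothing beyond the first hundred or so.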

It could well be that it will be impossible to manipulate genes to increase human intelligence by more than 50 percent, no matter how many genetic modifications are made. The whole idea that the secret of human intelligence is found in the genes may be misguided. The rice plant has more than 32,000 genes, but humans have only about 20,000 genes. How could that be if our genes are storing an algorithm for making human minds?

It is also doubtful that we will be able to isolate some series of gene changes that could add up to a roadmap for making humans super-intelligent. In this Guardian article a psychologist says the following:

It’s the best kept secret of modern science: 16 years of the Human Genome Project suggest that genes play little or no role in explaining differences in intelligence. While genes have been found for physical traits, such as height or eye colour, they are not the reason you are smarter (or not) than your siblings. Nor are they why you are like your high-achieving or dullard parents, or their forebears.

There is a problem called the “missing heritability” problem. This is the problem that while twin studies may suggest that as much as 50% of the variation in human intelligence is caused by genetic differences, scientists have had no luck in identifying the genes that determine intelligence. In this article a scientist named Plomin states, “I've been looking for these genes for 15 years, and don't have any.” This Scientific American article says, “Numerous researchers have found that the structure of cognitive abilities is strongly influenced by genes (although we haven't the foggiest idea which genes are reliably important).” Given the lack of success in finding genes that determine intelligence, it may well be that intelligence has relatively little dependence on a person's genes – perhaps much less than 50%.

Hsu's optimism about genetic tinkering is the kind of optimism scientists had before the Human Genome Project was completed. It was thought that once man's genes were mapped, there would be all kinds of breakthroughs. Scientists thought that we would conveniently find that particular genes mapped to particular traits, and that this would be some medical Aladdin's lamp. What actually happened was something very different. What scientists found was a murky muddle that resulted in relatively few breakthroughs. All too often the roles of particular genes were still very unclear. A typical result was that some gene might be found to have a 2% effect on some trait. Such results were surprisingly unhelpful. Given such a reality, there is no basis for concluding that some big increase in human intelligence can be caused anytime soon through genetic engineering.

The difficulty of mapping genes to cognitive prowess should surprise no one. Think of what genes are: they are typically sequences of chemicals used to construct proteins. Now imagine some great problem involving abstract thinking, such as the question of what is the best future path for mankind, or why there is something rather than nothing, or whether there is an overall plan for the universe, and if so, what is its nature. Can we imagine some new combination of chemicals that would suddenly cause us to understand such problems with much greater insight? No, we cannot. We basically have no understanding of how some particular gene might cause increased intelligence, so in trying to manipulate genes to increase intelligence, we are groping around in the dark.

In short, while we cannot rule out the idea of one day genetically tinkering our way to super-minds, the prospects of being able to radically improve intelligence through genetic engineering anytime soon are poor. It is not accurate to claim, as Hsu's title does, that genetic engineering prospects show that “Super-Intelligent Humans Are Coming.”

Are there any reasons for hope on this matter? There are a few. For one thing, human intelligence as measured on IQ tests seems to be increasing. Google for the topic “Flynn effect” and you will find that IQ scores have increased by 5 to 25 points in the past several decades. This seems quite inexplicable from any kind of genetic standpoint. Could there be some entirely unknown factor behind human intelligence, something that is now turning up the knob on human intelligence to help us cope with an increasingly complex society? Possibly.

Another basis for hope is the prospect that we might develop some drug or chemical that could produce short-term boosts to human intelligence. Many people think that the mind is not a product of the brain, and that the brain is just a kind of temporary receptacle for our minds (for reasons discussed in this series of posts). According to such an idea, your brain is a kind of localization device, restraining a spirit, soul, or mind that might otherwise be free-roaming, forcing it to be chained to some particular body. If such an idea is true, some drug could conceivably switch off some of the brain's activity, which might in some sense be like letting the genie out of the bottle. By taking such a drug you might get in touch with some higher consciousness that is ordinarily restricted by brain activity largely dedicated to keeping you living in the here and now. No drug currently seems well suited for such a purpose, but one may someday be developed. Future generations might have a pill that makes them feel temporarily as if they had a consciousness far beyond normal human consciousness. After swallowing such a pill, you might feel as if the doors of Eternity had been opened.

Wednesday, March 2, 2016

The Dart of the Vacuum Miracle Hit the Distant Bullseye

Perhaps the biggest mystery in physics is why the vacuum of space has so little energy in it. Although it may seem intuitive to think of the vacuum of space as empty, quantum mechanics predicts that it should be something very different: something incredibly packed with energy, a dark energy caused by all kinds of quantum fluctuations. In a TED talk a physicist discussed this:

Now, if you use good old quantum mechanics to work out how strong dark energy should be, you get an absolutely astonishing result. You find that dark energy should be 10 to the power of 120 times stronger than the value we observe from astronomy. That's one with 120 zeroes after it. This is a number so mind-bogglingly huge that it's impossible to get your head around. We often use the word "astronomical" when we're talking about big numbers. Well, even that one won't do here. This number is bigger than any number in astronomy. It's a thousand trillion trillion trillion times bigger than the number of atoms in the entire universe.

Instead of the vacuum of space being filled with this kind of energy (which, via E = mc², would give each cubic meter of empty space a mass-equivalent far denser than steel), we have a vacuum of space that is almost entirely empty of matter and energy. This discrepancy between reality and prediction is sometimes called the vacuum catastrophe. But for reasons I discuss here, it really should instead be called the vacuum miracle. Having a vacuum that is relatively empty is both exceptionally improbable and very fortunate in allowing our existence. The common term for an extremely unlikely but highly fortunate event is “miracle,” as in: It was a miracle that she fell onto an open truck carrying pillows when she jumped off the high bridge.
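For readers curious where that factor of 10 to the power of 120 comes from, here is a hedged back-of-the-envelope version of the calculation, using standard Planck-scale values (my own illustration; the exact exponent depends on the energy cutoff one assumes for the quantum fluctuations):

\[ \rho_{\text{predicted}} \sim \frac{E_{\text{Planck}}}{l_{\text{Planck}}^{3}} \approx \frac{2 \times 10^{9}\ \text{J}}{(1.6 \times 10^{-35}\ \text{m})^{3}} \approx 5 \times 10^{113}\ \text{J/m}^{3} \]

\[ \rho_{\text{observed}} \approx 6 \times 10^{-10}\ \text{J/m}^{3} \qquad \frac{\rho_{\text{predicted}}}{\rho_{\text{observed}}} \approx 10^{123} \]

Cutting the fluctuations off at lower energy scales shrinks the exponent somewhat, which is why the discrepancy is usually quoted as “roughly 120 orders of magnitude.”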

The vacuum miracle is something that bothers many scientists, who would prefer to believe that the universe is not so well-arranged to favor creatures like us. One way they have tried to ease this discomfort is to suggest that perhaps there is some unknown reason why the vacuum of space has to be empty, resulting in a zero cosmological constant, or zero dark energy. But such reasoning doesn't work, because in the late 1990s it was discovered that the expansion of the universe is accelerating. This can only be true if there is a very small but nonzero cosmological constant, which basically means that each cubic meter of the vacuum of space has a little bit of energy. Collectively this energy (the same thing as dark energy or the cosmological constant) is causing the expansion of the universe to accelerate.

A recent paper by five scientists suggests that a cosmological constant very much like ours is actually favorable for the eventual appearance of creatures such as us. The paper states: “We find that we seem to live in a favorable point in this parameter space that minimizes the exposure to cosmic explosions, yet maximizes the number of main sequence (hydrogen-burning) stars around which advanced life forms can exist.”

The paper is discussed by this article in the journal Science, which states:

As it turns out, our universe seems to get it just about right. The existing cosmological constant means the rate of expansion is large enough that it minimizes planets’ exposure to gamma ray bursts, but small enough to form lots of hydrogen-burning stars around which life can exist. (A faster expansion rate would make it hard for gas clouds to collapse into stars.)

We can use the common phrase “threading the needle” to describe this type of fine-tuning. Or you might compare it to sinking a golf ball in the hole, or hitting a distant bullseye with an arrow. Given all the random quantum contributions to the vacuum, this outcome should have been as improbable as a drunk, blindfolded archer hitting a very distant bullseye with his arrow.

But the journal Science is written for scientists, so it was quite predictable that the article author would try to ease the discomfort of any scientists who might be made uncomfortable by this extreme example of cosmic fine-tuning. This was in accordance with the standard principle that our scientists must be kept in carefully filtered information bubbles, like 1980 Moscow bureaucrats who would get all their news from Pravda. Heaven forbid that the tender ears of our scientists should ever be offended by something that does not match their expectations.

So the Science article cites a statement by physicist Lee Smolin:

However, he adds, all truly anthropic arguments to date fall back on fallacies or circular reasoning. For example, many tend to cherry-pick by looking only at one variable in the development of life at a time; looking at several variables at once could lead to a different conclusion.

It certainly is not true that “all truly anthropic arguments to date fall back on fallacies or circular reasoning,” a claim which is just a lazy kind of dismissal similar to statements such as “all Republican arguments use fallacies” or “all Democratic arguments rely on logic errors.” In fact, the particular example given does not hold up as an example of a fallacy.

Imagine you found a case in which one cosmic parameter seemed to be extremely fine-tuned for life. Would it be wrong to form an opinion based on that parameter, without considering all other parameters? It might be if you were examining some external universe and didn't know whether life existed in it, because the overall situation might be like this:

1 parameter with just the right value for life to exist

5 parameters inconsistent with the existence of life

But we know that no such situation can exist in our universe, because we know that life does exist in our universe. So given the discovery of a single parameter that seems to be fine-tuned for life, the worst situation that could exist is:

1 parameter with just the right value for life to exist

All other parameters consistent with the existence of life

Even if you found such a situation, the evidence would still be pointing to a fine-tuned universe. You would have one “thumbs up,” and zero “thumbs down.”

In fact, we know the situation is much better than that. We know that there are quite a few parameters and fundamental constants which are fine-tuned for life, as discussed here and here. So we know that the situation is really: lots of thumbs up, and no thumbs down (if there were any thumbs down, we wouldn't exist). We know of lots of parameters that are very fine-tuned, and there is no chance that we will discover some other parameter that cancels out such evidence (because if such a life-forbidding parameter existed, we would not be here to discover it). The darts of nature known as the universe's fundamental constants have most improbably hit not just one distant bullseye, but many of them.
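To put this “thumbs up, thumbs down” reasoning in slightly more formal terms (a sketch in generic probability notation; the framing is mine, not Smolin's or the Science article's), let \(F_i\) be the event that the i-th parameter falls within its life-permitting range, and let \(L\) be the fact that life exists:

\[ P(F_1 \cap F_2 \cap \cdots \cap F_n \mid L) = 1 \]

\[ P(F_1 \cap F_2 \cap \cdots \cap F_n) = \prod_{i=1}^{n} P(F_i) \ll 1 \quad \text{(assuming independent parameters, each with small } P(F_i)\text{)} \]

The first line says that, given our existence, no newly examined parameter can ever turn out to be a “thumbs down.” The second says that the more independently fine-tuned parameters we discover, the less probable it becomes that the whole ensemble landed in the life-permitting region by chance.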

Cosmic fine-tuning

So Smolin's reasoning here has no weight. In fact, Smolin himself has shown that he is extremely impressed by the evidence that the universe is fine-tuned, for he wrote a whole book called The Life of the Cosmos advancing an elaborate theory designed to explain the fine-tuned features of the universe. It was a pretty goofy theory, involving the idea that new universes magically pop into existence whenever black holes form. But at least it showed that Smolin regards cosmic fine-tuning as something very important that we need to explain.

In general, we should not pay particular attention when physicists lecture us about errors in logic, as physicists typically receive no formal training in logic. You can get a PhD in physics without ever taking an introductory course in logic.

Far from involving some fallacy, the fact that our universe is incredibly fine-tuned is one of the most important things discovered by science in the past hundred years. If your philosophy doesn't mesh with such a fact, your philosophy needs to be revised. 

Postscript: This post included this snarky comment: "This was in accordance with the standard principle that our scientists must be kept in carefully filtered information bubbles, like 1980 Moscow bureaucrats who would get all their news from Pravda." I didn't expect to see another example of that so quickly. Shortly thereafter came news that a flood of complaints from scientists caused the journal PLOS ONE to retract a scientific paper dealing with the great amount of coordination in the human hand -- solely because the authors made three one-sentence references to "the Creator" -- for example, "The explicit functional link indicates that the biomechanical characteristic of tendinous connective architecture between muscles and articulations is the proper design by the Creator to perform a multitude of daily tasks in a comfortable way." Again, we see the Pravda principle at work -- the tender ears of our scientists must not be offended, the information bubble must be kept free of contaminating deviations of thought, and the sociological taboos of the tribe must be rigidly enforced.