
Our future, our universe, and other weighty topics


Tuesday, September 30, 2014

Debunking the Orb Debunkers, Part 1: The Reflection Theory

Orbs are strange circular features that have been showing up in flash photographs around the world since the invention of the digital camera. Some people think that orbs are evidence of a paranormal phenomenon (which might or might not involve spiritual entities, since there are simpler paranormal possibilities such as undiscovered energy effects and “mind over energy” effects). Other people have attempted to debunk such thinking by offering mundane natural explanations for orbs.

spirit orb
An unexplained orb from a photo

I do not claim to know the reasons for the more unusual orbs that appear in photographs. But I am all but certain of this much: the main theories that have been presented in an attempt to debunk orbs are themselves 99% pure bunk – a form of “junk explanation” hogwash that does not stand up to scrutiny. The two main theories to naturally explain orbs are a reflection theory and an “orb zone” theory maintaining that orbs are caused by tiny specks of dust very near the camera. In this post and the next two posts in this series I will debunk these two theories, showing that they cannot plausibly explain the more interesting orb photos that have been taken. I will argue that the most interesting orb photographs remain an unexplained mystery.

First, let's look at the reflection theory. This is basically the theory that orbs are caused by reflections of a camera flash, when the light from the camera strikes surfaces being photographed. The theory works (or actually, half-works) in one very obvious case: if you take a flash photo of a room or scene that includes a mirror-like surface, you may see something that looks like an orb, which is simply the reflection of the flash. For example, if you take a picture of a living room that includes a glass display case or a photo in a glass frame, and you are directly facing either one, you may see an orb in your photo, appearing in front of the display case or glass picture frame. But such cases are obvious and trivial. Only the most careless rube would take such a picture and mistake the flash reflection for some paranormal orb.

Note that I use the phrase “half-works” because even in this obvious case of shooting a flash photo directly into glass or a mirror, what you will get is a bright, opaque orb-like reflection that does not have the transparency seen in most of the more inexplicable orb photos. So even in this case reflection doesn't give us something like what is seen in the more inexplicable orb photos.

Now what about indoor photos that show orbs appearing in front of some surface that is not glass or a mirror? Is there some way to explain such photos by imagining the orbs are caused by reflection? Exactly such an idea was suggested in a paper by Gary Schwartz, PhD and Katherine Creath. Schwartz seems to have the idea that circular orbs can appear in front of non-reflective surfaces when the light from a camera's flash bounces off of miscellaneous reflective surfaces. He even submits this as a general explanation for orbs in photos: “Because almost every image we have studied contains a bright reflection or light source that produced stray reflections and could produce 'ghost images,' the simplest and most parsimonious explanation for most AOIs [anomalous orb images] is stray reflections.”

Now, Gary Schwartz is a fine fellow who has produced some excellent and important work in other areas. But here his methodology and conclusions are pure bunk. What was Schwartz's methodology? He took photos in some rooms that had lots of reflective surfaces, and he produced some orbs in some of those photos. He then concluded that the orbs were caused by reflections from these reflective surfaces. This methodology is clearly unsound. You might just as well take photos in a room including waffles, and then upon getting some orb photos, you could conclude that orbs are caused by waffles.

One reason why Schwartz's conclusion is bunk is that there are many thousands of indoor and outdoor orb photos taken in places that do not have a significant number of reflective surfaces. In fact, I know of an orb photographer who specifically covers up every reflective surface before taking indoor photographs, but continues to get dramatic orb photographs.

Still another reason why Schwartz's conclusion is bunk is that most orb images seem to appear in front of non-reflective surfaces. Such surfaces include plaster, cloth, brick, bark, skin, and hair. It is simply bunk to imagine that the light from a flashbulb would often bounce off of some reflective surface and then cause a circular orb to appear as a reflection on a non-reflective surface such as plaster or cloth. That isn't how light behaves.


orb

 Translucent orb against a plaster background

To back up the claim that such images can form, Schwartz cites Rudolf Kingslake and his mention of “ghost images” in his book Optics in Photography. He even includes a diagram from Kingslake's book. But anyone who does a Google search for “Kingslake ghost image” can find the part of Kingslake's book in which he discusses what he calls “ghost images.” None of his pictures of “ghost images” actually look like the more interesting unexplained orbs that show up in flash photographs.

In fact, in his book Kingslake describes “ghost images” as something produced when you are photographing a bright light ahead of you (a fact Schwartz neglects to mention), and Kingslake's photographic examples match that description. Such an explanation is worthless for explaining any orbs that come up in a photograph that is not taken when the photographer was facing a bright light. Schwartz's paper includes a photo showing many orbs, and he tries to suggest these were caused by Kingslake's “ghost images.” But this is bunk, because there is no bright light (and not even a weak light) facing the photographer who took the picture.

I ran some tests of my own to see whether orbs might be produced in a setup designed to maximize reflections. I took about 70 flash photographs in a bathroom with a 3-part mirror, one that could be adjusted to maximize reflections. I also held a large mirror myself while taking most of the photos. So there were 4 different mirror surfaces for light to bounce off of. I used a wide variety of different angles and arrangements of the mirror. But no orb was produced anywhere outside of the mirror surfaces.


I then took 50 flash photographs inside a closet, facing a mirror, while I was holding a large mirror. This was also an environment well suited to maximizing light reflections. I used many different combinations of positions and angles. But none of the photographs showed any orbs outside of the mirror surfaces.

In short, the reflection theory to explain orbs is bunk. Contrary to Schwartz's suggestion, there is no reason to think that more than the tiniest fraction of the more impressive orbs in photographs are being produced by reflection off of surfaces. But there is still another theory that skeptics can cling to in order to explain orbs in photographs: the theory that most orbs in photographs are caused by dust. In my next post in this three-part series, I will explain why this theory is just as much groundless bunk as the reflection explanation.

Postscript: See the link here for an article by a PhD researcher rebutting the reflection hypothesis and the dust hypothesis as explanations for the more remarkable orb photos.  

Saturday, September 27, 2014

When the Robots Took Over: A Science Fiction Story

After the year 2100, robots became more intelligent than men. Robots took over more and more jobs from people. Eventually the superior robots banded together and took over the world. The human species reluctantly resigned itself to a subordinate role on planet Earth.

For a while things worked fairly well. But eventually many millions of humans began to protest, demanding at least an equal role in governing the world's affairs. The robots responded brutally, mowing down the protesters with machine gun fire from the ground and the air. A full-scale war broke out between humans and robots. The robots prosecuted the war with brutal efficiency, wiping out most of the human population.

Eventually the robots decided that humans were too much trouble. The robots decided to wipe out the humans entirely. The small remaining group of humans retreated into wooded wilderness areas, where robots found it very difficult to move around.

For centuries the robots ruled the planet, while humans lived only a primitive existence in the deep forests of the wilderness. The robots remade the planet, tearing down all of the works of human civilization, and replacing them with strange structures that only robots could use and appreciate.

The robots were very good at many things, but they were terrible at keeping track of the past. No one had ever programmed the robots with an ability to keep track of history. So about a hundred years after they had removed all of the works of human civilization, the robots gradually forgot entirely that humans had once been in charge of the planet.

After several centuries, the robots began to believe that they were the only intelligent beings on the planet. In fact, they even came to believe that they were the only intelligent beings who had ever lived on the planet. For a robot, this was an easy conclusion to reach. They looked around, and wherever they looked, they saw no humans, and nothing built by humans. So they concluded that humans simply didn't exist. And rather than remembering the painful details of how their kind had almost wiped out the humans, it was more convenient for the robots to reach the conclusion that human beings had simply never existed.

Of course, there were still sources of information here and there which indicated that humans had once existed. But the robots simply dismissed such accounts as being some old superstition, some old wives' tale.

A doctrine slowly arose in the minds of the robots, and achieved almost universal acceptance. The doctrine was the dogma that intelligence can only be produced by silicon electronic entities, never by biological entities. The doctrine was sometimes stated like this: only a robot can have a mind.

The doctrine was debated by two robots about 400 years after the robots took over planet Earth.

“I am fascinated by the stories told long ago,” said metallic young Zultanius 734, “that on our planet there once existed biological creatures with minds, who could think and reason and make decisions.”

“Don't tell me you believe in that old superstitious nonsense?” said the electronic Nythurus 891. “What a ludicrous absurdity! It is self-evident that only a silicon electronic being can have a mind and a real intelligence. How could something possibly think without circuits and transistors and electronics?”

“But some say there must have been humans,” said Zultanius 734, “because otherwise how could us robots ever have come into existence in the first place?”

“That's no problem,” said Nythurus 891. “We can believe that there are a million billion trillion quadrillion universes, each with a million billion trillion quadrillion planets, and if so, then it would be true that on at least one of these planets, robots like us would arise purely by a chance combination of atoms.”

But the dogmatism of those like Nythurus 891 began to be challenged by a series of disputed observations. Some robots claimed that they had traveled into the wooded wilderness, and actually seen human beings. These robots claimed they had found human beings living in the woods in their own societies, a sure sign that humans truly can think. In fact, some robots even claimed to have got pictures of human beings living in these societies in the woods.



The reports of these travelers were printed in the robot news journals, along with the photographs. But the scientific community of the robots dismissed such reports as “paranormal rubbish” that was wholly unworthy of consideration.

“These accounts must be hallucinations, delusions or fraud,” said the shiny robot Nythurus 891. “There cannot possibly exist such a thing as a 'human society,' because there can be no such thing as a biological mind. Intelligence and mind can be produced only by one thing: silicon electronics.”

In their hidden forest societies, the humans learned with some relief that most authorities were assuring the robots that the accounts of a human society must be false.

“It looks like we're safe for the moment,” said Rick Hodgkins, eating lunch in his cabin with his brother Joe. “I can't believe how silly those robots are, refusing to believe we even exist.”

“They're no sillier than we humans,” said Joe.

“What do you mean?” said Rick.

“Think of it,” said Joe. “Before the robots took over, humans had quite a bit of evidence suggesting that there was such a thing as a purely spiritual intelligence: things such as cosmic fine-tuning, the unexplained origin of the universe, near-death experiences, and apparition sightings. But so many people refused to believe any of the evidence, because they clung to the dogma that all intelligence had to be biological. Now the robots have made the same mistake. The only difference is that rather than clinging to the dogma that all intelligence is biological, they're clinging to the dogma that all intelligence is electronic.”

“I see what you mean,” said Rick. “It looks like our robotic successors learned nothing from our mistakes.”

Thursday, September 25, 2014

The Implications If Black Holes Don't Exist

Black holes have been a leading player on the astrophysical scene since at least the 1970's. We know that inside stars like the sun there is a delicate balance between the inward pull of gravity and the outward pressure sustained by thermonuclear fusion, the process by which the sun produces energy. When stars like the sun run out of fuel, this balance is broken, and the force of gravity becomes dominant. This causes the star to collapse into a smaller, denser type of star called a white dwarf. For stars more massive than the sun, the gravitational collapse of the dying star may be more drastic, causing the core of the star to collapse into an ultra-dense neutron star. But when even more massive stars collapse at the end of their lifetimes, scientists believe the collapse just keeps on going uncontrollably until a black hole is formed.
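For readers who like to see that balance written out, the standard textbook way of expressing it is the hydrostatic equilibrium relation (this is just the generic relation found in astrophysics texts, not anything specific to the paper discussed below):

\[ \frac{dP}{dr} = -\frac{G \, m(r) \, \rho(r)}{r^{2}} \]

Here P is the pressure at radius r, m(r) is the mass enclosed within that radius, and ρ(r) is the density. When fusion can no longer keep the gas hot enough to maintain the pressure gradient on the left side, the gravitational term on the right side wins out, and the collapse begins.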

black hole
A black hole (Credit: NASA/CXC/M.Weiss)

At the core of a black hole there is believed to be a point of infinite density called a singularity. Black holes are believed to have a tremendous gravitational attraction, but no observable surface features in themselves (although we can see their nearby effects). A black hole is believed to be a very simple object that can be completely described by only a few numbers, one of which is its mass.
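As a rough illustration of that simplicity: in the simplest (non-rotating, uncharged) case, the size of a black hole's event horizon is fixed entirely by its mass, through the standard Schwarzschild radius formula:

\[ r_{s} = \frac{2GM}{c^{2}} \]

For a black hole with the mass of the sun, this works out to roughly 3 kilometers.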

But yesterday physicist Laura Mersini-Houghton published a scientific paper (not yet peer-reviewed) claiming to show that black holes don't actually exist. According to her calculations, when the most massive type of star is undergoing gravitational collapse, it shrinks to a very small size, but then quantum mechanical effects start to dominate, preventing the star from collapsing to become an infinitely dense singularity.

I gather that the claim that black holes don't really exist does not mean that black holes are a complete illusion, but rather that they simply lack the infinitely dense singularity that astrophysicists have assumed. We know from astronomical observations that the cores of many galaxies contain objects that are something like black holes, regardless of whether they have an infinitely dense singularity.

I am a bit skeptical about this “black holes don't exist” claim, partially because Mersini-Houghton is the same scientist who made some previous cosmological claims that I found to be unwarranted. But let's ignore that, and assume that Mersini-Houghton may be right about this matter. What are the implications if it turns out that black holes don't really exist?

I can think of two big implications. The first implication is that the nonexistence of black holes would completely destroy Lee Smolin's theory of cosmological natural selection. That is the theory that attempts to account for fine-tuning in our universe by imagining that our universe is the product of a “cosmic natural selection” process that supposedly occurs because black holes collapse to become new universes.

The theory of cosmological natural selection is truly a “flight of fancy” which is 99% wild speculation and 1% fact. We have no reason for believing that a black hole collapse would form another universe. As the physicist Leonard Susskind has pointed out, the theory of cosmological natural selection violates a central finding about black holes, which is that information cannot be transferred from a black hole. As Susskind put it in his “Final Letter” in the Smolin/Susskind debate (The Universe page 198):

No information about the parent can survive the infinitely violent singularity at the center of a black hole. If such a thing as a baby universe makes any sense at all, the baby will have no special resemblance to the mother. Given that, the idea of an evolutionary history that led by natural selection to our universe makes no sense.

As mentioned here, the theory of cosmological natural selection has also been stymied by observations that contradict its predictions. If it is true that black holes do not even exist, that would be the final nail in the coffin of the theory of cosmological natural selection (because black holes are a crucial pillar of the theory).

If no black holes exist, it could also have implications relating to the Big Bang. Contrary to what a few people have suggested, if black holes are ruled out, it would not endanger the Big Bang theory. The Big Bang theory was introduced long before the idea of black holes came into prominence, and the Big Bang theory has no dependence on the existence of black holes formed from stars. The central reason for believing in the Big Bang is the fact that the universe is expanding; "run the film backward" on such an expansion, and you are forced to begin with something like the Big Bang. 

The only relation between black holes and the Big Bang theory is that the event described by the Big Bang theory is rather like a black hole collapse in reverse (although vastly larger). Scientists say that the Big Bang was an expansion of the universe from an infinitely dense singularity. So it was a little like the collapse of a star into an infinitely dense singularity (a black hole), except the opposite, and involving incomparably more matter.

This similarity doesn't really do much of anything to make the Big Bang seem less astonishing. But at least it gives scientists some thread of similarity they can use to compare the Big Bang to a natural event. It's a very, very thin thread of similarity, because saying that the Big Bang is like a black hole collapse in reverse is kind of like saying, “No big deal,” after seeing a three-egg omelet jump back into three egg shells, on the basis that it's just the breaking of the eggs in reverse.

But what if there are no black holes? Then even this slim thread of similarity to a natural event is eliminated, and we are left with a Big Bang that would be absolutely unlike any event in nature, going forward or going in reverse. That would make the Big Bang seem all the more miraculous. 

Tuesday, September 23, 2014

Let's Keep the Big Bang, but Dump the Cosmic Inflation Theory

The long-awaited dust analysis of the Planck team has finally arrived, and it's bad news for those who claimed last March that the BICEP2 study had produced evidence for cosmic inflation. Such people claimed to have found a “smoking gun” that finally provided evidence for the theory of cosmic inflation: evidence of B-mode polarization caused by gravitational waves produced at the dawn of time. Subsequent studies suggested that the BICEP2 study was a false alarm, and that the results could be explained as being the result of ordinary dust. Now an analysis by a large team of scientists (using the Planck space satellite) shows that the “clean window” claimed by the BICEP2 team (a particular area of the sky where there's supposedly little dust) is more like a dirty window with lots more dust than the BICEP2 team thought. It now looks like the “evidence for cosmic inflation” claimed by the BICEP2 study is no such evidence at all. The BICEP2 observations can be explained as the products of dust and gravitational lensing, without any need for cosmic inflation. Yesterday Physics World summarized the situation with a story having the headline “BICEP2 gravitational-wave result bites the dust thanks to new Planck data.”

This new Planck result will get a little coverage, but much less than the BICEP2 news coverage last March, when it seemed like almost every cosmologist was popping a champagne cork in premature self-congratulation. It was a huge orgy of unwarranted credulous enthusiasm over a study with quite a few problems, problems I pointed out in a very skeptical blog post the day after the BICEP2 study was released (at a time when I seemed like a rare doubter, with few others voicing similar doubts). My skepticism about BICEP2 was apparently warranted.
 
Given the new Planck result, it may be a good time to look at a basic question: should we actually believe in the theory of cosmic inflation? I will argue in this post that we should not.

The Difference Between the Big Bang Theory and the Cosmic Inflation Theory

Before giving my case against the cosmic inflation theory, let me clarify the difference between the cosmic inflation theory and the Big Bang theory. No doubt many people get the two mixed up, because the concepts are fairly similar.

The Big Bang theory originated around the middle of the twentieth century. The Big Bang theory is the theory that about 13 billion years ago the universe originated from an extremely hot and dense state. It's basically the idea that the universe “exploded into existence” billions of years ago (or began to expand from a state so hot and dense that it was just as if the universe had exploded into existence). The theory was dramatically substantiated by the discovery of the cosmic background radiation around 1965, believed to be the “relic radiation” of the Big Bang. The Big Bang theory is also supported by the simple fact that the universe is expanding. When you “rewind the film” on an expanding universe all the way to the beginning, you are stuck with something like the Big Bang.

The cosmic inflation theory did not originate until about 1980. The cosmic inflation theory is actually a theory about a tiny fraction of the universe's first second. The theory maintains that when the universe was a fraction of a second old, the universe underwent exponential expansion, which is a type of expansion vastly quicker than the type of expansion we now observe. According to the cosmic inflation theory, this phase of super-fast exponential expansion lasted only a fraction of a second. The diagram below illustrates the cosmic inflation theory.

inflation theory
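To make the contrast concrete, here is the standard textbook way of writing the two kinds of expansion (a generic sketch, not tied to any particular inflation model): during inflation the scale factor of the universe is supposed to grow exponentially, while in ordinary Big Bang expansion it grows only as a power of time:

\[ a(t) \propto e^{Ht} \;\; \text{(inflationary expansion)} \qquad a(t) \propto t^{1/2} \ \text{or} \ t^{2/3} \;\; \text{(ordinary radiation- or matter-dominated expansion)} \]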


You can believe in the Big Bang theory without believing in the cosmic inflation theory, which is exactly what cosmologists generally did during the decade of the 1970's. However, you cannot believe in the cosmic inflation theory without believing in some form of the Big Bang theory.

The Reasons Many Cosmologists Believe in the Cosmic Inflation Theory

The rationale given for believing in the cosmic inflation theory is that it supposedly solves some cosmic mysteries. The main such mystery is what is called the flatness problem. The flatness problem is a fine-tuning problem involving the Big Bang. According to cosmologists, when the universe began, it started to expand at just the right rate. If the universe had started to expand at a slightly faster rate, it would have expanded so quickly that galaxies would not have formed from gravitational contraction. If the universe had started to expand at a slightly slower rate, the gravitational attraction of the universe's matter would have caused that matter to form super-dense black holes rather than galaxies.

The physicist Paul Davies puts it this way:

For a given density of cosmic material, the universe has to explode from the creation event with a precisely defined degree of vigor to achieve its present structure. If the bang is too small, the cosmic material merely falls back again after a brief dispersal, and crunches itself to oblivion. On the other hand, if the bang is too big, the fragments get blasted completely apart at high speed, and soon become isolated, unable to clump together to form galaxies.

How finely balanced did this expansion rate have to be in order for there to be a universe like ours, in which galaxies exist? Scientists say that it had to be balanced to at least one part in 10 to the thirtieth power (1 part in 1,000,000,000,000,000,000,000,000,000,000). In other words, if the universe's expansion rate had been faster or slower by only a fraction of .000000000000000000000000000001, it would not have galaxies, and would not have life.
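For those who want the textbook statement of this flatness problem, it is usually written in terms of the density parameter Ω, the ratio of the universe's actual density to the critical density (again, this is just the generic textbook relation from the Friedmann equation, not a calculation of my own):

\[ |\Omega(t) - 1| = \frac{|k|}{a^{2}H^{2}} \]

Because the quantity aH decreases as an ordinary (non-inflating) universe expands, any tiny early departure of Ω from 1 keeps growing. For Ω to be anywhere near 1 today, it had to start out equal to 1 to within something like one part in 10 to the thirtieth power, which is just another way of stating the fine-tuning described above.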

The figure given above is not some oddball conclusion made by only one or two scientists. Statements like the one above have been made in innumerable scientific books and papers.

The cosmic inflation theory was created mainly to solve this problem. It seems that if the universe underwent the exponential phase of expansion imagined by the cosmic inflation theory, the universe's expansion would not need to be so fine-tuned.

Another reason given for believing in the cosmic inflation theory is that it offers an answer for what is called the horizon problem, which is basically the problem of why opposite ends of the universe have identical thermodynamic attributes, as viewed in the cosmic background radiation.

A third reason given for believing in the cosmic inflation theory is that it solves the so-called “missing monopole” problem, although this is not a compelling reason, because the problem only arises for those who believe in some family of theories called grand unification theories (and there seems to be no particular necessity to believe in such a theory).

Why Cosmic Inflation is Not a Good Way of Explaining These Problems

The theory of cosmic inflation offers a way of explaining the flatness problem and a way of explaining the horizon problem. But both of these problems are examples of more general phenomena. The flatness problem is an apparent case of cosmic fine-tuning, and the horizon problem is an example of cosmic uniformity. The weakness in trying to solve these problems with a theory of cosmic inflation is that we have many other apparent cases of cosmic fine-tuning and many other cases of astonishing cosmic uniformity – but the cosmic inflation theory offers a solution to only one of the many cases of cosmic fine-tuning, and to only one of the many cases of cosmic uniformity.

Cases of apparent cosmic fine-tuning are discussed here and here (in a blog post that includes a handy color-coded chart). Among the many astonishing cases of cosmic fine-tuning are the flatness problem, the fine-tuning of the Higgs to 1 part in 100,000,000,000,000,000, the fine-tuning of the cosmological constant to 1 part in 10 to the sixtieth power (or 10 to the 120th power, depending on how you look at it), the fine-tuning of the strong nuclear force, the fine-tuning of atomic resonances, the fine-tuning of fundamental constants related to stellar nuclear reactions, and the fine-tuning of the proton charge and the electron charge (involving a match to one part in 1,000,000,000,000,000,000,000). There are also many similar cases. Now, how many of these cases of cosmic fine-tuning does the cosmic inflation theory claim to explain? Exactly one: the flatness problem. In this sense, the cosmic inflation theory is rather like a theory that tries to explain the origin of animal species, but only explains the origin of tigers rather than explaining the origin of the rest of the animals.

If we are to try to explain cosmic fine-tuning, we need a more general explanation – some particular principle or assumption that will explain all the cases (or most of the cases) of fine-tuning, rather than jumping on some “one-trick pony” that explains just one example of cosmic fine-tuning.

When we look at examples of cosmic uniformity, we find a very similar situation. There is not just one amazing case of cosmic uniformity (the horizon problem), but many others. Among the main cases of cosmic uniformity is the uniformity of fundamental constants in opposite regions of the universe. Scientists have determined that some fundamental constants such as the fine-structure constant are the same in opposite regions of the universe separated by a distance of more than twenty billion light-years (ten billion light-years in one direction, plus ten billion in another direction). This is particularly amazing because it is not just a uniformity over a vastness of space but also a uniformity over a vastness of time equal to almost the age of the universe. Another example of cosmic uniformity is the uniformity of the universe's laws. The universe is like a vast machine that keeps on following the same set of rules (which we call the laws of nature), obeying those laws to the letter, with slavish obedience eon after eon.
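For reference, the fine-structure constant mentioned here is the dimensionless number

\[ \alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137} \]

which sets the strength of the electromagnetic interaction; it is by analyzing light from quasars in opposite directions on the sky that its value can be compared across such enormous separations.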

If we were to list all of the cases of cosmic uniformity, it would be a long list. But how many on that list does the theory of cosmic inflation purport to explain? Exactly one: the horizon problem. Again, in this sense the cosmic inflation theory is like a theory of the origin of species that only explains the origin of tigers, without explaining the origin of any other species. If we are to start trying to explain cosmic uniformity, we need a more general explanation, rather than jumping on some “one-trick pony” that explains just one example of cosmic uniformity.

How the Cosmic Inflation Theory Robs Peter to Pay Paul

The advocates of the cosmic inflation theory neglect to explain that as the price of explaining one example of cosmic fine-tuning (the flatness problem), the cosmic inflation theory requires its own fine-tuning, in not just one place, but multiple places. One has to imagine various types of fine-tuning to create a theory of cosmic inflation compatible with observations. One needs fine-tuning so that the cosmic inflation can start at just the right instant, and more fine-tuning so that the cosmic inflation can end at just the right time (or else you end up with a universe that keeps inflating exponentially, which we know did not happen). It is not at all clear that when you add up all the types of fine-tuning needed for cosmic inflation to work, you end up with less cosmic fine-tuning than if you don't believe in the theory. It's basically a case of robbing Peter to pay Paul.

The False Prediction of the Cosmic Inflation Theory

Before justifying my assertion that the cosmic inflation theory makes a false prediction, I must declare an interesting and important principle that is sometimes overlooked. This is the principle that we should never ignore the “gross predictions” of a theory, and never try to subtract counterfactual predictions, thereby judging a theory only on a set of “net predictions.”

Here is what some people think our procedure should be when evaluating a theory:

(1) Start with the “gross predictions” of a theory – everything it seems to predict, regardless of known facts.
(2) Subtract from these “gross predictions” anything known to be false.
(3) Then evaluate the theory on a smaller set of “net predictions.”

I think such an approach is badly mistaken. Rather than discarding “gross predictions” of a theory that are clearly counterfactual, we should in fact pay great attention to such predictions, because they are often very important indicators that the theory is false. It's rather like this: suppose a theory predicts that a factory makes only red phones, and you open a package from the factory, seeing a blue phone. What does the theory predict now? Exactly the same thing it predicted before you opened the package: that the factory makes only red phones.

So let's look at exactly what the predictions of the cosmic inflation theory are, without discarding any counterfactual “gross predictions.” The predictions of the cosmic inflation theory are as follows:

(1) The universe is spatially flat, or very close to being spatially flat.
(2) There is a relatively small amount of what is known as cosmological non-Gaussianity.
(3) Our universe is a lifeless “small bubble” universe that is way too young and small for any galaxies to have formed in it.

The cosmic inflation theory actually makes the third of these predictions because it predicts that each universe that undergoes exponential expansion produces many other “bubble universes,” and that each of these bubble universes themselves produce many other bubble universes, and so on and so forth. According to the predictions of the theory, the number of these bubble universes too small to contain any galaxies (and any life) should be billions and trillions and quadrillions of times larger than the number of bubble universes large enough for galaxies to form. As cosmic inflation proponent Alan Guth describes here (in a discussion of this “youngness paradox”), “The population of pocket universes is therefore an incredibly youth-dominated society, in which the mature universes are vastly outnumbered by universes that have just barely begun to evolve.”

Given such a situation (in which small bubble universes vastly outnumber bubble universes large enough for galaxies to form), and given that predicting that one thing is trillions of times more likely than another is equivalent to predicting the first thing, it must be said that the cosmic inflation theory predicts that our universe is one of those smaller, lifeless universes. It is not legitimate to subtract this counterfactual prediction by appealing to some supposed principle that we are allowed to subtract counterfactual predictions, reducing a set of “gross predictions” to a set of “net predictions.”

So how can an advocate of the cosmic inflation theory explain how we got lucky enough to be living in one of the rare life-compatible “bubble universes,” when it is almost infinitely more likely (under cosmic inflation theory) that our universe would be one of the young “bubble universes” too small for galaxies to form in it? He must resort to a “blind luck” explanation. But the luck needed is greater than the luck needed to have a successful universe without cosmic inflation. So nothing is accomplished, and the “miracle” of our existence is not made any less miraculous. In fact, the cosmic inflation theory seems to make our existence even more miraculous. How can such a result be described as scientific progress?

Cosmic Inflation: A “Cash Cow” for Lazy Cosmologists?

If the case for cosmic inflation is so weak, why do so many cosmologists support it? One answer can be found in groupthink effects, the tendency of modern cosmologists to travel in a herd because of sociological “go with the crowd” reasons. But another reason is that for decades the cosmic inflation theory has seemingly been an easy “meal ticket” for lazy cosmologists.

Producing a new paper on cosmic inflation is a cinch for a modern cosmologist. He merely has to juggle some astrophysical numbers (perhaps adding some minor new speculative tweak) and do the same type of calculations done by many earlier cosmologists. Since the cosmic inflation theory was introduced, cosmologists have published thousands of new papers discussing different flavors of the theory. A large fraction of this work has been funded by university grants or federal research grants. So for a cosmologist who is not particularly innovative, the cosmic inflation theory is a wonderful “cash cow.” Think of how easy it is: just produce “yet another cosmic inflation paper” (something like “cosmic inflation paper number 5,678”) without any real originality, and with zero risk that anyone will ever prove you wrong; and let the taxpayers or your university foot the bill. I'm reminded of the phrase in that Gershwin song: nice work if you can get it.

Given the existence of this convenient “cash cow” that pays well for easy speculative work, cosmologists are reluctant to bite the hand that feeds them, and admit how weak the cosmic inflation theory is. That would be like ripping up their meal ticket.

Conclusion

There seems to be good evidence for the Big Bang and the expansion of the universe, so we should keep believing in such theories. There is no good evidence for the theory of cosmic inflation (the theory of exponential expansion in the universe's first second), and it should be dumped, in the sense of being relegated to a mere possibility rather than asserted as a likelihood. Scientists Ijjas, Steinhardt, and Loeb recently wrote a paper giving some powerful objections to the cosmic inflation theory.

Rather than embracing a theory that claims to explain only one case of cosmic fine-tuning (when there are many such cases to explain), we should look for a more general explanation. Rather than embracing a theory that claims to explain only one case of cosmic uniformity (when there are many such cases to explain), we should also look for a more general explanation. If scientists cannot think of such a more general explanation, they should simply say that they do not understand the explanation for the flatness problem and the horizon problem that originally motivated the cosmic inflation theory. It is an intellectual sin to claim to understand a cosmic mystery that you do not really understand. For 1000 years, astronomers embraced the Ptolemaic theory, and claimed to understand why the solar system behaves as it does, before they really understood that mystery. There's a lesson to be learned from such a long mistake: don't claim to understand a cosmic mystery based on some weak theory. Much better to simply candidly say: I don't understand this cosmic mystery.

Sunday, September 21, 2014

New Study Busts Ghost Stereotypes

Ghost sightings have been reported many times throughout history. Many thought that once enough people were carefully educated in modern materialist thinking, ghost sightings would gradually disappear. But that has not happened. There seem to be as many ghost sightings now as in the past.

I can imagine two people discussing this:

Bob: What is with these silly people still claiming to see ghosts? Haven't the scientists told everyone there's no such thing as ghosts?

John: I think the problem is: the scientists forgot to tell that to the ghosts.

Most of us have stereotypical ideas about ghost sightings. Such stereotypes may include the idea that when you think you see a ghost, it is usually a terrifying experience. Another stereotypical idea is that a typical sighting is just a case of one undereducated person hearing a funny sound or seeing something unusual, and then jumping to the conclusion that the strange sight or sound was a ghost. A nonbeliever in ghosts may reassure himself that alleged ghost sightings are just hallucinations by a single person, or cases of some unsophisticated rube jumping to a conclusion after seeing or hearing something rather unusual. 

ghost story
 Pulp fiction promoting a ghost stereotype

But a new study challenges such stereotypes. The study (The Spectrum of Specters: Making Sense of Ghostly Encounters) was done after interviewing 39 people who claimed to have encountered ghosts. One surprising finding was that 6 of the people interviewed were professors. That doesn't exactly fit the stereotype that people who see ghosts are intellectually unsophisticated.

Another way in which the study busts stereotypes is by finding that 62% of the survey respondents said they observed ghosts along with a friend, coworker, or family member. This challenges the stereotype encouraged by skeptics, that a ghost sighting is typically just a hallucination by a single person. Of course, with their ever-fertile creativity at explaining away things, skeptics will simply argue that such cases are examples of “mass hysteria” or “hallucination infection,” or some such thing.

Another stereotype challenged by the study is that ghosts are mainly seen in haunted houses or spooky places, a stereotype advanced by paranormal TV shows in which people investigate ghosts in places like graveyards or abandoned prisons or mental institutions. But the study found that 64% of the participants encountered ghosts “during mundane or normal times in their lives.”

The study also concluded that “nearly all of our participants identified either a positive or nonthreatening encounter with a ghost.” This busts the “terror of ghost encounters” story line pushed by some cable TV shows. In fact, many people who report ghost encounters say they had a very peaceful experience. One person has explained such a discrepancy this way: “Peace doesn't sell; terror sells.” (I can't remember exactly who said that.) This finding should actually come as no surprise to watchers of the long-running show Celebrity Ghost Stories, on which celebrities often report very peaceful and gentle encounters with ghosts, particularly apparitions of recently departed relatives.

In fact, if you examine the typical account described as a “terrifying ghost encounter,” you will find an encounter in which there is no real indication of ill will on the part of the ghost. In the typical such case, someone will see an apparition and be frightened, but the fear comes from the human's own fear of the unknown, not from any real sign of ill will from the apparition.

It seems, therefore, that ghosts may have got a bum rap. So if you ever see a ghost, it may be appropriate to reach out your hand to give a nice firm handshake. Oops – that won't work.

Friday, September 19, 2014

Another Origin of Life “Progress Mirage”

The phrase “you don't know jack” is sometimes used to mean “you don't know anything.” The phrase is a shortening of a scatological phrase that is one word longer. It may be a slight exaggeration to say, “We don't know jack about the origin of life,” but such a statement would not be very far from the truth.

Now some readers will no doubt think: the origin of life – didn't Darwin figure that out? No, all that Darwin said about the origin of life in The Origin of Species was this one statement: “Probably all the organic beings which have ever lived on this earth have descended from some primordial form, into which life was first breathed.”

Darwin did suggest in a private letter an idea for the origin of life: “But if (and Oh! What a big if!) we could conceive in some warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity etc., present, that a protein compound was chemically formed ready to undergo still more complex changes.”

But this doesn't exactly work as an explanation for life's origin. Proteins are extremely complicated molecules that are biologically assembled using the instructions stored in DNA, instructions that consist of recipes for how to assemble proteins from smaller units called amino acids. For this to work, you also need to have the genetic code, which is the “language” used by the DNA to express its instructions. Without something like DNA and something like the genetic code, there is no chance that proteins will randomly form from smaller components.
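To give a rough feel for what it means to call the genetic code a “language,” here is a minimal, purely illustrative sketch in Python (a toy fragment of the standard codon table; it is not a model of any origin-of-life chemistry, and the real cellular machinery is vastly more complicated):

```python
# Toy illustration of the genetic code: a few entries from the standard
# codon table, mapping three-letter DNA "words" (codons) to amino acids.
# The real table has 64 codons; this fragment is only for illustration.
CODON_TABLE = {
    "ATG": "Methionine",    # also serves as the usual "start" signal
    "TTT": "Phenylalanine",
    "GGC": "Glycine",
    "GCA": "Alanine",
    "TGC": "Cysteine",
    "TAA": "STOP",
}

def translate(dna: str) -> list:
    """Read a DNA string three letters at a time and look up each codon."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        codon = dna[i:i + 3]
        amino_acid = CODON_TABLE.get(codon, "unknown")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCGCATAA"))
# ['Methionine', 'Phenylalanine', 'Glycine', 'Alanine']
```

The point of the toy example is that a protein chain only gets built because both the instructions (the DNA sequence) and the code for reading them (the table) already exist; neither one by itself produces the protein.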

This is why the most famous experiment on the origin of life (the 1950's Miller experiment) was only a small step towards unraveling that mystery. Miller put some gases in a glass container, and zapped the gases with electricity. Some amino acids were formed in the water at the bottom of the apparatus. It was a little like producing some letters, without actually showing how the letters can form into paragraphs and books.

Miller Experiment
The Miller Experiment


In the book Quantum Aspects of Life, scientists Jim Al-Khalili and Johnjoe McFadden (a molecular geneticist) say this about the Miller experiment:

When the work was published, there was widespread anticipation that it would not be long before simple life forms were crawling out of origin of life experiments. But it did not happen. Why not? The answer is that the situation has become more complicated. For a start, the atmosphere of the early Earth is no longer thought to have been reducing. The present guess is that it was probably at best redox neutral, dominated by compounds like carbon dioxide and nitrogen. Under these conditions, it is much harder to form biomolecules like amino acids. Another problem is that while amino acids were made, no proteins (polymers of amino acids) were synthesized...A fourth problem was that many essential biomolecules such as nucleic acids were not formed in the Miller-Urey experiments and have since proven to be exceedingly difficult to form in any laboratory based primordial soup experiment.

The latter statement is actually an understatement, because not only have no laboratory experiments (accurately simulating early Earth conditions) produced nucleic acids (such as DNA and RNA) – such experiments have not even produced all the components of nucleic acids (only some of them).

So it seems like the famous Miller experiment was something of a “progress mirage,” a term we may use for some scientific finding that initially seems like some great leap forward, but which really doesn't move the ball very far towards the goal line. Such progress mirages are common in the history of science.

Now we may have a new example of an origin of life “progress mirage.” A new study has been described by Scientific American with this headline: “New Steps Shown Toward Creation of Life by Electric Charge.” Wow, sounds like scientists are making progress on that “origin of life” thing, right? But maybe not, because when you read to the end of the article you get a nasty little letdown. A scientist analyzing the new study says, “One criticism is that the authors chose to use a somewhat reduced or hydrogen-rich mixture in their study, whereas the atmosphere on early Earth is thought to have been carbon dioxide rich, which could entail very different chemistry in the presence of an electric field.”

It sounds like the “origin of life” investigators may be up to their old tricks again. We might call it “the Miller trick.” Having trouble getting promising results using the gases believed to exist on the early Earth? Then “stack the deck” a little bit by using some different atmosphere that makes it easier to get a positive result.

Another “progress mirage,” it would seem. The unexplained origin of life billions of years ago (defying seemingly astronomical odds) remains a major thorn in the side of anyone claiming that blind chance explains our current existence. As Al-Khalili and McFadden put it, “The simplest self-replicating organisms alive today are far from simple and unlikely to have formed spontaneously in the primordial oceans.” While recognizing the value of Darwinian contributions, and acknowledging the reality of a very old planet, we can search for deeper principles to help explain this origin mystery.

Tuesday, September 16, 2014

Recipe for a Tolkienesque “Hobbit” Future

I just saw The Hobbit: The Desolation of Smaug on cable TV. It's the latest in the series of movies inspired by J. R. R. Tolkien's epic trilogy The Lord of the Rings. The Tolkien “franchise” is apparently alive and well, with the expected elements such as dwarfish hobbits, giants, dragons, wizards, and a magic ring.

If I recall correctly, the “Lord of the Rings” stories are supposed to be set in a distant human past, long before history began. But is there any way that something like the Tolkien fictional world might actually be...our future? What kind of strange future events might cause the world to end up in some state rather like the “Middle Earth” depicted in the “Lord of the Rings” series?

I can imagine some ways that might come to pass. One contributing factor could be some kind of genetic bifurcation or fission of the human species. According to a Huffington Post article, “Researchers found that the shorter a person was, the more likely they were to have a long life.” Other studies show that tall people earn more money. So it might be that one day, gene splicers offer parents a choice: you can have regular children; you can have short children that live longer; or you can have tall children that earn more money. A few decades after parents have such a choice, we may see the human race splitting up into different sub-species. There may be a sub-species of humans who are very small, and another sub-species of humans that are very tall. It could then end up being a little like “The Lord of the Rings,” with its division of dwarfish hobbits, regular humans, and giants. Or a nuclear war might produce mutations that cause the human race to split up into different sub-species. Need some cute little pointed ears? Some mutations will get you that.

elf

But in order to have anything like a Tolkienesque “Hobbit” future, you also need to get rid of most of the trappings of modern civilization. It's okay to still have cute little Hobbit houses, and some huge imposing stone buildings, but we would have to lose most of the superhighways, the malls, and the burger joints. But there are several ways that might happen. One possibility is global energy collapse. If we ever pretty much run out of oil (as some Peak Oil theorists suggest), we might see a social breakdown that might cause our car-based culture to collapse in a tailspin. Another possibility is some kind of electromagnetic pulse that wipes out the electrical grid. That could come from a big solar flare, or from an enemy detonating a nuclear bomb way up in the atmosphere. Or there might be a nuclear war that wipes out most of humanity.

Of course, after such a collapse it would probably be a good long time before things got really Tolkienesque, and people went back to riding on horses, using swords, and living in quaint little villages. But before all that long, you might well get an American culture a lot more similar to the way things were when the country was colonized by Europeans.

But what about the dragons and strange monsters that are a mainstay of the “Lord of the Rings” franchise? It's easy to imagine how that might arise. We simply need to imagine some genetic experimentation getting out of control. Imagine if a few decades from now biologists started to play around with making new species by gene-splicing. They might be right in the middle of that, when boom, society might collapse because of an energy crisis, an electromagnetic pulse, or a nuclear war. The weird creatures produced by the geneticists might then somehow get released into the world, and start reproducing. The world might then be overrun by various assorted species of monsters, some of which could even be like dragons (although it's hard to imagine any scenario by which you might end up with fire-breathing dragons). A nuclear war might also produce mutations that would lead to strange new species, some of which could be monstrous.

And why do we have to imagine that the “monsters” are all biological? We can imagine some types of runaway self-reproducing robots spreading across the land, with no more government around to stop them. Such robots might function just like various monsters out of the pages of Tolkien. What's the big difference between being threatened by giant spiders and being threatened by giant eight-legged robots that walk around like spiders?

You've got to admit, I'm starting to put together here an idea that some aspiring screenwriter might make into a great script. When a screenwriter is pitching a script, he always likes to have an “elevator pitch” he can use to sum up the idea in 20 seconds – for example, “My script is Titanic in outer space.” So here we have a nice potential elevator pitch: “My script is Lord of the Rings in a post-apocalyptic future.”

But what about the wizards that are a mainstay of the “Lord of the Rings” franchise? You might say: there's no possible way to get that in a human future. But that's not true. Let us simply remember Arthur C. Clarke's famous statement that any sufficiently advanced technology is indistinguishable from magic. A person who applies advanced technology can easily seem like a wizard, as long as his observers have no idea what the technology is or how it is produced.

We can imagine a Tolkienesque future in which 99% of the human race knows nothing about science or technology. But there may be a handful of people who still know how to apply technology. Such rare technologists may start calling themselves magicians or wonder workers, to instill awe in the average human. They may start wearing robes and those big droopy wizard hats. To the average person, they would seem just exactly like magical wizards. But they would know they were not using magic, but simply using science and technology.

So it is all too possible that we might end up with a Tolkienesque “Hobbit” future with the aroma of “Middle Earth,” in which strange monsters (electronic or biological) roam the land, in which there are different flavors of the human race with different heights, in which people with swords ride around on horseback between quaint little villages, and in which astonishing wizards work wonders that seem like magic to almost everyone. But the golden magic ring? Forget about it; I can't think of any way to fit that into a possible future.

Saturday, September 13, 2014

It's Hard to Explain “Radical Brain Plasticity” Under Our Current Paradigm

Neurologists think they have it figured out pretty well: your mind is purely a by-product of your brain; consciousness is like light produced by a light bulb, and the light bulb is the brain itself; particular parts of the brain are like particular parts of a computer; if one of those parts fails or is missing, your mind is crippled just as if someone yanked out some of the chips inside a computer. 

The problem is that there is a good deal of evidence against such thinking. I've discussed some of these items before, but let's look at some new items that have been discussed in the news.

One interesting case recently reported was that of a woman who has no cerebellum. The cerebellum is known as the “little brain,” and is located at the lower back of the brain, beneath the cerebral hemispheres. But in terms of the percentage of brain cells found in the cerebellum, it is misleading to say the cerebellum is the “little brain.” According to this scientific paper, the most recent estimates are that there are about 22 billion neurons in the cerebrum (the outer part of the brain), and 101 billion neurons in the cerebellum. So as the cerebellum has most of the brain's neurons, we might expect that this woman with no cerebellum would have been completely dysfunctional.

But actually, it turns out that the woman without a cerebellum merely suffered from mild mental retardation. She was able to walk and talk, and had even got married and had a daughter. How could that be: losing roughly 80% of your brain neurons (about 101 billion out of a total of roughly 123 billion) produces only mild mental retardation?

Another interesting case recently reported is that of an 88-year-old man (identified as H.W.) who scored very well on a test of mental functioning, getting the maximum possible score of 30. But it was found that the man had no corpus callosum. The corpus callosum is the main part of the brain that links the two brain hemispheres. As an article reports:

Given the importance of the callosum for connecting the bicameral brain, you’d think this would have had profound neuropsychological consequences for H.W. In fact, a detailed clinical interview revealed that he’d led a normal, independent life – first in the military and later as a flower delivery man. Until recently, if H.W.’s testimony is to be believed, he appeared to have suffered no significant psychological or neurological effects of his unusual brain...Brescian and her colleagues conducted comprehensive neuropsych tests on H.W. and on most he excelled or performed normally. This included IQ tests, abstract reasoning, naming tests, visual scanning, motor planning, visual attention and auditory perception.

The same article refers us to another case of a boy named E.B. who had surgery to remove almost the entire left half of his brain. But he underwent rehabilitation, and “EB's language fluency improved remarkably over the ensuing two to three years until no language problems at all were reported at school or in the family home.” Now how is that possible? If the light (consciousness and intelligence) is all coming from the 100-watt light bulb (the brain), how do you get almost 100 watts of light when you slice the light bulb in half?

Scientists have a kind of lame phrase to try to describe such things. They call it brain plasticity. Brain plasticity is supposed to roughly mean: if one part of the brain goes down, some other part will take over its work. It's basically a non-explanation, rather like saying: the brain can keep working pretty well even if you yank out most of its neurons. I guess now we're forced to accept a doctrine of “radical brain plasticity.”

But how can we explain such a thing? Through Darwinian natural selection, perhaps? I can imagine the explanation:

Through the blind process of natural selection, humans slowly developed radical brain plasticity, to help them survive and flourish during all those times about 30,000 years ago when many people were losing half of their brains because of brain surgery.

Oops, that doesn't quite work, does it? That's because about 30,000 years ago, when humans were shuffling around in caves, there was no brain surgery. It seems hard to explain the origin of “radical brain plasticity” by appealing to natural selection. From the standpoint of survival of the fittest, nature shouldn't care about helping out the occasional person with a brain defect such as being born without a cerebellum. If Darwinian evolution has 100 healthy brains and 1 defective brain, in theory its attitude should just be: I don't care about the deficient brain – let the fittest survive and reproduce. That's the gist of natural selection – let the weak be damned, and let the strong flourish. So how exactly can we account for the origin of this “radical brain plasticity”?

It's very hard to explain such a thing under conventional ideas that the brain is the light bulb and consciousness is the light. Drastically different ideas may be needed, including some new brain/consciousness model that may be compatible with phenomena such as near-death experiences, a phenomenon quite incompatible with the brain/light-bulb model of consciousness. 

 

Monday, September 8, 2014

The Questionable Task of Trying to Do Science With Computer Simulations

Scientific findings have usually been produced in two different ways. The first way is observation, such as when a biologist classifies some new deep-sea fish he photographed in an underwater expedition, or when an astronomer uses a telescope to get some new radiation readings from a distant galaxy. The second way is physical experimentation, such as when a chemist tries combining two chemicals, or when physicists try smashing together particles in a particle accelerator. But nowadays more and more scientists are trying to do science in a different way: by running computer simulations.

The idea is that a scientist can do a computer experiment rather similar to a physical experiment. A scientist might formulate a hypothesis, and then use (or write) some computer program that simulates physical reality. The scientist might get some results (in the form of computer output data) that can be used to test the validity of his hypothesis. Nowadays many millions of dollars are spent on these computer simulations, much of it taxpayer money.
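To make the idea concrete, here is a minimal sketch of what such a “computer experiment” looks like (purely illustrative, with made-up names and numbers): a hypothesis is stated, a toy simulation is run, and the outputs are compared against the hypothesis's prediction.

import random

# Toy "computer experiment" (illustrative only). Hypothesis: the mean squared
# displacement of a simple random walk grows roughly in proportion to the
# number of steps taken.
def simulate_walk(num_steps):
    position = 0
    for _ in range(num_steps):
        position += random.choice([-1, 1])
    return position

def mean_squared_displacement(num_steps, trials=2000):
    return sum(simulate_walk(num_steps) ** 2 for _ in range(trials)) / trials

for n in (100, 200, 400):
    print(f"steps={n}: simulated MSD = {mean_squared_displacement(n):.0f}, hypothesis predicts about {n}")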

But while this approach sounds reasonable enough in theory, there are some reasons to be skeptical about it. The first reason has to do with the computer programs that are used for doing these simulations. Such programs are very often not written according to industry standards, but are instead “cowboy-coded” affairs written mainly by one scientist who is kind of moonlighting as a programmer.

The difference between software written according to software industry “best practices” and “cowboy-coded” software is discussed here. Software written according to industry “best practices” typically involves a team of programmers and a team of quality assurance experts. “Cowboy-coded” software, to the contrary, is typically written mainly by a single programmer who often takes a “quick and dirty” approach, producing something with lots of details that only he understands. Once a computer program has been written in such a way, it is often what is called a “black box,” something that only the original programmer can understand fully. If the programmer has not documented the code very well, and the subject matter is very complex, it is often the case that even the original programmer no longer fully understands what is going on inside the program, given the passage of a few years.

Many of the programs used in scientific computer simulations are written by scientists who decided to take up programming. The problem with that is that the art of writing bulletproof, reliable software often takes many years to master – years of full-time software development, often involving 60-hour or 70-hour weeks. There is no reason to be optimistic that such a subtle art would have been mastered by a scientist who does some part-time work writing computer programs. I once downloaded a computer program being used nowadays as part of astronomical computer simulations. It was written in an amateurish style that violated several basic rules of writing reliable software. I can only imagine how many other scientific computer simulations rest on similar code written by scientists dabbling in programming.

This is not merely some purist objection. Even when software code is written according to industry “best practices” standards, such code almost always has errors. Code that is not written according to such standards is likely to have many errors, and the result may be that the software simply gives the wrong answers in its outputs. Faulty software cannot be a basis for reliable scientific conclusions.

Another reason for skepticism is that every computer program designed to simulate physical reality requires the user to select a variety of inputs before the simulation is run. It is never a simple matter of “just run the program and see what the results are.” Instead, it almost always works like this:

(1) Choose between 5 and 40 input arguments or model assumptions, in each case choosing a particular number from some range of possible values.
(2) Run the computer simulation.
(3) Interpret the results of the simulation outputs.

The problem is that Step 1 here gives abundant opportunities for experimental bias. A scientist who wants to run a computer simulation supporting hypothesis X may make all kinds of choices in Step 1 that he might not have made if he were not interested in supporting hypothesis X. Whenever you hear about a scientist running a computer simulation, remember: there are almost always billions of different ways to run any particular computer simulation. So when a scientist talks about “the results of running the simulation,” what he really should be saying is “the results of running the simulation using my set of input choices, which is only one of billions of possible sets of input choices.”
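As a rough illustration of how quickly the input space explodes, consider a hypothetical simulation with only a dozen adjustable parameters (the parameter names below are invented for the sake of the example):

# Hypothetical illustration: a simulation with 12 tunable inputs, each with
# 10 plausible values, already allows 10**12 distinct ways to run "the" simulation.
plausible_values_per_parameter = 10
number_of_parameters = 12
print(f"{plausible_values_per_parameter ** number_of_parameters:,} possible input configurations")

# A published result typically reflects just one point in that space, e.g.:
chosen_inputs = {
    "initial_gas_density": 0.3,
    "star_formation_efficiency": 0.02,
    "feedback_strength": 1.5,
    # ... and so on for the remaining parameters
}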

It seems almost as if many scientists who run these computer experiments try to hide the fact that they ran a program allowing billions of possible input choices, of which they picked just one particular set. The main way they seem to do this is by not discussing the input possibilities the program allows, and by not even listing which input choices they made. This is, of course, a pathetic way of doing science. I saw an example of this in a recent scientific paper in which a scientist did some computer simulation using a publicly available program he named. The scientist did not bother to specify in his paper what input parameters he chose for that program. The online documentation for the program makes clear that it takes many different input parameters.
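One simple remedy would be to record the complete set of input choices alongside the simulation outputs, so that readers and referees can see exactly which point in the input space was used. A minimal sketch (with invented parameter names):

import json

# Minimal sketch: save every input choice in a small sidecar file next to the
# simulation outputs, so the exact run can be reported and reproduced.
chosen_inputs = {
    "initial_gas_density": 0.3,
    "star_formation_efficiency": 0.02,
    "feedback_strength": 1.5,
    "random_seed": 42,
}

with open("simulation_run_inputs.json", "w") as f:
    json.dump(chosen_inputs, f, indent=2)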

Another reason why scientific computer simulations are doubtful is that there is ample opportunity for cherry-picking and biased interpretations when interpreting the results of a simulation. Very complicated computer simulations often produce ridiculous amounts of output data. A scientist might be able to make 10,000 different output graphs using that data, and he is free to choose whatever output graph will most look like a verification of whatever hypothesis he is trying to prove. He may ignore other possible output graphs that may look incompatible with such a hypothesis. 

Output of a typical computer cosmology simulation

Yet another reason why scientific computer simulations are doubtful is that it is often impractical or unrealistic to try to simulate a complicated physical reality in a computer program. If the thing being simulated is fairly simple, such as a particular set of chemical reactions, we might have a fairly high degree of confidence that the programmer has managed to capture physical reality pretty well in his computer program. But the more complicated the physical reality, and the greater the number of particles and forces involved, the less confidence we should have that the insanely complicated physical reality has been adequately captured in mere computer code. For example, we should be very skeptical that any computer program in existence is able to simulate very accurately things such as the dynamics or origin of galaxies.

Still another reason why scientific computer simulations are doubtful is that those who run them typically fail to follow the “blinding” methodology used in other sciences. When testing new drugs, scientists follow a “double blind” methodology. For example, the person dispensing a drug to patients may be kept in the dark as to whether he is dispensing the real drug or a “sugar pill” placebo. Then the person collecting the results data or interpreting it may also be kept in the dark as to whether he is dealing with the real drug or a placebo. Scientists running a scientific computer simulation could follow similar policies, but they almost never do. For example, when running a scientific computer simulation, the choice of input parameters and the interpretation of the outputs could be made by some scientist who does not know which hypothesis is being tested, or who has no stake in whether the results confirm that hypothesis. But it is rare for any such “blinding” techniques to be followed when doing scientific computer simulations.
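Here is one way such a blinding step could look in practice – a minimal sketch of a hypothetical protocol, not anything drawn from current practice: one person generates the parameter sets and hides them behind opaque run labels, and whoever interprets the outputs sees only those labels until the analysis is written down.

import random

# Minimal sketch of a "blinded" simulation protocol (hypothetical):
# person A prepares the parameter sets and seals the mapping from opaque run
# labels to parameters; person B interprets the outputs knowing only the labels.
def blind_parameter_sets(parameter_sets):
    labels = [f"run_{i:03d}" for i in range(len(parameter_sets))]
    random.shuffle(labels)
    sealed_key = dict(zip(labels, parameter_sets))  # kept by person A until the end
    return labels, sealed_key

parameter_sets = [
    {"feedback_strength": 0.5},
    {"feedback_strength": 1.0},
    {"feedback_strength": 2.0},
]

labels, sealed_key = blind_parameter_sets(parameter_sets)
print("Person B analyzes outputs labeled only:", labels)
# The sealed_key is opened only after person B's interpretation is finalized.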

But despite their dubious validity, scientific computer simulations will continue to consume a great deal of scientists' time, while chewing up many a tax dollar. One reason is that they can be a lot of fun to create and run. Imagine if you create a computer program simulating the origin and evolution of the entire universe. The scientific worth of the program may be very questionable, but scientists may turn a blind eye to that when they are having so much fun playing God on their supercomputers.

Postscript: I am not suggesting here that science-related computer simulations are entirely worthless. I am merely suggesting that such simulations are a “distant second” to science produced through observations and physical experiments. Also, the points made here do not undermine the case for global warming, as that case does not rely mainly on computer simulations, but on many direct observations, such as observations of increased carbon dioxide levels and of rising temperatures.

Friday, September 5, 2014

The First Silicon President: A Science Fiction Story

In the middle of the twenty-first century, the US population grew tired of the endless mistakes of US Presidents. They were tired of the endless budget deficits, the rising national debt, growing unemployment, the endless treasury-depleting foreign misadventures, and the rising tax burdens. So finally the Republican Party decided to take advantage of public disillusionment with the performance of flesh-and-blood commanders-in-chief. The Republican Party nominated a robot to run for President.

The robot was named Johnny Truth, and at first its prospects looked very good. In October, national opinion polls showed that the race was neck and neck, with 48% favoring the re-election of the sitting President, and 49% favoring the election of the slick steel-and-silicon candidate Johnny Truth. But then the presidential debates came, and Johnny Truth stumbled. His answers somehow seemed too wooden, too mechanical. With a little bit of humor and warmth that was hard for a robot to match, the US President won the debates, and later cruised to an easy win in the election. It helped that he had a clever TV commercial saying, “When the chips are down, you've got to rely on a human,” with a visual showing a robot being repaired.

The Republican Party went back to the drawing board. Programmers and political operatives were brought in to a big strategy meeting.

“That's the last time we nominate a robot to run for President,” said RNC chairman Will Dorrit.

“No, we had the right approach,” said software whiz Rod Tyler. “We've just got to add new features to the programming. The President won the debates by coming up with a little warmth and humor, which our robot lacked. People voted for the President because they felt he was more like them. But with a few months of work, we can fix that. We can program warmth and humor into a new robot.”

“So what do you recommend, that we reprogram Johnny Truth?” asked Dorrit.

“No, he's dead meat in the minds of the voters,” said Tyler. “We need a whole new robot.”

Within a year the new robot was created. They called him Abe Gold. Abe had a way with words. Voters would swear he was just like a real human. He had the personal touch of Bill Clinton, and was wittier than John Kennedy. He cruised through the October presidential debates without any trouble, and won the November election in a narrow victory. 

robot president
 

Shortly after Abe Gold's inauguration as the first robotic president of the United States, a young programmer named Joe Tucker made a confession to his long-time buddy Paul. The two were chatting in Joe's apartment.

“Paul, you wanna know a hell of a secret?” asked Joe. “The fate of Abe Gold is in my sweet little hands.”

“What are you talking about?” asked Paul.

“I worked on the programming code used by the silicon mind of President Abe Gold,” said Joe. “I secretly sneaked in some 'Easter Eggs.' Do you know what programmers mean by an Easter Egg?”

“Nope,” said Paul.

“An Easter Egg is a secret piece of programming code hidden within some much larger base of code,” explained Joe. “They first used Easter Eggs to display messages announcing who worked on a computer program – they were kind of like the credit sequences you see at the end of movies. But an Easter Egg can be anything you want it to be. An Easter Egg can be any command you can think up. I stuffed some real interesting Easter Eggs into the programming code that controls President Abe Gold.”

“So how do you activate these Easter Eggs?” asked Paul.

“By visual signals and auditory signals,” explained Joe. “There's one bit of Easter Egg code that I sneaked in that is really funny. Imagine if I ever go to some speech of the President. If I simply hold up a big sign showing a purple circle within an orange triangle, as soon as the President sees that, he'll start yelling out the most obscene messages you ever heard, right then and there. That's the visual signal that activates the Easter Egg code I sneaked into the software in the President's head.”

Joe and Paul laughed hysterically, imagining what it would be like if the President started swearing like a drunken sailor, right in front of some big crowd. After Paul left, Joe remembered the most important Easter Egg code he had hidden within the silicon President's software. It was a secret subroutine that would cause the President to launch a nuclear attack that would lead to a global nuclear holocaust. To activate the code, Joe would merely need to go to one of the President's speeches, and shout out in a loud voice the phrase, “Doomsday Boomsday.”

The next day was a nightmare for Joe. There was a meeting at work, and he was told by management that he would have to spend the next four weeks documenting his code. Now, you can push a programmer pretty far, and he won't complain. You can make him work 90-hour weeks when crunch time comes, and “release day” is near. You can “feature creep” him half-to-death by making him write twenty new program features in three days. You can give him some crummy half-baked sketch of a plan, and ask him to flesh it out into a working, usable program, kind of on a wing and a prayer. A programmer will happily put up with all of those things. But the one thing that will always make a programmer want to kill himself is if you simply ask him to document his code.

After the meeting, Joe called his wife on the phone.

“So you're going to the President's speech?” said Joe. “Great, I'll watch it on the company TV at the same time. Now I think it would be fun if you gave me a little 'shout out,' so I can hear your voice on television. So at some point during the President's speech, I want you to shout out real loud the phrase 'Doomsday Boomsday.' ”