
Our future, our universe, and other weighty topics


Tuesday, April 29, 2014

"Feeling the Future" Study Replicated, as Skeptics Fume

Several years ago Cornell professor emeritus Daryl Bem published the paper Feeling the Future in a peer-reviewed scientific journal. The paper reported the results of controlled experiments which seemed to suggest the existence of precognition, the ability of humans to detect the future in a paranormal way. There were voices of outrage that an Ivy League university could have been involved with such a finding, which was denounced as pseudoscience. In the following months, skeptics trumpeted one or two unsuccessful attempts to replicate the experiments.

A few weeks ago, however, Bem and others published a meta-analysis looking at 90 different experiments on precognition done in 33 laboratories. They found that Bem's sensational experiments had been well replicated. There are two ways of doing Bem's experiments, a “fast protocol” and a “slow protocol,” and when the fast protocol is used, running things just as Bem did, the effect reproduces well. The paper found that to explain the results as a coincidence, one would have to believe in a coincidence with a chance of about 1 in 10 billion.
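
To get a feel for how a meta-analysis distills many separate experiments into a single chance probability, here is a minimal sketch of Stouffer's Z method, one standard way of pooling study results. The z-scores below are made-up illustrations, not the actual values from the Bem meta-analysis:

import math

# Hypothetical z-scores from individual experiments (illustrative only).
z_scores = [1.2, 0.8, 2.1, -0.3, 1.5, 0.9, 1.8, 0.4]

# Stouffer's method: the combined z is the sum of the individual z-scores
# divided by the square root of the number of studies.
combined_z = sum(z_scores) / math.sqrt(len(z_scores))

# One-tailed chance probability from the standard normal distribution.
p_value = 0.5 * math.erfc(combined_z / math.sqrt(2))

print(f"Combined z = {combined_z:.2f}, chance probability = {p_value:.2e}")

Even a set of individually modest results can combine into a very small chance probability, which is why a pooled analysis of 90 experiments can report a figure like 1 in 10 billion.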

Bem's original “Feeling the Future” study was not at all a unique bolt-from-the-blue, but merely something in the same vein as quite a few previous studies (and many human experiences) indicating that something like precognition can occur. A particularly astonishing case is related here. Other similar experiments have shown a phenomenon called presentiment, an anomalous unexplained tendency of the human body to react to a stimulus before the stimulus has been presented. Here is a link to a meta-analysis of such experiments, showing an effect extremely unlikely to have occurred by chance. 

I personally don't like the idea of precognition, and prefer to believe that it doesn't exist, simply because it is easier to understand a universe in which time behaves like a roll of film in a movie can, with a nice clear separation between each frame in the movie and the frame that came before it. But I don't let my conceptual preferences guide my assumptions about whether precognition is likely or possible. 

How have psi skeptics reacted to the latest meta-analysis showing that Bem's research has been well replicated? A typical thoughtless knee-jerk reaction is found in this post.

The post exhibits the following characteristics:
  1. There is no attempt at all to address the substance of the meta-analysis paper. There is no mention of any specific flaws that the writer has discovered in the research.
  2. The writer resorts to name-calling, referring to the paper as pseudoscience. He does not say anything to back up such a claim.
  3. The writer very emotionally employs the technique of “character assassination by comparison,” the technique of trying to debunk someone by comparing him to people of low repute. Daryl Bem, a distinguished Ivy League professor emeritus, is indirectly compared to “climate change deniers, young-earth creationists, flat earthers, reptilianists, scientific racists, people who believe that women who are raped won't get pregnant, and Holocaust Deniers.”
  4. The writer claims that “the methodological unsoundness of Daryl Bem's work has been amply demonstrated,” but gives no statement, link, or reference to back up that claim.
  5. The writer does not even provide a link to the meta-analysis he is attacking (apparently not wanting anyone else to look at the evidence).
There are numerous things that provoke this type of ire from skeptics: reports of near-death experiences, evidence for extra-sensory perception, evidence for precognition, the astonishing power of the placebo effect, deathbed apparitions, ghost sightings, alternative healing methods, astonishing unexplained recoveries from disease or injury, sightings of UFOs, and evidence that the universe seems to be fine-tuned to allow the existence of intelligent observers. Some peer-reviewed papers that discuss some of these topics can be found here. Perhaps the only thing these items have in common is that they seem to provoke the ire of a certain narrow-minded group which wants to act as a kind of thought police, zealously keeping our minds inside a little square with borders they have constructed.

When faced with evidence that conflicts with their cherished worldview, reductionist materialists all too often seem to follow the following general guidelines:
  1. If the evidence is a first-hand account, say that it is “merely anecdotal,” and imply that it should therefore be ignored (even if the same phenomenon has been reported by many different reliable witnesses over very long periods of time).
  2. Accuse the observers of having had hallucinations (even if their accounts are highly ordered and consistent with each other, and even if they have no signs of pathology or relevant drug use).
  3. If the evidence is something that cannot be reproduced in a laboratory, say that the evidence is “not reproducible,” and that it therefore has no merit (despite the fact that numerous important scientific phenomena such as cosmic gamma ray bursts and the Big Bang cannot be reproduced in the laboratory).
  4. If the evidence can actually be reproduced in the laboratory, with repeated successes, claim that the evidence is based on fraud (using basically the same “they're all fakers” technique used by global warming deniers).
  5. Make vague accusations of methodological unsoundness or mathematical errors, usually without substantiating the claims (or back up the claims with tangled Bayesian reasoning that no one will be likely to understand).
  6. Engage in vague name-calling by calling the research pseudoscience, or, more aggressively, call the researcher an enemy of science (even if he is a science enthusiast or has published many scientific papers).
  7. Imply that the researcher is a careless or easily-duped fool, even if he is an ultra-methodical person with a PhD.
  8. Attempt to discredit the findings by linking them with various disreputable superstitious phenomena such as astrology. Try to link the findings with extremist or fringe religious beliefs, even if there is no evidence to support any such association.
  9. Simply say that the finding was observed because of an incredibly improbable coincidence (a claim that can conveniently be made an unlimited number of times, with little chance that anyone will calculate the total microscopic improbability of all the coincidences being imagined).
  10. Flatly state that there is no evidence whatsoever for the phenomenon, even if evidence for it has been carefully and methodically accumulated by numerous researchers for more than a hundred years.
  11. If all else fails, suggest the possibility of a multiverse to explain the evidence (even though this leads to a “multiplicationism” position that is the opposite of the reductionism that is being defended).
These types of techniques may prove successful, but at the cost of constructing a kind of “reality filter” that may cause you to ignore some of the most important things man may observe or discover. 

Postscript: See this link for Honorton and Ferrari's meta-analysis of forced choice precognition experiments done between 1935 and 1987. Examining 309 experiments carried out by 62 investigators, involving 50,000 participants in two million trials, Honorton and Ferrari found an overall effect with a chance probability of about 1 in 1,000,000,000,000,000,000,000,000.

Saturday, April 26, 2014

No, Habitable Planets Are Not Bad News for Humanity

Scientists recently reported the discovery of Kepler 186f, an Earth-sized planet in the habitable zone. In the past few days an opinion piece by Andrew Snyder-Beattie entitled Habitable Planets Are Bad News for Humanity was published. The essay advanced some very quirky reasoning, very similar to that of a much earlier essay published on the web site of Nick Bostrom.

The essay is based on the idea of Fermi's Paradox, and an idea called the Great Filter derived from Fermi's Paradox. Fermi's Paradox is the “where is everybody?” mystery of why we have not yet observed extraterrestrials, even though we live in a galaxy that seems to have billions of planets on which life might have evolved. The concept of the Great Filter is the idea that there is some tendency, process or limitation that tends to prevent planets from producing extraterrestrial civilizations that survive long enough to spread throughout the galaxy.

The very strained reasoning of Bostrom and Snyder-Beattie goes rather like this:
  1. There must be some Great Filter which makes it very unlikely that planets produce civilizations that spread throughout the galaxy – something such as an unlikelihood of life originally appearing, an unlikelihood of intelligence ever appearing, or an unlikelihood of a civilization surviving for long.
  2. Such a Great Filter can either be in our past or our future (for example, if the Great Filter is the unlikelihood of life ever appearing on a planet, then the Great Filter is in our past, and we have already leaped over this hurdle).
  3. If the Great Filter is in our future, we should be very sad, because it will mean our civilization will probably not last very long.
  4. But if the Great Filter is in our past (some hurdle we have already jumped over), then we are in good shape, and our future is bright (the whole galaxy might be ours for the taking).
  5. If we discover life on another planet (or a habitable planet), it is evidence that life commonly evolves in our galaxy, and this shows that the Great Filter must be in our future, and that we won't last very long.
  6. Therefore, it is bad news if we discover any evidence that life commonly evolves in our galaxy.
This reasoning fails to make any sense. The main fallacy in it is the “single factor” fallacy of assuming that there is One Big Reason why a habitable planet would not tend to produce a civilization that would go on to spread throughout the galaxy. An additional fallacy is the assumption that if a typical extraterrestrial civilization does not go on to spread throughout the galaxy, then that tells us something ominous about the lifespan of our civilization.

In fact, there are many factors that might explain why a civilization arising on another planet would not tend to spread throughout the galaxy, and most don't suggest anything about the lifespan of our civilization. The factors include the following:

The slowness and difficulty of interstellar travel. The speed of light is a physical speed barrier, and it takes years for light to travel from one star to the nearest star of the same type. Contrary to what you see in science fiction such as Star Wars and Star Trek, travel between stars is probably very, very slow, even for the most advanced civilization. There are engineering and physics reasons for doubting that any civilization could produce a ship capable of traveling more than about a fifth of the speed of light, making travel from one star to a nearby star a matter of decades or centuries. We have no reason to believe that warp drives or instantaneous “star gates” are likely to be possible (either would require that physics gives us a gigantic gift that we could exploit, and such a gift is probably not waiting for us). 
 
The impracticality of long-distance control. Given the limit of the speed of light, we have no reason to think that any civilization could establish anything like an empire spanning a big fraction of the galaxy. Even if it were to establish interstellar colonies around neighboring stars, the communication lag between such colonies would be many years, and the possibility of enforcing any control would be minimal; the farther away the colony was, the smaller would be the chance of controlling it. In fact, there is every reason to suspect that the maximum radius for any type of interstellar empire is very small, only about 20 to 50 light-years, as I explain in this blog post. This is another reason why the whole “if they existed, they would have spread throughout the galaxy” reasoning is very weak.
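
A couple of back-of-the-envelope numbers make the last two paragraphs concrete. Here is a minimal sketch; the one-fifth-of-light-speed ceiling comes from the discussion above, while the star distances are simply illustrative:

# Travel and communication times over interstellar distances.
SHIP_SPEED_FRACTION_OF_C = 0.2  # assumed engineering ceiling discussed above

destinations_light_years = {
    "Alpha Centauri (nearest star system)": 4.4,
    "a colony 50 light-years away": 50.0,
    "a star 100 light-years away": 100.0,
}

for name, ly in destinations_light_years.items():
    travel_years = ly / SHIP_SPEED_FRACTION_OF_C  # one-way ship travel time
    round_trip_message_years = 2 * ly             # light-speed query and reply
    print(f"{name}: one-way trip ~{travel_years:.0f} years, "
          f"round-trip message ~{round_trip_message_years:.0f} years")

Even a message to a colony 50 light-years away takes a century to answer, which is why anything resembling an interstellar empire would plausibly have a very small maximum radius.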

The unlikelihood of ultra-expansionist extraterrestrials. Most species on our planet (including almost all birds, fish, and insects) are non-territorial, meaning they have no tendency to defend some particular area, and regard it as belonging to them. Those species that are territorial (such as dogs) are virtually never expansionist. It is extremely rare to see any species having some organized tendency to expand its control to a wider and wider area. Even among the human species, ultra-expansionist tendencies are very rare. A few short-lived regimes have been ultra-expansionist (such as the Nazis and the Mongols), but almost all governments have not been highly expansionist. Why, then, do we presume that extraterrestrial civilizations would be ultra-expansionist, and that they would want to spread their control over larger and larger sections of the galaxy?

extraterrestrial being
 
The strong possibility that extraterrestrials might skip astro-engineering and symbol propagation. We imagine extraterrestrials as beings that might turn the galaxy upside down with their engineering projects, or spread signs of their civilization all over the place. But they might have no interest in such activities. Extraterrestrials might have no interest in doing things such as planting their symbols on other planets, in the way we put a flag up on the moon. They may think such “we were here” type of activities are vain and childish. Once a civilization gets godlike powers over matter and energy, they might run wild for a few centuries, doing all kinds of breathtaking engineering projects such as building mile-high buildings or artificial rings around their planets. But after a certain number of centuries of such activity, they might get bored with that type of thing, and go back to a more “low footprint” way of living. In the latter case, it would be relatively unlikely that we would see any signs of them.

Any combination of these factors might help to explain why we do not currently observe extraterrestrials (and in fact, the non-observation of extraterrestrials is debatable, given things such as UFO sightings, fast radio bursts, and the “Wow” signal). So we do not have to make the gloomy assumption that a Great Filter plus habitable planets implies that man's future lifespan is limited.

In short, there is no good reason to assume that a discovery of extraterrestrial life (or habitable planets) tells us anything at all about a future lifespan of our civilization. If you want to get gloomy about man's future prospects, there are much more direct and compelling ways of making that case, rather than the strained reasoning advanced by Snyder-Beattie and Bostrom.

Thursday, April 24, 2014

Fast Radio Bursts Could Be From Extraterrestrial Civilizations

In 2007 astronomers detected a new class of radiation signal from deep space – what are called fast radio bursts. Fast radio bursts are highly energetic but very short-lived bursts of radio energy, typically lasting less than a hundredth of a second. Fewer than twelve of these bursts have been detected. For years, all of the detections came from a single telescope in Australia, but now the Arecibo Observatory in Puerto Rico has also detected such a fast radio burst.

Where are the signals coming from? Based on something called dispersion measures, an astronomer named Dan Thornton estimates that the signals come from between five and ten billion light years away. But other astronomers disagree. One team of astronomers estimates that the signals are coming from nearby stars inside our galaxy. But their reasoning is based on a “5% coincidence” argument that isn't very convincing (considering that 5% coincidences are not very improbable).
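
For readers wondering how a dispersion measure works: free electrons along a signal's path delay lower radio frequencies more than higher ones, and the size of that frequency-dependent delay gives the electron column density (the dispersion measure), which in turn hints at the distance traveled. Here is a minimal sketch of the standard cold-plasma delay formula, with purely illustrative numbers:

# Dispersion delay between two observing frequencies, using the standard
# cold-plasma formula: delay (ms) ~ 4.15 * DM * (f_low**-2 - f_high**-2),
# with the dispersion measure DM in pc/cm^3 and frequencies in GHz.
DM = 700.0    # illustrative dispersion measure, pc/cm^3
f_low = 1.2   # bottom of the observing band, GHz
f_high = 1.5  # top of the observing band, GHz

delay_ms = 4.15 * DM * (f_low**-2 - f_high**-2)
print(f"Arrival delay across the band: about {delay_ms:.0f} ms")

The larger the measured delay, the more electrons the burst has plowed through, and (given assumptions about the intergalactic medium) the farther away the source is estimated to be; that is the kind of reasoning behind Thornton's distance estimate.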

Based on the number of fast radio bursts that have been detected, astronomers have estimated that our planet could be receiving as many as 10,000 of these radio bursts per day. What could be causing the signals? Astronomers don't know. Some astronomers speculate that the fast radio bursts could be caused by various exotic types of stellar events, such as unusual solar flares or two neutron stars colliding with each other.

There is, however, a general problem with such explanations. A scientific paper on the fast radio bursts says this (which I'll “translate” in a moment):

There are no known transients detected at gamma-ray, x-ray or optical wavelengths or gravitational wave triggers that can be temporally associated with any FRBs [fast radio bursts]. In particular there is no known gamma-ray burst (GRB) with a coincident position on a timescale commensurate with previous tentative detections of short-duration radio emission.

Let me clarify this rather opaque comment. The highly energetic freak events imagined as the source of the fast radio bursts would probably have produced other types of radiation such as gamma ray radiation, x-rays, or visible light. But no one has detected a flash of any of these types of radiation with a position in space (and time of origin) matching any of the fast radio bursts. To give an analogy, it's kind of as if you felt the ground shaking, and assumed it was something heavy falling to the ground, but you didn't hear any noise at the same time. That would throw doubt on your explanation.

The same paper discusses various theories to explain the fast radio bursts. The paper mentions the possibility of neutron star mergers (two nearby neutron stars interacting with each other), but it notes that this extremely rare phenomenon would not occur often enough to explain the estimated occurrence rate of the fast radio bursts. The paper also notes the hypothesis of black hole evaporation being the source of the fast radio bursts, but notes that the expected energy from such an event would be much less than the energy coming from a fast radio burst. The paper notes that there is no way to get a fast radio burst merely from a core-collapse supernova event, but says that conceivably if a supernova was next to a neutron star, it might produce a fast radio burst. But a supernova occurs only about once every 50 years in our galaxy, and a supernova very close to a neutron star is very, very rare – probably too rare to explain the phenomenon.

In short, we seem to have no really good astrophysical explanations for the fast radio bursts. Given the fact that short radio bursts have been postulated as one means by which extraterrestrial civilizations could announce their existence, there would seem to be a very real possibility that some or many of these short radio bursts are coming from extraterrestrial civilizations. 

Extraterrestrial antenna
Hypothetical extraterrestrial radio transmitter

I may note that even if the signals are coming from as far away as five billion light years, that does not rule out the possibility that they are artificial signals from extraterrestrial civilizations. The universe is believed to be about 13.7 billion years old. If a radio signal came from five billion light-years away, it would have come from a time when the universe was about 8.7 billion years old. Was there enough time for intelligence to appear by that date? There might well have been. Recent telescopic observations show that when the universe was only a few billion years old, it already had surprisingly mature galaxies. We know of no reason why intelligence could not have arisen in other galaxies between five and seven billion years ago.

I may note the astonishing difference between the way astronomers have reacted to two different cases of unexplained new radiation observations: the fast radio bursts and the b-mode polarization signals detected recently by the BICEP2 team. The two cases are quite similar in some respects. In both cases we have a type of signal observation that might be explained through a relatively mundane explanation, and which might also be explained by imagining something monumental. In the case of the fast radio bursts, the mundane explanations are things like solar flares and star collisions, and the monumental explanation is to imagine deliberate signals from extraterrestrial civilizations. In the case of the b-mode polarization observations, the quite plausible mundane explanations are things like cosmic dust, synchrotron radiation, and gravitational lensing; the monumental explanation is to assume something coming from cosmic inflation in the universe's first second.

In the case of the fast radio bursts, astronomers have reacted with the greatest caution and circumspection. Their accounts merely report the observations, without speculating about any possible monumental explanation. In the case of the b-mode polarization signals reported by the BICEP2 team, astronomers and cosmologists instantly threw caution and circumspection out the window, and enthusiastically jumped the gun by calling the signals proof of cosmic inflation, seemingly before the public had even had time to scrutinize the scientific paper. I suspect that this case will be recorded as one of the great cases of over-enthusiastic gun-jumping hype, similar to the 1990's “life on Mars fossils” announcement that didn't pan out. A few weeks after the BICEP2 announcement, a scientific paper appeared reporting that a certain type of cosmic dust (not considered by the BICEP2 team) could be the source of their observations.

Can we imagine if astronomers had reacted to the fast radio bursts the way they reacted to the findings of the BICEP2 team? In that case they would have announced we had received the smoking gun of alien civilizations.

What is the proper way to consider both of these cases? An intelligent outlook is to say that some interesting signals have been discovered, and to cautiously note the fact that they could possibly be due to an epic, monumental explanation – while at the same time saying that the matter is very much undecided, because the universe has a thousand surprises up its sleeves, because there are almost always a dozen different ways to explain any very distant thing we see in our telescopes, and because our knowledge of the universe is shaky and fragmentary.

Tuesday, April 22, 2014

The Pink Wall May Block the Superintelligence Singularity

Ray Kurzweil has advanced a very popular theory that the Singularity is near – a time when there will be an explosive growth in machine intelligence, one that sees computers and robots becoming far smarter than humans. To back up this theory, Kurzweil and his supporters typically point to graphs showing exponential growth in the power and speed of computer hardware and computer memory.

But in order to have anything like machine superintelligence, we would need to see more than just computer hardware increasing by orders of magnitude (factors such as 100, 1000 or 10,000). We would also need to see computer software making similar strides. If today someone were to create a computer with a million times the speed and memory of the human mind, that computer would not be even as intelligent as a mouse. For such a computer to become even as intelligent as a mouse, we would need strides in computer software on the same scale as the strides in computer hardware.

One problem is that computer software is not advancing at anything like the rate of progress of computer hardware. Computer software is progressing at a relatively slow rate, an issue I discussed in this blog post. Computer software is not progressing at anything like an exponential rate. The basic process of software development today is not fundamentally different from the process of software development in the 1990's – programmers slowly and laboriously grinding out code line by line. Code generators can create lots of code, but in a typical project most of the code still has to be created manually.
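
As a toy illustration of this mismatch, consider a deliberately crude sketch in which hardware capability doubles every two years while software capability improves at a steady few percent a year (both growth rates are made-up assumptions for illustration, not measured values):

# Toy model: exponential hardware growth vs. slow software growth.
hardware = 1.0
software = 1.0

for year in range(1, 31):
    hardware *= 2 ** 0.5  # doubling every two years
    software *= 1.05      # assumed 5% annual improvement
    if year % 10 == 0:
        print(f"Year {year}: hardware ~{hardware / software:,.0f}x ahead of software")

Under these assumptions the gap does not close; it widens without limit, which is the basic problem with extrapolations based on hardware alone.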

It would seem that we would need many centuries to complete the project of creating the software needed for a computer to be as smart as a human, assuming that most of it would have to be manually coded. But many singularity enthusiasts assume that we will have a gigantic shortcut – the ability to acquire much of this software by studying the human brain.

We can find an example of this type of thinking by looking at the Wikipedia.org page which summarizes Ray Kurzweil's predictions. In the predictions for the 2020's, we see this prediction:

Early in this decade, humanity will have the requisite hardware to emulate human intelligence within a $1000 personal computer, followed shortly by effective software models of human intelligence toward the middle of the decade: this will be enabled through the continuing exponential growth of brain-scanning technology, which is doubling in bandwidth, temporal and spatial resolution every year, and will be greatly amplified with nanotechnology, allowing us to have a detailed understanding of all the regions of the human brain and to aid in developing human-level machine intelligence by the end of this decade.

The above description is imagining the mother of all shortcuts – the shortcut to end all shortcuts. The description imagines that we will be able to get the needed software for an intelligent machine by studying the operations of the human brain.

The problem is that there is little reason to assume that we will be able to do any such thing anytime in the next hundred years. Why? Because the mystery of how human consciousness and human memory work is one of the greatest mysteries of the universe, and nature does not give up its secrets easily.

Philosophers have long been troubled by what is called the hard problem of consciousness – the problem of how mind can be produced by matter, something that is fundamentally different. So far very little light has been cast on this problem. We do not understand how the brain stores memories, or how the brain produces thoughts. There is actually little hope that we will be able to solve this problem any time in our century.

You might get the wrong idea from studies that look at which parts of the brain have higher electrical activity when a particular brain operation occurs. Such studies merely show correlations, without throwing any light on how the mental activity is produced. Imagine a 3-year-old child studying a computer. Such a child may make some observations that lead him to conclude that a particular green light (at the front of a computer) blinks whenever some type of operation occurs, such as a file save. But that doesn't really move the child any closer to understanding the deep mysteries of how the computer is computing, and how its data is being saved. Similarly, MRI studies on electrical brain activity really don't take us very far in understanding the mysteries of how the brain works.

Although you hear about gray matter when people talk of brain cells, the human brain is actually a pinkish color. We may use the term the Pink Wall to refer to mysteries of the human brain which are blocking us from understanding its inner secrets. We want to pierce this Pink Wall, and make our way to the inner secrets of the brain. But the Pink Wall seems all but impenetrable. We have little hope of being able to get through it in the next several decades. 

Brain barrier
The Pink Wall

The idea expressed in the italicized quotation above (that we can unravel the secrets of the human brain through brain-scanning) seems like wishful thinking. Imagine an extraterrestrial culture that had not learned the details of electromagnetism, had not learned about atoms or subatomic particles, and had not learned about quantum mechanics. Such a culture might think it could learn the fundamental mysteries of how matter is organized just by making scans of rocks with higher and higher resolutions. But such thinking would be fallacious. It would seem to be just as fallacious to think that we can unravel the secrets of the human mind by scanning brain tissue in higher and higher resolutions.

Can we, for example, imagine a scientist of the future saying something like this?

I couldn't understand before how a neuron produces a thought, but now that I can view the neuron more closely with my new brain scanner, now I can see how the thought is produced.

Or can we imagine a scientist of the future saying something like this?

I couldn't understand before how a piece of brain tissue produces a thought, but now that I can view that tissue more closely with my new brain scanner, now I can see how the thought is produced.

We can't really imagine either of these things happening, because we can't imagine how anything that a scientist could see in a scanner could lead him to say, “Aha, there is a human thought being produced.”

It seems, therefore, that the Pink Wall will be blocking us for a long, long time. We will not any time soon have a brain-scanning shortcut that allows us to get the software for a silicon brain by borrowing it from the human brain. It seems that for a long time, the only option for getting the software for a silicon brain will be to build it through software development processes similar to those now in use. Such processes might take centuries before they could produce something like human consciousness.

So if you do not see the Singularity anytime in this century, blame it on the slow process of software development and the Pink Wall which blocks us from uncovering the sublime secrets of exactly how the brain works.

Sunday, April 20, 2014

Black Widow Pulsar Gives Fatal Bite to Theory of Cosmological Natural Selection

Faced with the fine-tuning problem of why the universe seems to be so well calibrated to allow the existence of intelligent life, some thinkers have advanced the idea of a multiverse, the idea that there is a vast ensemble of universes. The thinking is that if there are an infinite or nearly infinite number of universes, then we might expect one of them to luckily have just the right conditions allowing for observers. The drawbacks of this approach are many:
  1. The nearly infinite baggage of assuming all of those universes, almost all uninhabited.
  2. The violation of the principle of Occam's Razor, which asserts that “entities should not be multiplied beyond necessity” when trying to explain something.
  3. The violation of the principle of mediocrity, which asserts that a random sample from a larger population should be assumed to be representative of the population.
  4. The fact that we have never had a verified case of anything being successfully explained by a multiverse.
  5. The fact that while the probability of some universe being habitable by chance may be improved by assuming other universes, the probability of any particular universe (including our universe) being habitable by chance is not improved at all by such an assumption, not even by 1 percent.

Perhaps sensing the weakness of a simple multiverse theory, some theorists have advanced a more complicated theory – a theory they call cosmological natural selection. The idea seems to have first been advanced by the physicist Lee Smolin (author of the excellent book The Trouble With Physics). In his book Time Reborn, Smolin describes the theory as follows:

The basic hypothesis of cosmological natural selection is that universes reproduce by the creation of new universes inside black holes. Our universe is thus a descendant of another universe, born in one of its black holes, and every black hole in our universe is the seed of a new universe. This is a scenario within which we can apply the principles of natural selection.

Smolin claims to have a theory of how the physics of the universe could evolve through natural selection. But how on earth can we get anything like natural selection out of the idea of new universes being created by the formation of black holes? Smolin gives the following strained reasoning: (1) he claims that the physics that favors a habitable universe are similar to the physics that favor the production of black holes; (2) he claims that a new universe produced by a black hole might have slightly different physics from its parent universe; (3) he claims that random variations in physics that would tend to produce universes that produce more black holes would cause such universes to produce more offspring (more universes); (4) he claims that as a result of this “increased reproduction rate” of some types of universes, we therefore would gradually see the evolution of physical laws and constants that tend to favor the appearance of life and also the production of black holes.

Artist's depiction of black hole (Credit: NASA Goddard Space Flight Center)

But there are many holes in this theory based on black holes.

First, let's look at the linchpin claim that a new universe can be produced from the collapse of a huge star to form a black hole. Some analysts let Smolin get away with making this claim, but there is no reason why that should be done. The idea that a new universe can be produced when a star collapses into a black hole is a complete fantasy, with no basis in fact. We have no observations to support such a theory.

But let's try to open the door to such an idea. What would it be like if the matter in a collapsing star formed a black hole, and that extreme density of matter caused the surrounding space to be pinched off into its own little bubble? What would we then have? Such a little bubble should not be called a new universe, as that gives a completely misleading idea of some vast area with enough matter to form many galaxies. The bubble would be properly referred to as a spacetime bubble, or a micro-universe.

If such a spacetime bubble were to be formed, what would happen to the matter that was trapped in the black hole? It would be separated from our universe permanently, sealed off in its own little realm. If that were so, we would no longer observe the gravitational effects of that matter in our universe; black holes would exert no gravitational effects once they formed. But that is not at all what we observe in regard to black holes. Black holes continue to exert very strong gravitational effects (such as sucking up all nearby gas), just as if their matter continued to exist in our universe. In short, our observations are in conflict with the idea that when a massive star collapses to become a black hole, that matter exits our universe to form another universe. Our observations indicate that the matter lost in black holes is still here in our universe.

And what if the matter in a collapsing star were to cause a new universe when a black hole formed, and the matter moved over to that universe? You would then have a tiny little one-star-sized universe. Such a possibility is worthless in explaining our universe, which has the mass of at least 1,000,000,000,000,000,000 stars. 

Another problem with Smolin's theory is that it absolutely requires you to believe that our universe has been optimized to produce a maximum number of black holes – a thesis that is rather implausible given the fact that the nearest black hole is no closer than 1500 light years away, and that only one in about 30,000,000 nearby stars is a black hole. In fact, other scientists maintain that the universe is not at all optimized to produce black holes.

Another huge problem with Smolin's idea is that it does not explain how any universe could have originally come to exist in a state in which black holes could exist (a state similar to a state in which life could exist), a state enormously improbable to occur by chance. Although you may associate the concept of black holes with ideas of chaos, randomness, or disorder, there are actually many requirements that any universe must meet in order for it to have stars that can form into black holes. Smolin lists 6 such requirements on page 36 of this paper, all of which are incredibly lucky long shots. Another requirement is that the proton charge very precisely match the electron charge to many decimal places, for unless you have that coincidence there will never exist the stars from which black holes form.

In our universe the proton charge and the electron charge match to at least eighteen decimal places, and there is every reason to think that this fine balance is necessary for stars to exist. Imagine if the proton charge and the electron charge differed by one part in a trillion. That would be like increasing the electromagnetic forces on a star by one part in a trillion. But the electromagnetic force (one of the four fundamental forces) is about a trillion trillion trillion times greater than the gravitational force (another of the four fundamental forces), which means that even if there were a very tiny difference in the proton charge and the electron charge (say, 1 part in a trillion), the resulting repulsive force would be many, many times greater than the force of gravity holding together stars, and stars could not hold together. Greenstein (a professor emeritus at Amherst) says that the proton charge and the electron charge have to be balanced to 18 decimal places for stars to exist.
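
To make the arithmetic concrete, here is a minimal sketch of that balance argument; the hydrogen-mass approximation and the example imbalances are simplifying assumptions:

# Ratio of net electric repulsion to gravitational attraction for bulk
# matter whose proton and electron charges differ by a fraction epsilon.
# Each (roughly hydrogen-mass) atom then carries a net charge epsilon*e,
# so repulsion/gravity ~ epsilon**2 * k*e**2 / (G * m_H**2).
k = 8.988e9     # Coulomb constant, N m^2 / C^2
e = 1.602e-19   # elementary charge, C
G = 6.674e-11   # gravitational constant, N m^2 / kg^2
m_H = 1.67e-27  # hydrogen atom mass, kg

base_ratio = k * e**2 / (G * m_H**2)  # ~1.2e36 for a total imbalance
for epsilon in (1e-12, 1e-18):
    print(f"imbalance {epsilon:.0e}: repulsion/gravity ~ {epsilon**2 * base_ratio:.1e}")

With an imbalance of 1 part in a trillion, the net repulsion would exceed gravity by a factor of roughly a trillion, and stars could not hold together; only at around 18 decimal places of balance do the two forces become comparable, which is the rough logic behind Greenstein's figure.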

So let's imagine a collection or series of universes with random properties. It would require an incredibly unlikely set of conditions for any particular one of these universes to have the conditions necessary to produce black holes – a long shot with odds of no greater than 1 in a billion billion. According to Smolin's theory, once such a universe existed it might begin making copies of itself, as black holes formed new universes. So if you started out with several billion billion universes, a few might produce black holes, and then gradually (according to Smolin's theory), the fraction of the universes that had black holes (and that were compatible with life) might keep getting higher and higher over the eons. Such a theory might comfort some people by making our universe seem not so untypical in a collection of universes. But it would do nothing to explain the original incredibly unlikely long shot.
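
A toy simulation shows how this sort of selection dynamic would work, and also what it leaves unexplained. The seed counts and reproduction rate below are arbitrary assumptions for illustration:

# Toy model of Smolin-style cosmological selection: universes that can
# make black holes spawn offspring universes; the rest spawn none.
fertile = 3                   # assumed initial universes able to make black holes
infertile = 1_000_000_000     # assumed initial universes unable to make them
OFFSPRING_PER_GENERATION = 2  # assumed black-hole offspring per fertile universe

for generation in range(1, 61):
    fertile += fertile * OFFSPRING_PER_GENERATION
    if generation % 20 == 0:
        fraction = fertile / (fertile + infertile)
        print(f"Generation {generation}: fertile fraction = {fraction:.3f}")

The fertile fraction duly climbs toward 100 percent, but notice that the simulation only works because the three original fertile universes were put in by hand; the selection process itself never explains where that original long shot came from.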

To give an analogy, imagine you have a book-copying robot which is given a copy of a rare and wonderful book, such as a first edition of Dickens. Such a robot might then make 1000 copies of the book, and stack them on your bookshelf. Seeing all of those copies on your bookshelf, you might then tend to think that the original first edition was not so rare and wonderful, but this machine would not at all explain the origin of the first edition in the first place, something that would be very hard to plausibly explain by any theory of luck.

Smolin claims that one advantage of his theory is that it makes a falsifiable prediction, and in a 2004 paper (page 38) he lists one such prediction:

There is at least one example of a falsifiable theory satisfying these conditions, which is cosmological natural selection. Among the properties W that make the theory falsifiable is that the upper mass limit of neutron stars is less than 1.6 solar masses. This and other predictions of CNS have yet to be falsified, but they could easily be by observations in progress.

But this 2010 Science Daily piece reported the discovery of a neutron star: “The researchers expected the neutron star to have roughly one and a half times the mass of the Sun. Instead, their observations revealed it to be twice as massive as the Sun.”

So according to Smolin's own guideline, the theory of cosmological natural selection has been falsified. He said it would be falsified if we discovered any neutron stars greater than 1.6 solar masses, and a neutron star with a mass of 2.0 solar masses has been discovered.

In fact, this paper estimates that a particular neutron star called the black widow pulsar has 2.4 solar masses. In his book Time Reborn, Smolin concedes, “If that finding holds up under more precise measurements, cosmological natural selection will be falsified.”

It would seem that the black widow pulsar has delivered a fatal bite to the theory of cosmological natural selection, somewhat like a black widow spider giving a fatal bite to a human.

Friday, April 18, 2014

Strange Social Practices of the Future

As the future carves out a new and different world for us to live in, we will no doubt change our social practices and attitudes. But in what ways will our customs and attitudes change? Below are a few possibilities.

Work Rationing

Work rationing will be what happens if society discourages people from working more than a certain number of hours per week. We could easily see work rationing if advances in automation cause a big increase in unemployment. Imagine if more and more people are losing their jobs to computers and robots. There could then be new laws that strongly discourage people from working for more than a particular number of hours per week – perhaps 40, perhaps 35, or perhaps 30.

The simplest way to enact a work rationing program would be to enact wage laws that would require not just time and a half for overtime, but double pay or triple pay for overtime. Overtime might be defined as anything more than 40 hours per week, 35 hours, or 30 hours.

Energy Rationing

The last thing anyone wants is to go back to a system like that followed during World War II, in which everyone was issued coupons that had to be produced in order to buy gas. But a practice like this may come back, if dire predictions about Peak Oil come true. The practice could be updated by giving everyone a gas card with a magnetic strip, one that would have to be produced whenever you buy gas. A more general energy rationing program would be one that also rationed air travel. Each citizen might be issued an air travel card entitling him to no more than a certain number of miles of air travel per year.

Drug Legalization

We currently have the huge problem that many millions of Baby Boomers are nearing retirement without much money saved for retirement. Many millions won't be able to afford travel or golf during their golden years. Perhaps the government may deal with the problem by encouraging drug legalization, and encouraging drug use by the elderly.

One can imagine how a cold government bureaucrat might think such a policy was good. From the government's standpoint, it is a tragedy when a young person dies from a drug overdose, because that means a loss of tax revenue. But from the government's standpoint, it is no tragedy when a very old person dies, as that saves the government costs in social security and medicare payments. We can therefore imagine a future government (with grave financial problems) encouraging not just drug use by elderly citizens, but also drug use that involved a high risk of accidental overdose. Will future state-sponsored television commercials ask: “Had your heroin today, Grandpa?”

Robot Wives and Robot Children

One major problem that may plague the future is overpopulation. We don't see its effects all that dramatically in the United States, but in nations such as China one can see the grim effects of overpopulation in cities that are typically covered in very thick smog.

If overpopulation worsens, we may see the encouragement of novel social practices designed to minimize reproduction. We may see a social acceptance of men living with robot wives rather than real wives. We may see a social acceptance of married couples living with robot children designed as substitutes for real children. Perhaps the government might even give a free robot child to any couple who promised not to have a real child.

Vegetarianism as the Norm

Currently vegetarians are in the minority in countries such as the United States. But as global warming worsens, and people realize what a large percentage of greenhouse gases are produced in order to support meat eating, we could see a reversal of present attitudes. Meat eaters might then become an ostracized minority, somewhat like cigarette smokers are today. We can imagine a future in which meat advertisements are forbidden on television, and meat eaters have to wait to be seated in the relatively few restaurants that still serve meat.

Assisted Suicide

Currently only three US states have laws allowing physician-assisted suicide: Washington, Oregon, and Montana. But what if overpopulation problems worsen, and what if the nation starts to be bankrupted by the economic costs of supplying retirement benefits and health care for the very aged? We might then see something like vending machines that dispense suicide pills. In order to use the machine, you would have to swipe a credit card. The software in the machine would check that you are over a particular age, such as 80 or 85. The software might also be linked with a central medical computer, which might allow anyone to use the machine if that person had a diagnosis of a fatal disease.

Lawsuits About Whether Someone is Dead

In our society death is something with very significant legal ramifications. Whether you are dead may determine who is the owner of your house and other investments, and it may determine whether it is or is not the time for an insurance company to make a payout. But what if the status of your death is blurred by technology? What if you have uploaded your mind into a computer or a robot? Are you then dead, or not dead?

We can imagine relatives battling out such issues in court. One relative may argue that dear old Dad is dead, because his body has been buried; therefore, his will should now be executed. But maybe his surviving wife argues that Dad is not really dead, because he has had his mind uploaded into a computer. 

 A news story of the future

Wednesday, April 16, 2014

The SAGE Hypothesis, or Why Mankind Might Not Be So Inferior

For decades the main assumption about extraterrestrial intelligences has been that our galaxy contains many civilizations much older than ours, perhaps millions of years older. The reason for this assumption is the fact that the universe is thousands of times older than the human race. Humanity is estimated to be only a few hundred thousand years old, but the universe is some 13 billion years old. Apparently intelligent life could have arisen on other planets any time during the past three or four billion years. A period of about three billion years is about 10,000 times longer than a period of only a few hundred thousand years. So it would seem that if intelligence arose in our galaxy at random times, it would have arisen mainly during the first 99.99% of this three-billion-year period, which would mean most extraterrestrial civilizations would have arisen millions of years ago. Under such a scenario, our species is a very inferior species, and there are extraterrestrial minds as superior to our minds as our minds are superior to those of mice or insects.

Such reasoning seems pretty solid, but there is one big problem with it: the fact that we do not see any evidence of other extraterrestrial civilizations (with the possible exception of UFO's, a matter of controversy). If many civilizations arose on other planets millions of years ago, we might expect that such civilizations would have left signs of themselves which we would have detected. But no such sign has been indisputably found. Our searches for extraterrestrial radio signals have not been successful; we have seen no evidence of extraterrestrials in deep space; and we see no artifacts from extraterrestrials anywhere in the solar system.

This discrepancy is known as Fermi's Paradox, the paradox that asks: where is everybody? I discussed various possible solutions to Fermi's Paradox in this earlier blog post. I would now like to suggest another possible solution. I will call this possible answer the SAGE hypothesis. SAGE is an acronym standing for Simultaneous Appearance of Galactic Extraterrestrials.

The idea behind the SAGE hypothesis is that all intelligent life that has appeared in the galaxy has appeared only in the past 300,000 years. Rather than asserting our galaxy has many civilizations millions or many thousands of years older than man, the SAGE hypothesis asserts that while our galaxy may have many civilizations, none of them are much older than mankind.

The answer that the SAGE hypothesis gives to the “Where is everybody?” question of Fermi's Paradox is: they exist, but we have not detected them because they appeared not very long ago; they are about as old as we are.

If this hypothesis is correct, it can adequately explain Fermi's Paradox. We would not expect that we would have received radio signals from extraterrestrial civilizations that are only about as old as our civilization, since our civilization has made almost no attempts yet to send radio signals to other civilizations. We also would not expect to see any signs of extraterrestrial civilizations in deep space if they are only about as old as we are, nor would we expect that they would have reached our planet with spacecraft from their planets.

The graph below illustrates the difference between conventional thinking about the origin date of extraterrestrial civilizations and the assumption of the SAGE hypothesis. Each dot represents the appearance of an extraterrestrial civilization at a point in time and space. The pattern of red dots illustrates the pattern we might expect under conventional assumptions, with extraterrestrial civilizations appearing at random intervals in the past billion years. The pattern of blue dots illustrates the pattern that might occur under the SAGE hypothesis, with all the civilizations appearing relatively recently.

extraterrestrials

Two Contexts in Which the SAGE Hypothesis Would Be Credible

Despite its success in answering Fermi's Paradox, one might argue that the SAGE hypothesis is not credible, because it seems to require too much of a coincidence for all intelligent life in the galaxy to have appeared only during the past small sliver of cosmic history (a period less than a thousandth of the total length of cosmic history). But there are two contexts in which the SAGE hypothesis would be entirely credible.
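
To put a rough number on that coincidence, here is a deliberately simple sketch assuming civilizations arise at independent, uniformly random times over the past three billion years (the civilization counts are illustrative):

# Probability that N independently-arising civilizations all appear in
# the most recent sliver of a 3-billion-year window.
WINDOW_YEARS = 3_000_000_000
SLIVER_YEARS = 300_000
f = SLIVER_YEARS / WINDOW_YEARS  # fraction of history in the sliver, 1e-4

for n_civilizations in (2, 5, 10):
    print(f"{n_civilizations} civilizations: chance ~ {f**n_civilizations:.0e}")

Under chance alone, even five civilizations all arising in the same recent sliver is a 1-in-10^20 long shot, which is why the hypothesis needs one of the two contexts below.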

The first context in which the SAGE hypothesis would be credible is a theistic context. Let us imagine for a moment the possibility that the universe was specifically created billions of years ago by a cosmic designer, a possibility that cannot be casually dismissed in the light of all we know about the anthropic principle, apparent cosmic fine-tuning, and remarkable coincidences required for our existence. Under such a possibility it is plausible enough that the universe might be either programmed or controlled so that there is a widespread simultaneous appearance of intelligent life, all in the relatively recent past, rather than at random intervals over a span of billions of years.

We can imagine why a cosmic designer or controller might want the earth-like planets of the galaxy to produce intelligent life at roughly the same time – perhaps as a sign of that being's control over things, or perhaps to prevent one civilization from being able to take over the galaxy before other civilizations appeared. By arranging for civilizations to appear simultaneously throughout the galaxy, such a cosmic designer or controller might be guaranteeing a more even-handed distribution of things, so that each civilized planet gets a fairly equal share of the galactic pie.

The second context in which the SAGE hypothesis would be credible is a context in which the universe has some kind of information capabilities beyond any that we currently understand. Let us imagine that the universe has some strange capability in which the following astonishing thing happens: once a highly unlikely event occurs in one place in the galaxy, it then becomes radically more likely to start occurring in other places in the galaxy. This might happen if the galaxy had some type of information field or computational layer, a field perhaps allowing the universe to in some sense “learn” from great successes of the past. We can imagine some context under which the chance of intelligent life appearing on a planet is, say, 1 in a billion – until the time that it first appears, and then the probability changes to be vastly higher (perhaps only 1 in 10 or 1 in 100). It could be that for some information-related reasons, once some great but highly improbable event occurs, it is then almost as if the universe “learns” how to accomplish this thing; and once that happens it could then be relatively easy for the event to occur elsewhere.

Under this idea (which does not require any assumption of a divine creator or designer or controller, but which does require assuming some unusual computation-related feature of the universe), we have a second context in which this SAGE hypothesis could plausibly be true. If the probability of intelligent life appearing on a particular planet were somehow to be radically improved once it had occurred one time, there might be a significant chance of intelligent life appearing more or less simultaneously on many planets in the galaxy.

The Predictions of the SAGE Hypothesis, and How It Could Be Falsified

To many philosophers of science, a scientific idea should ideally be falsifiable, and it should make specific predictions. In this regard, the SAGE hypothesis is in good shape, because it can be falsified, and does make specific predictions.

The SAGE hypothesis would be falsified if we were to receive radio signals or television signals revealing a civilization vastly older than ours. That would instantly falsify the hypothesis, which maintains that no civilizations in our galaxy are very much older than ours. The hypothesis would also be falsified if our planet were to receive a spaceship from another planet, and those beings told us their civilization was very much older than ours.

Below are specific predictions that follow from the SAGE hypothesis:
  1. We will find no evidence of Dyson Spheres, or any other gigantic galactic engineering projects that would have required many thousands or millions of years to complete.
  2. If we receive extraterrestrial radio signals or television signals, they will not show us pictures of some vastly superior mega-civilization with godlike technology, but will merely show us a civilization not very much more advanced than our own.
  3. If we ever receive an extraterrestrial spaceship, it will not be from some civilization vastly older than ours, but will at most be from some civilization that only has started to explore the galaxy fairly recently.
We can imagine how the SAGE hypothesis could be pretty well verified in the next century. Looking in one direction of the sky, we might find radio or television signals from an extraterrestrial civilization only slightly more advanced than ours. Then looking in some opposite direction of the sky, at a completely different part of the galaxy, we might find the same thing – signals from another civilization about the same age as ours. Once the same thing happened three or four times, we would face a choice between believing in a one-in-a-billion coincidence and believing in something like the SAGE hypothesis; repeated findings of that kind would pretty well clinch the hypothesis.

Do I personally think that the SAGE hypothesis is correct, and that civilizations have only recently appeared in our galaxy? No, I think it is somewhat unlikely that the SAGE hypothesis is correct. I still tend to prefer the idea that there are some civilizations in our galaxy much older than our civilization. However, I think that the SAGE hypothesis is a respectable hypothesis that might well be true, a hypothesis that deserves a mention in a discussion of Fermi's Paradox. I would say the SAGE hypothesis is rather unlikely to be true, but perhaps not very unlikely to be true. I also think that the SAGE hypothesis has some good aspects, particularly the fact that it is falsifiable and the fact that it makes specific predictions that we might well be able to verify in a reasonable time frame.

Monday, April 14, 2014

Davies' Dubious Defense of a Double Standard

In my blog post several days ago I complained about what I called a double standard followed by many a modern physicist. The double standard is that theoretical physicists spend huge amounts of time speculating about strange, far-out concepts such as spacetime wormholes, time travel, parallel universes, string theory and multiverses (things for which there is no observational support), but the same individuals dismiss as nonsense many possibilities such as ESP for which there is a great deal of evidence (much of it accumulated by scientists and documented in scientific papers that have been published for decades). That evidence includes compelling ganzfeld studies that show a success rate of about 31%, greatly in excess of the expected success rate of 25%. For example, this study says, “if one considered all the available published articles up to and including 1997 (i.e., Milton and Wiseman's 30 studies, plus 10 new studies), 29 studies that used standard ganzfeld protocols yielded a cumulative hit rate that was significantly above chance (31%).” An ESP study involving only artistically gifted people reported a success rate of 50%, twice the success rate expected by chance.
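
To see why a 31% hit rate against a 25% chance expectation carries statistical weight, here is a minimal sketch of a one-tailed binomial test. The trial count below is an assumed illustration, not the actual pooled total from the ganzfeld literature:

import math

def binomial_tail(k, n, p):
    """Probability of k or more successes in n trials with success rate p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 1000               # assumed session count, for illustration only
hits = int(0.31 * n_trials)   # a 31% hit rate
chance_rate = 0.25            # hit rate expected under the null hypothesis

print(f"Chance probability of {hits}/{n_trials} hits: "
      f"{binomial_tail(hits, n_trials, chance_rate):.2e}")

At a thousand sessions the chance probability of a 31% hit rate is already down around a few in a million, and it shrinks rapidly as more sessions at the same hit rate are added.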

[Image: schematic depiction of ESP]

The very next day I read the book The Eerie Silence by physicist Paul Davies, which contained a passage that almost seemed to have been written in response to my blog post. Davies discussed exactly the same discrepancy I had discussed, using the same term I used (“double standard”) to describe it; however, Davies tried to defend the double standard.

Before rebutting Davies' reasoning, let me make clear that I am a longtime Paul Davies fan. I have enjoyed many of his books, and in 19 cases out of 20 I find his reasoning convincing. But in the passage I will now discuss, I think he fails to make a convincing argument.

Davies starts out like this:

The point about modern physics is that weird entities like dark matter or neutrinos are not proposed as isolated speculations, but as part of a large body of detailed theory that predicts them. They are linked to familiar and well-tested physics through a coherent mathematical scheme. In other words, they have a place in well-understood theory. As a result, their prior probability is high.

Davies is on extremely dubious ground here. Neutrinos are predicted by the Standard Model of Physics, but dark matter is not predicted by that theory at all; no dark matter particle appears anywhere in the Standard Model. Physicists believe in the likelihood of dark matter not for theoretical reasons but for observational ones: they need dark matter to help explain certain observations. Exactly the same thing can be said about ESP.

Dark matter is not at all “linked to familiar and well-tested physics through a coherent mathematical scheme.” It is instead a completely mysterious alleged thing that we basically know nothing about. We have zero tested equations that describe dark matter, and also zero equations that describe ESP. That really leaves dark matter and ESP in the same ballpark.

In the case of a similar and equally important mysterious phenomenon – dark energy – we can say that there is a strong theoretical basis for believing that something like dark energy exists. The problem, however, is that the theory (quantum field theory) tells us dark energy should be at least a trillion trillion trillion trillion trillion times more powerful than it is. This is the widely discussed “vacuum catastrophe,” often called the worst prediction in the history of science. Scientists continue to believe in dark energy even though their theories are almost infinitely out of whack with observations of how much dark energy exists. So is dark energy “linked to familiar and well-tested physics through a coherent mathematical scheme”? No, it isn't. You can't use such language when the theory gives an answer that is wrong by a factor of at least 10^60 (a 1 followed by 60 zeros). So a claim of theoretical rectitude can't be made for dark energy – yet scientists continue to believe in it.
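For the record, the mismatch can be stated in rough numbers. The figures below are commonly cited order-of-magnitude values, not numbers from Davies' book, and the naive quantum-field-theory estimate depends on where one cuts off the calculation:

```latex
% Observed dark-energy density vs. a naive quantum-field-theory estimate
% with a Planck-scale cutoff (commonly cited order-of-magnitude values).
\rho_{\Lambda}^{\mathrm{obs}} \sim 10^{-9}\ \mathrm{J/m^3}, \qquad
\rho_{\Lambda}^{\mathrm{QFT}} \sim 10^{113}\ \mathrm{J/m^3}, \qquad
\frac{\rho_{\Lambda}^{\mathrm{QFT}}}{\rho_{\Lambda}^{\mathrm{obs}}} \sim 10^{122}
```

Choosing a lower cutoff shrinks the gap, but even charitable cutoffs leave a mismatch of dozens of orders of magnitude, consistent with the “at least” a factor of 10^60 mentioned above.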

In truth, neither dark matter nor dark energy is part of “well understood theory.” They are mysterious things that we do not have any well-established theory for (at best we have half-baked, super-speculative or “way off the mark” theories). The same thing can be said about multiverses, parallel universes, time travel and wormholes.

I may also note that Davies errs when he suggests that the “prior probability” of something is “high” when it can be “linked to familiar and well-tested physics through a coherent mathematical scheme.” With sufficient ingenuity, a physicist can create all kinds of bizarre, improbable theories that are linked in some way to existing physics. Inventive physicists have done exactly that, creating a thousand and one conflicting theories of time, space, particles, and forces, including countless varieties of string theory, inflation theory and quantum gravity. The fact that you can somehow link your theory to existing physics does not show it has even one chance in 100 of being correct. As Davies himself says in another book, “It is easy to construct artificial universe models, albeit impoverished ones bearing only a superficial resemblance to the real thing, which are nevertheless mathematically and logically self-consistent” (Information and the Nature of Reality, page 68).

Regarding ESP, Davies says the following:

Telepathy is not obviously an absurd notion, but it would take a lot of evidence for me to believe in it because there is no properly worked out theory, and certainly no mathematical model to predict how it works or how strong it will be in different combinations. So I assign it a very low (but non-zero) prior probability. If someone came up with a plausible mechanism for telepathy backed up [by] a proper mathematical model which linked it to the rest of physics, and if the theory predicted specific results – for example, that the 'telepathic power' would fall off in a well-defined way as the distance increases, and would be twice as strong between same-sex subjects as mixed-sex subjects – I would sit up and take notice. I would then be fairly easily convinced if the experimental evidence confirmed the predictions. Alas, no such theory is on the horizon, and I remain extremely skeptical about telepathy in spite of the many amazing stories I have read.
 
Here Davies sets out quite a demanding set of criteria for believing something: (1) we should believe a thing only if it is predicted by some mathematical model; (2) this model should be “linked to the rest of physics”; (3) the model should make specific numerical predictions that can be confirmed. Does it make sense to advance such criteria as a prerequisite for belief? It certainly does not.

One reason is that a very large fraction of the things we do believe in, inside and outside of the sciences, do not satisfy such criteria. We accept such things nonetheless because we have observations that compel us to believe in them. I refer to things such as love, hate, psychological discomfort, newly discovered species, gamma ray bursts, and earthquakes. Consider what happens when a biologist discovers a new species of animal in Brazil. The discovery is not predicted by any mathematical model, but it is accepted as a new scientific finding nonetheless. Even in the hard physical sciences we accept things that are not predicted by existing models. A recent example is the discovery of the acceleration of the universe's expansion, which stunned scientists precisely because no popular theory had predicted it.

What Davies describes here is a model of verification that is sometimes followed (but often not followed) in the world of physics and astronomy, and such a model of verification is actually uncommon in many other sciences. Sciences such as psychology, geology, and biology make relatively little use of mathematical models and prediction. Pick up a textbook on zoology and you will see almost no equations or mathematical models anywhere. In such sciences there is no standard requiring that a new scientific conclusion be supported by some mathematical model.

As for Davies' idea that ESP would need to be supported by some theoretical model “linked... to the rest of physics,” such a prerequisite makes no sense. It amounts to saying, “I refuse to believe in something unless it is similar to things I have already learned.” I may note that some of the greatest advances in physics and astronomy occurred when scientists postulated things that did not fit the previous framework of ideas. When quantum mechanics was introduced, it did not fit at all with physics as previously understood. When the Big Bang theory was introduced, it did not fit at all with the previous cosmological ideas of scientists such as Einstein, who favored an eternal, static universe.

Davies suggests that we should not take an observed phenomenon seriously unless we have a theory of how it works, one that makes good predictions. That is a misbegotten principle that scientists themselves do not follow. When scientists observe a new type of phenomenon, they accumulate observations of it, and at the same time start working on theories to explain it. It may be decades or centuries before a theory arrives that finally explains the phenomenon. Pandemics are one example: the phenomenon was studied for centuries before scientists such as Pasteur finally came up with a decent theory to explain it. It would have made no sense around 1600 to say, “Don't believe in pandemics – we don't have a good theory to explain them.” Lightning is another example: scientists observed it for ages, but did not develop a decent model of it until the 18th century. It would have made no sense around 1500 to say, “Don't believe in lightning – we don't have a good model to explain it.”

What Davies' criteria amount to is a plea to be excused from taking something seriously whenever that thing lacks the characteristics that allow him to study it in the way he is most familiar and comfortable with. That's a lame type of reasoning. We can imagine the same type of reasoning being used by a biologist: “I refuse to believe in galaxies or other reputed deep-space objects, because I cannot view them through my microscope, or place them in my test tubes, or study them in a cage or an aquarium.”

A wiser approach is that we should take a phenomenon seriously whenever we have repeated compelling evidence for its existence, regardless of whether we have familiar off-the-shelf methods for studying the phenomenon, and regardless of whether the phenomenon comfortably meshes with our preconceptions.