
Our future, our universe, and other weighty topics


Sunday, March 30, 2014

The Cloud of Victory: A Science Fiction Story

In the year 2040 when our little nation was attacked by the mighty superpower, the superpower wrapped itself in a cloak of good intentions. The superpower told the world that it had invaded our small country to bring greater prosperity to our citizens. But we knew the real truth. We had been invaded so that the superpower could grab our resources, resources that were all the more important in a world troubled by energy and mineral shortages.

I was the leader of the nation, so I met with the Defense Secretary to discuss how we would try to defend ourselves against the attack.

“Their divisions have stormed the border, and are heading towards this city,” I said. “Have we been able to maintain a decent defense line?”

“No,” said the Defense Secretary with an odd smile. “Not at all.”

“You don't seem too concerned about the situation,” I said, “but I am. What is the ratio of our forces to theirs?”

“The superpower has invaded us with twice as many soldiers as we have in our army,” said the Defense Secretary. “But don't worry, everything will be fine.”

Near our border the enemy divisions routed our small defense forces. Soon the enemy divisions approached the nation's capital. I called an urgent meeting with the Defense Secretary.

“We're on track to lose this war, and lose this country to the enemy,” I said. “Isn't there something we can do, some last gamble?”

“There is,” said the Defense Secretary, smiling. “Come with me, and I'll show it to you.”

The Defense Secretary arranged for us to travel to a huge factory ten miles away. When we got out of our vehicle, the Defense Secretary pointed to the building.

“There it is,” said the Defense Secretary. “This is going to win us this war.”

“A factory?” I said skeptically. “Our troops have been devastated. It's too late to get them some new weapon.”

“It's almost time for the moment of release,” said the Defense Secretary. “This is going to be a moment you'll never forget.”

And so it was. A short time later, I saw a stream of flying objects coming out of the factory. The objects formed into what looked like a dark cloud. The cloud kept getting bigger and bigger and bigger. Before long the cloud seemed to fill the whole sky, making it look like a summer sky that was about to erupt in a downpour.



“There it is,” said the Defense Secretary. “The Cloud of Victory. The cloud that will save us from the invaders.”

The giant dark cloud broke up into four smaller clouds, each flying away in a different direction: one to the east, one to the west, one to the north, and one to the south.

Later I learned the technical details. Each object in the cloud was a small flying drone, only about as long as a man's foot – a device called a mini-drone. Each mini-drone contained a power source, a metal detector, a motion detector, and an explosive. The original "Cloud of Victory" contained 30,000 small drones.

Each mini-drone had some simple programming. Each mini-drone was programmed to look for moving metal. When it detected a large piece of moving metal, the mini-drone would descend from the sky, crashing into the moving metal.

Almost all of the tanks of the invaders were destroyed from the sky by the mini-drones. The small mini-drones made little noise, and were hardly even visible in the sky, so it was all but impossible for a tank driver to know if a mini-drone was in the sky above him. One minute a tank driver might be driving along, thinking that there was no trouble anywhere near. The next minute a small mini-drone would descend from the sky, crashing into the tank and blowing it up.

Something similar happened to most of the enemy's soldiers. The soldiers were wearing armor, and carrying metal guns. When a small mini-drone in the sky detected the moving metal, the mini-drone would descend from the sky, crashing into the soldier, blowing him to bits.

The day before the Cloud of Victory appeared, the enemy was winning the war. But a few days later, the enemy's invading force was in shattered shambles. Humiliated, the defeated invaders withdrew.

I congratulated the Defense Secretary on his brilliant tactic.

“But what if the enemy comes back in a few years, armed with some defense against the Cloud of Victory?” I asked.

“Don't worry,” the Defense Secretary said. “By then we will have perfected the Annihilation Spray.”

Friday, March 28, 2014

A Precognitive Dream of Flight 370?

I often wake up remembering extremely detailed stories and images from my dreams. Last night, for example, I woke up remembering a very vivid and elaborate dream: a story of a rich man with a huge mansion who had some woman redecorate his mansion in some astonishingly colorful way.

But at about 9:00 AM EST on March 7, 2014, I awoke from a dream with the following strange thought in my mind: 6 Indian women murdered. I didn't remember any story associated with this dream, and couldn't recall any images associated with it. All I remembered was a phrase: 6 Indian women murdered.

After I ate breakfast and turned on my computer, I recorded my dream in a text file in which I occasionally record dreams I have had. I did a Google search to see whether there was any news report of six Indian women having been murdered. I found nothing, so I forgot about the matter for several days.

I had started recording my dreams a few months earlier because, some months before the attack on the World Trade Center on September 11, 2001, I had a dream that the World Trade Center collapsed. In the dream I was an observer in the World Trade Center, and the floor gave way. I and everyone else plunged downward as the whole building collapsed. I then woke up, as I always do whenever I reach a terrifying or horrifying point in a dream.

I mentioned to my wife that I had a dream that the World Trade Center collapsed, but then gave the matter no further thought until the events of September 11, 2001.

As it happens, the day I had the dream about the 6 Indian women murdered was the same day that Malaysia Airlines Flight 370 mysteriously disappeared. The jet disappeared on March 8, 2014 local time, but when it disappeared the date in the United States was March 7, 2014. The jet disappeared at 17:21 UTC on March 7, about three and a half hours after my dream, which occurred at about 14:00 UTC on March 7.

When I later realized this coincidence, I did a web search to find out how many Indian people on the jet died. According to this link there were 6 Indian people who died on the plane – 5 Indian nationals, and one Indian person from Canada.

If we assume that Flight 370 was lost because of a deliberate act of terrorism or because of a suicidal pilot, then there would be a fairly close match between my dream and the reality. I had dreamed that 6 Indian people were murdered on the same day that 6 Indian people may well have been murdered.

But I didn't quite get things exactly right – because only three of the Indian people who died were women. But presumably if precognition occurs, it is a rather hazy thing; so we wouldn't necessarily expect it to be 100% accurate.

There has been controversial research suggesting that precognition (knowledge of the future) actually occurs, most notably the “Feeling the Future” experiment done at Cornell University by Daryl Bem. Studies have been done on what is called presentiment, which is the alleged tendency of the human mind and body to start reacting to phenomena an instant before they occur. A recent meta-analysis examined 26 studies of presentiment, and concluded that there was a statistically significant effect that is unexplained (see here for a similar scientific paper). 

Some people think that precognition can occur in dreams. A writer named J. W. Dunne wrote a book called An Experiment With Time, in which he claimed that after he started recording his dreams after waking up, he found that many of them came true. 

precognitive dream

One fascinating theory is that when events occur, they create ripples in some cosmic field, rather like the ripples caused when stones are dropped in a pond. Such ripples may travel forward in time or backward in time; and the more significant the event, the larger the ripple. Somehow the human mind might be able to pick up some of these ripples coming from the future. Such a theory could make sense only in a larger philosophical framework with an enlarged concept of the relation between the human mind and nature, one that transcended the reductionist dogmas of naturalistic materialism.

Was my dream an example of precognition, or was it merely a coincidence? I have no idea. I have no proof for the tale I have told here, so you can believe it occurred as I have described it, or you can believe I am just making it up.

I offer my heartfelt condolences to the families of all the people who lost their lives in Flight 370.

Wednesday, March 26, 2014

Humbling Discoveries in Our Cosmic Backyard Suggest Our Astronomical Ignorance

Today scientists announced two surprising discoveries relating to the solar system. The first was the discovery of a 250-mile-wide planetoid some 7.7 billion miles from the sun. The New York Times described the discovery as follows: “Astronomers have discovered a second icy world orbiting in a slice of the solar system where, according to their best understanding, there should have been none.”

The area mentioned is an area between the orbit of Pluto and the Oort Cloud, a gigantic cloud-like region of comets believed to surround our solar system. Scientists originally thought this area was empty, but then they discovered within it Sedna, a 600-mile-wide planetoid three times farther from the sun than Neptune.

Some are speculating that the planetoid discovery announced today may hint at the existence of a super-Earth planet ten times more massive than the Earth, orbiting too far away from the sun to have been previously discovered. But if such a planet existed, it would almost certainly be too cold for life to exist on it, unless the planet had some type of geological activity that produced heat.

The second was the detection of the first ring ever found around an asteroid. These observations come as a surprise as big as the discovery a few weeks ago that a particular asteroid is disintegrating, for unknown reasons.

Humbling discoveries such as these make me wonder: why does any scientist claim to understand exactly what happened during the first second of the universe's history? Evidently we don't even fully understand our own solar system, our own tiny little cosmic backyard. So give out a hearty chuckle the next time a scientist speaks as if he has a detailed knowledge of exactly what happened at the dawn of time 13 billion years ago.

Monday, March 24, 2014

The Impossibility of Verifying a Varying-Constants Multiverse

For several decades scientists have discovered more and more examples suggesting our universe is seemingly tailor-made for life. A list of many examples is discussed here. One dramatic example is the fact that even though each proton in our universe has a mass 1836 times greater than the mass of each electron, the electric charge of each proton matches the electric charge of each electron exactly, to 18 decimal places, as discussed here (the only difference being that one is positive, the other negative). Were it not for this amazing coincidence, our very planet would not hold together. But scientists have no explanation for this coincidence, which seems to require luck with a probability of less than 1 in 1,000,000,000,000,000,000. As Wikipedia states, “The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics.”
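
To see roughly where that 18-decimal-place figure comes from, here is a standard back-of-the-envelope estimate (a sketch, not a rigorous derivation). The electrostatic repulsion between two protons exceeds their gravitational attraction by a staggering factor:

```latex
\frac{F_{\mathrm{electric}}}{F_{\mathrm{gravity}}}
  = \frac{e^{2}}{4\pi\varepsilon_{0}\,G\,m_{p}^{2}} \approx 1.2 \times 10^{36}
```

So if the proton and electron charges failed to cancel by some tiny fraction ε, each atom would carry a net charge of roughly ε times the elementary charge, and the net electric repulsion between two chunks of ordinary matter would be roughly ε² × 10^36 times their gravitational attraction. For gravity to dominate, so that stones, people, and planets hold together, we need ε² × 10^36 < 1, which requires ε to be smaller than about 10^-18: a cancellation accurate to about 18 decimal places.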

Wishing to cleanse their minds of any suspicions that our universe may not be the purely accidental thing they imagine it to be, quite a few materialists have adopted the theory of a multiverse. This is the idea that there is a vast collection of universes, each very different from the other. The reasoning is that if there were to be, say, an infinite number of universes, then we would expect that at least one of them would have the properties necessary for intelligent life, no matter how improbable it may be that such properties would exist.

I will refer to such a collection of universes as a varying-constants multiverse, since the concept is that the fundamental constants of different universes in this collection would vary.  The fundamental constants are items such as Planck's constant, the gravitational constant, the speed of light, the proton charge, the electron charge, and the mass ratio of the proton and the electron.

The question I will consider in this post is: is there any possible way that such an idea of a varying-constants multiverse could be verified?

Why a Varying-Constants Multiverse Could Not be Verified Through Telescopic Observations

You might think that we could verify the idea of a varying-constants multiverse by long-range telescopic observations. You can imagine scientists building some giant telescopes a thousand times more powerful than any ever built. If such telescopes were to allow scientists to look a thousand times farther than they ever looked before, then you might guess that one day scientists might be able to see other regions of space where the constants of nature differ. You might, for example, imagine that scientists looking as far as possible in one direction might see some distant area where the speed of light was much higher, and scientists looking as far as possible in some other direction might see some distant area where the gravitational constant was much different than it is on Earth.

But nothing of the sort has happened, and there is a reason why it cannot ever happen. The reason is that because of the limit set by the speed of light, whenever we look very far away in space, we are looking back in time. So when we look 10 billion light-years away (near the current observational limits of our telescopes), we are looking 10 billion years back in time. Scientists say that our universe began in the Big Bang about 13 billion years ago. So we have a built-in limit as to how far our telescopes will ever be able to look. We can never hope to observe anything, say, 16 billion light years away, simply by building more and more powerful telescopes.

Our most powerful telescopes (such as the Hubble Space Telescope) can look almost as far as humans will ever be able to see with telescopes, which is about 13 billion light years. There is no chance at all that by looking a little farther we will be able to see some sign of another universe. As we approach the observational limit of about 13 billion years, we are looking back a little more to the beginning of our own universe. Scientists say that various aspects of the very early universe and the Big Bang (such as what is called the recombination era) act as a barrier that will forever block us from observing all the way back to the time of the universe's birth in the Big Bang.

So there is no hope at all of being able to verify any theory of a varying-constants multiverse just by looking farther and farther out in space. But some have suggested two other ways in which we might be able to lend credence to a multiverse theory by telescopic observations: (1) by observing strange, unexplained motions of parts of our universe; (2) by finding evidence of previous cycles of our universe.

The first of these involves the idea that we might be able to see that some fraction of our universe is moving around in an unexplained way, possibly because of gravitational influences by some nearby universe. Such an observation is theoretically possible, but would not actually provide any observational support for the idea of a multiverse with varying constants. If we observed such an unexplained motion, it would best be explained by postulating new factors and physics within our observed universe. Even if we were to be forced to conclude that our universe is being gravitationally tugged by some other universe, that would at best be support for the idea that our universe has a “sister universe,” rather than the almost infinitely more complicated idea that there is a vast collection of universes. Moreover, such an observation would provide no support for any idea that other universes have a variety of different physical constants.

The same thing can be said about the idea of finding evidence that our universe had previous cycles. If such evidence were found, it might lead us to think that the universe existed before the Big Bang, and that the universe is older than 13 billion years. But such evidence would not give any basis for believing in anything like a varying-constants multiverse. If our universe had previous cycles, there is no reason to think that its fundamental constants such as the proton charge would change from one cycle to the next. Science knows of no mechanism by which the fundamental constants of the universe could change (here I exclude the Hubble constant, a measure of the universe's expansion rate, which is not really a fundamental constant).

Why a Varying-Constants Multiverse Could Not be Verified By Verifying Theories Such as Inflation

Could we ever verify the theory of a varying-constants multiverse by verifying the theory of cosmic inflation, the idea that the universe underwent an exponential expansion during part of its first instant? No. I may first note that the prospect of being able to verify any theory of cosmic inflation is far dimmer than many now think. It is very doubtful that the current technique being pursued (based on looking for b-mode polarization) will ever provide any real verification. There are many sources of b-mode polarization that are not caused by inflation (gravitational lensing, dust, synchrotron radiation, and others), so trying to find a fingerprint of inflation is like trying to extract a DNA sample from a bandage that was passed around and shared by ten different people with bleeding wounds.

But even if scientists were to confirm a theory of cosmic inflation, that would not verify any theory of a varying-constants multiverse. For one thing, while some versions of the inflation theory imagine inflation producing multiple bubbles of space that might be called other universes, we would have no way of knowing whether such other bubbles of space had ever formed, as they would be forever unobservable. More importantly, we would have no license for assuming that such bubbles of space would be universes with fundamental constants that differed from our own. If one universe produced bubbles of space that branched off to become spatially separated from that universe, the most natural assumption is that such “universes” (or, more properly, other regions of the same universe) would have the same fundamental constants as their parent universe, particularly since science knows of no mechanism by which one universe could somehow produce a different universe with different fundamental constants. 

The Impossibility of Verifying a Varying-Constants Multiverse By Launching Exploratory Expeditions

There is still one other technique that might be proposed for verifying the idea of a varying-constants multiverse: the technique of actually launching a mission into another universe. One can imagine some amazing machine that might allow us to travel from our universe to a different universe. In theory, if mankind or its successors were to launch several trips to other universes, and verify that they had different fundamental constants, that might verify the idea of a varying-constants multiverse.

But there are huge problems with such an idea. The first reason is that science offers no clue as to how we could ever travel to another universe. The idea seems like pure fantasy, a thousand times more fanciful and extravagant than the farfetched idea of instantly traveling to another star through a space-time wormhole.

The second reason is that if we were somehow to create some machine capable of traveling to another universe, there is no reason to think that it would be capable of traveling back to our universe or sending signals back to our universe (either of which would be necessary for any real verification to occur).

The third reason is that if we were somehow able to create a machine that traveled to another universe, it would still be all but impossible for such a device (or people or robots traveling in it) to verify that the other universe had a set of fundamental constants different from ours. The measurement of our universe's fundamental constants has taken decades of work by scientists around the world. There's no reason to think that a machine transported to another universe would be able to verify that the fundamental constants of that universe were different.

The fourth reason is that if one imagines the scenario of a varying-constants multiverse (many universes, each with random fundamental constants), there would be an overwhelmingly high likelihood (such as 99.999999999%) that any machine transported to such a universe would be instantly destroyed, along with any robots or humans that came along for the ride.

To understand this point, you have to consider the astonishingly high degree of fine-tuning that allows stable matter to exist in our universe. In his book The Symbiotic Universe, astronomer George Greenstein says this about the equality of the proton and electron charges: “Relatively small things like stones, people, and the like would fly apart if the two charges differed by as little as one part in 100 billion.” There are quite a few other cases of fine-tuning required for the existence of stable matter, including fine-tuning of the strong nuclear force.

So if we then imagine a machine being transported to another universe with random physical constants, we have to imagine the machine (and anyone inside it) being instantly destroyed as soon as it arrived. With a 99.9999999% likelihood, the coincidences which allow for stable atoms and molecules in our universe would not exist in such a universe. As soon as the machine got over to the other universe, its atoms and molecules would split apart, as the machine would (with overwhelming likelihood) no longer be in a universe which favored the existence of atoms and molecules.

multiverse
A recruiting poster from 4000 AD ?

Because of these various reasons, we can conclude that there is no substantial possibility that any machine could ever be transported to another universe to help verify the concept of a multiverse consisting of many universes, each with a different set of fundamental constants.

Conclusion

It seems that it is quite impossible to ever verify the theory that there are multiple universes with varying fundamental constants. The theory is neither falsifiable nor verifiable. Consequently, the theory is more of a metaphysical theory than a scientific theory, as all truly scientific theories can be either verified or falsified under some reasonable scenario.  
 

Saturday, March 22, 2014

A Scientific Theory is Not Confirmed Merely Because It Seems to Make a Few Correct Predictions

In discussions of scientific theories, it is often argued that this or that result will confirm some scientific theory because such a result was predicted by that theory. But such reasoning is often mistaken. The fact that a theory may seem to make some correct predictions does not necessarily show that the theory is likely to be true.

Below are some of the reasons why this is true.

A theory can be a mixture of true and false assumptions, and correct predictions can be made by the true assumptions.

Theories are often a mixture of correct assumptions and mistaken assumptions. Correct assumptions in a theory may imply certain predictions, which may prove successful. But the theory may still contain incorrect assumptions, which did not imply the predictions that turned out true. The correct predictions only tend to confirm (perhaps to at least some degree) those parts of a theory that implied those correct predictions, not other assumptions that did not imply those predictions.

For example, some people advanced the theory in 2002 that the Bush administration had secretly orchestrated the September 11 attacks, to create a pretext for war because it wanted to invade Iraq. Perhaps some of those people then said in 2003 that their theory was confirmed, because the Bush administration really did invade Iraq in that year. But in this case we have a theory making two assumptions: (1) the assumption that the Bush administration orchestrated the September 11 attacks; (2) the assumption that the Bush administration wanted to attack Iraq. The invasion of Iraq in 2003 may tend to confirm the second of these assumptions, but not the first.

So when a particular scientific theory seems to be confirmed by some prediction that eventually matches observations, we need to ask: which parts of the theory actually imply the prediction that matched observations? Only such parts, if any, should be considered as having been put (possibly) in a favorable light by the observations.

Multiple theories may make a particular prediction, so a confirmation of the prediction may not really support a particular theory that makes that prediction.

It is not necessarily true that a confirmed prediction tends to show that the theory that predicted it is true, because there may be many other reasonable theories that make the same prediction. For example, let's imagine a person in 2007 arguing that sinister forces on Wall Street were trying to orchestrate a sharp economic downturn, so that they could make lots of money on certain types of stock market bets called puts (which increase in value when a stock goes down). In 2008 (when such an economic downturn occurred) such a person would no doubt say, “Look, we did have a sharp economic downturn; my theory is confirmed.” But such reasoning would be invalid, because the same sharp economic downturn was predicted by various other theories, such as the theory that a housing bubble would produce such an economic downturn, and the theory that too much consumer credit would produce an economic downturn.

It is too easy to selectively present data in a way that makes a theory's predictions look true, either by massaging the “observed data,” by massaging the “predicted data,” or by massaging both, either deliberately or through unrecognized bias.

The favorite device of a theory advocate is a “predicted versus actual” line graph. Here is a very simple example of this type of graph, with the blue line showing predicted results and the red line showing observed results:


This type of graph can be used to try to show that a particular theory is matching observations. But be distrustful when you see such a graph. Why? Because it is easy to cherry-pick either the data used as the “observed data” or the data used as the “predicted data,” or both.

This is particularly true in any case where the data points are not some simple thing (depending on one observation, as in the case above), but instead require some complicated summary of multiple observations. In such cases it is all too easy for a presenter to massage the data in a way that shows a theory in a favorable light. Given a choice of five different ways of showing the “observed results” (each using a different source of data, or a different way of summarizing the data), someone can choose whichever set of “observed results” is most in agreement with his theory.

Another way in which bias can be displayed is by massaging and cherry-picking the “predicted results” shown in a graph such as the one above. Theories often have multiple flavors, which vary because of a choice of parameters that can be used within the theory. In other words, the “predicted results” from a particular theory are often very fuzzy, rather like an electron probability cloud. A presenter can pick particular values within that fuzzy cloud that most closely match the “observed results,” and plot such values as the “predicted values” on a line graph. The result will show the theory in the most favorable light, but may be misleading. For a recent specific example of this type of cherry-picking, see this blog post.

Another way in which bias can be shown in matching observed results with predicted results is simply by choosing the start point and the end point of the data being graphed. For example, if I have a theory that bonds tend to out-perform stocks, I may use a start point of January 1, 2000 and an end point of Dec 1, 2008. That will show a huge advantage for investing in bonds as compared to investing in stocks. But a different start point and end point would tell a very different story. A similar technique can be used to try to show the likelihood of a particular scientific theory. A supporter can choose to graph whatever start point and end point shows the closest match between the theory and observations, even though different start points and end points on the line graph would show a much smaller degree of agreement.
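
To make the start-point/end-point trick concrete, here is a minimal sketch in Python. The annual return figures below are made up purely for illustration; they are not real market data:

```python
# Illustrative only: made-up annual total returns (percent), not real market data.
stock_returns = {2000: -9, 2001: -12, 2002: -22, 2003: 29, 2004: 11,
                 2005: 5, 2006: 16, 2007: 5, 2008: -37, 2009: 26}
bond_returns = {2000: 12, 2001: 8, 2002: 10, 2003: 4, 2004: 4,
                2005: 2, 2006: 4, 2007: 7, 2008: 5, 2009: -1}

def cumulative_return(returns, start_year, end_year):
    """Compound the annual returns from start_year through end_year, inclusive."""
    total = 1.0
    for year in range(start_year, end_year + 1):
        total *= 1 + returns[year] / 100
    return (total - 1) * 100  # percent gain or loss over the window

# Window ending in the 2008 crash: bonds look vastly better than stocks.
print(cumulative_return(stock_returns, 2000, 2008))  # a deep loss for stocks
print(cumulative_return(bond_returns, 2000, 2008))   # a solid gain for bonds

# Shift the window, and the story changes dramatically.
print(cumulative_return(stock_returns, 2003, 2009))  # a large gain for stocks
print(cumulative_return(bond_returns, 2003, 2009))   # only a modest gain for bonds
```

The same compounding function, applied to the same data, tells opposite stories depending only on which window is graphed.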

Even if a theory is the only theory that predicts an observed phenomenon, that does not mean the theory is true, because there may be many possible theories not yet imagined that can explain the phenomenon.

One type of argument sometimes made is: x is the only theory that predicts the observed phenomenon y, so x must be true. But that does not follow. The human imagination is weak, and our ignorance is enormous. Almost any observed phenomenon can be explained in many different ways, but the puny human imagination may be able to think of only one or two of those ways. Back during the days of the Black Plague, the theory of "God's wrath" may have been the only theory that explained why so many people were dying, but it would have been wrong at that time to assume such a theory was correct on that basis.

With sufficient ingenuity, unbelievable theories can be contrived to make predictions that match observations.

Sometimes it is possible for a theory to make some correct predictions even though the theory isn't plausible. One of the most famous examples is the Ptolemaic theory of the solar system. The theory held that the Earth was at the center of the solar system. To make the theory match observations, it included a complex model of planetary motions in which planets moved in small circles called epicycles, which were themselves carried along much larger orbits. The predictions of the Ptolemaic theory seemed accurate for centuries, but the theory was quite false.

There are modern-day equivalents of the Ptolemaic theory -- theories that are very suspect because of their excessive complexity and contrivance.

 Implausible, contrived scientific theories are like this
 (Source: Wikiversity, Howard Community College)

Scientific theories are only well-confirmed by predictions when the theories make very many predictions that have been confirmed by observations.

Thinking that a scientific theory has been confirmed because it makes a few correct predictions is like thinking that you've proven you're a great baseball player because you've pounded out a few base hits. You've only proven yourself a great baseball player if you've made hundreds or thousands of hits. Similarly, the only scientific theories that are well-confirmed by predictions are those that have made hundreds, thousands or millions of predictions that have been confirmed.

We have a great example of such a theory: the theory of gravitation. The theory is based on a simple exact formula that you can use to compute the degree to which massive bodies attract each other. Scientists and engineers (and the computers on spacecraft) have used this theory thousands or millions of times, and the predictions made by the theory have always proven true. A robot spacecraft could never reach Mars and land on Mars unless the predictions of the theory of gravitation proved true thousands of times, nor could the Apollo astronauts have landed on the moon and returned.
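
The simple exact formula in question is Newton's law of gravitation, F = G·m1·m2/r². As a minimal sketch of how routinely it can be applied (the spacecraft numbers below are illustrative):

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(m1, m2, r):
    """Newton's law of gravitation: attraction (in newtons) between two masses."""
    return G * m1 * m2 / r**2

# Force the Earth exerts on a 1000 kg spacecraft about 400 km above the surface
earth_mass = 5.972e24   # kg
orbit_radius = 6.771e6  # m, measured from the Earth's center
print(gravitational_force(earth_mass, 1000.0, orbit_radius))  # roughly 8700 N
```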

Another such theory is the theory of electromagnetism. The theory is based on a simple exact formula you can use to compute the attraction between two electrical charges. Scientists and engineers have used the formula thousands or millions of times, and it always gives the right answer.
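
Here the simple exact formula is Coulomb's law, F = k·q1·q2/r², which has the same inverse-square shape as Newton's law. A minimal sketch:

```python
k = 8.988e9  # Coulomb constant, N*m^2/C^2

def electrostatic_force(q1, q2, r):
    """Coulomb's law: force (in newtons) between two point charges."""
    return k * q1 * q2 / r**2

# Repulsion between two protons separated by one angstrom (1e-10 m)
e = 1.602e-19  # elementary charge, C
print(electrostatic_force(e, e, 1e-10))  # roughly 2.3e-8 N
```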

Compared to these theories, any theory that claims scientific validation because it seems to make a few correct predictions is like some kid who claims to be a professional actor because he acted in a few high-school plays.

According to the standard I mention here, we might have a reason for regarding many well-known scientific theories as being on rather shaky ground, as they do not make a huge number of predictions that have been confirmed. For example, we might regard as a very shaky theory the theory that life first arose on planet Earth merely because of a lucky chance combination of chemicals. Such a theory does not make a huge number of predictions that have been confirmed, and in fact, does not seem to make any prediction that has been confirmed.

Thursday, March 20, 2014

More Doubts About BICEP2: The Dubious Part of Their Main Graph

At a time when many cosmic inflation theory fans are jumping the gun and popping champagne corks over Monday's BICEP2 study results, calling it an epic breakthrough, I hate to be a killjoy. I like to join a party as much as the next man. The problem is that I keep finding reasons for doubting the claim being made about the study: that it provides evidence for the theory of cosmic inflation. My main reasons were given in this blog post, and some lesser software-related reasons were given in yesterday's blog post. Now I will discuss a very big additional reason for doubting the claims being made about BICEP2, a reason I haven't previously discussed: the study's main graph has a very dubious feature, a curve that is quite misleading.

The BICEP2 paper has two versions of the graph, one that is logarithmic and another that is not. Below is the non-logarithmic version, which makes it easier to see how the observed data fails to match what is predicted by the theory of cosmic inflation:


In this graph the black dots represent the new BICEP2 observations of b-mode polarization. The vertical lines are error bars representing uncertainty in the data. The bottom dashed line is a prediction of b-mode polarization made by one version of the theory of cosmic inflation (a “wishful thinking” version chosen by the BICEP2 team, as I will explain in a minute). The solid line represents contributions to b-mode polarization projected to occur from gravitational lensing. The upper dashed curved line represents the b-mode polarization that could occur from a combination of gravitational lensing and the version of the inflation theory that was chosen by the BICEP2 team to make their data match inflation theory.

Even the untrained eye can spot a big problem with this graph: the observations do not match what is expected. While the first two black dots match the top dashed line (as does the last black dot), several of the other black dots are way above the top dashed line, in particular the seventh and eighth dots. On this basis, we are entitled to say: inflation theory falls way short.

But here is a very important fact about this graph: the bottom curved hill-shaped dashed line (the supposed contributions from cosmic inflation) is not “the” prediction from the theory of cosmic inflation. It is instead the prediction from a particular version of the inflation theory carefully chosen by the BICEP2 team so that their observational results can be matched to inflation theory. The version in question is one that drastically contradicts conclusions made with a 95% confidence level last year by a much larger team of scientists, using the Planck space observatory.

The “prediction from inflation” that appears as the hill-shaped red dashed line on the above graph depends entirely on a particular quantity called the tensor-to-scalar ratio, which cosmologists represent with the letter r. In a scientific paper co-authored last year by more than 200 scientists, the Planck team concluded with a 95% confidence level that this tensor-to-scalar ratio is less than 0.11. But in the graph above the BICEP2 team chose to disregard these findings, and to use on their graph an extreme version of the inflation theory in which the tensor-to-scalar ratio is 0.2 (nearly twice the maximum value set by the larger group of scientists).
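
Since the tensor (inflationary) contribution to the b-mode power spectrum scales, to a first approximation, linearly with r, the effect of this choice is easy to quantify. A minimal sketch (the two r values come from the papers discussed above; the linear scaling is an approximation, not an exact result):

```python
r_planck_limit = 0.11  # Planck team's 95%-confidence upper limit on r
r_bicep2 = 0.20        # value adopted for the dashed line in the BICEP2 graph

# To a first approximation the inflationary "hill" scales linearly with r,
# so redrawing it at the Planck limit shrinks it to about half its height:
print(r_planck_limit / r_bicep2)      # 0.55 -> the hill drops to ~55% of its height
print(r_bicep2 / r_planck_limit - 1)  # 0.82 -> BICEP2's r is ~82% above the limit
```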

Why would the BICEP2 team have done that? Because it allowed them to produce a graph showing a partial match between their observations and the predictions of a cosmic inflation theory. A triumph of wishful thinking. It's rather like a husband reassuring his wife by showing her a graph in which his projected income rises by 50% for each of the next five years.

But what would the key BICEP2 graph have looked like if they had accepted the limit set by the much larger Planck team? The graph would have looked rather like the graph below, except that the left half of the top red dashed line would have to be dropped way down, and none of the observations would be anywhere near close to matching the predictions from inflation (except for the last one, at a point in the graph where inflation is irrelevant, and all contribution is from gravitational lensing). 
 
BICEP2

The BICEP2 team could have produced a graph like the one above (but with the left half of the top dashed line dropped way down, to equal the green line plus the solid red line). That is exactly what they should have done. They might then have made an announcement like this:

“We have some interesting new observations. But we're sorry to report that, respecting the limits set last year by a much larger team of scientists, our observations provide no evidence to back up the theory of cosmic inflation.”
 
Instead, the BICEP2 team chose to put in a bogus red dashed line in their key graph, representing a farfetched, extreme wishful-thinking version of the cosmic inflation theory, one that relies on a version of inflation with a tensor-to-scalar ratio (r) about twice as high as the maximum allowed value according to the larger Planck team. Rather than candidly showing such a red-dashed line as just one possible version of inflation, they put it on the graph as if it were the only version of inflation.
 
It was a great way to grab press headlines, but not very honest or candid.

When we use predictions of inflation based on the Planck team's estimate of the upper limit of the tensor-to-scalar ratio (at a 95% confidence level), corresponding roughly to the green line in the graph above, we are led to think that the BICEP2 team's observations provide no support for a theory of cosmic inflation.

Postscript: This post relies on the assumption that smaller values of the tensor-to-scalar ratio (r) cause the "hill" of the inflation prediction to drop much lower, a point that is clear from looking at this site.

Wednesday, March 19, 2014

Best Practices Software and Cowboy Coding

Note: I have revised this post to remove its original references to the programming sins of one particular programmer. I have decided to take mercy on this person, and remove all references to his coding sins. 

Let's look at the difference between two very different types of programming: best practices software and cowboy coding.

Best-practices Software

Best-practices software is software developed according to software industry guidelines for quality. Examples of these best practices include the items below (a minimal sketch of items 1 through 5 appears after the list). There is not always time to follow all of these practices, but the overall quality and maintainability of the code depends on how many of these standards are followed:
  1. Each source file contains a comment specifying the type of code in that file.
  2. Each method, subroutine or function contains a comment explaining what is done by that method or subroutine. The only exception to this rule is when the name of the method, subroutine, or function leaves no doubt as to exactly what is being done.
  3. There is a short description of each argument to any method that takes arguments, except in the case when the name of an argument leaves no doubt as to what that argument is.
  4. There are comments explaining the logic in any particularly complicated or hard-to-understand parts of the code.
  5. Variables are given names that help to document what they stand for.
  6. Good coding practices are followed by each developer.
  7. Once the code is finished, it is placed in a version control system. Whenever a source code file is changed, the new version is checked into the version control system, with a comment discussing what changes were made.
  8. The code is developed by a team of developers, who can cross-check each others' work.
  9. Once the code is written, documentation is written explaining how the code works and how it can be modified.
  10. A team of quality assurance experts (known as the QA staff) are finally brought in to rigorously test the code to find any bugs in it.
  11. Once the code has been released, a meticulous record is kept of all changes in the code and all reported bugs, along with which of the bugs were fixed.
  12. Any known defects or limitations of the code are clearly documented.
  13. Each subsequent release of the code is given a new version number, with a description of exactly how the code changed during the latest release.
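
As a minimal sketch of what items 1 through 5 look like in practice (a made-up module, not from any real project):

```python
# stats_utils.py -- small helper functions for summarizing observational data.

def weighted_mean(values, weights):
    """Return the weighted arithmetic mean of the measurements in `values`.

    values  -- a sequence of numeric measurements
    weights -- a sequence of non-negative weights, one per measurement
    """
    if len(values) != len(weights):
        raise ValueError("values and weights must have the same length")
    total_weight = sum(weights)
    if total_weight == 0:
        raise ValueError("the weights must not all be zero")
    # Accumulate the weighted sum explicitly, to keep the logic obvious.
    weighted_sum = sum(v * w for v, w in zip(values, weights))
    return weighted_sum / total_weight
```
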
Practices such as these are followed in the development of mission-critical software, software on which great amounts of money are riding, or software on which lives depend. For example, a company writing software for a nuclear reactor, an expensive space mission, or a jetliner's guidance system would tend to follow most or all of these best practices.

However, there is a totally different way of programming that is often used, a quick-and-dirty way of programming. This way of programming is sometimes called cowboy coding.

Cowboy Coding

Cowboy coding is what happens when a single developer produces some code, typically in a quick-and-dirty manner. The cowboy coder isn't interested in any quality guidelines that will slow him down. He typically grinds out some software without doing much to document it. He may make no use of version control. He may then release his work without having had anyone check it other than himself. Typically the cowboy coder just kind of says, “It seems to work well when I try it – let me know if you find anything wrong with it.” A typical cowboy coder makes little or no attempt to produce written documentation for his software, and may take no care to document different versions or to document exactly which bugs were fixed.
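
For contrast, here is a hypothetical cowboy-coded version of the weighted-mean helper shown earlier: no file header, no explanation of what it does, and names that document nothing:

```python
def wm(a, b):
    s = 0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s / sum(b)
```

On well-formed input both functions compute the same number, but only one of them can be safely read, checked, and maintained by anyone other than its author.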
 



Now cowboy coding certainly has its place. Lots of programs are not mission critical, and need not be developed using best practices. It would be overkill to follow the best practices listed above when creating some little graphic utility for doing something like allowing a user to add text to an image.

However, it must be noted that cowboy coding is a severe danger if it is used for some critical part of a hugely important scientific study. This is because cowboy coding isn't very reliable. Maybe it does the right thing, and maybe it doesn't. It can be hard for anyone to tell except the original cowboy coder, and probably he doesn't even know. This is no exaggeration. Poorly documented software code is very hard to read, even if you are the original developer. Countless cowboy coders simply don't know whether their cowboy-coded projects work correctly. I've cowboy-coded quite a few little projects, and then when I went back to them much later, I could often hardly figure out any more what exactly they were doing.

The Huge Problem of Cowboy Coding in Scientific Studies

There is a very big problem: modern scientific studies often rely on dubious software solutions that have been cowboy-coded. Modern science involves incredibly large amounts of specialized data processing. Scientists cannot buy off-the-shelf software to handle these specialized needs, because each scientific specialty requires its own specific type of software, and the market for such software is so small that few software publishers will cater to it.

What very often happens is that scientists write their own software programs to handle their own specialized needs. Such efforts are often one-man cowboy-coded efforts that do not come anywhere close to meeting the best practices of modern software development. We have many scientists writing amateurish code that any full-time software developer would be ashamed to put his name on. But such code might become a critical linchpin in some scientific study that uses up millions of federal dollars.

We need new standards to minimize this problem. One possibility is to include software professionals as part of the peer review process for scientific studies. There are major scientific studies that are 30% science and 70% data processing. But in the peer review process, only scientists review the study. This makes no sense. Software professionals should be included in the process, to a degree that depends on how much data processing was done by the study.    

Tuesday, March 18, 2014

BICEP2 Study Has Not Confirmed Cosmic Inflation

The BICEP2 study was released yesterday, and found some evidence of something called b-mode polarization in the early universe. Advocates of the theory of cosmic inflation were quick to trumpet these results, with many of them claiming that the study had finally confirmed the theory of cosmic inflation. This theory maintains that the universe underwent a period of exponential inflation during a fraction of its first second.

But there are several reasons why the BICEP2 study does not confirm cosmic inflation or even provide substantial evidence for it.

The first reason is that a single scientific study rarely proves anything. Having followed scientific developments closely for more than four decades, I have lived through many cases of scientific announcements that did not stand the test of time. I remember back around 1980 an announcement in which scientists announced a fate for the universe (collapse) that is the exact opposite of the fate they now predict for it (unending expansion). I also remember the famous “life on Mars” announcement in the 1990s, which did not pan out. At this web site a scientist says that there is only a 50% chance that the results from this BICEP2 study will hold up.

Another reason for doubting this BICEP2 study is that it makes an estimate for an important cosmological ratio called the tensor-to-scalar ratio, and that estimate is about twice the maximum possible value, according to the most definitive source on the topic, the Planck team of scientists (a larger group than the BICEP2 team). Apparently one of these groups of scientists is in serious error on this matter, and there is a 50% chance that it is the team behind yesterday's BICEP2 study. If they are the ones who are wrong, it throws much of their study into doubt.

A third reason why the BICEP2 study does not confirm the theory of cosmic inflation is that BICEP2's results do not match well with the predictions of that theory.

The supporters of the inflation theory are citing the graph below from the BICEP2 study. The black dots are the BICEP2 observations, with the vertical lines being error bars (representing uncertainty in the data). The bottom red dashed line is what we expect from the cosmic inflation theory.


Considering just what is predicted from the theory of cosmic inflation, the results do not match well at all. The little black dots show a rise in the line exactly where the cosmic inflation theory predicts a fall in the line.

To patch up this embarrassing discrepancy, the authors add a “gravitational lensing” factor, which seems like quite the little fudge factor. Gravitational lensing is a very exotic effect that is not an easy thing to predict or nail down with any certainty. Estimating the amount of gravitational lensing that occurred long ago is like estimating the total tonnage of asteroids that have struck in the past billion years – very much a type of estimate that involves a huge amount of uncertainty. The BICEP2 authors seem to have got an estimate for gravitational lensing by making inputs to a three-year-old computer program called LensPix. There are lots of ways to go wrong there, either in the inputs or in the software (and the site for the software says “there are almost certainly bugs” in this software).

What is interesting, however, is that even if you accept as gospel truth this estimate of the amount of gravitational lensing, the data from BICEP2 ends up strongly diverging from the expected results produced from the estimated amount of gravitational lensing and cosmic inflation.

It is very hard to tell how big this discrepancy is from the graph shown above, because it uses two sneaky data presentation techniques to make the discrepancy look much smaller than it is. The techniques are: (1) the graph unnecessarily includes a whole load of irrelevant data in the top half of the graph, causing the scale of the graph to be unnecessarily large; (2) the graph uses a logarithmic scale (a type of scale that often tends to make two data items look closer than they are).

I can use exactly the same techniques to make a graph that makes it look like a dishwasher makes almost the same amount of money as a Wall Street bond trader.
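
Here is a minimal sketch of that trick in Python, using made-up income figures. The same two data series look dramatically different depending only on the y-axis scale:

```python
import matplotlib.pyplot as plt

# Made-up annual incomes, purely for illustration.
years = [2010, 2011, 2012, 2013]
dishwasher = [22_000, 22_500, 23_000, 23_500]
bond_trader = [900_000, 1_100_000, 1_300_000, 1_500_000]

fig, (linear_ax, log_ax) = plt.subplots(1, 2, figsize=(10, 4))

for ax in (linear_ax, log_ax):
    ax.plot(years, dishwasher, label="dishwasher")
    ax.plot(years, bond_trader, label="bond trader")
    ax.legend()

linear_ax.set_title("Linear scale: the gap is obvious")
log_ax.set_yscale("log")  # compresses large values, so the lines look much closer
log_ax.set_title("Log scale: almost neighbors")
plt.show()
```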

But, thankfully, buried within the BICEP2 scientific paper is a nice simple non-logarithmic graph that shows just how great the difference is between the BICEP2 results and the results predicted from the cosmic inflation theory. The graph is below.



In the graph above the black dots are the new BICEP2 observations. The vertical lines are uncertainties in the data. The bottom red dotted line is the prediction from the theory of cosmic inflation. The solid red line is the estimated gravitational lensing factor. The upper red dashed line is the result predicted given a combination of the gravitational lensing factor and the theory of cosmic inflation.

Notice the big difference between the observed results and the expected result. Even if we include this highly uncertain gravitational lensing fudge factor, the predicted results from the cosmic inflation theory do not closely match the observed results. Note that the sixth and seventh black dots are way above the top dashed red line.

Therefore these results are far from being a confirmation of the theory of cosmic inflation. They can't even be called good evidence for cosmic inflation.

I may also note that there are numerous non-inflationary cosmological models that might produce the type of polarization observations that BICEP2 has produced. If there is currently a shortage of such models, it is largely because cosmic inflation speculations have almost monopolized the activities of theoretical cosmologists during recent decades.

The BICEP2 observations can be explained by the decay of exotic particles, or by some noninflationary exotic phase transition. Or it could be that all of the observed effect is produced by gravitational lensing and none of it produced by cosmic inflation. Scientists are already assuming that most of the observed effect is being produced by gravitational lensing; it's a short jump from “most” to “all” (particularly given the many uncertainties involved in estimating the amount of gravitational lensing).

If cosmologists spent as much time producing non-exponential, non-inflationary models of the early universe as they spend producing models that involve inflationary exponential expansion, they would probably find that non-exponential, non-inflationary models are able to explain the observed BICEP2 results just as well, and perhaps even better.

Because the effect observed by the BICEP2 study can be produced by gravitational lensing, and because we will for many decades be highly uncertain about how much gravitational lensing has occurred in the past, it is very doubtful that any study like the BICEP2 will ever be able to provide real evidence for a theory of cosmic inflation. Just as UFO photographs rarely prove anything (because there are so many ways in which lights in the sky can be produced), a study like BICEP2 doesn't prove cosmic inflation (because there are other ways, such as gravitational lensing, that the observed polarization effect can be produced).

The case for the theory of cosmic inflation is much weaker than many think. In a nutshell the standard sales pitch for the theory is that it solves two cosmological problems: one called the flatness problem and the other called the horizon problem. The flatness problem is an apparent case of cosmic fine-tuning, and the horizon problem is an example of cosmic uniformity. The weakness in trying to solve these problems with a theory of cosmic inflation is that we have many other apparent cases of cosmic fine-tuning and many other cases of astonishing cosmic uniformity (including laws of nature and constants that are uniform throughout the universe). Inflation theory claims to solve only one of these many cases of apparent cosmic fine-tuning, and only one of the many cases of cosmic uniformity. That does not put it in a very good position; it is rather like a theory of the origin of species that only explains the origin of lions and tigers without explaining the origin of any other animals. I will explain this point more fully in a later blog post.

What is particularly ironic is that the theory of cosmic inflation claims to help in getting rid of some cosmic fine-tuning, but the theory itself requires abundant fine-tuning of its own to work, as many parameters in the theory have to be adjusted in just the right way to get a universe that starts exponentially inflating and stops inflating in a way that matches observations. 

Postscript: The chart below (in which I have added a green line) shows one way we can explain the BICEP2 observations without requiring any cosmic inflation.  We simply imagine a slightly higher amount of gravitational lensing (shown in the green line). The shape of this line matches the shape of the gravitational lensing estimated by the BICEP2 study (solid red line). Because the green line passes through all of the vertical error bars, it is consistent with the BICEP2 observations. 

BICEP2 graph with an added trend line (green)

Post-postscript: at this link cosmologist Neil Turok says, "I believe that if both Planck and the new results agree, then together they would give substantial evidence against inflation!"

Post-post-postscript: See the post here for a discussion of wishful thinking and cherry picking involved in the main graph shown above.   

Post-post-post-postscript: See this link for a National Geographic story on how the BICEP2 results may be caused by dust, not cosmic inflation. 

Yet another postscript: see this post for a discussion of a talk at Princeton University in which a scientist delivers a devastating blow to the inflated claims of the BICEP2 study. The scientist gives projections of dust and gravitational lensing which show how such common phenomena (not the Big Bang or cosmic inflation) can explain the BICEP2 observations.

Yet another postscript: This article in the scientific journal Nature explains that two recent scientific papers have concluded that there is no significant evidence that the BICEP2 signals come from cosmic inflation or gravitational waves, with dust and gravitational lensing being an equally plausible explanation.

Sunday, March 16, 2014

Teenage World Savior: A Science Fiction Story

All attempts to defeat the hostile extraterrestrial invasion had failed utterly. A meeting of military officers was convened at the house of Jonas MacDonald, a physicist who specialized in high-energy physics. The officers were there to ask the physicist if he knew of any high-tech way that the invading extraterrestrials could be attacked, perhaps with something such as lasers or electromagnetic pulse weapons.

“So far our military efforts have been a complete disaster,” said General Curtis. “After the aliens landed in New Jersey, and wiped out many people, we've hit them with every conventional weapon we had. We've dropped countless bombs. We've strafed them with our jets again and again. We've shelled the hell out of them with our best artillery. But we're getting nowhere. The alien stronghold keeps growing larger and larger.”

Why aren't such attacks working?” asked MacDonald.

They seem to have some kind of strange energy bubble around their landing area,” explained Curtis. “It's some kind of super-strong energy field that is able to vaporize incoming bombs and bullets. Whenever we shoot something at the alien stronghold, our bombs and bullets just kind of melt as soon as they touch the protective energy bubble.”

Have you thought about using nuclear weapons?” asked MacDonald.

No, that's out of the question,” explained General Curtis. “The prevailing winds would cause radioactive fallout to drift on to New York City.”

Do you have a picture of what these extraterrestrials look like?” asked MacDonald.

General Wheeler produced a photograph and put it on the table.

“Let me think,” said MacDonald. “There might be some kind of high-energy proton beam we could use to attack these things.”

MacDonald's 13-year-old son Artie walked into the room. Artie should have been at school, but he had been suspended for starting a big food fight in his high school cafeteria.

“Is that what the aliens look like?” asked Artie. “Cool.”

“This meeting is classified,” said MacDonald. “Artie, clear out of here.”

The men continued to discuss MacDonald's ideas for a high-energy proton beam. Twenty minutes later Artie came back into the room.

“Dad, I know I'm not supposed to be here,” said Artie. “But I've got an idea. I've got an idea about how you might defeat the aliens.”

“Artie, have you lost your senses?” asked MacDonald. “Nobody wants to hear a teenager's ideas on saving the world from an alien invasion.”

“But, Dad, it's a really good idea,” said Artie.

“Let the boy speak,” said General Curtis. “Right now, we're desperate for new ideas.”

“I got the idea from the cafeteria food fight I got suspended for,” said Artie. “We can fight the aliens with food.”

“Very funny,” said MacDonald. “Now go to your room, and don't bother us again.”

“I'm not kidding, Dad,” said Artie. “There's a way to do it. Look at that picture of the alien. He has no real nose, just a kind of slit. So my guess is that these aliens are probably sensitive to particles in the air. If we bombard them with fine particles, it may kill them. The easiest way to bombard them with fine particles is with spices.”

“Spices of what type?” asked General Wheeler.

“Any type of spice that is a very fine powder,” explained Artie. “Cinnamon or curry powder would probably do the job.”

“That's the craziest idea I've ever heard,” said MacDonald. “The aliens are protected by an energy bubble that would make it so that the powder couldn't even fall into the alien stronghold.”

“But it just might work,” said General Wheeler. “Who knows – maybe their protective energy bubble was only designed for things like bombs and bullets. Maybe a fine powder could get through that thing. Let's give it a try.”

So the conventional high-explosive bombs were taken out of a military jet. Two giant vats of cinnamon and curry powder were loaded into the jet. The jet made a bombing run over the alien stronghold, dumping the curry powder and cinnamon onto the strange alien structures.

The protective energy bubble of the aliens had been designed to destroy only incoming objects larger than about a millimeter. The curry powder and cinnamon fell right through the protective bubble.

The aliens breathed in the curry powder and cinnamon, and all died instantly. They came from a dustless planet, and had never evolved any apparatus for protecting their lungs from fine particles.

And so the teenage boy who had started a food fight at his high school cafeteria became known as the unlikely world savior who started a food fight that saved planet Earth.

Friday, March 14, 2014

The Lesson From Arthur C. Clarke's Predictive Errors

The television show Prophets of Science Fiction liked to portray science fiction writers as latter-day visionaries with great predictive powers. One of the writers profiled on this show was the late Arthur C. Clarke. Clarke was both a science fiction writer and a nonfiction writer who wrote about space exploration and the future. I greatly enjoyed his work, particularly when I was a teenager. Clarke first proposed communication satellites, and made some very prescient predictions about that technology.

Clarke's predictions about the immediate effects of space travel varied in accuracy. Clarke predicted that an age of manned space exploration would produce a new Renaissance; by that logic, the 1970's (directly following the 1969 moon landing) should have been a decade of immortal art. Anyone who remembers the music and television shows of the 1970's may chuckle at that concept.

But what about Clarke's record in making predictions about our century -- how accurate was he?

If fiction can be taken as a form of prediction, Clarke's record in regard to predicting our century was not very good. His most famous fictional work (co-authored with director Stanley Kubrick) was the screenplay for 2001: A Space Odyssey. Although it was a great artistic success (and one of my favorite movies), that movie predicted that the year 2001 would see a manned mission to Jupiter, a giant-sized lunar colony housing more than 100 residents, computers that could have conversations with a human and understand our language, and a giant space station with artificial gravity and very roomy interiors. None of those things actually occurred by 2001. It is now 2014, and no one is living on the moon. We haven't even made it to Mars, and probably won't get there for many years. Although there are “chat bot” computer programs that might fool you (for a while) into thinking you're talking with someone, there is no computer that even has the intelligence of a 1-year-old. The only space station is a small station in which a few astronauts live in cramped conditions, without artificial gravity.

 The Roomy Space Station in 2001: A Space Odyssey

But what about Clarke's nonfiction predictions about our century – how well do they hold up? In 1999 Clarke wrote an article for the London Sunday Telegraph called “The Twenty-First Century: A (Very) Brief History.” Below are some of the predictions he made, along with comments about their accuracy.

Clarke predicted that the year 2002 would see “the first commercial device producing clean, safe power by low-temperature nuclear reactions,” causing the inventors of cold fusion to get a Nobel Prize in physics in that year. Serious misfire.

Clarke predicted that the year 2004 would see the first example of a human clone. Misfire.

Clarke predicted that the year 2005 would see the first return of a soil sample from Mars. Misfire.

Clarke predicted that in the year 2006 the last coal mine would be closed. Very serious misfire.

Clarke predicted that in the year 2009 (because of a nuclear accident) all nuclear weapons would be destroyed. Serious misfire.

Clarke predicted that in the year 2010 “quantum generators (tapping space energy)” would be deployed, and that electronic monitoring would all but eliminate professional criminals from society. Both predictions were complete misfires.

Clarke predicted that in the year 2011 a space probe to Jupiter's moon Europa would discover life on that moon. Misfire.

Clarke predicted that in the year 2014 construction of a Hilton Orbiter Hotel would begin. Misfire.

These misfires are not hand-picked from a list of predictions including quite a few successes. As far as I can tell from his 1999 forecast, pretty much nothing that Clarke predicted to happen between the years 2000 and 2014 actually happened (except for the arrival of a space probe to Saturn, which was already due to arrive in the year Clarke predicted).

These predictions were from one of the twentieth century's leading futurists, who had written a widely-read book entitled Profiles of the Future. My purpose here is not to belittle Clarke, whom I regard highly. My purpose is merely to suggest the lesson that no matter how highly regarded a particular futurist may be, you should remember that his predictions are just educated guesses.

So the next time you see Ray Kurzweil predict that highly intelligent computers are just around the corner, take it with a grain of salt.

You should also pay very little attention to the prediction in today's news from the SETI Institute's senior astronomer, Seth Shostak, who predicts that if intelligent life exists in space, we will find it within twenty years. Although there is every reason to suspect that there is a great deal of intelligent life beyond our planet, there is little reason to conclude that, if it exists, we will find it within twenty years. Whatever reasons have prevented us from finding such intelligent life for the past fifty years may well also prevent us from finding it in the next fifty years.

Wednesday, March 12, 2014

Bouncing Black Holes May Cause the Sun to Suddenly Vanish

The sun has been shining for billions of years, and scientists say that in all probability it will continue shining brightly for billions of additional years. We assume there is a 100% probability that the sun will continue to shine throughout our lifetimes. But surprisingly enough, there is a very small chance that the sun will suddenly disappear at any time -- perhaps a thousand years from now, perhaps ten years from now, or perhaps even tomorrow.

The sun might vanish at any time because there is a very small chance that a particular theory I will now describe is true. If this theory is true, the sun might instantly disappear at any time.

The theory in question involves black hole collapses. To explain it, I must first discuss why scientists think black holes are formed. Scientists say that black holes form when very massive stars collapse under their own enormous gravity. A star more than five times as massive as the sun has a tremendous gravity, many times stronger than the gravity of our planet. But such a star emits vast amounts of energy through thermonuclear fusion, and that creates an outward force that balances the inward pull of the star's gravity.

But when the star nears the end of its lifetime and runs out of hydrogen and usable helium to burn as nuclear fuel, there is no longer any outward force to counteract gravity. The star's enormous gravity then crushes its mass in a sudden, mighty collapse. Scientists think this causes a supernova explosion, along with the formation of a black hole. Much of the star's mass is blasted off into space, but the remaining mass collapses into a state of infinite density called a black hole.

What happens to all that matter once the black hole forms? No one knows for sure; it is a subject of speculation, and there are many exotic speculations. One idea, advanced by more than one scientist, is that when black holes are formed, they create a spacetime wormhole. The idea is that the matter lost in a black hole travels through the wormhole and then suddenly appears elsewhere in the universe. Such a sudden appearance has been called a white hole. Of course, this is pure speculation, and there is no evidence for white holes. But let us consider what the consequences might be if white holes were created by the creation of black holes.

If a white hole were to be created, one possibility is that we might suddenly see a gushing of matter coming out from some point in space, perhaps some point in interstellar space. But we've never observed anything like that happening. So let's consider another possibility.

Another possibility is that once a white hole is created from a black hole, the white hole then immediately collapses to become a black hole again. This would make sense from a gravitational standpoint. Imagine if a star of 10 solar masses were to collapse, causing 7 solar masses to collapse into a black hole. That might cause the appearance of a white hole elsewhere in the universe. But an instant after that white hole appeared, you would then have 7 solar masses suddenly existing in some small area. Gravity would then probably cause all that matter to collapse in a process similar to the process that produced the original black hole.
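To get a sense of the scale involved, here is a quick Python calculation of my own (purely illustrative, not taken from any of the speculative papers) using the standard Schwarzschild radius formula r = 2GM/c^2. It shows that those 7 solar masses would have to be crushed into a sphere only about 20 kilometers in radius before forming a black hole.

G = 6.674e-11          # gravitational constant, in m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, in m/s
SOLAR_MASS = 1.989e30  # mass of the sun, in kg

# Schwarzschild radius r = 2GM/c^2 for the 7 solar masses in the example.
mass = 7 * SOLAR_MASS
radius_meters = 2 * G * mass / C ** 2
print(f"{radius_meters / 1000:.1f} km")  # prints about 20.7 km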

We are led, then, to a fascinating possibility – the possibility of “ever-bouncing” black holes. The creation of a black hole might be the beginning of a process that works like this:
  1. A super-massive star collapses to become a black hole.
  2. The black-hole creates a spacetime wormhole, which causes the appearance of a white hole somewhere else in the universe, as the mass from the star collapse reappears elsewhere.
  3. The matter coming from that white hole is so dense and concentrated that it very soon collapses to become another black hole.
  4. That black-hole creates a spacetime wormhole, which causes the appearance of a white hole somewhere else in the universe.
  5. Steps 2 through 4 keep repeating over and over again, ad infinitum.

Theory of ever-bouncing black holes
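To make the five steps concrete, here is a toy Python sketch of the cycle. Every number in it (the fraction of mass surviving the first collapse, the unit cube standing in for "somewhere else in the universe") is invented purely for illustration; this is a cartoon of the idea, not real astrophysics.

import random

def bounce_cycle(star_mass, bounces):
    # Step 1: a super-massive star collapses; much of its mass is blasted
    # off into space, and the rest (assume 70%) forms the first black hole.
    mass = 0.7 * star_mass  # in solar masses
    for bounce in range(1, bounces + 1):
        # Steps 2 and 3: the mass traverses a wormhole and reappears as a
        # white hole at a random point (here, a toy unit cube of space).
        x, y, z = (random.uniform(-1, 1) for _ in range(3))
        # Step 4: the reappearing matter is so dense that it immediately
        # re-collapses into a new black hole (mass assumed conserved).
        print(f"bounce {bounce}: {mass:.1f} solar masses reappear at "
              f"({x:+.2f}, {y:+.2f}, {z:+.2f}) and re-collapse")
    # Step 5: in the theory the loop never ends; the sketch stops after a
    # fixed number of bounces only for the demonstration.

bounce_cycle(star_mass=10.0, bounces=5)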

Because many black holes have been created in the history of the universe, if this “ever-bouncing” black hole theory is true, then white holes could be appearing at various points in the universe millions of times every second.

At this point the reader may well be thinking: well, that's a fascinating idea, but it is no reason for thinking that the sun may suddenly vanish – because the sun is not a supermassive star of the type that becomes a black hole.

It is true that the sun will never become a black hole purely because of its own gravity. But if this wild theory of “ever-bouncing” black holes is correct, then the sun still might be in danger. This is because when a white hole appears from the creation of a black hole, the white hole could randomly appear within the volume of the sun.

If we assume that a white hole appears at a random position in space, it is overwhelmingly likely that the white hole would appear in interstellar space, the space between stars. But there is a very small but nonzero chance that the white hole could appear in the worst possible place – right in the very volume of space that the sun occupies. Who knows, there could be some strange relativistic reason why a white hole is more likely to appear where there is already matter, perhaps something along the lines of matter being attracted to matter.
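We can also put a very rough number on "overwhelmingly likely." The back-of-the-envelope Python sketch below assumes (purely for illustration) that a white hole appears at a uniformly random point inside the Milky Way's disk, using round figures for the disk's size. The chance of a single appearance landing inside the sun's volume then comes out on the order of 1 in 10^34; even at millions of appearances per second, the expected number of direct hits on the sun since the Big Bang would still be far below one.

import math

SUN_RADIUS = 6.96e8                  # radius of the sun, in meters
LIGHT_YEAR = 9.46e15                 # one light-year, in meters
DISK_RADIUS = 50_000 * LIGHT_YEAR    # rough radius of the Milky Way's disk
DISK_THICKNESS = 1_000 * LIGHT_YEAR  # rough thickness of the disk

sun_volume = (4 / 3) * math.pi * SUN_RADIUS ** 3
disk_volume = math.pi * DISK_RADIUS ** 2 * DISK_THICKNESS

# Probability that one random white-hole appearance lands inside the sun.
probability = sun_volume / disk_volume
print(f"{probability:.1e}")  # prints about 2.1e-34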

If such a white hole were to suddenly appear within the volume of the sun, it would be as if the sun were to suddenly acquire a mass many times greater. Most of that mass would be material that could not be used for nuclear fusion. So rather than suddenly becoming much brighter, the sun would suddenly be like a super-massive star at the end of its lifetime, about to collapse into the super-density of a black hole. Shortly thereafter, the sun would presumably collapse to become a black hole. There might or might not be the flash of a supernova explosion. Then the sun would vanish.

Imagine what it would be like for you if this were to happen. You might go to work one day at the office. Then in the middle of the day, people would suddenly start shouting, as they noticed that it was inexplicably dark outside. Some people would say: “Wow, I didn't know there was a total eclipse today.” People would wait for the supposed eclipse to end. But the sunlight would never return.

People would gradually realize that the sun was gone forever. There would then be a desperate struggle, as everyone tried to gather up food, clothing, and generators that might allow them to survive as long as possible in the cold. It would soon become colder than the North Pole. Crops would stop growing. Remnants of the human race would probably be able to survive for a few months longer until the food and fuel ran out. A few lucky ones might even be able to survive for a few years.

Of course, the chance of this happening is extremely remote, but it is interesting to realize that there are theoretical reasons why the sun might suddenly vanish at any time. I don't know what effect such speculation has on you, but I, for one, am going to take serious measures to protect myself from this theoretical cosmic menace.

I am going to go out right now and buy myself a nice pair of wool mittens.