
Our future, our universe, and other weighty topics


Saturday, January 31, 2015

Busted Bandwagon: The Sociology of BICEP2 Groupthink

It's finally official. Today the online version of the journal Nature (an authoritative source for scientists) has an article headlined “Gravitational Waves Discovery Now Officially Dead.”

In March 2014 the BICEP2 study was released, and it claimed to have found evidence for gravitational waves from the beginning of time. It was a result that would have confirmed the theory of cosmic inflation, a theory about what went on during the universe's first second. Almost immediately, cosmologists around the world started crowing about how the discovery of the decade (or even the century) had been made. It was claimed that “smoking gun” evidence for the cosmic inflation theory had been found. For months many scientists and “household name” news sources described the inflated claims of the BICEP2 study as scientific fact.

But now the roof has caved in on this claimed breakthrough. Further analysis by the very large team of Planck scientists has found that the results of the BICEP2 team can plausibly be explained as being the result of ordinary dust and something called gravitational lensing, without imagining that the observations are caused by anything having to do with gravitational waves, the Big Bang, or cosmic inflation.

From the beginning there were lots of reasons for suspecting that the BICEP2 study was on very shaky ground. The day after the study was released, I published a very skeptical blog post entitled “BICEP2 Study Has Not Confirmed Cosmic Inflation,” pointing out some reasons for doubt. I followed this up in the next month with several other very skeptical blog posts on BICEP2. The reasons I discussed were out there in April 2014, but seemed to be ignored by most cosmologists at that time, who got busy writing lots of scientific papers about the implications of the supposedly historic BICEP2 study.

Now some cosmologists will point their fingers at the scientists of the BICEP2 team, and say, “Their error.” But the blame goes surprisingly wide and deep in this matter. We must also attach blame to the many other cosmologists who jumped the gun, and wrote scientific papers based on the BICEP2 claims, treating them as if they were all but proven. You can use this link to find the names of 125 scientific papers that have “BICEP2” in their titles. Many of these papers had titles such as “The Blah Blah Blah Theory in Light of BICEP2” or “Blah Blah Blah After BICEP2.” It seems that very many theoretical physicists took a good deal of time to write up papers discussing the profound implications of the BICEP2 study, the main results of which have now been declared “officially dead.”

How can we explain this embarrassing goof? It couldn't have been because our cosmologists are stupid. After all, they write these papers with lots of very hard math. Could it be this snafu occurred because no one warned them that the BICEP2 findings were preliminary, and needed to be confirmed by the Planck team? No, there were plenty of such warnings, and it was clear from the beginning the BICEP2 claims were in conflict with some of the Planck results.

In order to explain this gigantic goof, one needs to consider sociological factors. The group of scientists with one particular specialty is a small subculture subject to bandwagon effects, peer pressure, groupthink, group norms, group taboos, and other sociological influences. Given a proclamation by enough scientists within a field that a particular result is a “stunning breakthrough,” acceptance of that result becomes a group norm. Similarly, once a particular effect or result has been denounced by enough scientists within a field, then a taboo has been established, and rejection of that result becomes a group norm. Once a group norm or group taboo has been established in such a small subculture, it is rather like a buffalo stampede. Running with the herd is very easy, and running against the herd is very difficult. Every subculture imposes punishing sanctions on those within it who defy the group norms and group taboos. 


Peer pressure

For several months, acceptance of the BICEP2 claims was a group norm within the community of cosmologists, and so cosmologists fell in line and followed the herd. A gigantic bandwagon effect was created. Using a good deal of taxpayer and university money, they wrote up lots of scientific papers that were based on the now-defunct group norm.

What lesson can we learn from this misadventure? One important lesson: make a judgment on the truth of something based on the facts and the evidence, not based merely on whether some consensus has been reached by a small subculture of scientists. That's because such a consensus may be heavily influenced by sociological factors and economic factors within the subculture: herd effects, groupthink, group norms, group taboos, bandwagon effects, and vested interests. Like politicians and judges, scientists like to imagine themselves as impartial judges of truth, judging only on the facts. But scientists are subject to sociological influences just like other people, influences that can decisively affect their pronouncements.

Wednesday, January 28, 2015

The Professor's Fallacious Critique of Cosmic Fine Tuning

A recent article in the Wall Street Journal was entitled “Science Increasingly Making the Case for God.” The article seems to have mixed up some solid points based on modern physics with some dubious arguments based on a misguided idea that the Earth or earthly life is some possibly unique cosmic miracle. The article was rebutted by physicist Lawrence Krauss in this New Yorker article. But in rebutting an opinion piece that seems to have had some logic errors mixed up with a good deal of truth, Krauss has given us a rebuttal that itself is a mixture of some truth and some serious errors of fact and reasoning.

Krauss discusses the origin of life, and he assures us that the building blocks for the first living things are abundant in space. “We have continued to find in space the more sophisticated components associated with the evolution of life on Earth.” This statement includes a link to a news article. But when I follow the link and read the article, I don't find anything that backs up the claim. The linked article refers to the discovery of space chemicals, and says, “Chemicals they found in that cloud include a molecule thought to be a precursor to a key component of DNA and another that may have a role in the formation of the amino acid alanine.” But that's a giant leap away from finding “the more sophisticated components associated with the evolution of life on Earth.” The article merely discusses the discovery of distant precursors of some of the parts of the key molecules of life – rather like finding sand that is a distant precursor of the silicon chips in your computer. In fact, we have not actually found in space "the more sophisticated components associated with the evolution of life on Earth," but only some relatively simple precursors. Krauss is also on very weak ground when he discusses some speculative theory of an MIT scientist, one that has neither experiments nor observations to back it up.

Krauss attempts to rebut arguments that the fundamental constants of the universe are fine-tuned, as if some cosmic designer had chosen them. Krauss says the following:

The constants of the universe indeed allow the existence of life as we know it. However, it is much more likely that life is tuned to the universe rather than the other way around. We survive on Earth in part because Earth’s gravity keeps us from floating off. But the strength of gravity selects a planet like Earth, among the variety of planets, to be habitable for life forms like us.

Here Krauss commits the logical fallacy known as presenting a false dilemma. A false dilemma is when a reasoner speaks as if we must choose between two different things, even though the two things are not mutually exclusive. It's the type of reasoning error committed when someone says something like, “You can either be a true patriot or a Democrat – make up your mind,” without explaining why one can't be both. The false dilemma Krauss presents is the idea that we need to make a choice between the idea that the universe is fine-tuned for life and the idea that life is fine-tuned to the universe’s laws. There is not the slightest reason to make such a choice, as the two ideas are not in any way mutually exclusive. Since it is perfectly possible that we have both a universe that is fine-tuned for life, as well as biological life that is fine-tuned to the laws and realities of the universe, the second of these ideas does nothing at all to undermine the credibility of the first idea.

Krauss then discusses the issue of the fine-tuning of the cosmological constant, something I discuss in this blog post. Scientists say that because of the strange facts of quantum mechanics, the vacuum between stars should be teeming with energy, as what are called virtual particles constantly pop in and out of existence. Quantum field theory allows us to calculate how much energy there should be in the vacuum of space because of these virtual particles. The problem is that when scientists do the calculations, they get a number that is ridiculously wrong. According to this page of a UCLA astronomer, quantum field theory gives a prediction that every cubic centimeter of the vacuum should have an energy density of 10^91 grams. This number is 1 followed by 91 zeroes. That is an amount trillions of times greater than the mass of the entire observable universe, which is estimated to be only about 10^56 grams.
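To see just how extreme the mismatch is, you can run the arithmetic on the figures quoted above. The little calculation below is only a back-of-the-envelope sketch using those rough order-of-magnitude numbers (10^91 grams per cubic centimeter predicted, 10^56 grams for the observable universe); it is not a real quantum field theory computation.

```python
# A back-of-the-envelope check on the figures quoted above. These are
# rough order-of-magnitude inputs, not a quantum field theory calculation.
predicted_vacuum_density = 1e91   # grams per cubic centimeter (naive QFT estimate)
observable_universe_mass = 1e56   # grams (rough estimate for the observable universe)

# How many observable universes' worth of mass the prediction crams
# into a single cubic centimeter of "empty" space:
ratio = predicted_vacuum_density / observable_universe_mass
print(f"one cm^3 of vacuum would outweigh the universe by a factor of about {ratio:.0e}")
```

On these inputs the factor comes out around 10^35 – indeed “trillions of times greater,” and then some.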

Another name for this vacuum energy density is the cosmological constant. We know that this cosmological constant is not the ridiculously high number predicted by physicists, but some very, very low number (although apparently non-zero). Scientists speculate that there may be some “accidental cancellation” of all these strange quantum factors that leaves us with a cosmological constant very close to zero. But for that to happen, there would have to be an astonishingly improbable coincidence – kind of like the coincidence you would have if you added up all the purchases of everyone in China, and subtracted from them all of the earnings of everyone in the United States, and ended up with a number less than 100 dollars. We would not expect such a lucky coincidence to occur in even 1 in a billion trillion quadrillion random universes, as it requires fine-tuning to more than 1 part in 1,000,000,000,000,000,000,000,000,000,000,000,000 (1 part in 10^36).
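The “accidental cancellation” coincidence can be illustrated with a toy simulation. This is just my analogy rendered in code, with made-up numbers (two independent totals on the order of a trillion dollars each), not any actual physics:

```python
import random

# Toy simulation of an "accidental cancellation": how often do two
# independent totals, each on the order of a trillion dollars, happen
# to land within 100 dollars of each other? (Made-up numbers, purely
# to illustrate the analogy in the text.)
random.seed(1)

trials = 100_000
scale = 1e12        # totals are on the order of a trillion dollars
window = 100.0      # the allowed mismatch, in dollars

hits = 0
for _ in range(trials):
    purchases = random.uniform(0, scale)   # stand-in for all purchases in China
    earnings = random.uniform(0, scale)    # stand-in for all earnings in the U.S.
    if abs(purchases - earnings) < window:
        hits += 1

# The chance per trial is roughly 2 * window / scale, about 2e-10, so
# even 100,000 trials will almost certainly record zero hits.
print(f"near-cancellations: {hits} in {trials:,} trials")
```

The point of the sketch: the smaller the allowed mismatch relative to the size of the totals, the more improbable an observed cancellation becomes – and the cosmological constant's cancellation is vastly tighter than this hundred-dollar example.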

Here is how Krauss attempts to explain away this “vacuum miracle” as I have called it. He says this about the cosmological constant:

Is this a clear example of design? Of course not. If it were zero, which would be “natural” from a theoretical perspective, the universe would in fact be more hospitable to life. If the cosmological constant were different, perhaps vastly different kinds of life might have arisen. Moreover, arguing that God exists because many cosmic mysteries remain is intellectually lazy in the extreme.

There are four fallacies or misstatements in this short statement, and let me carefully describe each of them.

First, it is not at all true that a cosmological constant of zero is “ 'natural' from a theoretical perspective.” As many scientists have stated, from the perspective of quantum mechanics, a small or zero cosmological constant is shockingly unnatural and wildly improbable – like the chance that the total salaries of all Americans would accidentally equal, exactly or almost exactly, the total annual purchases of all Chinese people.

Second, let's look at Krauss' claim that the universe “would be more hospitable to life” if the cosmological constant were zero. There is actually no solid basis for this claim. The only scientific paper I can find advancing such a thesis is a very iffy speculative paper that is meekly entitled, “Preliminary Inconclusive Hint of Evidence Against Optimal Fine Tuning of the Cosmological Constant for Maximizing the Fraction of Baryons Becoming Life.” The paper can be read here. The author of this paper (Don N. Page) argues that you might have a “very small increase” in life in the universe if the cosmological constant were zero. But he seems to have no real confidence in his thesis. Referring to other scientists, he says in his paper, “Email comments by Robert Mann, Michael Salem, and Martin Rees have shown me that it is not at all clear that the very small increase in the fraction of baryons that would condense into galaxies if the cosmological constant were zero instead of its tiny observed positive value would also lead to an increase in the fraction of baryons that would go into life.” Page also points out that another scientist suggests the universe would be less hospitable to life if the cosmological constant were lower.

So there is no solid basis at all for Krauss' claim that the universe “would be more hospitable to life” if the cosmological constant were zero – merely a super-iffy unsubstantiated speculation that it might be slightly more hospitable to life, a speculation other scientists dispute. Also, even if you were to get somewhat more life in a universe with a zero cosmological constant, it would still appear to be a case of enormous fine-tuning to get a cosmological constant as low as ours, given the vacuum energy density issue discussed above, and the need for all these quantum contributions to the vacuum to miraculously cancel each other out. One does not disprove a case of fine-tuning by showing that a slightly better result could have been achieved. For example, if I buy you a ticket to a hot Broadway show, and get you a seat in the middle of the second row, that is a type of fine-tuning – and you don't show it isn't fine-tuning by arguing that I could have got you a seat in the middle of the first row.

Third, in the statement above Krauss argues, “If the cosmological constant were different, perhaps vastly different kinds of life might have arisen.” No, that doesn't work to explain away this issue. The issue with the cosmological constant is that if you don't have fine-tuning to more than 1 part in 1,000,000,000,000,000,000,000, then you get no galaxies, no stars, and empty space that is a seething super-dense quantum vacuum with much more mass-energy than the density of solid steel. Under such conditions, no life of any type is possible, no matter how weird it may be.

Fourth, Krauss is using a “straw man” argument when he says that this type of reasoning is “arguing that God exists because many cosmic mysteries remain.” That's not what is going on when people use the cosmological constant (and similar cases of cosmic fine-tuning) to suggest that there is a purpose and plan behind the universe. It is instead a case of reasoning from an extreme case of fine-tuning to the likelihood of a fine-tuner, not a case of arguing from the mere existence of cosmic mysteries. One would have a case of “arguing that God exists because many cosmic mysteries remain” if one used silly reasoning such as “black holes and quasars are mysterious, so God probably exists.” But I am not aware of anyone using such reasoning.

Krauss then wraps up his comments with that old skeptic's slogan that extraordinary claims require extraordinary evidence, which is kind of an all-purpose excuse for not believing in anything you don't want to believe, no matter how much evidence piles up. First of all, it should be noted that “extraordinary claims require extraordinary evidence” is an inappropriate slogan, similar to claims such as “Bald men require bald wives” and “Stupid people require stupid leaders.” Imagine if I claimed that John Riser had levitated twenty feet into the air in the middle of the street. That would be an extraordinary claim, but I would not necessarily need any extraordinary evidence to show its likelihood. I could show its likelihood through ordinary, common types of evidence such as the sworn testimony of 20 reliable impartial witnesses, or live television camera footage taken by two different network television cameramen. A much better slogan is, “Extraordinary claims require good, convincing evidence, whether it be a common type of evidence or an unusual type of evidence.”

I may also note that the evidence for the fine-tuning of fundamental constants is extraordinary, involving the work of many scientists over a period of decades, regarding the fundamental traits of physical reality. How is this not extraordinary? 

To read more about the topic of cosmic fine-tuning, with many good examples, read my blog posts here and here.  The first of these posts explains the color-coded table below, which summarizes lots of requirements for the existence of civilized creatures such as us.

 

Sunday, January 25, 2015

Dining Out in 2150: A Science Fiction Story

Joey looked around the luxurious restaurant. It was the most lavish restaurant he had ever been to. The view from the huge windows was breathtaking. Nearby he could see the steeples of a dozen skyscrapers piercing the blue sky. He watched as air cars drifted slowly by outside the restaurant, located on the 125th floor of a huge tower complex.

Joey picked up his menu pad to select his meal. This was something far better than an ordinary paper menu. It was a digital device allowing him to choose from thousands of potential meals he could eat at the restaurant. Maybe I'll try some Chinese, thought Joey. Using his fingers to swipe the pad, he scanned hundreds of images offering delicious dishes from Shanghai, Beijing, and China City. 

 
Deciding against Chinese, Joey used the menu pad to navigate to the European section of the menu. His eyes scanned through hundreds of small images of food from the great cities of Europe. Stopping at the Italian section, he scanned through fifty different pasta and pizza dishes. Finally he found something that looked interesting: a meter-wide ring pizza, with each slice having different toppings.

Joey pushed the Visualize button on the menu pad. Suddenly he saw before him a rotating 3D holographic representation of the ring pizza. He examined each slice as the image rotated before him. That looks good, thought Joey. He pressed the Order button on the menu pad.

So much better to order food like this, thought Joey. There's no need to give a tip to a waitress. And no chance the waitress will mess up your order by writing down the wrong thing.

Joey knew it wouldn't take long to make the pizza ring. He wouldn't have to wait a long time for human cooks to make his food, because his food would be generated automatically by 3D printers, which would print out the dish layer by layer. There was no chance that some clumsy cook would mess things up.

In the few minutes that Joey had to wait, he looked around the restaurant, to see what other people were eating. One woman was dining on a croissant ball just like the ones that only a few people in Paris could enjoy a hundred years earlier. Hollow and the size of a softball, it was made of the most delicate layers of pastry stuff, which somehow stayed in a ball shape until you started to eat it. Another man was dining on a thirty-layer sandwich, made of thin slices of thirty different types of meat and fish. Another man was gorging on some dish called Noodle Ecstasy, a dish in which every noodle had its own unique flavor. Others were enjoying ice cream objects that could be printed out in any shape you desired. One woman was enjoying an ice cream treat shaped like the Chrysler Building, with each floor represented by a different ice cream flavor.

Then a robot brought Joey's meal. His mouth watered as the robot came near. This is going to be so delicious, he thought. After the robot put the dish on his table, Joey reached out his hand to grab a piece of the delicious pizza. This is going to be the greatest meal ever, he thought.

“SIMULATION OVER,” said the virtual reality goggles Joey was wearing. Sadly, Joey took off the goggles, and looked at the bleak scene around him.

He was in a restaurant, but not like the one he had seen in the virtual reality simulation. It was a dim, run-down, underground restaurant packed tight with customers. Its one good thing was the virtual reality goggles it offered to customers.

There were no more inhabited skyscrapers. Most of the great towers had fallen in the two atomic wars, and the Great Collapse that brought down most of civilization. The remnants of the human race lived mainly underground. Agriculture was almost impossible in a devastated world, but there was one dish that the government was able to supply in abundance.

“C'mon, it's time to eat your regular meal,” said Joey's wife.

Joey looked at the dinner table. Before him was the same meal he had eaten for the last 700 meals: cold green porridge. It looked like oatmeal, but was a sickly shade of green, as if mold was growing on it.

Joey looked around, and saw that all of the customers in the crowded restaurant had only this same meal on their tables.

Thursday, January 22, 2015

Are Our Cosmologists Just "Talking a Good Game"?

The phrase “talking a good game” refers to speaking about something in a way that sounds like you have mastered the topic, even though you may be relatively clueless about it. People often use jargon to help them in “talking a good game.” By using technical phrases and jargon buzzwords, people can make it sound as if they have mastered some topic that they are really hopelessly confused by.

Here is an example of how this may work in the business world.

What Richard says:

Our new project will deliver end-to-end models to enhance global users and create “win/win” partnerships for success. We will utilize resonant experiences to facilitate bleeding-edge content that revitalizes back-end communities. While we formulate revolutionary new business paradigms, we will blaze new trails in viral marketing breakthroughs, while at the same time unleashing transitional efficient experiences. Finally, we will drive migratory technologies to innovate front-end solutions and architect real-time convergence.

What Richard is thinking:

I sure hope they don't figure out how clueless I am about this fancy whatchamacallit project I've been dragged in to work on. All I know is it's some incredibly complicated geek thing involving a bunch of different computer systems. I could ask 500 questions to try to really figure the thing out, but then everyone would know I don't know jack about this kind of stuff. Guess I'll just cross my fingers, and try to BS my way through this.


We can forgive Richard, because after all, modern computer systems are very confusing. But if this type of “talking a good game” can take place for something as simple as a computer system, how much more likely is it that this kind of thing can go on when the subject matter is the entire universe?

At the poorly-named Physics Arxiv blog, there is an article entitled “The Paradoxes That Threaten to Tear Modern Cosmology Apart.” It seems our cosmologists may not have as keen a grasp of the nature of the universe as one would think from hearing their lofty pronouncements.

I was familiar with the “vacuum catastrophe” issue discussed in this article, which is basically the biggest “scandal” of modern cosmology. It turns out that when physicists calculate the amount of energy that should exist in every cubic centimeter of empty space, they get a number a gazillion times higher than the maximum value consistent with observations. It seems that ordinary empty space, according to quantum field theory, should be vastly more packed with energy than the center of the sun – although it actually has no such density. The expected energy density of the vacuum, according to this article, is “10^94 g/cm^3.” That means 10^94 grams per cubic centimeter, which is much denser than the density you would get if you packed the entire observable universe into a little space the size of a sugar cube.

This problem arises because quantum field theory tells us that empty space is teeming with energy caused by the spontaneous appearance of virtual particles. This leads to another problem – the problem of energy conservation and an expanding universe. Scientists say that energy cannot be created (except when produced from the conversion of matter to energy). Scientists say that a basic law of the universe is the law of the conservation of mass-energy. According to this law, considering matter and energy as two forms of a single thing called mass-energy, you cannot create new mass-energy. You can convert matter to energy or energy to matter, but the total amount of mass-energy cannot increase.

But since the time of the Big Bang, the universe has been expanding, which means the amount of space has been constantly increasing. But each second that the universe adds more space, it also adds a lot more energy, because according to quantum mechanics, empty space is teeming with energy. So, apparently, an expanding universe is one that is constantly adding vast amounts of energy to itself.

It's as if every second the expanding universe was pulling more than a billion trillion rabbits out of a hat – because the energy it is adding every second is much more than the mass-energy of a billion trillion rabbits. But how can that be when the law of the conservation of mass-energy says that the total mass-energy of the universe cannot increase?
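The rabbit comparison can be checked with a rough order-of-magnitude estimate. The input values below (the observed dark energy density, the Hubble parameter, the radius of the observable universe, a two-kilogram rabbit) are standard rough figures that I am assuming for illustration; none of them comes from the article itself.

```python
import math

# Order-of-magnitude sketch of the "rabbits per second" comparison.
# All input values are rough textbook figures assumed for illustration.
c = 3.0e8              # speed of light, m/s
H = 2.2e-18            # Hubble parameter, 1/s (~68 km/s/Mpc)
rho_vacuum = 6.0e-27   # observed dark-energy density, kg/m^3
R = 4.4e26             # rough radius of the observable universe, m

volume = (4.0 / 3.0) * math.pi * R**3   # current volume, m^3
dV_dt = 3.0 * H * volume                # rate of volume growth, m^3/s
dE_dt = rho_vacuum * c**2 * dV_dt       # vacuum energy added per second, J/s

rabbit_energy = 2.0 * c**2              # mass-energy of a ~2 kg rabbit, J
rabbits_per_second = dE_dt / rabbit_energy
print(f"~{rabbits_per_second:.0e} rabbit-equivalents of energy per second")
```

On these assumptions the universe adds something like 10^36 to 10^37 rabbit-equivalents of mass-energy every second – comfortably more than a billion trillion (10^21).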

Apparently we have not just the riddle of how the universe's original mass-energy appeared (the unsolved problem of the cause of the Big Bang), but also the riddle of how the universe could be continually adding mass-energy to itself, like some endlessly flowing horn of plenty. It seems like the kind of mystery you might have if, poof, a giant planet suddenly appeared in our solar system, and then kept getting bigger and bigger and bigger, defying our concepts of what should be possible.

The Helix Nebula (Credit: NASA)

Monday, January 19, 2015

Bursts of the Gods?

In 2007 astronomers detected a new class of radiation signal from deep space – what are called fast radio bursts. Fast radio bursts are highly energetic but very short-lived bursts of radio energy, typically lasting less than a hundredth of a second. Fewer than twenty of these bursts have been detected. For years, all of the detections came from a single telescope in Australia, but then the Arecibo Observatory in Puerto Rico also detected such a fast radio burst.

Where are the signals coming from? A recent estimate by scientists is that the signals come from a distance that is "up to 5.5 billion light years from Earth,"  which is pretty vague.

Based on the number of fast radio bursts that have been detected, astronomers have estimated that our planet could be receiving as many as 10,000 of these radio bursts per day. What could be causing the signals? Astronomers don't know. Some astronomers speculate that the fast radio bursts could be caused by various exotic types of stellar events, such as unusual solar flares or two neutron stars colliding with each other.

There is, however, a general problem with such explanations. Most highly energetic freak events imagined as possible sources of the fast radio bursts would probably have produced other types of radiation such as gamma ray radiation, x-rays, or visible light. But no one has detected a flash of any of these types of radiation with a position in space (and time of origin) matching any of the fast radio bursts. To give an analogy, it's kind of as if you felt the ground shaking, and assumed it was something heavy falling to the ground, but you didn't hear any noise at the same time. That would throw doubt on your explanation.

Previous discoveries of the fast radio bursts came from mining old observations. But this year scientists detected a fast radio burst in “real time,” noticing it shortly after the signal arrived. The scientists alerted other observatories around the world, asking them to check the point in the sky where the signal was detected, to look for other types of radiation. The other observatories did that, but basically came up empty. This result is described in this recent paper and this news story that came out today. 

The latest fast radio burst (the time unit is milliseconds)
 

According to scientist Daniele Malesani, “The fact that we did not see light in other wavelengths eliminates a number of astronomical phenomena that are associated with violent events such as gamma-ray bursts from exploding stars and supernovae, which were otherwise candidates for the burst.”

In short, we seem to have no really good astrophysical explanations for the fast radio bursts. Given the fact that short radio bursts have been postulated as one means by which extraterrestrial civilizations could announce their existence, there would seem to be a very real possibility that some or many of these short radio bursts are coming from extraterrestrial civilizations.

The idea of extraterrestrial civilizations communicating by fast radio bursts may conflict with a long-standing notion of SETI, the search for extraterrestrial intelligence. I may call this idea the “Christmas gift” concept. It's the idea that one day we will be lucky enough to get from some incredibly old and advanced extraterrestrial civilization a nice, easy-to-digest radio signal designed to be understood by primitive newbie fledglings such as our species. It will be just as if they sent us a wonderful Christmas gift across the vast interstellar void (some have even imagined such a radio signal containing an “Encyclopedia Galactica” written for beings of our level).



But the idea of extraterrestrial civilizations communicating by fast radio bursts suggests another possibility: that of super-advanced civilizations communicating in ways that can only be intelligible to other super-advanced civilizations, who might have no problem unraveling trillions of bits of information packed into a tiny radio burst lasting only a fraction of a second. We might determine that such signals are likely to be of intelligent origin, but then experience the frustration of having to wait for centuries until we are technologically advanced enough to decipher and unravel such super-condensed information bursts. It will be like getting a Christmas present and being told it's the greatest present ever, but also being told it will take you thirty years before you can figure out how to get the present out of the box.

Friday, January 16, 2015

The Growing Evidence for a Mysterious Global Consciousness Effect

The Global Consciousness Project is a long-running project that started out at Princeton University. The project uses a network of more than 50 continuously running random number generators across the world, and records cases in which the results from such generators deviate from chance during interesting or memorable moments in a year's events. The project has discovered ever-growing evidence that the results from such random number generators (which should be purely random) do tend to deviate from chance at important events during a year.

A random number generator is just a machine or a computer program designed to churn out random numbers continuously, a string of numbers such as 328532503463948235236120575239623663. A separate computer program can analyze such output, and determine how much it is deviating from what is expected by chance. There are various ways to do this statistically, such as counting up the number of 1's in the string of digits, counting up the number of 2's, and so forth. The program can then look for unusual spikes – something such as a case where 210,344 7's were generated during a particular unit of time, but 230,333 8's were generated during that same unit of time (a difference very unlikely to occur by chance). There are statistical algorithms that allow analysts to compute the exact probability of getting a particular random sequence of numbers that differs from chance.
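As a concrete illustration, here is a minimal sketch of the kind of check just described: generate a long stream of random digits, count how often each digit appears, and summarize the deviation from chance with a chi-square statistic. This shows the general idea only; it is not necessarily the project's exact algorithm.

```python
import random
from collections import Counter

# Minimal sketch of a deviation-from-chance check on a digit stream:
# count each digit and compute a chi-square statistic against the
# expected uniform counts. (An illustration of the idea, not
# necessarily the project's actual method.)
random.seed(42)

digits = [random.randint(0, 9) for _ in range(100_000)]
counts = Counter(digits)

expected = len(digits) / 10   # each digit should appear ~10,000 times
chi_square = sum((counts[d] - expected) ** 2 / expected for d in range(10))

# With 9 degrees of freedom, a truly random stream usually lands near 9;
# a value much above ~21.7 would be surprising at the 1% level.
print(f"chi-square statistic: {chi_square:.2f}")
```

A spike like the hypothetical 210,344 sevens versus 230,333 eights mentioned above would send this statistic soaring far beyond anything chance could plausibly produce.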

Such random number generators, algorithms, and programs have been used by the Global Consciousness Project since the 1990's. The project compiles a list of significant news events, and keeps track of a deviation from chance in the output of random number generators operating during that event. A long list of such events can be found here. The list includes 497 events, and in each case there is a probability listed that is the chance probability of getting the particular random number deviation that was recorded. In most cases, this probability is not very low. For example, during the recent Charlie Hebdo incident, the recorded deviation had a probability of .143. During some events, the probability has been much lower, such as a probability of only .03 during the September 11 attacks in 2001.

But the real bottom line number in the Global Consciousness Project is the overall cumulative probability. This is the probability of getting all of the deviations from randomness recorded by the project since 1998, purely by chance. That probability is listed in the graph below, taken directly from the project's web site. The graph lists the overall cumulative probability as 3.343 e-13, which is a probability of .0000000000003343, or about 1 chance in 3 trillion, or 1 chance in 3,000,000,000,000. This is an overwhelmingly significant result. When scientists get a result like that in any other field, they trumpet it as overwhelming proof of the hypothesis they are testing.

Global Consciousness Project
 

As time has passed since 1998, the overall cumulative probability associated with the Global Consciousness Project has grown smaller and smaller, meaning their evidence of a real effect has grown stronger and stronger. At some point early in the project, you might only have been able to say that the chance of getting such results was 1 in a million. Then years later you would have been able to say that the chance of getting such results was only 1 in a billion. Now, after the project has been running for some 16 years, we have reached the point where the bottom line is that the chance of getting the results is 1 in 3 trillion.
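One standard way to see how many individually modest p-values compound into a vanishingly small combined probability is Stouffer's method; the sketch below is purely illustrative (the project uses its own statistical machinery, and the function names here are mine):

```python
import math

def z_from_p(p):
    """One-sided z-score for a p-value, found by bisection on the
    standard normal survival function Q(z) = erfc(z / sqrt(2)) / 2."""
    lo, hi = -10.0, 10.0
    while hi - lo > 1e-10:
        mid = (lo + hi) / 2
        if math.erfc(mid / math.sqrt(2)) / 2 > p:
            lo = mid  # survival function still too large: need a bigger z
        else:
            hi = mid
    return (lo + hi) / 2

def stouffer_combined_p(pvals):
    """Combine one-sided p-values: sum the z-scores, rescale, convert back."""
    z = sum(z_from_p(p) for p in pvals) / math.sqrt(len(pvals))
    return math.erfc(z / math.sqrt(2)) / 2

# 200 events, each with an unimpressive p of 0.3, combine to p < 1e-13
combined = stouffer_combined_p([0.3] * 200)
```

The point is structural: each individual event contributes only weak evidence, but hundreds of small deviations leaning in the same direction compound into odds on the order of 1 in trillions.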

So here is the very important fact about the Global Consciousness Project: it has now accumulated very strong evidence for a mysterious anomalous global consciousness effect (or some similar and equally paranormal effect), and the evidence for such an effect keeps growing stronger with each passing month. The accumulated evidence would be accepted as rock-solid proof if submitted to back up any lesser claim, such as a claim that some medicine has some curative power, or a claim that an accused person committed a particular crime.

Now, I think there is a typical series of events that happens when a skeptical person reads a post like this. The series of events goes like this: (1) the skeptical person reads a post like this one, leaving him unsettled and annoyed by reading something that doesn't fit in with his preconceptions; (2) the skeptical person then reads the wikipedia.org article on the topic, which has a 100% chance of being a completely biased, one-sided criticism of anything relating to the paranormal; (3) the skeptical person then feels much better, thinking he has gotten the “real story” on the topic by reading the wikipedia.org article on it.

But before you do such a thing by reading the wikipedia.org article on the Global Consciousness Project, let me explain why that article is not at all the “real story” on this topic, but instead merely a ridiculous, uninformative example of jaundiced “ax grinding.” The first reason is that the wikipedia.org article does not even mention the “bottom line” of the project – the overall cumulative probability of 1 chance in 3,000,000,000,000 stated in the graph above. The article merely says, “The GCP claims that, as of late 2009, the cumulative result of more than 300 registered events significantly supports their hypothesis.” I can guess what was going on in the minds of the skeptics who edited this article on wikipedia.org. It's as if they were thinking: let's not tell anyone the bottom line result of the project, because then people might be convinced by it.

It is also ridiculous that the wikipedia.org article focuses on criticizing the claim that the data of the Global Consciousness Project from around September 11, 2001 proves something paranormal. Such data is only a drop in the bucket of data that the Global Consciousness Project has accumulated in more than 15 years of operation. While their data from that one day may not prove anything paranormal, their overall results of 15 years of operation do supply very strong evidence of something paranormal, with an overall cumulative probability of about 1 chance in 3,000,000,000,000.

The wikipedia.org article on this topic is mainly just a series of putdowns by hardcore skeptics. The article ends by quoting someone who says “the only conclusion to emerge from the Global Consciousness Project so far is that data without a theory is as meaningless as words without a narrative." But that's an absurd claim. In many or most cases data is, in fact, quite meaningful even when there is no theory to explain it (such as the data that was accumulated on comets and supernova explosions before we had any idea what such things were). For example, if I get a terrible sickness putting me on the brink of death, I may collect data on my failing health, but have no theory to explain such data. But such data is very meaningful indeed, telling me that I need to see a doctor and may need to set my affairs in order in case I die. Even if I never see any doctor to give me a theory as to my symptoms, the data is meaningful, because it has important implications. Similarly, even though we have no good theory to explain the results of the Global Consciousness Project, the data is extremely meaningful, because it has important implications, one of which is that human consciousness may be something much bigger than we think it is (not to mention the implication that current reductionist materialist paradigms are on the wrong track).

Stripping away its vacuous putdowns such as the quotation above, the wikipedia.org article on the Global Consciousness Project provides no substantive analysis or facts that should cause anyone to doubt the importance or reliability of the project's findings.

Tuesday, January 13, 2015

6 Myths About Science, Scientists and Scientific Theory

People often toss around misconceived ideas about science, scientists, and the nature of scientific theories, whenever such an idea may serve whatever point they are trying to make. Below is a look at some of the more common examples of such misconceptions.

Myth #1: Any good scientific theory is falsifiable

This idea was advanced by the philosopher Karl Popper, and has been repeated by many others. The idea is that we can distinguish between a scientific theory and an unscientific theory or idea by asking whether or not the theory can be proven wrong, or falsified. It is argued that whenever you have a good scientific theory, you can always imagine observations that might disprove that theory, or falsify it.

But it is easy to imagine some examples of perfectly good scientific theories that are not falsifiable. The best example is the theory that extraterrestrial life exists. There is no way to falsify such a theory, because we can imagine no series of observations that would prove it wrong.

Imagine what it would take to disprove the existence of extraterrestrial life. You might think that this could be accomplished by making a survey of all planets in the universe. But there is no way that any civilization (even a civilization millions of years more advanced than ours) could make such a survey. Because of the limit of the speed of light, it would take eons to survey the planets in an entire galaxy of billions of stars. Once such a survey had been completed, there would still be billions of other galaxies to check. Surveying them all would take billions of years.

Suppose we imagine some civilization with some warp-drive that allows instantaneous travel. Even with a large fleet of warp-drive starships traveling instantaneously, it would take many millions of years to check all the planets in a universe such as ours with more than 1,000,000,000,000,000,000,000 stars. After such a survey, could you then say that extraterrestrial life doesn't exist? No, because there would always be the possibility that life could have started somewhere during the eons it took to do the survey, on one of the planets that had already been checked.

So while it may be true that most scientific theories are falsifiable, it is not at all true that all scientific theories are falsifiable. The theory that life exists on other planets is a top-notch scientific theory that is not falsifiable.

Myth #2: Scientists are almost all unbiased and impartial judges of truth


I'm sure that many scientists are unbiased and impartial judges of truth, but there are reasons why many scientists fall far short of such a standard. First, let's consider sociological factors. A modern scientist belongs to a relatively small subculture subject to sociological factors such as peer pressure, group taboos, and group norms. When a scientist deviates from those group norms and group taboos, he may be subject to punishing sanctions from his peers, which may include ridicule, non-publication of papers, or denial of promotions. In some sciences we see a strong herd mentality, in which scientists tend to jump on some bandwagon which may or may not make sense to jump on. In some cases there may be a financial reward or incentive to jump on that bandwagon, and a financial or sociological penalty for “running against the herd” and rejecting it. Can we suppose that in such cases our scientists act as unbiased and impartial judges of truth? Often they do not. We should not assume that scientists are more likely to be unbiased and impartial judges of truth than people in most other professions.

Consider also the case of a scientist who specializes in some theoretical area such as inflation theory or string theory or some flavor of quantum gravity. Early in his career, such a scientist is almost “betting the farm” on that theory, by taking years to study its intricacies. Once such an investment is made, that scientist becomes a vested interest. He will flourish if that theory gains support, and may flounder if the theory loses support. Is such a person going to be an objective judge about whether the theory is probably true? No, not any more than someone holding 100,000 shares of a particular company will be an objective analyst of the company's future prospects.

Myth #3: Science is whatever scientists think or assert

A good definition of science is observations and experimental data accumulated through methodical investigation, and theories that have been conclusively proven from such observations and data. Are the assertions of scientists limited to such a thing? Not at all. Scientists are people who have opinions, and those opinions are often group norms enforced by their subculture. Such norms may or may not be anything verified by observations or data.

We must distinguish between two different things: science and the opinions of scientific academia. The relation is illustrated in the Venn diagram below. Since there is too much in science to be understood by the average scientist, there is a red area outside of the purple area.



An example of an item in the blue area but not in the purple or red area is the opinion of many scientists that life arose billions of years ago purely because of chance chemical combinations. Such an opinion does not correspond to an observation or experiment. It's an opinion.

In quite a few cases, writers will misspeak, speaking as if things in the blue area of the diagram are things that are part of science. A writer will maintain that such and such gloomy opinion is science, because most scientists think it. But a scientist's opinions are a mixture of science, his own inclinations, and most likely the ideological norms of his particular subculture. Such norms may or may not correspond to anything that has been actually proven by facts or experiments.

Myth #4: Science is only produced by scientists, or is only what is published in scientific journals

As I stated before, science can be defined as observations and experimental data accumulated through methodical investigation, and theories that have been conclusively proven from such observations and data. Anyone today can methodically collect observations by using a camera and carefully noting facts relevant to the photos. This means that any average Joe can contribute to science, without having a science degree. 90% of the facts accumulated in a biology textbook (such as the basic facts of internal anatomy) were established by people who did not have degrees in science, but who merely made observations and systematically recorded such observations.

Consequently, appalling as such an idea may be to some scientists, anyone who at length methodically investigates a phenomenon with sufficient diligence and honesty produces work that is as much science as some result from some fancy expensive particle accelerator, regardless of whether the phenomenon is considered paranormal. There is no sound basis for the claim that only work published in scientific journals is science. If such a standard were applied, we would have to throw out half of the facts in our science textbooks, which were originally established by researchers long ago who did not publish in scientific journals.

Myth #5: A theory that makes predictions is more worthy of respect

Science writers often claim that when a theory makes predictions, it is more worthy of respect than some other theory that does not make predictions. But this is not correct. The fact that a theory may make predictions does not mean that it is more likely to be correct than some other theory that does not make predictions. 

Consider the following example. A car strikes a pedestrian walking on the outer edge of the road. The first theory is that this was due to pure bad luck, that the driver just failed to notice the person walking on the side of the road. The second theory is that the driver was intentionally trying to kill a random victim. The first theory makes no prediction. But the second theory predicts that the driver will try such a thing again, given his homicidal nature. Does this mean the second theory is more likely to be true? No, it doesn't. In fact, the second theory is less worthy of belief than the first theory, given that careless people are much more common than homicidal people.

Myth #6: If scientists spend lots of time on something, then it's science

Because their small subculture is strongly subject to herd effects and groupthink, scientists may jump on a bandwagon and waste millions of taxpayer dollars and countless man-years pursuing some dubious enthusiasm, the popularity of which may persist for decades. But such activity does not necessarily mean that the underlying theory is actually science. Science is not automatically what scientists have long labored on – it is only what they have proven.

Saturday, January 10, 2015

Drone Ball, a Sport of the Future

As I get set to watch another round of NFL playoffs, I can marvel at how little sports has changed in this country during the past 100 years. American sports has been centered around the same three games: football, baseball, and basketball. What happened to all the futuristic sports we were supposed to have by now? It was supposed to be something a little like the visual below, which imagines a futuristic sport of sky racing.



But in the future, the big three sports leagues may get competition from sports leagues organized around robots. Although there are potentially an infinite number of exotic games one could create using robots and drones, I imagine that robot-based sports will probably be based on human sports. It is easy to imagine one possibility: boxing robots, as already imagined in the film Real Steel.

We can also imagine a new sport in which the players would be robots and drones, rather than humans. The sport would be based on football, and it could be called Drone Ball. Rather than having the sport be an exact clone of football, it would be more interesting if this Drone Ball sport had some new rules that took advantage of the flying abilities of drones. This would open up quite a few new strategic possibilities for game play.

Let us imagine some interesting rules for this sport:
  1. As in football, each team can have no more than 11 players on the field. Six of these players must be ground-based robots incapable of flying. The other five players must be flying drones.
  2. As in football, there is a ball that can be handed off to a running back. A running play by a running back will be considered to be over as soon as the robot ball carrier is touched by a robot of the opposing team (as in touch football). Tackles are eliminated, because of the difficulty of creating robots that can rise up again after being tackled.
  3. A pass by the quarterback may be made to either a robot traveling on the ground, or a drone flying in the air. The forward progress of a drone flying in the air (after catching a thrown ball) will be halted as soon as a drone from the other team touches it. A flying drone may be designed with any hardware that allows it to catch a thrown ball, such as funnel-like attachments, suction tube attachments, or appendages that mimic human hands.
  4. Every drone and robot will contain sensors detecting whether it has been touched by an opposing player, and such sensors will be linked together in a computing system. That computing system will act as the game referee, eliminating the need for human referees. The same computing system will announce any fouls or penalties as soon as they are detected (equivalent to throwing a yellow flag on a football play).
  5. Flying drones may be equipped with rubber deflection projectiles they can fire at flying drones of the other team, to divert them from their selected pass route.  Similarly, wide-receiver drones may fire rubber deflection projectiles at members of the opposite team, to keep them away.
  6. A quarterback robot can have a throwing arm capable of making passes as long as 100 yards, as well as a computer capable of trajectory calculations much better than any human can perform.

One can imagine how many interesting strategic options would be presented in such a game. A “wide receiver” drone designed to catch a thrown ball would not be limited to merely running pass routes on the ground. It could also choose any pass route in the air. The total number of possible pass routes would therefore be many times greater. A spectator watching in the stadium might often see types of pass routes he had never witnessed before, something that virtually never happens today. Such routes might include midair hovering, sudden vertical dives, unexpected vertical ascents, and anything else the drone was physically capable of.

A flying drone on the defending team (the equivalent of a cornerback or safety) would also have some new strategic options. It would not simply be a choice of following a receiver on the other team, or trying to intercept the ball. There would also be the option of whether or not to fire a deflection projectile designed to deflect that wide receiver drone from its current route. If the flying defending drone chose a “zone defense,” it would have to defend not just a two-dimensional region of space like a flat plane, but instead a three-dimensional region of space like a cube. One can only imagine what type of “cubic defenses” the Bill Belichicks of the future might think up.

Spectators of this Drone Ball sport would be required to wear protective helmets such as hard hats, to protect them in the rare case when a mis-programmed drone crashed into the spectator stands.

Wednesday, January 7, 2015

The Laws of Nature Are Mainly Quasi-teleological

Scientists study the laws of nature in great detail, but it is relatively rare for anyone to attempt a qualitative assessment of the laws of nature. We might do such a thing by imagining three different quality categories. If a law of nature seems to serve a good purpose, we might call that law quasi-teleological, a term that means “as if it was intended for a purpose.” If a law of nature seems to serve a bad purpose, we might call it dysteleological, a term that means “as if it was intended for a bad purpose.” If a law of nature seems to serve no purpose either good or bad, we can simply call it neutral.

Let us attempt to judge whether the most important laws of nature fall into any of these three categories. One way to do that would be to make judgments based on random incidents, or incidents chosen to support a particular viewpoint. You would be following that approach if you made statements like this:

I just heard that someone was struck dead by lightning. Damn that awful law of electromagnetism! It's so harmful.

I was hiking yesterday in the mountains, and hurt my leg when I slipped and fell. Damn that stupid law of gravity! It's such a terrible law.

Making assessments of laws of nature based on incidental experiences such as this does not make any sense. What we need is an intelligent general-purpose algorithm for assessing whether a law of nature is quasi-teleological, neutral, or dysteleological. I propose the algorithm shown in the flowchart below.



The algorithm starts by asking: is the law of nature necessary (directly or indirectly) for the existence of living things such as ourselves? If the answer to that question is “Yes,” the law is considered quasi-teleological, because it helps to achieve a good purpose (the purpose of allowing creatures such as us to exist). If the answer is “No,” the algorithm then asks whether the law of nature is mostly harmful in its effects. If the answer to that second question is “Yes,” the law of nature should be considered dysteleological (a law that serves a bad purpose). If the answer is “No,” the law is considered to be neutral (meaning that it neither seems to serve a good purpose, nor seems to serve a bad purpose).
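Since the flowchart reduces to two yes/no questions, the whole algorithm can be written as a short function (a sketch only; the argument names are mine, and answering the two questions for any given law still requires human judgment):

```python
def classify_law(necessary_for_life, mostly_harmful):
    """Classify a law of nature using the flowchart's two questions.

    necessary_for_life: is the law (directly or indirectly) necessary
        for the existence of living things?
    mostly_harmful: are the law's effects mostly harmful?
    """
    if necessary_for_life:
        return "quasi-teleological"
    if mostly_harmful:
        return "dysteleological"
    return "neutral"

# Gravitation: necessary for planets and stars, hence for life
print(classify_law(True, False))  # quasi-teleological
```

Note that the first question takes priority: a law that is necessary for life is classified as quasi-teleological even if it also causes occasional harm, which is exactly the reasoning applied to gravitation below.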

Let's try a simple example, and see whether the algorithm seems to make sense. Consider the case of the law of gravitation, the universal law of nature that there is a force of attraction between all massive bodies, directly proportional to the product of their masses, and inversely proportional to the square of the distance between them. Gravitation is (indirectly) absolutely necessary for the existence of living beings, because if it were not for gravitation we would have neither a planet to live on, nor a sun to produce warmth. So even though occasionally gravity produces deaths from falling, the fact that gravitation is absolutely necessary for planets decisively trumps all other considerations. It is therefore absolutely correct for us to consider gravitation as a quasi-teleological law. It serves the good purpose of allowing the existence of planets for living things to exist on. In this case the algorithm seems to steer us to the right answer.

Let us apply the same algorithm to other major laws of nature. Another major law of nature is Coulomb's law (the basic law of electromagnetism). This is the law that between all electrical charges there is a force of attraction or repulsion, directly proportional to the product of their charges, and inversely proportional to the square of the distance between them. It is true that very rarely this law helps to kill people in lightning strikes, but that fact is absolutely trumped by the fact that living things could not exist for even a minute without Coulomb's law. Electromagnetism is what makes chemistry possible, and without chemistry we would all instantly die. If you were to turn off Coulomb's law, our bodies would quickly disintegrate. So again, using the above algorithm, we must classify Coulomb's law as a quasi-teleological law, as it serves the good purpose of allowing the existence of biological organisms.

The table below shows a list of fundamental laws of nature. Most of the laws have commonly used names, but some very important laws do not have any common name, although they should have one. One of the most important laws is one I have designated below as the Law of the Five Allowed Stable Particles. This is simply the law that rather than producing hundreds or thousands of different types of stable particles from a high-energy particle collision, nature makes sure that only five types of stable particles result. Although such high-energy collisions are rare now, the current arrangement of matter in the universe would be hopelessly different (in a very negative way) if such a law had not applied shortly after the Big Bang, when all the particles in the universe were colliding together at high speeds. You can say the same about the other conservation laws listed below.



All of these laws have one thing in common: for various reasons, all of them are necessary for the existence of life. In the case of the law of the strong nuclear force, the Pauli exclusion principle, and the law of electromagnetism, this is glaringly obvious, as we couldn't exist for even a minute if these laws didn't exist. In the case of the law of gravitation, it's almost as obvious that it is required for life, as gravitation is absolutely necessary for the existence of planets. In the case of the law of the conservation of baryon number, this college physics textbook says, “if it were not for the law of the conservation of baryon number, a proton could decay into a positron and a neutral pion.” If such a decay were possible, there wouldn't be any protons around by now, nor would there be any life.

In the case of the law of the conservation of charge, the law guarantees that electrons are stable particles that cannot decay into neutrinos, and thereby assures that we have a universe with plenty of the electrons needed for atoms and life. In the case of the laws of quantum mechanics, we have laws that restrict the states that electrons can take inside an atom, and thereby prevent electrons from falling into the nucleus of an atom (something they would otherwise have a strong tendency to do because of the very strong electromagnetic attraction between protons and electrons). 

As all of these laws are needed for life, we must characterize all of them as quasi-teleological. But other laws of nature should be classified as neutral, because they do not seem to have any bad effect nor any good effect. 

Although the modern materialist scientist may attempt to banish teleology from nature, such an attempt is not at all supported by his subject matter. The quasi-teleological nature of the main laws of nature presents a huge problem for those who wish to believe in a capricious universe whose characteristics are the result of blind chance. Such people have not only the huge problem of explaining the universe's fine-tuned fundamental constants, but also the problem of explaining the universe's fine-tuned laws.

Sunday, January 4, 2015

Threading the Needle Holes of Cosmic Habitability

In the new book The Improbability Principle, mathematician David J. Hand looks at the issue of apparent cosmic fine-tuning, the many ways in which the universe seems to be tailor-made for the existence of intelligent beings such as us. Hand attempts to explain this away in keeping with his overall thesis that we should not be surprised when incredibly improbable things happen.

Hand looks at the fact that the existence of stars requires an exquisite balance between two of the fundamental forces of the universe, the gravitational force and the electromagnetic force. He suggests that the appearance of such a balance results from the mistake of considering that just one of the two has been fine-tuned to just the right value. When we consider the possibility of both varying, Hand suggests, then the situation is not so amazing. As Hand says on pages 214-215 of his book:

We saw that changing the value of either one of these values would mean that the universe would not be suitable for life. But what if we changed them both? What if we increased the electromagnetic force a little, to match the increase in the gravitational force? Do this approximately, and the equilibrium within stars is maintained, so perhaps planets still form and life evolve. Fine-tuning, yes, but with much, much more scope for a pair of values which will lead to life than if the forces must separately take highly specific values.

Hand's reasoning is incorrect. When we have a case in which two fundamental constants of nature are exquisitely balanced, there is no greater likelihood that both will balance if we allow for the possibility of both of them varying. We can see this clearly by considering the case of the proton charge and the electron charge.

There is an exquisite and unexplained balance between the proton charge and the electron charge, in that all protons have a charge of exactly 1.602176565 × 10⁻¹⁹ coulomb, and all electrons have a charge of exactly -1.602176565 × 10⁻¹⁹ coulomb (which is quite amazing given that each proton has a mass 1836 times greater than the mass of each electron). As the astronomer Greenstein has pointed out, there are reasons why stars and planets would not be able to exist if the absolute values of the proton charge and the electron charge differed by even 1 part in 1,000,000,000,000,000,000. Since electromagnetism is a force more than a trillion trillion trillion times greater than the gravitational force, even a tiny change in either the proton charge or the electron charge would mean that electromagnetic effects acting on a large body would overwhelm gravitational effects, and gravitation would be insufficient to keep stars and planets together.

But suppose we imagine random changes in both the electron charge and the proton charge. Would that increase the probability of the two of them matching in the way that is necessary for stars and planets to hold together? No, it wouldn't. If we imagine both constants randomly changing, it is true that this would open up many new possibilities that might be compatible with the existence of stars and planets, such as one in which the proton charge was 3.378921 × 10⁻¹² coulomb and the electron charge was an exactly opposite value of -3.378921 × 10⁻¹² coulomb. But the overall likelihood of an exact match when both constants vary is not any greater than if one allows only one constant to vary.

Similarly, imagine I am playing a casino "million dollar jackpot" game of chance, and start with one random number between 1 and a million. To win, I have to get from the casino another number that matches the first number. If only the second number varies, the chance of success is 1 in a million. Now the casino employee may tell me: increase your chances by letting both numbers be random. But that's a fallacy – my chances of success will not be increased. The probability of getting a match if you start with one number and then get a random number is 1 in a million. The probability of getting a match if you get two random numbers is one million out of a trillion (because there are one million possible matches of two numbers between 1 and a million, and a trillion different possible combinations of two numbers between 1 and a million). But one million out of a trillion is a probability exactly the same as 1 in a million.
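A quick Monte Carlo simulation confirms the arithmetic (a sketch with an assumed range of 1,000 rather than a million, purely so the estimates converge quickly; the function name is mine):

```python
import random

def match_rate(trials, n, both_vary):
    """Estimate the chance that two numbers drawn from 1..n match.

    both_vary=False: the first number is fixed, only the second is random.
    both_vary=True: both numbers are drawn at random on each trial.
    """
    fixed = random.randrange(1, n + 1)
    hits = 0
    for _ in range(trials):
        a = random.randrange(1, n + 1) if both_vary else fixed
        b = random.randrange(1, n + 1)
        hits += (a == b)
    return hits / trials

n = 1000  # stands in for the casino's one million
rate_one = match_rate(500_000, n, both_vary=False)
rate_both = match_rate(500_000, n, both_vary=True)
# both estimates hover around 1/n = 0.001, as the argument predicts
```

Letting both numbers vary opens up a thousand times more winning pairs, but also a thousand times more possible pairs, so the odds of a match stay fixed at 1/n either way.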

Hand then asks us to consider another possibility – that there may be some hidden reason why a change in one fundamental constant might cause a corresponding change in another very different fundamental constant. This might help to explain the exquisite balances within nature, Hand suggests. But this suggestion is an appeal to an imaginary possibility, and Hand provides no facts to back up such a suggestion. To the best of our knowledge, fundamental constants on which life depends (such as the speed of light, the gravitational constant, Planck's constant, the proton charge, and the electron charge) are entirely independent. There's no reason to think that having one such constant be compatible with life would increase the chance that other such constants would be compatible with life.

Hand tries to pass off his groundless imaginary idea as an example of what he calls the “law of the probability lever.” Similarly, if a husband had failed to save enough money for retirement, and his wife complained, the husband could imagine that fairies will give him a million dollars when he reaches the age of 62, and he might call such a fantasy “the law of the fairy contributions.” Imaginary concepts for which there is no factual basis should not be referred to as laws.

Hand then refers to a scientific paper in which one physicist claimed to show that it's not all that unlikely for stars to exist in random universes. Hand summarizes the paper, by Fred C. Adams, as follows:

Fred C. Adams, of the Michigan Center for Theoretical Physics, investigated varying the gravitational constant, the fine-structure constant, and a constant determining nuclear reaction rates. He found that about a quarter of all possible triples of these three values led to stars which would maintain nuclear fusion – like the stars in our universe. As he said, “[We] conclude that universes with stars are not especially rare (contrary to previous claims).”

The previous claims Adams referred to are the numerous claims made in the scientific literature along the lines that the chance of a random universe allowing stars like ours is incredibly low. There are at least three reasons why such claims were actually correct, and why Adams is wrong on this issue.

The first reason is that to have any stars at all you need a fine-tuning not just of the three constants Adams considered, but of other constants he did not consider. For example, Adams completely fails to consider the very precise match between the proton charge and the electron charge needed for the stability of large bodies like planets and stars (previously discussed), a match that would occur by chance in only about 1 in 1,000,000,000,000 universes in the parameter space he considers.
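To see where a figure like 1 in 1,000,000,000,000 comes from, here is a minimal sketch, assuming (purely for illustration) that the proton charge could vary uniformly over a factor-of-3 range while needing to match the electron charge to 1 part in 10¹²:

```python
# Illustrative assumption: the proton charge magnitude is drawn uniformly
# from the range [e, 3e], where e is the electron charge magnitude.
e = 1.0                      # electron charge magnitude, arbitrary units
range_width = 2.0 * e        # width of the assumed [e, 3e] range
match_window = 2.0e-12 * e   # values within ±1 part in 10^12 of e

# Chance that a uniform draw lands inside the tiny match window
probability = match_window / range_width
print(probability)           # 1e-12, i.e. 1 in a trillion
```

A wider assumed range only makes the probability smaller, so this illustrative figure is on the generous side.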

The second reason is that the real question is not the likelihood of stars of some type, but of yellow stars like the sun, which offer better prospects for the evolution of intelligent life than other types of stars such as red dwarfs or blue giants. Scientists such as Paul Davies have concluded that very small changes in the fundamental constants would preclude the existence of stars like the sun. It's kind of like this: nature must thread one needle hole for there to exist some type of stars, but nature must thread a much tinier needle hole for there to be stars like the sun.

The third reason is that Adams is guilty of a fallacy that we might call the fallacy of the “ant near the needle hole.” Consider an ant that somehow wanders into your sewing kit. If it were smart enough to talk, the ant might look at the eye of a needle in your sewing kit, and say, “Wow, that's a big needle hole!” Such an observation will only be made from a perspective a few millimeters away from the needle hole.




Similarly, Adams has given us graphs in which his “camera” is placed a few millimeters from the needle hole that must be threaded for stars to exist. He has imagined a parameter space in which fundamental constants are merely tripled. But physicists routinely deal with differences of 40 orders of magnitude (a factor of 10⁴⁰), which, for example, is roughly the ratio between the strength of the strong nuclear force and that of the gravitational force. So if we are imagining a parameter space of alternate universes, we must imagine one vastly larger than the relatively microscopic parameter space Adams considered. Rather than imagining only a possible tripling of the fundamental constants Adams considers, we should imagine that any of them could vary by a trillion times or a quadrillion times or a quintillion times.
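That point about perspective can be made numerically. In the sketch below (all values are illustrative assumptions, not figures from Adams's paper), a constant is free to vary over 40 orders of magnitude, sampled log-uniformly, and the life-permitting window is a factor of 3 wide:

```python
import math

ORDERS = 40          # assumed range: the constant may vary over a factor of 10^40
WINDOW_FACTOR = 3.0  # assumed life-permitting window: a factor of 3

# Fraction of the log-uniform parameter space that falls inside the window
frac_per_constant = math.log10(WINDOW_FACTOR) / ORDERS
print(frac_per_constant)   # about 0.012, i.e. roughly 1.2 percent

# With several independent fine-tuned constants, the fractions multiply
for n in (1, 3, 5):
    print(n, frac_per_constant ** n)
```

Even under these generous assumptions, a handful of independent constants already pushes the combined fraction below one in a million; a narrower window or a wider range shrinks it far further.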

Taking that correct perspective, we can see how marvelous it was that nature managed to thread the needle holes necessary for our existence. You can visualize it this way. The parameter space is the vast Sahara desert. The needle holes that nature needed to thread for a habitable universe are scattered at random positions throughout that vast desert. The likelihood of all those needle holes being threaded successfully by chance is therefore vastly smaller than the probability figure reached by Adams.

Thursday, January 1, 2015

Unwelcome Visitors: A Science Fiction Story

It happened with no warning, like a bolt out of the blue. One morning a UFO appeared in the sky over Washington, D.C. It moved to the center of the Ellipse, a huge expanse of grass near the White House. For one day it hovered in the sky at that spot. Then it slowly descended to the ground, and landed in the middle of the Ellipse. At the same time, astronomers noticed that a gigantic spaceship had started to orbit Earth. Pundits speculated that the small UFO had come from the much larger spaceship that orbited the planet.




The US military responded promptly. A ring of awesome firepower assembled around the perimeter of the Ellipse. It included Apache helicopters, M1 Abrams tanks, and companies of infantry armed with laser-guided missiles. The President of the United States walked the short distance from the White House to the Ellipse, to take charge of the situation.

Upon reaching the Ellipse, the President asked a general for a briefing.

“So what are these guys – friends or foes?” asked the President.

“We have no idea,” said General Holsen. “We've received no radio communication whatsoever.”

Finally a lone figure emerged out of the UFO, and stood next to it. It was rather clear that the visitor was expecting someone to walk toward the center of the Ellipse, so that a meeting could take place.

General Holsen suggested that an armed soldier walk up to greet the alien visitor, but the President would have none of it.

“I'm going out there myself,” said the President. “And I'm going alone.” The President knew that if he survived the meeting, he would be hailed for his courage, and it would help him win re-election.

The President walked slowly toward the center of the Ellipse. Next to the spaceship he saw a figure about a meter taller than a man. The creature had a brain far larger than a man's. The extraterrestrial had huge blue eyes, a tiny slit for a nose, and a small mouth without lips.

“We come in peace,” said the alien. “We are here to welcome you into the galactic society of civilizations. Do not worry, our society abolished all traces of war 500,000 years ago, so there is no chance that we will harm you.”

“Welcome to our planet,” said the President. “We greet you as equals.”

“You greet us as equals?” said the alien, suppressing a giggle. “No, you've got it all wrong. You see, our society is eons older and more advanced than yours. We come here to help you, as a human might help a little puppy lost on the street. And we come here to teach you a few great cosmic truths that your tiny little minds can understand. But almost all of what we have learned is far beyond the grasp of your minimal intelligence.”

“So your race is much smarter than ours?” asked the President.

“Yes, we are to you as you are to the bugs at your feet,” said the alien. “But don't worry, we can still teach you some tiny fraction of the mighty truths we have learned over the eons. But only if you give up all of your preconceptions, and sit down before us like a little child on its first day in kindergarten. If you do that, you can learn from us some of the greatest truths of the ages, secrets of time and space that took us many thousands of years to learn.”

“I understand exactly,” said the President. He walked away from the spaceship, and returned to the perimeter of the Ellipse.

“What happened?” asked General Holsen.

“They're invaders!” screamed the President. “Destroy them!”

The full military power of the mighty ring of weapons was unleashed on the UFO at the center of the Ellipse. All the M1 Abrams tanks fired their uranium-tipped shells. All the Apache helicopters launched their Hellfire missiles. The soldiers launched their laser-guided missiles. The UFO was shattered into a thousand smoldering pieces. The damage was so great that biologists who later examined the wreckage were unable to find any trace of biological material that had not been burned to a cinder.

The large spaceship orbiting Earth then departed, never to be seen again.

The President was hailed as a hero, and won re-election in a landslide. The human race stumbled onward in its benighted ignorance.