Friday, January 31, 2014

Nature Seems to Love the Number Three

Scott Funkhouser of the Military College of South Carolina has proposed that the gigantic number 10^122 (a 1 followed by 122 zeroes) has some special significance in our universe. He says that the ratio of the mass of the universe to the mass of the smallest possible quantum of mass is about 6 × 10^121 (which rounds up to about 10^122). He also says that the number of ways the particles of the observable universe could be arranged is about 2.5 × 10^122. He lists three other important cosmic numbers which are very close to 10^122. His thesis received a write-up in the respected science journal Nature.

Funkhouser is following in the footsteps of physicists Paul Dirac and Arthur Eddington, who also detected cosmic large-number coincidences. But the coincidences they studied involved a much smaller large number: 10^40 (a 1 followed by 40 zeroes). One such coincidence is that the simple expression ct/r (where c is the speed of light, t is the age of the universe, and r is what is called the classical electron radius) gives you a number of about 10^40, which is also about the ratio between the strongest of the four fundamental forces of nature and the weakest of those four forces (the ratio between the strong nuclear force and gravitation).
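Here is a quick back-of-the-envelope check of that arithmetic, using rough textbook values for the three quantities (this is my own sketch, not part of Dirac's or Eddington's work):

```python
# Rough check of the Dirac/Eddington coincidence that ct/r is on the order of 10^40.
# The values below are approximate textbook figures.

c = 3.0e8              # speed of light, in meters per second
t = 13.8e9 * 3.156e7   # age of the universe, in seconds (13.8 billion years)
r = 2.82e-15           # classical electron radius, in meters

print(f"ct/r is about {c * t / r:.1e}")   # prints roughly 4.6e+40
```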

I am not sure whether there is anything to these large-number coincidences, but I am sure of one thing: the number that nature really favors is not some huge number like 10^122 or 10^40. The number that nature really favors is a small number.

The number is: three.

We see the number three in many important, fundamental places on the subatomic level and on the cosmic level. Below is a visual that illustrates the point:

the number 3 in nature


Let's look at some of these many ways in which nature favors the number 3 in a deep and fundamental way.

First of all, there are three main types of stable particles: the proton, the neutron, and the electron (the neutron being stable when it is bound inside an atomic nucleus). These are the three building blocks of atoms. All solid matter consists of atoms built entirely from these three particles.
Scientists say that each proton and each neutron is built from smaller particles called quarks. How many quarks are there in a proton? Exactly three. How many quarks are there in a neutron? Exactly three.

Scientists also say that there are three “generations” of quarks. The generations are shown in the three purple columns of the table below. The second and third generations (consisting of the charm, strange, top, and bottom quarks) quickly decay into the first generation, consisting of the up quark and the down quark (the two types of quarks found in protons and neutrons).

The Standard Model of Physics
http://en.wikipedia.org/wiki/Standard_Model

There are also three “generations” of leptons, shown as green columns in the above table. The tau and the muon are very short-lived, leaving the electron as the only stable lepton with significant mass (the neutrinos have virtually no mass).

We also have three types of massive bosons as part of the standard model of physics: the Higgs boson, the W boson, and the Z boson. There are also three types of neutrinos.

On the subatomic level there are three types of charges: positive charges like protons have, negative charges like electrons have, and neutral charges like neutrons have.

Subatomic particles also have an important property called spin. The leptons and quarks all have a spin of ½. The four gauge bosons have a spin of 1. The Higgs boson has a spin of 0. This means there are three different values of spin that these fundamental particles can have.

As shown in the chart, there are three main properties associated with subatomic particles: mass, charge, and spin. Except for the “ghostly” neutrino particles and the energy particles called photons and gluons, each type of particle has a unique combination of mass, charge, and spin.

There are three fundamental subatomic forces: the strong nuclear force that holds an atomic nucleus together, the electromagnetic force that keeps electrons in the atom, and the weak nuclear force that sometimes causes an atomic nucleus to eject a particle. The other fundamental force, gravity, has a negligible effect on the subatomic level.

Physicists say that there are three types of "color charge" involved in the strong nuclear force: what are called red charges, blue charges, and green charges. 

Looking at the visible world, there are three dimensions of space: height, width, and depth. There are three main types of matter: gaseous, liquid, and solid. There are three main types of massive objects: planets, stars, and galaxies.

There are also three main types of galaxies: spiral galaxies, irregular galaxies, and elliptical galaxies. Almost all types of galaxies are one of these three types (with other types such as ring galaxies being very rare).

Besides Newton's three laws of motion, there are also three types of natural laws: the laws of physics, the laws of chemistry, and the laws of biology.

As if this wasn't abundant proof enough that nature seems to like the number three, we have the unexplained fact that the ratio between the electric charge of the electron and the electric charge of the down quark (one of the two types of quarks that make up the proton) is precisely 3.0 (three).

So clearly nature seems to favor the number three. Someone disputing this thesis might point out that the ratio between the proton mass and the electron mass would have been the perfect place to have put the number three – but the ratio between the proton mass and the electron mass is not three, but about 1836.
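For the record, that ratio follows directly from the measured masses of the two particles:

```python
# The proton-to-electron mass ratio, computed from the measured rest masses.
proton_mass = 1.67262177e-27     # kilograms
electron_mass = 9.10938291e-31   # kilograms

print(proton_mass / electron_mass)   # about 1836.15
```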

In response to such an objection I will point out that the number 1836 has in its digits exactly three multiples of the number three: 18, 3, and 6. In fact, when we look at this ratio to seven decimal places, we find that it has six consecutive multiples of the number three, as shown below. 
 

I rest my case. Nature loves the number three.

Postscript: Here is additional evidence that nature loves the number three: an article entitled Physicists Prove Surprising Rule of Threes.

For a web page with lots of visuals that delves into cases of the number three in nature and society, see this very interesting page: www.threesology.org.

Tuesday, January 28, 2014

Dating in the Year 2030: Everything Will Be Different

Let us take a look at how dating may work in the not-too-distant future.

Geological eons ago, when I last participated in the dating scene, trying to find the right person for a relationship was hard, hard work. Circa 1990 a man might look for suitable dating partners through personal ads or dating services, and then contact a woman by mail or phone (this was before email became popular). With luck he might then arrange a blind lunch date with her. Most of the time one or the other would end up disappointed, finding that the other person was in some major way unsuitable. It was just as much hard work back in those days trying to work the bar scene or disco scene. You might have to approach someone you knew nothing about, and strike up a conversation, without any starting point (such as some knowledge about the person's job, likes, or background). It was no surprise that opening lines such as “Do you come here often?” were very popular. I never quite got the hang of those.

I am not familiar with the dating scene of the year 2014, but my guess is that it's not much easier. But around the year 2030, dating may be vastly easier.

This may be because of two different technologies: augmented reality technology and facial recognition technology. In the year 2030 the average man “on the prowl” may either wear augmented reality glasses or wear augmented reality contact lenses. When such a man looks through his glasses, he will see all kinds of popup text that identifies or describes the things that he sees.

Such a man will probably also have access to a very sophisticated facial recognition software service, capable of recognizing individuals from their facial features. The man might have to pay for such a service, or he might possibly get it for free (at the price of having to put up with many pop-up ads that appear on his augmented reality glasses).

So when the man goes looking for someone to date, he will merely have to put on his augmented reality glasses, and watch people walk by. The glasses will identify people, using the facial recognition technology. The glasses will also give some information that will make it easy for the man to start a conversation.

Below is a depiction of how things might look for a person wearing such glasses. The man might see Sarah Davis, and have her identified by his glasses. He might then say, “Hey, Sarah. How's that gene designer job going?” No need to use a cheesy all-purpose pickup line. 
 

How augmented reality glasses might look to a wearer

The software service may also provide a desirability rating from one star to four stars. The service might also estimate what the man's chances are of establishing a relationship with the identified individual. This might be calculated based partially on the identified individual's education, physical attractiveness, relationship status, and estimated income level, and also the education, income level, and physical attractiveness of the person wearing the glasses.
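Purely as an illustration of the kind of scoring such a service might perform, here is a hypothetical sketch; every field name and weight in it is invented for the example, since no such service exists yet:

```python
# A purely hypothetical sketch of how a "relationship chance" figure might be
# computed. All field names and weights are invented for illustration only.

def relationship_chance(viewer, target):
    """Return a rough 0-100 percentage based on how well two profiles match."""
    # Compare the two people on a few 0-to-10 scales.
    gap = (abs(viewer["education"] - target["education"])
           + abs(viewer["income"] - target["income"])
           + abs(viewer["attractiveness"] - target["attractiveness"]))
    score = 90 - 10 * gap              # start high and penalize large mismatches
    if target.get("in_relationship"):
        score -= 40                    # an already-attached prospect drops sharply
    return max(0, min(100, score))

viewer = {"education": 8, "income": 7, "attractiveness": 6}
sarah = {"education": 9, "income": 8, "attractiveness": 9, "in_relationship": False}
print(relationship_chance(viewer, sarah))   # prints 40
```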

If you were a young, well-educated, highly paid, good looking person, you might look through such glasses and typically see a “relationship chance” figure such as 70% or 80%. But if you were an ugly high-school dropout working as a street sweeper, you might look through such glasses and never see a “relationship chance” higher than 5%.

Now imagine you started up a conversation with attractive young prospect Sarah Davis. Sarah might start talking about her job as a gene designer. She might start using technical terms such as DNA. How would you make yourself seem like someone who knows something about these complicated topics? If you ask Sarah “What is DNA?” she'll think you're an airhead. But you would have nothing to worry about. You would rely on your augmented reality glasses. You would switch on a “listen” mode. In this mode, the glasses listen to nearby words, and then give you a popup that gives you a little information about the words it recognizes. So when Sarah refers to DNA, you see on your augmented reality glasses a little popup telling you what DNA is. You can then say something that makes it sound as if you know something about what Sarah is talking about. Sarah is then impressed by your knowledge. 

 

But suppose you start dating lovely young Sarah, and then she dumps you, rejecting you in favor of some more suitable partner. No technology we can imagine will immunize you from this age-old danger. But technology will give you ways of softening the blow.

The first way of softening the blow might be for you to download a photo of Sarah into your favorite virtual reality software. The software will then adapt so that instead of having virtual reality dates in glamorous locations with a virtual Marilyn Monroe, you will be able to have glamorous virtual reality dates with someone who looks just like Sarah.

The second way of softening the blow might be for you to download a photo of Sarah into the software of your home 3D printer. You might then be able to print out a soft, squeezable life-sized likeness of Sarah you can cavort with – or perhaps a 3D robot that looks just like Sarah.

The third way of softening the blow might be to play with the interface of your augmented reality glasses. You could instruct the software of the glasses to superimpose the face of Sarah over the face of your next girlfriend. You might then start dating some average-looking woman who is not as lovely as Sarah. But due to the wonders of augmented reality technology, whenever you looked at such a woman through your augmented reality glasses, she would look just like Sarah.

Monday, January 27, 2014

When Particles Collide, Nature Acts Programmatically, As If It Had Ideas

Let us take a very close look at some important laws of nature. When you go to the trouble of looking very closely at these laws, you may end up being stunned by their seemingly programmatic aspects, and you may end up getting some insight into just how apparently methodical and conceptual the laws of nature are.

The laws I refer to are some laws that are followed when subatomic particles collide at high speed. In recent years scientists at the Large Hadron Collider and other particle accelerators have been busy smashing together particles at very high speeds. The Large Hadron Collider is the world's largest particle accelerator, and consists of a huge underground ring some 17 miles in circumference.

The Large Hadron Collider accelerates protons (tiny subatomic particles) to near the speed of light. The scientists accelerate two globs of protons to a speed of about 186,000 miles per second (more than 99.999 percent of the speed of light), one glob going in one direction around the huge ring, and another glob going in the other direction. The scientists then get some of these protons to smash into each other.

A result of such a collision (from a site describing a different particle accelerator) is depicted below. The caption of this image stated: “A collision of gold nuclei in the STAR experiment at RHIC creates a fireball of pure energy from which thousands of new particles are born.” 
 
particle collision

Such a high-speed collision of protons or nuclei can produce more than 100 “daughter particles” that result from the collision. The daughter particles are rather like the pieces of glass you might get if you and your friend hurled two glass balls at each other, and the balls collided (please don't ever try this). Here is a more schematic depiction of one of the simplest particle collisions (others are much more complicated):

particle collision


The results of a collision like that shown in the first image may seem like a random mess, but nature actually follows quite a few laws when such collisions occur. The first law I will discuss is one that has no official name, even though it should. This is the law we might call the Law of the Five Allowed Stable Particles. This is simply the law that the stable, long-lived output particles created from any very high-speed subatomic particle collision are always particles on the following short list:

Proton: rest mass 1.67262177 × 10^-27 kg; electric charge +1.602176565 × 10^-19 coulomb
Neutron: rest mass 1.674927351 × 10^-27 kg; electric charge 0
Electron: rest mass 9.10938291 × 10^-31 kg; electric charge -1.602176565 × 10^-19 coulomb
Photon: rest mass 0; electric charge 0
Neutrino: rest mass many times smaller than the electron mass; electric charge 0

I am not mentioning antiparticles on this list, because such particles are destroyed as soon as they come in contact with regular particles, so they end up having a lifetime of no more than a few seconds.

This Law of the Five Allowed Stable Particles is not at all a trivial law, and raises the serious question: how is it that nature favors only these five particles? Why is it that high-speed subatomic particle collisions don't produce stable particles with thousands of different random masses and thousands of different random electric charges? It is as if nature has inherent within it the idea of a proton, the idea of an electron, the idea of a neutron, the idea of a photon, and the idea of a neutrino.

When particles collide at high speeds, nature also follows what are called conservation laws. Below is a description of the conservation laws that are followed in high-speed subatomic particle collisions, with an allowed example and a prohibited example for each. In each example, the particles listed before the → symbol are the inputs of the collision (or decay), and the particles after the → symbol are the outputs; the → symbol basically means “the collision creates this.” Practically speaking, antiparticles such as the positron and the antiproton are unstable, because they quickly combine with regular particles and are converted to energy, so I will count those as unstable particles.

Law of the conservation of mass-energy
Description: The mass-energy of the outputs of a particle collision cannot exceed the mass-energy of the inputs of the collision.
Allowed example: proton + proton → proton + neutron + positron + electron neutrino
Prohibited example: electron + electron → antiproton + electron (prohibited because an antiproton is almost a thousand times more massive than two electrons)

Law of the conservation of charge
Description: The ratio between the proton-like charges (called “positive”) and the electron-like charges (called “negative”) in the outputs of a particle collision must be the same as the ratio was in the inputs of the collision.
Allowed example: proton + proton → proton + neutron + positron + electron neutrino (two proton-like charges in the input, two proton-like charges in the output); at higher collision energies: proton + proton → proton + proton + proton + antiproton
Prohibited example: proton + proton → proton + neutron + electron + electron neutrino (two proton-like charges in the input, only one proton-like charge in the output)

Law of the conservation of baryon number
Description: Using the term “total baryon number” to mean the total of the protons and neutrons (minus the total of the antiprotons and antineutrons), the total baryon number of the stable outputs of a particle collision must be the same as this total was in the inputs of the collision.
Allowed example: proton + proton → proton + neutron + positron + electron neutrino (total baryon number of 2 in the inputs, total baryon number of 2 in the outputs)
Prohibited example: proton + neutron → proton + muon + antimuon (total baryon number of 2 in the inputs, total baryon number of 1 in the outputs)

Law of the conservation of lepton number (the electron-number “flavor”; there are also “flavors” of the law for muons and tau particles)
Description: Counting electrons and electron neutrinos as having an electron number of 1, and counting positrons and anti-neutrinos (including the anti-electron neutrino) as having an electron number of -1, the sum of the electron numbers in the outputs of a particle collision must be the same as this sum was in the inputs of the collision.
Allowed example: neutron → proton + electron + anti-electron neutrino (total electron number of the inputs is 0, net electron number of the outputs is 0)
Prohibited example: neutron → proton + electron (total electron number of the inputs is 0, but net electron number of the outputs is 1)

Each of the allowed examples given here is only one of the many possible output sets permitted by the laws above. When you have very high-energy particles colliding, many output particles can result (and nature's burden in following all these laws becomes higher).
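To make the bookkeeping concrete, here is a minimal sketch (my own, not part of any physics reference) showing how three of these counting laws can be checked mechanically for a proposed reaction. Mass-energy is left out of the check because it depends on the collision energy, not just on which particles appear.

```python
# A minimal sketch that mechanically checks net charge, baryon number, and
# electron-lepton number for a proposed particle reaction.
# Particle properties are the standard textbook assignments.

PARTICLES = {
    # name: (electric charge in units of e, baryon number, electron number)
    "proton":                (+1,  1,  0),
    "neutron":               ( 0,  1,  0),
    "antiproton":            (-1, -1,  0),
    "electron":              (-1,  0,  1),
    "positron":              (+1,  0, -1),
    "electron neutrino":     ( 0,  0,  1),
    "electron antineutrino": ( 0,  0, -1),
    "muon":                  (-1,  0,  0),   # carries muon number, not electron number
    "antimuon":              (+1,  0,  0),
    "photon":                ( 0,  0,  0),
}

def totals(particles):
    """Sum charge, baryon number, and electron number over a list of particle names."""
    charge = sum(PARTICLES[p][0] for p in particles)
    baryon = sum(PARTICLES[p][1] for p in particles)
    lepton = sum(PARTICLES[p][2] for p in particles)
    return charge, baryon, lepton

def allowed(inputs, outputs):
    """True only if net charge, baryon number, and electron number all balance."""
    return totals(inputs) == totals(outputs)

# The allowed example from above: proton + proton -> proton + neutron + positron + electron neutrino
print(allowed(["proton", "proton"],
              ["proton", "neutron", "positron", "electron neutrino"]))   # True

# The prohibited example: proton + proton -> proton + neutron + electron + electron neutrino
print(allowed(["proton", "proton"],
              ["proton", "neutron", "electron", "electron neutrino"]))   # False
```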

Now let us consider a very interesting question: does nature require something special to fulfill these laws – perhaps something like ideas or computation or figure-juggling or rule retrieval?

In the case of the first of these laws, the law of the conservation of mass-energy, it does not seem that nature has to have anything special to fulfill that law. The law basically amounts to just saying that substance can't be magically multiplied, or saying that mass-energy can't be created from nothing.

But in the case of the law of the conservation of charge, we have a very different situation. To fulfill this law, it would seem that nature requires “something extra.”

First, it must be stated that what is called the law of the conservation of charge has a very poor name, very apt to give you the wrong idea. It is not at all a law that prohibits creating additional electric charges. In fact, when two protons collide together at very high speeds at the Large Hadron Collider, we can see more than 70 charged particles arise from a collision of only two charged particles (two protons). So it is very misleading to state the law of the conservation of charge as a law that charge cannot be created or destroyed. The law should be called the law of the conservation of net charge. The correct way to state the law is as I have stated it above: the ratio between the proton-like charges (in other words, positive charges) and the electron-like charges (in other words, negative charges) in the outputs of a particle collision must be the same as the ratio was in the inputs of the collision.

This law, then, cannot work by a simple basis of “something can't be created out of nothing.” It requires something much more: apparently that nature have something like a concept of the net charge of the colliding particles, and also that it somehow be able to figure out a set of output particles that will have the same net charge. The difficulty of this trick becomes apparent when you consider that the same balancing act must be done when particles collide at very high speeds, in a collision where there might be more than 70 charged output particles.

I may also note that for nature to enforce the law of the conservation of charge (more properly called the law of the conservation of net charge), it would seem to be a requirement that nature somehow in some sense “know” or have the idea of an abstract concept – the very concept of the net charge of colliding particles. The “net charge” is something like “height/weight ratio” or “body mass index,” an abstract concept that does not directly correspond to a property of any one object. So we can wonder: how is it that blind nature could have a universal law related to such an abstraction?

In the case of the law of the conservation of baryon number, we also have a law that seems to require something extra from nature. It requires apparently that nature have some concept of the total baryon number of the colliding particles, and also that it somehow be able to figure out a set of output particles that will have the same total baryon number. Again we have a case where nature seems to know an abstract idea (the idea of total baryon number). But here the idea is even more abstract than in the previous case, as it involves the quite abstract notion of the total of the protons and neutrons (minus the total of the antiprotons and antineutrons). This idea is far beyond merely a physical property of some particular particle, so one might be rather aghast that nature seems to in some sense understand this idea and enforce a universal law centered around it.

The same type of comments can be made about the law of the conservation of lepton number. Here we have a law of nature centered around a concept that is even more abstract than the previous two concepts: the notion of electron number, which involves regarding one set of particle types (including both charged and neutral particles) as positive, and another set of particle types (including both charged and neutral particles) as negative. Here is a notion so abstract that a very small child could probably never even hold it in his or her mind, but somehow nature not only manages to hold the notion but enforce a law involving it whenever two particles collide at high speeds.

The examples of particle collisions given in the table above are simple, but when particles collide at very high speeds, the outputs are sometimes much more complicated. There can be more than 50 particles resulting from a high-speed proton collision at the Large Hadron Collider. In such a case nature has to instantaneously apply at least five laws, producing a solution set that satisfies many different constraints.

The nature of our current universe has depended critically on the laws described above throughout its history. Even though these types of high-speed relativistic particle collisions are rare on planet Earth (outside of particle accelerators used by scientists), these types of particle collisions take place constantly inside the sun. If the laws above were not followed, the sun would not be able to consistently produce radiation in the way needed for the evolution of life. In addition, in the time immediately after the Big Bang, the universe was one big particle collider, with all the particles smashing into each other at very high speeds. If the laws listed above hadn't been followed, we wouldn't have our type of orderly universe suitable for life.

By now I have described in some detail the behavior of nature when subatomic particles collide at high speeds. What words best describe such behavior? I could use the words “fixed” and “regular,” but those words don't go far enough in describing the behavior I have described.

The best words I can use to describe this behavior of nature when subatomic particles collide at very high speeds are these words: programmatic and conceptual.

The word programmatic is defined by the Merriam Webster online dictionary in this way: “Of, relating to, resembling, or having a program.” This word is very appropriate to describe the behavior of nature that I have described. It is just as if nature had a program designed to ensure that the balance of positive and negative charges does not change, that the number of protons plus the number of neutrons does not change, and that the overall lepton number does not change.

The word conceptual is defined by the Merriam Webster online dictionary in this way: “Based on or relating to ideas or concepts.” This word is very appropriate to describe the behavior of nature that I have described. We see in high-speed subatomic particle collisions that nature acts with great uniformity to make sure that the final stable output particles are one of the five types of particles in the list above (protons, neutrons, photons, electrons, and neutrinos). It is just as if nature had a clear idea of each of these things: the idea of a proton, the idea of a neutron, the idea of a photon, the idea of an electron, and the idea of a neutrino. As nature has a law that conserves net charge, we must also assume that nature has something like the idea of net charge. As nature has a law that conserves baryon number, we must also assume that nature has something like the idea of baryon number. As nature has a law that conserves lepton number, we must also assume that nature has something like the idea of lepton number.

This does not necessarily imply that nature is conscious. Something can have ideas without being conscious. The US Constitution is not conscious, but it has the idea of the presidency and the idea of Congress.

So given very important and fundamental behavior in nature that is both highly conceptual and highly programmatic, what broader conclusion do we need to draw? It seems that we need to draw the conclusion that nature has programming. We are not forced to the conclusion that nature is conscious, because an unconscious software program is both conceptual and programmatic. But we do at least need to assume that nature has something like programming, something like software.

Once we make the leap to this concept, we have an idea that ends up being very seminal in many ways, leading to some exciting new thinking about our universe. Keep reading this blog to get a taste of some of this thinking. 

Saturday, January 25, 2014

The 7 Likeliest Disasters of This Century

Let us now look at which calamities and misfortunes we are most likely to see in this century. I will create a list that steers away from fascinating apocalyptic disasters which are very unlikely to occur, and focuses on disasters that have the highest chance of occurring before the year 2100. I do not mean to argue that any of the things on my list are necessarily likely to occur in this century. I simply will look at which gigantic calamities are the most likely to occur in our century, regardless of whether the chance of their occurrence is greater than 50%.

First, I may mention a few fascinating possibilities which make great topics for “disaster porn” (à la the Sci-Fi Channel), but which are not at all likely to occur in our century:
  1. An asteroid or comet collision could one day wipe out mankind, but the chance of a very bad asteroid or comet collision occurring in this century is very low. One need merely consider that we haven't had such a collision during the roughly 5,000 years of recorded history, so a simple extrapolation puts the chance of one occurring in the remaining 86 years of this century at less than 2%.
  2. There is a gigantic volcanic time-bomb sitting underneath Yellowstone National Park, and scientists think the last time it erupted it pretty much buried North America in ash and dust. It could do the same thing again, as mentioned here, but thankfully the previous eruption was more than 600,000 years ago. So the chance of such a disaster occurring in our century is much less than 1%.
  3. The widely discussed possibility of a “gray goo” disaster caused by an uncontrollable reproduction of nanotechnology is probably something that has a low chance of happening in this century. Nobel Prize winner Richard Smalley argued that there are technical reasons why we don't have to worry about this possibility.
  4. There is very little chance that before the year 2100 robots will take over the world or make us slaves or get rid of us. While there will be fantastic leaps in robot hardware, you need equally great leaps in software before robots are a huge risk to mankind; and software capability is only improving gradually rather than exponentially.
But what calamities have a relatively high chance of occurring in this century? Below is my list.

1. A City-Busting Hurricane

Global warming is slowly increasing the temperature of seawater, and it is warm seawater that leads to hurricanes. The National Oceanic and Atmospheric Administration says, “It is likely that greenhouse warming will cause hurricanes in the coming century to be more intense globally and have higher rainfall rates than present-day hurricanes.”

Unfortunately, many of the world's largest cities are in danger from this possibility. Hurricanes cause storm surges that temporarily and locally raise sea levels. When you combine such surges with sea levels that may already be elevated because of global warming, the chance of devastation is very high. The devastation in New York caused by Hurricane Sandy and the similar devastation in New Orleans may be just a foretaste of far worse disasters in our century. Before 2100 we will probably see at least one major city more or less wiped out by a hurricane.

city submerged


2. A City-Busting Nuclear Explosion

Despite the progress that has been made in reducing Cold War worries about nuclear war, there is still a very large chance that at least one city will be destroyed by a nuclear bomb before the year 2100. Even though the United States and Russia have reduced their nuclear arsenals, they each still have about 2000 active nuclear weapons. India and Pakistan each have about 100 nuclear weapons. The continued risk of nuclear war was highlighted by a recent announcement by the Bulletin of the Atomic Scientists that it was keeping its Doomsday Clock (a warning of global risk) set at five minutes before midnight. Many of the US and Russian nuclear weapon systems are very old, built using technology that does not meet modern standards of reliability. The risk of an accidental launch or detonation is significant.

Besides the danger of a conventional nuclear war, there is the ongoing risk of nuclear terrorism. Outside of the world's assembled nuclear bombs, there is enough highly enriched uranium and plutonium to make 40,000 additional nuclear weapons. Much of this material is kept in places with less than ideal security. If terrorists were to steal only a softball-sized amount of this highly enriched uranium, it would then be fairly easy for them to construct a homemade nuclear bomb. We are very lucky that no city has yet been lost in a mushroom cloud.

3. A Mega-drought

Currently California is facing the worst drought in its 163-year history. A few years ago, Texas faced a similar drought. These are probably just the tremors that warn of earthquakes to come. This study projects future droughts using the Palmer Drought Severity Index, in which more severe droughts have a more negative number. The worst drought ever recorded had a Palmer Drought Severity Index of -3 or -4. But the study projects that by the year 2100 some parts of the United States will see droughts with a Palmer Drought Severity Index of -8 or -10, more than twice as bad as the worst drought ever recorded. The same study forecasts that by the year 2100 some Mediterranean areas will face droughts with a Palmer Drought Severity Index in the -15 or -20 range, four or five times worse than the worst ever recorded.

4. An Energy Crunch

By an energy crunch I mean a gap between the amount of energy the world has and the amount of energy the world needs for its current economic activities. There is a huge risk of a gigantic energy crunch in this century, because we have limited supplies of fossil fuels. It is predicted that global oil production will peak within a few decades. It is also predicted that global coal production will peak in this century, probably a few decades later than oil production will peak. The result may be a catastrophe even worse than global warming, one that leaves millions shivering or sweating in a world where electrical power is unreliable. In this study a Caltech scientist cites a forecast that 90% of global fossil fuels will be exhausted by 2067.

5. Mass Famine

There are four reasons why a mass famine is a large possibility before the year 2100:
  1. The large possibility of terrible droughts, discussed above.
  2. The global stressing of available water supplies, which will make it harder to irrigate crops.
  3. The large possibility that global oil supplies will plunge long before the year 2100, because of the depletion of this limited resource. Oil is a major factor in agricultural production, being used to transport food and make fertilizers.
  4. The fact that global population will have greatly increased, rising to as high as ten billion.
Add up all these things and you have a very high risk of a mass famine that may kill millions.

6. Global Depression or Economic Collapse

The global economy is something that has grown erratically, in fits and starts, with very little prior planning or design. No one ever designed the global economy to last a thousand years, and there is no reason to be very confident that it can survive something such as a gigantic energy crunch, particularly if such a disaster is combined with some of the other disasters discussed here. In particular, the global economy has a strong tendency to rely on debt, and once things start going wrong, many people lose the ability to repay debts. We saw a tremendous ripple effect caused when many people couldn't repay their debts during the 2008 economic crisis. Something similar could happen to a degree that could be ten times worse, and it could bring down the whole economy like a collapsing house of cards. The result would at best be a global depression like we saw in the 1930s, and at worst a global economic collapse. The chance of such a thing seems worse when one considers that after the 2008 financial crisis, the world's nations patched on a lot of band-aids and used "extend and pretend" to muddle through, rather than making the hard choices needed to reduce the risk of similar problems in the future.

7. A Global Pandemic

The risk of a global pandemic or plague is described in my previous post How to Have a Pleasant Pandemic. The risk is very substantial because of three main factors: (1) global warming will increase the risk from certain types of diseases; (2) increased jet travel increases the chance of diseases spreading around the world; (3) there is a very substantial risk that mankind will be afflicted by some new disease deliberately cooked up in a laboratory by some malevolent party.

In regard to the third of these factors, consider that gene-analysis technology that used to cost millions of dollars is now available in desktop devices for a few thousand dollars. Technology leaps such as that may in a few decades make it so that any Tom, Dick, or Harry can whip up a new virus or bacteria in his garage, or genetically turbocharge an existing type of germ. Once that happens the risk of a global pandemic will rise much higher.

So that is my list of the seven likeliest disasters of this century. If it leaves you depressed, please read my vastly more optimistic post The 8 Luckiest Things That Could Happen in the Future.

Thursday, January 23, 2014

The Origin of Life “Breakthrough” That Isn't

Today the very popular web site digg.com has a link to a story about an MIT assistant professor who seems to think he has come up with a big, important breakthrough regarding the origin of life. The digg.com site has the sensational breathless headline Did This Guy Figure Out How Life Came From Matter? When we open the link we find an article that begins by saying, “Jeremy England, a 31-year-old physicist at MIT, thinks he has found the underlying physics driving the origin and evolution of life.”

There is then a summary of England's ideas, with a link to a paper he wrote, and to a Powerpoint-style pdf file that he created. England has some complicated reasoning designed to show that there may be thermodynamic heat-related reasons why molecules might change or make copies of themselves in order to achieve heat dissipation. I have no idea whether England's basic idea is correct, but I think even if it is correct, it is light-years away from being an explanation for how life came from matter.

Here are the most basic components of life:
  1. The genetic code. This is a symbolic or semantic system wherein certain combinations of nucleotides represent particular amino acids, quite similar to the way that certain combinations of letters (such as DOG) represent particular objects. (A small sketch after this list shows a fragment of this mapping.)
  2. DNA and RNA molecules. These are not just complicated molecules, but molecules that use the genetic code. Each DNA molecule is a kind of instruction book on how to make a particular type of life, and that book is written in the language of the genetic code.
  3. Cells. These read the instructions in the DNA molecules, and use them to create proteins from amino acids. Cells have to “understand” the genetic code as well as the DNA and RNA molecules do.
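To illustrate the first point, here is a tiny fragment of the standard genetic code written as a lookup table; only a handful of the 64 codons are shown, but it conveys the dictionary-like, symbolic character of the code:

```python
# A tiny fragment of the standard genetic code, illustrating that the code is a
# symbolic mapping from nucleotide triplets (codons) to amino acids.
# Only a few of the 64 codons are shown.

GENETIC_CODE_FRAGMENT = {
    "AUG": "methionine (start)",
    "UUU": "phenylalanine",
    "UGG": "tryptophan",
    "GGC": "glycine",
    "UAA": "stop",
}

def translate(rna):
    """Read an RNA string three letters at a time and look up each codon."""
    return [GENETIC_CODE_FRAGMENT.get(rna[i:i+3], "?") for i in range(0, len(rna), 3)]

print(translate("AUGUUUUGG"))   # ['methionine (start)', 'phenylalanine', 'tryptophan']
```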
Explaining the origin of all of this ends up being an incredibly difficult problem, and England's work does very little to help solve this problem. England claims to have found thermodynamic reasons why certain molecular systems might tend to reach a more organized state. So what? You can explain how water becomes more self-organized when it freezes, but that is not relevant to the origin of life.

One of the key issues in the origin of life is whether or not there would have existed the building blocks needed for the appearance of the first RNA molecules. As explained here, there are reasons for doubting that there would have existed the ribose sugars, nucleotides, and nucleosides needed for RNA molecules to originate. Does England discuss the issue? No, he apparently says nothing about the chemical state of the Earth at the time of the origin of life.

Another difficult thing to explain is the origin of the genetic code. How did mere chemicals turn into the semantic representation system needed for the origin of life, before Darwinian evolution began? England gives us zero insight on this issue, which he completely ignores. England's paper doesn't even mention the genetic code. The word “genetic” does not even appear in his paper. Nor does his 70-page PDF file contain any use of the word “genetic” or the phrase “genetic code.”

Does perhaps this “origin of life breakthrough” have at least some observations or experiments to back it up? No, there are none. Neither the scientific paper nor the PDF file mentions any new observations or experiments. It's all pure math and physics equations.

Does England perhaps have some chemistry breakthrough that will explain the great mystery of how we got the “miniature programming language” of the genetic code or RNA from mere chemicals? No, apparently neither his PDF file nor his paper has any chemistry at all in it – it's all pure physics.

In this link England is quoted as saying this: “You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.” I think this naive statement shows a lack of insight about the huge problems involved in explaining the origin of life. A problem of explaining the origin of a hugely complicated, well-coordinated system and its semantic framework of the genetic code (with many parts working in harmony) is apparently reduced in England's mind to a problem of getting some more random atoms to combine.

Here is how most scientists view the relation of the main sciences: physics gives rise to chemistry and chemistry gives rise to biology. The thesis can be stated like this:

Physics → Chemistry → Biology

Although true to some extent, this thesis fails to explain all of biology, because we really don't understand properly how biology can get started from mere chemistry. But one thing seems very clear to me: you will never be able to explain the origin of biology by skipping chemistry, and trying to go straight from physics to biology. 
 
So work such as England's, using nothing but physics, will never be a real explanation for the origin of life. The visual below illustrates my point. 

 

Wednesday, January 22, 2014

The Final War: A Science Fiction Story

After China united with Russia in the year 2055, the United States of America was worried that it was no longer the world's greatest power. So the US united with Canada to become a huge new nation known as Canmerica. This was the beginning of a great wave of nation-merging which swept the world over the next two decades. When this process finally ended, there were only two powers left in the world: the Blue Alliance (controlling all of the Western Hemisphere) and the Gold Alliance (controlling all of the rest of the world).

For three decades these two great alliances existed in peace. But eventually a dispute arose over vast mineral riches discovered in Antarctica and in the ocean. Both the Blue Alliance and the Gold Alliance claimed these riches as their own. Eventually the dispute led to a war between the two great powers.

The President of the Blue Alliance spoke on 3D television to the billions he ruled over.

“We have no choice in this matter,” said President Olsen. “We will never be able to live in peace until the Gold Alliance is destroyed utterly. I hereby announce that a state of war exists between the Blue Alliance and the Gold Alliance.”

But before he could finish his speech, he was notified that nuclear missiles had been launched from the Gold Alliance. President Olsen ordered a full-scale nuclear counterattack. Hundreds of nuclear bombs went off all over the world, leading to the death of billions. All of the dust from the nuclear explosions caused a nuclear winter that darkened Earth's skies for years, causing billions more to die from crop failures.

But the war still was not over. Both sides had prepared elaborate attack plans that would be carried out even after a nuclear holocaust had occurred.

The Blue Alliance set forth drones that reached Europe and Asia, releasing a variety of biological plagues. The Gold Alliance unleashed an invasion of killer robots that attacked the Western Hemisphere. The robots were self-reproducing robots that knew how to make more and more copies of themselves, using metals scavenged from cars and houses.

killer robot


The effects of both of these attacks were devastating. In both cases, the attacks were supposed to be limited. The Blue Alliance thought that its biological plagues would only afflict the Eastern Hemisphere. But it was wrong. The plagues soon spread to the Western Hemisphere as well. The Gold Alliance thought that its self-reproducing killer robots would cause devastation only in the Western Hemisphere, being blocked by the Bering Strait from spreading into Asia. But after the nuclear winter caused temperatures to plummet, the Bering Strait froze over. The killer robots marched over the frozen strait, spreading into Asia, and then into Europe and Africa. Every place the robots entered they left a trail of death and utter devastation.

The population of the world, which had risen to nine billion before the war, dropped first into the millions, and then into the thousands, and then into the hundreds, and then into the dozens.

Some of the last few survivors were in a tiny military unit of the Blue Alliance. The unit used the last functional spy satellite to study heat signatures from the Eastern Hemisphere. They determined that there was no sign of life anywhere in the Eastern Hemisphere, except for the faintest indication of human activity in one ruined city of the Gold Alliance. The military unit sent a jet on a search-and-destroy mission to neutralize this last speck of resistance.

While traveling across the ocean, three of the crew members died from the plague virus. But one determined soldier named Arnold Johnson continued flying the jet, until it came to the ruins of the enemy city.

Johnson landed the jet in a risky landing on a vacated air field. Carrying a machine gun and a biological sensor, he set forth to find the remaining pocket of enemy resistance. He searched through the ruins of the city, getting closer and closer to his objective. Finally he found his target.

It was a tent which had been pitched amid the ruins. Lifting the flap of the tent, Johnson saw an old man spooning out beans from a can of beans.

“Are you the last left of your kind?” asked Johnson.

“Probably,” said the old man, guessing correctly.

Johnson killed the man with his machine gun. Then he raised his fist in triumph.

“The war is over!” said Johnson. “We won!”

But the victory was Pyrrhic, for he was the last human being left in the world. He died from the plague virus a few days later.

It would have been the end of civilization on planet Earth, but there was one reason it was not. For many years, the planet had been silently observed by a wise and kindly extraterrestrial race called the Voltons. The Voltons had a strict ethic of non-interference. As soon as they realized that mankind was extinct, the Voltons took the uninhabited Earth as their own. First, they removed all traces of human civilization, except for some pieces that were kept in museums, and a handful of places preserved as historical exhibits. Then the Voltons cleaned up all the pollution and ugliness that man had created. The Voltons then slowly went to work building a paradise planet. They succeeded beyond their wildest dreams.

Earth was transformed into a planet where civilization and nature meshed together in perfect harmony. A glorious ideal society of justice, peace, forgiveness and tolerance flourished like no galactic society had ever flourished. A golden age began, and lasted for eons. The planet contained a thousand busy centers of art, philosophy, science, and culture, but nowhere on the planet were there any bombs, missiles or weapons. Earth, which had once been the laughing stock of the galaxy, was transformed into the greatest showpiece of galactic culture. Throughout a region of space stretching for thousands of light-years, beings with a hundred strange appearances all longed to one day visit Earth, or to make their own planet like Earth. The inhabitants of Earth all lived blissfully like philosopher kings, basking in the warm sunlight of tranquility, freedom, wisdom, and equality.

So the end result was very, very happy for the planet Earth, but not for its original inhabitants.

Tuesday, January 21, 2014

Hypothetical Configurations of a Programmed Cosmos

In some previous posts (here, here,  and here) I have advanced the theory that the universe has a cosmic computation layer: a kind of primordial programming that has existed since the known beginning of the universe (the Big Bang). I argued that we need to assume such a layer to account for how the universe manages to fulfill so ably all of its near-infinite computation needs. I also argued that we need to postulate such a computation layer to help explain the astonishing evolution of the universe, the appearance of galaxies, life, and eventually Mind from a universe that began in a superdense state. The theory I have suggested is that our universe is quite real (not a simulation), but that it has been somehow programmed for success from the beginning. To anyone reading my previous post on 18 anthropic requirements that must be met for civilizations such as ours to exist (some of which are most unlikely to occur by chance), such a theory of cosmic programming may seem like a good idea.

Upon hearing such a theory of a cosmic computation layer, some readers no doubt must have objected: we can't believe such a thing, because we can't imagine how such a thing might work; we can form no idea of what type of configuration such a computation layer might have.

My purpose in this post is to rebut such an objection. I will argue that there are hypothetical configurations we can imagine that might allow such a computation layer to exist. My aim is not to show that any one of these configurations is likely, but merely to show that we can imagine various hypothetical configurations for a cosmic computation layer, none of which violates any known findings. I will ask the reader to allow me to engage in some speculation, which may at times get rather exotic. Before dismissing such speculation, please remember a quote from the famous physicist Niels Bohr: “Your theory is crazy, but it's not crazy enough to be true” (a reminder that nature often ends up favoring some pretty mind-bending exotic realities).

First, let me comment on the use of the term “layer.” By speaking of a computation layer, I am not speaking of anything very similar to the layer of a cake. I use the term layer in the same way that software architects use the term, to mean a certain type of functionality that exists in a system, regardless of its physical location (in the same way that such architects speak of an abstraction layer). Computer people might talk of a hardware layer, a driver layer, and a software layer, but they don't actually mean things that are lying on top of each other horizontally like the layers of a cake. The term layer is simply used to mean some particular aspect of the overall functionality, regardless of where it is located. Do a Google image search for “software layer” and you will see many examples.

When I depicted a diagram (in this post) showing a computation layer underneath a mass-energy layer, I stated that the two layers are intermingled or intertwined (and certainly did not mean that one layer was vertically floating over the other).

So what type of arrangement or configuration might allow for such a computation layer to exist? At this stage of our ignorance, we can only speculate. But it is possible to imagine some reasonable configurations that would allow for such a thing.

The Possibility of Invisible Computation Particles

One possibility we can imagine is that the universe may have two types of particles: primary particles such as protons, neutrons, electrons, and photons, and also what we may call computation particles. The purpose of the computation particles might be to facilitate computation related to the primary particles, and to make sure that the universe's programming is followed. There could be at least one computation particle for every primary particle, and there might be many computation particles for each primary particle. The computation particles might somehow shadow or surround the primary particles. Each computation particle might be able to store many bits of information.

The immediate objection one could make is: such particles couldn't possibly exist, because we would have already detected them. But this objection isn't valid in light of current theories about dark matter. Currently physicists say that we are all surrounded by invisible dark matter. They say that we can't see dark matter because it does not interact with electromagnetism (and thus far there have been no unambiguous detections of dark matter). If such a thing is possible, it is possible that there are other types of invisible particles that do not interact with any of the four fundamental forces of our universe, and that are completely undetectable to us through direct observation.

We are not 100% sure that dark matter really exists, but we are absolutely sure that a particle called the neutrino exists. The neutrino has been called the ghost particle. A neutrino has either no mass or very little mass. Neutrinos are emitted by the sun. Scientists say that every second countless neutrinos are passing through your body. Given the reality of such particles, there is nothing implausible about the idea that our bodies and other objects might be intermingled with countless trillions of computation particles we can't see or detect.

We know of one other thing that pervades all of space: the cosmic background radiation, believed to be the faint afterglow of the Big Bang. All of outdoor space is bathed in this faint radiation, which was only detected around 1965. Stand outside and you will be surrounded by the tiny particles of the cosmic background radiation. Then there is also dark energy, which scientists say now makes up about 68% of the universe's mass-energy. It seems that as time passes, scientists are finding more and more cases of where we can say, “We are surrounded by a type of invisible matter or energy we were not aware of previously.” So there is nothing implausible about the idea that we might also be surrounded by (and intermingled with) computation particles we can't see.

The computation particles I am postulating could either be some type of invisible particle different from dark matter or dark energy, or the computation particles might actually be dark matter or dark energy (or part of either of them). Since we know nothing about how massive or complex dark matter particles might be, we can't rule out that they may be computation particles (or that they may partially be computation particles). We can say the same thing about the dark energy particles postulated by scientists – some of those particles may be computation particles. 


invisible particles
 
The Possibility of Emergence Clouds

The term emergence has been used for the tendency of nature to create units that are more than the sum of their parts. One example of emergence is the appearance of life. First you have mere chemicals, and then later there develops a microscopic living thing that is much more than just a combination of chemicals. Another example of emergence is the appearance of conscious Mind. First you have a collection of cells, and then you have a self-aware consciousness that is much more than just a collection of cells.

If we imagine some type of computation particles as previously imagined, we can imagine some of them grouping together in clusters, related to the emergence of some particular thing that is more than just the sum of its parts. Every atom might be associated with an emergence cloud that handles computation related to that particular atom. Every molecule might be associated with its own emergence cloud. There might also be emergence clouds associated with the origin of life, the origin of Mind, and the origin of galaxies.

Your mind might itself be an emergence cloud, a cluster of computation particles that stays together in order for your consciousness to exist. Such an emergence cloud may or may not dissipate when you die.

In this hypothetical configuration, small emergence clouds can exist within larger emergence clouds. The smallest emergence clouds might be the size of atoms or molecules, and the largest emergence clouds might be the size of galaxies or clusters of galaxies.

The Possibility of Hyperluminal Computational Communication

Physicists say that known physical particles such as protons exchange photons as part of the electromagnetic force, with one particle having an influence on another particle. Thinking in a similar vein, we can imagine that computation particles might be able to somehow communicate with other computation particles.

Would such communication be limited by the speed of light? Not necessarily. The speed of light is the speed of all electromagnetic radiation. But the communication between computation particles might use some different type of radiation or energy that is not limited by the speed of light.

Physicists say that known physical particles such as protons both send and receive virtual particles that act as agents of force exchange. So it is therefore not implausible to imagine that if computation particles exist, they might be both senders and receivers of computation-related messages. Under such a scenario, we can imagine each such particle as being rather like a radio receiver and a radio transmitter.

Given a sufficient number of such computation particles scattered around space, communicating with each other at speeds perhaps greater than the speed of light, and perhaps even instantaneously, you would have all the requirements for a computing system of essentially unlimited power.
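
As a loose software analogy (nothing more), here is a toy sketch of what "each particle is both a transmitter and a receiver" might look like; the class name, the shared bus, and the message are all invented for the example, and the instantaneous delivery simply reflects that no propagation delay is modeled:

```python
# Toy illustration only, not a claim about real physics: each computation
# particle can broadcast messages to, and receive messages from, every other
# particle registered on a shared "medium".

class ComputationParticle:
    def __init__(self, pid, bus):
        self.pid = pid
        self.bus = bus
        self.inbox = []
        bus.append(self)              # register with the shared medium

    def broadcast(self, message):
        for other in self.bus:
            if other is not self:
                other.receive(self.pid, message)

    def receive(self, sender, message):
        self.inbox.append((sender, message))

bus = []
particles = [ComputationParticle(i, bus) for i in range(3)]
particles[0].broadcast("update state")
print(particles[1].inbox)   # [(0, 'update state')]
```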

Would There Be Room for Such Particles?

Let's consider: are there any spatial reasons why it would be implausible to assume that there might be one or many computation particles for each material particle? Could it be that things would be too crowded if such particles existed? Certainly not. Scientists tell us that solid matter is almost entirely empty space. You often see schematic diagrams showing electrons as being a substantial fraction of the size of an atom, but such diagrams are very misleading in their spatial depictions. In reality, according to this site, the ratio of the radius of an atom to the radius of a proton, neutron, or electron is between 10,000 and 100,000. An atom is therefore almost entirely empty space, which leaves a huge amount of room within atoms in which computation particles might exist. There could be 1,000 computation particles for every proton in an atom, and there would still be plenty of empty space left within the atom.
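
As a rough check on that claim, here is a back-of-the-envelope sketch. The radius ratio comes from the passage above; the only added assumption is that a computation particle takes up no more room than a proton does:

```python
# Rough back-of-the-envelope arithmetic using the radius ratio quoted above.
# Everything here is simple geometry on round numbers, not a measurement.

for radius_ratio in (10_000, 100_000):
    # Fraction of an atom's volume occupied by one nucleon-sized particle:
    occupied_fraction = (1 / radius_ratio) ** 3
    # Even 1,000 such particles would fill only a tiny fraction of the atom:
    fraction_with_1000 = 1000 * occupied_fraction
    print(radius_ratio, occupied_fraction, fraction_with_1000)

# radius ratio 10,000  -> one particle fills ~1e-12 of the atom's volume,
#                         and 1,000 particles fill only ~1e-9 of it
# radius ratio 100,000 -> one particle fills ~1e-15, and 1,000 fill ~1e-12
```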

The Possibility of a Computation Field

Another possibility is a kind of universal computation field, something perhaps rather comparable to the Higgs field. Scientists say the Higgs field pervades all of space. So we can imagine a computation field that might likewise pervade all of space, helping the universe to satisfy its computation needs. Such a field might act somewhat like a wi-fi network, but one that extends to every bit of space.

Just as some physicists depict the creation of a particle as a kind of disturbance or flicker in a field such as the Higgs field, we might imagine each event in the universe's computation as a kind of disturbance, flicker, or blip in a universe-wide computation field, with the field containing innumerable such blips, like a bubbling, boiling ocean.

In the visual below we can imagine the purple grid as being this computation field, with the green grid below it being space that is warped by the presence of matter. However, if such a computation field existed, it might better be depicted as pervading all of space.

computation field


Computation Threads in the Fabric of Space?

Still another possibility is that computation functionality is somehow embedded in the fabric of space itself. Think of space as being a kind of fabric (a way it is often described). Imagine that this fabric is built from tiny units we may call threads. It could be that every nth thread (every hundredth, every thousandth, every millionth, or some other fraction) is what we might call a computation thread – a unit that helps the universe perform its computation activities. Each such thread might be of vast length, perhaps stretching for trillions of miles.

Would we be able to detect such a thread as we passed through space? Probably not, largely because ordinary solid matter is, by volume, almost entirely empty space. Astronomers say that stars as big as the sun are sometimes crunched into the densest possible state short of a black hole, and that when a star reaches such a state (called a neutron star), every teaspoon of its matter weighs about 100 million tons. This shows how empty ordinary matter is. So ordinary matter could pass through space that partially consisted of computation threads. The chance of a collision between such a thread and a material particle would be very low, and a collision might produce only a tiny deflection that would be very hard to detect. Or perhaps a particle of solid matter might be able to pass through such a computation thread without any deflection at all, like a person moving through air.
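
That emptiness can be put into rough numbers. Taking the teaspoon figure above at face value, and assuming round values for the volume of a teaspoon and the density of ordinary solids, a short back-of-the-envelope sketch:

```python
# Rough arithmetic based on the figure quoted above (a teaspoon of
# neutron-star matter weighing about 100 million tons). Round numbers only.

teaspoon_volume = 5e-6            # cubic meters (about 5 milliliters)
teaspoon_mass = 1e8 * 1000        # 100 million metric tons, in kilograms
neutron_star_density = teaspoon_mass / teaspoon_volume   # ~2e16 kg per cubic meter

ordinary_density = 1_000          # kg per cubic meter, roughly water or light rock

compression_factor = neutron_star_density / ordinary_density
print(f"{compression_factor:.0e}")   # ~2e+13

# If matter can be crushed by a factor of roughly ten trillion before its
# nuclei are packed together, then ordinary matter must be occupied to only
# about one part in ten trillion by volume: that is, almost entirely empty
# space, consistent with the atom-to-nucleon radius ratio quoted earlier.
```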


computation thread

Where Might the Universe's Software Be Stored?

So we have imagined how a computation layer could exist: either (a) in the form of computation particles which might cluster into emergence clouds, and which might communicate with each other, perhaps at speeds greater than the speed of light, or (b) as a computation field that pervades all of space, or (c) as computation threads embedded within the fabric of spacetime. But where might the software that would be a vital element of any cosmic computation layer be located?

I can imagine several possibilities. One is that such software might somehow be stored as information content within a universal computation field, something similar to the Higgs field. A second possibility is that the software might somehow be lurking within the cosmic background radiation that pervades all of the universe, or within some similar all-pervading radiation that dates from the time of the Big Bang. The photons that we can detect from the Big Bang are microwave photons. But space might also be pervaded by equally ancient particles of some other type, which somehow store the universe's software or some important part of it. If such particles can travel through any solid matter in the same way that neutrinos can, then any particle could “query” the universe's software just by taking a read of this background radiation (in rather the same way that your GPS device gets your current position by reading signals from GPS satellites).

Another possibility is that the software might somehow be stored within the previously imagined computation threads embedded in the fabric of space.

One other mind-bending possibility is suggested by DNA biology. When a human is conceived, the developing organism does not get the blueprint for a human being from some non-human external source. Instead it reads the blueprint of a human being stored in its own DNA. Every drop of your blood or saliva is teeming with DNA molecules, and your cells collectively hold trillions of copies of the blueprint for how to make a human. This suggests the following possibility: perhaps the software of the universe (or some vital kernel or core of it) is stored in every computation particle (or perhaps in every known subatomic particle). In such a case a proton (or a computation particle) might have no need to query any external source for a guideline on how to behave in accordance with the universe's programming. It might merely retrieve the information from itself.
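
As a loose software analogy (and nothing more), the DNA-style arrangement would look like replicated local storage: every unit carries its own copy of the rules and consults that copy instead of an outside source. The names and rules below are invented for the example:

```python
# Toy illustration only: each particle carries its own copy of a small
# "rule kernel", so deciding how to behave never requires querying any
# external store. All names and rules here are hypothetical.

RULE_KERNEL = {
    "like charges": "repel",
    "unlike charges": "attract",
}

class Particle:
    def __init__(self, name):
        self.name = name
        # Every particle gets its own local copy, like DNA in every cell.
        self.local_rules = dict(RULE_KERNEL)

    def how_to_behave(self, situation):
        # Look up the rule locally; no call to any central "server" is needed.
        return self.local_rules.get(situation, "no rule stored")

proton = Particle("proton")
print(proton.how_to_behave("like charges"))   # repel
```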

Just as every cell in your body contains DNA that stores the plan and blueprint of a human being, your body might have within it countless trillions of computation particles, each of which stores the plan and blueprint of the universe, and perhaps also programming that will assure the glorious future pinnacles of cosmic destiny.

Conclusion

It is far too soon to draw any exact conclusions about the details of a cosmic computation layer. In this regard we are in a situation similar to the one biology was in during the middle of the 19th century. At that time someone might have reasoned that there must be some information system that allows the blueprint of a human to be passed on during conception. But at that time it would have been impossible to figure out the details of how such a system worked. We only learned those details in the twentieth century, when the structure and role of DNA were discovered. Similarly, we can make compelling arguments that we need to assume that some cosmic computation layer exists, but we cannot say at this time what the exact configuration of such a layer might be. We can merely speculate.

But I think the speculations made here show that we can easily imagine ways in which a cosmic computation layer might plausibly exist. So the idea that the universe has a computation layer is quite possible, and cannot be excluded by any “we can't think of any way that could work” type of reasoning. We can indeed think of quite a few ways in which it might work, and I have described some of those possible configurations, rough as those ideas may be.

Of course, a mere possibility does not show a likelihood. But I think the likelihood of the universe having a computation layer can be shown based on the need to satisfy the enormous computation demands of the universe (as I have argued here), and on the need to postulate a teleological principle to explain the universe's remarkable evolution from infinite density to galaxies to life and finally to Mind (as I argued here). Many a modern physicist recognizes that the universe seems to have a high degree of fine-tuning, for reasons discussed here and here. Some of these physicists have tried to explain fine-tuning by imagining a multiverse (a collection of a vast number of universes). But we can explain the fine-tuning much more simply and economically with the hypothesis that our material universe has been programmed for success from the beginning.