
Our future, our universe, and other weighty topics


Showing posts with label anthropic principle. Show all posts

Friday, August 5, 2022

Cosmological Natural Selection Theory Gets Even More Falsified

Scientists have long been bothered by why the physical conditions, laws and fundamental constants of the universe seem to be so fine-tuned to allow the existence of planets such as ours and living beings such as us. On page 235 of his book Chaos and Harmony, a University of Virginia professor of astronomy (Trinh Xuan Thuan) stated this:

"The evolution of the cosmos is determined by initial conditions (such as the initial rate of expansion and the initial mass of matter), as well as by fifteen or so numbers called physical constants (such as the speed of the light and the mass of the electron). We have by now measured these physical constants with extremely high precision, but we have failed to come up with any theory explaining why they have their particular values. One of the most surprising discoveries of modern cosmology is the realization that the initial conditions and physical constants of the universe had to be adjusted with exquisite precision if they are to allow the emergence of conscious observers. This realization is referred to as the 'anthropic principle'...Change the initial conditions and physical constants ever so slightly, and the universe would be empty and sterile; we would not be around to discuss it. The precision of this fine-tuning is nothing short of stunning. The initial rate of expansion of the universe, to take just one example, had to have been tweaked to a precision comparable to that of an archer trying to land an arrow in a 1-square-centimeter target located on the fringes of the universe, 15 billion light years away!"

One excellent book is the rather poorly titled book “Modern Physics and Ancient Faith” by Stephen M. Barr, a physics professor at the University of Delaware (which doesn't at all brush away religious thinking as an ancient relic, despite the title). In that book there is an interesting discussion of “anthropic coincidences” that are necessary for our existence. One example given is that of a parameter called v. On pages 126-127 the book makes these interesting comments:

"The long technical name of the parameter v is 'the vacuum expectation value of the Higgs field.'....The value of v is a great puzzle to particle theorists; in fact, it is one of the central puzzles of physics. What is puzzling is that in reasonably simple theories v seems to want to come out to be, not 1, but a number like 1017, i.e, 100,000,000,000,000,000...As far as the possibility of life emerging in our universe is concerned, it would be a disaster for v to be 100,000,000,000,000,000. It would also be a disaster if it were 100,000,000,000,000, or if it were 100,000,000, or if it were 100,000, or if it were 100. Indeed, it would be a disaster if it were 10, or 5, or even 1.5. It would probably be a disaster if v were even slightly different from the value it happens to have in the real world."

So nature “hit the bullseye,” a very distant bullseye, it would seem. This is only one of many astonishing “coincidences” required for our existence. Barr lists seven other such cases, one of which is even more dramatic: the fine-tuning of the cosmological constant. As Barr puts it on page 130 of his book:

"In order for life to be possible, then, it appears that the cosmological constant, whether it is positive or negative, must be extremely close to zero – in fact, it must be zero to at least 120 decimal places. This is one of the most precise fine-tunings in all of physics."

It would be very hard to overestimate how thoroughly all major objects in our universe depend upon the fundamental constants being just right. It is not merely that the existence of extremely organized things such as mammals depends on a fine-tuning of fundamental constants. It is also that the existence of objects such as stars and planets depends on such a fine-tuning. On pages 64-65 of his book "The Symbiotic Universe," astronomer George Greenstein (a professor emeritus at Amherst College) said this about the equality of the proton and electron charges (which have precisely the same absolute value): 

"Relatively small things like stones, people, and the like would fly apart if the two charges differed by as little as one part in 100 billion. Large structures like the Earth and the Sun require for their existence a yet more perfect balance of one part in a billion billion." 

In fact, experiments do indicate that the charge of the proton and the electron match to eighteen decimal places. Because of the dependency of stars on a very delicate fine-tuning of fundamental constants, you can state it this way: a random universe would be both lifeless and lightless. 

In an attempt to explain such things, physicist Lee Smolin has long advanced a groundless theory he calls cosmological natural selection, one that no one else seems to advance. It is a theory of a cyclical universe in which the laws of the universe change in each cycle.

 In his book Time Reborn, Smolin describes the theory as follows:

"The basic hypothesis of cosmological natural selection is that universes reproduce by the creation of new universes inside black holes. Our universe is thus a descendant of another universe, born in one of its black holes, and every black hole in our universe is the seed of a new universe. This is a scenario within which we can apply the principles of natural selection."

Smolin claims to have a theory of how the physics of the universe could evolve through natural selection. But how on earth can we get anything like natural selection out of the idea of new universes being created by the formation of black holes? Smolin gave the following strained reasoning: (1) he claimed that the physics that favors a habitable universe is similar to the physics that favors the production of black holes; (2) he claimed that a new universe produced by a black hole might have slightly different physics from its parent universe; (3) he claimed that random variations in physics tending to produce universes that produce more black holes would cause such universes to produce more offspring (more universes); (4) he claimed that as a result of this “increased reproduction rate” of some types of universes, we would gradually see the evolution of physical laws and constants that tend to favor the appearance of life and also the production of black holes.

The speculations described above hinge upon the linchpin claim that a new universe can be produced from the collapse of a huge star to form a black hole. Some analysts let Smolin get away with making this claim, but there is no reason why that should be done. The idea that a new universe can be produced from the collapse of a black hole is a complete fantasy, with no basis in fact. We have no observations to support such a theory. Nor is there any physics or mathematics to support such a theory. There is no way to write an equation in which you put a new universe on the right side of an equal sign. 

The idea of universes being produced from black holes is a very silly one. A typical black hole arises from the collapse of a star with only a few solar masses. A universe like ours has a mass-energy of at least 1,000,000,000,000,000,000,000 solar masses. Claiming a new universe can arise from a black hole is like claiming a planet can arise from a grain of sand. 

Smolin claimed that one advantage of his theory of cosmological natural selection is that it makes a falsifiable prediction. In a 2004 paper (page 38) he lists one such prediction:

"There is at least one example of a falsifiable theory satisfying these conditions, which is cosmological natural selection. Among the properties ...that make the theory falsifiable is that the upper mass limit of neutron stars is less than 1.6 solar masses. This and other predictions of CNS have yet to be falsified, but they could easily be by observations in progress."

But by now this prediction has proven to be incorrect. In September 2019 a science news story reported on observations of one of the most massive neutron stars ever found. We are told, “The researchers, members of the NANOGrav Physics Frontiers Center, discovered that a rapidly rotating millisecond pulsar, called J0740+6620, is the most massive neutron star ever measured, packing 2.17 times the mass of our Sun into a sphere only 30 kilometers across.” A 2021 story lists the mass of this neutron star as 2.14 solar masses. 

Last week in our science news we had the headline of "Black Widow Pulsar Sets Mass Record." A Sky and Telescope story dated July 28 tells us this: 

"The pulsar PSR J0952-0607, which is some 20,000 light-years away in the constellation Sextans, already holds the title of second-fastest-known rotator, spinning around its axis 707 times per second. Now, it has also shattered the record for most massive neutron star known, weighing in at 2.35 solar masses."

 A CNN story last week confirms; it states, "The PSR J0952-0607 star is 2.35 times the mass of the sun." Below we see an artist's depiction of a neutron star. 


Credit: NASA Goddard Space Flight Center

So Smolin told us that the cosmological natural selection theory would be falsified if any neutron stars were found to be more massive than 1.6 solar masses, and by now it has been found that one neutron star has 2.14 solar masses and another has 2.35 solar masses. Given last week's announcement that the neutron star PSR J0952-0607 has 2.35 times the mass of the sun, we can consider the cosmological natural selection theory to be even more falsified than it previously was. 
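The falsification logic above is simple enough to state as a few lines of code. This is only an illustrative sketch: the 1.6 solar-mass limit is Smolin's published prediction, and the observed masses are the figures quoted in the news stories cited above.

```python
# Smolin's 2004 prediction: the upper mass limit of neutron stars
# is less than 1.6 solar masses.
CNS_PREDICTED_MAX = 1.6  # solar masses

# Observed masses reported in the stories cited above (solar masses).
observed_neutron_stars = {
    "J0740+6620": 2.14,       # 2021 estimate
    "PSR J0952-0607": 2.35,   # July 2022 report
}

def violates_prediction(mass, limit=CNS_PREDICTED_MAX):
    """Return True if an observed mass exceeds the predicted upper limit."""
    return mass > limit

violations = {name: m for name, m in observed_neutron_stars.items()
              if violates_prediction(m)}
print(sorted(violations))  # both stars exceed the 1.6 solar-mass limit
```

Either star alone suffices to contradict the prediction; two independent measurements make the contradiction harder to dismiss as observational error.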

But alas, in the world of science it is sadly true that theories keep living on in the speech of scientists long after they have been falsified, kind of like the zombies of bad movies that keep walking about even after they have been killed. Some of the main theories claimed to be "scientific fact" seem to be theories of this type. 

In the past week we had two other science news stories throwing cold water in the face of theorists:

(1) A news story entitled "No trace of dark matter halos" quotes a scientist saying that "the number of publications showing incompatibilities between observations and the dark matter paradigm just keeps increasing every year."
(2) Another article reports that some cosmological model trying to speculate about an eternal cyclical universe falls flat, and actually requires a beginning of the universe. 

Saturday, May 25, 2019

Where Adams Goes Wrong on Cosmic Fine-tuning

This year scientist Fred C. Adams published to the physics paper server a massive paper on the topic of cosmic fine-tuning (a topic I have often discussed on this blog). The paper of more than 200 pages (entitled "The Degree of Fine Tuning in our Universe -- and Others") describes many cases in which our existence depends on cases of some number in nature having (against all odds) a value allowing the universe to be compatible with the existence of life. There are several important ways in which the paper goes wrong or creates inappropriate impressions. I list them below.

Problem #1: Using habitability as the criterion for judging fine-tuning, rather than “something as good as what we have.”

Part of the case for cosmic fine-tuning involves the existence of stars. It turns out that some improbable set of coincidences have to occur for a universe to have any stars at all, and some far more improbable set of coincidences have to occur for a universe to have very stable, bright, long-lived stars like our sun.

Adams repeatedly attempts to convince readers that the universe could have had fundamental constants different from what we have, and that the universe would still have been habitable because some type of star might have existed. When reasoning like this, he is using an inappropriate rule-of-thumb for judging fine-tuning. Below are two possible rules-of-thumb when judging fine-tuning in a universe:

Rule #1: Consider how unlikely it would be that a random universe would have conditions as suitable for the appearance and long-term survival of life as we have in our universe.

Rule #2: Consider only how unlikely it would be that a random universe would have conditions allowing some type of life.

Rule #1 is the appropriate rule to use when considering the issue of cosmic-fine tuning. But Adams seems to be operating under a rule-of-thumb such as Rule #2. For example, he tries to show that a universe significantly different from ours might have allowed red dwarf stars to exist. But such a possibility is irrelevant. The relevant consideration: how unlikely is it that we would have got a situation as fortunate as the physical situation that exists in our universe, in which stars like the sun (more suitable for supporting life than red-dwarf stars) exist? Since "red dwarfs are far more variable and violent than their more stable, larger cousins" such as sun-like stars (according to this source),  we should be considering the fine-tuning needed to get stars like our sun, not just any type of star like a red dwarf. 

I can give an analogy. Imagine I saw in the woods a log cabin house. If I am judging whether this structure is the result of chance or design, I should be considering how unlikely it would be that something as good as this might arise by chance. You could make all kinds of arguments showing that a log structure much worse than the one observed would not be too improbable (such as arguments showing that it wouldn't be too hard for a few falling trees to make a primitive rain shelter). But such arguments are irrelevant. The relevant thing to consider is: how unlikely would it be that a structure as good as the one observed would appear by chance? Similarly, living in a universe that allows an opportunity for humans to continue living on this planet for billions of years with stable solar radiation, the relevant consideration is: how unlikely is it that a universe as physically fortunate as ours would exist by chance? Discussions about how microbes might exist in a very different universe (or intelligent creatures living precariously near unstable suns) are irrelevant, because such universes do not involve physical conditions as good as the one we have.

Problem #2: Charts that create the wrong impression because of a “camera near the needle hole” and a logarithmic scale.

On page 29 of the paper Adams gives us a chart showing some fine-tuning needed for the ratio between what is called the fine-structure constant and the strong nuclear force. We see the following diagram, using a logarithmic scale that exaggerates the relative size of the shaded region. If the ratio had been outside the shaded region, stars could not have existed in our universe.


This doesn't seem like that lucky a coincidence, until you consider that creating a chart like this is like trying to make a needle hole look big by putting your camera right next to the needle hole. We know of no theoretical reason why the ratio described in this chart could not have been anywhere between .000000000000000000000000000000000000000000001 and 1,000,000,000,000,000,000,000,000,000,000,000. So by using such a narrow scale, the chart gives us the wrong idea. In a less misleading chart that used an overall scale vastly bigger, we would see this shaded region as merely a tiny point on the chart, occupying less than a millionth of the total area on the chart. Then we would realize that a fantastically improbable coincidence is required for nature to have threaded this needle hole. Adams also uses a logarithmic scale for figure 5, to make another such tiny "needle hole" (that must be threaded for life to exist) look like it is relatively big.
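The distorting effect of a logarithmic axis can be made concrete with a little arithmetic. In this sketch the window boundaries and the full range are hypothetical placeholder values chosen only to illustrate the point: a window spanning a couple of orders of magnitude looks like a visible slice on a log axis, yet occupies an almost invisible fraction of the range on a linear axis.

```python
import math

# Hypothetical full possibility range and life-permitting window
# (illustrative values, not the actual boundaries from Adams' chart).
range_low, range_high = 1e-45, 1e33    # full range: 78 orders of magnitude
window_low, window_high = 1e-2, 1e0    # window: 2 orders of magnitude

# On a logarithmic axis, apparent width = the window's share of the decades.
total_decades = math.log10(range_high) - math.log10(range_low)     # 78
window_decades = math.log10(window_high) - math.log10(window_low)  # 2
log_fraction = window_decades / total_decades

# On a linear axis, true width = the window's share of the actual values.
linear_fraction = (window_high - window_low) / (range_high - range_low)

print(f"log-scale apparent fraction: {log_fraction:.4f}")    # a visible slice
print(f"linear-scale true fraction:  {linear_fraction:.1e}") # roughly 1e-33
```

The same window that covers about 2.6% of a log axis covers roughly one part in 10^33 of a linear axis, which is the "camera next to the needle hole" effect described above.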


Needle holes can look big when your eye is right next to them

Problem #3: An under-estimation of the strong force sensitivity.

On page 30 of the paper, Adams argues that the strong nuclear force (the strong coupling constant) isn't terribly fine-tuned, and might vary by as much as a factor of 1000 without preventing life. He refuses to accept what quite a few scientists have pointed out: that it would be very damaging for the habitability of the universe if the strong nuclear force were only a few percent stronger. Quite a few scientists have pointed out that in such a case the diproton (a nucleus consisting of two protons and no neutrons) would be stable, and that would drastically affect the nature of stars. Adams' attempt to dismiss such reasoning falls flat. He claims on page 31 that if the diproton existed, it would cause only a “modest decrease in the operating temperatures of stellar cores," but then tells us that this would mean a fifteen-fold drop in such temperatures (from about 15 million degrees to about one million), which is hardly modest. 

Adams ignores the fact that small changes in the strong nuclear force would probably rule out lucky carbon resonances that are necessary for large amounts of carbon to exist. Also, Adams ignores the consideration that if the strong nuclear force had been much stronger, the early universe's hydrogen would have been converted into helium, leaving no hydrogen to eventually allow the existence of water. In this paper, two physicists state, “we show that provided the increase in strong force coupling constant is less than about 50% substantial amounts of hydrogen remain.” What that suggests is that if the strong force had been more than 50% greater, the universe's habitability would have been greatly damaged, and life probably would have been impossible. That means the strong nuclear force is 1000 times more sensitive and fine-tuned than Adams has estimated.

Problem #4: An under-estimation of the fine structure constant's sensitivity and sensitivity of the quark mass.

On page 140 of the paper, Adams suggests that the fine structure constant (related to the strength of the electromagnetic force) isn't terribly fine-tuned, and might vary by as much as 10,000 times without preventing life. His previous discussion of the sensitivity of the fine structure constant involved merely a discussion of how a change in the constant would affect the origin of elements in the Big Bang. But there are other reasons for thinking that the fine structure constant is very fine-tuned, reasons Adams hasn't paid attention to. A stellar process called the triple-alpha process is necessary for large amounts of both carbon and oxygen to be formed in the universe. In their paper “Viability of Carbon-Based Life as a Function of the Light Quark Mass,” Epelbaum and others state that the “formation of carbon and oxygen in our Universe would survive a change” of about 2% in the quark mass or about 2% in the fine-structure constant, but that “beyond such relatively small changes, the anthropic principle appears necessary at this time to explain the observed reaction rate of the triple-alpha process.” This is a sensitivity more than 10,000 times greater than Adams estimates. It's a case that we can call very precise fine-tuning. On page 140, Adams gives estimates of the biological sensitivity of the quark masses, but they ignore the consideration just mentioned and under-estimate the sensitivity of these parameters.

Problem #5: The misleading table that tries to make radio fine-tuning seem more precise than examples of cosmic fine-tuning.

On page 140 Adams gives us the table below:



The range listed in the third column represents what Adams thinks is the maximum multiplier that could be applied to these parameters without ruling out life in our universe. One problem is that some of the ranges listed are way too large, first because Adams is frequently being over-generous in estimating by how much such things could vary without worsening our universe's physical habitability (for reasons I previously discussed), and second because Adams is using the wrong rule for judging fine-tuning, considering “universes that allow life” when he should be considering “universes as habitable and physically favorable as ours.”

Another problem is that the arrangement of the table suggests that the parameters discussed are much less fine-tuned than a radio that is set to just the right radio station, but most of the fundamental constants in the table are actually far more fine-tuned than such a radio. To clarify this matter, we must consider the matter of possibility spaces in these cases. A possibility space is the range of possible values that a parameter might have. One example of a possibility space is the possible ages of humans, which is between 0 and about 120. For an AM radio the possibility space is between 535 and 1605 kilohertz.

What are the possibility spaces for the fundamental constants? For the constants involving one of the four fundamental forces (the gravitational constant, the fine-structure constant, the weak coupling constant and the strong coupling constant), we know that the four fundamental forces differ by about 40 orders of magnitude in their strength. The strong nuclear force is about 10,000,000,000,000,000,000,000,000,000,000,000,000,000 times stronger than the gravitational force. So a reasonable estimate of the possibility space for each of these constants is to assume that any one of them might have had a value up to 10^40 times smaller or larger than the actual value of the constant.

So the possibility space involving the four fundamental coupling constants is something like 1,000,000,000,000,000,000,000,000,000,000,000,000 times larger than the possibility space involving an AM radio. So, for example, even if the strong coupling constant could have varied by 1000 times, as Adams claims, and still have allowed for life, for it to have such a value would be a case of fine-tuning more than 1,000,000,000,000 times greater than an AM radio that is randomly set on just the right frequency. For the range of values between .001 and 1000 times the actual value of the strong nuclear force is just the tiniest fraction within a possibility space in which the strong nuclear force might vary by 10^40 times. It's the same situation for the gravitational constant and the fine-structure constant (involving the electromagnetic force). Even if we go by Adams' severe under-estimations of the biological sensitivity of these constants, and use the estimates he has made, this is still fine-tuning trillions of times more unlikely to occur by chance than a radio being set on just the right station by chance, because of the gigantic possibility space in which fundamental forces might vary by 40 orders of magnitude.
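The comparison can be checked with a few lines of arithmetic. In this sketch the viable range is Adams' own (over-generous) factor-of-1000 estimate, the 10^40 possibility space is the order-of-magnitude estimate discussed above, and the roughly 10 kHz AM channel width is an assumed figure for how precisely a radio must be tuned.

```python
# Chance of a randomly chosen constant landing in the viable range,
# assuming a uniform draw over a 10^40-wide possibility space.
viable_width = 1e3 - 1e-3   # Adams' allowed range, in units of the actual value
possibility_space = 1e40    # assumed span of possible values

p_constant = viable_width / possibility_space   # roughly 1e-37

# Chance of a randomly tuned AM radio landing on one ~10 kHz station
# within the 535-1605 kHz AM band.
p_radio = 10 / (1605 - 535)   # roughly 0.0093

# The constant's fine-tuning exceeds the radio's by an enormous factor.
print(p_radio / p_constant > 1e12)  # True: trillions of times more fine-tuned
```

Even granting Adams his generous factor-of-1000 tolerance, the ratio of the two probabilities is around 10^35, which is why "trillions of times more fine-tuned than a radio" is, if anything, an understatement.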

A similar situation exists in regard to what Adams calls on page 140 the vacuum energy scale. This refers to the density of energy in ordinary outer space such as interstellar space. This is believed to be extremely small but nonzero. Adams estimates that it could have been 10 orders of magnitude larger without preventing our universe's habitability. But physicists know of very strong reasons for thinking that this density should actually be 10^60 or 10^120 times greater than it is (it has to do with all of the virtual particles that quantum field theory says a vacuum should be packed with). So for the value of the vacuum energy density to be as low as it is would seem to require a coincidence with a likelihood of less than 1 in 10^50. Similarly, if a random number generator is programmed to pick a random number between 1 and 10^60, with an equal probability for every number in that range, there is only a microscopic chance (about 1 in 10^50) of the number coming out as low as 10^10.
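The random-number analogy can be made exact with integer arithmetic. This sketch simply follows the estimates above: a uniform draw from 1 to 10^60, asking for an outcome 10^50 times below the maximum.

```python
from fractions import Fraction

# Draw uniformly from 1 .. 10**60; what is the chance of landing
# at or below 10**10 (i.e., 10**50 times below the maximum)?
space_size = 10**60
low_threshold = 10**10

p = Fraction(low_threshold, space_size)  # exact probability
print(p == Fraction(1, 10**50))          # True: a 1-in-10**50 coincidence
```

Using `Fraction` keeps the arithmetic exact; floating point would lose precision at these magnitudes.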

Adams' chart has been cleverly arranged to give us the impression that the fundamental constants are less fine-tuned than a radio set on the right station, but the opposite is true. The main fundamental constants are trillions of times more fine-tuned than a radio set on the right station. The reason is partially because the possibility space involving such constants is more than a billion quadrillion times larger than the small possibility space involving what station a radio might be tuned to.

Problem #6: Omitting the best cases from his summary table.

Another huge shortcoming of Adams' paper is that he has omitted some of the biggest cases of cosmic fine-tuning from his summary table on page 140. One of the biggest cases of fine-tuning involves the universe's initial expansion rate. Scientists say that at the very beginning of the Big Bang, the universe's expansion rate was fine-tuned to more than 1 part in 10^50, so that the universe's density was very precisely equal to what is called the critical density. If the expansion rate had not been so precisely fine-tuned, galaxies would never have formed. Adams admits this on pages 40-41 of his paper. There is an unproven theory designed to explain away this fine-tuning, in the sense of imagining some other circumstances that might have explained it. But regardless of that, such a case of fine-tuning should be included in any summary table listing the universe's fine-tuning (particularly since the theory designed to explain away the fine-tuning of the universe's expansion rate, called the cosmic inflation theory, is a theory that has many fine-tuning requirements of its own, and does not result in an actual reduction of the universe's overall fine-tuning even if the theory were true). So why do we not see this case in Adams' summary table entitled "Range of Parameter Values for Habitable Universe"? 

Adams' summary table also makes no mention of the fine-tuning involving the Higgs mass or the Higgs boson, what is called "the hierarchy problem." This is a case of fine-tuning that so bothered particle physicists that many of them spent decades creating speculative theories such as supersymmetry designed to explain away this fine-tuning, which they sometimes said was so precise it was like a pencil balanced on its head.  Referring to this matter, this paper says, "in order to get the required low Higgs mass, the bare mass must be fine-tuned to dozens of significant places." This is clearly one of the biggest cases of cosmic fine-tuning, but Adams has conveniently omitted it from his summary table. 

Then there is the case of the universe's initial entropy, another case of very precise fine-tuning that Adams has also ignored in his summary table. Cosmologists such as Roger Penrose have stated that for the universe to have the relatively low entropy it now has, the entropy at the time of the Big Bang must have been fantastically small, completely at odds with what we would expect by chance. Only universes starting out in an incredibly low entropy state can end up forming galaxies and yielding life. As I discuss here, in a recent book Penrose suggested that the initial entropy conditions were so improbable that it would be more likely that the Earth and all of its organisms would have suddenly formed from a chance collision of particles from outer space. This gigantic case of very precise cosmic fine-tuning is not mentioned on Adams' summary table. 

Then there is the case of the most precise fine-tuning known locally in nature, the precise equality of the absolute value of the proton charge and the absolute value of the electron charge. Each proton in the universe has a mass 1836 times greater than the mass of each electron. From this fact, you might guess that the electric charge on each proton is much greater than the electric charge on each electron. But instead the absolute value of the electric charge on each proton is very precisely the same as the absolute value of the electric charge on each electron (absolute value means the value not considering the sign, which is positive for protons and negative for electrons). A scientific experimental study determined that the absolute value of the proton charge differs by less than one part in 1,000,000,000,000,000,000 from the absolute value of the electron charge. 

This is a coincidence we would expect to find in fewer than 1 in 1,000,000,000,000,000,000 random universes, and it is a case of extremely precise fine-tuning that is absolutely necessary for our existence.  Since the electromagnetic force (one of the four fundamental forces) is roughly 10 to the thirty-seventh power times stronger than the force of gravity that holds planets and stars together, a very slight difference between the absolute value of the proton charge and the absolute value of the electron charge would create an electrical imbalance that would prevent stars and planets from holding together by gravity (as discussed here).  Similarly, a slight difference between the absolute value of the proton charge and the absolute value of the electron charge would prevent organic chemistry in a hundred different ways. Why has Adams failed to mention in his summary table (or anywhere in his paper) so precise a case of biologically necessary fine-tuning? 
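A back-of-the-envelope calculation shows why even a tiny charge mismatch matters. The elementary charge below is the standard physical constant; the atom count and the simplification of one proton-electron pair per atom are rough illustrative assumptions.

```python
# Residual net charge of an ordinary object if the proton and electron
# charges fail to cancel exactly.
e_charge = 1.602e-19   # elementary charge, coulombs (standard constant)
n_pairs = 1e27         # rough proton-electron pair count in a kg-scale object
                       # (illustrative assumption)

def net_charge(epsilon, pairs=n_pairs):
    """Net charge in coulombs if |q_proton| and |q_electron|
    differ by the fraction epsilon."""
    return epsilon * e_charge * pairs

# At the experimental bound (1 part in 10**18) the imbalance is minuscule:
print(net_charge(1e-18))   # about 1.6e-10 coulombs
# At Greenstein's 1-part-in-100-billion threshold it is ten million times larger:
print(net_charge(1e-11))   # about 1.6e-3 coulombs
```

Because electrostatic repulsion scales with the square of the net charge while gravity stays fixed, a residual charge in the millicoulomb range per kilogram is enough to overwhelm the gravitational cohesion of large bodies, which is the point of Greenstein's estimate.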

Clearly, Adams has left out from his summary table most of the best cases of cosmic fine-tuning. His table is like some table entitled "Famous Yankee Hitters" designed to make us think that the New York Yankees haven't had very good hitters, a table that conveniently omits the cases of Babe Ruth, Lou Gehrig, Joe DiMaggio, and Mickey Mantle. 

Below is a table that will serve as a corrective for Adams' misleading table.  I will list some fundamental constants or parameters in the first column. The second column gives a rough idea of the size of the possibility space regarding the particular item in the first column. The third column tells us whether the constant, parameter or situation is more unlikely than 1 chance in 10,000 to be in the right range, purely by chance. The fourth column tells us whether the constant, parameter or situation is more unlikely than 1 chance in a billion to be in the right range, purely by chance.   For the cosmic parameters "in the right range" means "as suitable for long-lasting intelligent life as the item is in our universe." The "Yes" answers follow from various sensitivity estimates in this post, in Adams' paper, and in the existing literature on this topic (which includes these items).  For simplicity I'll skip several items of cosmic fine tuning such as those involving quark masses and the electron/proton mass ratio. 


Parameter or Constant or Situation | Size of Possibility Space | More Unlikely Than 1 in 10,000 for Item to Be in Right Range, by Chance? | More Unlikely Than 1 in 1,000,000,000 for Item to Be in Right Range, by Chance?
Strong nuclear coupling constant | 10^40 (difference between weakest fundamental force and strongest one) | Yes | Yes
Gravitational coupling constant | 10^40 (difference between weakest fundamental force and strongest one) | Yes | Yes
Electromagnetic coupling constant | 10^40 (difference between weakest fundamental force and strongest one) | Yes | Yes
Ratio of absolute value of proton charge to absolute value of electron charge | .000000001 to 1,000,000,000 | Yes | Yes
Ratio of universe's initial density to critical density (related to initial expansion rate) | 1/10^40 to 10^40 | Yes | Yes
Initial cosmic entropy level | Between 0 and some incredibly large number | Yes | Yes
Vacuum energy density | Between 0 and 10^60 or 10^120 times its current value, as suggested by quantum field theory | Yes | Yes
AM radio tuned to your favorite AM station | 535-1605 kilohertz | No | No
FM radio tuned to your favorite FM station | 88-108 megahertz | No | No

The truth is that each of these cases of cosmic fine-tuning is trillions of times more unlikely to have occurred by chance than a random radio being exactly tuned to your favorite AM station or FM station. To imagine the overall likelihood of all of these cases of fine-tuning happening accidentally, we might imagine an archer who successfully hits more than 7 archery targets randomly positioned around the circumference of a 180-meter circle, with the archer being at the center of the circle and blindfolded. 
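The archer analogy can be roughly quantified. The text specifies only the 180-meter circumference and the seven targets; the half-meter target width in this sketch is an assumed illustrative value.

```python
# Probability that a blindfolded archer at the center hits all seven
# targets spaced around a 180-meter circumference, one shot per target.
target_width_m = 0.5     # assumed target width
circumference_m = 180.0
n_targets = 7

p_one_hit = target_width_m / circumference_m   # one random shot hits its target
p_all_hits = p_one_hit ** n_targets            # all seven hit, independently

print(f"{p_all_hits:.1e}")  # on the order of 1e-18
```

Under these assumptions the combined probability is around one in a quintillion, the same order of magnitude as the single proton/electron charge coincidence discussed above.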

cosmic fine tuning

Postscript: In June 2019 some scientists published a paper entitled "An update on fine-tunings in the triple alpha process." They end by stating that "a relatively small ~0.5% shift" in the light quark mass "would eliminate carbon-oxygen based life from the universe." This is a case of very precise fine-tuning, contrary to the claims of Adams, who tries to make the light quark mass look like a parameter that could have differed by three orders of magnitude (a factor of about 1,000). The scientists also state that "such life could possibly persist up to ~7.5% shifts" in the electromagnetic coupling constant (equivalent to the fine structure constant). This is also a case of precise fine-tuning, contrary to the claims of Adams, who tries to make the fine structure constant look like a parameter that could have differed by four orders of magnitude (a factor of about 10,000).

In his table on page 140 Adams has told us that the electron/proton mass ratio isn't very sensitive to changes. But in his book The Particle at the End of the Universe, pages 145 to 146, physicist Sean Carroll states the following:

 “The size of atoms...is determined by...the mass of the electron. If that mass were less, atoms would be a lot larger. .. If the mass of the electron changed just a little bit, we would have things like 'molecules' and 'chemistry', but the specific rules that we know in the real world would change in important ways...Complicated molecules like DNA or proteins or living cells would be messed up beyond repair. To bring it home: Change the mass of the electron just a little bit, and all life would instantly end.”  

Tuesday, September 25, 2018

Barash's Poor Logic on Cosmic Fine-Tuning

Our universe seems to be incredibly fine-tuned to allow the existence of biological organisms such as ourselves. Against all odds, the fundamental constants have values that allow the existence of long-lived stars, planets and living beings. Make minor changes in any of a dozen places in the universe's fundamental constants and laws, and observers such as us would be impossible.

An example (one of many discussed here) is the exact numerical equality of the absolute value of the proton charge and the electron charge. Given that each proton has a mass 1836 times greater than the mass of each electron, we would not at all expect these two fundamental particles to have electric charges that are exactly equal or exactly opposite. But according to modern science the electric charge of each electron in the universe is the exact opposite of the electric charge of each proton in the universe. The equality has been experimentally verified to at least 18 decimal places. We would not expect a coincidence like this to occur even once in a trillion random universes. The scientist Greenstein has stated that if this coincidence did not exist, planets could not hold together, because the electromagnetic repulsion between particles in a planet would totally overwhelm the gravity that holds the planet together (electromagnetism being a fundamental force more than a trillion trillion trillion times stronger than gravitation).
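The "trillion random universes" remark can be made concrete with a little arithmetic. Treating the proton/electron charge-magnitude ratio as a random number that must match 1 to 18 decimal places is a simplifying assumption of mine, not a claim from the physics literature:

```python
# Assumed toy model: a random ratio matches 1 to 18 decimal places with
# probability about 1 in 10**18, so one match is expected per 10**18 tries.
universes_per_match = 10**18
trillion = 10**12

# A trillion random universes would still fall short by a factor of a million.
print(universes_per_match // trillion)   # 1000000
```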


In Aeon magazine we recently had an evolutionary biologist named David Barash do his best to sweep under the rug the gigantic reality of cosmic fine-tuning. His “Anthropic Arrogance” essay is a grab bag of points that do not add up to any forceful objection to claims such as, “Our universe shows life-favoring characteristics so fantastically improbable that we should suspect a grand purpose behind its physical reality.”

Barash tries to raise doubt about the topic by quoting Einstein's statement “What really interests me is whether God had any choice in the creation of the world?” He states the following:

Note that Einstein was asking if the deep laws of physics might have in fact fixed the various physical constants of the Universe as the only values that they could possibly have, given the nature of reality, rather than having been ordained for some ultimate end – notably, us. At present, we simply don’t know whether the way the world works is the only way it could; in short, whether currently identified laws and physical constants are somehow bound together, according to physical law, irrespective of whether human beings – or anything else – eventuated.

But it is not at all correct to claim that “we don't know whether the way the world works is the only way it could.” We do know exactly such a thing. We know, for example, that each proton has a mass 1836 times greater than each electron, and that the electric charge of each proton is the exact opposite of each electron. There is no a priori reason why such numbers could not have been totally different. And so it is with all of the fundamental constants of the universe. The hope occasionally expressed by physicists (that they might one day have a super-theory that explains all the fundamental constants and laws) is just a fantasy hope, kind of like some child saying, “One day I hope to own a marble mountain-top castle in Spain.” There does not exist any theory showing why any of the fundamental constants could not have had a vastly different value. In the most unlikely event that physicists ever produce such a “this explains it all” theory, then appealing to such a thing may have some force; but until then, appealing to such a highly improbable possibility has no force.

Barash then resorts to a quite ridiculous argument sometimes made, that the universe isn't so fine-tuned for life because most of it is inhospitable to life. He says, “The stark truth is that nearly all of it is incompatible with life – at least our carbon-based, water-dependent version of it.” True, since the majority of the universe is just empty space. But anyone familiar with gravitation will know that you can't have a life-bearing planet without most of a solar system or galaxy being empty space. For example, if the space of the solar system mostly consisted of planets, the mass of such planets would exert so much gravitational force that the atmosphere of the earth would be pulled into space, and no one could breathe (not to mention that you'd be pulled out into outer space whenever you walked outside of your door).

Barash then resorts to a completely fallacious “extremely improbable things are very common” argument often made when materialists discuss cosmic fine-tuning. He points out that if you shuffle a set of cards, the chance of getting that exact sequence is something like 1 in 10 to the sixtieth power. Similarly, he reasons, if you strike a golf ball, there are trillions of different positions where the golf ball could end up, each extremely unlikely. Barash states, “For us to marvel at the fact of our existing (in a Universe that permits that existence) is comparable to a golf ball being amazed at the fact that it ended up wherever it did.”

This type of reasoning is completely erroneous, for it commits the fallacy of false analogy: drawing an analogy between two things that are not relevantly similar. A shuffled deck of cards and a randomly landed golf ball are not comparable to a fine-tuned universe, because a fine-tuned universe resembles a product of design, while a random card sequence and a random golf-ball landing spot do not. It is therefore erroneous to claim that, “For us to marvel at the fact of our existing (in a Universe that permits that existence) is comparable to a golf ball being amazed at the fact that it ended up wherever it did” – for in the first case there is something that resembles design, and in the second case there isn't.

Barash continues the same witless reasoning by talking about the improbability of one particular sperm uniting with one particular egg to produce a baby. It's the same “extremely improbable things are very common” bad reasoning. He points out that since there are about 150 million sperm in a man's ejaculate, it's very improbable that any one particular sperm would unite with an egg. But this is also a false analogy, because those sperm are effectively interchangeable, so the uniting of one particular sperm with an egg does not resemble a product of design, or even a terribly lucky outcome. So it's a false analogy to compare such a thing with a fine-tuned universe that resembles a product of design and has “lucky coincidences” all over the place.

Below is a conversation that illustrates the fallacious nature of the type of reasoning Barash uses in this case:

Son: Bye, Mom. I'm going to Las Vegas, and I will gamble my college fund at the roulette table, continuing to bet all my winnings until I become a billionaire.
Mom: That's crazy – you're all but certain to lose it all.
Son: But Mom, haven't you heard that very improbable things often happen? Why, if I shuffle this deck of cards, the chance of getting that particular sequence of cards is one in a gazillion. So my chance of winning the billion isn't so low.
Mom: You silly goose! Only run-of-the-mill, humdrum improbable things happen all the time. Extremely lucky random events don't happen often.

The son's reasoning is entirely fallacious. Very improbable outcomes that are not lucky and do not resemble a product of design happen all the time, but it is extremely rare for a random outcome to resemble a product of design. So the chances of his winning the billion are every bit as low as his mother thinks.
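The son's fallacy can be illustrated with a short simulation, using my own arbitrary trial count and target ordering: every shuffled ordering is fantastically improbable, yet shuffles happen constantly; what essentially never happens is hitting an ordering specified in advance.

```python
import random
from math import factorial

deck = list(range(52))
target = list(range(52))    # an ordering specified in advance

random.seed(0)              # fixed seed for reproducibility
matches = 0
trials = 100_000
for _ in range(trials):
    shuffled = deck[:]
    random.shuffle(shuffled)
    if shuffled == target:  # analogous to an outcome that "resembles design"
        matches += 1

# Every one of the 100,000 shuffles produced an ordering with odds below
# 1 in 10**67, yet none of them matched the pre-specified target.
print(matches)                   # 0
print(factorial(52) > 10**67)    # True
```

The same asymmetry applies to the roulette example: improbable-but-unspecified outcomes are guaranteed on every spin, while a pre-specified jackpot is not.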

Barash then asks two rhetorical questions about an asteroid collision millions of years ago, and I may note that such questions do nothing to advance his case.

Barash then appeals to the possibility of the multiverse as an explanation for cosmic fine-tuning, the idea that there are a large number of other universes. The fallacy of such an appeal is discussed in detail in this post, in which I give six reasons why such an appeal is fallacious. The best reason for rejecting the multiverse as an explanation for cosmic fine-tuning is the simple fact that you do not increase the likelihood of any one random trial being successful by increasing the number of random trials. For example, your chance of winning a million dollars in a weekend at Las Vegas is exactly the same regardless of whether or not there are an infinity of universes filled with gamblers who gamble at casinos. So whether or not there are a vast number of other universes has no effect on the probability of our universe being accidentally habitable. If there are a sufficient number of improbable coincidences, adding up forcefully to an appearance of design, we should suspect such design if we think there is only one universe; and we should suspect such design with exactly the same force if we think there are many other universes.
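The claim that extra trials elsewhere do not change the odds of any one trial can be checked with a toy simulation (the win probability, trial count, and crowd size here are arbitrary choices of mine):

```python
import random

random.seed(1)
p_win = 0.001   # assumed chance that one particular gambler wins

def my_gambler_wins():
    return random.random() < p_win

trials = 100_000

# Case 1: I gamble alone.
alone = sum(my_gambler_wins() for _ in range(trials)) / trials

# Case 2: I gamble while 99 other gamblers also play; their results are
# irrelevant when asking about MY chance of winning.
wins = 0
for _ in range(trials):
    mine = my_gambler_wins()
    for _ in range(99):
        my_gambler_wins()    # other gamblers' plays change nothing for me
    wins += mine
with_crowd = wins / trials

# Both estimates hover around p_win; the crowd makes no difference.
print(abs(alone - p_win) < 0.0005)        # True
print(abs(with_crowd - p_win) < 0.0005)   # True
```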


Bad reasoning about your chances at Las Vegas

Barash then has a long paragraph building on the statement, “Shanks suggests that the multiverse hypothesis ‘does to the anthropic Universe what Copernicus’s heliocentric hypothesis did to the cosmological vision of the Earth as a fixed centre of the Universe’.” In the paragraph he drops the names of Galileo, Kepler and Copernicus. But it's not an appropriate comparison, because the conclusions of Galileo, Kepler, and Copernicus were based on observations, and there are zero observations of any other universe. What we have going on here is the same rhetorical trick that I discuss in my post “When Scientific Theorists Use 'Prestige by Association' Ploys.” Barash is trying to give some credibility to the groundless notion of the multiverse by trying to draw a very strained association between the multiverse and the hallowed scientific names of Galileo, Kepler and Copernicus. We shouldn't be fooled by such a maneuver.

Barash then appeals to the possibility of extraterrestrial life-forms that can get by on conditions much worse than we have. But this possibility does nothing to weaken the case for cosmic fine-tuning. I can give an analogy to explain why. If I come to a log cabin in the woods, I may reason that it's too improbable that such a house could have appeared by a chance arrangement of falling logs, and that the house is probably the product of design. If you were there with me, you might say, “That's not true because an organism could have just used a lucky tree hollow as its home.” But that does nothing to defeat my argument. Similarly, if it's incredibly improbable that long-lived stable sun-like stars could exist in a random universe (and it is for the reasons discussed here), the existence of conditions that allow such stars strengthens the case for cosmic fine-tuning, regardless of whether some organism could barely get by living on planets revolving around stars that are less favorable for life, such as a star that periodically zaps its planets with high doses of radiation.

Barash then refers us to Lee Smolin's groundless speculation that attempted to combine the idea of natural selection with some weird speculation that collapsing black holes spit out baby universes. This wildly imaginative theory, known as the theory of cosmological natural selection, has not been widely accepted by physicists. We know of no evidence at all that black holes spit out new universes. And since universes don't have genes, and don't mate with other universes, it is preposterous for Smolin to be claiming that natural selection might come into play on the level of universes. Even if it were true that black holes did spawn child universes, this would do nothing to explain the fine-tuned characteristics of our universe, for the same reason that natural selection on planet Earth does not explain the appearance of very complex visible biological innovations (contrary to the claims of those like Barash).

The reason is the same in both cases: the fact that natural selection cannot occur in regard to some particular innovation until after that innovation appears. We cannot explain the appearance of something like a vision system in organisms by saying that such an innovation improved their survival and reproduction rate, because such an improvement (the same as a degree of natural selection) would not occur until after such a biological innovation first appeared; and a consequence that follows something is never the cause of that thing. For similar reasons, natural selection could not be the cause of some universe being fine-tuned. The idea of yanking natural selection from the biological world and trying to fit it into the vastly different world of cosmology makes no more sense than trying to apply Freudian psychology to a discussion of colliding subatomic particles.

Next in Barash's essay he reminds us of the surprising ending of Carl Sagan's novel Contact, in which scientists computing pi (the ratio of a circle's circumference to its diameter) to many additional digits find a gigantic circle embedded within the digits of pi. In Sagan's novel this discovery is treated as proof that the universe had a designer. This ending was omitted from the movie version of Contact. I'm not sure why Barash brings this up. Perhaps he is trying to suggest that scientists finding something suggesting the universe is designed belongs only in fiction. But the ending suggests that this very influential scientist (Sagan) was not too averse to such a possibility, so it does nothing to help the case Barash is trying to make.

In his last paragraph Barash builds on his previous attempts to associate the idea of cosmic fine-tuning with a claim that humans are the center of the universe, or that the universe was designed for humans. But there is no necessary association between the two, and few people promoting the idea of cosmic fine-tuning claim that the universe was designed for humans specifically, preferring the more general idea that the universe may have been designed for intelligent observers. So in this regard Barash is attacking a straw man. Someone can believe that the universe was designed for life, and that there are numerous different types of intelligent life forms scattered across the universe. You can believe that without believing that humans are unique, and without believing that humans are the most advanced biological organisms, and without believing that humans are the centerpiece of the universe.

So you may have Copernican-style objections about humans being the centerpiece of the universe, but that does nothing to defeat or discredit the idea of cosmic fine-tuning. Whether the universe was fine-tuned for living observers, and whether man is the center of the universe or the most advanced organism in the universe are two entirely different questions. The title of Barash's essay is “Anthropic Arrogance.” But there is nothing arrogant at all about noticing a long series of extremely lucky coincidences and favorable facets of the universe's fundamental constants and laws, and suspecting that more than mere chance is involved.

After noticing the fallacy-ridden reasoning of evolutionary biologist Barash on this topic of the universe's fine-tuned physics, we should ask: in what other places have evolutionary biologists got away with fallacious reasoning? We should then go back and scrutinize their more doubtful statements, such as claims that vastly complex functional systems such as vision systems (more complex than a smartphone) can be explained by saying that they appeared because of accumulations of random mutations, a kind of “stuff piles up” explanation as vacuous as the explanation of “stuff happens.”

Friday, August 24, 2018

More Poor Answers at the "Ask Philosophers" Site

The “Ask Philosophers” website (www.askphilosophers.org) is a site that consists of questions submitted by the public, with answers given by philosophers. No doubt there is much wisdom to be found at this site, although I found quite a few answers that were poor or illogical – such as the ones listed in my previous post. Below are some more examples.

Question 464 is an excellent and concise question: “Is it more probable that a universe that looks designed is created by a designer than by random natural forces?” In reply to this question, Stanford philosopher Mark Crimmins gives a long answer that is poor indeed. He tries to argue that it is hard to exactly calculate just precisely how improbable it might be that a universe was designed, no matter what characteristics it had. Using the term “designy-ness” to apparently mean “resembling something designed,” Crimmins then states, “the mere 'designy-ness' of our universe is not by itself a good reason for confidence that it was designed.”

This doesn't make sense. If we find ourselves in a garden that appears to be designed, with 50 neat, even rows of flowers, that certainly is a good reason for confidence that the garden was designed. If we find ourselves in a structure that appears to be designed, with nice even walls, nice even floors and a nice convenient roof, that certainly is a good reason for confidence that such a structure was designed. And if we find ourselves against enormous odds in a universe with many laws favoring our existence, and with many fundamental constants that have just the right values allowing us to exist, this “designy-ness” would seem to be a good reason for confidence that such a universe was designed. If you wish to escape such a conclusion, your only hope would be to somehow specify some plausible theory as to how a universe might accidentally have such favorable characteristics by chance or by natural factors. It is illogical to argue, as Crimmins has, that the appearance of design in a universe is no basis for confidence that it is designed. I may note that confidence (which may be defined as thinking something is likely true) has lower evidence requirements than certainty.

As for his “it's too hard to make an exact calculation of the probability” type of reasoning, anyone can defeat that by giving some simple examples. If I come to your backyard, and see a house of cards on the back porch, I can have great confidence that such a thing is the product of design rather than chance, even though I cannot calculate precisely how unlikely it might be that someone might throw a deck of cards into the air, and for a house of cards to then appear. And if I see a log cabin house in the woods, I can have very great confidence that such a thing is a product of design, even though I cannot exactly calculate how improbable it might be that falling trees in the woods would randomly form into a log cabin.

In question 24743, someone asks the question, “How can a certain bunch of atoms be more self aware than another bunch?” The question is a very good one. We can imagine a shoe box that has exactly the same element abundances as the human brain, with the same number of grams of carbon, the same number of grams of oxygen, and so forth. How could a human brain produce consciousness, when the shoe box with the same abundances of elements does not? We can't plausibly answer the question by saying that there is some particular arrangement of the atoms that produces self-awareness.

Let us imagine a machine that rearranges the atoms of a human brain every ten minutes, producing a different combination of positions each time. It seems to make no sense to think that the machine might run for a million years without producing any self-awareness, and that some particular combination of these atoms would then suddenly produce self-awareness.

The answer to this question given by philosopher Stephen Maitzen is a poor one. He merely says, “There's good evidence that the answer has to do with whether a given bunch of atoms composes a being that possesses a complex network of neurons.” There is no such evidence. No one has the slightest idea of how neurons or a network of neurons could produce self-awareness. If you try to suggest that somehow the fact of all of the atoms being connected produces self-awareness, we can point out that according to such reasoning the connected atoms in a crystal lattice should be self-aware, or the densely packed and connected vines in the Amazon forest should be self-aware.

A good answer to question 24743 is to say that there is no obvious reason why one set of atoms in a brain would be more self-aware than any other set of atoms with the same abundances of elements, and that such a thing is one of many reasons for thinking that our self-awareness does not come from our brains, but from some deeper reality, probably a spiritual reality.

In question 4922, someone asks about the anthropic principle, asking whether it is a tautology, or “is there something more substantive behind it.” The anthropic principle (sometimes defined as the principle that the universe must have characteristics that allow observers to exist in it) is a principle that was invoked after scientists discovered more and more cases of cosmic fine-tuning, cases in which our universe has immensely improbable characteristics necessary for living beings to exist in it. You can find many examples of these cases of cosmic fine-tuning by doing a Google search using either the phrase “anthropic principle” or “cosmic fine-tuning,” or reading this post or this post.

The answer given to question 4922 by philosopher Nicholas D. Smith is a poor one. Smith says the anthropic principle “strikes me as neither a tautology nor as something that has anything 'more substantive behind it.'” Whether we can derive any principle like the anthropic principle from the many cases of cosmic fine-tuning is debatable, but clearly there is something enormously substantive that has triggered discussions of the anthropic principle. That something is the fact of cosmic fine-tuning. If our universe has many cases of having just the right characteristics, characteristics fantastically unlikely for a random universe to have, that philosophically is a very big deal, and one of the most important things scientists have ever discovered – not something that can be dismissed as lacking in substance.

cosmic fine-tuning
Against all odds, our universe got many "royal flushes"

In question 40, someone asks a classic philosophical question: “Why does anything exist?” The questioner says, “Wouldn't it be more believable if nothing existed?” The answer to this question given by philosopher Jay L. Garfield is a poor one. After suggesting that the questioner read a book by Wittgenstein (the last thing anyone should do for insight on such a matter), Garfield merely suggests that the question “might not really be a real question at all.” That's hardly a decent answer to such a question.

An intelligent response to the question of “why is there something rather than nothing” would be one that acknowledged why the question is an extremely natural and very substantive question indeed. It is indeed baffling why anything exists. Imagining a counterfactual, we can imagine a universe with no matter, no energy, no minds, and no God. In fact, such a state of existence would be the simplest possible state of existence. And we are tempted to regard such a simplest-possible state of existence as being the most plausible state of existence imaginable, for if there were eternal nothingness there would be zero problems of explaining why reality is the way it is. We can get a hint of a possible solution to the problem of existence: it might be solved by supposing an ultimate reality whose existence is necessary rather than contingent. But with our limited minds, we probably cannot figure out a full and final answer as to why there is something rather than nothing. We have strong reason to suspect, however, that if you fully understood why there is something rather than nothing, you would have the answer to many other age-old questions.

In question 3363, a person very intelligently states the following:

When I think about the organic lump of brain in my head understanding the universe, or anything at all, it seems absurdly unlikely. That lump of tissue seems to me more like a pancreas than a super-computer, and I have a hard time understanding how organic tissue is able to reach conclusions about the universe or existence.

We get an answer from philosopher Allen Stairs, but only a poor one. Stairs claims, “Neuroscientists will be able to tell you in a good deal of detail why the brain is better suited to computing than the pancreas is.” This statement implies that neuroscientists have some idea of how it is that a brain can think or create ideas or generate understanding of abstract concepts. They have no such thing. As discussed here, no neuroscientist has ever given a remotely persuasive explanation as to how a brain could understand anything or generate an idea or engage in abstract reasoning. A good answer to question 3363 would have commended the person raising the question, saying that he has raised a very good point that has still not been answered, and has at least brought attention to an important shortcoming of modern neuroscience. Philosophically the point raised by question 3363 is a very important one. The lack of any coherent understanding as to how neurons could produce mental phenomena such as consciousness, understanding and ideas is one of the major reasons for rejecting the idea that the mind is purely or mainly the product of the brain. Many other reasons are discussed at this site.

In question 4165 a person raises the topic of near-death experiences, and asks whether philosophy has an opinion on this type of experience. The answer we get from Allen Stairs is a poor one. He attempts to argue that “it's not clear that it would do much to support the idea that the mind is separate from the body,” even if someone reported floating out of his body and seeing some information that was taped to the top of a tall object, information he should have been unable to see from an operating table. This opinion makes no sense. Such evidence would indeed do much to support the idea that the mind is separate from the body. This type of evidence has already been gathered; see here for some dramatic cases similar to what the questioner discussed (verified information that someone acquired during a near-death experience, even though it should have been impossible for him to have acquired such information through normal sensory experience). 

Stairs states the following to try and support his strange claim that people repeatedly reporting floating out of their bodies does not support the idea that the mind is separate from the brain:

How would that work? Does the bodiless mind have eyes? How did the interaction between whatever was up there on top of that tall object and the disembodied mind work? How did the information get stored? How did the mind reconnect with the patient's brain? The point isn't that the mind must be embodied. The point is that a case like this would only amount to good evidence for minds separate from bodies if that idea gave us a good explanation for the case. As it stands, it's not clear that it gives us much of an explanation at all, let alone the best one.

Stairs seems to be appealing here to a kind of principle that something isn't an explanation if it raises unanswered questions. That is not a sound principle at all, and in general in the history of science we find that important explanations usually raise many unanswered questions. For example, if we were to explain the rotation speeds of stars around the center of the galaxy by the explanation of dark matter, as many astrophysicists like to do, that raises quite a few unanswered questions, such as what type of particle dark matter is made up of, and how dark matter interacts with ordinary matter.

As for Stairs' insinuation that postulating a mind or soul separate from the body is “not much of an explanation at all,” that's not at all true. By postulating such a thing, it would seem that we can explain many things all at once. By postulating a soul as a repository of our memories, we can explain why people are able to remember things for 50 years, despite the very rapid protein turnover in synapses which should prevent brains from storing memories for longer than a few weeks. By postulating a soul as a repository of our memories, we can explain why humans are able to instantly recall old and obscure memories, something that cannot be plausibly explained with the idea that memories are stored in brains (which creates a most severe “how could a brain instantly find a needle in a haystack” problem discussed here). By postulating a soul as the source of our intelligence, we can explain the fact (discussed here) that epileptic children who have hemispherectomy operations (the surgical removal of half of their brains) suffer only slight decreases in IQ, or none at all. By postulating a soul, we can explain how humans score at a 32 percent accuracy on ganzfeld ESP tests in which the expected chance result is only 25 percent (ESP being quite compatible with the idea of a soul). And by postulating a soul apart from our body, we can explain why so many people have near-death experiences in which they report their consciousness moving out of their bodies. So far from being “not much of an explanation at all” as Stairs suggests, by postulating a soul separate from the body, it would seem that we can explain quite a few things in one fell swoop.