Wednesday, June 29, 2016

Improving the Conformity Factories Known as Science Graduate Schools

Modern science started out as a rather anti-authoritarian type of thing. The great scientists of the Enlightenment era often opposed ancient authorities such as Aristotle and more modern authorities such as the Catholic priesthood. But in the past hundred years scientific academia has itself taken on a strongly authoritarian character. Scientists have set themselves up as a kind of new priesthood. Of course, any priesthood requires conformity and regimentation. One key tool in the production of conformity and regimentation among scientists is the typical science graduate school.


Consider the way in which students are typically taught in such a school. In the modern Internet age, with so many opportunities for computer-assisted study and self-study, there are 101 ways in which students could learn without being in the shadow or gravitational pull of authority figures, without being subject to conformity pressure. But the average graduate school seems to teach classes in the same old way.


Students are given some textbooks produced by professors (typically not the professor teaching the class). Students sit in lecture halls listening to lectures by a professor, who may assign certain chapters in the book for reading. The subject matter may have 1001 great uncertainties, but the combination of the officially approved science textbook and the similar lecture by the professor sends a message to the student:


This is the official version of truth we want you to accept. This is the official party line that you should not question. Please accept this standardized pablum we are spoon-feeding you.


But, you may object, isn't it possible to engage in a debate in a science graduate school? Yes, it is technically possible, but the setup makes it pretty unlikely that a student will make any lengthy challenge to what is being taught in class. A setup in which there is a professor standing in front of a class sends a kind of “here is the guy who knows this stuff, so accept what he's saying” message that is reinforced by the officially approved textbook. A setup in which there are lots of students sends a kind of “each student has just a little time slice to talk” message in which it would be considered weird for any student to stand up and make a substantive ten-minute challenge to what the professor is teaching. After one or two minutes, such a student would probably be cut off by the professor, who might do something like continue lecturing or ask some other student for his opinion. Then there's the fact that if your professor is giving you a grade, you may feel he will grade you more poorly if you challenge his teachings.


What are the output products of this type of science graduate school? The output products are too often what we may call sheepentists, a word constructed from the word “sheep” and the word “scientist.” A sheepentist is a scientist who has been conditioned to be a meek creature of the herd. He will move in whatever direction the members of his science herd are moving in. If a particular theory becomes fashionable among scientists, a sheepentist will be sure to parrot the claims of that theory.



But how might we teach scientists differently? You might start with the textbooks. The first step would be to abolish the use of fixed hardcopy textbooks in science classes taught in graduate schools, making all textbooks online. The second step would be to create a system whereby student comments could be anonymously inserted anywhere in the text of a textbook. Whenever any student read a claim in a textbook that he felt was dubious or unsubstantiated, the student could insert into the textbook his comments or rebuttal, at exactly the point in the textbook where the claim was made. Any such insertions would be permanently preserved in the textbook, so that, for example, if a textbook was used in 2017, 2018 and 2019, then in 2019 students would read (in the middle of the text) all comments made by students in the previous years of 2017 and 2018.


Such a system presumably would not work in high school, and perhaps not even for freshman college classes – since the text might be cluttered by junk anonymous comments. But presumably by the time someone has entered graduate school, we can assume that he will not be submitting sophomoric or obscene comments to be read by future students. Anonymous comments would be vital to allow people to express opposing opinions without fear of being socially ostracized or peer-pressured within the small subculture of the science graduate school. Some computer software could assign academic credit based on the number and length of comments a student added, giving an incentive for students to insert critical comments into the textbook.
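
To make the mechanics concrete, here is a minimal sketch in Python of how such permanently preserved anonymous comments, and comment-based credit, might be represented. Every name and credit weight below is invented for illustration; a real system would also need persistent storage and safeguards against abuse.

    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        year: int   # the year the comment was made (e.g., 2017)
        text: str   # the student's rebuttal or remark
        # Deliberately no author field: comments are stored anonymously.

    @dataclass
    class Paragraph:
        text: str
        comments: list = field(default_factory=list)   # preserved permanently

        def add_comment(self, year, text):
            # Comments are appended and never deleted, so a 2019 reader
            # sees everything inserted by the students of 2017 and 2018.
            self.comments.append(Comment(year, text))

    def comment_credit(comments, per_comment=0.1, per_hundred_words=0.05):
        # Credit grows with the number and length of a student's comments;
        # the weights here are arbitrary placeholders, not a real policy.
        words = sum(len(c.text.split()) for c in comments)
        return per_comment * len(comments) + per_hundred_words * (words / 100)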


Imagine the benefits of such a system. Instead of being spoon-fed some “official party line” of truth written only by some conformists, our science textbook reader in graduate school might get a full spectrum of opinions.


Another improvement would be a system of real-time anonymous feedback from students, which could easily be done using Internet technology. As a professor was droning on to his students, his slide show presentation might be interrupted by popup messages telling him his presentation was useless or unfair or unintelligible. Schools could also allow any independent-minded student to opt out of professorial instruction, using online and self-study options (while preserving in-class testing).


Another way of diminishing the “conformity factory” aspect of science graduate schools would be to allow students to choose their own projects for a master's thesis or a doctoral thesis, without getting approval from professors. Requiring approval from professors discourages paradigm-challenging research projects, and encourages “same old same old” type of research. Let a vote of 3 graduate students be sufficient authorization for a master's thesis project or a doctoral thesis project.


Another way of diminishing the “conformity factory” aspect of science graduate schools would be to create a system of frequent public debates inside the school. All of the key assumptions that are rarely questioned would be publicly debated. For example, in a biology school there might be frequent debates such as these:


Do We Really Understand the Cause of Biological Complexity?
Does Your Brain Actually Produce Your Consciousness?
Are We the Only Intelligent Species in Our Galaxy?
Is DNA Actually a Blueprint for Making a Human?
Do We Actually Understand What Causes a Fertilized Egg to Become a Baby?
Are Mental Illnesses Primarily Biological in Origin?
Are Genetically Modified Foods Potentially Hazardous?
Do We Have a Credible Theory for the Origin of the Human Mind?


The debates might be between two contestants, the first having 40 minutes, the second having 40 minutes, and both having 15 minutes for rebuttal. One contestant would argue the “Yes” side and the other the “No” side. A contestant might be either a student or a professor. Academic credit could be given to any student participating as a debate contestant. A winner would be declared for each debate, based on anonymous audience judging of which contestant won. Extra academic credit would be given to any student who was judged the winner of a debate in which a professor was the opposing contestant.


So if you were a student you might get 3 academic credits for being a contestant in a debate, and 6 academic credits for winning a debate in which a professor was your opponent. This would create an incentive for nonconformity and opinions challenging conventional wisdom, one that would help to counteract the enormous conformity pressure in science graduate schools. Students could also be given academic credit for simply attending a certain number of debates, which would help to make sure that they were exposed to both sides of issues.
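
As a sketch, that credit rule is simple enough to state directly in code (the numbers are the ones used above; the function name and signature are invented):

    def debate_credits(was_contestant, won, opponent_was_professor):
        # 3 credits for being a contestant; 6 for winning a debate in
        # which a professor was the opposing contestant.
        if not was_contestant:
            return 0
        if won and opponent_was_professor:
            return 6
        return 3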


These are only some ways in which our current “conformity factory” science graduate schools could be reformed to produce outputs other than conformist “sheepentists” who act like followers of the herd. Among the outputs of such improved schools might be bientists (scientists who have been educated in both sides of controversial issues) and defyentists (scientists who defy unwarranted but entrenched assumptions of the scientific community). If you don't like the idea of creating defyentists, you might want to study tech culture, which currently assigns a great value to what is called disruptive innovation. We need science graduate schools that will generate more disruptive thinkers who will challenge the ossified complacent assumptions of the science priesthood.

Saturday, June 25, 2016

Cloud Computing and the Concept of Non-local Consciousness

Here's an interesting puzzle. A man has a ball which he throws very hard, and the ball comes back to him. When he does this, the ball never bounces off of anything, and never touches anything. There is nothing special attached to the ball, nothing like strings or elastic bands. The ball has no kind of special flying ability. How does the ball keep coming back to the man? Think about this for a few seconds before reading further.

The answer is really quite simple. The man throws the ball straight up into the air, and gravity returns the ball to him. This is a classic example involving lateral thinking, also known as “outside the box” thinking. Many people are puzzled by this problem, because they confine their thoughts to a little “box” that limits their thinking. In this case the “box” is the assumption that the man must be throwing the ball in a roughly horizontal fashion, like some baseball pitcher.

Like people stumped by this “return of the ball” problem, the typical neuroscientist of today seems to be the prisoner of unwarranted assumptions. Faced with the problem of consciousness, and the problem of how our memories are stored, a typical neuroscientist confines himself to the “inside the box” assumption that the mind must somehow be generated by the brain. So he keeps thinking about some way that chemicals or neuron patterns or electricity might generate consciousness or store memories. This approach has been futile. After decades of knocking their heads against this wall, scientists still have no evidence of physical memory traces inside the brain, nor do they have any real understanding of how things such as concepts can arise from the brain. As Rupert Sheldrake says on page 194 of his excellent book Science Set Free, “More than a century of intensive, well-funded research has failed to pin down memory traces in brains.”

The actual answer to the riddle of consciousness may lie in a non-local solution. Our consciousness might arise not from our brains, but from some non-local source.

The idea of a non-local source of consciousness may be entirely baffling at first, but there is an analogy that may clarify the idea. The analogy involves cloud computing. Let's compare how computers worked during the 1980's and today. Around 1985, if you had a computer, all of your computing and memory storage was done locally. If you did some computer work on some problem, the only thing working on it would be the CPU inside your desktop computer. If you stored some photos on your computer, they would be stored on the hard drive of your computer.

But nowadays we have a very different situation. You may have some tiny hand-held device that does not even have a hard drive. The device may have little or no local memory. But you can still upload your photos and videos in a way that results in them being permanently stored. You also can do all kinds of computing, with the results permanently stored far away. How can this happen? You are interacting with what is nowadays called the Cloud. 

Cloud computing 

I could start telling you the details of how the Cloud works, discussing external web sites and their server farms, and so forth. But for the purposes of this discussion, it is much better if I don't get into such details. It is better to think of the Cloud abstractly, as a kind of ethereal amorphous mega-resource that enables non-local computing and non-local storage of information. After we conceive of the Cloud in such a way, a question arises. Could it be that our own memories are not locally stored, but somehow stored in some cosmic consciousness-generating reality, something a little comparable to the Cloud we are now using for our computing?

Rather than being stored inside our brains, our memories could be stored in a kind of consciousness infrastructure somewhat resembling the Cloud of the internet. Our personalities could also be stored in this nonlocal consciousness infrastructure. Under this model, the main purpose of the brain would be tasks such as the control of autonomic functions, the control of muscles, and the processing of visual stimuli. The real core of our consciousness would be stored “in the cloud.” Just as your photo collection may not exist on your handheld device, but “in the cloud,” your memories may not exist in your brain but “in the cloud,” with the latter cloud being a mysterious consciousness infrastructure servicing multiple bodies.

The concept discussed here is a kind of “client/server” concept. In abstract terms, Facebook.com can be thought of as a server providing services to a vast horde of different clients, each a user who has a Facebook account. Similarly, it might be that human individuals are like clients who receive their consciousness from a mysterious consciousness infrastructure that acts as a kind of non-local “consciousness server” providing consciousness to many local clients.

Our memories and identities may be stored non-locally 

Such a theoretical model does not actually require us to buy into a computational model of the mind, in which the mind is regarded as something like a computer output. The essence of this model is not a computational assumption, but a “client/server” concept. The essence of this model is that local entities (or clients) all are enabled by some external, non-local infrastructure which provides them with something that they could not get by themselves. Just as you cannot get Facebook functionality all by yourself (without internet access), it may be that the little mass of flesh between your ears is totally incapable of producing consciousness by itself, and that your consciousness comes from an external consciousness infrastructure that may be thought of as a kind of “consciousness server” serving multiple clients (different people).
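
For readers who think in code, here is a toy Python sketch of that client/server structure (all names invented). The point is purely structural: the client keeps no store of its own, and everything it “recalls” comes from the server.

    class ConsciousnessServer:
        # Stands in for the hypothetical non-local infrastructure.
        def __init__(self):
            self._memories = {}   # one memory store per client

        def save(self, client_id, memory):
            self._memories.setdefault(client_id, []).append(memory)

        def recall(self, client_id):
            return self._memories.get(client_id, [])

    class ThinClient:
        # Analogous to a brain under the non-local model: it relays
        # requests, but holds no long-term storage of its own.
        def __init__(self, client_id, server):
            self.client_id, self.server = client_id, server

        def remember(self, memory):
            self.server.save(self.client_id, memory)

        def recollect(self):
            return self.server.recall(self.client_id)

    server = ConsciousnessServer()
    client = ThinClient("person-1", server)
    client.remember("my tenth birthday")
    print(client.recollect())   # nothing was ever stored in the client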

Empirical support for such a model may come from a wide variety of paranormal and psychic phenomena which are inexplicable using the hypothesis that your mind is produced entirely by your brain, but which may be explicable through an alternate model in which your memories are stored non-locally, and your consciousness depends on interactions with some great external reality. Empirical support for such a model may also come from studies such as those done by John Lorber, which found astonishing cases of people who had good memories and good intellectual functioning, even though most of their brains were destroyed by diseases such as hydrocephalus.

I may note that some people talking about the idea of non-local consciousness will talk in grandiose metaphysical terms, speculating that consciousness may be in some sense “infinite” or “without beginning and without end.” But the idea of non-local consciousness does not require such lofty notions. It simply requires the idea of an unknown external dependency upon which our consciousness depends.

The history of science has partially been a story of the discovery of previously unknown external dependencies upon which our existence depends. In ancient times people may have thought that the only external dependency that humans relied on was that of the sun. But scientists have gradually discovered more and more other external dependencies, some of them cosmic in scope. First they discovered that our existence depends on a cosmic gravitational force, which holds stars and planets together. Then scientists discovered how our existence depends on a cosmic electromagnetic force or field, which enables the chemistry on which life depends. Later scientists discovered a mysterious cosmic field called the Higgs field, which supposedly “gives mass to all particles.” In light of such previous developments, would it be very surprising if we were to discover one day some “consciousness field” or some external consciousness-enabling infrastructure, acting on a cosmic level to enable memory and consciousness? No, such a discovery would just be another item in the same historical trend of humans discovering more and more external dependencies on which their existence depends.

Postscript: After writing this post, I discovered a 2013 scientific paper, "Long-Term Memory: Scaling of Information to Brain Size" by Donald R. Forsdyke of the Department of Biomedical and Molecular Sciences of Queen's University in Canada. He quotes the physician John Lorber on an honors student with an IQ of 126 and a severe case of hydrocephaly that left him with almost no brain:

Instead of the normal 4.5 centimetre thickness of brain tissue between the ventricles and the cortical surface, there was just a thin layer of mantle measuring a millimeter or so. The cranium is filled mainly with cerebrospinal fluid. … I can’t say whether the mathematics student has a brain weighing 50 grams or 150 grams, but it’s clear that it is nowhere near the normal 1.5 kilograms.

Forsdyke notes two similar cases in more recent years, one from France and another from Brazil.  He then states the following, suggesting a "cloud computing" idea of the mind vastly different from the "brain makes your mind" idea, and rather similar to what I have mentioned here: 

"For all these storage alternatives, the thinking is conventional in that long-term memory is held to be within the brain, and the hydrocephalic cases remain hard to explain. Yet currently most of us, including the present author, would prudently bet on one or more of the stand-alone forms. The unconventional alternatives are that the repository is external to the nervous system, either elsewhere within the body, or extra-corporeal. The former is unlikely since the functions of other body organs are well understood. Remarkably, the latter has been on the table since at least the time of Avicenna and hypothetical mechanisms have been advanced (Talbot 1991; Berkovich 1993; Forsdyke 2009; Doerfler 2010). Its modern metaphor is 'cloud computing.' "

Tuesday, June 21, 2016

A Critique of Carroll's “Big Picture”

We recently saw the publication of “The Big Picture” by physicist Sean Carroll. In this book Sean paints a portrait of a gloomy, purposeless, godless universe. Along the way he commits a few errors. Below are some that I detected.

On page 154 Sean discusses the work of ESP researcher Joseph Rhine, a professor at Duke University who worked mainly during the 1930's and 1940's, producing dramatic evidence for ESP. Sean creates the impression that Rhine's work was a fluke that was never replicated. Referring to Rhine's results, Sean says “many attempts to replicate them failed,” and does not mention any followup experiments supporting Rhine's results. But Sean misleads his reader on this topic. In general, subsequent experiments on ESP have indeed replicated Rhine's results, providing very powerful evidence for ESP. In particular, the ganzfeld sensory deprivation experiments produced average hit rates of about 32% in trials in which the expected chance result was 25% (this paper looks at a series of ganzfeld studies, and concludes the probability of getting the results by chance was only about 2 in 100 million). More recently, tests with autistic children (such as those in the scientific paper of Dr. Diane Hennacy Powell) have provided ESP results very strongly replicating Rhine's work, as have phone and email tests done by Sheldrake.
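
To see how a 32% hit rate against a 25% chance expectation can translate into such a tiny probability, consider a back-of-the-envelope binomial calculation in Python. The 3,000-trial count is a hypothetical round number chosen for illustration, not the actual figure from the cited analysis:

    from scipy.stats import binom

    n, p = 3000, 0.25            # trials, and the hit rate expected by chance
    hits = int(0.32 * n)         # 960 hits, i.e. a 32% hit rate
    p_value = binom.sf(hits - 1, n, p)   # P(at least this many hits by chance)
    print(f"{p_value:.1e}")      # a vanishingly small probability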


Other than a handful of passing references, Sean shows no sign of having studied anything relating to the paranormal or psychic phenomena. But Sean nonetheless dogmatically declares the impossibility of various paranormal claims, on the grounds that they are inconsistent with what he calls “the Core Theory.” On page 158 he says, “And those concepts – the tenets of the Core Theory, and the framework of quantum field theory on which it is based – are enough to tell us that there are no psychic powers.” Later on page 212 he states, “The Core Theory of contemporary physics...leaves no wiggle room for intervention by nonmaterial influences.”


But what exactly is this Core Theory to which he refers? In Appendix A of the book, he tells us: the Core Theory is a physics equation. Sean describes a complicated physics equation, and then says this:


So there you have it: the Core Theory in a nutshell. One equation that tells us the quantum amplitude for the complete set of fields to go from starting configuration (part of a superposition inside a wave function) to some final configuration. We know that the Core Theory, and therefore this equation, can't be the final story.


Sean is saying that this Core Theory is basically a complicated physics equation. The inputs and outputs of that equation are purely physical things, and none of its inputs or outputs are anything that is the slightest bit biological, mental, spiritual, or psychological. It is therefore absolutely false and ludicrous for Sean to claim that this Core Theory has anything whatsoever to say about psychic phenomena, the possibility of intervention by nonmaterial influences, or anything whatsoever that is mental, spiritual, or psychological. And even if there were such an implication, it would have little force, because Sean has admitted that this Core Theory “can't be the final story.” Sean also says the Core Theory is based on the “framework of quantum field theory,” but (as discussed here) quantum field theory is famous for making what is commonly called the “worst prediction in the history of physics,” that the vacuum of space should be super-dense (as Sean himself discusses on page 304 of his book). The idea that something so problematic can set reliable prohibitions against completely unrelated things such as psychic phenomena or nonmaterial influences is therefore doubly indefensible.


On page 220 of the book, Sean discusses near-death experiences, and claims that “no cases of claimed afterlife experiences have been subject to careful scientific protocols.” This is false. For at least 25 years there have been physicians and scientists who have methodically studied near-death experiences using careful scientific protocols. For many years the Journal of Near Death Studies has been publishing scientific papers on near-death experiences, papers that have followed scientific protocols. The AWARE study published in 2014 is a study of near-death experiences authored by a large group of scientists and physicians; it followed careful scientific protocols, and also produced some dramatic results suggestive of a human soul that can leave the body.


Sean seems to have taken a look at the AWARE study, for he mentions its failure to verify out-of-body experiences by using a particular technique involving visual stimuli placed above the beds of people near the brink of death. But he fails to mention that the same study reported a dramatic case of someone who reported an out-of-body experience while his heart was stopped, a person who reported various distinctive details of his heart attack resuscitation attempt that were verified. In this regard, Sean is selectively reporting facts about as fairly as someone who might describe the Apollo program only by saying, “The Apollo program tried to reach the moon, but the Apollo 13 mission failed without landing on the moon,” without mentioning that the Apollo 11 and Apollo 12 missions did successfully land on the moon.


On page 220 of his book, Sean states: “Our status as parts of the physical universe implies that there is no overarching purpose to human lives, at least not inherent in the universe beyond ourselves.” This is a complete non-sequitur, saying that one thing implies another when it does no such thing. We're parts of the universe, so there's no purpose to our lives? The first thing in no way implies the second.


In Chapter 36 of his book, Sean discusses evidence from physics that the universe seems to be fine-tuned in a way that allows life to exist. But Sean tries to discourage anyone who might conclude that the universe was designed for life to appear. On page 305 he states, “We don't know very much about whether life would be possible if the numbers in our universe were very different.” This is entirely false. For more than 35 years scientists have been very carefully considering whether life would be possible if the numbers in our universe were very different, and have reached many relevant conclusions about the matter. We know that galaxies and sun-like stars would not exist if physical constants such as the gravitational constant and the fine-structure constant were different by a relatively small amount. We know that if there were a very tiny difference between the absolute values of the proton charge and the electron charge, then planets could not hold together (the proton charge is the exact opposite of the electron charge, with the numbers matching to at least 22 decimal places). We know that small changes in the strong nuclear force in one direction would make stable molecules impossible, and that small changes in another direction would have prevented the formation of carbon and oxygen on which life depends. We know that a tiny change in the cosmological constant or vacuum energy density would have prevented a habitable universe. We know that a very small change in the neutron mass or the proton/electron mass ratio would lead to a universe in which stars like our sun could not exist. See here or here for more information.

Requirements for our existence (discussed here)

To help explain why we live in such a fine-tuned universe (while preserving his atheistic naturalism), Sean suggests the idea of the multiverse, that there are many universes. He says on page 309 that the multiverse is a “simple, robust mechanism under which naturalism can be perfectly compatible with the existence of life.” Another whopper. Postulating a vast collection of other universes is not a simple assumption, but pretty much the flabbiest and most extravagant assumption possible, something that is really the precise opposite of being simple. From the standpoint of Occam's Razor and metaphysical parsimony, imagining some huge collection of other universes is actually far less simple than imagining a single intelligence behind the universe.


Sean also errs in saying, “if we get a multiverse in this way, any worries about fine-tuning and the existence of life evaporate.” The worries he refers to are his kind of atheist worries, but he's wrong in suggesting that such worries would evaporate in the case of such a multiverse. That's because the chance of success of any one random trial is not increased by increasing the number of random trials. So if the habitability of our universe (by a series of blind chance coincidences) was a gazillion-to-one shot before imagining a multiverse, it is still exactly the same gazillion-to-one shot after you assume such a multiverse. By assuming an infinity of universes, you do not increase by even 1 percent the chance that our universe would be habitable.
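
That point is easy to check numerically. In the toy simulation below (using an arbitrary stand-in probability), the chance that one particular designated trial succeeds comes out the same whether that trial sits alone or among 100 others:

    import random

    random.seed(0)
    p = 0.001          # arbitrary stand-in for a long-shot probability
    runs = 100_000
    for n_trials in (1, 100):
        hits = 0
        for _ in range(runs):
            outcomes = [random.random() < p for _ in range(n_trials)]
            hits += outcomes[0]   # did *our* designated trial succeed?
        print(f"{n_trials} trial(s) per run: designated-trial "
              f"success rate ~ {hits / runs:.4f}")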


In this case Sean describes the most ridiculously flabby and extravagant state of affairs as something “simple,” and he gives us the same “black is white” type of talk in describing a parallel universes theory he seems to be infatuated with. Sean describes the Everett “many worlds” interpretation of quantum mechanics, the crazy idea that the universe is constantly splitting up into different copies, so that everything possible happens in a vast collection of parallel universes. Sean says on page 167 that there is “a lot to love about the Everett/Many-Worlds approach to quantum mechanics,” and describes it as “lean and mean.” No, pretty much nothing you can imagine could be less lean. The Everett “many worlds” interpretation is pretty much the most extravagant and flabby thing imaginable. All those unnecessary parallel universes are 1,000,000,000,000,000,000 tons of metaphysical fat and flab. Calling such a theory “lean” is like calling a 900-pound man “thin.”


But Sean likes the theory, which he provides no evidence for and no reasons for believing in, other than the laughably false claim that it is “lean.” Since he rejects life-after-death, Sean doesn't want me to believe that my dear departed mother is in some heaven or afterlife realm. But Sean apparently does want me to believe that there are an almost infinite number of quantum copies of my mother strolling around in some vast collection of parallel universes. We have near-death experiences as a form of evidence for post-mortal survival, but zero evidence for parallel universes. Sean apparently thinks it's better to believe in something infinitely flabby and infinitely extravagant and unsupported by evidence than to believe in something vastly simpler that is supported by evidence.

Friday, June 17, 2016

If Interstellar Travel Movies Were Realistic

If you have got your astronomical education from watching science fiction movies, you probably think that travel between stars is a pretty fast experience. You just jump into your interstellar spaceship like Han Solo's Millennium Falcon, turn on the warp drive, and whoosh, you jump through hyperspace arriving at a distant planet. Even more recent movies (like the movie Interstellar) depict the same type of rapid interstellar travel.

But this is fiction, not science fact. There is no known evidence for anything like hyperspace that can be used to enable rapid interstellar travel. Nor is there solid evidence that you can instantaneously transport anything by using a space warp. Scientists have not been able to transport even a grain of sand from one place to another using a space warp.

It seems, regrettably, that interstellar travel will be a very slow affair. According to Einstein's special theory of relativity, nothing can travel faster than the speed of light. Even if you could somehow build a spaceship capable of traveling at the speed of light, it would take you more than four years to get to the nearest star. But there are engineering reasons for thinking that a spaceship will never be able to accelerate to more than a small fraction of the speed of light. This means that it would take many years to get from one star to another.
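
The underlying arithmetic is simple enough to sketch. The distance below is the standard figure for the nearest star system; the speed fractions are merely illustrative:

    DISTANCE_LY = 4.37   # light-years to Alpha Centauri, the nearest star system

    for fraction_of_c in (1.0, 0.10, 0.01):
        years = DISTANCE_LY / fraction_of_c
        print(f"at {fraction_of_c:.0%} of light speed: about {years:,.0f} years")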

I wonder: what if Hollywood were to make a realistic movie about interstellar travel – a movie that was as realistic as possible about the difficulties of interstellar travel, and also the low chance of finding life in some particular target of an interstellar mission? Let's imagine what such a movie might be like.

You might think that such a movie might involve a plot in which the starship crew was put into suspended animation, so that astronauts slept through the interstellar voyage of many years. But that wouldn't be particularly realistic, since the prospects of people being put into artificial hibernation for many years are pretty dim.

One realistic plot for an interstellar voyage would involve astronauts who left Earth while they were young men in their twenties. By the time the starship got to the distant star, the astronauts would be so old that they would not have the energy to do much exploring. That might work as a comedy.

Another realistic possibility would be to depict a multi-generation starship. This is the most plausible scenario for interstellar travel. The ship would be large enough for someone to live his entire life on the ship. The original astronauts would have children while the ship was traveling between the stars. By the time the ship finally arrived, the starship would be manned not by the original astronauts, but by the children of such astronauts – or perhaps the grandchildren or the great-grandchildren of the original astronauts.

Recruiting poster for a multi-generation starship

Clearly this movie would need an ensemble cast, and it would need to be one of those movies that spans many years. But what would happen when the spaceship reached the distant planet revolving around another star? Would the astronauts find a lush world teeming with life?

The current thought exercise is to imagine a movie about interstellar travel that is as “realistic as possible.” It seems that such a movie should therefore not find the astronauts discovering life on the distant planet the ship finally reached. The origin of life on our planet still seems like quite the little miracle. We have no idea of how self-replicating molecules could have formed from mere chemicals billions of years ago. We have no idea of how the genetic code that life depends on could have arisen through mere chance. The genetic code is what programmers call a lookup table, and we have no known cases of any lookup tables in nature that ever arose through chance processes.

It seems, therefore, that if we are making our interstellar travel movie as realistic as possible, our astronauts should arrive at the planet revolving around a distant star, and find nothing but a dead rock planet. Such an anticlimax might be tragic, but it wouldn't have the dramatic oomph that makes a good tragic ending. Perhaps our astronauts could perish on the lifeless planet in a kind of “die in the desolate wilderness” ending resembling the ending of the opera Manon Lescaut.

A more hopeful ending might involve terraforming, an engineering effort to make a rocky, lifeless planet more like Earth. The astronauts on the starship could launch an engineering effort to bring earthly life to the lifeless planet. They might have to stay on their multi-generation starship for many years, orbiting the planet, while lifeforms slowly spread around the planet. Finally landing craft from the starship could land on a planet that had grassy fields. This event might occur hundreds of years after the multi-generation starship had arrived in orbit around the distant planet.

So the movie might depict a scenario like this:

Generation 1: Leaves Earth in the multi-generation starship, and dies aboard the starship.
Generations 2, 3, 4, and 5: Live their entire lives in interstellar space, never seeing a planet.
Generation 6: Lives to see the starship orbit the lifeless planet revolving around another star.
Generations 6, 7, and 8: Live aboard the starship, in orbit around the planet, waiting for the terraforming process to finish.
Generation 9: Finally gets to land on the planet, which by now has life and grassy fields.

Although highly realistic, such a movie would probably make much less money than absurdly unrealistic sci-fi epics in which traveling to another star is as easy as riding from one subway stop to another.

Postscript: After this post was published, Hollywood finally released a more realistic movie about interstellar travel, the movie Passengers, which depicts a very long interstellar voyage requiring decades of suspended animation. 

Monday, June 13, 2016

A Tenfold Worsening of the Spiral Galaxy Explanation Problem?

One of the most amazing facts about the large-scale universe is the very large number of beautiful spiral galaxies. Most of the larger galaxies in the local universe are spiral galaxies. In my previous post The Unsolved Mystery of Why So Many Galaxies Are Beautiful Spiral Galaxies, I discussed the inadequacies of current attempts to explain the existence of so many spiral galaxies. 

The Whirlpool Galaxy (Credit: NASA)

Galaxies rotate, taking about 200 million years to complete a single rotation. For spiral galaxies this rotation leads to what is called the winding problem. This is the problem that the rotation of spiral galaxies should cause the spiral arms of galaxies to be ruined after a few rotations (in less than a billion years), due to a “winding up” effect. But somehow the spiral arms of spiral galaxies have apparently persisted for more than 10 billion years. Scientists have attempted to explain the persistence of spiral arms using a theory called the spiral density wave theory. But that theory is not in good shape, and is not well-supported by evidence. A scientific paper with the phrase "A case against density wave theory" in its title mentions “further negative evidence for density wave spirals.”
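
The scale of the winding problem is easy to sketch with round figures:

    ROTATION_PERIOD_YEARS = 200e6   # about 200 million years per rotation
    ARM_AGE_YEARS = 10e9            # arms apparently persist some 10 billion years

    rotations = ARM_AGE_YEARS / ROTATION_PERIOD_YEARS
    print(f"about {rotations:.0f} rotations")   # about 50 rotations
    # Yet differential rotation should smear the arms beyond recognition
    # after only a few rotations -- hence the puzzle.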

When I wrote my original post, I had never heard of “super spiral” galaxies. In the spring of 2016, news articles started to appear on this extra-large type of spiral galaxy. A NASA press release was entitled “Scientists Discover Colossal 'Super Spiral' Galaxies.” It said that the newly discovered type of spiral galaxy was as big and as bright as the biggest and brightest galaxies previously known.

The new type of “super spiral” galaxy is ten times more massive than our own galaxy. This apparently makes the spiral galaxy explanation problem ten times worse than it previously was.

Page 4 of this paper shows 53 of the “super spiral” galaxies. They look pretty much like spiral galaxies we are used to seeing in photos of distant space. Referring to a major attempt to simulate galaxy evolution with a computer simulation, section 7.2 of the paper says, “Even the largest galaxy evolution simulations to date, such as the Illustris simulation...are not big enough to manufacture a significant number of super spirals.”

The Illustris project was the largest attempt to simulate the universe using a supercomputer, and used 8,000 CPUs running in parallel. I searched all 4 scientific papers published by the Illustris team, and found no evidence that their simulation had produced an outcome in which a large fraction of the galaxies are spiral galaxies. The authors made no attempt to categorize how many of their simulated galaxies were spiral galaxies. One of the papers claims that the simulation produced “a reasonable population of ellipticals and spirals,” but from that statement we cannot tell whether the number of spiral galaxies was 10%, 1%, or 0.0001%.

I also tried using the “Infinitely Scrolling Galaxy Explorer” of the Illustris project, at this location. This allows you to scroll through simulated galaxies produced by the simulation. Very few of the simulated galaxies had clear spiral arms like those of the Whirlpool galaxy. Almost all the galaxies shown looked like elliptical galaxies or irregular galaxies or disk-shaped galaxies with random concentrations of stars but not spiral arms. It seemed that less than 2% of the simulated galaxies were spiral galaxies, and the number could have been less than 1%. In our universe ring galaxies are rare, but in the Illustris simulation there seemed to be many times more ring galaxies than spiral galaxies. In the Illustris simulated galaxies, in the rare cases in which there was something that looked like a spiral arm, there was almost always just one spiral arm, rather than the two or three spiral arms we see in real spiral galaxies.

I therefore find the way in which the Illustris project reported its outputs to be misleading in regard to the issue of whether the project was able to produce a universe in which a large fraction of the galaxies are spiral galaxies. Here is what an MIT press release of the project stated (a press release reproduced on the Illustris web site):

“With this model, we are able to get agreement with observational data on small scales and large scales,” says Mark Vogelsberger, an assistant professor of physics at MIT and first author of a new paper in the journal Nature that describes the modeling effort. While modeling 41,416 galaxies in all, the simulation closely matches the rate at which certain types of galaxies develop across the universe as a whole. “Some galaxies are more elliptical and some are more like the Milky Way, [spiral] disc-type galaxies,” Vogelsberger explains. “There is a certain ratio in the universe. We get the ratio right. That was not achieved before.”

But this statement does not match what you see when you scroll through the galaxies produced by the project, using the “Infinitely Scrolling Galaxy Explorer” on the Illustris web site. While a large fraction of the simulated galaxies are disk-shaped, only a tiny percentage (perhaps as few as 1 percent) look like spiral galaxies with one or more spiral arms (real spiral galaxies have 2 or 3 spiral arms). Who should we blame here for this misleading statement? Given the fact that the MIT press release writer has inserted in brackets the word “spiral” (a word Vogelsberger apparently did not use), we perhaps cannot directly blame Vogelsberger. But we can fault him for failing to correct the modified version of his statement. With the insertion of the word “spiral,” the reader is left with the very misleading impression that the Illustris simulation “got the ratio right” by creating a simulated universe in which the number of spiral galaxies was similar to the ratio in the known universe. The simulation did no such thing.

Another press release of the Illustris project claimed that the project created “a realistic mix of spiral galaxies like the Milky Way and giant elliptical galaxies.” That phrase (repeated by many other news sources that used the press release) is not accurate in light of the fact that in our universe a large fraction of the galaxies are spiral galaxies, but in the Illustris simulation only a tiny fraction are spiral galaxies (as little as 1% or less). 

It seems that our scientists do not actually have a credible explanation for the high occurrence of spiral galaxies in the universe. The recent discovery of spiral galaxies ten times more massive than our own underscores this explanatory shortfall.

Wednesday, June 8, 2016

Flying Pink Unicorns and Simulated Universes

Elon Musk is very good at technology, but he is much weaker at the game of philosophical arguments, judging from his recent reasoning on the subject of whether we live in a simulated universe. Here was the argument Musk gave:
The strongest argument for us being in a simulation probably is the following. Forty years ago we had pong. Like, two rectangles and a dot. That was what games were.
Now, 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously, and it's getting better every year. Soon we'll have virtual reality, augmented reality.
If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, okay, let's imagine it's 10,000 years in the future, which is nothing on the evolutionary scale.
So given that we're clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we're in base reality is one in billions.
Tell me what's wrong with that argument. Is there a flaw in that argument?
Sure, Elon, I'll tell you what the flaw is in your argument. What Musk describes is a progression of sophistication in video game technology. But we have not one bit of evidence that any computer or video game has ever itself had the slightest iota of experience, consciousness, or life-flow. We only have evidence that biological creatures such as us can have some experience, consciousness, or life-flow. So there is no basis for thinking that some super-advanced alien civilization could ever be able to produce computers or video games that by themselves were the source of experiences like the ones we have. Making such an assumption based on an extrapolation of technical progress in video games is like arguing that one day video game characters will be so realistic that they will step out of the video game screen and help us clean our house.

Biological entities such as ourselves may have experiences (or simulated experiences) when we view the output of computers or video games, but it is not the computers or video games that ever have such experiences by themselves. We have no good reason for concluding that computers or video games will ever be able to have experiences or life-flow like ours, so there is no reason for thinking that our experiences or life-flow are caused by such electronic devices. As for the “one in billions” part of the argument, it is just an arbitrary number that Musk has picked out of a hat, without suggesting any basis for such reasoning.

The lives of humans have produced an ocean of life-flow or experience or consciousness, but we have not the slightest evidence that any computer has produced the tiniest bit of any such thing – not even a drop, we may say. Nor is there any reason for supposing that future advances will produce such a thing. There will never be a way to program a computer so that it “has a day” or “lives an experience.”

A video game is not something that produces experiences, a fact that we can prove by imagining a world in which there are no humans, and asking: how many video game experiences would we then have? A video game is something that can affect the experiences of agents (humans) that are capable of having experiences. Arguing that a video game or computer will be able to produce experiences is like thinking (based on the fact that thunderstorms can affect baseball games) that thunderstorms will one day be able all by themselves to produce baseball games when humans aren't around.

In the original paper discussing the simulation argument that you are (or may be) living in a computer simulation, Nick Bostrom defined an “ancestor simulation” as something that would be produced when a computer would “simulate the entire mental history of humankind.” Here is the unjustified claim given in the conclusion of Nick Bostrom's original paper presenting his argument regarding the idea that you are living in a computer simulation.

A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

Bostrom's paper justified no such assertion. He wants us to choose between one of three possibilities, when there is a very plausible fourth possibility: the fraction of posthuman civilizations that are capable of running ancestor-simulations is zero. Since we have no evidence that computers or video games will ever be capable of producing human-like experience, this is a very plausible possibility that short-circuits his whole line of reasoning. Also, Bostrom does nothing to justify any statements about fractions or ratios, so he has simply picked his third possibility out of a hat.

Bostrom also makes the big mistake of implying that if there is one alien civilization interested in creating an “ancestor simulation,” then such a civilization would now be producing countless such simulations. He suggests that if there is one such civilization, the number of simulated lives would greatly outnumber the number of real lives. This is a completely unjustified insinuation. The more often some weird non-essential project has been done, the less people tend to be interested in doing it. While an alien civilization might run some ancestor simulation of the type Bostrom imagines, we have every reason to suspect that it would grow bored with such a thing after some particular number of years, and lose interest in it. Given an alien civilization that had at one point in its long existence an interest in running an ancestor simulation, there is no reason to think (given a very long life-time for that civilization) that it would now be running such simulations. And there is also no reason to believe that it would now be running very many such simulations, so many that the number of simulated lives would outnumber the number of real lives.

I may note that Bostrom's paper is filled with quite a few goofy misstatements, such as the completely incorrect claim that "there are certainly many humans who would like to run ancestor-simulations if they could afford to do so."  Not correct, as no one prior to Bostrom's paper expressed any interest in creating computer simulations to recreate "the entire mental history of mankind." 

Reasoning like Bostrom's could be used to establish a million and one goofy claims. For example, imagine you want to establish that there is a city populated by flying pink unicorns. You could argue that super-advanced alien civilizations would have the power to create flying pink unicorns, using either genetic engineering or advanced robotics. You could then argue that since there might be trillions of super-advanced alien civilizations, probably at least one of them has had an interest in creating a city filled with flying pink unicorns. Presto! You now have your magic city filled with flying pink unicorns, located somewhere out there in the depths of space. Of course, this is the same kind of reasoning used to establish some suspicion that we are living in an alien civilization's computer simulation. It is reasoning that leaps from “they might possibly have done this weird thing” to “they probably did do this weird thing.”

There is a general problem with all such lines of reasoning. As a civilization's technology increases, the number of possibilities and opportunities available to that civilization exponentially increases. So if we imagine some super-advanced civilization with a technology millions of times greater than ours, we must imagine that the number of possibilities and opportunities open to that civilization would also increase by millions of times, or probably even billions or trillions of times. So with all those nearly infinite possibilities available to such a civilization, millions of which we might be able to imagine and billions of which we could never imagine, what is the chance that a super-advanced civilization would undertake one particular non-essential technological project we might imagine – whether it be creating “ancestor simulations” or creating cities filled with flying pink unicorns? No greater than one in a million.

So let's summarize the case against the idea that you are living in a simulated universe.
  1. There are very strong reasons for believing that experience and consciousness and life-flow such as we have cannot be electronically produced, completely ruling out the possibility that human experience is computer-generated.
  2. Even if it were possible for human experience to be electronically simulated, there is no reason to believe that any extraterrestrial civilization currently has a very strong (and abundantly funded) interest in creating “ancestor simulations” that electronically reproduce lives such as ours, given the nearly infinite number of other projects such extraterrestrials might be preoccupied with.
  3. Even if an extraterrestrial civilization did have such an interest in creating “ancestor simulations,” there is no reason to think that they would create so many such simulations that the number of simulated lives would outnumber the number of real lives.
  4. There is therefore no probability basis for assuming some substantial chance that you are part of some “ancestor simulation” created by extraterrestrials, based on a comparison between the number of simulated lives and the number of real lives. 

     This is not your life

Saturday, June 4, 2016

“Nuke Your Way to Life” Theory Isn't Convincing

The origin of life seems to require liquid water. Geological evidence indicates that our planet was warm enough for liquid water to have existed about 3.5 billion years ago, when the first earthly life appeared. But models of solar evolution lead scientists to conclude that the sun gave off much less heat billions of years ago. Judging only from the sun's evolution, our planet should have been completely frozen three billion years ago. This discrepancy is known as the faint young sun paradox.

Below is a diagram from a scientific paper by Shani and Shtanov discussing the faint young sun paradox. As you can see from the diagram, if we assume (for the sake of simplicity) that our planet has had its current atmosphere for the past 3 billion years, then our entire planet should have been frozen until about 1.7 billion years ago.



Some scientists have tried to solve this problem by suggesting that there were high levels of carbon dioxide in the early atmosphere, causing global warming through the greenhouse effect. But (as discussed in this paper) for such a theory to work, global carbon dioxide levels would have needed to be fifty times larger than today. Such high carbon dioxide levels would have left traces in the fossil record, traces that have not been discovered.

Recently in the news there was discussion of another theory to explain the paradox. The theory is that billions of years ago there were more solar flares: “frequent and powerful coronal mass ejection events from the young Sun—so-called superflares.” A coronal mass ejection is when the sun shoots out a huge burst of particles. Nowadays really powerful coronal mass ejections occur only once every 30 years. But advocates of this solar flare theory claim that billions of years ago such really powerful coronal mass ejections may have occurred as often as once a day. Such advocates suggest that if such flares had been enough to melt ice, they might have helped to make life possible.

Helpful solar flares? That doesn't sound very sensible when you read this description of a coronal mass ejection from another web site:

Coronal mass ejections (CMEs) are violent ejections of solar gas, plasma and electromagnetic radiation that can propel more than ten billion tons of solar matter outward from the sun’s atmosphere with the power of over a billion hydrogen bombs....They can extend billions of miles into space. Once jettisoned from the sun’s hold, they can accelerate to several million miles per hour and can reach Earth within one to three days.

A billion hydrogen bombs? So basically this attempt to resolve the early sun paradox by imagining very powerful solar flares is a kind of “nuke your way to life” theory.

There are several reasons why a straightforward version of such a theory seems to make no sense:
  1. We have no known cases of polar ice that was melted because of solar flares.
  2. If ice were to be melted by a short-lived solar flare zapping our planet, it would very quickly freeze again (presumably within hours), killing off any life that may have formed while liquid water was available.
  3. The same level of solar flare intensity needed to melt ice would bring with it intense life-killing radiation.
A more subtle version of the solar flare theory claims that solar flares may have caused nitrous oxide (N2O) to have formed in the atmosphere. Nitrous oxide is a greenhouse gas 300 times more powerful than carbon dioxide. Could more nitrous oxide in the atmosphere have warmed up the planet? Probably not, because in section 5.4 of this paper giving a very thorough discussion of the faint young sun paradox issue, we read the following: “Warming by nitrous oxide (N2O) has been suggested [Buick, 2007], but N2O is rapidly photodissociated in the absence of atmospheric oxygen [Roberson et al., 2011], making it an unviable option for the Archean.” Referring to the Archean eon (the period between 4 billion years ago and 2.5 billion years ago), this statement rebuts the idea that “helpful solar flares” may have caused a greenhouse effect through a production of nitrous oxide. The objection here is that if nitrous oxide had been produced in the early atmosphere, it would have been rapidly destroyed by sunlight.

The same objection is made by scientist James Kasting. A New Scientist article says this:

Kasting thinks that ultraviolet light from the sun might have destroyed nitrous oxide before it could mix into the atmosphere. “It would take a convoluted mechanism to produce that high up in the atmosphere and then get enough of it into the lower atmosphere to produce a good greenhouse effect,” he says.

It would seem, therefore, that the faint young sun paradox is still very much with us. At the end of this scientific paper, the author makes clear that scientists have been knocking their heads on the faint young sun paradox for 40 years, but the problem refuses to go away.