
Our future, our universe, and other weighty topics


Wednesday, April 18, 2018

We Need Philosophy, But Do We Need Philosophy PhDs?

Philosophy is a very good and necessary thing for any civilization. It is untrue that we do not need philosophy because we have scientists to guide us to the truth. For one thing, a large part of philosophy is ethics, and science is a morally neutral thing that gives no guidance in regard to ethics. For another thing, what is taught by scientists is often a mixture of fact, dogma, and speculations. In many cases the dogma consists of ideas that are not proven, but which have simply become popular in scientific communities. In such cases it is extremely useful to have a philosophical thinker around, regardless of whether such a person has any philosophy credentials. A philosophical thinker can act as a kind of referee or watchdog, alerting the public when scientists are making truth claims that they have not proven by observations or experiments.

Then there is the fact that our scientists are fond of saying that large classes of statements are forbidden to their kind, such as statements about the existence of the supernatural or some higher power. Since so many scientists are taking “hands off” attitudes towards such things, we need non-scientists such as philosophers to help us sort out the logic or lack of logic about truth claims in such an area, and to help sort out whether the evidence is sufficient to warrant beliefs about the supernatural.

The very idea that philosophy and science are completely separate things is erroneous, in the sense that scientists will often engage in philosophical activity as part of their jobs. For example, when a physicist starts speculating about a multiverse, he has strayed into metaphysics. It is appropriate at such times for a philosopher to comment on whether good metaphysics is going on, or poor metaphysics. And when scientists start spouting metaphysics, they sometimes spout the worst kind of metaphysics (violating the philosophical principle of Occam's Razor in the worst way). To give another example, when a scientist starts saying “This type of statement is forbidden to a scientist,” he has strayed outside of science itself into what is known as philosophy of science. At such a point we need philosophers of science to give input on whether the scientist's statement is an appropriate rule.

So philosophy is very necessary indeed. And philosophy is still a good subject to be taught as an undergraduate major. For a large fraction of employers, a bachelor's degree is today largely a screening device, mainly serving the purpose of showing that a student is smart enough (and has sufficient writing and thinking skills) to pass a four-year program of study. There are countless employers who will hire any new college graduate with a good GPA, and many of them don't particularly care whether you have a degree in philosophy or French literature or history. Such employers often require employees to use skills they can only learn at their company.

But what about graduate programs in philosophy? Do we have any great need for philosophy PhDs? There is no tremendous need for people to have doctoral degrees in philosophy, and we could certainly get along with far fewer philosophy PhDs. Consider the literary output of a typical academic philosopher. Such a person will largely write for philosophy journals that almost no one reads. A typical philosophy journal will have its content behind a paywall, meaning there will be few Internet readers. And you probably won't be able to read the journal at your local library. 

You can get an idea of the small readership of philosophy papers by going to the website Philpapers.org, which allows you to see the abstracts of a vast number of philosophy papers. If you look at the full abstract for a paper, you will see a graph showing how many people have downloaded the paper. A typical result will be maybe two downloads a month.

The papers in philosophy journals are typically written by philosophers purely for the sake of other philosophers. Such papers are often very obscure and written in jargon that only other philosophers can understand, and the cultural impact of such technical papers tends to be very slim. When philosophers write mainly for other philosophers, they tend to produce forgettable content.

There is a general reason why a university environment may be a poor environment for a philosopher. Part of the proper role of a philosopher is to criticize unwarranted or illogical claims from other people, regardless of their status in society. But a university can set up scientists as something like local gods. The biologist or physicist may be a local celebrity at his university, enjoying fame, funding and a large building that may totally dwarf that of some philosopher at the university. This creates a situation in which the philosopher at such a university may have a strong tendency to kowtow to such an authority figure, and take his pronouncements as gospel truth. But such a philosopher may not be doing his job if he does that. Part of a philosopher's role is to expose poor logic and unwarranted claims of authority figures.
 


Will a philosopher at Central University be willing to criticize the unwarranted dogmatism or unjustified statements of Scientist Jones, when Central University is paying Scientist Jones $300,000 a year to secure the services of this well-known figure, and doing everything it can to build up his reputation and status? Probably not. A university environment may not be an ideal environment for a philosopher, just like the Pentagon may not be an ideal environment for an editorial writer analyzing the moral rectitude or logical sense of current US military policies.

There is no clear and obvious reason why we need to have philosophy PhDs. It may be that you can't do much microbiology unless you work at a university with fancy expensive laboratories, or a corporate lab with similar equipment. But anyone can write philosophical content even if he or she is not in a university. People who write philosophical content and place it on the Internet will probably get far more readers than those writing in philosophical journals.

A good deal of a philosophy curriculum involves studying the past works of philosophers. Such a study does not need to be perfect. It's very important that there be nuclear engineers who get things just exactly right, so that nuclear power plants can be built safely; and it's very important that there be geneticists who get things just exactly right, so that gene-splicing activities are done just right, and with minimum risk. But it isn't so terribly important that philosophy teachers get things just exactly 100% right when describing the teachings of past philosophers such as Plato, Kant, or Hegel. The main reason for studying such figures nowadays is perhaps to get a few ideas that someone might find useful in developing his or her own philosophical viewpoint. For such a purpose, it works just fine to have a fairly good knowledge of some past philosopher's ideas, rather than a crystal-clear knowledge of them.

It seems that the philosophy departments of universities could serve their purpose well enough if all instruction in philosophy were done only by people with master's degrees rather than PhDs (and an accelerated master's degree would probably be sufficient for teaching philosophy at a university). Since the philosopher should be ever-ready to challenge the thinking of authorities in all fields (government officials, religious authorities, scientists and other philosophers), perhaps philosophers should be in no hurry to set themselves up as authorities with doctoral degrees in philosophy.

The idea that you have to undergo many years of specialized study before you can call yourself a philosopher is misguided. It is the birthright of every human to philosophize, and any person who thinks deeply on any abstract philosophical topic may rightfully call himself a philosopher. 

Sunday, April 15, 2018

"Consciousness Instinct" Tells No Coherent Tale As to How a Brain Could Make a Mind

In the recent book The Consciousness Instinct, neuroscientist Michael Gazzaniga writes about the brain and the mind. The subtitle of the book is Unraveling the Mystery of How the Brain Makes the Mind. The book is obviously written with the assumption that brains do make minds, a very dubious proposition that there are many reasons for doubting. The book fails to make anything like a substantive defense of the claim that brains do make minds.

One of the key issues in whether brains make minds is how a brain could possibly generate any such thing as an abstract idea. Brains and neurons are physical things, but ideas are mental things. We can understand how a physical thing can generate another physical thing, and we can understand how a mental thing (a mind) can generate another mental thing (an idea). But no one understands how a physical thing (a brain or some neurons) could generate a mental thing (an abstract idea).

Looking at the index of Gazzaniga's book I see it has no entry for “idea.” But I do see four pages that refer to “thoughts.” Searching those pages, I find on page 8 what seems to be Gazzaniga's only description of how thoughts or ideas are created:

It is as if our mind is a bubbling pot of water. Which bubble will make it up to the top is hard to predict. The top bubble ultimately bursts into an idea, only to be replaced by more bubbles. The surface is forever energized with activity, endless activity, until the bubbles go to sleep. 

As an attempt to explain the origin of ideas, this is a flop. A bubble is a physical thing, not a mental thing. When someone creates an abstract idea (such as the idea of a Trumper after viewing various zealous Trump supporters), such an abstraction (a mental act) bears no resemblance to a bubble floating up from hot water, a physical event which does not involve any observations. It may seem mysterious that bubbles pop up from water as it heats, but that's not an example of something appearing mysteriously. Water has some air dissolved in it, and that air simply comes out of solution as the water heats.

Water being heated in a pot and producing bubbles is in several respects a very poor analogy for the creation of an idea. A human being will come up with a new idea only very slowly, and a human will only have one thought in his head at any single time. But in a heating pot of water, we see dozens of bubbles very quickly rising at the same time. When water starts to boil, it is a frothy chaos that bears no resemblance to the orderly thinking of a person trying to produce a new idea.

Not like a thinking mind

Again, on pages 206 and 207, Gazzaniga continues his bubble analogy, saying the following:

Thinking about this led me to use the metaphor of bubbling water as a way to conceptualize how our consciousness unfolds....The results bubble up from various modules like bubbles in a boiling pot of water....Each bubble has its own capacity to evoke that feeling of being conscious....Our smoothly flowing consciousness is itself an illusion.

Consciousness is certainly not an illusion, and there are no neurological facts supporting any such metaphor, or the idea that our smooth flow of consciousness is anything like a stream of individual bubbles.

On pages 214-215 Gazzaniga gives us more bubble babble, and says “we have memory bubbles and feeling bubbles.” We are told, “As one bubble quickly passes to the next, we have the illusion of feeling about the remembered event.” No, the feelings we have when remembering important events are not illusions. A woman remembering being raped is not having an illusory emotion.

Gazzaniga speaks very frequently about modules in the brain. The term module derives from computer programming. A module is a distinct unit of computer programming code that accomplishes some particular job. There is no meaningful insight involved in talking about mental function as a set of modules. The human mind is a seamless whole. We can speak of different aspects of the human mind, such as insight, imagination, memory, emotion, and so forth. But you do nothing to explain such things by dressing them up in fancy talk and calling them “modules.” There is nothing like computer code producing the aspects of our mind.

One of the biggest mysteries of the mind is memory. How is it that humans are able to remember things for 50 years or more, despite all the rapid molecular turnover in the brain? And how is it that a human is able to instantly retrieve an old memory, such as we see happening when someone mentions some obscure figure in history and you immediately recall some facts about that person? If the information is stored in some tiny spot of the brain, there should be no way in which the brain could find that exact spot so quickly in a brain that has no indexing, no coordinate system, and no position notation system. Doing such a thing would be like instantly finding just the right book in a vast library in which none of the books had titles on their covers, and none of the bookshelves or book aisles were marked.

Gazzaniga says nothing to explain such difficulties. He says on page 214 about a memory, “It is placed in memory in ways we still don't understand, but it is symbolic information, cold and formal, just as DNA is symbolic information – and just like DNA it has a physical structure.” This is another case of a scientist pretending to have some knowledge he does not have. We have no evidence that our memories are physically stored in our brains as symbolic information, and (as discussed here) no one has a credible theory of how episodic memories could be stored as neuron states, synapse states, or chemical or electricity states. There is no current theory of a structure or encoding scheme by which episodic memories and abstract learned concepts could be brain stored.

What about understanding, insight, and imagination? None of these things are mentioned in the index of Gazzaniga's book.

On page 79 Gazzaniga says, “I will argue that consciousness is not a thing.” He then claims that “consciousness” is just a word we use for “the subjective feeling of a number of instincts and/or memories playing out in time in an organism.” This is not true. I can close my eyes and lie on my bed, thinking of nothing at all, and not remembering anything. That is consciousness that does not involve any instincts or memories.

The final chapter is entitled “Consciousness is an instinct,” which mirrors the title of the book. The claim that consciousness is an instinct is erroneous. The dictionary defines an instinct as “an innate, typically fixed pattern of behavior in animals in response to certain stimuli.” An example of an instinct is the instinct of a newborn baby to suck its mother's breast, or the instinct of a young man to get an erection when he sees a naked woman. Consciousness is not any such thing. It is not a particular reaction we have to a particular stimulus. We can have consciousness even when we are closing our eyes and receiving no stimulus at all.

One wonders: why on earth would Gazzaniga be claiming that consciousness is an instinct, a claim I can never recall any neuroscientist or philosopher of mind making? The only guess I can make is that maybe he was thinking that it would sound like an explanation for consciousness if we can explain consciousness as an example of something we understand: instincts. But we don't understand instincts at all. Instincts are one of the most mysterious aspects of biology. How is it, for example, that an average male would become sexually aroused by a female rather than some other stimulus? There's no neurological or genetic explanation that we can come up with for this. There's no little picture of a naked woman in the genes or the brain that a man reads to serve as a guide for what to become sexually stimulated by.

The lack of any neurological or genetic basis for such an instinct is shown by homosexuality, in which five percent of the male populace becomes sexually stimulated not by women but by members of their own sex. Many other animal instincts cannot be explained by genetics or neurology. Genes are just recipes for making proteins, and lack any power of expression to state behavior rules for organisms. So since we don't understand instincts, we would do nothing to explain consciousness by saying it is an example of an instinct; and in any case, consciousness is not an instinct.

What do you do when you're someone claiming that the brain makes the mind, but with no powerful ideas to offer as to how a brain could handle memory or generate ideas or generate consciousness or produce understanding? You fill up the book with digressions and detours, which is what this book does. We are treated, for example, to many pages talking about quantum mechanics.

It is rather like the way it was in high school when the teacher asked you to explain something you did not understand, and you tried to kind of fake your way through rather than saying, “I don't know.” For example, the teacher might ask you, “How did World War II begin in Europe?” and you might answer something like:

World War II was a very important conflict. There were big noisy battles, and American troops were landed in France, and there were lots of tanks shooting at each other. And they dropped lots of bombs. It all ended on a day called VJ Day.

Such is the approach of our neuroscientists, who give us endless assorted facts designed to impress us with their knowledge, but fail to give the answers they would give if they actually understood how a brain could make a mind.

Because of reasons discussed in great detail here, I assert that brains do not actually generate minds, and that no neuroscientist has either a credible theory of how a material brain could produce the memory skills that the human mind has (such as the instantaneous recall of very old memories), or a credible theory of how a brain could store memories for 50 years despite rapid molecular turnover in synapses, or a credible theory of how material brains could produce nonmaterial things such as consciousness, understanding, ideas, or thought.

Wednesday, April 11, 2018

The Paltry Probability of Nearby Chance-Arising Humanoids

The Cosmic Zoo: Complex Life on Many Worlds is a recent book about the possibilities of extraterrestrial life. Written by Dirk Schulze-Makuch and William Bains, the book is mainly a look at earthly biology, with attempts to assure us that what we see on our planet isn't all that amazing, and that we should expect to see similar wonders all over the universe. On page 181 of their book, the authors make this Panglossian statement: “So if life arises on a distant exoplanet, it will also traverse the path from simple to complex, unicellular to multicellular, and produce intelligent animals capable of tool use in its forests and kelp beds, providing the planet is habitable long enough.”

To promote such runaway optimism, the authors introduce a classification scheme attempting to categorize biological innovations. They maintain that biological innovations can be put into three categories: (1) the Critical Path model; (2) the Random Walk model; (3) the Many Paths model.

On page 5 the authors define the Critical Path model like this:

Each transition requires preconditions that take time to develop...Once the necessary preconditions exist on the planet then the transition will occur in a well-defined timescale. It is like filling up a bath tub; once you turn on the taps, the bath will fill up. It just takes time.

On page 5 the authors define the Random Walk model like this:

Each transition is highly unlikely to occur in a specific time step, and the likelihood does not change (substantially) with time. This may be because the event requires a highly improbable event to occur, or a number of highly improbable steps....Once life exists on a planet, ultimately the key innovation will occur, but when it occurs is up to chance, and whether it occurs before the planet runs out of time and becomes uninhabitable is not knowable.

The last sentence is rather confusing, because “ultimately the key innovation will occur” suggests inevitability, but the rest of the sentence suggests no inevitability at all. On page 9 they clarify that this Random Walk model refers to something that is “quite unlikely to occur.”

On page 5 the authors define the Many Paths model like this:

Each transition or key innovation requires many random events to create a complex new function, but many combinations of these can generate the same functional output, even though the genetic or anatomical details of the different outputs are not the same. So once life exists the chance that transition will occur in a given time period is high, but the exact time is not knowable.

Later on page 9 they describe this Many Paths model like this:

There are no specific preconditions for a Many Paths process other than the prior existence of life that can achieve the innovation. However, once any appropriate precondition is met, the innovation will happen fairly reliably some time afterwards (as measured in generations). So it is almost inevitable that the innovation will occur eventually. But because there are many ways that it can occur, then each time the function will be carried out by a different mechanism.

Throughout the book again and again the authors attempt to convince us that particular types of biological innovations that occurred on Earth were examples of a Many Paths process, and that we should therefore expect to see them commonly on other planets. The logic of the authors seems to be something like this: when nature shows us there are many ways in which a particular biological innovation can be implemented, we can call this a Many Paths innovation; such innovations are pretty likely because there are many ways they can be implemented, not just one way.

But such reasoning is very fallacious. To judge the likelihood of something, we should not merely consider whether there is only one way that it can be achieved, or many ways. We should instead consider the ratio between the outcomes that do not achieve such a result and the outcomes that do achieve such a result.

We can give a concrete example regarding vision systems, the biological systems that result in vision. In humans the vision system consists of 4 main things: (1) the eye; (2) an optic nerve stretching from the eye to the brain; (3) the visual cortex in the brain (a part of the brain that interprets visual inputs); (4) very complex and fine-tuned proteins such as rhodopsin that are used in vision, helping to capture light. On page 6 the authors submit this as an example of a Many Paths innovation. They state:

An example of the Many Paths Model is the evolution of imaging vision. Eyes that can make images of the world (not just detect light and dark) have evolved many times in insects, cephalopods, vertebrates, and extinct groups like the trilobites.

But you would be committing a great error in logic if you reasoned like the authors, and suggested that it is fairly likely that a vision system would evolve, because there is not just one way to make a vision system, but lots of ways. A better line of reasoning would involve comparing the number of ways to arrange matter that do not result in functional vision to the number of ways that do result in functional vision. That would give you a ratio of more than 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 to 1. From such a perspective the appearance of a vision system seems fantastically unlikely.

It is fallacious to argue that something is relatively likely to occur because there are many ways for it to happen. Using such reasoning we might argue that there is a pretty good chance that tornadoes passing through a junkyard will one day assemble a car by chance, because there are many different ways to assemble a car from parts in a junkyard. Even though there are many types of automobiles, the number of arrangements of matter that do not result in automobiles is more than 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 times greater than the number of arrangements that do result in automobiles. So the chance of an automobile appearing in such a way is incredibly low, despite there being many different ways to make a car.

Showing that there are many ways to achieve an outcome does not show that the outcome is likely, or even that it has one chance in a trillion quadrillion of occurring. So the classification scheme offered by the authors of The Cosmic Zoo is misleading, and should not be used. But is there some better way to classify biological innovations, some classification scheme that might shed a little light on the chance of them randomly occurring on other planets?

Let me sketch out such a classification scheme. The categories are below. The difficulty level refers to how hard it is for the biological innovation to occur by random mutations and natural selection. 

Category 1. Description: a biological innovation requiring no major structural change. Example: a biochemical change resulting in sun-protective dark skin rather than light skin. Difficulty level: low.

Category 2. Description: a biological innovation requiring some small structural change which is repeated many times. Example: hair, which consists of thousands of repetitions of a single hair follicle. Difficulty level: moderate.

Category 3. Description: a biological innovation requiring multiple small components which are either very simple and easy to achieve, or complex but individually useful. Example: teeth, where each tooth provides a small benefit. Difficulty level: moderate.

Category 4. Description: a biological innovation requiring multiple complex components, none of which by itself provides any benefit to the organism (no increase in survival value or reproduction). Example: a vision system, requiring eyes, an optic nerve, a light-interpreting visual cortex, and complex light-capturing proteins (all are useless unless all four components exist). Difficulty level: fantastically improbable; vastly harder than Category 3.


The difference between the difficulty of achieving biological innovations in Category 3 and Category 4 is an exponential difference, which we can colloquially describe as “all the difference in the world.” I can give an exact numerical example to illustrate the difference.

Let us imagine that some biological innovation requires five complex components, each of which is individually useful once it occurred. If a species consists of a million organisms, it might be that there is (on average) only 1 chance in 100 of each of these components arising by chance in this population during a 50-million year period. But if each of these components is useful by itself, once such a component appears in the gene pool a “classic sweep” of natural selection might cause all of the organisms to get such an innovation after several generations. So over the course of 50 million years, the overall likelihood of the biological innovation occurring in the population might be only a little less than 1 in 100 to the fifth power, which equals 1 in 10 billion. Those are pretty steep odds, but not totally prohibitive odds.
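
To make the arithmetic of this first scenario explicit, here is a minimal sketch in Python of the calculation just described. The 1-in-100 per-component chance, the five components, and the 50-million-year window are the illustrative assumptions stated above, not measured values.

```python
# Minimal sketch of the scenario above: five complex components, each
# individually useful, each with an assumed 1-in-100 chance of arising
# somewhere in the population during a 50-million-year period. Because
# each component is useful on its own, a "classic sweep" of natural
# selection is assumed to spread it through the population, so the five
# chances simply multiply.

per_component_chance = 1 / 100   # assumed chance per component per 50 million years
num_components = 5               # assumed number of required components

overall_chance = per_component_chance ** num_components
print(f"Overall chance: {overall_chance:.0e}")   # 1e-10, i.e. about 1 in 10 billion
```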

But let us imagine that some biological innovation requires five complex components, each of which is not individually useful once it occurred, and each of which is not useful until all of the five components have appeared in a single organism. If a species consists of a million organisms, it might be that there is (on average) only 1 chance in 100 of each of these components arising by chance mutations in this population during a 50-million year period. But if each of these components is not useful by itself, once such a component appears in the gene pool there would not be any “classic sweep” of natural selection that might cause all of the organisms in the population to get such an innovation after several generations.

Now the math ends up being radically different. Imagine the first component arrives in the gene pool, and that such an arrival (at any time during the 50 million years) has a chance of 1 in 100. If the second component appears in the gene pool, it would need to occur in some member of the population that already had the first component; or else the second component would be wasted. The chance of this now is 1 in 100 multiplied by 1 in a million (actually less than 1 in a million, because the first component, being useless by itself, would probably have disappeared from the gene pool before the second component arrived). So to get an organism with both the first and the second component, we have an overall likelihood of less than 1 in 100 times 1 in 100 times 1 in a million. The same math ends up applying to the third component, the fourth component, and the fifth. Overall the math looks like this for the probability of ending up (at any time during the 50 million year period) with a single organism with all of the five components required for the biological innovation (with the “1 in a million” coming from the size of this population):

Chance of first component appearing in gene pool per 50 million years: 1 in 100.
Chance of second component appearing during this period in an organism already having first component: 1 in 100 multiplied by less than 1 in a million.
Chance of third component appearing during this period in an organism already having first and second components: 1 in 100 multiplied by less than 1 in a million.
Chance of fourth component appearing during this period in an organism already having first, second, and third components: 1 in 100 multiplied by less than 1 in a million.
Chance of fifth component appearing during this period in an organism already having first, second, third, and fourth components: 1 in 100 multiplied by less than 1 in a million.

These are all independent probabilities, and to compute the overall likelihood of all of these things happening in this population during the 50 million years we must compute the first probability multiplied by the second probability multiplied by the third probability multiplied by the fourth probability multiplied by the fifth probability. This gives us an overall probability of less than 1 in 10 to the thirty-fourth power (less than 1 in 10,000,000,000,000,000,000,000,000,000,000,000). These odds can be described as being totally prohibitive. We would not expect such an event to occur even once in the history of the galaxy, even if there are billions of life-bearing planets in the galaxy.
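
The contrasting arithmetic for components that are useless in isolation can be sketched the same way. The numbers below simply restate the assumptions of the preceding paragraphs (a population of one million, a 1-in-100 chance per component, and no selective sweep until all five components coexist in one organism); they are illustrative, not empirical.

```python
# Minimal sketch of the five-component, no-individual-benefit scenario.
# The first component only needs to appear somewhere in the gene pool.
# Each later component must appear in the one organism (out of a million)
# that already carries all the earlier components, so each later factor
# is multiplied by roughly 1 in a million.

population_size = 1_000_000      # assumed population size
per_component_chance = 1 / 100   # assumed chance per component per 50 million years
num_components = 5               # assumed number of required components

first_factor = per_component_chance
later_factor = per_component_chance * (1 / population_size)

overall_chance = first_factor * later_factor ** (num_components - 1)
print(f"Overall chance: {overall_chance:.0e}")                                      # 1e-34
print(f"Compared with the earlier 1e-10: {1e-10 / overall_chance:.0e} times smaller")  # 1e+24
```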

Do we see any of these Category 4 innovations occurring in earthly history? Yes, we see them occurring rather often. One example is the appearance of a vision system. If we consider a minimal vision system consisting of several eye components, an optic nerve, a part of the brain specialized to interpret visual signals, and at least one fine-tuned light-capturing protein, then we have a system consisting of at least five or six components, all of which are necessary for vision. Numerous other examples could be given of such Category 4 innovations. If we consider only random mutations and natural selection, we should not expect such miracles of innovation to be repeated on any other planets in our galaxy. Natural selection (which creates a “classic sweep” causing the proliferation of a useful trait) makes Category 3 innovations more likely, but does nothing to make Category 4 innovations anything other than fantastically improbable.

I call this type of difficulty “the scattering problem.” It is the problem that when we consider how the mutations needed for a complex innovation would (if they occurred) be scattered across the individuals of a population existing over multiple generations, it is exceptionally unlikely that all of the required mutations would ever end up in a single individual. This “scattering problem” rears its ugly head in every type of Category 4 innovation, in which the complex components needed for an innovation are not individually useful (meaning no “classic sweep” can occur until all of the components have appeared in a particular organism).


evolution problem

I can illustrate this scattering problem through an analogy. Let's imagine you're some “ahead of his time” genius who invented the first home computer in 1960. Suppose that this consisted of 7 key parts: a motherboard, a CPU, a memory unit, a keyboard, a monitor, a disk drive, and an operating system disk. If you were to mail one of these parts on 7 different days, sending a different part each day to the same person, there might be a reasonable chance that the person might put them all together to make a home computer. But imagine you did something very different. Imagine you mailed each part in a different year, sending out the parts gradually between 1960 and 1965. Imagine also that each part was mailed to a person you selected through some random process (such as picking a random street and a random time, and asking the name and address of the first person you saw walking down that street) – a process that might give you any of a million people in the city where you lived. What would be the chance that the parts you had mailed through such a process would ever be assembled into a single computer? Less than one chance in 1,000,000,000,000,000,000,000,000. The actual likelihood in this example is about 1 in a million to the seventh power, or about 1 in 10 to the forty-second power.
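
For readers who want to check the figure at the end of this analogy, the calculation can be sketched as follows. The pool of a million possible recipients and the seven parts are the assumptions of the analogy itself, and the mailings are treated (as in the text) as each needing to reach one particular assembler.

```python
# Sketch of the computer-parts analogy: seven parts, each mailed to a
# recipient drawn at random from a pool of about a million people, with
# each mailing treated as needing to reach one particular assembler.

recipient_pool = 1_000_000   # assumed number of possible recipients
num_parts = 7                # motherboard, CPU, memory, keyboard, monitor, disk drive, OS disk

chance_parts_ever_assembled = (1 / recipient_pool) ** num_parts
print(f"Chance the parts are ever assembled: {chance_parts_ever_assembled:.0e}")  # 1e-42
```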

Comparable odds are what a Darwinian process of random mutations and natural selection would constantly face in regard to complex biological innovations of the Category 4 type. If it luckily happened that there somehow occurred in a gene pool all of the random mutations needed for some biological innovation, such gifts would be scattered so randomly across the population and across some vast length of time that there would be less than 1 chance in 1,000,000,000,000,000,000 that they would ever come together in a single organism, allowing the biological innovation to occur for the first time.

The previous analogy involving computer parts serves well as a rough analogy of the scattering problem. To make a more exact analogy, we would have to imagine some parts distribution organization that persisted for many generations. Such an organization might send out one part of a complex machine in one generation, to a randomly chosen person in a city, and then several generations later send out another part of the machine to some other randomly chosen person in the city (who would be very unlikely to be related to the previous person); and then several generations later send out another part of the machine to some other randomly chosen person in the city; and then several generations later send out another part of the machine to some other randomly chosen person in the city. The overall likelihood of the parts ever becoming assembled into a machine with all the required parts would be some incredibly tiny, microscopic probability. This more exact analogy better simulates the scenario in which favorable mutations supposedly accumulated over multiple generations.

Is there any way to reduce the scattering problem when considering the odds of complex biological innovations occurring by Darwinian evolution? You might try to do that by assuming a smaller population size. However, assuming a smaller population size is very much a case of “robbing Peter to pay Paul.” The reason is this: the smaller the population size, the smaller the chance that some particular favorable random mutation will occur in a gene pool corresponding to that population (just as the smaller the number of lottery ticket buyers in a lottery pool, the lower the chance of any one of them winning a multi-million dollar prize). So any reduction in the assumed population size should involve a corresponding reduction in the average chance of one of the favorable mutations occurring. The result will be that the incredibly low probability of the biological innovation will not be increased.
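
This “robbing Peter to pay Paul” point can be illustrated numerically with the same toy model used above. In the sketch below the per-individual mutation chance is held fixed (an assumption made only for illustration), so shrinking the population shrinks the chance that a given component ever appears in the gene pool by the same factor that it eases the scattering problem, and the overall probability does not improve.

```python
# Sketch of the population-size tradeoff. The per-individual chance of a
# given component mutation (per 50-million-year period) is held fixed.
# The chance that a component appears somewhere in the gene pool then
# scales with population size, while the scattering penalty scales with
# 1 over population size, so the two effects cancel for every component
# after the first.

def overall_chance(population_size, per_individual_chance=1e-8, num_components=5):
    # Chance the first component appears anywhere in the population:
    first_factor = per_individual_chance * population_size
    # Each later component must land in the one organism that already
    # carries the earlier components:
    later_factor = (per_individual_chance * population_size) * (1 / population_size)
    return first_factor * later_factor ** (num_components - 1)

for n in (1_000_000, 10_000, 100):
    print(f"population {n:>9,}: overall chance {overall_chance(n):.0e}")
# population 1,000,000: 1e-34; population 10,000: 1e-36; population 100: 1e-38.
# The overall chance gets smaller, not larger, as the assumed population shrinks.
```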

In the calculations above, I don't even consider an additional aspect of the scattering problem: that the mutations needed for some innovation would be scattered not just over some vast length of time and not just over an entire population but also scattered in random positions of an organism (so, for example, some component needed for an eye would be far more likely to occur uselessly in some other spot such as a foot or an elbow). This consideration just shrinks the likelihood of accidental complex innovations by many additional orders of magnitude, making it billions or trillions of times smaller.

The previous calculations involved the probability of only one organism in a population ending up with some biological innovation. The probability of such a biological innovation becoming common throughout the population (so that most organisms in the population have the innovation) is many times smaller. Evolution experts say that a particular mutation will need to occur many times before it becomes "fixed" in a population, so that all organisms in the population have the mutation. I did not even factor in such a consideration in making the calculations above. When such a consideration is added to the calculation, we would end up with some probability many, many times smaller than the microscopic probability already calculated. Instead of a probability such as 1 in 10 to the thirty-second power, we might have a probability such as 1 in 10 to the fiftieth power or 1 in 10 to the hundredth power.



Probability 1: the probability that the gene pool of some particular species will ever experience (possibly scattered in different generations and organisms) each of the random mutations needed for a complex “Category 4” biological innovation, in which there is no benefit until multiple required components exist arranged in a way providing functional coherence. Estimate: some particular probability.

Probability 2: the probability that all of these mutations will exist in the gene pool during one particular generation (possibly scattered among different organisms). Estimate: some probability that is only a tiny fraction of Probability 1.

Probability 3: the probability that all of these mutations will ever end up in one particular organism, allowing the biological innovation to occur. Estimate: some probability that is only a microscopic fraction of Probability 1, perhaps a million trillion quadrillion times smaller.

Probability 4: the probability that all of these mutations will ever end up in most of the organisms in the population. Estimate: some probability many times smaller than Probability 3, perhaps billions of times smaller.


There is clearly a very strong basis for suspecting that something other than mere chance and natural selection was involved in all the biological innovation that occurred on Earth. Contrary to the naive claims of the authors of The Cosmic Zoo, if we use nothing but the explanations of orthodox Darwinists, we are left with bleak prospects for the existence of humanoid beings elsewhere in our galaxy. A more hopeful attitude would be appropriate only for someone willing to consider metaphysical and teleological considerations that might change the prospects dramatically. It would seem to make no sense for a SETI spokesman to be optimistic, unless he believes in cosmic teleology -- or unless he can specify some way in which non-humanoid extraterrestrials with a radically alien biology might appear without any of the ever-so-improbable Category 4 biological innovations occurring. 

But an orthodox Darwinist will continue to argue along the lines of “Innovations that occurred multiple times must have had a high chance of occurring randomly.” Below is a dialog that illustrates the fallacy in this type of reasoning. Let us imagine a conversation between the mother and father of a 3-year-old son.

Mother: That son of ours is learning a few things. I notice that he knows the channel of his favorite TV show, because I see him repeatedly pressing three buttons on the remote, and getting his favorite channel on the first try.
Father: No, that must be just chance. He's probably just randomly pressing number buttons on the remote. He probably accidentally gets the right channel often because there aren't that many channels.
Mother: Are you kidding me? I must have seen our son fifty different times press three numbers on the remote, and get his favorite channel on the first try. That proves it isn't just chance.
Father: Not at all. If our son got the right channel fifty different times, that just proves what I said – that there must be few TV channels, and that there's pretty good odds of him getting the right channel accidentally. Otherwise, he wouldn't have accidentally got his channel so quickly so many times.

The father's reasoning here is very much in error. Each additional time that the son presses three numbers on the remote and gets his favorite TV channel on the first try is actually an additional item of evidence arguing against the claim that blind chance is involved. If there are ten such cases, it argues strongly against the accidental success theory, and if there are fifty such cases, it argues much more strongly against the accidental success theory. The father has simply taken evidence against his theory, and tried to convert it into evidence for his theory. His statement that “otherwise, he wouldn't have accidentally got his channel so quickly so many times” commits the fallacy of overlooking the possibility that something other than chance is involved, and assuming the truth of what the father is attempting to prove (an example of what is called circular reasoning).
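
The strength of the mother's evidence can be quantified with a small sketch. Assume, purely for illustration, that channels are entered as three-digit numbers, so a random press of three digits has a 1-in-1,000 chance of landing on the boy's one favorite channel.

```python
# Sketch of the remote-control example: how unlikely is it that random
# button pressing would hit the one favorite channel on the first try,
# fifty separate times? The 1,000 possible three-digit presses are an
# illustrative assumption.

possible_presses = 1000                       # three digits: 000 through 999
chance_single_success = 1 / possible_presses  # chance of one lucky first-try hit

observations = 50
chance_all_by_luck = chance_single_success ** observations
print(f"Chance of 50 lucky first-try hits: {chance_all_by_luck:.0e}")  # 1e-150
```

Each repetition multiplies the chance-only explanation by another factor of a thousand, which is the sense in which every additional success is additional evidence against it.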

Similarly, each additional occurrence of a vastly improbable biological innovation is not evidence that such innovations have accidentally occurred but are instead additional reasons for doubting the theory that such innovations are mere accidents. Similarly, if you are playing poker with a card dealer who very frequently deals himself a royal flush in spades, this doesn't show that it's easy to get by chance a royal flush in spades; it's simply a reason for suspecting that something more than chance is involved.

Saturday, April 7, 2018

The Second Humans: A Science Fiction Story

The prosecution attorney began his opening statement:

“Ladies and gentlemen of the jury,” he said. “The woman seated at that table, Seraphina Baker, has committed one of the most terrible crimes a person can commit: a particularly bad type of sex crime. We will now present evidence that will show her guilt beyond reasonable doubt.”

It was the year 2145, but they still conducted court cases pretty much as they did during the twentieth century. A few courts had tried using robotic judges and juries, but they had been laughable failures, and the cases in which they were used had to be retried using regular human judges and juries. But robots were usually used as court bailiffs, and every court now used electronic stenographers to record all the words spoken in the trial.

The first witness called by the prosecution was Seraphina Baker. She could have taken the Fifth Amendment, and refused to testify in the case. But she thought that she could say some things that would help in her defense.

“Ms. Baker, did you begin to consort with one Ethan Baker two years ago?”

“Yes,” said Seraphina. “That's when we started dating.”

“And did you begin to conduct intimate carnal relations with this Ethan Baker?” asked the prosecution attorney.

“What does that mean?” asked Seraphina.

“Did you begin to engage in sexual intercourse with Ethan Baker?” said the prosecution attorney.

“Yes,” said Seraphina. “But that was only after we were married.”

“Your honor,” said the prosecution attorney. “As you well know, marriage is irrelevant to whether the defendant has committed the crime with which she is charged. I move that the defendant's claim about her marriage be struck from the court record.”

“I agree,” said the judge.

“Now, Ms. Baker,” said the prosecution attorney, “before you began to engage in sexual intercourse with Ethan Baker, did you go to the appropriate government website, to check whether this was allowable?”

“No,” said Seraphina. “I figured it must be okay since we were married.”

After the prosecution attorney objected, the last part of Seraphina's statement was struck from the court record. The prosecuting attorney was able to get Seraphina to admit that she had made love with her husband, had not used birth control, and that she had given birth to a child.

The trial was a very short one. The defense called two character witnesses to try to show that Seraphina Baker was a very admirable and charitable person. Soon it came time for the prosecution attorney to make his closing argument.

“Ladies and gentlemen of the jury,” said the prosecution attorney, “let me briefly review the glorious history of our people. One hundred years ago the genetic engineers bestowed a great blessing upon planet Earth. Using gene-splicing, they created a new race of humans that is now known as the Second Humans. The Second Humans are smarter, stronger, faster, healthier and more beautiful than the lowly race of humans who first lived on this planet, the race we call the First Humans.”

“People such as you and me,” continued the prosecution attorney, “know how great it is to be a Second Human, so much superior to the inferior First Humans. But this blessing comes with a solemn obligation. It is the great responsibility of every Second Human to eventually father or mother at least four children. This is so that the Second Humans can grow in numbers, and fulfill their destiny of completely replacing the race of the First Humans.”

“And there is one simple thing that every Second Human must do when choosing a mating partner,” continued the prosecution attorney. “He or she must go to a particular government website, to make sure that any prospective mating partner is also a Second Human. Any Second Human who fails to do this, and who mates with a First Human, is a traitor against the sublime ordained destiny of the Second Humans.”

“By her own testimony Seraphina Baker failed to do this simple thing,” said the prosecution attorney. “She has committed the terrible crime she has been accused of: the crime of inferior copulation, the crime of a Second Human mating with a First Human. So you must find her guilty.”

After the defense attorney made his plea, the jury went into the jury room to deliberate. After an hour of debate, they came back with a verdict.

“In the case of Seraphina Baker,” said the judge. “How find you on the charge of inferior copulation?”

“We find her guilty,” said the jury foreman.

Seraphina was sentenced to ten years in prison. Her husband Ethan and her child Kristen tried to continue to live in their community of Second Humans. But the Second Humans would often make nasty remarks about Kristen, calling her a half-breed. So Ethan and Kristen reluctantly moved to a community of First Humans.

Living in that impoverished community was very hard. Eager to make sure that the Second Humans would take over the planet and replace the First Humans, the government was doing everything it could to help that happen. All of the government benefits went only to the Second Humans, and the First Humans were left with no health insurance, no municipal services, fire-trap housing, much higher effective tax rates, and very poor schools. When the First Humans would go downtown in their cities, they would find many store signs and restaurant signs telling them to keep out.


The Second Humans would often torment the First Humans with this taunt:

First Humans are the worst humans!
First Humans are the worst humans!

Sometimes a Second Human child would ask a First Human child a hard historical or scientific question such as “What caused the Boer Wars?” or “Explain the difference between a quark and a quasar.” If the First Human was slow in answering, the Second Human child would unleash a taunt such as:

Boys can't reckon
Unless they are Second.

A First Human girl might get a taunt like this from a Second Human:

Girls are accursed
Whenever they're First.

Years later Kristen asked her father a question.

“If the Second Humans are a superior race,” wondered Kristen, “then why are so many of them cruel and heartless?”

“Maybe it's because the genetic engineers didn't find any way to make people more moral by messing with their DNA,” said Ethan sadly.