Our future, our universe, and other weighty topics


Monday, March 9, 2020

Complex Innovations by Random Mutations Are a Googol Times More Improbable Than a Point Improvement

The Darwinist explanation for complex functional biology is that it arose from random mutations and natural selection. We have no evidence that any complex biological innovations ever appeared because of random mutations, natural selection or any combination of the two. Darwinists try to back up their explanation of biological innovations by presenting examples of something that improved through a random mutation. But we must distinguish between two different probabilities:

  1. The likelihood of getting an improvement in existing functionality from one or more random mutations (call it Probability 1).
  2. The likelihood of getting a new complex innovation from one or more random mutations (call it Probability 2).

The second of these probabilities is almost infinitely more improbable than the first. I can give an example that illustrates the vast difference between these two probabilities.

Let us consider: what is the likelihood that a random change in some functional and almost-perfected written text may improve that text? We can roughly calculate this probability in a fairly straightforward manner, using this formula:

P = X × Y, where P is the probability of a random character improving the almost-finished text, X is the percentage of characters that are typing errors, and Y is the likelihood that a random character is the one that fixes a given typing error.

Here is a simple example. Let's imagine you just typed some code for a computer program. There is a certain percentage of these characters that will be typing errors. We can use 1% as a rough estimate for this probability. The probability that a random character will fix the typing error is about equal to 1 divided by the number of possible characters that you can type using a single keystroke. Given that there are about 50 characters you can type using a keyboard, we can roughly estimate this probability as being 2 percent. So under these assumptions, the likelihood that a random keystroke anywhere in your text will improve your recently typed programming code is roughly 1% (.01) multiplied by 2% (.02). That is a likelihood of about .0002. If you were a very careful typist who made only one mistake per 1000 characters, the probability of a random character improving your text would be only .00002. But if you were a rather careless typist who made a mistake in every ten characters typed, the probability of a random character improving your text would be higher, about .002.
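
The arithmetic above can be checked with a few lines of Python; the 50-character keyboard and the three error rates are the rough assumptions stated in the preceding paragraph.

```python
# Rough sketch of the point-improvement estimate discussed above.
# Assumption: about 50 typeable characters, so a random character
# fixes a given typo with probability Y = 1/50 = 2 percent.

def p_random_improvement(error_rate, n_characters=50):
    """P = X * Y: chance that one random keystroke improves the text."""
    return error_rate * (1.0 / n_characters)

for x in (0.001, 0.01, 0.1):
    print(f"error rate {x}: P = {p_random_improvement(x)}")
```

Running this reproduces the three figures above: .00002 for the very careful typist, .0002 for the typical one, and .002 for the careless one.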

It is clear from this discussion that it is not vastly improbable that a random mutation will improve some close-to-completed piece of functionality, given a small error rate in the block of semantic information that codes for that functionality. So when we hear about a rare random mutation that slightly improves an existing biological function, we should not be terribly surprised.

But what about the likelihood of random mutations producing whole new functionality? Then the math is entirely different. To calculate the chance of that, we must estimate the likelihood of you producing a working body of instructions (such as a functional computer program) by just typing random characters on your keyboard, or having some computer program generate completely random characters. It is very hard to do anything like an exact calculation of such a probability. I can calculate the exact number of combinations that could be produced from typing a thousand random characters, but there is no way to precisely calculate what percentage of these would be useful or meaningful instructions.

But there is a way to do a very rough calculation. Consider a block of six characters. Each of those characters can be any of about 50 different things, given 26 possible letters, 10 possible numbers, and about 14 different punctuation marks. That means there are about fifty to the sixth power possibilities (or 15,625,000,000 possibilities) when you randomly type 6 characters. But the English language has only about 200,000 words, and there may be as many as 100,000 recognizable names (last names, first names, brand names and the like), meaning there are very roughly about 300,000 six-character combinations that might be meaningful. 15,625,000,000 divided by 300,000 is about 52,083. That means your chance of typing an English word or recognizable name when you type six random characters is very roughly about 1 in 52,083, which we can round down to about 1 in 50,000. In this rough, simplistic calculation I ignore the fact that most English words are shorter or longer than six characters.
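
The six-character arithmetic is simple enough to verify directly; the 300,000 figure for meaningful strings is the rough assumption stated above.

```python
# Verifying the rough six-character estimate from the text.
combos = 50 ** 6              # all possible 6-character strings
meaningful = 300_000          # rough assumption: words plus names
odds = combos // meaningful   # roughly 1 chance in this many
print(combos)                 # 15,625,000,000
print(odds)                   # 52,083, rounded in the text to ~1 in 50,000
```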

We can do a little reality test on this estimate by using the random string generator at this site. When I use “abcdefghijklmnopqrstuvwxyz0123456789~!@#$%^&*()_+={}|[]\” as the allowed characters, and type 100 under “Number of strings,” and type “6” as the length, I get 100 random strings, none of which are intelligible words or names. When I keep pressing the Generate button 5 or 10 times, I fail to see any intelligible word or name. Although this doesn't prove the previous estimate, it is a result consistent with such an estimate.
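
We can also ask how surprising a "no words" result is under the rough 1-in-52,083 estimate. Here is a short Python sketch, assuming roughly 1,000 generated strings in total from repeated clicks of the Generate button (that count is my assumption):

```python
# Is seeing zero intelligible words in ~1,000 random 6-character
# strings consistent with a hit probability of 300,000 / 50**6?
p_hit = 300_000 / 50 ** 6            # about 1 in 52,083 per string
n_strings = 1_000                    # assumed total strings viewed
expected_hits = n_strings * p_hit    # about 0.02 expected words
p_zero_hits = (1 - p_hit) ** n_strings
print(expected_hits)                 # roughly 0.019
print(p_zero_hits)                   # roughly 0.98: zero hits is the likely outcome
```

So failing to see any intelligible word in such a trial is just what the estimate predicts.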


Random text generated at this site

Now, in a block of 1000 characters there are 166 six-character blocks. So to make a very rough calculation of the chance of getting meaningful instructions from 1000 randomly typed characters, one that is greatly oversimplified, it would seem that we should multiply the 1 in 50,000 likelihood discussed above by itself 166 times. This gives a likelihood of roughly 1 in 10^780, or 1 in ten to the seven-hundred-eightieth power. How often should we expect something that unlikely to happen? We should expect that it would never happen in the history of the observable universe.
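
The compounding across 166 blocks is easiest to check with logarithms, since the product itself is far too small for ordinary floating-point numbers:

```python
import math

# Multiplying a 1-in-50,000 chance by itself 166 times, via logarithms.
blocks = 1000 // 6                        # 166 six-character blocks
log10_p = blocks * math.log10(1 / 50_000)
print(blocks)                             # 166
print(round(-log10_p))                    # 780: roughly 1 in 10**780
```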

I may note that the actual chance of getting a useful, meaningful instruction set of 1000 random characters is much smaller than the probability roughly calculated above. I have merely calculated the chance of getting 1000 random characters that have a very superficial appearance of intelligibility, with all the words being intelligible words. The chance of such words actually making up a useful instruction set would be vastly smaller than the microscopic probability just calculated.

So we can see that there is an ocean of difference between the likelihood of a single random mutation producing an improvement in some nearly-complete functional instructions and the likelihood of a complex new functional instruction set arising from random mutations. It is more than a googol times (a googol is ten to the hundredth power) more improbable that a complex new functional instruction set might arise from random mutations than it is that a single random mutation might improve some existing functional instruction set.
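
The "more than a googol times" claim can be checked by comparing orders of magnitude, using the rough figures from this post: a Probability 1 of about .0002 and a Probability 2 of about 1 in ten to the seven-hundred-eightieth power.

```python
import math

# Comparing the two probabilities' orders of magnitude.
log10_p1 = math.log10(2e-4)        # point improvement: about -3.7
log10_p2 = -780.0                  # complex innovation: rough estimate from this post
ratio_exponent = log10_p1 - log10_p2
print(ratio_exponent)              # about 776 orders of magnitude
print(ratio_exponent > 100)        # True: far beyond a googol (10**100)
```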

So when our Darwinist biologists attempt to back up the idea that complex biological innovations arose from random mutations, and they refer us to cases where some tiny gain in function resulted from a single point mutation, they are trying to make us believe in something almost infinitely more improbable than the cases they are providing as evidence. Although proteins are not specified by strings of English characters, the mathematics of getting a functional protein from a gene that is a random string of thousands of nucleotide base pairs is very much in the same ballpark as the math I have discussed relating to the probability of getting a functional instruction set from 1000 random characters.  Each of the more than 20,000 types of protein molecules used by humans is its own complex biological innovation, and a protein is typically specified by its own gene that is a unique sequence of thousands of nucleotide base pairs. 
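
To see why the protein math lands in the same ballpark, note that each position in a gene can hold one of 4 nucleotide bases. For an illustrative gene of 3,000 base pairs (the specific length is my assumption; the text says only "thousands"):

```python
import math

# Size of the sequence space for a hypothetical 3,000-base-pair gene.
gene_length = 3_000                           # illustrative assumption
log10_sequences = gene_length * math.log10(4) # 4 bases per position
print(round(log10_sequences))                 # 1806: about 10**1806 possible sequences
```

That sequence space is even larger than the one for 1,000 random characters, so the instruction-set analogy is, if anything, conservative.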

I can give an analogy to illustrate the difference between the likelihood of some random mutation producing a point improvement, and the likelihood of some set of random mutations producing a complex biological innovation. The first likelihood is like the probability of throwing two cards towards a house of cards, and the two cards producing an improvement at one point in the house of cards. The second likelihood is like the chance of you tossing an entire deck of 52 cards into the air, and those cards forming into a house of cards, with most of the cards becoming part of the house of cards. There is obviously no comparison between these two. The first event is something we would expect to see happening maybe once in 50,000 tries. The second event is something we would not expect to occur even once in a million billion trillion quadrillion tries, something we would not expect to see even once in the history of the observable universe, even if the observable universe were filled with people spending their lives throwing decks of cards into the air.

Realistic math is so very hostile to the explanatory pretensions of our biologists that one scientist last month took the extremely desperate measure of appealing to multiverse speculations to try to back up theories of random biological origins.  I'll have a reply to that one day. 
