
Our future, our universe, and other weighty topics


Sunday, February 23, 2025

Problems a Hundred Miles Over Our Heads

While scientists often boast about how much they know, the truth is that human knowledge is merely fragmentary. The English expression "over your head" means something that is beyond your understanding. There are very many fundamental problems that are a hundred miles over the heads of today's scientists. The diagram below illustrates the situation.

[Diagram: problems scientists have not solved]

Let me explain the diagram by explaining why each of the listed problems is many miles over the heads of today's scientists. 

The problem of explaining minds, memory and psychical phenomena. The first cloud in the diagram mentions the mountain-sized problem of explaining human minds and human memory. The problem is gigantic and very much over the heads of today's scientists, both because of the huge variety of human mental experiences and human mental capabilities, and because of the many brain physical shortfalls that exclude the brain as a credible explanation for most such capabilities and experiences. 

[Cartoon: a scientist trying to play "fake it until you make it"]

Morphogenesis problems (super-hard because of DNA limitations).  If someone defines a fertilized human egg as a human being, a definition that is very debatable, you might be able to say, "I understand the physical origin of a human being," and merely refer to a sperm uniting with an egg cell as such an origin.  But a more challenging question is whether anyone understands the physical origin of an adult human being. The physical structure of an adult human being is a state of organization many millions of times more complex than a mere fertilized speck-sized egg cell.  (A human egg cell is about a tenth of a millimeter in diameter, but a human body occupies a volume of about 75 million cubic millimeters.) So you don't explain the physical origin of an adult human being by merely referring to the fertilization of an egg cell during or after sexual intercourse. 
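As a rough sanity check on the scale difference just described, we can compare the quoted body volume with the volume of a sphere a tenth of a millimeter across (a minimal sketch; the spherical approximation of the egg cell is my own simplifying assumption):

```python
import math

# Figures from the text: an egg cell roughly 0.1 mm across,
# and an adult body volume of roughly 75 million cubic millimeters.
egg_diameter_mm = 0.1
body_volume_mm3 = 75_000_000

# Approximate the egg cell as a sphere (an assumption for this estimate).
egg_volume_mm3 = (4 / 3) * math.pi * (egg_diameter_mm / 2) ** 3

ratio = body_volume_mm3 / egg_volume_mm3
print(f"egg volume ≈ {egg_volume_mm3:.1e} mm^3")   # ≈ 5.2e-04 mm^3
print(f"body/egg volume ratio ≈ {ratio:.1e}")      # ≈ 1.4e+11
```

By volume alone, the adult body exceeds the egg cell by a factor of roughly a hundred billion, which is consistent with (indeed far larger than) the "many millions of times" figure stated above.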

We cannot explain the origin of an adult human body by merely using words such as "development" or "growth." Trying to explain the origin of an adult human body by merely mentioning a starting cell and mentioning "growth" or "development" is as vacuous as trying to explain the mysterious appearance of a building by saying that it appeared through "origination" or "construction."  If we were to find some mysterious huge building on Mars, a state of great organization, we would hardly be explaining it by merely saying that it arose from "origination" or by saying that it appeared through "construction." When a person tries to explain the origin of a human body by merely mentioning "growth" or "development" or "morphogenesis," he is giving as empty an explanation as someone who tells you he knows how World War II started, because he knows that it was caused by "historical events."

There is a more specific account often told to try to explain the origin of an adult human body. The account goes something like this:

"Every cell contains a DNA molecule that is a blueprint for constructing a human, all the information that is needed. So what happens is that inside the body of a mother, this DNA plan for a human body is read, and the body of a baby is gradually constructed. It's kind of like a construction crew working from a blueprint to make a building."

The problem with this account is that while it has been told very many times, the story is just plain false, as many scientists have confessed. There is no such blueprint for a human being in human DNA. We know exactly what is in human DNA. It is merely low-level chemical information, such as the nucleotide sequences that specify the amino acids making up the polypeptide chains that are the starting points of protein molecules. DNA does not specify anatomy. DNA is not a blueprint for making a human. DNA is not a recipe for making a human. DNA is not a program or algorithm for making a human. 

Not only does DNA not specify how to make a human, DNA does not even specify how to make any organ or appendage or cell of a human. There are more than 200 types of cells in human beings, each an incredibly organized thing (cells are so complex they are sometimes compared to factories or cities).  DNA does not specify how to make any of these hundreds of types of cells. Cells are built from many types of smaller structural units called organelles. DNA does not even specify how to make such low-level organelles. 

The chart below diagrams the hierarchical organization of the human body, and what part of that organization is explained by DNA:

[Diagram: pyramid of organization in a human body]

Partially because so few of these layers are explained by DNA or its genes, the problem of explaining morphogenesis (the formation of a full human body) is a problem very far over the heads of scientists. 

Problem of explaining vast levels of biological organization. Below are some categories of innovations. These categories are not mutually exclusive.


| Name | Description | Example(s) |
| --- | --- | --- |
| Type A Innovation | Requires all of its parts before providing any functional benefit | Mousetrap; probably some biological units |
| Type B Innovation | Requires almost all of its parts before providing any functional benefit | Jet aircraft, suspension bridge, television, digital computer; many protein molecules |
| Type C Innovation | Requires most of its parts before providing any benefit | Cells; most protein molecules; an automobile (which doesn't need its roof, doors, seats, hood, or bumper to be functional); an electric fan (which gives some benefit even if the cage and stand are missing); the cardiovascular system |
| Type D Innovation | Requires a series of sub-components, each of which is useless until mostly completed | An office tower: each floor provides a benefit, but the construction of each floor requires many new parts, and no floor is useful until mainly completed. Also porcupine barbs (each barb is useful). |
| Type E Innovation | May have some use in a relatively simple fractional form, but requires many more parts organized in the right way to achieve a higher level of usefulness | Vision systems (?) |
| Type F Innovation | Requires an arrangement of several complex parts, with at least 25% of its parts existing and well-arranged, before becoming useful |  |
| Type G Innovation | Usefulness increases slightly as each small, simple part of the innovation is added | Roof insulation, but almost nothing in the world of biology |

Darwinism may be able to explain some Type G innovations. But most of the impressive innovations in biology seem to be Type B or Type C innovations. Innovations of those types are not credibly explained by any of the ideas of Darwinism, including the idea of so-called natural selection. Some of the reasons why Darwinism and gradualism are not credible explanations for most of the more complex innovations in natural history and biology are explained in my post "Anatomically Uninformative DNA, Nonfunctional Intermediates and Useless Early Stages Are Why Gradualism Does Not Work," which you can read here.

Part of the reason why biological systems are beyond the explanation of scientists is the very great interdependence of the components of such systems, illustrated by the diagrams below:

[Diagram: complex biological system]

[Diagram: interdependence of biological components]

Origin of life problem. Everything we have learned about the very great organization and complexity of even the simplest living things suggests that the natural origin of life should be impossible, and should be as unlikely as a thrown deck of cards accidentally forming into a house of cards consisting of 52 cards. The concept of abiogenesis (that life can naturally arise from non-life) is a concept with zero observational and experimental support. Scientists have had no luck in trying to create a living thing in experiments simulating the early Earth, and have failed to create even a single protein molecule in such experiments. Below are some relevant quotes by scientists:

  • "The transformation of an ensemble of appropriately chosen biological monomers (e.g. amino acids, nucleotides) into a primitive living cell capable of further evolution appears to require overcoming an information hurdle of superastronomical proportions (Appendix A), an event that could not have happened within the time frame of the Earth except, we believe, as a miracle (Hoyle and Wickramasinghe, 1981, 1982, 2000). All laboratory experiments attempting to simulate such an event have so far led to dismal failure (Deamer, 2011; Walker and Wickramasinghe, 2015)." -- "Cause of Cambrian Explosion - Terrestrial or Cosmic?," a paper by 21 scientists, 2018. 
  • "Biochemistry's orthodox account of how life emerged from a primordial soup of such chemicals lacks experimental support and is invalid because, among other reasons, there is an overwhelming statistical improbability that random reactions in an aqueous solution could have produced self-replicating RNA molecules."  John Hands MD, "Cosmo Sapiens: Human Evolution From the Origin of the Universe," page 411. 
  • "The ongoing insistence on defending scientific orthodoxies on these matters, even against a formidable tide of contrary evidence, has turned out to be no less repressive than the discarded superstitions in earlier times. For instance, although all attempts to demonstrate spontaneous generation in the laboratory have led to failure for over half a century, strident assertions of its necessary operation against the most incredible odds continue to dominate the literature." -- 3 scientists (link).
  • "The interconnected nature of DNA, RNA, and proteins means that it could not have sprung up ab initio from the primordial ooze, because if only one component is missing then the whole system falls apart – a three-legged table with one missing cannot stand." -- "The Improbable Origins of Life on Earth" by astronomer Paul Sutter. 
  • "Even the simplest of these substances [proteins] represent extremely complex compounds, containing many thousands of atoms of carbon, hydrogen, oxygen, and nitrogen arranged in absolutely definite patterns, which are specific for each separate substance. To the student of protein structure the spontaneous formation of such an atomic arrangement in the protein molecule would seem as improbable as would the accidental origin of the text of Virgil's 'Aeneid' from scattered letter type." -- Chemist A. I. Oparin, "The Origin of Life," pages 132-133.

Matter-antimatter asymmetry problem. Let us imagine the early minutes of the Big Bang about 13 billion years ago, when the density of the universe was incredibly great. At that time the universe should have consisted of energy, matter and antimatter. The energy should have been in the form of very high energy photons that were frequently colliding with each other. All such collisions should have produced equal amounts of matter and antimatter. For example, a collision of high energy particles with sufficient energy creates a matter proton and an antimatter particle called an antiproton. So the amount of antimatter shortly after the Big Bang should have been exactly the same as the amount of matter. As a CERN page on this topic says, "The Big Bang should have created equal amounts of matter and antimatter in the early universe." But whenever a matter particle touched an antimatter particle, both would have been converted into photons. The eventual result should have been a universe consisting either of nothing but photons, or some matter but an equal amount of antimatter. But only trace amounts of antimatter are observed in the universe. A universe with equal amounts of matter and antimatter would have been uninhabitable, because of the vast amount of lethal energy released when even a tiny bit of matter comes in contact with a tiny bit of antimatter.
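To illustrate the scale of the energy release mentioned in the last sentence, here is a back-of-the-envelope E = mc² calculation (a sketch only; the gram-scale quantities and the ~6.3×10^13 J Hiroshima yield used for comparison are my own illustrative figures, not from the text above):

```python
# Annihilation converts the entire rest mass of a matter-antimatter
# pair into energy, via E = m * c^2.
c = 2.998e8        # speed of light in m/s
mass_kg = 0.001    # one gram of matter meeting one gram of antimatter

# Both the matter and the antimatter are converted, so total mass is 2 g.
energy_j = 2 * mass_kg * c**2

# For scale: the Hiroshima bomb released roughly 6.3e13 joules (~15 kt TNT).
hiroshima_j = 6.3e13
print(f"energy released ≈ {energy_j:.1e} J")               # ≈ 1.8e+14 J
print(f"≈ {energy_j / hiroshima_j:.1f} Hiroshima bombs")   # ≈ 2.9
```

So even a two-gram matter-antimatter encounter would release the energy of roughly three atomic bombs, which is why a universe with equal amounts of matter and antimatter would be uninhabitable.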

Below are some relevant quotations by scientists or scientist organizations:

  • "One cannot ignore the deep, unanswered question concerning the origin of the baryonic component because baryons and antibaryons should have annihilated almost completely, leaving only a negligible abundance today. Yet we observe a far greater concentration than the standard model of particle physics and the first and second laws of thermodynamics should have permitted. So where did baryons come from?" -- Astronomer Fulvio Melia, "A Candid Assessment of Standard Cosmology," 2022.
  • "We believe the big bang produced the same amounts of matter and antimatter. These should have annihilated each other, leaving a universe made of electromagnetic radiation and not much else." -- Professor Stefan Ulmer, a scientist at CERN (link). 
  • "The Big Bang should have created equal amounts of matter and antimatter in the early universe. But today, everything we see from the smallest life forms on Earth to the largest stellar objects is made almost entirely of matter. Comparatively, there is not much antimatter to be found." -- "The matter-antimatter asymmetry problem," a page on the CERN web site describing the European Organization for Nuclear Research projects (link).
The matter/antimatter asymmetry problem is one scientists have made no progress in solving. It seems to be a problem a hundred miles over their heads. 

Problem of explaining the origin of the universe. Scientists have no testable theory as to what caused the origin of the universe in the Big Bang. Every attempt that has been made to suggest a natural explanation for the Big Bang has been the thinnest speculation. The problem of what caused the Big Bang is a hundred miles over the heads of scientists. 

Cosmic fine-tuning problem.  Life is possible in our universe because of many seemingly fine-tuned features and fundamental constants. All attempts to naturally explain such fine-tuning have failed.  In particular:
  • Faced with an undesired case of very strong fine-tuning involving the Higgs boson or Higgs field, scientists wrote more than 1000 papers speculating about a theory called supersymmetry, which tries to explain away this fine-tuning; but the theory has failed all experimental tests at the Large Hadron Collider.  

  • Faced with an undesired result that the universe's expansion rate at the time of the Big Bang was apparently fine-tuned to more than 1 part in 1,000,000,000,000,000,000,000, scientists wrote more than a thousand speculative "cosmic inflation" cosmology papers trying to explain away this thing they didn't want to believe in, by imagining a never-observed earliest instant in which the universe expanded at an exponential rate. But the "cosmic inflation" theories are unverifiable. Because of the density of the earliest years of the universe, we can never observe the first thousand years of the universe's history. The main prediction of these "cosmic inflation" theories has been that something called primordial B-modes would be observed. Gigantic sums have been spent looking for these primordial B-modes, but all attempts have failed. 

  • Scientists tried to explain away cosmic fine-tuning by speculating about a multiverse, an imagined infinity or near-infinity of universes. All such speculations do nothing to explain cosmic fine-tuning, for reasons I explain in my posts here and here

Below are some relevant quotations by scientists:

  • "We conclude that a change of more than 0.5 % in the strength of the strong interaction or more than 4 % change in the strength of the Coulomb force would destroy either nearly all C [carbon] or all O [oxygen] in every star. This implies that irrespective of stellar evolution the contribution of each star to the abundance of C or O in the ISM would be negligible. Therefore, for the above cases the creation of carbon-based life in our universe would be strongly disfavoured." -- Oberhummer, Csótó, and Schlattl, "Stellar Production Rates of Carbon and Its Abundance in the Universe."
  • "The cosmological constant must be tuned to 120 decimal places and there are also many mysterious ‘coincidences’ involving the physical constants that appear to be necessary for life, or any form of information processing, to exist....Fred Hoyle first pointed out, the beryllium would decay before interacting with another alpha particle were it not for the existence of a remarkably finely-tuned resonance in this interaction. Heinz Oberhummer has studied this resonance in detail and showed how the amount of oxygen and carbon produced in red giant stars varies with the strength and range of the nucleon interactions. His work indicates that these must be tuned to at least 0.5% if one is to produce both these elements to the extent required for life."  -- Physicists B.J. Carr and M.J. Rees, "Fine-Tuning in Living Systems." 
  • "The Standard Model [of physics] is regarded as a highly 'unnatural' theory. Aside from having a large number of different particles and forces, many of which seem surplus to requirement, it is also very precariously balanced. If you change any of the 20+ numbers that have to be put into the theory even a little, you rapidly find yourself living in a universe without atoms. This spooky fine-tuning worries many physicists, leaving the universe looking as though it has been set up in just the right way for life to exist." -- Harry Cliff, particle physicist, in a Scientific American article.
  • "If the parameters defining the physics of our universe departed from their present values, the observed rich structure and complexity would not be supported....Thirty-one such dimensionless parameters were identified that specify our universe. Fine-tuning refers to the observation that if any of these numbers took a slightly different value, the qualitative features of our universe would change dramatically. Our large, long-lived universe with a hierarchy of complexity from the sub-atomic to the galactic is the result of particular values of these parameters." -- Jeffrey M. Shainline, physicist (link). 
  • "The overall result is that, because multiverse hypotheses do not predict the fine-tuning for this universe any better than a single universe hypothesis, the multiverse hypotheses fail as explanations for cosmic fine-tuning. Conversely, the fine-tuning data does not support the multiverse hypotheses." -- physicist V. Palonen, "Bayesian considerations on the multiverse explanation of cosmic fine-tuning."
  • "A mere 1 percent offset between the charge of the electron and that of the proton would lead to a catastrophic repulsion....My entire body would dissolve in a massive explosion...The very Earth itself, the planet as a whole, would crack open and fly apart in an annihilating explosion...This is what would happen were the electron's charge to exceed the proton's by 1 percent. The opposite case, in which the proton's charge exceeded the electron's, would lead to the identical situation...How precise must the balance be?...Relatively small things like atoms, people and the like would fly apart if the charges differed by as little as one part in 100 billion. Larger structures like the Earth and the Sun require for their existence a yet more perfect balance of one part in a billion billion." -- Astronomy professor emeritus George Greenstein, "The Symbiotic Universe: Life and Mind in the Cosmos," pages 63-64
  • "What is particularly striking is how sensitive the possibility of life in our universe is to a small change in these constants. For example, if the constant that controls the way the electromagnetic field behaves in a vacuum is changed by four percent, then fusion in stars could not produce carbon....Change the cosmological constant in the 123rd decimal place and suddenly it's impossible to have a habitable galaxy." --  Marcus Du Sautoy, Charles Simonyi Professor for the Public Understanding of Science at Oxford University, "The Great Unknown," page 221. 
  • "The evolution of the cosmos is determined by initial conditions (such as the initial rate of expansion and the initial mass of matter), as well as by fifteen or so numbers called physical constants (such as the speed of light and the mass of the electron). We have by now measured these physical constants with extremely high precision, but we have failed to come up with any theory explaining why they have their particular values. One of the most surprising discoveries of modern cosmology is the realization that the initial conditions and physical constants of the universe had to be adjusted with exquisite precision if they are to allow the emergence of conscious observers. This realization is referred to as the 'anthropic principle'...Change the initial conditions and physical constants ever so slightly, and the universe would be empty and sterile; we would not be around to discuss it. The precision of this fine-tuning is nothing short of stunning. The initial rate of expansion of the universe, to take just one example, had to have been tweaked to a precision comparable to that of an archer trying to land an arrow in a 1-square-centimeter target located on the fringes of the universe, 15 billion light years away!" -- Trinh Xuan Thuan, Professor of Astronomy, University of Virginia, "Chaos and Harmony," p. 235.


 

Problem of explaining cell reproduction:  Cells like the ones humans have are enormously complex things.  We have been misled by diagrams that depict cells as having only a few organelles. Most types of human cells have thousands of organelles, of many different types. Human cells are so complex that they have been compared to factories or cities.  How are cells so complex able to reproduce? Scientists cannot explain it. Although the problem of cell reproduction is a million times simpler than the problem of human morphogenesis, even the problem of explaining how cells reproduce is a hundred miles over the heads of scientists.  Typically consisting of many hundreds or thousands of types of proteins, each of which has its own special arrangement of hundreds or thousands of amino acids, a human cell can be compared in complexity to an automobile. But suppose you saw an automobile split to become two separate automobiles. That would be a miracle of origination that would confound and baffle you. Human cell reproduction is an event just as baffling as an automobile splitting into two working automobiles. 

Why is there something rather than nothing?   The utter non-existence of the universe is perfectly conceivable, and involves no contradiction. If there had never existed any universe, such a counter-factual state of utter nonexistence would be the simplest possible state of affairs, and would have involved zero explanatory problems.  So why is there something rather than nothing? The problem is one a hundred miles over the heads of scientists. 

Problem of explaining the paranormal.  Humans have systematically observed and studied the paranormal for roughly 200 years. The explanatory problems of explaining the paranormal are endless. They include the problem of explaining all of these things:
  • The accounts of very many thousands of reliable witnesses who had near-death experiences, often reporting the most vivid and life-changing experiences at a time when their heart had stopped and their brain waves had shut down, something that should have prevented any experience according to "brains make minds" dogmas. 
  • The accounts of very many people reporting out-of-body experiences in which they observed their own bodies from a position meters away (discussed here, here, and here). 
  • The many cases in which medical personnel who did not have such experiences verified the medical resuscitation details recalled by people who had near-death experiences, who recalled medical details that occurred when such people should have been completely unconscious because their hearts had stopped.
  • Abundant cases of dying people who reported seeing dead relatives.
  • Very many cases of people who saw an apparition of someone they did not know had died, with the witness soon learning the person did die at about the time the apparition was seen (discussed in the 18 posts here). 
  • Very many cases when multiple witnesses reported seeing the same apparition (discussed in my series of posts here). 
  • The very careful research of people like Ian Stevenson who documented countless cases of children who claimed to recall past lives, and found that their accounts often checked out well, with the details of the “past lives” being corroborated, with the children often having birthmarks corresponding to the deaths they recalled, and with the children often recognizing people or places they should not have been able to recognize unless they had the reported past life.
  • A great abundance of reports in the nineteenth century of spiritual manifestations such as mysterious raps that spelled out messages, tables moving when no one touched them, tables half-levitating when no one touched them, and tables fully levitating when no one touched them (discussed in the series of posts here).  
  • Spectacular cases in the history of mediums, with paranormal phenomena often being carefully documented by observing scientists, as in the cases of Daniel Dunglas Home, Eusapia Palladino, Leonora Piper, and Indridi Indridason.
  • Two hundred years of evidence for clairvoyance in which people could observe things far away or observe things when they were blindfolded or observe things in closed containers such as locked boxes. 
  • Abundant photographic evidence for mysterious orbs, including 800 photos of mysterious striped orbs, orbs appearing with dramatically repeating patterns, and orbs appearing with dramatically repeating patterns while falling water was being photographed. 
  • Abundant reports of mysterious orbs being seen with the naked eye, described in the 120+ posts here.
  • A great abundance of anecdotal evidence for telepathy, with large fractions of the human population reporting telepathic experiences. 
  • More than a century of solid laboratory evidence for telepathy, including cases discussed here, here, and here.  
  • A great abundance of evidence for a phenomenon of materialization, involving the mysterious appearance of tangible human forms. 
  • Extremely numerous cases in which living people report hard-to-explain events and synchronicity suggesting interaction with survivors of death.
Mainstream scientists typically take a "head in the sand" approach when faced with the problem of explaining such things. Their typical attitude is a clear hint about how the problem of explaining the paranormal is a hundred miles over their heads. 

[Diagram: paranormal phenomena]

Protein and protein complex origination problem.  There are three aspects of this problem.

Problem of explaining the origin of proteins.  In 2019 computer scientist David Gelernter published a widely discussed book review entitled "Giving Up Darwin." He commented on the improbability of the natural origin of a new type of functional protein:

"Now at last we are ready to take Darwin out for a test drive. Starting with 150 links of gibberish, what are the chances that we can mutate our way to a useful new shape of protein? We can ask basically the same question in a more manageable way: what are the chances that a random 150-link sequence will create such a protein? Nonsense sequences are essentially random. Mutations are random. Make random changes to a random sequence and you get another random sequence. So, close your eyes, make 150 random choices from your 20 bead boxes and string up your beads in the order in which you chose them. What are the odds that you will come up with a useful new protein?...The total count of possible 150-link chains, where each link is chosen separately from 20 amino acids, is 20^150. In other words, many. 20^150 roughly equals 10^195, and there are only 10^80 atoms in the universe. What proportion of these many polypeptides are useful proteins?"

Gelernter tells us that the ratio of long useful amino acid sequences (compared to useless amino acid sequences that will not be the basis of functional proteins) is incredibly small. He cites a paper by Douglas Axe estimating that the ratio is something like 1 in ten to the seventy-fourth power, or about 1 in 10^74. 
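The arithmetic in the quoted passage is easy to verify with logarithms (a quick check of the numbers, not part of Gelernter's text):

```python
import math

# Number of possible 150-link chains drawn from a 20-letter alphabet: 20^150.
# Its order of magnitude is 150 * log10(20).
exponent = 150 * math.log10(20)
print(f"20^150 ≈ 10^{exponent:.1f}")   # ≈ 10^195.2, matching the quote's ~10^195

# Compare with the ~10^80 atoms in the observable universe:
print(f"20^150 / 10^80 ≈ 10^{exponent - 80:.0f}")   # ≈ 10^115
```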

Gelernter states this:

"Try to mutate your way from 150 links of gibberish to a working, useful protein and you are guaranteed to fail. Try it with ten mutations, a thousand, a million—you fail. The odds bury you. It can’t be done."

The phrasing of the middle sentence is a great understatement. What it should be is something like "Try it with a million mutations, a billion, a trillion, a quadrillion, a quintillion—you fail." If you have some result that you can only get in about 1 in 10^74 attempts, then you can try 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 times, and you still very probably do not succeed.  According to the paper here, "we arrive at a figure of 4×10^21 different protein sequences tested since the origin of life." The problem is that isn't enough tries to get even one success, if you're talking about proteins of average length.  If you have some result that you can only get in about 1 in 10^74 attempts, then 4×10^21 tries will not give you even a 1 in 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 chance of a single success.
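Taking the two figures in this paragraph at face value (Axe's 1-in-10^74 estimate, and the ~4×10^21 sequences tried), the expected number of successes can be sketched as follows; for independent trials with n·p far below 1, the chance of at least one success is approximately n·p:

```python
p = 1e-74   # assumed chance that one random 150-residue chain is functional (Axe's estimate)
n = 4e21    # estimated protein sequences "tested" in Earth's history (figure quoted above)

# P(at least one success in n independent tries) = 1 - (1 - p)^n,
# which is approximately n * p whenever n * p << 1.
chance = n * p
print(f"chance of even one success ≈ {chance:.0e}")   # ≈ 4e-53
```

On these assumptions, the chance of a single success over the whole history of life is around 1 in 10^52, which is the point the paragraph is making.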

Gelernter misstated the average number of amino acids in a protein. He states, "A protein molecule is based on a chain of amino acids; 150 elements is a 'modest-sized' chain; the average is 250." No, according to the 2012 scientific paper here, "Eukaryotic proteins have an average size of 472 aa [amino acids], whereas bacterial (320 aa) and archaeal (283 aa) proteins are significantly smaller (33-40% on average)." Mammals like us have eukaryotic proteins, so the average human protein has about 472 amino acids, almost twice as many as the number Gelernter cited. 

Let's do some simple math to show the difference here between the right numbers. A reasonable assumption is that every functional protein needs to have at least half of its amino acid sequence just as it is, or the molecule will not perform its function. (There are reasons for thinking that the fraction is actually much larger than 50%, given the high fragility of protein molecules, and their extreme sensitivity to small changes.)  So given that there are twenty amino acids used by living things, the probability of getting a random amino acid sequence serving the purpose of a particular protein can be very roughly estimated as 1 in 20^n, where n is half the length of a protein's amino acid sequence. If we have a protein with a sequence of 250 amino acids, this equals a probability of about 1 in 20^125, which is the same as about 1 in 10^162. But if we have a protein with a sequence of 472 amino acids, this equals a probability of roughly 1 in 20^236, which is the same as about 1 in 10^307.  
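The two probability figures above follow directly from the stated assumptions, as this short script checks (the 50% required-fraction figure is the estimate assumed in the paragraph, not an established value):

```python
import math

def log10_chance(seq_length, required_fraction=0.5, alphabet=20):
    """log10 of the rough estimate used above:
    a 1-in-alphabet^(required_fraction * seq_length) chance."""
    return required_fraction * seq_length * math.log10(alphabet)

print(f"250 aa: about 1 in 10^{log10_chance(250):.1f}")   # 1 in 10^162.6
print(f"472 aa: about 1 in 10^{log10_chance(472):.1f}")   # 1 in 10^307.0
```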

Humans have 20,000+ types of protein molecules, and the animal kingdom has many millions of types of protein molecules. But the relevant math calculations (like those above) tell us that no type of functional protein ever should have naturally originated in the history of Earth. Darwinism does not remove this problem, or even significantly reduce it. Here are two relevant quotes by scientists:

  • "A wide variety of protein structures exist in nature, however the evolutionary origins of this panoply of proteins remain unknown."  -- Four Harvard scientists, "The role of evolutionary selection in the dynamics of protein structure evolution." 
  • "Tawfik admits the issue of a first protein is 'a complete mystery' because it reveals a paradox: enzymatic function depends upon the well-defined, three-dimensional structure of a protein scaffold, yet the 3D structure is too complex, too intricate, and too coordinated to arise without simpler precursors and intermediates....Tawfik soberly recognizes the problem. The appearance of early protein families, he has remarked, is 'something like close to a miracle.'....'In fact, to our knowledge,' Tawfik and Tóth-Petróczy write, 'no macromutations ... that gave birth to novel proteins have yet been identified.' " -- Tyler Hampton, quoting Dan S. Tawfik, professor in a Department of Biological Chemistry (link). 

Problem of explaining protein complex formation. A large fraction of all types of proteins are useless unless they act as team members within teams of proteins that are called protein complexes. But scientists do not understand how protein complexes are able to form into such useful teams of proteins. The problem is not explained by DNA and its genes, which do not specify the structure or makeup of any protein complex. You may realize how huge the explanatory problem is when you study how scientists are calling many of these protein complexes "molecular machines" because they so strongly resemble something purposefully constructed. Below is one example, which includes propeller-like parts. 

protein complex

Below are some relevant quotes:

  • "The majority of cellular proteins function as subunits in larger protein complexes. However, very little is known about how protein complexes form in vivo." Duncan and Mata, "Widespread Cotranslational Formation of Protein Complexes," 2011.
  • "While the occurrence of multiprotein assemblies is ubiquitous, the understanding of pathways that dictate the formation of quaternary structure remains enigmatic." -- Two scientists (link). 
  • "A general theoretical framework to understand protein complex formation and usage is still lacking." -- Two scientists, 2019 (link). 
  • "Protein assemblies are at the basis of numerous biological machines by performing actions that none of the individual proteins would be able to do. There are thousands, perhaps millions of different types and states of proteins in a living organism, and the number of possible interactions between them is enormous...The strong synergy within the protein complex makes it irreducible to an incremental process. They are rather to be acknowledged as fine-tuned initial conditions of the constituting protein sequences. These structures are biological examples of nano-engineering that surpass anything human engineers have created. Such systems pose a serious challenge to a Darwinian account of evolution, since irreducibly complex systems have no direct series of selectable intermediates, and in addition, as we saw in Section 4.1, each module (protein) is of low probability by itself." -- Steinar Thorvaldsen and Ola Hössjer, "Using statistical methods to model the fine-tuning of molecular machines and systems," Journal of Theoretical Biology.
Problem of explaining protein folding. Proteins are almost always useless unless they have a specific three-dimensional shape.  Different types of proteins have different three-dimensional shapes. But how do such shapes arise? Scientists do not understand this. This unsolved problem is called the protein folding problem.  One attempt at solving the problem has been to advance what is called Anfinsen's Dogma, the claim that the amino acid sequence of a protein forces it to be some particular three-dimensional shape. But there has never been any good evidence to support Anfinsen's Dogma, and there are strong reasons for believing that it cannot be correct.  The case against Anfinsen's Dogma is made in two of my posts that you can read here.  It is sometimes claimed that the AlphaFold2 software did something to help solve the protein folding problem, but such claims are not correct. That software instead merely did something to help solve a different problem, one called the protein folding prediction problem.  The protein folding problem is still unsolved, and there are no good prospects of it being solved. 

biological layers

Homochirality problem.  Chemicals such as amino acids and sugars can be either left-handed or right-handed. A left-handed amino acid looks like a mirror image of the right-handed amino acid, and a right-handed sugar looks like the mirror image of the left-handed sugar. Homochirality is the fact that in living things essentially all amino acids are left-handed, and all sugars in DNA are right-handed. But when such things are synthesized in a laboratory, or produced in experiments simulating the early Earth, you see equal amounts of left-handed and right-handed amino acids and equal amounts of left-handed and right-handed sugars.

Based on the fact that it is just as easy for left-handed amino acids to form in the laboratory as right-handed amino acids, and just as easy for left-handed sugars to form in the laboratory as right-handed sugars, we would expect there to be a symmetry in the handedness of amino acids, with an equal amount of left-handed and right-handed amino acids. We would also expect a symmetry in the handedness of sugars, with equal amounts of left-handed sugars and right-handed sugars. But what we see is an asymmetry, with living things having only left-handed amino acids and right-handed sugars in DNA. This characteristic of earthly life is called homochirality. 

I can give an analogy for why homochirality is such a mystery. Let us imagine a very large box filled with 5000 cards, each displaying one of the letters in the alphabet. On one side of each card is a letter. For example:



On the back side of each card is the mirror image of the letter on the front side of the card. For example:



Now, let us suppose that someone dumped this large box of cards from the top of a tall building. Imagine that the cards fell to the ground, forming a set of useful instructions that was 5000 letters long, and that none of those letters were the mirror images of the letters in the alphabet. 

We would have two gigantic difficulties in explaining this outcome.  The first problem would be in explaining how we accidentally got a useful and intelligible set of instructions 5000 characters long.  The second problem would be in explaining how the 5000 cards all ended up showing the card side with the regular English letter, with none of them showing the mirror image of the letter on the opposite side of the card. 

The origin of life is as hard to explain as the falling-cards event just described. It would be easier to explain if scientists had an explanation for homochirality, but they do not. 
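Setting aside the problem of getting meaningful instructions at all, the handedness half of the analogy can be quantified on its own. If each of the 5000 two-sided cards independently lands letter-side or mirror-side up with equal chance, the probability that all 5000 show the non-mirrored side is 1 in 2^5000, which a one-line calculation expresses as a power of ten:

```python
import math

# Probability that all 5000 two-sided cards land non-mirrored-side up:
# (1/2)^5000, i.e. 1 in 2^5000. Express the denominator as a power of ten.
exponent = 5000 * math.log10(2)
print(f"1 in 2^5000 is about 1 in 10^{int(exponent)}")  # about 1 in 10^1505
```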

For twenty other posts on this blog on the topic of the tininess of human knowledge, use the link here, and continue to press Older Posts at the bottom right. 

Wednesday, February 19, 2025

Psychic Experiences in the News, Part 2

Here is the second in a series of videos I am making about news accounts of ESP, precognition, out-of-body experiences, prophetic dreams and other paranormal experiences. If you have any difficulty viewing this video, try the link here. 


To see another video as long as this one, with the same type of newspaper clippings, see Part 1 of this video series using the link here. 

Saturday, February 15, 2025

Why Accidents Cannot Produce Very Complex and Useful Instruction Information

Darwinist materialism is built upon the idea that accidents of nature can produce dazzling works of biological construction.  In this post I will explain a small part of the reason why this idea is irrational and utterly unbelievable. Part of the reason is that accidents cannot produce very complex instruction information. By very complex instruction information I mean the type of information you would need to construct some complex thing such as a house, a car, a cell or even a large protein molecule with a specific biological function. 

Let us start with a simple case of probability calculation. What is the chance that a random string of five English characters would produce a five-letter word in the English language? To calculate this, you need to answer two questions:

(1) How many random combinations of five English characters would result in a word in the English language?

(2) How many possible five-character strings of letters could you produce from random combinations of characters?

A Google query of "number of five letter words in English" will give you the answer to the first question. The answer is that there are roughly 100,000 to 120,000 five-letter words in the English language. 

The second question can be answered using the mathematical rule that the number of possible combinations of a sequence of characters or digits is roughly equal to the number of possible values in each position of the sequence multiplied by itself a number of times equal to the length of the sequence.  So, for example:

  • There are 10 possible digits between 0 and 9, so the total number of possible decimal digit sequences with a length of 5 is roughly equal to 10 multiplied by itself 5 times, which equals 100,000. (I say "roughly equal" because the exact count of whole numbers between 10,000 and 99,999 is 90,000.) 
  • Counting only lowercase letters and digits (a to z and 0 to 9), there are 36 possible characters that can exist in any position in a five-character sequence. The total number of possible five-character character sequences is roughly 36 multiplied by itself five times, which is 60,466,176. 
So what is the chance of a random set of five characters being a word in the English language? The answer is roughly 120,000 divided by 60,466,176, which is 0.00198. This is roughly 1 chance in 500. 
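The arithmetic above can be reproduced in a few lines, using the rough 120,000 word-count estimate from the Google query mentioned earlier:

```python
favorable = 120_000   # rough count of five-letter English words (upper estimate used above)
possible = 36 ** 5    # 36 possible characters (a-z, 0-9) in each of 5 positions

print(possible)                # 60466176
probability = favorable / possible
print(round(probability, 5))   # 0.00198
print(round(1 / probability))  # about 500 (504, to be exact, after rounding)
```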

Now, imagine we want to calculate the chance of a long series of randomly typed characters producing nothing but words in the English language. To keep things simple, we can calculate this random typing as being a series of five random characters, each followed by a space. If we want to calculate the chance of a series of x randomly typed groups of five characters all being words in the English language, we will have to multiply .00198 by itself x times. 

So, for example, the probability of typing 100 consecutive random five-letter sequences and having them all be words in the English language is roughly 1 in 500 (or .00198) to the hundredth power. You can calculate things like this using what is called a large exponents calculator. Using such a calculator, we find that 1 in 500 to the hundredth power is roughly equal to 1 in 10 to the 270th power. 


The large exponents calculator above for some reason prefers to work with integer numbers rather than decimal numbers such as .00198. But since .00198 is very close to 1 in 500, and we are only interested in getting an answer that is roughly correct, we can simply type 500 in the first input slot above, and remember that the final probability is 1 divided by the total produced. 
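The same result can be obtained without a web calculator, using logarithms:

```python
import math

# (1/500)^100 = 1 / 500^100, and log10(500^100) = 100 * log10(500),
# so the exponent below is the power of ten in "1 in 10^___".
exponent = 100 * math.log10(500)
print(f"1 in 500 to the hundredth power is about 1 in 10^{round(exponent)}")  # 10^270
```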

We see that the probability of you typing 100 random five-character sequences and having them all be words in the English language is roughly 1 in ten to the 270th power. This is a probability so low that it is prohibitive. Things this improbable would never occur in the entire history of the universe. 

But what about the probability of you randomly typing 100 random five-character sequences and producing some complex and useful instruction information such as how to build a complex and useful building or invention, or at least one of its parts? Would it be less or greater than the incredibly low probability calculated above? It would surely be very much less, because the calculation above does not even take into account the need to arrange the words in a meaningful order. It is much, much more improbable for randomly generated output to produce a useful instruction sentence such as "use a hammer and nails to hammer together all the wood two-by-fours" than it is for random words to produce a meaningless but correctly spelled sentence such as "house smart green taste works south quick." So given the previous calculations we are safe in assuming that typing 100 five-character sequences of randomly typed text will produce a useful instruction sentence with a probability very much less than 1 in ten to the 270th power. 

Now, let us consider the instruction information in biology. In biology we have a gigantic "missing specifications" problem, described in detail here. This is because, contrary to the false claims that have so often been made, nowhere in DNA or its genes has anyone discovered anything like the instructions needed to build a body or any of its organ systems or any of its organs or any of its cells. DNA and its genes do not even specify how to build any of the organelles that are the main building components of cells. But we do know that DNA does contain a huge repository of instruction information. The DNA in humans contains more than 20,000 genes. Each of those genes tells how to make a particular polypeptide chain that is the starting point for a particular protein molecule. Such a polypeptide chain is a sequence of amino acids. 

In terms of complexity and functional usefulness, there is a great deal of similarity between a gene and the 100-word instruction sequence I previously imagined. Simplifying things, I previously imagined 36 possible values at each position in the random sequences I was imagining. For a gene we have a similar situation. A gene specifies a sequence of amino acids, usually hundreds and sometimes thousands. There are twenty amino acids that are used by living things. Any position in a gene can specify any of twenty amino acids. 

So the math we have with genes is similar to the math previously imagined. I was previously imagining a sequence of 500 random characters (100 words each consisting of five random characters). The average length of a human gene is the size needed to specify about 450 amino acids. Human cells are eukaryotic cells, and the scientific paper here says, "Eukaryotic proteins have an average size of 472 aa [amino acids]." And just as the chance of you making a usable English instruction sentence from about 500 random characters is incredibly low (less than 1 chance in 10 to the 270th power), the chance of you getting a useful gene from a random sequence of nucleotide base pairs specifying a random sequence of amino acids is incredibly low. To be functional, a protein molecule half-specified by a gene requires a very special three-dimensional structure, which arises through a very hard-to-achieve effect called folding. A functional gene and a corresponding functional protein molecule require a very special arrangement of amino acids, as special as the arrangement of characters in a functional instruction sentence. 
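The sizes of the two search spaces being compared can be put side by side, using the 472-amino-acid average quoted above:

```python
import math

# 500 random characters with 36 options each, versus a 472-amino-acid
# sequence with 20 options at each position: both spaces are astronomically large.
text_space = 500 * math.log10(36)     # log10 of 36^500
protein_space = 472 * math.log10(20)  # log10 of 20^472
print(f"36^500 is about 10^{int(text_space)}")     # about 10^778
print(f"20^472 is about 10^{int(protein_space)}")  # about 10^614
```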

accidents don't engineer things

The fact that protein molecules require very rare and special sequences of amino acids is shown by how sensitive protein molecules are to small changes. Below are some relevant quotes by scientists:

  • "It seems clear that even the smallest change in the sequence of amino acids of proteins usually has a deleterious effect on the physiology and metabolism of organisms." -- Evolutionary biologist Richard Lewontin, "The triple helix : gene, organism, and environment," page 123.
  • "Proteins are so precisely built that the change of even a few atoms in one amino acid can sometimes disrupt the structure of the whole molecule so severely that all function is lost." -- Science textbook "Molecular Biology of the Cell."
  • "To quantitate protein tolerance to random change, it is vital to understand the probability that a random amino acid replacement will lead to a protein's functional inactivation. We define this probability as the 'x factor.' ...The x factor was found to be 34% ± 6%."  -- Three scientists, "Protein tolerance to random amino acid change." 
  • "Once again we see that proteins are fragile, are often only on the brink of stability." -- Columbia University scientists  Lawrence Chasin and Deborah Mowshowitz, "Introduction to Molecular and Cellular Biology," Lecture 5.
  • "We predict 27–29% of amino acid changing (nonsynonymous) mutations are neutral or nearly neutral (|s|<0.01%), 30–42% are moderately deleterious (0.01%<|s|<1%), and nearly all the remainder are highly deleterious or lethal (|s|>1%).” -- "Assessing the Evolutionary Impact of Amino Acid Mutations in the Human Genome," a scientific paper by 14 scientists. 
  • "An analysis of 8,653 proteins based on single mutations (Xavier et al., 2021) shows the following results: ~68% are destabilizing, ~24% are stabilizing, and ~8.0% are neutral mutations...while a similar analysis from the observed free-energy distribution from 328,691 out of 341,860 mutations (Tsuboyama et al., 2023)...indicates that ~71% are destabilizing, ~16% are stabilizing, and ~13% are neutral mutations, respectively." -- scientist Jorge A. Villa, "Analysis of proteins in the light of mutations," 2023.
  • "Proteins are intricate, dynamic structures, and small changes in their amino acid sequences can lead to large effects on their folding, stability and dynamics. To facilitate the further development and evaluation of methods to predict these changes, we have developed ThermoMutDB, a manually curated database containing >14,669 experimental data of thermodynamic parameters for wild type and mutant proteins... Two thirds of mutations within the database are destabilising." -- Eight scientists, "ThermoMutDB: a thermodynamic database for missense mutations," 2020. 
Genes contain very complex and useful instruction information. But getting such information by chance is very roughly as improbable as getting useful instruction information from randomly generated text. A gene tells much of what is needed to construct a particular type of complex invention: a protein molecule. A protein molecule is a very special arrangement of hundreds or thousands of amino acids, which have to be just right for that particular type of protein molecule to perform its function.  Human DNA has roughly 20,000 genes, each of which largely tells how to make a different type of complex invention in your body: a particular type of protein molecule with hundreds or thousands of well-arranged parts. The extreme sensitivity and fragility of protein molecules (discussed in the bullet list above) tells us how very special is the required arrangement that must occur in every human gene. 

The likelihood of random mutations producing a novel type of gene that could serve as instructions for how to make a new type of functional protein is roughly the same as the likelihood of 500 randomly typed characters producing a useful and very complex instruction. The chance of both of these things is so very low as to be prohibitive. The chance of some accident or series of accidents producing from scratch either a new useful type of gene or protein or a new useful 100-word instruction is basically zero, so low that we would never expect it to ever happen in the history of the universe. 

From such realities we can derive the very general principle that accidents cannot produce very complex and useful instruction information. This principle matches our intuitions. If someone ever claimed that he spilled a big box of 500 Scrabble letters, and that they fell on the floor and accidentally produced a 100-word complex instruction that was very useful, you would never believe such a tale. 

accidents don't produce inventions

 But what about all the biologists who tell us that all of the millions of types of genes and millions of types of protein molecules in the animal kingdom are the result of accidents of nature, mere random mutations? They are believing the worst type of nonsense. Believing in such a thing is as illogical as believing that all of the books in a huge public library were written by mere ink splashes, rather than the purposeful intention of authors. 

What happened was that between 1875 and 1925 Darwinism became a sacred dogma of the conformist belief communities that reside in the biology departments of universities. In the decades that followed, scientists discovered how mountainous the organization and information richness of living things is. Biologists discovered around the middle of the 20th century that humans require an information richness and level of hierarchical organization vastly beyond anything that had ever been imagined. At that time all claims of understanding the origin of species and the origin of humans should have been abandoned. 

Darwinism is like a religion

But by then biologists had already made Darwin their Jesus or Buddha, and had made Darwin's boasts of explaining biological origins a sacred dogma that was not to be questioned. So the groundless boast that biologists had explained human origins and the origin of other species continued to be taught, just like some religious dogma that continues to be taught even after facts have discredited it. The biologists made it clear they despised the fundamentalists who clung to the idea that mankind was only about 6000 years old. But by clinging to the discredited explanation boasts of Darwinism, such biologists were acting in the same way as such fundamentalists, clinging to a discredited belief tradition rather than updating their claims to fit the observed facts. 

And what if you somehow had an explanation for the accidental origin of all the genes in the human body, despite all the reasons discussed above for thinking such a thing is impossible? Then you still wouldn't have a tenth of an explanation for how human bodies arose, because DNA and its genes do not specify how to make human bodies or any organs or any cells or even any of the organelles that are the building components of such cells. And you also would not have an explanation for human minds and their capabilities, because neither genes nor brains explain such capabilities, for reasons discussed at great length in the posts on my site here. 

decrepit old theory

You might try to defeat some of the reasoning above by appealing to possibilities such as lower functional thresholds (such as rare types of protein molecules that might be functional in half form). Such attempts could easily be demolished by a discussion of facts arguing far more strongly in the opposite direction, such as the fact that most types of protein molecules produce no survival benefit or reproduction benefit by themselves, but are beneficial only when they act as team members in biological components of far greater complexity, such as protein complexes requiring many types of proteins to be useful. A proper study of functional thresholds and interdependent components always undermines the explanatory boasts of biologists rather than supporting them.