Our future, our universe, and other weighty topics

Sunday, June 17, 2018

Brains Do Not Work Harder or Look Different During Thinking or Recall

There have been many studies of how a brain looks when particular activities such as thinking or recall occur. Such studies typically attempt to find some region of the brain that shows greater activity when some mental activity occurs. No matter how slight the evidence is that some particular region is being activated more strongly, that evidence will be reported as a “neural correlate” of the activity.
But rather than focusing on a question such as “which brain region showed the most difference” during some activity, we should look at some more basic and fundamental questions. They are:

  1. Does the brain actually look different when it is being scanned while people are doing some mental activity requiring thought or memory recall?
  2. Does the brain actually become more active while people are doing some mental activity requiring thought or memory recall?

If you were to ask the average person by how much brain activity increases during some activity such as problem solving or recall, he might guess 25 or 50 percent, based on all those visuals we have seen showing brain areas “lighting up” during certain activities. But as discussed here, such visuals are misleading, using a visual exaggeration technique that is essentially “lying with colors.” The key term for getting a precise handle on how much brain activity increases is “percent signal change.” While the visual and auditory cortex regions of the brain (involved in sensory perception) may increase by much more than 1 percent, this technical document tells us that “cognitive effects give signal changes on the order of 1% (and larger in the visual and auditory cortices).” A similar generalization is made in this scientific discussion, where it says, based on previous results, that “most cognitive experiments should show maximal contrasts of about 1% (except in visual cortex).”

A PhD in neurophysiology states the following:

Those beautiful fMRI scans are misleading, however, because the stark differences they portray are, in fact, minuscule fluctuations that require complex statistical analysis in order to stand out in the pictures. To date, the consensus is that "thinking" has a very minor impact on overall brain metabolism.

You can get some exact graphs showing these signal changes by doing Google searches with phrases such as “neural correlates of thinking, percent signal change” or “neural correlates of recollection, percent signal change.” Let's look at some examples, starting with recollection or memory retrieval.
  • This brain scan study was entitled “Working Memory Retrieval: Contributions of the Left Prefrontal Cortex, the Left Posterior Parietal Cortex, and the Hippocampus.” Figure 4 and Figure 5 of the study show that none of the memory retrievals produced more than a .3 percent signal change, so they all involved signal changes of no more than about 1 part in 333.
  • In this study, brain scans were done during recognition activities, looking for signs of increased brain activity in the hippocampus, a region of the brain often described as some center of brain memory involvement. But the percent signal change is never more than .2 percent, that is, never more than 1 part in 500.
  • The paper here is entitled “Functional-anatomic correlates of remembering and knowing.” It shows a graph showing a percent signal change in the brain during memory retrieval that is no greater than .3 percent, about 1 part in 333.
  • The paper here is entitled “The neural correlates of specific versus general autobiographical memory construction and elaboration.” It shows various graphs showing a percent signal change in the brain during memory retrieval that is no greater than .07 percent, less than 1 part in 1000.
  • The paper here is entitled “Neural correlates of true memory, false memory, and deception." It shows various graphs showing a percent signal change during memory retrieval that is no greater than .4 percent, 1 part in 250.
  • This paper did a review of 12 other brain scanning studies pertaining to the neural correlates of recollection. Figure 3 of the paper shows an average signal change for different parts of the brain of only about .4 percent, 1 part in 250.
  • This paper was entitled “Neural correlates of emotional memories: a review of evidence from brain imaging studies.” We learn from Figure 2 that none of the percent signal changes were greater than .4 percent,  1 part in 250.
  • This study was entitled “Sex Differences in the Neural Correlates of Specific and General Autobiographical Memory.” Figure 2 shows that none of the differences in brain activity (for men or women) involved a percent signal change of more than .3 percent or 1 part in 333.
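The “1 part in N” figures in the list above come from a one-line conversion of the quoted percent signal changes. Here is a minimal Python sketch of that arithmetic (the percentages are the ones quoted from the studies above):

```python
def parts_in(percent_change):
    """Convert a percent signal change into an approximate '1 part in N' figure."""
    return round(100 / percent_change)

# Percent signal changes quoted in the memory-retrieval studies above
for pct in (0.3, 0.2, 0.07, 0.4):
    print(f"a {pct}% signal change is about 1 part in {parts_in(pct)}")
```

So even the largest of these changes amounts to only about 1 part in 250.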

Now let's look at brain scan studies showing brain activity during activities such as thinking, problem solving, and imagination. 

  • This brain scanning study was entitled “Neural Correlates of Human Virtue Judgment.” Figure 3 shows that none of the regions showed a percent signal change of more than 1 percent, and almost all showed a percent signal change of only about .25 percent (1 part in 400).
  • This brain scanning study examined the neural correlates of angry thinking. Table 4 shows that none of the regions studied showed a percent signal change of more than 1.31 percent.
  • This brain scanning study was entitled “Neural Activity When People Solve Verbal Problems with Insight.” Figure 2 shows that none of the problem-solving activity produced a percent signal change in the brain of more than .3 percent or about 1 part in 333.
  • This study is entitled “Aha!: The Neural Correlates of Verbal Insight Solutions.” Figure 1 shows that none of the brain regions studied had a positive percent signal change of more than .3 percent, or about 1 part in 333. Interestingly, one of the brain regions studied had a negative percent signal change of .4 percent, greater in magnitude than any of the positive percent signal changes.
  • This brain scanning paper is entitled “Neural Correlates of Evaluations of Lying and Truth-Telling in Different Social Contexts.” Figure 3 shows that none of this evaluation activity produced more than a .3 percent signal change in the brain, or about 1 part in 333.
  • This brain scanning paper is entitled "In the Zone or Zoning Out? Tracking Behavioral and Neural Fluctuations During Sustained Attention." It tracked brain activity during a mental task requiring attention. The paper's figures show various signal changes in the brain, but none greater than .09 percent, less than 1 part in 1000. 
  • This brain scanning paper is entitled "Neuronal correlates of familiarity-driven decisions in artificial grammar learning." The paper's figures show various signal changes in the brain, but none greater than 1 percent.
  • This brain scanning study is entitled, "Neural correlates of evidence accumulation in a perceptual decision task." The paper's figures show various signal changes in the brain, but none greater than .6 percent, about 1 part in 167.
  • This brain scanning study was entitled, “Neural correlates of the judgment of lying: A functional magnetic resonance imaging study.” We learn from Figure 3 that none of the judgment activity produced a percent signal change in the brain of more than .2 percent or 1 part in 500.

These studies can be summarized like this: during memory recall, thinking, and problem solving, the brain does not look any different and does not work any harder. The tiny differences that show up in these studies are so small they can be characterized as “no significant difference.” You certainly wouldn't claim that an employee was working harder on some day merely because you detected that he expended half of one percent more energy, by working two minutes longer that day. And we shouldn't say the brain is working harder merely because some part of it was detected using only half of one percent more energy, only 1 part in 200.

As for whether the brain looks different during thinking or memory recall, based on the numbers in the studies above, it would seem that someone looking at a real-time fMRI scanner would be unable to detect a change in activity when someone was thinking or recalling something. Brain scan studies have the very bad habit of giving us “lying with color” visuals that may show some region of the brain highlighted in a bright color, when it merely displayed a difference of activity of about 1 part in 200. But the brain would not look that way if you looked at a real-time fMRI scan of the brain during thinking. Instead, all of the regions would look the same color (with the exception of visual and auditory cortex regions that would show a degree of activity corresponding to how much a person was seeing or hearing). So we can say based on the numbers above that the brain does not look different when you are thinking or recalling something. 

A 1 percent difference cannot even be noticed by the human eye. If I show you two identical-looking photos of a woman, and ask you whether there is any difference, you would be very unlikely to say "yes" if there was merely a 1% difference (such as a width of 200 pixels in one photo and a width of 202 pixels in the second photo). So given the differences discussed above (all 1 percent or less, and most less than half of one percent), it is correct to say that brains do not look different when they are thinking or remembering. 

The relatively tiny variation of the brain during different cognitive activities is shown by the graph below, which helps to put things in perspective. The graphed number for the brain (.5 percent) is just barely visible on the graph.

When you run, the heart gives a very clear sign that it is involved, and a young man running very fast may have his heart rate increase by 300%. The pupil of the eye gives a very clear sign that it is involved with vision, because the pupil of the human eye changes from a size of about 1.5 millimeters to 8 millimeters depending on how much light is coming into the eye. That's more than a fivefold difference. But when you think or remember, the brain gives us no clear sign at all that it is the source of your thoughts or that memories are being stored or retrieved from it. The tiny variations that are seen in brain scans are no greater than we would expect to see from random variations in the brain's blood flow, if the brain did not produce thought and did not store memories. You could find the same level of variation if you were to do fMRI scans of the liver while someone was thinking or remembering.
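Those comparisons are simple percent-increase calculations. Here is a minimal Python sketch; the resting heart rate of 60 beats per minute rising to 240 is an illustrative assumption, and the 0.5 percent brain figure is the rough upper bound seen in the studies above:

```python
def percent_increase(before, after):
    """Percent increase from a baseline value to a new value."""
    return (after - before) / before * 100.0

heart = percent_increase(60, 240)    # sprinting heart rate (assumed 60 -> 240 bpm): +300%
pupil = percent_increase(1.5, 8.0)   # pupil diameter, 1.5 mm to 8 mm: over +400%
brain = 0.5                          # rough upper-bound fMRI signal change, in percent

print(f"heart: +{heart:.0f}%  pupil: +{pupil:.0f}%  brain: +{brain}%")
```

On this scale, the brain's response to thinking or recall is hundreds of times smaller than the body's other clear functional signs.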

Concerning glucose levels in the brain, an article in Scientific American tells us that a scientist "remains unconvinced that any one cognitive task measurably changes glucose levels in the brain or blood." According to a scientific paper, "Attempts to measure whole brain changes in blood flow and metabolism during intense mental activity have failed to demonstrate any change." Another paper states this: "Clarke and Sokoloff (1998) remarked that although '[a] common view equates concentrated mental effort with mental work...there appears to be no increased energy utilization by the brain during such processes' (p. 664)." 

The reality that the brain does not work harder and does not look different during thinking or recollection may be shocking to those who have assumed that the brain is the source of thinking and the storage place of human memories. But to those who have studied the numerous reasons for rejecting such dogmas, this reality will not be surprising at all. To a person who has studied and considered the lack of any viable theory of permanent memory storage in the brain (discussed here), and the lack of any viable theory of how a brain could instantly retrieve memories (discussed here), and the lack of any theory explaining how a brain could store abstract thoughts as neuron states,  it should not be surprising at all to learn that brains do not work harder or look different when you are thinking or recalling.  The facts discussed here conflict with the dogmas that brains generate thoughts and store memories. If the brain did such things, we would expect brains to work harder during such activities.  

We know for sure that there is a simple type of encoding that goes on in human cells: the encoding needed to implement the genetic code, so that nucleotide triplets in DNA can be translated into the amino acids those triplets represent. To accomplish this very simple encoding, the human genome has 620 genes for transfer RNA. But imagine if human brains were to actually encode human experiential and conceptual memories, so that such things were stored in brains. This would be a miracle of encoding many, many times more complicated than the simple encoding that the genetic code involves. Such an encoding would require thousands of dedicated genes in the human genome. But the human genome has been thoroughly mapped, and no such genes have been found. This is an extremely powerful reason for rejecting the dogma that brains store human experiential and conceptual memories.

A good rule to follow is a principle I call "Nobel's Razor." It is the principle that you should believe in scientific work that has won Nobel prizes, but often be skeptical of claims by scientists that do not correspond to Nobel prize wins. No scientist has ever won a Nobel prize for any work involving memory, or any work backing up the claim that brains generate thoughts or store memories.

Postscript: If you do a Google search for "genes for memory encoding," you will see basically no sign that any such things have been discovered, other than one of those hyped-up press stories with an inaccurate headline of "100 Genes Linked to Memory Encoding." The story is referring to a scientific study described in this paper, entitled "Human Genomic Signatures of Brain Oscillations During Memory Encoding."

The dubious methodology of the authors was to get data on gene expression, and try to see how much it correlated with oscillations in brain waves. Out of more than 10,000 genes studied, the authors found about 100 which they claim were correlated with these brain wave oscillations. They state, "We were successful in identifying over 100 correlated genes and the genes identified here are among the first genes to be linked to memory encoding in humans." But the correlations reported are weak, with most of these 100 genes correlating no more strongly than .1 or .2 (by comparison, a perfect correlation is 1.0, and a fairly strong correlation is .5). The results reported do not seem any stronger than you would expect to get by chance. Even if there were no causal relation between gene expression and brain waves, comparing gene expression in more than 10,000 genes to brain waves could, purely by chance, make a tiny fraction such as 1% of the genes look weakly correlated with brain waves (or any other random data, such as stock market ups and downs).

In one of the spreadsheet tables you can download from the study, there is a function listed for each of these roughly 100 genes, and in each case it's a function other than memory encoding. So such genes cannot be any of the thousands of dedicated genes that would have to exist purely for the sake of translating complex conceptual, verbal, and episodic memories into neural states, and vice versa, if a brain stored memories. No such genes have been identified in the genome.
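The "expected by chance" point can be checked with a quick Monte Carlo sketch. The toy model below (my own illustration, not the study's data or method) generates 10,000 random "gene expression" series, assumed here to have 100 measurements each, and counts how many correlate with a random "brain wave" series at |r| > 0.25 purely by chance:

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
n_genes, n_samples = 10_000, 100  # assumed sizes, for illustration only
brain_waves = [random.gauss(0, 1) for _ in range(n_samples)]  # pure noise

count = 0
for _gene in range(n_genes):
    expression = [random.gauss(0, 1) for _ in range(n_samples)]  # pure noise
    if abs(pearson_r(expression, brain_waves)) > 0.25:
        count += 1

print(f"{count} of {n_genes} random 'genes' correlate at |r| > 0.25 by chance alone")
```

Roughly 1 percent of the purely random series clear that bar, which is about the scale of the study's reported result.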

Wednesday, June 13, 2018

Real Physics Versus String Theory's Chimerical “Landscape”

At Quanta magazine, there is an article written by string theorist Robbert Dijkgraaf. It's a batty-sounding piece entitled “There Are No Laws of Physics. There's Only the Landscape.” To explain the goofy reasoning behind this piece, I have to give a little background on the history of string theory. String theory started out as a speculative exercise to try to unify two types of physics theories that seem in conflict with one another: general relativity (which deals with large-scale phenomena such as solar systems and galaxies) and quantum mechanics (which deals with the subatomic world). The hope of string theorists was that they could find one set of equations that would uniquely describe a reality in which general relativity and quantum mechanics would exist harmoniously. To try to reach this goal, the string theorists had to engage in all kinds of imaginative flights of fancy, such as imagining extra dimensions beyond the ones we observe.

Eventually it was found that it was not at all true that string theory predicted a reality only like the one we have. It was found that there could be something like 10^500 possible universes in which some version of string theory could be true, each with a different set of characteristics. Did the string theorists then give up? No, they invented the name “the landscape” to describe this imaginary set of possible universes. They then started speaking as if speculating about this “landscape” is some proper business of physicists.

This makes no sense. It is the business of science to study reality, not imaginary possibilities. The only case in which it makes sense to study an imaginary universe would be to throw some light upon our own universe. For example, I might study a hypothetical universe in which the proton charge is not the very exact opposite of the electron charge, as it is in our universe. This might shed some light on how fine-tuned our universe is. But except for rare cases like these, it is a complete waste of time to speculate about imaginary hypothetical universes.

String theorists like Dijkgraaf use various verbal tricks to fool us into thinking that they are doing something more scientific than Tolkien spinning stories about the imaginary landscapes of Middle Earth. One trick they use is to call their imaginary universes “models.” Such a word is inappropriate when talking about any speculation about a hypothetical universe different from the one we observe, such as a universe with different laws or different fundamental constants. In science a model is a simplified representation of a known physical reality. So, for example, the Bohr solar system model of the atom is an example of a model. But you are not creating a model when you imagine some unobserved universe with different laws. That's just imaginative speculation, not model building.

Another trick used by Dijkgraaf is to try to make it sound like the weird speculations of string theory have become accepted by most physicists. For example, he writes the following:

The current point of view can be seen as the polar opposite of Einstein’s dream of a unique cosmos. Modern physicists embrace the vast space of possibilities and try to understand its overarching logic and interconnectedness. From gold diggers they have turned into geographers and geologists, mapping the landscape in detail and studying the forces that have shaped it. The game changer that led to this switch of perspective has been string theory.

With this kind of talk you would think that string theory has taken over physics, wouldn't you? But that didn't happen. During the late 1980s there was talk about how string theory was going to take over physics. But as we can see in the diagram below (from a physics workshop), the popularity of string theory has plunged since about 1990. String theory is now like a weird little cult in the world of theoretical physicists, not at all something which most physicists endorse.
physics papers
Another verbal trick used by Dijkgraaf is to use metaphors that might make us think that string theory is talking about something more substantial than speculations about where angels fly about or how long ghosts haunt a house. So in the quote above he compares string theorists to geographers and geologists and gold diggers and mappers, who are all hard-headed down-to-earth people who deal with solid physical reality. But string theorists are not at all like such people, since string theorists deal so much with imaginary universes for which there is no evidence.

Dijkgraaf also tries to insinuate that the so-called models of string theory (a plethora of imaginary universes) are “results of modern quantum physics.” He states the following:

First of all, the conclusion that many, if not all, models are part of one huge interconnected space is among the most astonishing results of modern quantum physics. It is a change of perspective worthy of the term “paradigm shift.”

A result in physics is something established by observation or experiments. Quantum mechanics has results, but string theory has no results. It has merely ornate speculations. You don't get a paradigm shift by speculating about the unobserved.

What Dijkgraaf conveniently fails to tell us is that string theory has been a complete bust on the experimental and observational side. String theory is based on another theory called supersymmetry. Attempts to find the particles predicted by supersymmetry have repeatedly failed. There is no evidence for any version of string theory.

As for Dijkgraaf's claim that “there are no laws of physics, there's only the landscape,” this seems to be a bad case of confusing reality and the imaginary. The “landscape” of string theory is imaginary, but the laws of physics are realities that our existence depends on every minute. For example, if there were no laws of electromagnetism, none of us would last for even 30 seconds. Claims like “there are no laws of physics” by a string theorist suggest that string theory is just an aberrant set of science-flavored speculations out-of-touch with reality. You might call it runaway tribal folklore, the tribe in question being a small subset of the physicist community. And when a string theorist speaks about an imaginary group of possible universes (what string theorists call the landscape), and says that scientists are “mapping the landscape in detail and studying the forces that have shaped it,” as if the imaginary “landscape” were real, it again seems to be a case of confusing the real and the imaginary. Similarly, a Skyrim player might get so lost in the video game's fantasy that he might say, “There's no America, there's only Tamriel.”

Given such talk from string theorists, it's hardly a surprise that nbcnews.com has a story entitled “Why some scientists say physics has gone off the rails.” In that story the cosmologist Neil Turok is quoted:

"All of the theoretical work that's been done since the 1970s has not produced a single successful prediction," says Neil Turok, director of the Perimeter Institute for Theoretical Physics in Waterloo, Canada. "That's a very shocking state of affairs."

Saturday, June 9, 2018

No, Building Blocks of Life Were Not Found on Mars

Thursday was a banner day for hype and misinformation in the science world. NASA had a press conference announcing some findings regarding Mars. They announced that some organic molecules had been found on Mars, but only simple molecules that existed in a very low concentration of a few parts per 10 million. They also released a paper regarding methane readings. On the National Geographic web site, the headline was “Building Blocks of Life Found on Mars.” This headline was false, as was the claim of “landmark discoveries.”

The building blocks of life are proteins and nucleic acids, both of which are extremely complex molecules. Given just the right arrangement of a large number of proteins and nucleic acids, you might have a cell capable of self-reproduction.  But an organic molecule is simply a molecule containing carbon, one that may either be very simple or one that may be complex. The very term “organic molecule” is a poor one, because many of the so-called organic molecules have nothing to do with life. 

It is true that proteins and nucleic acids are organic molecules, but that doesn't mean you have found anything like a building block of life merely because you have found an organic molecule. The building blocks of an opera company are string musicians such as violinists, and singers such as tenors, sopranos, and baritones who can sing in Italian. All of these are organisms. But it would make no sense to say, “I have some building blocks of an opera company because I have two mice in my cage, and they are organisms.” It makes equally little sense to say that you have some building blocks of life merely because you have simple organic molecules.

But if the organic molecules found on Mars are not the building blocks of life, are they at least the building blocks of the building blocks of life? No, they are no such thing. The building blocks of proteins are amino acids. The building blocks of nucleic acids are chemicals called purines and pyrimidines. None of these has been found on Mars. So not only have we not found the building blocks of life, we haven't even found on Mars the building blocks of the building blocks of life.

A story of the announcement on the Popular Mechanics site told us that “Curiosity just found an abundance of organic compounds on Mars.” That's not correct, because the actual level of organic molecules was only “a few dozen parts per 100 million,” or roughly 2 or 3 parts in 10 million. By comparison, Earth soil is about 5 percent organic compounds. Science magazine engaged in equally misleading reporting, telling us that “In its quest to find molecules that could point to life on Mars, NASA's Curiosity rover has struck a gusher.” Finding a few parts in 10 million is hardly hitting a gusher. Science magazine also gave us the “building blocks of life” bunk, untrue for the reason I just explained.

One scientist named Inge declared that Thursday's announcements were “breakthroughs in astrobiology,” which is nonsensical. You would only have a breakthrough in astrobiology if life were to be discovered.

As for the scientific paper regarding methane, it's pretty much a yawner. Some scientists have found methane on Mars in an extremely low concentration, only about 1 part per billion. The scientists claim that there is a seasonal variation, but since it's only a tiny variation, it could easily be a random variation, or something due to variability in instrument readings. The scientists have only three years of data, which is not enough for one to have any confidence about seasonal variation. Similarly, if you plot the ups and downs of the stock market for a small number of years such as three, you will have maybe a 10% or 20% chance of picking up a “seasonal variation” which is a pure chance variation, not a real seasonal effect.
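The chance of a spurious "seasonal" pattern can be illustrated with a toy simulation. The sketch below uses one crude criterion (the same season produces the peak reading in all three years of purely random data); looser criteria, such as the peak season repeating in only two of three years, are satisfied by chance far more often:

```python
import random

random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    # For each of 3 years, draw 4 random seasonal readings and note the peak season
    peaks = []
    for _year in range(3):
        readings = [random.gauss(0.0, 1.0) for _ in range(4)]
        peaks.append(readings.index(max(readings)))
    if peaks[0] == peaks[1] == peaks[2]:
        hits += 1

rate = hits / trials
print(f"chance of the same 'peak season' in all 3 years of random data: {rate:.1%}")
# Analytically this is 4 * (1/4)**3 = 6.25%
```

So even under this strict criterion, random noise produces a perfectly repeating peak season about 6% of the time with only three years of data.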

The authors of the methane paper claim to have picked up evidence of a “strong seasonal variation.” But the graphs of their paper don't seem to show that. Below is Figure 2 from the paper, which is the same one reproduced in the nbcnews.com story. We see readings from three years, and one of those years has only one reading. In red we see the one reading from Mars Year 34 (MY34). In blue we see some readings from Mars Year 33, and in yellow we see readings from Mars Year 32. No clear seasonal trend is shown. The Mars Year 32 readings tell you that methane is strongest in winter and summer. The Mars Year 33 readings tell you that the methane is strongest in summer and autumn. There is too little data to draw any conclusions about a seasonal effect, and any slim suggestion of a seasonal effect could easily be a random variation. You would need four or five years of readings before you could talk with any confidence about any seasonal effect.

Mars graph

Methane on Mars could easily be caused by geological processes having nothing to do with life. 

The nbcnews.com story on Thursday's findings says, “Years ago, the Curiosity rover found evidence that liquid water and the chemical ingredients for microbial life once existed on Mars.” This quote included a hyperlink to a NASA story, but that story merely announced that elements such as carbon, nitrogen, and oxygen had been found on Mars. More misinformation, since “chemical ingredients” implies something much more complicated than elements. Using similar talk, you might claim that the sweet grandmother you live next to is almost like a drug dealer, because she has the “chemical ingredients” in her backyard to make heroin and crack cocaine. But so does everyone else with a backyard, if you consider elements as “chemical ingredients.” 

Has anyone found anywhere in the solar system outside of Earth the building blocks of life? No, because neither proteins nor nucleic acids have been found outside of Earth. But what about the building blocks of the building blocks of life – have they ever been found outside of Earth? There's only one such building block of a building block of life that has been found: glycine has been found in a comet. Glycine is the simplest amino acid. None of the other more complicated amino acids has been found in space. So we've found in space none of the building blocks of life, and have merely found one of the building blocks of the building blocks of life, on a comet rather than Mars.

Let us imagine a woman named Jane who has dated for five years a man named Walter. Jane wants to believe that Walter is a millionaire, but the problem is that she has seen no sign in the past five years that Walter has any money whatsoever. Imagine that one day Jane sticks her arms down between the cracks in Walter's sofa, and finds some pennies; and she then says, “I'm ecstatic – this is money, so he might be a millionaire.” This week our science journalists were like Jane, acting all excited because after years of study Mars has finally coughed up faint traces of biologically irrelevant carbon compounds that are neither building blocks of life nor the building blocks of the building blocks of life. 

Postscript: We have further evidence of Mars bunk in this article published on the web site of Air and Space Magazine. It has the phony-baloney title "Fingerprints of Martian Life." Written by professor Dirk Schulze-Makuch, the article claims that "large and complex organic molecules" were discovered. No such molecules were found. The scientific paper mentions thiophenes, which have a mere 9 atoms, and aliphatic compounds, which have a mere 12 or 14 atoms. The average protein molecule in a human has more than 1000 atoms. The article tells us that "proteins or nucleic acids (such as DNA) are the building blocks of life," and gives some devious wording designed to make you wonder whether such things were found on Mars. They certainly were not.

Tuesday, June 5, 2018

"Most Species Young" Study Makes Biologists Tear Their Hair Out

After the release of a startling new scientific study, it seems more appropriate than ever to ask a question there was already reason for asking. The question is: have prevailing explanations in biology flunked the genome tests?

The new study was published in the scientific journal Human Evolution. According to a news report on it, “The study's most startling result, perhaps, is that nine out of 10 species on Earth today, including humans, came into being 100,000 to 200,000 years ago.” The report quotes one of the authors as saying, “This conclusion is very surprising, and I fought against it as hard as I could.” Given an age of more than four billion years for our planet, you can describe the paper as a "most species young" study, since 200,000 years is less than a ten thousandth of the Earth's age.

The press story on the study does not tell us that this study's findings are inconsistent with the claims of orthodox Darwinism, other than to give us a section heading of “Darwin perplexed.” But it is rather clear why such a study clashes with Darwinist orthodoxy.

Darwinism has always maintained that species appear very gradually, because of an accumulation of random mutations that occur over vast periods of time. Almost all random mutations have no effect or a harmful effect. A beneficial random mutation must be extremely rare. Useful biological innovations would actually require combinations of random mutations having a coordinated beneficial effect. Such things should be as rare as monkeys typing out useful software subroutines by randomly striking the keys of a keyboard. But we have been told that such incredibly improbable combinations might occur given vast eons of time.

Given this situation, we can say that the plausibility of Darwinism as an explanation of biological innovations is inversely proportional to the speed at which such innovations occur. A Darwinist trying to explain some large set of biological innovations occurring within a span of a million years must make assertions 100 times less plausible than the same person trying to explain these innovations occurring over a time span of 100 million years. This is why the Cambrian Explosion has always been a thorn in the side of evolutionary biologists. During the Cambrian Explosion, most of the animal phyla appeared within a relatively short period of time, only about 5 or 10 million years. It has seemed to many that such an “organization explosion” could not have occurred so quickly under Darwinian assumptions.
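The arithmetic behind this inverse proportionality is simple: if an innovation has some tiny probability p of arising in any one generation, the chance of it arising at least once in n generations is roughly n times p, so cutting the available time by a factor of 100 cuts the probability by roughly a factor of 100. A small illustrative calculation (the per-generation probability used here is a pure placeholder, not a measured figure):

```java
public class TimeScaling {
    // Probability of at least one success in n independent trials,
    // each with tiny success probability p: 1 - (1-p)^n, which for
    // small p and moderate n*p is approximately n*p.
    static double pAtLeastOne(double p, long n) {
        return 1.0 - Math.pow(1.0 - p, n);
    }

    public static void main(String[] args) {
        double p = 1e-12; // placeholder per-generation chance of the innovation
        // 100 million generations versus 1 million generations:
        double longSpan  = pAtLeastOne(p, 100_000_000L);
        double shortSpan = pAtLeastOne(p, 1_000_000L);
        System.out.println("100M generations: " + longSpan);
        System.out.println("1M generations:   " + shortSpan);
        System.out.println("ratio: " + (longSpan / shortSpan)); // close to 100
    }
}
```

The ratio comes out very near 100, matching the claim that compressing the timescale a hundredfold makes the required luck a hundredfold less plausible.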

If the new study is correct, we would have to assume that there was some burst of biological innovation causing 90% of the world's species to originate in the past 200,000 years. Such an “organization explosion” would be as hard to explain as the Cambrian Explosion. In terms of body plans, it would involve a smaller amount of innovation, but the time period would be much shorter than that of the Cambrian Explosion.

The news report on the article attempts clumsily to suggest some things that might explain this explosion of innovation. The report states the following:

Which brings us back to our question: why did the overwhelming majority of species in existence today emerge at about the same time? Environmental trauma is one possibility, explained Jesse Ausubel, director of the Program for the Human Environment at The Rockefeller University. "Viruses, ice ages, successful new competitors, loss of prey—all these may cause periods when the population of an animal drops sharply," he told AFP, commenting on the study. "In these periods, it is easier for a genetic innovation to sweep the population and contribute to the emergence of a new species."

The last statement is literally true, but it is very prone to give the hearer a wrong idea. The chance of a genetic innovation occurring by random mutations is not at all affected by any of the things mentioned. But if by some miracle of luck some useful genetic innovation had occurred, it might be more likely to survive if, for example, there were fewer competitors. Similarly, the chance of trees falling in a forest and forming by chance into a log cabin is not at all affected by whether there are earthquakes or forest fires in the forest; but if such a miracle of luck were to occur, the chance of the randomly formed log cabin surviving might well depend on the rate of earthquakes or forest fires. There are, in fact, no environmental conditions that would ever make it more likely that random mutations would be able to produce a burst of biological innovation.

So the finding of the study (that 90% of current species appeared in the past 200,000 years) is in conflict with the claims of Darwinian orthodoxy. As I stated before, the plausibility of Darwinian explanations is inversely proportional to the speed at which biological innovations occur. The more biological innovation occurring in a relatively short time span, the less plausible Darwinism is. A phys.org article discussing a previous study has the headline “Not so fast: researchers find that lasting evolutionary change takes about one million years.” So how could so many species have originated in less than 200,000 years?

The fact that incredibly improbable innovations by random mutations do not become more probable after a mass extinction event is one difficulty. Another difficulty is that we know of no mass extinction event around 200,000 years ago. Geologists do not claim that the earth's environment suddenly changed around that time. The event that supposedly wiped out the dinosaurs occurred millions of years earlier. A look at a graph of temperature changes in the past million years will show nothing special between 50,000 BC and 300,000 BC. 

One of the study authors attempts to smooth things over so that readers are not too shocked by the findings of his study. The news story states the following:

"The simplest interpretation is that life is always evolving," said Stoeckle. "It is more likely that—at all times in evolution—the animals alive at that point arose relatively recently."

But that idea does not work, for it requires us to believe in biological innovation occurring at a rate gigantically more rapid than Darwinism can account for. Such an idea is also inconsistent with the fossil record, which does not show new species appearing at even a tenth of so rapid a rate (except during the Cambrian Explosion).

The news story ends by mentioning another anomaly found by the study:

And yet—another unexpected finding from the study—species have very clear genetic boundaries, and there's nothing much in between. "If individuals are stars, then species are galaxies," said Thaler. "They are compact clusters in the vastness of empty sequence space." The absence of "in-between" species is something that also perplexed Darwin, he said.

The problem in question is a gigantic one for orthodox Darwinian explanations. I have various Java programs that simulate something like random evolution. In one of these programs I start out with a group of simulated organisms, each consisting of a “DNA string” of 100 mainly random characters. The program makes random mutations on this “DNA,” and checks for the appearance of useful features. So, for example, if the DNA string contains the letters “two eyes” or “two ears” or “two legs” or “two arms” or “two lungs,” then the organism with such a DNA becomes more likely to reproduce (and the more such useful features, the more likely the simulated organism is to reproduce). Under program conditions similar to those that might occur in the natural world, the program might run for 2000 simulated generations without any useful innovations appearing. But suppose I modify the program to make it much easier for biological innovations to occur, making things less realistic. And suppose I start out with simulated organisms that have a few useful features. What I then find is that a great deal of what we may call species fragmentation occurs.

So, for example, imagine I start out with a population of 10,000 simulated organisms that each have the useful features “eyes” and “legs” and no others. If I then run 2000 simulated generations, and “load the dice” so that biological innovations can occur far more easily, I will not end up with something like a new species with eyes, legs, and one or two other features. Instead the final population will be rather “all over the map.” Maybe 1000 simulated organisms will have eyes and legs, another 1000 will have only eyes, another 1000 will have only legs, another 1000 will have eyes and ears but no legs, another 1000 will have only legs and arms but no eyes or ears, and so forth. This type of “species fragmentation” is exactly what should occur under random evolution whenever innovation can occur relatively rapidly. But that is not what we see in nature. Instead, there are very few or no “in-between” species; as the study notes, species are like galaxies in space, with the organisms of a species like the stars of a galaxy, and no stars between the galaxies.
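The core mechanics of such a simulation can be sketched in a few dozen lines of Java. This is a stripped-down illustration, not my actual program: the character set, mutation rate, population cap, and selection rule below are all simplified placeholders, and the run is scaled down so it finishes quickly.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class FragmentationSketch {
    // Substrings that count as "useful features" in a DNA string.
    static final String[] FEATURES =
        {"two eyes", "two ears", "two legs", "two arms", "two lungs"};
    static final String ALPHABET = "abcdefghijklmnopqrstuvwxyz ";
    static final Random RNG = new Random(1);

    // One random point mutation: replace one character at a random position.
    static String mutate(String dna) {
        int i = RNG.nextInt(dna.length());
        char c = ALPHABET.charAt(RNG.nextInt(ALPHABET.length()));
        return dna.substring(0, i) + c + dna.substring(i + 1);
    }

    // Count how many useful features the DNA string contains.
    static int featureCount(String dna) {
        int n = 0;
        for (String f : FEATURES) if (dna.contains(f)) n++;
        return n;
    }

    // One generation: more features means more offspring (placeholder rule),
    // with the population capped at a fixed size.
    static List<String> step(List<String> pop, int cap) {
        List<String> next = new ArrayList<>();
        for (String dna : pop) {
            int copies = 1 + featureCount(dna);
            for (int k = 0; k < copies && next.size() < cap; k++) {
                next.add(mutate(dna));
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // Start every organism with two features, as in the example above.
        String seed = "two eyes two legs " + "x".repeat(82); // 100 characters
        List<String> pop = new ArrayList<>();
        for (int i = 0; i < 1000; i++) pop.add(seed);
        for (int g = 0; g < 200; g++) pop = step(pop, 1000);

        // Tally how many distinct feature combinations the population spans.
        long distinct = pop.stream().map(d -> {
            StringBuilder key = new StringBuilder();
            for (String f : FEATURES) key.append(d.contains(f) ? '1' : '0');
            return key.toString();
        }).distinct().count();
        System.out.println("distinct feature combinations: " + distinct);
    }
}
```

Counting the distinct feature combinations at the end is what reveals fragmentation: a population spread across many combinations, rather than converging on one enriched "species."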

Then there was another way in which the recent study suggested our biology experts are not on the right track. The news report states the following:

It is textbook biology, for example, that species with large, far-flung populations—think ants, rats, humans—will become more genetically diverse over time. But is that true? "The answer is no," said Stoeckle, lead author of the study, published in the journal Human Evolution. For the planet's 7.6 billion people, 500 million house sparrows, or 100,000 sandpipers, genetic diversity "is about the same," he told AFP.

But how can this be if genetic diversity is caused by random mutations, as Darwinism claims? Random mutations occur at a roughly fixed rate per organism per generation, so the total number of new mutations scales with population size. If genetic diversity were really caused by random mutations, we should inevitably expect a species with billions of organisms to have many times more genetic diversity than a species with a small population. As this paper based on Darwinian principles states, “Genetic theory predicts that levels of genetic variation should increase with effective population size.” But according to the new Human Evolution study, that's not true. Instead, we have an anomaly that has been called Lewontin's paradox.
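The textbook prediction can be made concrete. Under the standard neutral model of population genetics, expected heterozygosity is H = θ/(1+θ), with θ = 4Nμ, so diversity should climb steeply with population size. The sketch below naively plugs census counts and a commonly cited per-site mutation rate into that formula; real analyses use effective population size rather than census size, but the qualitative prediction is the same:

```java
public class DiversitySketch {
    // Expected heterozygosity under the standard neutral model:
    // H = theta / (1 + theta), where theta = 4 * N * mu.
    static double expectedHeterozygosity(double populationSize, double mutationRate) {
        double theta = 4.0 * populationSize * mutationRate;
        return theta / (1.0 + theta);
    }

    public static void main(String[] args) {
        double mu = 1e-8; // commonly cited per-site mutation rate per generation
        // Sandpiper-scale population versus human-scale population:
        System.out.println("N = 100,000:       "
            + expectedHeterozygosity(1e5, mu));
        System.out.println("N = 7.6 billion:   "
            + expectedHeterozygosity(7.6e9, mu));
    }
}
```

The formula predicts diversity differing by a factor of hundreds between a population of 100,000 and one of billions; the study's finding that diversity "is about the same" across such species is exactly the anomaly Lewontin's paradox names.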

So for these reasons the new Human Evolution study is a kind of trifecta of aggravation for the mainstream biologist, something that may make such a person tear his or her hair out. 

But at the Collective Evolution site, a writer is happy about the study, suggesting it may support the idea of extraterrestrial involvement with evolution. 

The recent study is not the first genetic study to confound Darwinian predictions. An interesting series of studies has attempted to look for evidence of what are called “classic sweeps” in the genomes of human DNA. A classic sweep is what would occur if some useful new feature were to occur because of one or more random mutations in the DNA of one organism of a population, with the feature becoming more and more common in the population, because of some benefit it provided that increased the likelihood of survival and reproduction. When the “classic sweep” has finished, the entire population has the beneficial feature. It has long been an assumption of orthodox Darwinists that most biological innovations appear through such “classic sweeps,” also called “classic selective sweeps.” But a 2011 study in the journal Science had the title “Classic Selective Sweeps Were Rare in Recent Human Evolution.” By “recent human evolution” the study meant the past 250,000 years.

Such a result is very much at odds with the predictions of Darwinism. For an orthodox Darwinist, the finding that there were very few classic selective sweeps in humans during the past 250,000 years is news as bad as it would be for a UFO or SETI enthusiast to find that Earth-sized planets are rare in the habitable zones of other stars.

A more recent scientific study in 2014 found there was virtually no sign of adaptive evolution in the human genome. The paper published in a mainstream science journal looked for traces of natural selection by looking for something called “fixed adaptive substitutions” in human DNA. The paper stated, “Our overall estimate of the fraction of fixed adaptive substitutions (α) in the human lineage is very low, approximately 0.2%, which is consistent with previous studies.”  It's hard to imagine a bigger fail or flop for Darwinian explanations. If such explanations were correct, we would have expected to find such signs of adaptive evolution in a large fraction of the human genome, not a fifth of one percent. 

Talking about biological origins, our braggart biologists have long been telling us “we got this,” but they would be using more truthful slang if they were to lose their hubris and say, “Basically we don't know jack about how complex biology originates.” The knowledge of man is tiny and fragmentary, and the mysteries of nature are many and mountainous.