
Our future, our universe, and other weighty topics


Friday, August 30, 2019

Ignoring the Red Flags: Madoff and Materialism

Around the year 2005 it seemed that one of the most respectable securities firms around was Bernard L. Madoff Investment Securities LLC. It was run by Bernie Madoff, who at that time seemed to have the finest credentials. He was the former chairman of NASDAQ, one of America's largest securities trading establishments. Around 2005 the word among the upper-class elite was that many smart, upper-crust people were investing their money with Madoff's company. His investors “included banks, hedge funds, charities, universities, and wealthy individuals who have disclosed about $41 billion invested with Bernard L. Madoff Investment Securities LLC,” according to a wikipedia.org page. According to the same page, famous people investing with Madoff included “Steven Spielberg, Jeffrey Katzenberg, actors Kevin Bacon, Kyra Sedgwick, John Malkovich... Zsa Zsa Gabor, Mortimer Zuckerman, Baseball Hall of Fame pitcher Sandy Koufax, the Wilpon family (owners of the New York Mets), broadcaster Larry King and World Trade Center developer Larry Silverstein.” For years, it seemed like you could make more money investing with Madoff's firm than with any other securities firm.

But in 2008 it was discovered that Bernie Madoff's company was guilty of a massive 64-billion-dollar fraud. Madoff was running a gigantic Ponzi scheme, a scheme in which investments can seem to do well as long as the investors don't all ask for their money back; when they do, they find they have actually suffered massive losses. It was the largest Ponzi scheme in history. Madoff pleaded guilty, and was sentenced to 150 years in prison. Investors lost tens of billions of dollars because of the fraud.

Long before the fraud became public, there were many red flags regarding Madoff's company: warning signs that people should have paid attention to, but ignored. In the year 2000 a financial analyst named Harry Markopolos had done research on Madoff's company, and concluded “that Madoff's numbers didn't add up.” He told the Securities and Exchange Commission that it was mathematically impossible for Madoff's firm to be getting the returns it claimed using the strategies it claimed to use. But the SEC ignored his warnings, including a 2005 17-page memo which listed 29 red flags or warning signs, and which had the title “The World's Largest Hedge Fund is a Fraud.” Markopolos approached the Wall Street Journal in 2005 with his claims that Madoff's company was a fraud, but the Wall Street Journal ignored him.

It seemed that the “social proof” involving Madoff's company was so great that people ignored the red flags. Social proof is when people regard something as well-established or proven or sensible based on the number of people who support it, the depth of their commitment, and the prestige of the most prestigious supporters. Given all of the wealthy, famous investors who had given so many billions to Bernie Madoff, his company had tons of “social proof.” But its financial claims were bunk. When investors got quarterly statements listing their investment returns, they were being told outrageous falsehoods.

Now let us consider modern-day materialism, the belief system that reigns in today's universities. Materialism consists of unproven tenets such as the claim that life and biological species arose accidentally, the belief that there are no souls or spirits, and the belief that human mental phenomena (such as consciousness and memory) can be explained entirely by the brain. We may ask: is modern-day materialism its own system of institutional falsehood, rather similar to the Madoff scheme, in which people are year after year fed claims that cannot be true because the math just doesn't add up? Does materialism have its own red flags just as serious as those surrounding the Madoff company, red flags that are being ignored because of an abundance of “social proof” leading people to think, “It's too big to be wrong?”

Let us consider five cases in which the math just doesn't work for materialism (which includes the idea that mental activity such as recollection is caused purely by brain activity, and that life and humans arose purely because of accidental natural processes).

  1. Materialists claim that life originated from non-life, when there was some lucky combination of non-living chemicals that caused something living to appear. Scientific research has shown that even the simplest living microorganisms require more than 100 proteins. But a protein is a combination of hundreds of amino acids arranged in just the right way to achieve a functional effect. It has been estimated that the probability of even a single complex functional protein appearing by chance (from a random combination of its amino acids) is less than 1 in 10 to the seventieth power -- more unlikely than you correctly guessing the 9-digit Social Security numbers of seven consecutive strangers you met. It seems, therefore, that if mere chance is involved, we would not expect life to naturally appear from chance combinations of chemicals at any time in the history of the universe.
  2. Materialists claim that mankind originated from random mutations and natural selection. We know that there have been about 100 billion humans who have lived since 8000 BC. The total number of humans or human ancestors who lived between 300,000 BC and 8000 BC is less than 3 billion, and probably less than 1 billion. There has been little impressive human evolution during the 100 billion lifetimes since 8000 BC, with the biggest examples being only minor things like lactose tolerance or better breathing for some humans living at high altitudes. But according to materialist accounts, between 300,000 BC and 8000 BC there was an enormous amount of evolution that led to the origin of speaking humans and humans capable of math, art, architecture, agriculture, science and philosophy. This requires us to believe that during fewer than 3 billion lives there occurred vastly more helpful evolution than occurred during 100 billion lives. But the mathematics of random mutations make such a thing exceptionally unlikely – as unlikely as a country getting 30 tornadoes in one year, and then only 1 tornado in the next 60 years. Calculations of the "waiting time" needed for even a tiny fraction of the random mutations required to account for the hominid changes presumed to have occurred between 300,000 BC and 8000 BC suggest that such mutations would actually have required many millions of years -- billions of years, in fact (to use the figure in this table for a string of only 5 nucleotides).
  3. Materialists claim that memories are stored in the human brain, and materialist neuroscientists typically tell us that memories are stored in tiny brain components called synapses. But we know that these synapses are made up of proteins that have short lifetimes. The average lifetime of a synapse protein is only about two weeks. But old people can accurately remember experiences they had more than 50 years ago, or things they learned more than 50 years ago. This length of time (50 years) is some 1000 times longer than the two weeks that is the average lifetime of synapse proteins. So it seems that if memories are stored in the brain, they should not even last for  a thousandth of the maximum length of time that humans can remember things.
  4. Humans such as actors who play Hamlet or Muslims who memorize all of their holy book can remember vast amounts of information with complete accuracy. But we know that synapses and neurons are very noisy. A scientific paper states, "Several recent studies have documented the unreliability of central nervous system synapses: typically, a postsynaptic response is produced less than half of the time when a presynaptic nerve impulse arrives at a synapse." Another scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower."  So synapses in the cortex often transmit signals with a likelihood of between 10% and 50%. If we estimate how well humans should be able to remember, based on the amount of noise in neurons and synapses and the fact that a brain signal must travel through multiple synapses, we must conclude that if memories are stored in brains, we should not be able to remember with more than 1% accuracy (see the rough sketch after this list). But Hamlet actors and Wagnerian tenors remember very large amounts of stored information with 100% accuracy.
  5. When people talk about the speed of brain signals, we typically hear an estimate of about 100 meters per second. But that is not a reliable estimate of the speed of brain signals, and is only how fast a brain signal may travel when moving across the fastest tiny parts of the brain (what are called myelinated axons). A realistic estimate of the speed of brain signals would have to take into account the relatively slow speed of signal transmission through dendrites (estimated at only .5 meter per second), the strong cumulative slowing effect of what are called synaptic delays, and the further slowing caused by what is called synaptic fatigue (a refractory period in which a synapse will often have to rest before firing again). When you realistically take into account all of these factors, as I have done at length in this post, the math ends up showing that brain signals should not be able to travel in the cortex at much more than a snail's pace of about 1 or 2 centimeters per second. But humans can remember very obscure memories instantly, such as when a quiz show contestant instantly identifies a randomly chosen name or place -- much, much faster, it would seem, than would be possible if our recall involved relatively slow brain signals moving about in our brains.
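
To make the arithmetic behind points 3 and 4 concrete, here is a rough back-of-envelope sketch in Python. Every number in it (the two-week protein lifetime, the per-synapse transmission probabilities, the assumed chains of five or ten synapses) is either a figure quoted above or an illustrative assumption, not a measured parameter.

    # Rough back-of-envelope arithmetic for points 3 and 4 above.
    # All numbers are quoted estimates or illustrative assumptions.

    # Point 3: protein turnover versus memory duration
    protein_lifetime_weeks = 2      # quoted average lifetime of synapse proteins
    memory_duration_years = 50      # how far back old people can accurately remember
    ratio = (memory_duration_years * 52) / protein_lifetime_weeks
    print(f"Memories outlast synapse proteins by a factor of about {ratio:.0f}")
    # -> roughly 1300, i.e. on the order of a thousand

    # Point 4: unreliable synapses in series
    # If each synapse relays a signal with probability p, a signal that must
    # cross k synapses in a row arrives with probability p ** k.
    for p in (0.5, 0.3, 0.1):       # per-synapse transmission probabilities quoted above
        for k in (5, 10):           # assumed number of synapses in the chain
            print(f"p={p}, k={k}: chance the signal gets through = {p ** k:.2e}")
    # Even at p = 0.5, a chain of ten synapses passes a signal only about 0.1%
    # of the time, which is the sort of arithmetic behind the 1% accuracy estimate.
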
In all of these cases, it seems that for materialism the math just doesn't add up. So there are lots of red flags suggesting materialism is feeding us falsehoods. But such red flags are being mainly ignored, just as happened in the Madoff case. Maybe it's for the same reason that it took so long for people to see what was wrong in the Madoff case: because there was an abundance of “social proof.” Just as the Madoff company had lots of "social proof" in the form of its prestigious investors, modern-day materialism has lots of "social proof," in the form of prestigious professors who support it.

Just as a Madoff investor may have reasoned that the Madoff company was “too big to be a sham,” someone today may reason that institutional academic materialism is “too big to be bunk.” But nothing is too big to be bunk or too big to be a sham.


Monday, August 26, 2019

8 Reasons for Doubting Claims of the Heritability of Intelligence

How much, if any, is intelligence inherited? We can imagine two extreme scenarios. Under the scenario of 0% intelligence heritability, two parents who both had an IQ of about 130 would have no reason at all for thinking that their children would have an IQ above 100. Under the scenario of 100% intelligence heritability, two parents with an IQ of about 130 could be certain that their children would have an intelligence about the same as their parents' intelligence. It is often claimed that the heritability of intelligence is about 50%. If that were true, two parents with an IQ of about 130 would have a fairly strong reason for suspecting that their children's intelligence would be above average, but would not have anything close to certainty about such a matter.
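
Here is a minimal sketch of what those three scenarios would imply, using the simple textbook regression of a child's expected IQ on the mid-parent IQ. The formula, the population mean of 100, and the mid-parent value of 130 are illustrative assumptions; the model ignores measurement error, shared environment, and assortative mating, so it shows only the logic of the scenarios, not a real prediction.

    # A minimal sketch of the 0%, 50%, and 100% heritability scenarios above,
    # using the simple textbook regression of offspring on mid-parent value:
    #   expected child IQ = population mean + h2 * (mid-parent IQ - population mean)
    # This ignores measurement error, shared environment, and assortative mating.

    POPULATION_MEAN = 100
    midparent_iq = 130              # both parents around IQ 130

    for h2 in (0.0, 0.5, 1.0):      # assumed heritability of intelligence
        expected_child = POPULATION_MEAN + h2 * (midparent_iq - POPULATION_MEAN)
        print(f"heritability {h2:.0%}: expected child IQ is about {expected_child:.0f}")
    # 0%   -> 100 (no reason to expect above-average children)
    # 50%  -> 115 (above average on average, but far from certain for any one child)
    # 100% -> 130 (children expected to match their parents)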

But the claim that intelligence is even 50% heritable is unproven. I will explain some reasons for doubting such a claim. I will argue that we do not have very convincing evidence that the genetic heritability of intelligence is much higher than 0.

Reason #1: The Lack of Large Parent-Child IQ Databases Needed to Prove a Claim of Intelligence Heritability

What would you need to firmly establish a claim that intelligence is 50% heritable? You would need a very large database listing the IQ scores of very many thousands or millions of individuals, along with the IQ scores of their parents. But such a database is not known to exist anywhere. Consider a case such as my own family. I have never formally had my IQ tested, nor have my two children. When I was in high school, I was told that students had their IQ tested by the school. Conceivably somewhere in some school's database might be an IQ score for me, and in some other school's database might be IQ scores for my children. But there is no database that links these records together, allowing a researcher to use the information as part of a deduction about the heritability of intelligence. The few studies that have attempted to gather the IQ of both parents and children have usually involved samples of no more than a few thousand parents and children. It is impossible to draw very reliable conclusions about human intelligence in general from samples so small.

The study here claims to have used a large database of IQ scores from Norway, one that included parent and child relationships. I'll discuss in the next paragraph why this wasn't actually a database of IQ scores. The study concludes, “The correlation we observe between father’s IQ and that of his son is .38 .” That's an estimate below the typical estimate of 50% for the heritability of intelligence. But since the paper fails to specify how these parent and child relationships were derived, we should be skeptical of this result. The paper is sketchy in its details, and never tells us exactly how the authors knew how to connect fathers and sons. We are not even told how many father-son pairs were used. In a large data set with lots of information, someone might use a computer program to scan the data and calculate the correlation between a father's IQ and his son's IQ. But a single line of errant code in such a program would cause it to produce the wrong answer.

Also, the paper tells us that a standard IQ test was not even the basis of these supposed “IQ scores,” and that the “IQ scores” were derived from a trio of tests. The paper tells us, “The IQ measure is a composite score from three speeded IQ tests -- arithmetic, word similarities, and figures...the word test is similar to the vocabulary test in WAIS.” That doesn't sound at all like a reliable way of testing IQ, and sounds largely like a way of testing education rather than intelligence. The source here says the vocabulary part of the WAIS test "measures word knowledge," but a good IQ test isn't supposed to measure knowledge. The paper seems to have wrongly described a trio of cognitive and educational tests as producing an IQ score. We therefore should have no confidence that the paper has informed us about the heritability of IQ.

Reason #2: The Low Reliability of Twin Studies

Lacking what they need (very large databases with the IQ scores of both parents and children), scientists have tried to rely on much smaller databases involving the IQ scores of twins. The strategy of this type of study is as follows:

  1. Collect data relevant to the IQ scores of twins.
  2. Attempt to figure out whether the twins were monozygotic (so called “identical twins”) or dizygotic twins.
  3. See whether the IQ scores of the monozygotic twins are more closely correlated with each other than the IQ scores of the dizygotic twins are.

The largest study I have found of this type is one that uses something called the Genetics of High Cognitive Abilities (GHCA) Consortium. That term seems to be merely an umbrella used to collect the data from six different studies done in different countries. The number of twins in the database was about 11,000. Based on somewhat higher correlations in IQ between monozygotic twins than between dizygotic twins, the study estimated that intelligence is about 50% heritable.

There are several reasons why such a result is not terribly convincing.
  1. Studies such as this are usually run by researchers interested in proving a genetic basis for intelligence. There are any number of ways in which bias might have influenced the studies, such as in decisions on which twins to include.
  2. A large fraction of the determinations of whether twins were monozygotic came from parent questionnaires rather than biological DNA testing. Such questionnaires are less reliable than DNA testing.
  3. The study's paper tells us that the IQ of the twins was estimated by the people running the study (sometimes from two or more other tests taken). But were the people doing such estimates blind as to whether the twins were monozygotic or dizygotic? If not, such people might have had a tendency to give similar IQ scores to monozygotic twins. The study makes no mention of a blinding protocol being used.
  4. We are not entitled to draw conclusions about the heritability of intelligence in the general population from studies involving only twins. There could be any number of reasons why the heritability of intelligence is a much different number when twins are involved, particularly since we do not at all understand where intelligence comes from and whether it is actually a product of the brain (there being many reasons for doubting claims that human minds are created by the brain).
  5. It is clear from the scientific paper that IQ scores were often derived from written tests in which vocabulary knowledge played an important part. Such tests are largely measures of learned knowledge, something different from intelligence. 
Reason #3: The Sometimes Poor Research Record of Scientists Looking for Genetic Associations

In considering the question of whether intelligence is highly heritable, we should consider the research record of geneticists attempting to show genetic associations with human traits. Over the years such scientists have raised many false alarms, claiming that there were genetic associations when there was no good evidence for such a thing. One example of a blunder by such scientists was described in a recent article in The Atlantic entitled “A Waste of 1000 Research Papers.” The article is about how scientists wrote a thousand research papers trying to suggest that genes such as SLC6A4 were a partial cause of depression, one of the leading mental afflictions. The article tells us, "But a new study—the biggest and most comprehensive of its kind yet—shows that this seemingly sturdy mountain of research is actually a house of cards, built on nonexistent foundations." There have been many such cases in which scientists tried to convince us that some factor has a genetic link, but the evidence was weak. An article says, "Prior to 2005, the field was largely a scientific wasteland scattered with the embarrassing and wretched corpses of unreplicated genetic association studies, with barely a handful of well-validated genetic risk factors peeking above the noise."

It seems that when scientists are eager to show that there is some genetic basis for something, they can be guilty of analysis bias that causes them to think there is good evidence for a genetic link when no such evidence exists. Such an example should make us wonder whether the same faulty biased analysis is at work in studies trying to show a link between genes and intelligence.

Reason #4: The Flynn Effect Contradicts Claims That Intelligence is Highly Heritable

The Flynn effect is a well-documented effect that involves a gradual increase in performance on intelligence tests. The increase seems to be about 3 IQ points per decade, and has seemingly been occurring since the 1930s (although in some countries in recent years we have failed to see evidence of such a thing). Given how often, in recent decades, parents have had children in their thirties, what this means is that a child born around 1990 may typically have an IQ about 9 points higher than the IQ of his parents. Such a thing is a point against claims that intelligence is heritable.

In some of the studies involving the heritability of intelligence, we see a kind of “Flynn effect subtraction” in which the data is massaged to factor out the Flynn effect, or a calculation formula is modified to remove the Flynn effect. But there is no justification for such a thing, which can be described as a kind of cover-up to get rid of a data effect that is inconsistent with the claim of a heritability of intelligence.

Reason #5: The Fact That There Is No Clear Evidence of Any Genes for Intelligence, and That No “IQ Gene” Has Been Found

To try to show a genetic basis for intelligence, scientists sometimes run what is called a genome-wide association study or GWAS. Some of these studies have claimed to have found genes that were associated with intelligence. But there are several reasons why such a thing does not itself show that genes determine intelligence.

Let us consider the important fact that a GWAS will usually find evidence for anything that is being looked for, purely because of random variations unrelated to a causal effect. For example, let's suppose you do a study in which you scan the genomes of 10,000 people and try to find correlations between the DNA of these people and their favorite songs. You will inevitably be able to find some slight correlations here and there which could be cited as a genetic basis for song preferences. But such an association would be spurious, and would be caused by mere random variations that did not have a causal effect. Your genes don't code your song preferences.
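
Here is a small simulation illustrating that point. The number of people, the number of variants, and the significance threshold are arbitrary choices for illustration, not parameters of any real GWAS; the trait is generated with no genetic basis at all.

    # A minimal simulation (not from any real study) of how testing many genetic
    # variants against a trait with no genetic basis still produces "hits."
    import numpy as np

    rng = np.random.default_rng(0)
    n_people = 2_000
    n_variants = 5_000                 # hypothetical SNPs, none of them relevant

    genotypes = rng.integers(0, 3, size=(n_variants, n_people))  # 0/1/2 allele counts
    trait = rng.normal(size=n_people)  # e.g. a "song preference" score, purely random

    # Pearson correlation of every variant with the trait:
    g = genotypes - genotypes.mean(axis=1, keepdims=True)
    t = trait - trait.mean()
    r = (g @ t) / (np.sqrt((g ** 2).sum(axis=1)) * np.sqrt((t ** 2).sum()))

    # Under the null hypothesis r * sqrt(n) is roughly standard normal, so
    # |r| > 2.58 / sqrt(n) corresponds to about p < 0.01 for each variant.
    threshold = 2.58 / np.sqrt(n_people)
    hits = int((np.abs(r) > threshold).sum())
    print(f"Variants spuriously 'associated' with the trait: {hits} of {n_variants}")
    # Expect on the order of 50 false hits -- associations that exist only
    # because so many comparisons were made, not because genes code the trait.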

The same type of spurious association could be cropping up in studies trying to link genes to intelligence. So we may ask questions such as this:

(1) How much of a replication effect is there, with the same genes showing up in multiple studies trying to find genes for intelligence?
(2) How strong an association is reported? How much of intelligence variation might be attributable to a particular candidate for an intelligence gene?

Typically there are only weak associations found in such GWAS studies on the heritability of intelligence. For example, an article on one study says, "Even if a person had both copies of all the variants, she would score an average of 1.8 higher points in an IQ test."

As for replication, reports of "intelligence genes" generally do not replicate well. A scientific study attempted to replicate reported associations with some genes (DTNBP1, CTSD, DRD2, ANKK1, CHRM2, SSADH, COMT, BDNF, CHRNA4, DISC1, APOE, and SNAP25) using "data sets from three independent, well-characterized longitudinal studies with samples of 5,571, 1,759, and 2,441 individuals." The replication attempt failed, and the study is entitled, "Most reported genetic associations with general intelligence are probably false positives."

An article by a psychologist states the following:

"It’s the best kept secret of modern science: 16 years of the Human Genome Project suggest that genes play little or no role in explaining differences in intelligence. While genes have been found for physical traits, such as height or eye colour, they are not the reason you are smarter (or not) than your siblings. Nor are they why you are like your high-achieving or dullard parents, or their forebears."

Reason #6: The Intrinsic Implausibility of Claims That Genes  Determine Intelligence

Scientists have no understanding of how neurons can create thinking, imagination, abstract ideas or understanding. The claim that human thinking comes from the brain is a speech custom of scientists, rather than a fact established by observations.  There are actually strong reasons for doubting that the brain is the cause of human intelligence. One very strong reason is that when people undergo hemispherectomy operations (in which half of their brain is removed to stop epileptic seizures), it has little effect on their intelligence (as documented here).  In the same link just given, you can read about people with high intelligence despite losing the great majority of their brains.  Because we have such strong reasons for doubting that the brain is the actual source of human intelligence, it is intrinsically implausible that some genes relating to the brain may have a large effect on intelligence. 

Reason #7: Much of the Reported Heritability of Intelligence May Be a Heritability of Dyslexia (or Similar Perception Problems) Unrelated to General Intelligence

Dyslexia is a reading disorder that is believed to affect 5% to 10% of children. Dyslexia is not a defect in general intelligence, and a person with dyslexia will do just as well as a person without dyslexia on an intelligence test that does not require reading. "Dyslexia independent of IQ" is the headline of a story at the MIT News. Dyslexia is believed to be highly heritable. A scientific paper on dyslexia says, “Inherited factors are estimated to account for up to 80%.”

But almost all data on IQ comes from paper-and-pencil tests requiring reading. A large part of the data suggesting a heritability of intelligence may simply be caused by a heritability of dyslexia that has nothing to do with intelligence. This is what scientists call a “confounding factor,” and in the case of the heritability of intelligence, it seems to be a largely overlooked confounding factor.

Reason #8: The Fact That IQ Tests Are Imperfect Measurements of Intelligence

It has long been pointed out that IQ tests are far-from-perfect as measurements of intelligence.  One reason is that many things other than intelligence can affect scores on IQ tests. Here are some of the things:

(1) Visual perception. Since an IQ test uses a test booklet that requires a lot of reading, a person's vision clarity and visual perceptual abilities can affect the scores -- but such things are different from intelligence. If a student comes from a poor family that lacks the money to keep his vision at 20/20, his IQ score will suffer accordingly.
(2) Manual dexterity. A typical IQ test will require many different cases of filling in exactly the right oval on a test form, and a small percentage of students might be relatively slow at such a manual task, for reasons having nothing to do with intelligence. 
(3) Learned vocabulary. IQ tests are supposed to use simple words, to avoid being tests of learned knowledge. But any test with many written questions must inevitably be partially a test of learned vocabulary. For example, when I went to the web site for the Wechsler IQ test, the first question (in the short sample test I tried) uses the word "contradictory" and the word "enhance." Very many school children would not know the meaning of these words.
(4) Motivation. Scores on IQ tests are strongly affected by the degree of motivation of the test taker. According to this page, when students are offered substantial money rewards for high scores, their IQ scores can rise by the equivalent of more than 20 points. We read, "Thus rewards higher than $10 produced values of more than 1.6 (roughly equivalent to more than 20 IQ points), whereas rewards of less than $1 were only one-tenth as effective." So imagine the students in a low-quality school where students may tend to have lower motivation. The average IQ result at such a school might be much lower than the average IQ result at a better school, even if the students at the schools have the same intelligence.
(5) Dyslexia. This perceptual reading disorder (not an intelligence defect) can drastically affect scores on written IQ tests.
(6) Distractions in test rooms. These can have a large effect on IQ scores. Let's imagine a disorderly school, with quite a few "class clowns" or unruly students. Such students might create any number of distractions that might affect the concentration of students taking IQ tests.
(7) Number of sharpened pencils of test takers. At a school with students from richer families, an average student may have two sharpened pencils for his IQ test. But at a school with students from poor families, an average student may have only one not-very-sharp pencil.  If the pencil breaks, so much worse for his IQ score. 
(8) Percentage of sick test takers.  A test taker will probably do worse on an IQ test if he is sick.  In affluent communities, parents might hire a "baby sitter" if their child is sick, preventing him from being present during an IQ test. In poorer communities, the sick student might be much more likely to be told to go to school even when he's sick. In some third-world countries,  we can easily imagine 5% or more of the IQ test takers being sick. 
(9) Number of recent traumas affecting test takers. Let's imagine student Sally sits down for a 90-minute IQ test at her school. She finds it rather hard to concentrate for 90 minutes because her thoughts drift to the recent gunfire in her neighborhood, or the alcoholism or drug abuse of one of her family members. The result may be a lower score on the test.


things affecting IQ scores

The factors discussed above cast great doubt on claims of the heritability of intelligence. The same factors (particularly the ones discussed under Reason #8) also cast great doubt on all claims that people of one race or nationality are (on average) more intelligent or less intelligent than people of some other race or nationality.

We can imagine a radically different type of intelligence test. A person would be placed in a basement room with a small window three meters above the ground. There would be 20 large filled boxes in the room.  The person would be told that he is being locked in the room, and that he must escape. He would be told that the faster he escapes, the more money he will make. The test would be to see how fast the person can think of stacking the boxes to make a stair-like structure that can be used to allow escape through the window.  On such a test we can easily imagine students from poorer, troubled schools getting scores just as high as students from elite schools. 

I don't rule out the possibility that intelligence is to some degree heritable, although I think there is no strong evidence for a very large amount of such heritability.  We can certainly imagine some possibilities that might one day firm up that type of evidence, including better intelligence tests with fewer confounding variables, and larger databases tracking intelligence scores for parents and their children. 

Postscript: See the link here for a lengthy discussion of the work of Leon J. Kamin, who made a strong critique of twin studies used to bolster claims of a heritability of intelligence.  Below is an excerpt:


 In summing up his TRA study findings in The Science and Politics of I.Q., Kamin wrote, “To the degree that the case for a genetic influence on I.Q. scores rests on the celebrated studies of separated twins, we can justifiably conclude that there is no reason to reject the hypothesis that I.Q. is simply not heritable.” He reached a similar conclusion in relation to IQ genetic research as a whole. Kamin was not concluding that heredity had no influence on IQ test scores, but rather that IQ genetics researchers, who bore the burden of proof, had failed to provide scientifically valid evidence that it did. In a 1976 book review, Harvard evolutionary geneticist and Kamin’s future collaborator Richard Lewontin wrote that Kamin had discovered, in IQ genetic research, “a pattern of shoddiness, carelessness, miserable experimental design, misreporting, and misrepresentation amounting to a major scandal.”
One of the studies most enthusiastically cited by those claiming intelligence is highly heritable is a 1990 Minnesota study of twins reared apart, called MISTRA. The study is debunked by a lengthy 2022 scientific paper by Jay Joseph entitled "A Reevaluation of the 1990 'Minnesota Study of Twins Reared Apart' IQ Study." We read this:
"In 1980, sociologist Howard Taylor described what he called 'The IQ game,' by which he meant IQ-genetic researchers’ 'use of assumptions that are implausible as well as arbitrary to arrive at some numerical value for the genetic heritability of human IQ scores on the grounds that no heritability calculations could be made without benefit of such assumptions' (Taylor, 1980, p. 7). The MISTRA IQ study can be seen as an exemplar of 'IQ game' bad science...The MISTRA IQ study failed to discover evidence that genetic factors influence IQ scores and cognitive ability across the studied population.".

Thursday, August 22, 2019

The Quixotic “Wing and a Prayer” Impracticality of NASA's Europa Clipper Mission

A press report a few days ago reported, “NASA has cleared the Europa Clipper mission to proceed through the final-design phase and then into spacecraft construction and testing, agency officials announced yesterday.” Too bad. The Europa Clipper mission will basically be a $4 billion waste of money that won't produce any very important scientific results.

Europa is a moon of the planet Jupiter. The Europa Clipper mission will be solely focused on getting more information about this distant moon. But the Europa Clipper won't have the job of discovering what Europa looks like. We already know that, from previous space missions.


Europa (Credit: NASA)

The Europa Clipper spacecraft will take photos of Europa more close-up than previous photos. But there won't be any very interesting close-ups, because the surface of Europa is almost featureless, consisting of frozen ice. So the Europa Clipper won't find any interesting geological features like the Valles Marineris on Mars. The most interesting features on the surface are merely cracks in the ice. Close-up photos of those won't be the kind of images people will be pasting on their walls.

The reason why scientists are interested in Europa is that they think that there could be life in an ocean underneath the icy surface of Europa. Will the Europa Clipper be able to confirm that life exists on Europa? It seems not, for the mission does not include a lander.

But NASA scientists have a kind of “wing and a prayer” idea about how the Europa Clipper spacecraft might detect life. They hope that it might be able to fly through a water geyser erupting on Europa, and sniff signs of life in water vapor. At 2:11 in the NASA video here, we are told that Europa “might be erupting plumes of water,” and that “if that's true, then we could fly through those plumes with the spacecraft.” There are two reasons why there is virtually no hope that such a thing would ever succeed in detecting life.

The first reason is the enormous improbability of abiogenesis, life appearing from non-life in an under-the-ice ocean of Europa. To calculate this chance, we must consider all of the insanely improbable things that seem to be required for life to originate from non-life. It seems that to have even the most primitive life originate, you need to have an “information explosion,” a vast organization windfall comparable to falling trees luckily forming into a big log-cabin hotel. Even the most primitive microorganism known to us seems to need a minimum of more than 200,000 base pairs in its DNA (as discussed here).

Scientists have been knocking their heads on the origin-of-life problem for decades, and have made very little progress. The origin of even the simplest life seems to require fantastically improbable events. Protein molecules have to be just right to be functional. It has been calculated that something like 10 to the seventieth power random trials would be needed for a single type of functional protein molecule to appear, and many different types of protein molecules are needed for life to get started. And so much more is also needed: cells, self-replicating molecules, a genetic code that is an elaborate system of symbolic representations, and also some fantastically improbable luck in regard to homochirality (like the luck of you tossing a bucket full of pennies on the floor, and having them all turn up heads). The complete failure of all attempts to search for radio signals from extraterrestrials would seem to provide further evidence against claims that the origin of life is relatively easy.
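
To put rough numbers on the coin-toss analogy, here is a short sketch; the choice of 100 pennies is an arbitrary illustration, and the 1-in-10-to-the-seventieth figure is simply the estimate quoted above.

    # A small sketch of the coin-toss analogy. The 100 pennies are an arbitrary
    # illustration; the 10-to-the-70th figure is the estimate quoted in the text.
    from math import log2

    pennies = 100
    odds_all_heads = 2 ** pennies      # 1 chance in this many of all landing heads
    print(f"Chance of {pennies} tossed pennies all landing heads: 1 in {odds_all_heads:.1e}")
    # -> about 1 in 1.3e30

    # For comparison, odds of 1 in 10^70 correspond to flipping this many fair
    # coins and having every single one land heads:
    print("Coin flips equivalent to 1-in-10^70 odds:", round(log2(10 ** 70)))
    # -> about 233 flips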

There is another reason why the “sniff life from a water geyser's vapor” scheme would have virtually no chance of succeeding. The evidence that water plumes even occur on Europa is only borderline, with some research casting doubt on the evidence. If water plumes occur on Europa, they seem to occur only very rarely and for a short time. The paper here suggests plume “ballistic timescales of only 1000” seconds, making the chance of a spacecraft flying through a plume incredibly unlikely (less than the chance of me dying from stray gunfire).

It would not at all be a situation like the following:

Mr. Spock: Captain, I detect a water plume from a geyser on Europa.
Captain Kirk: Quick, hurry over there while it lasts! Go to Warp Factor 8!

If a rare water geyser eruption occurred, the Europa Clipper spacecraft probably would not be anywhere close to Europa's surface. This is because the Europa Clipper mission plan does not have the spacecraft orbiting Europa. Instead, the plan is to just have the spacecraft repeatedly fly by Europa, flying by it about 45 times, so that the spacecraft does not pick up too much deadly radiation near Europa. With only such intermittent appearances close to Europa, the spacecraft would need an incredibly lucky coincidence to occur for the spacecraft to fly through some short-lived water plume ejected by a geyser.
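
To see why such a coincidence is such a long shot, consider the crude probability sketch below. Apart from the roughly 1,000-second plume lifetime and the 45 planned flybys mentioned above, every number in it (eruption frequency, time spent low over Europa per flyby) is a guessed assumption, so it shows only the shape of the calculation, not a real mission estimate.

    # Crude illustration of the plume-crossing odds. Only the ~1000 s plume
    # lifetime and the 45 flybys come from the text; the rest are guesses.

    plume_duration_s = 1_000       # ballistic timescale from the cited paper
    plumes_per_year = 10           # assumed eruption frequency (a generous guess)
    n_flybys = 45                  # planned close approaches
    close_pass_s = 600             # assumed time spent low over Europa per flyby

    seconds_per_year = 365.25 * 24 * 3600
    # A close pass overlaps an eruption if the eruption starts within roughly
    # (plume duration + pass duration) of the pass:
    p_overlap = plumes_per_year * (plume_duration_s + close_pass_s) / seconds_per_year
    expected_lucky_passes = n_flybys * p_overlap

    print(f"Chance any single flyby coincides with an eruption: {p_overlap:.1e}")
    print(f"Expected coincidences over {n_flybys} flybys: {expected_lucky_passes:.3f}")
    # Roughly one chance in forty over the whole mission that a close pass even
    # coincides with an eruption somewhere on Europa -- before asking whether the
    # trajectory actually threads the narrow plume itself.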

We can compare this scheme to the “wing and a prayer” scheme of a traveler who plans to travel without any food to a city, the plan being that the traveler will walk with his mouth open and hope that someone discards food by throwing it into the air, with the food luckily landing in the traveler's mouth.

At the 2:52 mark in the NASA video, we get some talk that reveals the main motivation behind Europa exploration. It's all about trying to prove (contrary to all the known facts) that “the origin of life must be pretty easy,” to use the words in the video. For people with certain ideological tendencies, proving that the origin of life was easy is like a crusade. But zealous crusaders often don't make logical plans, as we saw during the Middle Ages when there were foolish missions such as the Children's Crusade, in which an army of children marched off to try to capture the Holy Land from Muslim armies. The Europa Clipper mission's odds of biological detection success seem like the odds of success faced by the Children's Crusade.

Sunday, August 18, 2019

He Tries to Play Natural History Mr. Fix-it

The book “Lamarck's Revenge” by paleontologist Peter Ward tries, in effect, to tell us: evolution theory is broken, but there's a fix. Unfortunately, the fix described is no real fix at all, being hardly better than a tiny little band-aid.

On page 43 the author tells us the following:

“'Nature makes no leap,' meaning that evolution took place slowly and gradually. This was Darwin's core belief. And yet that is not how the fossil record works. The fossil record shows more 'leaps' than not in species.”

On page 44 the author states the following:

“Charles Darwin, in edition after edition of his great masterpiece, railed against the fossil record. The problem was not his theory but the fossil record itself. Because of this, paleontology became an ever-greater embarrassment to the Keepers of Evolutionary Theory. By the 1940s and '50s this embarrassment only heightened. Yet data are data; it is the interpretation that changed. By the mid-twentieth century, the problem posed by fossils was so acute it could no longer be ignored: The fossil record, even with a century of collecting after Darwin, still did not support Darwinian views of how evolution took place. The greatest twentieth century paleontologist, George Gaylord Simpson, in midcentury had to admit to a reality of the fossil record: 'It remains true, as every paleontologist knows, that most new species, genera, and families, and that nearly all new categories above the level of families, appear in the record suddenly, and are not led up to by known, gradual, completely continuous transitional sequences.'”

So apparently the fossil record does not strongly support the claims of Darwin fans that new life forms appear mainly through slow, gradual evolution. Why have we not been told this important truth more often, and why have our authorities so often tried to insinuate that the opposite was true? An example is that during the Ordovician period, the number of marine animals in one classification category (Family, the category above Genus) roughly tripled (according to this scientific paper). But such an "information explosion" is dwarfed by the organization bonanza that occurred during the Cambrian Explosion, in which almost all animal phyla appeared rather suddenly -- what we may call a gigantic complexity windfall.


information explosion

Ward proposes a fix for the shortcomings of Darwinism – kind of a moldy old fix. He proposes that we go back and resurrect some of the ideas of the biologist Jean-Baptiste Lamarck, who died in 1829. Ward's zeal on this matter is immoderate, for on page 111 he refers to a critic of Darwinism and states, “The Deity he worships should be Lamarck, not God.” Many have accused evolutionary biologists of kind of putting Darwin on some pedestal of adulation, but here we seem to have this type of worshipful attitude directed towards a predecessor of Darwin.

Lamarck's most famous idea was that acquired characteristics can be inherited. He stated, "All that has been acquired, traced, or changed, in the physiology of individuals, during their life, is conserved...and transmitted to new individuals who are related to those who have undergone those changes." For more than a century, biologists told us that the biological theories of Lamarck are bunk. If the ideas of Lamarck are resurrected by biologists, this may show that biologists are very inconsistent thinkers who applaud in one decade what they condemned the previous decade.

Ward's attempt to patch up holes in Darwinism is based on epigenetics, which may offer some hope for a little bit of inheritance of acquired characteristics. In Chapter VII he discusses the huge problem of explaining the Cambrian Explosion, which he tells us is “when the major body plans of animals now on Earth appeared rapidly in the fossil record.” Ward promises us on page 111 that epigenetics can “explain away” the problem of the Cambrian Explosion. But he does nothing to fulfill this promise. On the same page he makes a statement that ends in absurdity:

“The...criticism is that Darwinian mechanisms, most notably natural selection combined with slow, gene-by-gene mutations, can in no way produce at the apparent speed at which the Cambrian explosion was able to produce all the basic animal body plans in tens of millions of years or less. Yet the evidence of even faster evolutionary change is all around us. For example, the ways that weeds when invading a new environment can quickly change their shapes.”

The latter half of this statement is fatuous. The mystery of the Cambrian Explosion was how so many dramatic biological innovations (such as vision, hard shells and locomotion) and how so many new animal body plans were able to appear so quickly. There is no biological innovation at all (and no real evolution) when masses of weeds change their shapes, just as there is no evolution when flocks of birds change their shapes; and weeds aren't even animals. The weeds still have the same DNA  (the same genomes) that they had before they changed their shapes. Humans have never witnessed any type of natural biological innovation a thousandth as impressive as the Cambrian Explosion.

Ward then attempts to convince us that “four different epigenetic mechanisms presumably contributed to the great increase in both the kinds of species and the kinds of morphologies that distinguished them that together produced the Cambrian explosion” (page 113). By using the phrase “presumably contributed” he indicates he has no strong causal argument. To say something “contributed” to some effect is to make no strong causal claim at all. A million things may contribute to some effect without being anything close to an actual cause of that effect. For example, the mass of my body contributes to the overall gravitational pull that the Earth and its inhabitants exert on the moon, but it is not true that the mass of my body is even one millionth of the reason why the moon keeps orbiting the Earth. And each time you have a little charcoal barbecue outside, you are contributing to global warming, but that does not mean your activity is even a millionth of the cause of global warming. So when a scientist merely says that one thing “contributes” to something else, he is not making a strong causal claim at all. And when a scientist says that something “presumably contributed” to some effect, he is making a statement so featherweight and hesitant that it has no real weight.

On pages 72-73 Ward has a section entitled “Summarizing Epigenetic Processes,” and that section lists three things:

  1. “Methylation,” meaning that “lengths of DNA can be deactivated.”
  2. “Modifications of gene expression...such as increasing the rate of protein production called for by the code or slowing it down or even turning it on or off.”
  3. “Reprogramming,” which he defines as some type of erasing, using the phrase “erase and erase again,” and also the phrase “reprogramming, or erasing.”

There is nothing constructive or creative in such epigenetic processes. They are simply little tweaks of existing proteins or genes, or cases of turning off or erasing parts of existing functionality. So epigenetics is useless in helping to explain how some vast burst of biological information and functional innovation (such as the Cambrian Explosion) could occur.

While trying to make it sound as if epigenetics had something to do with the Cambrian Explosion, Ward actually gives away that such an idea has not at all gained acceptance, for he states this on page 118: “From there the floodgates relating to epigenetics and the Cambrian explosion opened, yet none of this has made it into the textbooks thus far.” Of course it hasn't, because there's no substance to the idea that epigenetics can explain even a hundredth of the Cambrian explosion. The Cambrian Explosion involved the relatively sudden appearance of almost all animal phyla, but Ward has failed to explain how epigenetics can explain the origin of even one of those phyla (or even a single appendage or organ or protein that appeared during the Cambrian Explosion). If the inheritance of acquired characteristics were true, it would do nothing to explain the appearance of novel biological innovations, never seen before, because the inheritance of acquired characteristics is the repetition of something some organisms already had, not the appearance of new traits, capabilities, or biological innovations.

So we have in Ward's book the strange situation of a book that mentions some big bleeding wounds in Darwinism, but then merely offers the tiniest band-aid as a fix, along with baseless boasts that sound like, “This fixes everything.”

A book like this seems like a complexity concealment. Everywhere living things show mountainous levels of organization, information and complexity, but you would never know that from reading Ward's book. He doesn't tell you that cells are so complex they are often compared to factories or cities. He doesn't tell you that in our bodies are more than 20,000 different types of protein molecules, each a different very complex arrangement of matter, each with the information complexity of a 50-line computer program, such proteins requiring a genome with an information complexity of 1000 books. He doesn't mention protein folding, one of the principal reasons why there is no credible explanation for the origin of proteins (the fact that functional proteins require folding, a very hard-to-achieve trick which can occur in less than .00000000000000000000000001 of the possible arrangements of the amino acids that make up a protein). In the index of the book we find no entries for "complexity," "organization," "order," or "information." Judging from its index, the book has only a passing reference to proteins, something that tells you nothing about them or how complex they are. The book doesn't even have an index entry for "cells," merely having an entry for "cellular differentiation." The only thing Ward tells you about the stratospheric level of order, organization and information in living things comes on page 97, where he says that "there is no really simple life," and that life is "composed of a great number of atoms arranged in intricate ways" -- a statement so vague and nebulous that it will go in one ear and out the other.

In his interesting recent book Cosmological Koans, which has some nice flourishes of literary style, the physicist Anthony Aguirre tells us about just how complex biological life is. He states the following on page 338:

"On the physical level, biological creatures are so much more complex in a functional way than current artifacts of our technology that there's almost no comparison. The most elaborate and sophisticated human-designed machines, while quite impressive, are utter child's play compared with the workings of a cell: a cell contains on the order of 100 trillion atoms, and probably billions of quite complex molecules working with amazing precision. The most complex engineered machines -- modern jet aircraft, for example -- have several million parts. Thus, perhaps all the jetliners in the world (without people in them, of course) could compete in functional complexity with a lowly bacterium."

Some arrangements are too complex to have appeared by chance

Wednesday, August 14, 2019

The Baloney in Salon's Story About Memory Infections

The major online web site www.salon.com (Salon) has put up a story on human memory with the very goofy-sounding title, “A protein in your brain behaves like a virus, infecting your cells with memories.” The story is speculative nonsense, and it seems to be based on a hogwash misstatement (though not one told by the writer of the story). The story tells us, “Viruses may actually be responsible for the ability to form memories,” a claim that not one in 100 neuroscientists has ever made.

The story mentions some research done by Jason Shepherd, associate professor of neurobiology at the University of Utah Medical School. Shepherd has done some experiments testing whether a protein called Arc (also known as Arc/Arg 3.1) is a crucial requirement for memory formation in mice. He does this by removing the gene for Arc in some mice, which are called “Arc knockout” mice, meaning that they don't have the gene for Arc, and therefore don't have the Arc protein that requires the Arc gene.

In the Salon.com story, we read the following claim by Shepherd: “ 'If you take the Arc gene out of mice,' Shepherd explains, 'they don’t remember anything.' ” Right next to this quote, there's a link to a 2006 scientific paper by 26 scientists, none of them Shepherd (although strangely the Salon story suggests that the paper is research by Shepherd and his colleagues). Unfortunately, the paper does not at all match the claim Shepherd has made, for the paper demonstrates very substantial memory and learning in exactly these “Arc knockout” mice that were deprived of the Arc gene and the Arc protein.

The Morris water maze test is a test of learning and memory in rodents. In the test a rodent is put at the center of a large circular tub of water, which may or may not have milk powder added to make the water opaque. Near one edge of the tub is a submerged platform that the rodent can jump on to escape the tub. A rodent can find out the location of the platform by exploring around. Once the rodent has used the platform to escape the tub, the rodent can be put back in the center of the tub, to see how well the rodent remembers the location of the little platform previously discovered. 

In the paper by the 26 scientists, 25 normal mice and 25 “Arc knockout” mice were tested on the Morris water maze. The “Arc knockout” mice remembered almost as well as the normal mice. The graphs below show the performance difference between the normal mice and the mutant “Arc knockout” mice, which was only minor. The authors state that in another version of the Morris water maze test of learning and memory, there was no difference between the performance of the “Arc knockout” mice and normal mice. We are told, “No differences were observed in the cue version of the task.”

Unimpressive “water maze” results from the paper by 26 scientists, showing no big effect from knocking out the Arc gene 

What title did the 26 scientists give their paper? They gave it the title, “Arc/Arg3.1 Is Essential for the Consolidation of Synaptic Plasticity and Memories.” But to the contrary, the Morris water maze part of the experiments shows exactly the opposite: that you can get rid of Arc/Arg3.1 in mutant “knockout” mice, and that they will still remember well, with good consolidation of memories (the test involved memory over several days, which requires consolidation of learned memories). The 26 authors have given us a very misleading title to their paper, suggesting it showed something, when it showed exactly the opposite.

A 2018 paper by 10 scientists testing exactly the same thing (whether “Arc knockout” mice do worse on the Morris water maze) found the same results as the paper by the 26 authors: that there were only minor differences in the performance of the mutant “Arc knockout mice” when they were tested with the Morris water maze test. In fact that paper specifically states, “Deletion of Arc/Arg3.1 in Adult Mice Impairs Spatial Memory but Not Learning,” which very much contradicts Shepherd's claim that “If you take the Arc gene out of mice, they don’t remember anything.”

Unimpressive “water maze” results from the 2018 paper by 10 scientists, showing no big effect from knocking out the Arc gene

So why did scientist Shepherd make the false claim that “If you take the Arc gene out of mice, they don’t remember anything,” a claim disproved by the paper by the 26 scientists (despite its misleading title contradicting its own findings), and also disproved by the 2018 paper? Shepherd has some explaining to do.

In the paper here, Shepherd and his co-authors state, “The neuronal gene Arc is essential for long-lasting information storage in the mammalian brain,” a claim that is inconsistent with the experimental results just described in two papers, which show long-lasting information retention in mice that had the Arc gene removed. In the paper Shepherd refers in a matter-of-fact manner to “information storage in the mammalian brain,” but indicates he doesn't have any real handle on how such a thing could work, by confessing that “we still lack a detailed molecular and cellular understanding of the processes involved,” which is the kind of thing people say when they don't have a good scientific basis for believing something.

We have seen here two cases of the misuse of the word "essential" in scientific papers.  I suspect that the word is being misused very frequently in biological literature, in cases where scientists say one thing is essential for something else when the second thing can exist quite substantially without the first.  Similarly, a scientific paper found that the phrase "necessary and sufficient" is being massively misused in biology papers that failed to find a "necessary and sufficient" relationship.  To say that one thing is necessary and sufficient for some other thing means that the second thing never occurs without the first, and the first thing (by itself) always produces the second.  An example of the correct use of the phrase is "permanent cessation of blood flow is both necessary and sufficient for death." But it seems biologists have massively misused this phrase, using it for cases in which no such "necessary and sufficient" relationship exists.  A preposterous example is the paper here which tells us in its title that two gut microbes (microorganisms too small to see) are "necessary and sufficient" for cognition in flies -- which is literally the claim that tiny microbes are all that a fly needs to think or understand. 

Postscript: I have received a reply from Jason Shepherd, who quite rightfully scolds me for misspelling his name in the original version of this post. I apologize for that careless error.  Here is Shepherd's reply, which I am grateful to receive:

"You've conflated two different terms and don't seem to understand what consolidation is: Learning and Memory. Arc KO mice can indeed learn information. Memory is the storage and retention of information that was learned. Arc KO mice have normal short term memory, as in if you test them in the water maze and other memory tasks they do, indeed, show memory. However, if you come back a day later...there is NO retention of that information. There is, in fact, an essential role for this gene in the consolidation of memory. That is, conversion of short to long-term memory."

I do not find this to be a satisfactory explanation at all, and when I wrote the post I was well aware of what consolidation is when referring to memory: a kind of firming up of a newly acquired memory so that it is retained for a period of time such as days. As for Shepherd's claim about Arc KO mice (mice that have had the Arc gene knocked out) that "if you come back a day later...there is NO retention of that information," that is clearly disproved by the two scientific papers that I cited. The water maze test is a test of learning, conducted over 5 or more days, showing a mouse's improved ability over multiple days to find a hidden platform. The graphs cited above very clearly show the retention and consolidation of learned information over multiple days in Arc knockout mice. The mice got better and better at finding the hidden platform in the water maze, improving their score over several days, which they could only have done if they had retained information learned in previous days. In three of the graphs above, we see steady improvement of Arc knockout mice in the water maze test, over a 9-day period. If it were actually true that in Arc KO mice "there is NO retention of that information," we would see a straight horizontal line in the performance graph for the water maze, not a diagonal line showing steady improvement from day to day.

In his reply Shepherd rather seems to insinuate that the water-maze test is merely a test of short-term memory, but it is very well known that the Morris water-maze test is a test of long-term memory and memory consolidation.  For example, here we read that the Morris water maze test "requires the mice to learn to swim from any starting position to the escape platform, thereby acquiring a long-term memory of the platform’s spatial location."  And the link here tells us that the Morris water-maze tests "several stages of learning and memory (e.g., encoding, consolidation and retrieval)."  And the paper here refers to "spatial long-term memory retention in a water maze test."  And the recent Nature paper here says, "We next examined the hippocampus-dependent long-term memory formation in cKO mice by Morris water maze test." Figure 1 of the paper specifically cites a Morris water maze test, speaking just as if it tests what the paper calls "long-term memory consolidation." The first line of the scientific paper here states that the Morris water maze test was established to test things such as "long-term spatial memory." 

Here are some relevant comments from a science book about memory, comments disputing the idea that the Arc protein has anything to do with long-term memory persisting for days: 

"The half-life of the Arc protein is a few hours: think about the implications of this fact...Thus, if Arc (or any other protein with similar regulation and kinetics) participates in synaptic potentiation, this mechanism can only allow potentiation to be maintained for less than a day. Induction of Arc or any similar protein is clearly not a maintenance mechanism for very long-lasting events."

Post-postscript: I should have also included the graphs below from the 2018 paper on Arc knockout mice (from Figure 4).  In the first graph we see the performance of Arc knockout mice in the Morris water maze test, compared to normal mice. There is no noticeable difference in the 9-day test: the two lines match almost exactly -- just as if knocking out the Arc gene had produced no effect on memory or learning. 


The second and third graphs show "search strategies" used by Arc knockout mice (cKO) and the normal mice (WT-control).  The two graphs look the same. In his latest reply (which you can read below), Shepherd quotes this paper talking about a difference in search strategies between the two groups, but in the graphs above we fail to see any real  difference. The caption for these graphs says, "Similar progression of navigational strategy was observed in both groups."  Apparently there were two different experimental results, one of which failed to show any real difference in the search strategies used by the mice lacking the Arc gene. 

Post-post-postscript: In a 2020 paper Shepherd and his co-authors find that the Arc gene is "not required for LTP maintenance," a finding that seems to conflict with the claim that the Arc gene is vital for memory, or is at least discouraging to those who have made such a claim. The authors say they were "surprised by this result."