
Our future, our universe, and other weighty topics


Thursday, August 22, 2019

The Quixotic “Wing and a Prayer” Impracticality of NASA's Europa Clipper Mission

A press report a few days ago stated, “NASA has cleared the Europa Clipper mission to proceed through the final-design phase and then into spacecraft construction and testing, agency officials announced yesterday.” Too bad. The Europa Clipper mission will basically be a $4 billion waste of money that won't produce any very important scientific results.

Europa is a moon of the planet Jupiter. The Europa Clipper mission will be solely focused on getting more information about this distant moon. But the Europa Clipper won't have the job of discovering what Europa looks like. We already know that, from previous space missions.


Europa (Credit: NASA)

The Europa Clipper spacecraft will take photos of Europa that are more close-up than any previous photos. But there won't be any very interesting close-ups, because the surface of Europa is almost featureless, consisting of frozen ice. So the Europa Clipper won't find any interesting geological features like the Valles Marineris on Mars. The most interesting features on the surface are merely cracks in the ice. Close-up photos of those won't be the kind of images people paste on their walls.

The reason why scientists are interested in Europa is that they think that there could be life in an ocean underneath the icy surface of Europa. Will the Europa Clipper be able to confirm that life exists on Europa? It seems not, for the mission does not include a lander.

But NASA scientists have a kind of “wing and a prayer” idea about how the Europa Clipper spacecraft might detect life. They hope that it might be able to fly through a water geyser erupting on Europa, and sniff signs of life in water vapor. At 2:11 in the NASA video here, we are told that Europa “might be erupting plumes of water,” and that “if that's true, then we could fly through those plumes with the spacecraft.” There are two reasons why there is virtually no hope that such a thing would ever succeed in detecting life.

The first reason is the enormous improbability of abiogenesis, life appearing from non-life in an under-the-ice ocean of Europa. To calculate this chance, we must consider all of the insanely improbable things that seem to be required for life to originate from non-life. It seems that to have even the most primitive life originate, you need to have an “information explosion,” a vast organization windfall comparable to falling trees luckily forming into a big log-cabin hotel. Even the most primitive microorganism known to us seems to need a minimum of more than 200,000 base pairs in its DNA (as discussed here).

Scientists have been banging their heads against the origin-of-life problem for decades, and have made very little progress. The origin of even the simplest life seems to require fantastically improbable events. Protein molecules have to be just-right to be functional. It has been calculated that something like 10^70 random trials would be needed for a single type of functional protein molecule to appear, and many different types of protein molecules are needed for life to get started. And so much more is also needed: cells, self-replicating molecules, a genetic code that is an elaborate system of symbolic representations, and also some fantastically improbable luck in regard to homochirality (like the luck of you tossing a bucket full of pennies on the floor, and having them all turn up heads). The complete failure of all attempts to search for radio signals from extraterrestrials would seem to provide further evidence against claims that the origin of life is relatively easy.
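To give a feel for the pennies analogy, here is a minimal back-of-the-envelope sketch (the coin counts are my own choice, purely for illustration): the chance of N fair coins all landing heads is (1/2)^N, which collapses toward impossibility very quickly.

```python
# Illustrative arithmetic for the pennies analogy (coin counts are my own choice).
# The chance that n fair coin tosses all land heads is (1/2) ** n.
for n in (10, 100, 500):
    print(f"{n} pennies all heads: about 1 in {2 ** n:.2e}")
# 10 pennies: 1 in ~1e3; 100 pennies: 1 in ~1.3e30; 500 pennies: 1 in ~3.3e150
```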

There is another reason why the “sniff life from a water geyser's vapor” plan would have virtually no chance of succeeding. The evidence that water plumes even occur on Europa is only borderline, with some research casting doubt on it. If water plumes occur on Europa, they seem to occur only very rarely and for a short time. The paper here suggests plume “ballistic timescales of only 1000” seconds, making the chance of a spacecraft flying through a plume incredibly unlikely (less than the chance of me dying from stray gunfire).
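For a rough sense of the odds, here is a sketch of the arithmetic. Every input except the 1000-second timescale is my own assumption (nobody knows the real eruption rate), and the geometry of actually passing through a plume is generously ignored:

```python
# Rough, illustrative odds of a flyby coinciding with a short-lived plume.
# Only the 1000-second timescale comes from the cited paper; the other
# numbers are assumptions made for the sake of the sketch.
plume_duration_s = 1000            # "ballistic timescale" from the paper
plumes_per_year = 10               # assumed eruption rate (unknown in reality)
flybys = 45                        # planned number of Europa flybys

seconds_per_year = 365.25 * 24 * 3600
active_fraction = plumes_per_year * plume_duration_s / seconds_per_year
# Chance that at least one flyby happens while any plume is active,
# ignoring the far harder requirement of flying through the right spot:
p_any = 1 - (1 - active_fraction) ** flybys
print(f"fraction of time a plume is active: {active_fraction:.1e}")
print(f"chance over the whole mission (timing alone): {p_any:.1e}")
```

Even with a generous assumed eruption rate, timing alone yields a chance on the order of one in a hundred, and the requirement of actually passing through the plume makes the real odds far worse.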

It would not at all be a situation like the following:

Mr. Spock: Captain, I detect a water plume from a geyser on Europa.
Captain Kirk: Quick, hurry over there while it lasts! Go to Warp Factor 8!

If a rare water geyser eruption occurred, the Europa Clipper spacecraft probably would not be anywhere close to Europa's surface. This is because the Europa Clipper mission plan does not have the spacecraft orbiting Europa. Instead, the plan is to just have the spacecraft repeatedly fly by Europa, about 45 times, so that the spacecraft does not pick up too much deadly radiation near Europa. With only such intermittent appearances close to Europa, an incredibly lucky coincidence would be needed for the spacecraft to fly through some short-lived water plume ejected by a geyser.

We can compare this scheme to the “wing and a prayer” plan of a traveler who sets out for a city without any food, planning to walk with his mouth open and hope that someone discards food by throwing it into the air, with the food luckily landing in the traveler's mouth.

At the 2:52 mark in the NASA video, we get some talk that reveals the main motivation behind Europa exploration. It's all about trying to prove (contrary to all the known facts) that “the origin of life must be pretty easy,” to use the words in the video. For people with certain ideological tendencies, proving that the origin of life was easy is like a crusade. But zealous crusaders often don't make logical plans, as we saw during the Middle Ages when there were foolish missions such as the Children's Crusade, in which an army of children marched off to try to capture the Holy Lands from Muslim armies. The Europa Clipper mission's odds of biological detection success seem like the odds of success faced by the Children's Crusade.

Sunday, August 18, 2019

He Tries to Play Natural History Mr. Fix-it

The book “Lamarck's Revenge” by paleontologist Peter Ward tries, in effect, to tell us: evolution theory is broken, but there's a fix. Unfortunately, the fix described is no real fix at all, being hardly better than a tiny little band-aid.

On page 43 the author tells us the following:

“'Nature makes no leap,' meaning that evolution took place slowly and gradually. This was Darwin's core belief. And yet that is not how the fossil record works. The fossil record shows more 'leaps' than not in species.”

On page 44 the author states the following:

“Charles Darwin, in edition after edition of his great masterpiece, railed against the fossil record. The problem was not his theory but the fossil record itself. Because of this, paleontology became an ever-greater embarrassment to the Keepers of Evolutionary Theory. By the 1940s and '50s this embarrassment only heightened. Yet data are data; it is the interpretation that changed. By the mid-twentieth century, the problem posed by fossils was so acute it could no longer be ignored: The fossil record, even with a century of collecting after Darwin, still did not support Darwinian views of how evolution took place. The greatest twentieth century paleontologist, George Gaylord Simpson, in midcentury had to admit to a reality of the fossil record: 'It remains true, as every paleontologist knows, that most new species, genera, and families, and that nearly all new categories above the level of families, appear in the record suddenly, and are not led up to by known, gradual, completely continuous transitional sequences.'”

So apparently the fossil record does not strongly support the claims of Darwin fans that new life forms appear mainly through slow, gradual evolution. Why have we not been told this important truth more often, and why have our authorities so often tried to insinuate that the opposite was true? An example is that during the Ordovician period, the number of marine animals in one classification category (Family, the category above Genus) tripled (according to this scientific paper). But such an "information explosion" is dwarfed by the organization bonanza that occurred during the Cambrian Explosion, in which almost all animal phyla appeared rather suddenly -- what we may call a gigantic complexity windfall.


information explosion

Ward proposes a fix for the shortcomings of Darwinism – kind of a moldy old fix. He proposes that we go back and resurrect some of the ideas of the biologist Jean-Baptiste Lamarck, who died in 1829. Ward's zeal on this matter is immoderate, for on page 111 he refers to a critic of Darwinism and states, “The Deity he worships should be Lamarck, not God.” Many have accused evolutionary biologists of kind of putting Darwin on some pedestal of adulation, but here we seem to have this type of worshipful attitude directed towards a predecessor of Darwin.

Lamarck's most famous idea was that acquired characteristics can be inherited. He stated, "All that has been acquired, traced, or changed, in the physiology of individuals, during their life, is conserved...and transmitted to new individuals who are related to those who have undergone those changes." For more than a century, biologists told us that the biological theories of Lamarck are bunk. If the ideas of Lamarck are resurrected by biologists, this may show that biologists are very inconsistent thinkers who applaud in one decade what they condemned the previous decade.

Ward's attempt to patch up holes in Darwinism is based on epigenetics, which may offer some hope for a little bit of inheritance of acquired characteristics. In Chapter VII he discusses the huge problem of explaining the Cambrian Explosion, which he tells us is “when the major body plans of animals now on Earth appeared rapidly in the fossil record.” Ward promises us on page 111 that epigenetics can “explain away” the problem of the Cambrian Explosion. But he does nothing to fulfill this promise. On the same page he makes a statement that ends in absurdity:

“The...criticism is that Darwinian mechanisms, most notably natural selection combined with slow, gene-by-gene mutations, can in no way produce at the apparent speed at which the Cambrian explosion was able to produce all the basic animal body plans in tens of millions of years or less. Yet the evidence of even faster evolutionary change is all around us. For example, the ways that weeds when invading a new environment can quickly change their shapes.”

The latter half of this statement is fatuous. The mystery of the Cambrian Explosion was how so many dramatic biological innovations (such as vision, hard shells and locomotion) and how so many new animal body plans were able to appear so quickly. There is no biological innovation at all (and no real evolution) when masses of weeds change their shapes, just as there is no evolution when flocks of birds change their shapes; and weeds aren't even animals. The weeds still have the same DNA  (the same genomes) that they had before they changed their shapes. Humans have never witnessed any type of natural biological innovation a thousandth as impressive as the Cambrian Explosion.

Ward then attempts to convince us that “four different epigenetic mechanisms presumably contributed to the great increase in both the kinds of species and the kinds of morphologies that distinguished them that together produced the Cambrian explosion” (page 113). By using the phrase “presumably contributed” he indicates he has no strong causal argument. To say something “contributed” to some effect is to make no strong causal claim at all. A million things may contribute to some effect without being anything close to an actual cause of that effect. For example, the mass of my body contributes to the overall gravitational pull that the Earth and its inhabitants exert on the moon, but it is not true that the mass of my body is even one millionth of the reason why the moon keeps orbiting the Earth. And each time you have a little charcoal barbecue outside, you are contributing to global warming, but that does not mean your activity is even a millionth of the cause of global warming. So when a scientist merely says that one thing “contributes” to something else, he is not making a strong causal claim at all. And when a scientist says that something “presumably contributed” to some effect, he is making a statement so hedged and hesitant that it carries no real weight.
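The body-mass example is easy to put in numbers (the 70-kilogram figure below is just an assumed round value):

```python
# How much of the Earth-system's pull on the Moon does one person's mass supply?
# Gravitational force scales with mass, so the person's share of the pull is
# simply the ratio of the masses. The 70 kg body mass is an assumed round value.
body_mass_kg = 70.0
earth_mass_kg = 5.972e24   # standard value for the Earth's mass
print(f"share of the pull: {body_mass_kg / earth_mass_kg:.1e}")  # ~1.2e-23
```

A "contribution" of about one part in 10^23 is real but causally negligible, which is exactly the point about the word "contributed."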

On pages 72-73 Ward has a section entitled “Summarizing Epigenetic Processes,” and that section lists three things:

  1. “Methylation,” meaning that “lengths of DNA can be deactivated.”
  2. “Modifications of gene expression...such as increasing the rate of protein production called for by the code or slowing it down or even turning it on or off.”
  3. “Reprogramming,” which he defines as some type of erasing, using the phrase “erase and erase again,” also using the phrase “reprogramming, or erasing.”

There is nothing constructive or creative in such epigenetic processes. They are simply little tweaks of existing proteins or genes, or cases of turning off or erasing parts of existing functionality. So epigenetics is useless in helping to explain how some vast burst of biological information and functional innovation (such as the Cambrian Explosion) could occur.

While trying to make it sound as if epigenetics had something to do with the Cambrian Explosion, Ward actually gives away that such an idea has not at all gained acceptance, for he states this on page 118: “From there the floodgates relating to epigenetics and the Cambrian explosion opened, yet none of this has made it into the textbooks thus far.” Of course not, because there's no substance to the idea that epigenetics can explain even a hundredth of the Cambrian Explosion. The Cambrian Explosion involved the relatively sudden appearance of almost all animal phyla, but Ward has failed to explain how epigenetics can explain the origin of even one of those phyla (or even a single appendage or organ or protein that appeared during the Cambrian Explosion). If the inheritance of acquired characteristics were true, it would do nothing to explain the appearance of novel biological innovations, never seen before, because the inheritance of acquired characteristics is the repetition of something some organisms already had, not the appearance of new traits, capabilities, or biological innovations.

So we have in Ward's book the strange situation of a book that mentions some big bleeding wounds in Darwinism, but then merely offers the tiniest band-aid as a fix, along with baseless boasts that sound like, “This fixes everything.”

A book like this seems like a complexity concealment. Everywhere living things show mountainous levels of organization, information and complexity, but you would never know that from reading Ward's book. He doesn't tell you that cells are so complex they are often compared to factories or cities. He doesn't tell you that in our bodies are more than 20,000 different types of protein molecules, each a different very complex arrangement of matter, each with the information complexity of a 50-line computer program, such proteins requiring a genome with an information complexity of 1000 books. He doesn't mention protein folding, one of the principal reasons why there is no credible explanation for the origin of proteins (the fact that functional proteins require folding, a very hard-to-achieve trick that occurs in fewer than 1 in 10^26 of the possible arrangements of the amino acids that make up a protein). In the index of the book we find no entries for "complexity," "organization," "order," or "information." Judging from its index, the book has only a passing reference to proteins, something that tells you nothing about them or how complex they are. The book doesn't even have an index entry for "cells," merely having an entry for "cellular differentiation." All that Ward tells you about the stratospheric level of order, organization and information in living things comes on page 97, where he says that "there is no really simple life," and that life is "composed of a great number of atoms arranged in intricate ways," a statement so vague and nebulous that it will go in one ear and out the other.

Some arrangements are too complex to have appeared by chance

Wednesday, August 14, 2019

The Baloney in Salon's Story About Memory Infections

The major online site Salon (www.salon.com) has put up a story on human memory with the very goofy-sounding title, “A protein in your brain behaves like a virus, infecting your cells with memories.” The story is speculative nonsense, and it seems to be based on a hogwash misstatement (but not one told by the writer of the story). The story tells us, “Viruses may actually be responsible for the ability to form memories,” a claim that not one in 100 neuroscientists has ever made.

The story mentions some research done by Jason Shepherd, associate professor of neurobiology at the University of Utah Medical School. Shepherd has done some experiments testing whether a protein called Arc (also known as Arc/Arg3.1) is a crucial requirement for memory formation in mice. He does this by removing the gene for Arc in some mice, which are called “Arc knockout” mice, meaning that they don't have the gene for Arc, and therefore cannot make the Arc protein that the gene codes for.

In the Salon.com story, we read the following claim by Shepherd: “If you take the Arc gene out of mice,” Shepherd explains, “they don't remember anything.” Right next to this quote, there's a link to a 2006 scientific paper by 26 scientists, none of them Shepherd (although strangely the Salon story suggests that the paper is research by Shepherd and his colleagues). Unfortunately, the paper does not at all match the claim Shepherd has made, for the paper demonstrates very substantial memory and learning in exactly these “Arc knockout” mice that were deprived of the Arc gene and the Arc protein.

The Morris water maze test is a test of learning and memory in rodents. In the test a rodent is put at the center of a large circular tub of water, which may or may not have milk powder added to make the water opaque. Near one edge of the tub is a submerged platform that the rodent can jump on to escape the tub. A rodent can find out the location of the platform by exploring around. Once the rodent has used the platform to escape the tub, the rodent can be put back in the center of the tub, to see how well the rodent remembers the location of the little platform previously discovered. 

In the paper by the 26 scientists, 25 normal mice and 25 “Arc knockout” mice were tested on the Morris water maze. The “Arc knockout” mice remembered almost as well as the normal mice. The graphs below show the performance difference between the normal mice and the mutant “Arc knockout” mice, which was only minor. The authors state that in another version of the Morris water maze test of learning and memory, there was no difference between the performance of the “Arc knockout” mice and normal mice. We are told, “No differences were observed in the cue version of the task.”

Unimpressive “water maze” results from the paper by 26 scientists, showing no big effect from knocking out the Arc gene 

What title did the 26 scientists give their paper? They gave it the title, “Arc/Arg3.1 Is Essential for the Consolidation of Synaptic Plasticity and Memories.” But the Morris water maze part of the experiments shows exactly the opposite: that you can get rid of Arc/Arg3.1 in mutant “knockout” mice, and they will still remember well, with good consolidation of memories (the test involved memory over several days, which requires consolidation of learned memories). The 26 authors have given us a very misleading title for their paper, suggesting it showed something, when it showed exactly the opposite.

A 2018 paper by 10 scientists testing exactly the same thing (whether “Arc knockout” mice do worse on the Morris water maze) found the same results as the paper by the 26 authors: that there were only minor differences in the performance of the mutant “Arc knockout mice” when they were tested with the Morris water maze test. In fact that paper specifically states, “Deletion of Arc/Arg3.1 in Adult Mice Impairs Spatial Memory but Not Learning,” which very much contradicts Shepherd's claim that “If you take the Arc gene out of mice, they don’t remember anything.”

Unimpressive “water maze” results from the 2018 paper by 10 scientists, showing no big effect from knocking out the Arc gene

So why did scientist Shepherd make the false claim that “If you take the Arc gene out of mice, they don’t remember anything,” a claim disproved by the paper by the 26 scientists (despite its misleading title contradicting its own findings), and also disproved by the 2018 paper? Shepherd has some explaining to do.

In the paper here, Shepherd and his co-authors state, “The neuronal gene Arc is essential for long-lasting information storage in the mammalian brain,” a claim that is inconsistent with the experimental results just described in two papers, which show long-lasting information retention in mice that had the Arc gene removed. In the paper Shepherd refers in a matter-of-fact manner to “information storage in the mammalian brain,” but indicates he doesn't have any real handle on how such a thing could work, by confessing that “we still lack a detailed molecular and cellular understanding of the processes involved,” which is the kind of thing people say when they don't have a good scientific basis for believing something.

We have seen here two cases of the misuse of the word "essential" in scientific papers.  I suspect that the word is being misused very frequently in biological literature, in cases where scientists say one thing is essential for something else when the second thing can exist quite substantially without the first.  Similarly, a scientific paper found that the phrase "necessary and sufficient" is being massively misused in biology papers that failed to find a "necessary and sufficient" relationship.  To say that one thing is necessary and sufficient for some other thing means that the second thing never occurs without the first, and the first thing (by itself) always produces the second.  An example of the correct use of the phrase is "permanent cessation of blood flow is both necessary and sufficient for death." But it seems biologists have massively misused this phrase, using it for cases in which no such "necessary and sufficient" relationship exists.  A preposterous example is the paper here which tells us in its title that two gut microbes (microorganisms too small to see) are "necessary and sufficient" for cognition in flies -- which is literally the claim that tiny microbes are all that a fly needs to think or understand. 
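Stated formally (my own summary of the standard logical definitions, not wording from the papers), the relationship the phrase asserts is a biconditional:

```latex
% Standard logical reading of "X is necessary and sufficient for Y"
\begin{align*}
\text{necessary: }  & Y \Rightarrow X \quad\text{($Y$ never occurs without $X$)} \\
\text{sufficient: } & X \Rightarrow Y \quad\text{($X$ by itself guarantees $Y$)} \\
\text{necessary and sufficient: } & X \Leftrightarrow Y
\end{align*}
```

By that standard, a paper that merely shows an effect got weaker when a factor was removed has demonstrated neither necessity nor sufficiency.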

Postscript: I have received a reply from Jason Shepherd, who quite rightfully scolds me for misspelling his name in the original version of this post. I apologize for that careless error.  Here is Shepherd's reply, which I am grateful to receive:

"You've conflated two different terms and don't seem to understand what consolidation is: Learning and Memory. Arc KO mice can indeed learn information. Memory is the storage and retention of information that was learned. Arc KO mice have normal short term memory, as in if you test them in the water maze and other memory tasks they do, indeed, show memory. However, if you come back a day later...there is NO retention of that information. There is, in fact, an essential role for this gene in the consolidation of memory. That is, conversion of short to long-term memory."

I do not find this to be a satisfactory explanation at all, and when I wrote the post I was well aware of what consolidation is when referring to memory: a kind of firming up of a newly acquired memory so that it is retained for a period of time such as days. As for Shepherd's claim about Arc KO mice (mice that have had the Arc gene knocked out) that "if you come back a day later...there is NO retention of that information," that is clearly disproved by the two scientific papers that I cited. The water maze test is a test of learning, conducted over five or more days, showing a mouse's improved ability over multiple days to find a hidden platform. The graphs cited above very clearly show the retention and consolidation of learned information over multiple days in Arc knockout mice. The mice got better and better at finding the hidden platform in the water maze, improving their score over several days, which they could only have done if they had retained information learned on previous days. In three of the graphs above, we see steady improvement of Arc knockout mice in the water maze test over a 9-day period. If it were actually true that in Arc KO mice "there is NO retention of that information," we would see a flat horizontal line in the performance graph for the water maze, not a diagonal line showing steady improvement from day to day.

In his reply Shepherd rather seems to insinuate that the water-maze test is merely a test of short-term memory, but it is very well known that the Morris water-maze test is a test of long-term memory and memory consolidation.  For example, here we read that the Morris water maze test "requires the mice to learn to swim from any starting position to the escape platform, thereby acquiring a long-term memory of the platform’s spatial location."  And the link here tells us that the Morris water-maze tests "several stages of learning and memory (e.g., encoding, consolidation and retrieval)."  And the paper here refers to "spatial long-term memory retention in a water maze test."  And the recent Nature paper here says, "We next examined the hippocampus-dependent long-term memory formation in cKO mice by Morris water maze test." Figure 1 of the paper specifically cites a Morris water maze test, speaking just as if it tests what the paper calls "long-term memory consolidation." The first line of the scientific paper here states that the Morris water maze test was established to test things such as "long-term spatial memory." 

Sunday, August 11, 2019

13 Ways Experts Can Evade and Stifle Unwanted Observations

Experts sometimes portray themselves as people who do not hesitate to follow the evidence wherever it points. But in reality many a modern expert shows a strong tendency to evade and stifle evidence that conflicts with his cherished dogmas, his entrenched ideas about the way reality works. There are quite a few ways in which an expert may evade or stifle some observational result that he doesn't want to accept. Let's look at some of these ways.

Way #1: Just “File-Drawer” the Unwanted Result

The term “file drawer effect” refers to the existence of unpublished scientific studies. After running a study getting an undesired result, a scientist may decide to not even write up the study as a scientific paper. It is believed that the file drawers of scientists contain data on many studies that were never written up as papers, and never published in scientific journals.

If a scientist gets a negative result failing to show an effect he was looking for, the scientist may never write up the experiment as a paper, and he may justify this by saying that it would be hard to get the paper published, because scientific journals don't like to publish negative results. Or, if the results seem to conflict with existing dogmas, the scientist may think to himself that it would be hard to get a paper with such a result published, given existing prejudices. A study making surveys of scientists found that 63% of US psychologists surveyed and 63% of evolution researchers surveyed confessed to "not reporting studies or variables that failed to reach statistical significance (e.g. p ≤0.05) or some other desired statistical threshold." This is quite the intellectual sin, because it is just as important to report negative results as it is to report positive results.

Way #2: Prune the Data to Remove the Unwanted Result

When a scientist gets an unwanted result in an experiment or a meta-analysis of previously published papers, the scientist might try to get a better result by removing some of the data items collected, or removing some of the scientific papers included in the meta-analysis. If the scientist collected the data himself, he may arbitrarily remove any extreme data item, on the grounds that it was an “outlier.” Or he may apply some filter criteria that get rid of certain data items. For example, if a study with 10,000 subjects is analyzing the safety of a drug, and 100 of them died not long after taking the drug, some of those hundred troubling data points might be removed by introducing inclusion criteria that exclude most of those who died.

Or if a scientist is doing a meta-analysis of 100 studies of the effectiveness of some medical technique not popular with scientists (for example, acupuncture or homeopathy), and the result ends up showing an effectiveness for the technique, this undesired result can be made to go away by adjusting the inclusion criteria used to decide which studies will be included in the meta-analysis. Similarly, data items may be yanked out of a scientific paper, to help the paper achieve something that may be reported as "statistically significant." A study making surveys of scientists found that 38% of US psychologists surveyed and 23.9% of evolution researchers surveyed confessed to "deciding to exclude data points after first checking the impact on statistical significance."
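A small simulation makes the danger concrete. Here is a minimal sketch (my own illustrative numbers, assuming Python with numpy and scipy) in which two groups are drawn from the same distribution, so any "significant" difference is pure noise, and the analyst is allowed to drop each group's lowest and highest values after peeking at the result:

```python
# Sketch: post hoc "outlier" pruning inflates false positives.
# Both groups come from the SAME distribution, so the true effect is zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
trials = 2000
fp_plain, fp_pruned = 0, 0

for _ in range(trials):
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.0, 1.0, 30)
    p_plain = stats.ttest_ind(a, b).pvalue
    # Prune after looking: drop each group's lowest and highest value,
    # then keep whichever analysis gives the smaller p-value.
    p_pruned = stats.ttest_ind(np.sort(a)[1:-1], np.sort(b)[1:-1]).pvalue
    fp_plain += p_plain < 0.05
    fp_pruned += min(p_plain, p_pruned) < 0.05

print(f"false-positive rate without pruning: {fp_plain / trials:.3f}")   # about 0.05
print(f"false-positive rate with pruning:    {fp_pruned / trials:.3f}")  # above 0.05
```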

Way #3: Keep on Collecting More Data Until You Get a Result Less Unwanted

If a scientist collects a certain amount of observational data, and he is left with an unwanted result, another technique he can use to get a more desirable result is to continue collecting more data until a more desirable result is achieved. For example, let us imagine a scientist tries to do a study showing that lots of television watching increases your chance of sudden cardiac death. Suppose the scientist gets data on 10,000 years of living of 1000 subjects, but does not find the effect he is looking for. The scientist can just keep collecting more data, such as trying to get data on 20,000 years of living of 2000 subjects. As soon as the desired result appears, the scientist can then stop collecting data, and write up his paper for publication. It is easy to see how such a technique can distort reality. A study making surveys of scientists found that 55.9% of US psychologists surveyed and 50.7% of evolution researchers surveyed confessed to "collecting more data after inspecting whether the results are statistically significant."
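This "peek and continue" strategy, known in the statistics literature as optional stopping, is easy to simulate. In the sketch below (illustrative numbers, again assuming Python with numpy and scipy), the data are pure noise, yet testing after every batch and stopping at the first "significant" result produces far more than the nominal 5% of false positives:

```python
# Sketch: optional stopping ("collect until significant") on pure-noise data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials, batch_size, max_batches = 2000, 20, 10
false_positives = 0

for _ in range(trials):
    data = np.empty(0)
    for _ in range(max_batches):
        data = np.concatenate([data, rng.normal(0.0, 1.0, batch_size)])
        # Peek after every batch; the true mean really is zero.
        if stats.ttest_1samp(data, 0.0).pvalue < 0.05:
            false_positives += 1   # declared "significant" and stopped
            break

print(f"false-positive rate: {false_positives / trials:.3f}")  # well above 0.05
```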

Way #4: Just Switch the Hypothesis If the Original Hypothesis Is Not Supported by the Experiment

A very common technique is what is called HARKing, which stands for Hypothesizing After Results are Known. A scientific experiment is supposed to state a hypothesis, gather data, and then analyze whether the data supports the hypothesis. But imagine you are some scientist gathering data that does not support the original hypothesis. Rather than writing this up as a result against the original hypothesis, you can come up with some other hypothesis after the data has been collected, and then write up the scientific experiment describing it as a study that confirms the new hypothesis. You might avoid even mentioning that the experiment had failed to confirm the original hypothesis. One problem with this is that it has the effect of stifling or covering up a negative result which should have been the main news item coming from the scientific paper. The bad effects of Hypothesizing After Results Are Known are described here.

There is a general technique that can be used to prevent or discourage the four effects just described. The technique is for institutions to arrange for pre-registered studies with guaranteed publication. Under such a method, a scientist has to publish in writing (before collecting data) the hypothesis that he is testing, and specify in detail the exact experimental method that will be used, including data inclusion criteria. Then regardless of the result achieved, the result must be written up and published in a journal, in a way true to the original experimental design specification. It is widely recognized that such pre-registration and guaranteed publication would result in more reliable and robust scientific papers, with a reduction in misleading publication bias and file-drawer effects. But the scientific community has made almost no effort to use such a methodology.

A study making surveys of scientists found that 27% of US psychologists surveyed and 54% of evolution researchers surveyed confessed to "reporting an unexpected finding as having been predicted from the start." Should this cause us to suspect that evolution researchers are more prone to lie than psychology researchers?

Way #5: Mask the Undesired Result with a Paper Title Not Mentioning It

Scientists write the titles of their own papers, and such titles can often mask or hide undesired results. An example was a scientific paper describing an attempt to look for long-lived proteins in the synapses of the brain, proteins that might help to explain human memories that can last for 50 years. As discussed here, the paper found no real evidence for any such thing.

The paper found the following:

  • Examining thousands of brain proteins, the study found that virtually all proteins in brains are very short-lived, with half-lives of less than a week.
  • Table 2 of the paper gives specific half-life estimates for the most long-lasting brain proteins, and in this table only 10 out of thousands of brain proteins had half-lives of 10 days or longer.
  • Of the proteins whose half-life is estimated in Table 2, only one of them has a half-life of longer than 30 days, that protein having a half-life of only 32 days.
  • A graph in the paper indicates that none of the synapse proteins had a half-life of more than 35 days.
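Those numbers matter because memories can last for decades. A quick calculation (my own arithmetic, using the paper's longest reported half-life) shows why month-scale protein turnover cannot by itself store a 50-year memory:

```python
# Fraction of a protein population surviving t days, given a 32-day half-life
# (the longest half-life reported in the paper's Table 2): 0.5 ** (t / 32).
half_life_days = 32
years = 50
t_days = years * 365.25
surviving = 0.5 ** (t_days / half_life_days)
print(f"fraction left after {years} years: {surviving:.1e}")  # about 2e-172
```

In other words, essentially every such molecule is replaced many times over within a single year, let alone fifty.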

So what was the title of this paper finding the undesired result of no evidence for synapse proteins with a lifetime of years? It was this: “Identification of long-lived synaptic proteins by proteomic analysis of synaptosome protein turnover.” The paper thereby masked and stifled the actual observational result, that no truly long-lived synaptic proteins were found.

Way #6: Get Your Study's Press Release to Mask the Undesired Result with a Headline Not Mentioning It

How a scientific paper is reported is determined not just by the title used for the paper but by the headline used in the press release announcing the paper. Such press releases often mask or hide undesired observational results. A good example is the press release for the scientific paper just discussed, which had the very misleading headline, “In Mice, Long-Lasting Brain Proteins Offer Clues to How Memories Last a Lifetime.” This headline masked and stifled the actual observational result, that no evidence of any synaptic proteins lasting years was found.

A scientist may claim that he has nothing to do with the headline used by the university press office when it issues a press release describing the scientist's study. But I suspect the typical situation is that such a press release is sent to the scientist before it is released, with the scientist having the opportunity to object if anything in the press release is inaccurate or misleading.

Way #7: Bury the Undesired Result Outside of the Abstract, Placing It Within a Sea of Details

If a scientist gets an undesired result, one way to kind of stifle it is to not state the result in the abstract of the paper, but to place it somewhere deep inside the paper, surrounded by paragraphs and paragraphs of fine details. Only the most diligent readers of the paper will notice the result. Since most scientific papers are hidden behind paywalls, with only the paper abstract being easily discovered through a web search, the result will be that very few people will ever read about the undesired result.

Way #8: State the Undesired Result Using Jargon That Almost No One Can Understand

Yet another way for a scientist to stifle an undesired result is to state the result using some fancy jargon or mathematical expression that almost no one will be able to understand. Such a thing is easy to do, just by using “science speak” that is like some foreign language to the average reader. For example, if the paper has found that some popular pill increases the chance of people dying, such a result can be announced by saying that the pill "produces a statistically significant prognostic detriment," a phrase a reader will be unlikely to understand. 

Way #9: Attack Someone's Methodology Producing an Undesired Result

The previous ways involved how a scientist may stifle or evade a result that came up in his own activity. But there are also ways to stifle or evade results produced by other people. Let's look at some of those.

One way is to attack the methodology of a study that produced a result you don't want to accept. For example, if an ESP experimenter got a result indicating ESP exists, you can complain about a lack of screens between the test subject and the observer. If such screens are used, you can complain that the observer and the experimenter were not in separate rooms. If they were in separate rooms, you can complain that a bit of noise might have traveled between the rooms. If the rooms were far-separated, you can complain that maybe the test subject could have peeked through a keyhole or a transom, or left a secret video camera somewhere.

Although there is nothing wrong per se in attacking the methodology of an experiment, given that many scientific studies have a very poor methodology, it should be noted that methodological criticisms often involve hypocrisy. Scientist X will often complain about a lack of some thing in Scientist Y's work, when Scientist X does not include that thing in his own experiments.

Way #10: Try to Undermine Confidence in Those Reporting the Unwanted Observations

An expert may use several different techniques to undermine confidence in an observer who has reported an unwanted observation. One commonly used technique is what is called gaslighting. It involves making one or more insinuations that the observer is suffering from some psychological problem, credibility problem or observational problem. 



Way #11: Speak As If the Observations Were Not Made

One of the most common techniques an expert may use to stifle an unwanted observational result is the technique of simply speaking as if the observations were never made, without trying to discredit or even mention the undesired observations. For example, it is not uncommon for neuroscientists to say that there is no evidence that consciousness can continue after signs of brain activity have ceased, despite very strong evidence that exactly such a thing happens during many near-death experiences. To give another example, scientists often simply claim there is no evidence for paranormal phenomena, despite the very strong laboratory evidence for ESP that has been collected over many years.


Way #12: "Speculate Away" the Undesired Observations


Faced with an undesired result, a scientist will often resort to elaborate speculations designed to explain away the result.  Here are some examples: 

(1) Faced with nothing but negative results from decades of searches for extraterrestrial radio signals,  scientists have resorted to speculations (such as the zoo hypothesis) designed to allow them to continue to believe that our galaxy is filled with intelligent life. 
(2) Faced with observational results suggesting that scientists don't well understand the composition of the universe or the dynamics of stellar movements, scientists invented the speculations of dark energy and dark matter,  rather than confess their ignorance about the composition of the universe  and the dynamics of stellar movements. 
(3) Faced with evidence that the proteins in synapses are too short-lived for synapses to be a storage place for memories, scientists created very elaborate chemical speculations designed to explain away this result.
(4) Faced with undesired evidence from near-death experiences that human consciousness can keep operating after brains have shut down, scientists created various elaborate speculations designed to explain away such evidence, including speculations of secret caches of hallucinogens in the brain that have not yet been discovered. 

Way #13: Just Avoid Mentioning the Undesired Observations, and Make No Mention of Them in Textbooks or Authoritative Subject Reviews

The final way is the simplest way for experts to stifle unwanted observations. They can simply avoid mentioning the observational results in places where they should be discussed, places like textbooks and review articles written by scientists.  This way is used so massively that certain science information sources (such as the journal Nature and typical textbooks) effectively act like "filter bubbles," preventing their readers from learning about many important facts that may conflict with their worldviews.