Sunday, August 11, 2019

13 Ways Experts Can Evade and Stifle Unwanted Observations

Experts sometimes portray themselves as people who do not hesitate to follow the evidence wherever it points. But in reality many a modern expert shows a strong tendency to evade and stifle evidence that conflicts with his cherished dogmas, his entrenched ideas about the way reality works. There are quite a few ways in which an expert may evade or stifle some observational result that he doesn't want to accept. Let's look at some of these ways.

Way #1: Just “File-Drawer” the Unwanted Result

The term “file drawer effect” refers to the existence of unpublished scientific studies. After running a study that yields an undesired result, a scientist may decide not to write up the study as a scientific paper at all. It is believed that the file drawers of scientists contain data on many studies that were never written up as papers, and never published in scientific journals.

If a scientist gets a negative result failing to show an effect he was looking for, the scientist may never write up the experiment as a paper, and he may justify this by saying that it would be hard to get the paper published, because scientific journals don't like to publish negative results. Or, if the results seem to conflict with existing dogmas, the scientist may think to himself that it would be hard to get a paper with such a result published, given existing prejudices. A study surveying scientists found that 63% of US psychologists surveyed and 63% of evolution researchers surveyed confessed to "not reporting studies or variables that failed to reach statistical significance (e.g. p ≤ 0.05) or some other desired statistical threshold." This is quite the intellectual sin, because it is just as important to report negative results as it is to report positive results.
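
It is easy to demonstrate how badly this practice can distort the published record. Below is a rough Python sketch (using simulated data, not data from any real study) in which the true effect is zero; yet because only the studies that happen to reach statistical significance get "published," the published literature ends up reporting a substantial effect.

```python
# A rough sketch of the file-drawer effect, using simulated data in which
# the true effect is ZERO: both groups are drawn from the same population.
# Only studies that happen to reach p < 0.05 get "published."
import math
import random
import statistics

random.seed(1)

def one_study(n=30):
    """Compare two groups drawn from the SAME population (no real effect)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    return diff, abs(diff / se) > 1.96   # roughly a two-sided p < 0.05 test

results = [one_study() for _ in range(2000)]
published = [diff for diff, significant in results if significant]
filed_away = len(results) - len(published)
print(f"Studies run: {len(results)}; left in the file drawer: {filed_away}")
print(f"Average effect size in 'published' studies: "
      f"{statistics.mean(abs(d) for d in published):.2f} (true effect: 0)")
```

Every one of the "published" effects in this simulation is an illusion produced by chance.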

Way #2: Prune the Data to Remove the Unwanted Result

When a scientist gets an unwanted result in an experiment or a meta-analysis of previously published papers, the scientist might try to get a better result by removing some of the data items collected, or removing some of the scientific papers included in the meta-analysis. If the scientist collected the data himself, he may arbitrarily remove any extreme data item, on the grounds that it was an “outlier.” Or he may apply filter criteria that get rid of certain data items. For example, if a study with 10,000 subjects is analyzing the safety of a drug, and 100 of them died not long after taking the drug, some of those hundred troubling data points might be removed by introducing inclusion criteria that exclude most of those who died.

Or if a scientist is doing a meta-analysis of 100 studies of the effectiveness of some medical technique not popular with scientists (for example, acupuncture or homeopathy), and the result ends up showing that the technique is effective, this undesired result can be made to vanish by adjusting the inclusion criteria used to decide which studies will be included in the meta-analysis. Similarly, data items may be yanked out of a scientific paper, to help the paper achieve something that may be reported as "statistically significant." A study surveying scientists found that 38% of US psychologists surveyed and 23.9% of evolution researchers surveyed confessed to "deciding to exclude data points after first checking the impact on statistical significance."
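
To see how much such pruning can accomplish, consider the rough Python sketch below (again simulated data with no real effect): whenever a comparison misses statistical significance, the single least helpful data point is discarded as an "outlier" and the test is re-run.

```python
# A rough sketch of data pruning under simulated data with NO real effect:
# if the test misses significance, drop the least helpful point as an
# "outlier" and try again, up to three times.
import math
import random
import statistics

random.seed(2)

def z_stat(a, b):
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return diff / se

def pruned_study(n=30, max_drops=3):
    a = [random.gauss(0, 1) for _ in range(n)]   # "treatment" group
    b = [random.gauss(0, 1) for _ in range(n)]   # "control" -- same population
    if z_stat(a, b) > 1.645:                     # one-sided p < 0.05
        return True
    for _ in range(max_drops):
        a.remove(min(a))                         # discard an inconvenient point
        if z_stat(a, b) > 1.645:
            return True
    return False

hits = sum(pruned_study() for _ in range(2000))
print(f"False-positive rate with pruning: {hits / 2000:.1%} (nominal rate: 5%)")
```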

Way #3: Keep on Collecting More Data Until You Get a Result Less Unwanted

If a scientist collects a certain amount of observational data, and he is left with an unwanted result, another technique he can use to get a more desirable result is to continue collecting more data until a more desirable result is achieved. For example, let us imagine a scientist who tries to do a study showing that lots of television watching increases your chance of sudden cardiac death. Suppose the scientist gets data on 10,000 years of living of 1000 subjects, but does not find the effect he is looking for. The scientist can just keep collecting more data, such as trying to get data on 20,000 years of living of 2000 subjects. As soon as the desired result appears, the scientist can then stop collecting data, and write up his paper for publication. It is easy to see how such a technique can distort reality. A study surveying scientists found that 55.9% of US psychologists surveyed and 50.7% of evolution researchers surveyed confessed to "collecting more data after inspecting whether the results are statistically significant."
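
The distortion is easy to quantify. In the rough Python sketch below (simulated data, no real effect), the experimenter checks for significance after every new batch of subjects and stops as soon as the result looks "significant"; merely peeking repeatedly inflates the false-positive rate far above the advertised 5%.

```python
# A rough sketch of "collect more data until significant," using simulated
# data with NO real effect: peek at the result after every batch of new
# subjects and stop as soon as it crosses the significance line.
import math
import random
import statistics

random.seed(3)

def peeking_study(batch=20, max_batches=10):
    a, b = [], []
    for _ in range(max_batches):
        a += [random.gauss(0, 1) for _ in range(batch)]
        b += [random.gauss(0, 1) for _ in range(batch)]
        diff = statistics.mean(a) - statistics.mean(b)
        se = math.sqrt(statistics.variance(a) / len(a) +
                       statistics.variance(b) / len(b))
        if abs(diff / se) > 1.96:    # looks "significant" -- stop and publish
            return True
    return False                     # never significant -- file-drawer it

hits = sum(peeking_study() for _ in range(2000))
print(f"False-positive rate with peeking: {hits / 2000:.1%} (nominal rate: 5%)")
```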

Way #4: Just Switch the Hypothesis If the Original Hypothesis Is Not Supported by the Experiment

A very common technique is what is called HARKing, which stands for Hypothesizing After Results are Known. A scientific experiment is supposed to state a hypothesis, gather data, and then analyze whether the data supports the hypothesis. But imagine you are a scientist who has gathered data that does not support the original hypothesis. Rather than writing this up as a result against the original hypothesis, you can come up with some other hypothesis after the data has been collected, and then write up the scientific experiment describing it as a study that confirms the new hypothesis. You might avoid even mentioning that the experiment had failed to confirm the original hypothesis. One problem with this is that it stifles or covers up a negative result which should have been the main news item coming from the scientific paper. The bad effects of Hypothesizing After Results Are Known are described here.
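
The statistical cost of HARKing is easy to illustrate. In the rough Python sketch below (simulated data in which none of the measured outcomes has any real effect), a study measures twenty unrelated outcome variables; whichever one happens to cross the significance line is then presented as the hypothesis "all along."

```python
# A rough sketch of HARKing, using simulated data in which NONE of the
# twenty measured outcomes has any real effect: whichever outcome happens
# to reach p < 0.05 gets retroactively promoted to "the hypothesis."
import math
import random
import statistics

random.seed(4)

def harked_study(n=30, n_outcomes=20):
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        diff = statistics.mean(a) - statistics.mean(b)
        se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
        if abs(diff / se) > 1.96:
            return True    # this outcome becomes the "predicted" finding
    return False

hits = sum(harked_study() for _ in range(1000))
print(f"Chance of a 'confirmed prediction' under the null: {hits / 1000:.0%}")
# Expected value: about 1 - 0.95**20, i.e. roughly 64%, not 5%.
```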

There is a general technique that can be used to prevent or discourage the four effects just described. The technique is for institutions to arrange for pre-registered studies with guaranteed publication. Under such a method, a scientist has to publish in writing (before collecting data) the hypothesis that he is testing, and specify in detail the exact experimental method that will be used, including data inclusion criteria. Then regardless of the result achieved, the result must be written up and published in a journal, in a way true to the original experimental design specification. It is widely recognized that such pre-registration and guaranteed publication would result in more reliable and robust scientific papers, with a reduction in misleading publication bias and file-drawer effects. But the scientific community has made almost no effort to use such a methodology.

A study surveying scientists found that 27% of US psychologists surveyed and 54% of evolution researchers surveyed confessed to "reporting an unexpected finding as having been predicted from the start." Should this cause us to suspect that evolution researchers are more prone to lie than psychology researchers?

Way #5: Mask the Undesired Result with a Paper Title Not Mentioning It

Scientists write the titles of their own papers, and such titles can often mask or hide undesired results. An example was a scientific paper describing an attempt to look for long-lived proteins in the synapses of the brain, proteins that might help to explain human memories that can last for 50 years. As discussed here, the paper found no real evidence for any such thing.

The paper found the following:

  • Examining thousands of brain proteins, the study found that virtually all proteins in the brain are very short-lived, with half-lives of less than a week.
  • Table 2 of the paper gives specific half-life estimates for the most long-lasting brain proteins, and in this table only 10 out of thousands of brain proteins had half-lives of 10 days or longer.
  • Of the proteins whose half-life is estimated in Table 2, only one has a half-life longer than 30 days, that protein having a half-life of only 32 days.
  • A graph in the paper indicates that none of the synapse proteins had a half-life of more than 35 days.

So what was the title of this paper finding the undesired result of no evidence for synapse proteins with a lifetime of years? It was this: “Identification of long-lived synaptic proteins by proteomic analysis of synaptosome protein turnover.” The paper thereby masked and stifled the actual observational result, that no truly long-lived synaptic proteins were found.
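
A little arithmetic using the paper's own numbers shows why this result is so damaging to the idea that synaptic proteins store decades-old memories. The quick Python calculation below takes the longest half-life reported in Table 2 (32 days) and asks what fraction of the original protein molecules would remain after 50 years:

```python
# Back-of-the-envelope calculation using the paper's own numbers: with a
# half-life of 32 days (the longest reported in Table 2), what fraction of
# the original protein molecules would survive a 50-year-old memory?
half_life_days = 32
years = 50

half_lives_elapsed = years * 365.25 / half_life_days   # about 570 half-lives
surviving_fraction = 0.5 ** half_lives_elapsed

print(f"Half-lives elapsed in {years} years: {half_lives_elapsed:.0f}")
print(f"Surviving fraction: {surviving_fraction:.2e}")  # about 1e-172
```

In other words, essentially none of the protein molecules present when a 50-year-old memory formed would still exist today.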

Way #6: Get Your Study's Press Release to Mask the Undesired Result with a Headline Not Mentioning It

How a scientific paper is reported is determined not just by the title used for the paper but also by the headline used in the press release announcing the paper. Such press releases often mask or hide undesired observational results. A good example is the press release for the scientific paper just discussed, which carried the very misleading headline, “In Mice, Long-Lasting Brain Proteins Offer Clues to How Memories Last a Lifetime.” This headline masked and stifled the actual observational result, that no evidence of any synaptic proteins lasting years was found.

A scientist may claim that he has nothing to do with the headline used by the university press office when it issues a press release describing the scientist's study. But I suspect the typical situation is that such a press release is sent to the scientist before it is released, with the scientist having the opportunity to object if anything in the press release is inaccurate or misleading.

Way #7: Bury the Undesired Result Outside of the Abstract, Placing It Within a Sea of Details

If a scientist gets an undesired result, one way to effectively stifle it is to not state the result in the abstract of the paper, but to place it somewhere deep inside the paper, surrounded by paragraph after paragraph of fine details. Only the most diligent readers of the paper will notice the result. Since most scientific papers are hidden behind paywalls, with only the paper abstract being easily discovered through a web search, the result will be that very few people will ever read about the undesired result.

Way #8: State the Undesired Result Using Jargon That Almost No One Can Understand

Yet another way for a scientist to stifle an undesired result is to state the result using some fancy jargon or mathematical expression that almost no one will be able to understand. Such a thing is easy to do, just by using “science speak” that is like some foreign language to the average reader. For example, if the paper has found that some popular pill increases the chance of people dying, such a result can be announced by saying that the pill "produces a statistically significant prognostic detriment," a phrase the average reader is unlikely to understand.

Way #9: Attack the Methodology That Produced an Undesired Result

The previous ways involved how a scientist may stifle or evade a result that came up in his own activity. But there are also ways to stifle or evade results produced by other people. Let's look at some of those.

One way is to attack the methodology of a study that produced a result you don't want to accept. For example, if an ESP experimenter got a result indicating ESP exists, you can complain about a lack of screens between the test subject and the observer. If such screens are used, you can complain that the observer and the experimenter were not in separate rooms. If they were in separate rooms, you can complain that a bit of noise might have traveled between the rooms. If the rooms were far apart, you can complain that maybe the test subject could have peeked through a keyhole or a transom, or left a secret video camera somewhere.

Although there is nothing wrong per se in attacking the methodology of an experiment, given that many scientific studies have a very poor methodology, it should be noted that methodological criticisms often involve hypocrisy. Scientist X will often complain about a lack of some thing in Scientist Y's work, when Scientist X does not include that thing in his own experiments.

Way #10: Try to Undermine Confidence in Those Reporting the Unwanted Observations

An expert may use several different techniques to undermine confidence in an observer who has reported an unwanted observation. One commonly used technique is what is called gaslighting. It involves insinuating that the observer suffers from some psychological problem, credibility problem or observational problem.


Another commonly used technique is shaming: subjecting the observer to ridicule or scorn for having reported the unwanted observation.

Way #11: Speak As If the Observations Were Not Made

One of the most common techniques an expert may use to stifle an unwanted observational result is the technique of simply speaking as if certain observations were never made, without trying to discredit or even mention the undesired observations. For example, it is not uncommon for neuroscientists to say that there is no evidence that consciousness can continue after signs of brain activity have ceased, despite very strong evidence that exactly such a thing happens during many near-death experiences. To give another example, scientists often simply claim there is no evidence for paranormal phenomena, despite the very strong laboratory evidence for ESP that has been collected over many years.


Way #12: "Speculate Away" the Undesired Observations


Faced with an undesired result, a scientist will often resort to elaborate speculations designed to explain away the result.  Here are some examples: 

(1) Faced with nothing but negative results from decades of searches for extraterrestrial radio signals,  scientists have resorted to speculations (such as the zoo hypothesis) designed to allow them to continue to believe that our galaxy is filled with intelligent life. 
(2) Faced with observational results suggesting that scientists do not understand the composition of the universe or the dynamics of stellar movements, scientists invented the speculations of dark energy and dark matter, rather than confess their ignorance on these matters.
(3) Faced with evidence that the proteins in synapses are too short-lived for synapses to be a storage place for memories, scientists created very elaborate chemical speculations designed to explain away this result.
(4) Faced with undesired evidence from near-death experiences that human consciousness can keep operating after brains have shut down, scientists created various elaborate speculations designed to explain away such evidence, including speculations of secret caches of hallucinogens in the brain that have not yet been discovered. 

Way #13: Just Avoid Mentioning the Undesired Observations, and Make No Mention of Them in Textbooks or Authoritative Subject Reviews

The final way is the simplest way for experts to stifle unwanted observations. They can simply avoid mentioning the observational results in places where they should be discussed, places like textbooks and review articles written by scientists.  This way is used so massively that certain science information sources (such as the journal Nature and typical textbooks) effectively act like "filter bubbles," preventing their readers from learning about many important facts that may conflict with their worldviews. 
