

Monday, November 30, 2020

Researchers Won't Cooperate With a Survey Asking About Their Questionable Practices

In the journal Science there was recently an article with the headline "Largest ever research integrity survey flounders as universities refuse to cooperate."  It seems that some Dutch scientists created a National Survey on Research Integrity to learn more about the degree of poor practices occurring among science and medical researchers. The Dutch scientists had a very simple plan: just send many thousands of researchers a brief list of questions asking about "Responsible Research Practices" and "Questionable Research Practices." I am unable to find out what the exact questions are, as they are not listed on the survey's website.

You would think that such a survey attempt would not run into any difficulty. But it seems that fewer than 15% of the scientists sent the survey have responded to it. The article tells us that "university presidents argued that a survey would just not be suitable for such a sensitive topic."  Only 5 out of 15 universities agreed to participate in the survey, and only "on the condition that they could have a say on the survey’s setup and content."  This "say" amounted to pressure to include more positive questions in the survey, questions asking about good behavior such as sharing data.  We can imagine a politician making a similar demand, which might go like this:

Journalist: Now, I want to ask about the rumor that you are cheating on your wife. 

Politician: Well, let's make a deal. I'll answer that question, but only if you first ask a question about the time I was nice to a hungry puppy, and another question about the time I gave some food to a homeless person. 

Despite the survey having been made more pleasant with questions about both good behavior and bad behavior, only 13% of those receiving the survey have answered it.  This is even though the survey used a scheme designed to "anonymize" individual responses, so that no one can tell who sent a particular response, and no one will suffer consequences from confessing bad behavior. It seems our scientists don't want to talk about all the poor practices that are going on in scientific research. 

So the response of the typical scientist getting the survey is rather like the response of the politician below:

Journalist: Do you often tell lies in your Senate speeches? 

Politician: No comment. 

Journalist: Do you sometimes take bribes? 

Politician: No comment. 

Journalist: Is it true you are a bigamist? 

Politician: No comment. 

Luckily, we don't really need a survey to discover all the questionable practices that go on in scientific research. We can just do random inspections of scientific papers, using resources such as the physics preprint server arXiv.org and the biology preprint server bioRxiv.org to find such papers.  Here is how some bad practices can be found (two rough code sketches illustrating these checks follow the list):

  • In an experimental paper, do a search for the word "blind" to discover whether a blinding protocol was used. You will find that in very many or most such papers there is no mention of a blinding protocol, meaning there was a fair chance of biased  data gathering or biased data analysis, in which researchers see what they're hoping to see and find what they're hoping to find. 
  • In an experimental paper, do a search for the phrase "sample size calculation" to discover whether the researchers made any attempt to calculate the minimum study group sizes needed to get a robust result. You will usually find that they did not (the second sketch after this list shows what such a calculation looks like). 
  • In an experimental paper, do a search for the phrases "n=" and "n =" to find the study group sizes that were used. You will very often find that study group sizes smaller than 15 were used, meaning that there was a high chance of a false alarm because of a too-small sample size. 
  • Search for the phrase "conflict of interest" to find any conflicts of interest.  Such a search will often show that some of the researchers were employees of a company that stands to profit if the study claimed a positive result, or investors in such a company. 
  • Search for the phrase "registered" to see whether the study was a pre-registered study that declared (before it began) that it would test one particular hypothesis using one particular method.  You will almost always find the study was no such thing. Studies that were not pre-registered are often kind of "fishing expeditions" in which dozens of possible effects or correlations are looked for after the data has been collected; and when such "HARKing" (Hypothesizing After Results are Known) occurs, there may be a large chance of a false alarm.  
  • Analyze the paper to see whether the study's title made some claim that is never established by robust evidence, something that very often occurs.
  • Analyze the paper's abstract to see whether the study's abstract made some claim that is never established by robust evidence, something that very often occurs. A scientific study found that 48% of scientific papers use "spin" in their abstracts. 
  • Whenever you read some dubious claim that has a reference number, as if the claim was proven by some previous paper, track down the corresponding paper to find whether it actually showed or asserted such a claim. You will often find it did not. 
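To make the keyword checks above concrete, here is a minimal Python sketch that scans one paper's plain text for the phrases mentioned in the list. It is only an illustration built on my own assumptions: the file name paper.txt, the check labels, and the exact regular expressions are hypothetical, and a real paper would first have to be converted from PDF to plain text.

    import re

    # Each check maps a label to a regular expression whose presence (or absence)
    # in a paper's text hints at how carefully the study was done.
    CHECKS = {
        "blinding mentioned":      r"\bblind(?:ed|ing)?\b",
        "sample size calculation": r"sample size calculation",
        "reported group sizes":    r"\bn\s*=\s*\d+",
        "conflict of interest":    r"conflict of interest",
        "pre-registration":        r"\b(?:pre-?)?registered\b",
    }

    def scan_paper(text):
        """Return, for each check, the list of matches found in the paper's text."""
        text = text.lower()
        return {name: re.findall(pattern, text) for name, pattern in CHECKS.items()}

    if __name__ == "__main__":
        # "paper.txt" is a hypothetical file holding one paper's plain text,
        # e.g. extracted beforehand from an arXiv or bioRxiv PDF.
        with open("paper.txt", encoding="utf-8") as f:
            report = scan_paper(f.read())
        for name, hits in report.items():
            status = "found %d time(s)" % len(hits) if hits else "NOT FOUND"
            print("%-26s %s" % (name + ":", status))

For the group-size check, a human reader would still look over the reported matches (strings such as "n = 12") to see whether any group falls below the roughly 15-subject threshold discussed above.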
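As for the "sample size calculation" item, the sketch below shows the kind of power analysis such a calculation involves, using the standard normal-approximation formula for comparing two groups, n ≈ 2 * (z_(1-alpha/2) + z_power)^2 / d^2. The effect sizes plugged in and the 80% power target are conventional illustrative choices of mine, not numbers taken from the survey or studies discussed in this post.

    from math import ceil, sqrt
    from statistics import NormalDist

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate subjects needed per group for a two-group comparison,
        using the normal approximation n = 2 * (z_{1-alpha/2} + z_{power})**2 / d**2."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
        z_power = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
        return ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

    def smallest_detectable_effect(n, alpha=0.05, power=0.80):
        """Invert the formula: the smallest effect size d that a study with
        n subjects per group can detect with the given power."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_power = NormalDist().inv_cdf(power)
        return sqrt(2 * (z_alpha + z_power) ** 2 / n)

    if __name__ == "__main__":
        print(n_per_group(0.5))   # a "medium" effect needs roughly 63 subjects per group
        print(n_per_group(0.2))   # a "small" effect needs roughly 393 subjects per group
        print(round(smallest_detectable_effect(15), 2))   # n = 15 only reliably detects d of about 1.0

The last line ties back to the "n=" check: with only about 15 subjects per group, a study has adequate power only for effects roughly a full standard deviation in size, which helps explain why such small samples so often yield unreliable results.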

According to a previous meta-analysis, when asked if they had knowledge of a colleague who fabricated or falsified data, or modified research data, "between 5.2% and 33.3% of respondents replied affirmatively." According to Figure 5 of the same meta-analysis, six different studies found  that about 2% of scientists confess that they themselves fabricated, falsified or altered data.  We have every reason to suspect that the actual percentage of scientists doing such a thing is much higher, simply because the percentage of people who confess to wrongdoing will always be much smaller than the percentage that commit wrongdoing. One paper ("Analysis and Correction of Inappropriate Image Duplication: the Molecular and Cellular Biology Experience") concluded that "as many as 35,000 papers in the literature are candidates for retraction due to inappropriate image duplication."  They found that 6% of the papers "contained inappropriately duplicated images." A study tells us the following about a survey of scientists:

"Up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices."

In The Lancet, the journal's editor Richard Horton wrote the following:

"The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, 'poor methods get results'... In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data. Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours."

[Image: questionable research practices]

We should remember that a good fraction of scientists are scrupulously honest people who carefully follow rigorous standards, but we should also remember that many are no such thing, and that some of the most important claims of modern scientists will not seem very well established once we carefully inspect the questionable research practices and weak arguments used to substantiate such claims.  
