In the new book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions by Richard Harris, we are told this: “Misleading animal studies have led to billions of dollars worth of wasted effort and dead ends in the search for drugs.” There is a rather acidic visual on the front cover of the book. We see a toe tag, like the kind tied to the toes of corpses in a morgue, hanging from the letter “I” in the title. In the “Name” slot of the toe tag, we see: “Biomedical Research.”
That visual is an exaggeration, since biomedical research isn't dead. But judging from the book, there are serious problems in the field. It seems that a very large fraction of research studies cannot be replicated. The problem was highlighted in a widely cited 2005 paper by John Ioannidis entitled “Why Most Published Research Findings Are False.” A scientist named C. Glenn Begley and his colleagues tried to reproduce 53 published studies described as “ground-breaking.” He asked the scientists who wrote the papers to help, by providing the exact materials used to produce the results. Begley and his colleagues were able to reproduce only 6 of the 53 experiments.
In 2011 Bayer reported similar results. The company tried to reproduce 67 medical studies, and was able to reproduce only about 25 percent of them. On page 14 of the book by Harris, we are told that one expert estimates that 28 billion dollars a year is spent on untrustworthy papers.
Part of the problem is a culture that provides high rewards for splashy results that can be called “ground-breaking,” but which makes it rather hard for a biologist to get a paper published if the paper reports a failure to replicate a previous study. Another part of the problem is insufficient attention to methodology and precise mathematics. One expert quoted on page 172 says that in the current culture of biomedical research, it “pays to be first” but “it doesn't necessarily pay to be right.” The expert laments, “It actually pays to be sloppy and just cut corners and get there first,” noting that this is “really wrong.”
On page 96 we learn about a problem with misidentified cell lines, in which experiments are done assuming a line of cells comes from one type of organism when it actually comes from some other type of organism. We read:
A 2007 study estimated that between 18 and 36 percent of all cell experiments use misidentified cell lines. That adds up to tens of thousands of studies, costing billions of dollars.... Sometimes, even the species isn't correct. Nelson-Rees found a “mongoose” cell line was actually human and determined that two “hamster” cell lines were from marmosets and humans, respectively. “Have the Marx Brothers taken over the cell-culture labs?” Roland Nardone asked in a 2008 paper bemoaning this state of affairs.
On page 203 of the Harris book, an expert laments that “we haven't trained a lot of our biologists to think mathematically or to understand or analyze data.” On the same page we are told that there are no standards for whole genome sequencing or for searching for mutations in genomes, and that when it comes to hunting for mutations, “Nobody does it the same way.”
A ProPublica article is entitled “When Evidence Says No, But Doctors Say Yes.” Apparently there is a problem of some doctors recommending procedures that aren't backed up by evidence. Below is a quote from the article:
In a 2013 study, a dozen doctors from around the country examined all 363 articles published in The New England Journal of Medicine over a decade — 2001 through 2010 — that tested a current clinical practice, from the use of antibiotics to treat people with persistent Lyme disease symptoms (didn’t help) to the use of specialized sponges for preventing infections in patients having colorectal surgery (caused more infections). Their results, published in the Mayo Clinic Proceedings, found 146 studies that proved or strongly suggested that a current standard practice either had no benefit at all or was inferior to the practice it replaced; 138 articles supported the efficacy of an existing practice, and the remaining 79 were deemed inconclusive.
Another huge problem in contemporary medical practice involves doctors who invest in fantastically expensive equipment, and who then give advice that may be biased by their desire to pay off the cost of such a machine (or profit from its use).
It seems that there is a huge amount of unnecessary medical treatment being done. A New Yorker article by a doctor states the following:
In just a single year, the researchers reported, twenty-five to forty-two per cent of Medicare patients received at least one of the twenty-six useless tests and treatments.... The Institute of Medicine issued a report stating that waste accounted for thirty per cent of health-care spending, or some seven hundred and fifty billion dollars a year, which was more than our nation’s entire budget for K-12 education.... Millions of people are receiving drugs that aren’t helping them, operations that aren’t going to make them better, and scans and tests that do nothing beneficial for them, and often cause harm.
If you are asked to take some expensive test or undergo some expensive medical procedure, the following are good questions to ask your doctor:
(1) Is the course of treatment or testing you are recommending considered a standard practice or "best practice" for patients with my set of circumstances?
(2) If you were teaching a room full of medical students, would you recommend this exact treatment or testing for someone with my set of circumstances?
Look for a firm, confident answer of "Yes," rather than a weaker answer such as "Doctors often do this."
In earlier times, people placed complete faith in the words spoken by anyone wearing the black garb of a priest. Today we have somehow been socially conditioned to regard anyone in a white coat as a totally reliable source of information. Perhaps both forms of “color confidence” involve too much uncritical trust.
Postscript: An article in the Guardian states the following:
More than 70% of the researchers who took part in a recent study published in Nature have tried and failed to replicate another scientist’s experiment. Another study found that at least 50% of life science research cannot be replicated.
A Nature article states this: "Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature." This smells like scientists having overconfident faith in their fellow scientists.