Scientists have long put certain other scientists on pedestals. Such idolization can be bad for science. Centuries ago, the progress of science was greatly harmed by the fact that people put Aristotle on a pedestal, thinking that his conclusions (largely wrong) were well-founded scientific findings. For a long time, few people wanted to do work that might challenge the conclusions of Aristotle, because it would have seemed almost like heresy to say, “Aristotle was wrong.” Much the same situation has often existed in regard to other scientists who were put on pedestals. When scientists become fanboys of other scientists, erroneous beliefs can become entrenched dogmas.
Scientists exert peer pressure on their fellow scientists in a number of ways. The most common type of peer pressure involves getting scientists to conform to some belief "party line" popular among scientists. There are numerous ways to exert such pressure, such as the peer-review process, by which contrarian or unorthodox opinions or inconvenient research results can be excluded.
At the Neuroskeptic blog there was a very disturbing anonymous quote from an “early career researcher” or ECR. The scientist is quoted as stating this: “I have been constantly harassed by superiors to modify data in order to provide better agreement with known experimental values in order to make the paper look better for publishing at prestigious journals.” What is particularly shocking is that it is not a case of a single apprentice researcher admitting to doing such an unethical thing, but a claim that he or she is “constantly harassed by superiors” to do such an unethical thing.
Why is such an accusation not terribly surprising? It has been clear for a long time that early-career scientists are told that there is a particular path to success that they must follow to be appointed as professors. The path to success is to publish one or more papers in one of the top science journals such as Cell or Nature or Science. But your chance of getting a paper published in such journals is not very high if your research suggests something that conflicts with existing dogmas and prevailing theories. Also, it is well known that science journals have a strong bias against publishing negative results, such as when a researcher looks for some effect but does not find it in his experimental results. So we can understand why researchers might be pressured to “modify data in order to provide better agreement with known experimental values.”
An online article tells us that scientists are being pressured by their peers to produce "sexier" results:
A group of junior researchers at the University of Cambridge have established a campaign to reduce the pressure faced by scientists to produce “sexier” results, which they say can lead to inaccurate research work.... Founder of the ‘Bullied into Bad Science’ campaign Corina Logan, a Leverhulme early career zoology research fellow at Cambridge, told The Times that junior researchers faced mounting pressure from senior academics to produce exciting results.... Speaking to Varsity, Logan explained that she had started the movement “in response to the feedback I received after giving talks on how we researchers exploit ourselves and discriminate against others through our publishing choices.
“Early career researchers often come up to me after these talks to say they would like to publish ethically but feel like they can’t because their supervisor won’t let them or they are reluctant to because they have heard that they need to publish in particular journals to be able to get jobs and grants.”
What are these "sexier" and "exciting" results that scientists are being pressured into producing? They are sometimes dubious studies that may seem to back up the cherished dogmas of scientists, by resorting to one of the many "building blocks of bad science literature" that I specified in this post. The dubious methodologies of such studies are often hard to discover, because the studies are hidden behind paywalls. But the "sexier" and "exciting" results are splattered all over the news media, thanks to science journalists who often act like credulous parrots by engaging in pom-pom journalism. There are great monetary incentives for such "echo chamber" effects, because the more sensational-sounding a science news headline, the more advertising revenue it will generate. The greater the web clicks on a page announcing some hyped-up result, the greater the ad revenue from online ads.
An online article states the following:
It is difficult to overstate how much power a journal editor now had to shape a scientist’s career and the direction of science itself. “Young people tell me all the time, ‘If I don’t publish in CNS [a common acronym for Cell/Nature/Science, the most prestigious journals in biology], I won’t get a job,’” says Schekman. He compared the pursuit of high-impact publications to an incentive system as rotten as banking bonuses.
According to this post, more than 50 academics “each has a story of being told by senior colleagues that their career would be on the line if they did not keep up a steady flow of eye-catching results in top journals, where their articles cannot be read without an expensive subscription.”
The journals “where their articles cannot be read without an expensive subscription” are commonly called paywall journals. How do such paywalls abet bad science? They make it hard for people to do a quality check on questionable studies after they are published.
For example, in the field of neuroscience there is an extremely widespread problem of studies that have low statistical power because too few research animals are used. The minimum number of animals or subjects per study group for a moderately convincing experimental result has been estimated as being from 15 to 30 or more. But a sizable fraction of all neuroscience studies use far fewer than 15 animals or subjects per study group, often as few as 6 or 8. In such low-power studies, there is a very high chance of false alarms. How can we tell whether a study used a suitable number of animals or subjects per study group? You can't do that by reading the abstract of the scientific study, which typically does not report how many animals or subjects were used per study group. To find that out, you have to read the full paper.
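To make the power problem concrete, below is a minimal simulation sketch showing how often a two-sample t-test detects a genuine moderate effect at various group sizes. The effect size, significance level, and group sizes here are illustrative assumptions of mine, not figures taken from any particular neuroscience study.

```python
# Minimal sketch: estimate statistical power by simulation.
# Assumptions (illustrative, not from any cited study): a true effect of
# 0.5 standard deviations, a two-sided t-test at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_per_group, effect_size=0.5, alpha=0.05, trials=20000):
    """Fraction of simulated experiments in which a two-sample t-test
    detects a true effect of the given size (in standard deviations)."""
    control = rng.normal(0.0, 1.0, size=(trials, n_per_group))
    treated = rng.normal(effect_size, 1.0, size=(trials, n_per_group))
    _, p_values = stats.ttest_ind(control, treated, axis=1)
    return float(np.mean(p_values < alpha))

for n in (6, 8, 15, 30):
    print(f"{n:2d} animals per group -> estimated power ~ {estimated_power(n):.2f}")
```

Running such a sketch shows the estimated detection rate falling sharply as the group size shrinks toward 6 or 8, far below the conventional 80% power target; that is the sense in which tiny study groups leave experiments underpowered and their positive results deserving of extra scrutiny.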
But paywall journals make it very hard to read papers. They thereby abet bad science, by making it hard for people to check the details of scientific studies. If you've done some dubious study with low statistical power and a high risk of a false alarm, a paywall is your best friend, for it will minimize the chance that anyone will discover the sleazy shortcuts you took.
According to a 2015 article, more than half of the world's scientific publishing is controlled by only five corporations. Such corporations add little to the scientific research they publish, and by using paywalls they severely restrict readership of scientific papers. The system is quite unnecessary, because nowadays it is easy to publish scientific research online. But scientists keep complying with the paywall publishers, and even work for free for the publishing giants by doing unpaid peer review of scientific papers. This is rather like someone spending his summers flipping hamburgers for free for one of the fast-food giants. Why haven't scientists rebelled against such a system, in which they act as unpaid laborers to enhance the profits of a handful of corporate giants? Sadly, once an unworthy system is in place (whether a set of customs long followed by scientists or a set of beliefs to which scientists have long clung), our scientists (acting like prisoners of habit) are reluctant to rise up against it and say, "Let's come up with something better."