Monday, January 9, 2023

Do Scientists Prefer That Science Writers Lack Deep Knowledge of Topics They Write About?

Science journalism is currently in a bad state. Most days when I visit science news sites, I find at least one article by some science journalist making untrue claims. Science journalists routinely write articles with triumphal click-bait headlines that are not justified by anything reported in the article. Science journalists routinely parrot unjustified boastful claims made by scientists. Instead of acting like good journalists who critically analyze the claims of people with a motive for boasting and self-promotion, our science journalists tend to act like credulous cheerleaders.

The result is often what we can call a North Korean style of journalism. Just as North Korean journalists reverently pass on every claim from their government officials, treating such proclamations rather like sacred revelations not to be doubted, our science journalists reverently pass on every claim from professors, treating such proclamations rather like something written on stone tablets by the finger of God. 

Key facts are very often withheld in the stories of science journalists, and misleading language is often used. An example we see again and again is a story reporting that some life-related chemical has been found in space. The key relevant fact in all such stories is: in what abundance was the chemical detected? But the science journalist will typically make no mention of such a key fact. The reason is that the abundance found is typically some negligible amount, maybe something like one molecule per cubic kilometer. There will typically be some phony headline such as "Key Building Block of Life Found in Space." The chemical mentioned will typically not be an actual building block of life.

The visual below shows some of the main sins of today's typical science journalist:

[image: bad science journalism]

There are economic reasons that help explain such poor behavior by science journalists. The science journalist is part of a profit complex that I have called the Academia Cyberspace Profit Complex. Within such a complex, misleading but interesting-sounding or triumphal headlines and science paper titles are strongly encouraged. Web sites post "science news" headlines, and clicking on such a headline leads to a page containing ads. The more people view those pages, the more ad revenue the web site collects, which strongly encourages the writing of stories with misleading but enticing headlines. Scientists are similarly incentivized to write papers with overblown titles: such titles tend to increase the number of citations a paper receives, and citation counts are a performance metric used to judge the success of scientists. The diagram below shows some of the economics going on in this Academia Cyberspace Profit Complex, with profit events depicted in yellow.

[image: Academia Cyberspace Profit Complex]

An author describes the situation like this:

"The rise of mass marketing created the cultural substrate for the so-called post-truth world we live in now. It normalized the application of hyperbole, superlatives, and untestable claims of superiority to the rhetoric of everyday commerce. What started out as merely a way to sell new and improved soap powder and automobiles amounts today to a rhetorical infrastructure of hype that infects every corner of culture: the way people promote their careers, universities their reputations, governments their programs, and scientists the importance of their latest findings. Whether we’re listening to a food corporation claim that its oatmeal will keep your heart healthy or a university press office herald a new study that will upend everything we know, radical skepticism would seem to be the rational stance for information consumers."

Some of the reasons for the poor quality of science journalism are the economic ones hinted at above. Another major reason is that science journalists very often lack deep knowledge of the topics they write about. In the US most science journalists seem to lack any science degree, even a bachelor's degree with a major in one of the sciences. The Global Science Journalism Report for 2021 got responses from more than 600 science journalists, after trying to get answers from about 800 of them. On page 16 we read this:

"Almost all participants who provided information about their training background (n = 617) report having a university degree (96%). Of these, most have a university degree in science (42%), a third have a university degree in journalism (37%) and 17% have a university degree in other areas."

On the same page we are told that only 43% of the science journalist respondents in the US and Canada have any degree in science. We may presume that the actual fraction of US science journalists having a bachelor's degree or higher in science is probably less than 40%, because about 25% of the people sent the survey did not answer, and people lacking science training would probably be among the most likely not to answer a survey that asked about qualifications.
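To make that presumption concrete, here is a rough back-of-the-envelope calculation, a sketch of my own rather than anything from the report. It assumes the 43% US/Canada rate can be applied uniformly, and it brackets the effect of non-response between two extreme cases:

```python
# A back-of-the-envelope bound (my own illustration; the report itself does
# not do this calculation). Assumption: the 43% US/Canada science-degree rate
# is treated as uniform, and non-response is bracketed two ways.
contacted = 800              # journalists the survey tried to reach
responded = 617              # respondents who described their training
science_degree_rate = 0.43   # share of US/Canada respondents with a science degree

# Optimistic case: non-respondents hold science degrees at the same rate.
print(f"If non-respondents match respondents: {science_degree_rate:.0%}")

# Pessimistic case: no non-respondent holds a science degree.
floor = science_degree_rate * responded / contacted
print(f"If no non-respondent holds one: {floor:.0%}")  # prints about 33%
```

On those assumptions the true figure lands somewhere between about 33% and 43%, consistent with the "probably less than 40%" estimate above.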

Some science journalists have a Master's Degree in "Science Communication," which does basically nothing to establish that they are qualified to write about most of the deep topics they cover. For example, Stony Brook University (part of SUNY) offers a Master's Degree in Science Communication that requires only 33 credits (1 to 1.5 years of study). None of the courses are regular science courses.

Do we hear scientists complaining about science journalists' lack of deep knowledge? I have never heard a scientist make such a complaint. We may wonder whether scientists actually prefer that science journalists lack deep knowledge of the topics they write about. Why would they prefer that? The greater a person's knowledge of a topic, the more likely that person is to challenge dubious and unwarranted claims from scientists, and to report skeptically when scientists make boastful announcements.

Imagine you are a neuroscientist who has just produced another one of those poorly designed experimental studies involving rodents, fear and memory. If you have made some triumphal claim such as "New evidence for memory engrams discovered," the last thing you want is for some press release on your study to be written by someone who is very knowledgeable about neuroscience and the many dire problems in current neuroscience research. For then instead of getting the desired press release or news story with a headline like this:

        Scientists Locate Cells Storing Memories

you might instead get a story with a headline like this:    

       Scientists Claim to "See Memories," But Was It a False Alarm?

On page 28 of the Global Science Journalism Report we have a sobering response from the survey of 600+ science journalists. Roughly half of them agree with the statement "Science journalism is primarily 'cut, paste and translate' from US and UK science outlets." So about half of our science journalists apparently think their role is mainly to just parrot whatever claims they receive, rather than critically judging them.

Nowhere in the Global Science Journalism Report do we get any indication that our science journalists regard themselves as having a role of presenting balanced views or critically analyzing the claims made by scientists. It seems our science journalists have been conditioned to act like North Korean journalists, who reverently pass on all claims they receive from government officials. One of the responsibilities of a good journalist is to ask people tough questions. When our scientists are interviewed by science journalists, it seems the science journalists ask nothing but the softest of softball questions. 

The online Quanta Magazine supplies us with endless examples of appalling science journalism, often written by writers who seem to have no deep knowledge of the difficult topics they are discussing. We endlessly read on this site descriptions of junk science studies that are trumpeted as great breakthroughs. The latest example of such journalism is an article entitled "How the Brain Distinguishes Memories From Perceptions." Right after the title we have this subtitle claim: "The neural representations of a perceived image and the memory of it are almost the same." There is actually zero evidence that there exists in the brain any such thing as a neural representation of a perceived image or a neural representation of a memory. The only evidence we have for any type of representation in a brain is the DNA representation of very low-level chemical information such as amino acids, which occurs through triplets of nucleotide base pairs (codons). Such representations are found in pretty much all cells in the body. When they scan brain tissue, scientists never see anything looking like representations of remembered or perceived objects. Any claims to the contrary are examples of pareidolia, rather like people claiming to see the face of Jesus in their toast.

We then have in the Quanta Magazine article the groundless claim that some paper "has identified at least one of the ways in which memories and perceptions of images are assembled differently at the neurological level." Neuroscientists have no understanding at all of how memories could be assembled at the neurological level, and no credible theory of how such a thing could occur. An examination of the paper being touted ("Perception and memory have distinct spatial tuning properties in human visual cortex") quickly reveals some severe cases of Questionable Research Practices. It's a brain scan study that used only 9 subjects, which is way too small for any kind of reliable result. Another paper reports that even study group sizes of 200 are too small to produce a reliable result in brain scanning studies: "These findings suggest that samples consisting of ~200–300 participants have in reality still low power to identify reliable SBB-associations [structural brain behavior associations] among healthy participants." Similarly, another paper was entitled "Reproducible brain-wide association studies require thousands of individuals." So if you need thousands of brain scan subjects for a reliable, reproducible result, and your study used only 9 subjects, how big a shortfall did you commit? A shortfall of hundreds of times. It's like trying to build a house not with 3-meter-long 2-by-4's, but with wood strips the size of match sticks.
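To make the size of that shortfall concrete, here is a minimal power calculation, a sketch of my own using the standard Fisher z approximation for detecting a correlation; the specific numbers are illustrative and do not come from the paper or the articles cited:

```python
# A minimal power sketch (my own illustration, not from the papers cited):
# how many subjects are needed to detect a correlation r with 80% power at
# a two-sided alpha of 0.05, using the standard Fisher z approximation.
from statistics import NormalDist
from math import atanh, tanh, sqrt, ceil

z_alpha = NormalDist().inv_cdf(0.975)  # two-sided alpha = 0.05
z_beta = NormalDist().inv_cdf(0.80)    # 80% power

def n_needed(r):
    """Approximate sample size needed to detect correlation r."""
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2) + 3

def smallest_detectable_r(n):
    """Smallest correlation detectable with 80% power at sample size n."""
    return tanh((z_alpha + z_beta) / sqrt(n - 3))

for r in (0.1, 0.2):
    print(f"r = {r}: about {n_needed(r)} subjects needed")
# r = 0.1: about 783 subjects needed
# r = 0.2: about 194 subjects needed

print(f"With n = 9, only r >= {smallest_detectable_r(9):.2f} is detectable")
# With n = 9, only r >= 0.82 is detectable
```

With only 9 subjects, nothing below a correlation of roughly 0.8 is reliably detectable, and genuine brain-behavior correlations are far smaller than that, which is why the papers quoted above call for hundreds or thousands of subjects.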

Lacking any blinding protocol (very much needed for a study like this), the paper being hailed by Quanta Magazine confesses its own lack of statistical power, saying, "Given the small sample size of both our study and that of Breedlove et al., and the small size of the expected effect, future work should adjudicate between the predictions of these models with a highly powered experiment." That is basically a confession that the study was bungled, combined with a hope that maybe someone in the future will get a similar result after following proper research standards. The paper tells us that it used four different types of datasets it calls "artificial datasets," which is another term for fictional data. If you took a Science Procedures 101 course, one of the first things you might learn is: analyze real data coming from your observations, not fictional data you simply made up out of thin air. Do we hear in the Quanta Magazine story hailing this junk study that only 9 subjects were used? No. It's rather like the operating rule of such stories is "hide the dirty laundry and never list the tiny sample size."

And so it goes, again and again and again, in Quanta Magazine: science journalists enthusiastically hailing poorly designed junk science research that is suitable for little more than wrapping fish and lining the bottoms of bird cages. It's rather like our Quanta Magazine writers never learned a single principle regarding how to distinguish robust science from junk science. Such are the kinds of science writers our universities like to have, particularly when employed as press release writers.

Do college press offices fill job openings with ads like this?

In the recent post here by Scientific American columnist John Horgan and the recent post here by Peter Woit of the widely read Not Even Wrong blog, you can read about Quanta Magazine's ridiculously credulous coverage of a baloney story claiming that physicists had created a spacetime wormhole, one of the staples of science fiction. It seems that someone at that magazine passed on this not-actually-true story (which physics expert Woit calls a mere publicity stunt) as real news; the supposed "wormhole" existed merely in a speculative computer simulation. Similar stuff goes on in their coverage of biology. Hilariously, the Quanta Magazine writers call themselves "the best team in science journalism."
