
Our future, our universe, and other weighty topics


Friday, August 22, 2025

Scientist Flubs and Flops #11



[Cartoon images: scientist refusing to study evidence; gigantically complex systems in the human body; too-small sample sizes in neuroscience; science silos; scientific fraud; skeptic's vow; dysfunction in science academia; science clickbait; science news clickbait; citation-seeking scientist; dysfunction in science journalism; dysfunctional science news]

[Video: hype and error in science news]

  • "However widespread is the acceptance among cognitive neuroscientists of this second part of the ontological postulate -- the mind is an emergent factor from the interactions among the vast number of neurons that make up the brain -- it must be reiterated that there is no proof of it, and it has to be considered as an unprovable assumption rather than a provable fact."-- psychology professor emeritus William R. Uttal, 2011 (link).
  • "Neuroscience, as it is practiced today, is a pseudoscience, largely because it relies on post hoc correlation-fishing....As previously detailed, practitioners simply record some neural activity within a particular time frame; describe some events going on in the lab during the same time frame; then fish around for correlations between the events and the 'data' collected. Correlations, of course, will always be found. Even if, instead of neural recordings and 'stimuli' or 'tasks' we simply used two sets of random numbers, we would find correlations, simply due to chance. What’s more, the bigger the dataset, the more chance correlations we’ll turn out (Calude & Longo (2016)). So this type of exercise will always yield 'results;' and since all we’re called on to do is count and correlate, there’s no way we can fail. Maybe some of our correlations are 'true,' i.e. represent reliable associations; but we have no way of knowing; and in the case of complex systems, it’s extremely unlikely. It’s akin to flipping a coin a number of times, recording the results, and making fancy algorithms linking e.g. the third throw with the sixth, and hundredth, or describing some involved pattern between odd and even throws, etc. The possible constructs, or 'models' we could concoct are endless. But if you repeat the flips, your results will certainly be different, and your algorithms invalid...As Konrad Kording has admitted, practitioners get around the non-replication problem simply by avoiding doing replications.” -- A vision scientist (link). 
  • "Scientists need citations for their papers....If the content of your paper is a dull, solid investigation and your title announces this heavy reading, it is clear you will not reach your citation target, as your department head will tell you in your evaluation interview. So to survive – and to impress editors and reviewers of high-impact journals,  you will have to hype up your title. And embellish your abstract. And perhaps deliberately confuse the reader about the content." -- Physicist Ad Lagendijk, "Survival Blog for Scientists."  
  • "Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered too strong for their strength of causal inference....Fifty-eight percent of media articles were found to have inaccurately reported the question, results, intervention, or population of the academic study....Among the 128 assessed articles assessed, 107 (84 %) had at least one example of spin in their abstract. The most prevalent strategy of spin was the use of causal language, identified in 68 (53 %) abstracts."" -- Statement by scientists in a scientific paper. 
  • "This system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce 'better' results, and sometimes even commit fraud in order to impress those all-important gatekeepers."  -- Brain scientist Stuart Ritchie (link).
  • "Throughout all the journals, 75% of the citations were Fully Substantiated. The remaining 25% of the citations contained errors...In a sampling of 21 similar studies across many fields, total quotation error rates varied from 7.8% to 38.2% (with a mean of 22.4%)." -- Neal Smith and Aaron Cumberledge, "Quotation errors in general science journals."
  • "Ioannidis (2005) and Pfeiffer and Hoffmann (2009) argue that reliability of findings published in the scientific literature decreases with the popularity of a research field, in part because competition leads to corner-cutting and even cheating, and in part because if many people do the same type of experiment, this increases the chances (from a statistical perspective) of getting an experiment with misleading results. Carlisle (2021) identified flaws in 44% of medical trials submitted to the Journal Anaesthesia between February 2017 to March 2020, where individual patient data was made available; this is compared to 2% when it was not."  -- Three scientists (link). 
  • "It’s time to admit that genes are not the blueprint for life....It’s time to stop pretending that, give or take a few bits and pieces, we know how life works." -- Biologist Denis Noble (link).
  • "If Alexandrian fires were to consume all of thousands of metres of library space devoted to the archives of behaviourist and pavlovian journals from the 1920s to the 1960s, I doubt much of more than historical interest would be lost.-- Neuroscientist Steven Rose (link).
  • "We, as a community of scientists, are so obsessed with publishing papers — there is this mantra 'publish or perish,' and it is the number one thing that is taught to you, as a young scientist, that you must publish a lot in very high profile journals. And that is your number one goal in life. And what this is causing is an environment where scientific fraud can flourish unchecked. Because we are not doing our job, as scientists. We don’t have time to cross-check each other, we don’t have time to take our time, we don’t have time to be very slow and patient with our own research, because we are so focused with publishing as many papers as possible. So we have seen, over the past few years, an explosion in the rise of fraud. And different kinds of fraud. There is the outright fabrication — the creating of data out of whole cloth. And then there’s also what I call 'soft fraud' — lazy science, poorly done science. Massaging your results a little bit just so you can achieve a publishable result. That leads to a flooding of just junk, poorly done science." -- Scientist Paul Sutter (link). 
For a free 62-page e-book filled with confessions like the ones above, use the link here.
