Our future, our universe, and other weighty topics
Saturday, January 8, 2022
30 Misleading Terms Used in Science Literature
Let us look at some "give you the wrong idea" words and phrases used by scientists and science writers, terms that tend to mislead the general public.
#1 Building Blocks of Life
Scientific literature is constantly misusing and abusing the phrase "building blocks of life." The very term is an improper one, because living things are internally dynamic to the highest degree, with a constant replacement of tiny parts (protein molecules and cells) occurring within an organism; so a comparison to a structure with static "building block" parts is inappropriate. If anyone refers to "building blocks of life," there are only two appropriate ways to use the term: (1) when referring to macroscopic life, the "building blocks of life" would be cells; (2) when referring to microscopic one-celled life, the "building blocks of life" would be protein molecules. But routinely science literature will refer to some low-level chemicals such as amino acids or nucleotides as being "building blocks of life" when they are no such things (being at best mere building blocks of the building blocks of life). A particularly egregious abuse of language occurs when science literature mentions some organic chemicals that are not necessary for life and are neither the building blocks of life nor the building blocks of the building blocks of life, and such literature refers to such chemicals as "building blocks of life." Such misstatements occur often in astrobiology literature and origin of life literature.
#2 Long-term Potentiation
What is misleadingly called “long-term potentiation” or LTP is an effect by which certain types of high-frequency stimulation (such as stimulation by electrodes) produce an increase in synaptic strength. The problem is that so-called long-term potentiation is actually a very short-term phenomenon. A 2013 paper states that so-called long-term potentiation is really very short-lived:
#3 Synaptic Plasticity
#4 Dark Matter
Scientists speculate that most of the matter in the universe is some invisible form of matter not yet discovered. They call this "dark matter." But that is a misleading term which implies visible matter that is dark. The "dark matter" imagined by cosmologists is invisible. A non-misleading term cosmologists should be using for such a possibility is "invisible matter."
#5 Dark Energy
Scientists speculate that most of the energy in the universe is some invisible form of energy not yet discovered. They call this "dark energy." But that is a misleading term which implies visible energy that is dark. The "dark energy" imagined by cosmologists is invisible, and cosmologists should be calling it "invisible energy."
#6 Earth-like Planets
Many Earth-sized planets have been discovered, but no Earth-like planet has been discovered outside of our solar system. We should not call a planet "Earth-like" unless life has been discovered on it, and life has not been discovered on any other planet. Scientists and science journalists very often describe a merely Earth-sized planet as an "Earth-like planet." Such language is very misleading.
#7 Adaptations
Biologists use the word "adaptations" to refer to what they believe are extremely complex accidental inventions. For example, a biologist may refer to the first eyes as "adaptations," or the first wings as "adaptations," or the appearance of hands with fingers as an "adaptation."
What is misleading here is the use of a euphemism designed to hide from us the shockingly far-fetched belief that is being advanced. Imagine if a biologist were to say something like this (referring to the Cambrian Explosion): "At this point in natural history there occurred in many different species the appearance of the extremely complex accidental inventions called eyes." Alarm bells would go off in a person's mind, for he may remember that he has never in his life witnessed anything like an extremely complex accidental invention, and the idea has a nonsensical sound to it.
By using the term "adaptations," biologists avoid this problem. But it is misleading to refer to very complex biological innovations as "adaptations," since the word "adaptation" (derived from the word "adapt") merely means "a change or the process of change by which an organism or species becomes better suited to its environment" (to give a dictionary's definition). It is misleading because the term suggests some little not-too-hard thing has occurred, when some incredibly-hard-to-credibly-explain thing has occurred. In a biology paper whose primary author is Tom Roschinger, a Caltech scientist in the Division of Biology and Biological Engineering, we read, "Biological systems have evolved to amazingly complex states, yet we do not understand in general how evolution operates to generate increasing genetic and functional complexity." Biological innovations so complex and organized that there is no current credible explanation for them should not be referred to by some term ("adaptations") that makes them sound like some small-change affair.
#8 "Reservoirs" in Reference to Extremely Sparse Molecule Levels, which May be Referred to Inaccurately as "Large Organic Molecules"
In the post here I discuss an example of an outrageous abuse of language sometimes occurring in science literature: the use of the term "reservoirs" to refer to molecule levels that are vastly more sparse (i.e. vastly less dense) than the molecule levels found in earthly clouds. The molecules referred are sometimes sometimes some of the simplest organic molecules, but are misleadingly referred to as "large organic molecules."
#9 Astrobiologist
Until extraterrestrial life is discovered, the term "astrobiologist" must be classified as a misleading term, as it suggests or implies that extraterrestrial life has been discovered. It would be less misleading if people referred to astrobiologists as "extraterrestrial life theorists," which would correctly signify the speculative nature of their studies.
#10 String Theory
The term "string theory" is a term that misleads us by suggesting theorists speculating about something not-very-unusual (string being a very common household item). But string theorists make some of the most bizarre speculations ever made. It would be better to call string theorists "other-dimension theorists," which would tell us about how speculative such theorists are. Different versions of string theory postulate up to 26 different dimensions.
#11 Many Worlds
Groundlessly claiming a warrant in quantum mechanics, Hugh Everett created the pure-craziness theory that the universe is constantly splitting into different copies of itself, so that every possibility becomes reality. The term "Many Worlds" theory is used for Everett's theory. But such a term is very misleading, because it gives an insane idea a reasonable sound. Because so many extrasolar planets have been discovered, it sounds reasonable for someone to say that there are many worlds. But Everett's theory involves infinitely more than the mere claim that there are many worlds, since it involves the madness of claiming that there are infinite versions of each and every world, and infinite different versions of you and everyone else.
#12 Body Plan
#13 Scientific Consensus
The term "scientific consensus" is one of the abused terms in the world of scientific academia. Some leading dictionaries define a consensus as an agreed opinion among a group of people. The first definition of "consensus" by the Merriam-Webster dictionary is "general agreement: unanimity." But scientists have very often referred to a "scientific consensus" on some particular topic when there was no good evidence that such a consensus existed, and quite a bit of evidence that no consensus actually exists.
Some scientist advancing a new theory will start to say "more and more" scientists are accepting his theory. Once he starts to get a few people adopting his ideas, he may claim that "there is a growing trend" towards accepting his theory. If some small fraction of scientists adopts his theory, he may claim this as a "growing consensus." Then if maybe half of scientists adopt his theory, he may claim this as a "consensus." It is easy to see why such misleading statements occur. The more popular you make a theory sound, the more people will be likely to adopt it.
I may note that claims of either a scientific consensus or anything remotely approaching a scientific consensus tend to be extremely unreliable. The only way to reliably measure how many scientists believe in a theory is to do a secret ballot of scientists. Such secret ballots (of large numbers of scientists) never occur or almost never occur.
#14 Talk of Brain Regions "Lighting Up" or "Activating"
All regions in the brain are constantly active. When scientists do scans of brains, they typically find differences in activity of less than half of one percent (about 1 part in 200) between one region and another. But science writers often refer to such very slight differences in activity as cases of some particular brain region "lighting up" or some particular brain region "activating." That is misleading, as it suggests a large difference in activity, when the actual difference in activity is only very tiny.
#15 Skull (When Used to Describe Bone Fragments)
The word "skull" is a word with a very exact definition. The Merriam Webster dictionary defines a skull as "the skeleton of the head of a vertebrate forming a bony or cartilaginous case that encloses and protects the brain and chief sense organs and supports the jaws." Paleontologists and their press workers routinely misuse the word "skull," by using the term to refer to small bone fragments believed to be from a skull. Calling such fragments a skull is often as misleading as using the term "automobile" to refer to a bumper, a seat and a tire collected from a junk yard.
#16 Big Bang
The phrase "Big Bang" when used in reference to the origin of the universe misleads us by suggesting the idea of the universe started out as some big bomb that exploded, a kind of "cosmic egg" that blew up. The Big Bang theory actually holds that the universe began in a state of infinite density called a singularity, and expanded very rapidly from such a state, with the density rapidly dropping. A non-misleading term to describe the theory might be to call it the "origination-from-zero-radius theory" or the "origination-from-infinite density" theory.
#17 Chemical Imbalances
It has long been claimed that one or more psychiatric problems are caused by "chemical imbalances." Scientists do not have any understanding of how some "chemical balance" results in normal mental function, and they do not have any understanding of how some "chemical imbalance" could cause psychiatric problems.
#18 Genetically Determined
Scientists and science writers very often claim that some outcome is "genetically determined" when there is merely evidence that the outcome is genetically influenced. There is a huge difference between a first thing merely influencing or affecting a second thing (merely having some effect on it), and the first thing determining the second thing (being the main cause of that thing). For example, the weather influences how a car looks, but the weather does not determine how a car looks (a car's look is determined by the manufacturing process used to create it). We do not have any evidence that either human mental traits or the structure of the human body is genetically determined. Because genes merely specify low-level chemicals such as protein molecules, we have the strongest reason for thinking that human mental traits and the physical structure of humans cannot be determined by genes. All that we have is evidence for a much weaker claim: the claim that human mental traits and the physical structure of humans is influenced or affected by genes.
#19 Scientific Method
The myth that scientists follow some algorithm called "the scientific method" is one of the most long-standing myths of scientist culture. Statements of how this "scientific method" works vary widely, but a typical description will include steps such as these:
- Formulate a hypothesis
- Design an experiment to test the hypothesis
- Communicate the results, whether or not the experiment supports the hypothesis
- If the experiment fails to support the hypothesis, formulate a new hypothesis.
But in the real world, when an experiment fails to support the hypothesis being tested, what very often happens instead is one of the following:
- Creatively interpreting the negative result to make it look like something supporting the hypothesis being tested.
- Changing the hypothesis to match whatever result was obtained.
- Playing around with the data in some statistical way until the negative result can be claimed as a positive result in favor of the hypothesis (or a neutral result consistent with the hypothesis).
- Dismissing the result conflicting with the hypothesis by special pleading, such as claiming that far-above-chance results in tests of psi or ESP were produced by subjects cheating, or claiming that there was experimenter error or equipment error.
- Simply filing away the results without trying to publish them, and retrying the experiment, perhaps with some modification that will make the experiment much more likely to produce a seemingly positive result.
#20 "The Neural Circuitry Behind" Some Behavior or Mental State
The very term "neural circuit" is misleading. A circuit is an unbroken electrical path, typically a roughly circular path that starts and ends in the same place. Neural pathways are not circular or even rather circular, they do not start and end in the same place, and they have a huge number of breaks, the breaks of synaptic gaps. Therefore, sections of brain tissue should not be referred to as "neural circuits."
As for the idea that some behavior or mental state or mental trait can be explained by some arrangement of tissue in the brain, such an idea has no empirical support.
#21 Claims Brains Are "Wired for" or "Hard-Wired for" This or That
The term "hard wiring" is an old mechanical term meaning to be determined by a particular arrangement of wires. Before modern electronics and software programming, the behavior of certain mechanical devices such as switchboards were determined by arrangements of wires, particular arrangements being called types of "hard wiring." Although neuroscientists sometimes speak as if investigating arrangements of wire-like components in the brain might shed light on human behavior, no one has ever shown that any human behavior can be explained by some arrangement of such components in the brain. It is therefore very misleading to claim that humans are "hard-wired" to do any particular thing.
#22 Theory of Everything
The misleading term "theory of everything" has long been used by physicists to mean some theory that would unite two major physics theories: quantum mechanics and the theory of relativity. The term has always been misleading, because each so-called "theory of everything" is merely a physics theory, and explains nothing in the world of biology or psychology.
#23 "Regulate" or "Control" or "Sculpt" or "Mold" or "Direct" When Used About Genes or Chemicals
Chemicals inside the body are mindless things, and it is misleading to refer to them using action words that suggest they are intelligent agents. The quote below, from a biologist's essay, suggests that there is a massive problem of biologists using verbs in an inappropriate way when describing genes:
#24 "Intended to Simulate" When Used About Experiments Like the Miller-Urey Experiment
For seventy years science literature has been speaking in a very misleading way about experiments like the Miller-Urey experiment. That experiment used a small closed glass apparatus filled with chemicals. Day after day the chemicals were bombarded with electrical discharges. The result was some amino acids that accumulated at the bottom of the apparatus. Claimed as a simulation of early Earth conditions, the apparatus was no such thing, as nowhere on the early Earth did there ever exist such an enclosed space subject to even a thousandth as much electrical energy. Moreover, the chemicals used are now believed to be a mixture not matching the atmosphere of the early Earth.
The results of such an experiment never should have been claimed as a result in support of claims of abiogenesis. But for 70 years science literature has been passing off such an experiment as something supporting claims that amino acids could have naturally formed on the early Earth. Typically science writers will not claim that the experiment realistically simulated the early Earth (a claim that is obviously false). Instead, we will merely hear that the experiment "intended to simulate" or "was designed to simulate" the early Earth. The insinuation is that the experiment might have simulated the early Earth, but we know that the experiment never was a realistic simulation of the early Earth. The same phrase of "intended to simulate" is used for other experiments that just as obviously fail to realistically simulate early Earth conditions.
#25 Cambrian Diversification
Since the time of Darwin, the Cambrian Explosion has always been the greatest embarrassment to Darwinists. The main branches of animal species are called phyla, and there are about 35 phyla in the animal kingdom. Contrary to what we would expect under the assumptions of Darwinism, all or almost all of the animal phyla first appear in the fossil record in a relatively short span of time, what is called the Cambrian Explosion, occurring about 540 million years ago. Under Darwinist assumptions, we would expect such phyla to have originated gradually during the past 700 million years.
To reduce this embarrassment, some science writers have started using the euphemistic term "Cambrian Diversification," which does not sound like such a dramatic and sudden burst of biological innovation. You can compare this to someone avoiding references to the "atomic bombing" of Hiroshima and Nagasaki, and referring instead to the August 1945 events as the Hiroshima and Nagasaki "urban renewal projects," or someone trying to trivialize the painting of the Sistine Chapel's ceiling by referring to it as a "ceiling color diversification."
#26 Natural Selection
Selection is a term meaning a choice by a conscious agent. The so-called "natural selection" imagined by those who use such a term does not actually involve any selection or choice. The "natural selection" imagined by biologists merely involves a survival-of-the-fittest effect, in which fitter organisms survive longer or reproduce more. The duplicity of using the term "natural selection" for some imagined effect that is not actually selection is a word trick that was started by Charles Darwin, who coined the term "natural selection."
#27 Selection Pressure
When biologists use the term "selection pressure," they are simply using a variant of the term "natural selection." The term "selection pressure" is doubly-misleading, first because there is no actual selection involved in so-called selection pressure (selection being an act by a conscious agent), and second because there is no actual pressure involved.
#28 Early Human
The defining characteristic of humans is their use of symbols. The term "early human" is very often misleadingly used in science literature, to refer to pre-human species which have never been proven to have used symbols. Such language is used to try to bolster claims that species arising before humans were ancestors of humans. A person who lacks any good evidence that Species X existing before humans evolved into humans may simply take the shortcut of calling this Species X an "early human" species. But if there is no good evidence that Species X used symbols, then it should not be called an "early human" species.
#29 "Genetic Blueprint" or "Genetic Program" or "Genetic Recipe"
What I call the Great DNA Myth is the myth that inside DNA is some blueprint or recipe that specifies how to make a human body.
There are various ways in which this false idea is stated, all equally false:
- Someone may describe DNA or the genome as a blueprint for an organism.
- Someone may describe DNA or the genome as a recipe for making an organism.
- Someone may describe DNA or the genome as a program for building an organism.
- Someone may claim that DNA or genomes specify the anatomy of an organism.
- Someone may claim that genotypes (the DNA in organisms) specify phenotypes (the observable characteristics of an organism).
- Someone may claim that genotypes (the DNA in organisms) "map" phenotypes (the observable characteristics of an organism) or "map to" phenotypes.
- Someone may claim that DNA contains "all the instructions needed to make an organism."
- Someone may claim that there is a "genetic architecture" for an organism's body or some fraction of that body.
- Using a little equation, someone may claim that a "genotype plus the environment equals the phenotype," a formulation as false as the preceding statements, since we know of nothing in the environment that would cause phenotypes to arise from genotypes that do not specify such phenotypes.
All of these versions are equally false, because DNA only contains low-level chemical information (such as which sequences of amino acids make up the polypeptide chains that are the starting points of protein molecules), not high-level structural information. Many biology authorities have confessed this reality, and at the post here you can read statements by more than twenty biology experts stating that DNA is not a blueprint or a program or a recipe for building an organism.
#30 "Essential for"
In the world of neuroscience we often have incorrect claims that this or that protein or this or that brain part is "essential for" some cognitive ability. In some cases experiments have shown that the cognitive ability continues to exist even when the supposedly "essential" thing has been removed.
Tuesday, January 4, 2022
Replication Study Shows the Massive Rot Within Experimental Biology
A recent scientific paper gave the results of a large project designed to test how well cancer studies replicate. Entitled "Reproducibility in Cancer Biology: Challenges for assessing replicability in preclinical cancer biology," the paper is a shocking portrait of a massive degree of malfunction within the world of experimental biology.
The authors attempted to replicate 193 experiments from 53 widely-cited cancer research papers. The authors were shocked to find that not a single one of the 193 experiments was described with enough methods detail for them to reproduce it without asking for more information from the scientists who ran it. They state, "None of the 193 experiments were described in sufficient detail in the original paper to enable us to design protocols to repeat the experiments, so we had to seek clarifications from the original authors."
Upon asking for additional information from the scientists who ran the experiments, the authors found that while 41% of the scientists were very helpful in providing information, 9% of the scientists were only minimally helpful, and 32% of the scientists were not helpful at all or did not respond to requests for information. Such a result suggests that a large fraction of all cancer experiments (a quarter or more) are either junk science procedures that scientists are ashamed to discuss, or fraudulent experiments that scientists refuse to talk about any further, because of a fear of their fraud being discovered.
Imagine if you were a scientist who had pulled some shenanigans or skulduggery when doing an experiment. Years later, you get an email from someone saying, "I am trying to replicate your experiment, but your paper has not given me enough information -- can you please answer this list of questions?" What would you do? In such a case you would probably just not answer the email. The last thing you would want is for someone to discover the sleazy shortcuts you had used, or to discover that you had fudged the results. Conversely, if you did the experiment using best-practices methods, and proceeded in an entirely honest and commendable manner, you would not be troubled by such an email, and would probably answer it in a helpful way.
The paper authors tried to reproduce the cancer studies in a way that produced a statistical power of at least .80 (roughly speaking, a good chance of detecting a real effect if one exists). They found that the studies they were trying to reproduce typically used too small a sample size to reach such a standard. We read this: "As an illustration, the average sample size of animal experiments in the replication protocols (average = 30; SD = 16; median = 26; IQR = 18–41) were 25% higher than the sample size of the original experiments (average = 24; SD = 14; median = 22; IQR = 16–30)."
This is quite interesting from the standpoint of neuroscience experimental research. In neuroscience experiments, the great majority of scientific experiments use way-too-small sample sizes. Most experimental neuroscience papers use a sample size smaller than 15 for some of the study groups. I have often cited this "15 subjects per study group" as a minimal quality standard that neuroscientists typically fail to meet in their experiments. But according to the figures quoted above, it seems the quality shortfall is even greater than I have described. The paper suggests that sample sizes should be an average of not just 15, but 30. If that is correct, then the failure of neuroscientists to use adequate sample sizes in their experiments is far greater than I have suggested.
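The power arithmetic behind these sample-size figures can be made concrete. Below is a minimal sketch (Python standard library only) of the usual normal-approximation formula for the per-group sample size a two-group comparison needs to reach a power of .80; the 0.05 significance level and Cohen's conventional "large" (d = 0.8) and "medium" (d = 0.5) effect sizes are my illustrative assumptions, not figures taken from the paper:

```python
from statistics import NormalDist
from math import ceil

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison,
    using the normal approximation: n = 2 * ((z_{1-a/2} + z_power) / d)^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_power = z(power)           # about 0.84 for power = 0.80
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Cohen's conventional effect sizes (illustrative assumptions):
print(n_per_group(0.8))  # "large" effect: 25 subjects per group
print(n_per_group(0.5))  # "medium" effect: 63 subjects per group
```

Even under the generous assumption of a "large" effect, roughly 25 subjects per group are needed, which is close to the replication project's average of 30 and well above the sample sizes of 15 or fewer common in experimental neuroscience.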
In the paragraph below the authors discuss some of the rot they have discovered within experimental biology:
"The present evidence suggests that we should be concerned. As reported in Errington et al., 2021b, replication efforts frequently produced evidence that was weaker or inconsistent with original studies. These results corroborate similar efforts by pharmaceutical companies to replicate findings in cancer biology (Begley and Ellis, 2012; Prinz et al., 2011), efforts by a non-profit biotech to replicate findings of potential drugs in a mouse model of amyotrophic lateral sclerosis (Perrin, 2014), and systematic replication efforts in other disciplines (Camerer et al., 2016; Camerer et al., 2018; Cova et al., 2018; Ebersole et al., 2016; Ebersole et al., 2019; Klein et al., 2014; Klein et al., 2018; Open Science Collaboration, 2015; Steward et al., 2012). Moreover, the evidence for self-corrective processes in the scientific literature is underwhelming: extremely few replication studies are published (Makel et al., 2012; Makel and Plucker, 2014); preclinical findings are often advanced to clinical trials before they have been verified and replicated by other laboratories (Chalmers et al., 2014; Drucker, 2016; Ramirez et al., 2017); and many papers continue to be cited even after they have been retracted (Budd et al., 1999;Lu et al., 2013; Madlock-Brown and Eichmann, 2015; Pfeifer and Snodgrass, 1990)...Fundamentally, the problem with practical barriers to assessing replicability and reproducibility is that it increases uncertainty in the credibility of scientific claims. Are we building on solid foundations? Do we know what we think we know?"
A separate paper by the same authors ("Investigating the replicability of preclinical cancer biology") gives results on what degree of success was achieved in trying to reproduce the selected experiments. Getting little or no help from such a large fraction of the scientists, and finding the original papers failing to give enough information for replication, the authors were only able to re-run 50 of the 193 experiments they had originally chosen. Of those 50, only 46% were successfully replicated in the sense of producing results like those reported in the original paper.
So after setting the goal of replicating 193 experiments, and doing their best to replicate all 193 whenever possible, the authors were only able to successfully replicate about 23 of the experiments. That's a pitiful replication rate of only about 12%. The authors report this: "One method compared effect sizes: for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original." What this suggests is that the effects reported in experimental biology papers tend to be massively overstated.
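The roughly 12% figure follows directly from the numbers quoted above; a few lines of Python make the arithmetic explicit:

```python
attempted = 193        # experiments originally selected for replication
rerun = 50             # experiments the team was actually able to re-run
success_rate = 0.46    # fraction of re-run experiments that replicated

replicated = round(rerun * success_rate)   # about 23 experiments
overall_rate = replicated / attempted      # about 0.12
print(replicated, f"{overall_rate:.0%}")   # prints: 23 12%
```

So even counting every experiment the team managed to re-run, only about 23 of the 193 originally targeted experiments replicated.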
What all these numbers give us is a vivid portrait of massive decay, rot, malfunction and arrogance within experimental biology. Another scientific study surveying animal researchers (discussed here) gives similar results. Very clearly, junk experimental results are being produced to a massive degree by experimenters very often guilty of Questionable Research Practices. From the facts that such a large fraction of the experimenters refuse to respond to questions from those attempting to reproduce their experiments, and the fact that only a small fraction of the studies can be successfully replicated, we may assume that either a very large amount of fraud or a massive degree of incompetent activity is occurring within experimental biology -- probably both. Therefore, a good general principle to follow is: assume that any novel experimental biology result you read about in the science news is bogus or junk science, unless the result has been very well replicated, with many other experimenters getting the same result. (Vaccine results have been massively replicated, because when millions of people have taken a vaccine without harm, that is equivalent to massive replication.)
I have long discussed the poor practices and shabby standards of experimental neuroscience. When poor research practices occur in neuroscience, the damage is mainly intellectual. Junk neuroscience experiments cause people to wrongly think that scientists are on the right track in their assumptions about minds and brains, which is not true. They are very much on the wrong track, betting the farm on false assumptions. But at least such misleading junk science experiments don't lead to physical human suffering. It's a different situation if so many cancer research studies are unreliable. We can only guess how great is the physical toll to human beings when so many cancer studies are not reliable.
Yesterday a jury found Elizabeth Holmes guilty of wire fraud. Her company Theranos had bilked investors out of countless millions, long making grand biology-related promises but producing only feeble results. There is many an Elizabeth Holmes (male and female) in the world of experimental biology. They victimize not wealthy investors but the federal government. Every year the US government doles out billions for scientific research, and a large fraction of this goes to fund junk experimental science that cannot be replicated because poor experimental procedures were used, or because the scientists were trying to prove something that is untrue.
You can see endless cases of wasted money by using the National Science Foundation's query tool. Below is an example, searching for grants given on the topic of synapses:
https://www.nsf.gov/awardsearch/simpleSearchResult?queryText=synapses
Very often when you click on the rows of your search results, and very carefully analyze both the original research proposal and the resulting scientific papers published, you will find that a grant proposal was submitted promising some grand result, but that the scientific papers produced were merely junk science papers describing experiments using Questionable Research Practices such as a lack of a blinding protocol or way-too-small sample sizes. The paper titles and the paper abstracts often claim to have found something not actually shown by the research.
The US government has a whole big agency (the IRS) dedicated to tracking down and punishing people who file false tax returns. But the government seems to have no agency dedicated to tracking down and penalizing "sham, scam, thank you Sam" researchers who bilk Uncle Sam out of millions by getting lavish government research grants and then producing junk experimental results incapable of being successfully replicated. We may presume that in the labs they sometimes whisper that Uncle Sam is an easy mark.
In an unsparing essay entitled "The Intellectual and Moral Decline in Academic Research," Edward Archer, PhD, states the following:
"Universities and federal funding agencies lack accountability and often ignore fraud and misconduct. There are numerous examples in which universities refused to hold their faculty accountable until elected officials intervened, and even when found guilty, faculty researchers continued to receive tens of millions of taxpayers’ dollars. Those facts are an open secret: When anonymously surveyed, over 14 percent of researchers report that their colleagues commit fraud and 72 percent report other questionable practices....Retractions, misconduct, and harassment are only part of the decline. Incompetence is another....The widespread inability of publicly funded researchers to generate valid, reproducible findings is a testament to the failure of universities to properly train scientists and instill intellectual and methodologic rigor. That failure means taxpayers are being misled by results that are non-reproducible or demonstrably false."