
Our future, our universe, and other weighty topics


Sunday, January 29, 2023

The World Economic Forum's Mediocre Record at Predicting Global Risks

An organization called the World Economic Forum has long been issuing an annual report on global risks. In a post on this blog way back in 2013, I reviewed the World Economic Forum's 2013 report on global risks. Listing technological risks and economic risks over the next ten years, the Forum failed to foresee substantial risks from a pandemic. Its 2013 report listed "Vulnerability to Pandemics" as being of below-average likelihood in the next ten years. 

A look at the World Economic Forum's 2019 report on global risks also shows a failure to alert us to some of the biggest global economic hazards the world faced between 2020 and 2023. The 2019 report gave us the chart below. The only mention of a pandemic is a red diamond in the top left quadrant, one labeled "spread of infectious diseases." This was rated as having a "below average" probability. In the chart below we see no explicit mention of war as a global risk. There is a mention of "interstate conflict," but that is a vague term.  Inflation is listed as having a below-average risk (with its opposite, deflation, having the same risk). 

World Economic Forum Global Risks 2019

The 2019 report listed the following as the top 10 global risks in terms of likelihood:

global risks

Oops, there was no mention of a pandemic as one of the top 10 global risks in the next 10 years. There was also no mention of either war or inflation as one of the biggest global risks. The three biggest risks the global economy has faced in the past three years seem to have been the COVID-19 pandemic, inflation, and the risk from the Russian invasion of Ukraine. 

The 2020 World Economic Forum report on global risks did not do any better. It gave us estimates of risk that were not very different from the estimates shown above. Below is what the 2020 report listed as the top 10 global risks in the next 10 years:

global risks 2020

Again, there was no mention of a pandemic as one of the top 10 global risks in the next 10 years. There was also no mention of either war or inflation as one of the biggest global risks. The report had a chart looking very similar to the first chart shown in this post, and in that chart both "Unmanageable inflation" and "Deflation" were listed as risks of below-average likelihood, with deflation listed as the more likely of the two. 

In my 2013 post on the World Economic Forum's 2013 report on global risks, I chided the report for overlooking the threat of nuclear war, stating this:

"My only objection to this summary graph is, again, that it inexplicably ignores the risk with greatest impact: the risk of nuclear war. A conservative estimate of the risk of a nuclear war is 1 percent per year. The United States has approximately 2150 active nuclear weapons (7700 in all), and Russia has 1800 (8500 in all). Every year those weapons continue to exist, there is a chance of nuclear war, through things such as a software error, a mechanical error (as in Fail Safe), a deliberate launch of weapons by a sub or missile base commander afflicted by insanity or rage (as in Dr. Strangelove),  or one side misidentifying something as a nuclear attack (such as happened in 1995, described here). Nuclear war should have been listed as the threat with greatest impact."  

The 2020 World Economic Forum report on global risks did rank "weapons of mass destruction" as number 2 on its list of risks with the greatest impact, thereby partially correcting the strange omission I had noted in my 2013 post.  But that 2020 report failed to mention conventional war as one of the 10 most likely risks. 

In the 2021 World Economic Forum report on global risks we see the Forum starting to get things right. There is a chart with the same format as the first chart shown in this post.  Now suddenly we see a red diamond labeled "Infectious diseases" at the top right of the chart, indicating a risk of the highest likelihood. The diamond labeled "Deflation" has changed into a diamond labeled "Price instability," which covers both inflation and deflation. Now we see the 10 most likely global risks listed as below:

global risks 2021

Finally, we see what could be a reference to international war (described as "interstate relations fracture") listed as one of the ten most likely risks. But the improvement of the 2021 global risks report wasn't anything much to brag about. COVID-19 had been declared a pandemic by early 2020, and by the year 2021 Russia was loudly threatening to invade Ukraine, something it did in February 2022. 

The 2021 global risks report failed to alert us to the risk of inflation. The report mentioned inflation only twice, each time vaguely, as part of a "price instability" risk that could mean either inflation or its opposite (deflation). Inflation in the US rose to 4.7% in 2021 and peaked at 9.1% in 2022.  

The 2022 global risks report of the World Economic Forum has this figure:

global risks 2022

None of the risks listed above include a mention of inflation, which peaked at 9.1% in the US in 2022. None of these "most severe risks on a global scale over the next 10 years" include an explicit mention of the risk of conventional war or nuclear war.  This makes no sense. The world still has thousands of nuclear weapons that are vastly greater risks in the next ten years than almost every item listed above. What kind of "nuclear weapons amnesia" is going on here? The war between Russia and Ukraine (which is receiving heavy military assistance from the US and Europe) makes the threat of nuclear warfare worse than it has been for quite a few years. 

All in all, it seems that the World Economic Forum's record of forecasting risks is not terribly impressive. 

Wednesday, January 25, 2023

Dirty DUNE: The 3 Billion Dollar Boondoggle Has Started

Scientists believe that when two very high-energy photons collide, they produce equal amounts of matter and antimatter, and that when matter collides with antimatter, it is converted into high-energy photons. Such a belief is based on what scientists have observed in particle accelerators such as the Large Hadron Collider, where particles are accelerated to near the speed of light before they collide with each other. But such conclusions about matter, antimatter and photons lead to a great mystery as to why there is any matter at all in the universe.

Let us imagine the early minutes of the Big Bang about 13.8 billion years ago, when the density of the universe was incredibly great. At that time the universe should have consisted of energy, matter and antimatter. The energy should have been in the form of very high energy photons that were frequently colliding with each other. All such collisions should have produced equal amounts of matter and antimatter. For example, a collision of photons with sufficient energy can create a matter proton and an antimatter particle called an antiproton. So the amount of antimatter shortly after the Big Bang should have been exactly the same as the amount of matter. As a CERN page on this topic says, "The Big Bang should have created equal amounts of matter and antimatter in the early universe." But whenever a matter particle touched an antimatter particle, both would have been converted into photons. The eventual result should have been a universe consisting either of nothing but photons, or some matter but an equal amount of antimatter. But only trace amounts of antimatter are observed in the universe. A universe with equal amounts of matter and antimatter would have been uninhabitable, because of the vast amount of lethal energy released when even a tiny bit of matter comes in contact with a tiny bit of antimatter.

The mystery of why we live in a universe that is almost all matter (rather than antimatter) is called the baryon asymmetry problem or the matter-antimatter asymmetry problem.  There is no reasonable prospect that this problem will be solved in our lifetimes.  It's like the problem of "why is there something rather than nothing?" That's not a problem we can expect to solve in our lifetimes. 

But sometimes when scientists have embarked on a gigantically polluting boondoggle, they may evoke the matter-antimatter asymmetry problem to try to sanctify their misguided schemes.  That is what is going on with a project called DUNE, which seems to be one of the more ill-conceived, wasteful and polluting projects scientists have ever devised. DUNE stands for Deep Underground Neutrino Experiment. An article in the journal Science tells us that DUNE "is now expected to cost $3 billion, 60% more than the preliminary estimate, and construction has slipped 4 years, with first data expected in 2029."

On its "Frequently Asked Questions" page, the web site of the DUNE project tries to answer the question, "Why is DUNE scientifically important?" It fails to answer that question in any remotely persuasive way.  It says, "DUNE aims to find out, for example, whether neutrinos are the key to solving the mystery of how the universe came to consist of matter rather than antimatter," but it gives no rationale for thinking that such a thing is true, and no explanation of how it might be discovered that such a thing is true.  There's a link to a video which also fails to explain any rationale for thinking that neutrinos could possibly be the explanation for the matter/antimatter asymmetry problem. The video discusses the matter/antimatter asymmetry problem, and then tries to hint that the answer may lie in neutrinos, without ever justifying such an insinuation. 

Trying to play up the importance of neutrinos (which make up a vastly smaller fraction of the universe's mass than protons), the video makes the claim (at the 54 second mark) that neutrinos are "the most abundant matter particles in the universe."  Scientists actually believe that the amount of matter in protons is many times greater than the amount of matter in neutrinos, and that the most abundant matter substance in the universe is some other undiscovered type of matter called dark matter, which is believed to exist in vastly greater mass amounts than either protons or neutrinos. A neutrino has only about a millionth of the mass of an electron, and each proton is 1836 times more massive than each electron. In an article entitled "The Composition of the Universe," a PhD tells us that "neutrinos are also part of the universe, although only about 0.3 percent of it." At an expert answers site, we read the following:

"The problem with neutrinos is that they are very light. There is no conceivable mechanism that would produce enough of them to make up a significant percentage of the total mass of the universe."

Neutrinos are "bit players" in the physical drama of the universe, and make up very much less of the universe's mass than protons. That means it is pretty much impossible that the matter/antimatter asymmetry problem will be solved by studying neutrinos. There is no solid scientific rationale for spending billions of dollars studying cosmic "bit players" such as neutrinos. 

Like LIGO, the DUNE project will be very expensive in terms of its global warming cost. One of its detectors will be constructed more than a kilometer underground. That kind of deep digging has a high cost in terms of carbon dioxide emissions, and tends to create pollution in a variety of ways.  But you would never know that from the very misleading document that was filed for the project, which claims that this massive construction project will have "no significant impact," by which it means environmental impact.

Entitled "Finding of No Significant Impact and Floodplain Statement of Findings," the document tells us the following:

"Construction of the underground detector—necessary to eliminate cosmic radiation that could interfere with the detector—would require excavation and transportation of a large volume of rock. The rock would be transferred to either the Gilt Edge Superfund site, or to the Open Cut in Lead, a former surface mining pit that was part of the former Homestake Mine. Truck, conveyor and/or a rail system would be used. The Gilt Edge Superfund site is a highly disturbed former gold mine in Deadwood....Up to 950,000 cubic yards (yd3) of soils would be removed and re-used or stored on site. Up to 45,000 yd3 of rock would be excavated, but important geological resources would not be affected."

The document tells us that up to a million cubic yards of soil and rock would be excavated by the DUNE project, and much of it transported and dumped at some pit or mine. This project very obviously has a very large environmental cost, including a very large global warming footprint. But contrary to all the facts it is stating, the document claims there will be "no significant impact" on the environment. It states that the DUNE construction project "would not individually or cumulatively have a significant effect on the quality of the human environment."  You might as well claim that leveling fifty city blocks in Manhattan would have no significant effect on the environment. 
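
To get a rough sense of the scale involved, here is a back-of-the-envelope conversion of the excavation volumes quoted above, regardless of where the material ends up; the truck capacity used is an assumed round number, included only for illustration.

```python
# Rough scale of the excavation figures quoted in the environmental document above.
soil_yd3 = 950_000          # cubic yards of soil (from the quoted document)
rock_yd3 = 45_000           # cubic yards of rock (from the quoted document)
M3_PER_YD3 = 0.7646         # cubic meters per cubic yard

total_m3 = (soil_yd3 + rock_yd3) * M3_PER_YD3
truck_capacity_m3 = 10      # assumed capacity of a typical dump truck load (illustrative)

print(f"Total excavated volume: about {total_m3:,.0f} cubic meters")
print(f"Roughly {total_m3 / truck_capacity_m3:,.0f} truckloads at {truck_capacity_m3} cubic meters per load")
# About 760,800 cubic meters, on the order of 76,000 truckloads.
```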

The government visual below (referring to arsenic contamination) reminds us of one of the countless reasons why massive hard-rock removal projects and massive soil removal projects can have very big environmental impacts.  At the end of the yellow line shown below is where DUNE will be massively involved. 



The DUNE project is an environmentally reckless boondoggle "white elephant" project. Neutrinos are mere bit players in the physical makeup of the universe. There is no reason to think that the DUNE neutrino project will do anything to solve the great mystery of why the Big Bang did not yield a universe with equal amounts of matter protons and antimatter antiprotons, or nothing but photons arising from the combination of such antimatter and matter. And in the unlikely event that the scientists who work on the DUNE project ever happen to report some five-sigma event relating to neutrinos, we should be suspicious about  their reports.  Once a project has been born with the very misleading claim that digging a million cubic yards of soil and rock will have no significant environmental impact, then we should be suspicious about the accuracy of all further statements related to such a project. 

Let us imagine the best result that might happen from the DUNE project. There might be discovered some reason why the Big Bang should have produced more neutrinos (made of matter) than anti-neutrinos (made of antimatter). But it would be worth very little to know such a thing. What we are interested in knowing is not why the Big Bang might have produced some universe with only ghostly neutrinos as matter, but why the Big Bang left us with so many protons that are vastly more massive than neutrinos. The precise name for this problem is the baryon asymmetry problem or the matter-antimatter asymmetry problem.  It is the problem of why the observed number of baryons (protons and neutrons) in the universe is more than trillions of times greater than the number of antibaryons (antiprotons and antineutrons), contrary to what the Big Bang theory predicts. There is no hope that the baryon asymmetry problem can be solved by doing experiments with neutrinos, because neutrinos are not baryons.  There is no point at all in spending three billion dollars trying to establish why there might exist a universe with only neutrinos, because we don't live in such a universe.  There is a point in trying to figure out why we live in a universe with so many baryons.  But the DUNE project would do nothing to solve that problem. 

When I wrote an earlier post I called DUNE a "billion-dollar boondoggle." Now I read in Scientific American this: "But last year, the megaproject’s price tag was reevaluated to more than $3 billion for the first phase alone—roughly double the original estimate for the entire endeavor." Now DUNE stands as something three times worse: a three-billion-dollar boondoggle. And maybe it will become a six-billion-dollar boondoggle. The article also tells us that the whole DUNE project may be superfluous, because a Japanese project will accomplish the same scientific goals. We read the following, which has a "they can't get their story straight" ring to it:

" 'There is very strong support within the community for [LBNF/DUNE] to happen,' says Orebi Gann. Yet in internal documents seen by Scientific American, a current co-spokesperson for DUNE successfully ran for election earlier this year on the basis that 'LBNF/DUNE is currently experiencing a poor acceptance in the [high-energy physics] community … seriously challenging the future of DUNE. ' ”

The Scientific American article raises quite a few red flags. We read this:

"Most everyone agrees that with billions of dollars already allocated to the megaproject—by the U.S. and international partners—there is no turning back.  'The "sunk cost" fallacy is always present when you’re this far down the road,' Asaadi says. Luminaries of the particle physics community are haunted by the cancellation of the Superconducting Super Collider, a multibillion-dollar particle accelerator, in the early 1990s. Congress pulled funding after the budget ballooned and dubious spending on costly parties and catered lunches was revealed. As a result, 'particle physics moved to Europe,' says Francis Halzen, principal investigator of the IceCube neutrino experiment. 'Hopefully everybody has learned that by killing a project, the money doesn’t return to you, or even to science.' Then again, unquestionably supporting a major project whose ‘world’s first’ aspirations may no longer be achievable carries risks, too. 'We are in a catch-22. Cancellation of DUNE would be a black-eye to the credibility of high-energy physics,' an anonymous source and member of the DUNE collaboration told Scientific American. 'We need to find a way out of this, and the way out isn’t obvious. ' ”

I had criticized the DUNE project as early as 2014, and I warned in my 2020 post that DUNE would be an environmental disaster. At the end of the current wikipedia.org article on DUNE, we read this statement that suggests I wasn't crying wolf:

"In June 2021, plumes of dust rising from the Open Cut due to DUNE construction led to complaints from businesses, homeowners, and users of a nearby park.[52] Complaints continued through spring 2022 without adequate response from Fermilab management, resulting in the South Dakota Science and Technology Authority shutting down excavation on March 31, 2022.[53] An investigation ensued in which the Fermilab management team admitted to failures in protocols, and instigated new measures to prevent black dust from leaving the Open Cut.[54] [55] With these assurances in place, Fermilab was allowed to resume rock dumping on April 8, 2022." 

The DUNE project is the latest evidence that physicists can be very bad at spending money, wasting billions and decades on fruitless efforts. In a recent article in the New York Times, a physicist makes this confession, referring to imaginary "supersymmetrical particles" that physicists wasted endless hours speculating about, without ever discovering:

"That has been a little bit crushing; for 20 years I’ve been chasing the supersymmetrical particles. So we’re like deer in the headlights: we didn’t find supersymmetry, we didn’t find dark matter as a particle."  

Saturday, January 21, 2023

Pathogen Gene Splicing Without Level 5 Labs Is Playing "Megadeath Russian Roulette"

For mankind science has been both a blessing and a curse. Science has improved the lives of very many, but science has cast the darkest shadow over the lives of billions, by making possible weapons that risked the survival of civilization.

The book The Doomsday Machine by Daniel Ellsberg is a gripping and frightening look at how militarists have put the world not far from destruction by creating machinery of nuclear devastation that has often had inadequate safeguards. Chapter 17 deals with how American nuclear scientists gave the go-ahead for testing a nuclear bomb even though they seemed to have thought there was a real possibility that the bomb might destroy all earthly life by setting the entire atmosphere on fire.

The idea may seem absurd today, but that's only because so many nuclear bombs have been tested, without such a thing happening. Let's consider a scientist judging the question in 1943 or 1944. The idea behind a nuclear bomb is to start a chain reaction. A neutron strikes the nucleus of a fissile atom, splitting it and releasing additional neutrons; those neutrons strike other nuclei, which split and release still more neutrons. The process continues over and over again. But when would this chain reaction stop? Before the first atomic bomb was exploded, scientists didn't know.

A scientist in 1944 could have been certain that a nuclear bomb exploded in space would have only caused a limited chain reaction, because eventually neutrons traveling out from the explosion would run into the vacuum of space, causing the nuclear chain reaction to stop. But the earth's atmosphere is not a vacuum. It has many atoms of oxygen and nitrogen. So a scientist around 1944 must have been worried about a terrifying possibility: that a nuclear bomb exploded in the atmosphere would cause a chain reaction that would keep spreading throughout the atmosphere, being fueled by the atoms of oxygen and nitrogen in the atmosphere.

One scientist named Hans Bethe thought that such an ignition of the atmosphere was impossible. But we are told on page 276 of Ellsberg's book, “[Enrico] Fermi, in particular, the greatest experimental physicist present, did not agree with Bethe's assurance of impossibility.” On page 279 Ellsberg states, “Nearly every account of the problem of atmospheric ignition describes it, incorrectly, as having been proven to be a non-problem – an impossibility – soon after it first arose in the initial discussion of the theoretical group, or at any rate well before a device was actually detonated.” On page 280 Ellsberg quotes the official historian of the Manhattan Project, David Hawkins:

"Prior to the detonations at the Trinity site, Hiroshima, or Nagasaki, Hawkins told me firmly, they never confirmed by theoretical calculations that the chance of atmospheric ignition from any of these was zero. Even if they had, the experimentalists among them would have recognized that the calculations could have been in error or could have failed to take something into account."

The second part of this quote makes a crucial point. Anyone with engineering experience knows that there is usually no way that you can prove on paper that some engineering result  will happen or will not happen. The only way to have confidence is to actually do a test. An engineer can go over some blueprints of a bridge with the greatest scrutiny, but that does not prove that the bridge will not collapse when heavy trucks roll over it. A software engineer can subject every line of his source code to great scrutiny, but that does not prove that his program will not crash when users try to use it. When you are doing very complex engineering, the only way to discover whether something bad will happen is by testing. So the idea that the nuclear engineers did some calculations to make them confident that the atmosphere would not explode is erroneous. They could have had no such confidence until a nuclear bomb was actually tested in the atmosphere.

According to one source Ellsberg quotes on page 281, Enrico Fermi (one of the top physicists working on the atomic bomb) stated the following before the test of the first atomic bomb, referring to an ignition of the atmosphere that would have killed everyone on Earth:

"It would be a miracle if the atmosphere were ignited. I reckon the chance of a miracle to be about ten percent."

Apparently the atomic bomb scientists gambled with the destruction of all of humanity. There is no record that the scientists ever informed any US president about the risk of atmospheric ignition. After the first atomic bomb was tested in 1945, scientists got busy working on a vastly more lethal weapon: the hydrogen bomb. When the first hydrogen bomb was exploded in 1952, with a destructive force a thousand times greater than that of the first atomic bomb, the ignition of the entire atmosphere (or some similar unexpected side-effect) was again a possibility that could not be excluded prior to the test. Again, our physicists recklessly proceeded down a path that (for all they knew) might well have destroyed every human.  Even if you ignore the risk of an atmospheric detonation, there was a whole other reason why scientists were gambling with mankind's destruction: the fact that there is always a risk of a nuclear holocaust in a world packed with H-bombs.

Following the invention of the hydrogen bomb, the number of nuclear weapons grew larger and larger, until about the 1980's when there were some 50,000 nuclear weapons in the world.  Thankfully the number of nuclear weapons has declined since then. But we still all live under the threat that billions may be killed because of the nuclear bombs that scientists invented. And now there is another danger of many millions or even billions dying from the work of scientists: the danger of a pandemic worse than COVID-19 coming from reckless scientific experimentation with bacteria and viruses.

Fears about such a topic are mentioned in a recent Washington Post article with the title "Lab-leak fears are putting virologists under scrutiny." The article refers us to an editorial of the American Society for Microbiology which makes it sound like microbiologists have zero interest in making their activities safer.  The editorial endorses gain-of-function research in which viruses or bacteria are artificially engineered to improve their deadliness.  We read this about such gain-of-function research: "We should clearly delineate the benefits of the research that we perform, including explaining why GOF [gain-of-function] is the preferred approach to reaping those benefits in those cases where it is." The editorial attempts to reassure us by saying, "We must acknowledge that not every gain-of-function experiment carries the risk of global catastrophe." So this is supposed to reassure us, that when virologists monkey with virus genomes they are not always risking a global catastrophe? That sounds about as reassuring as a son saying, "Mom, don't worry about me playing Russian Roulette -- sometimes when I play the gun has no bullets." 

We then have in the Washington Post article this piece of misleading speech trying to justify perilous gain-of-function research:

"To probe the coronavirus’s secrets requires experiments that may involve combining two strains and seeing what happens. The creation of recombinant or chimeric viruses in the laboratory is merely mimicking what happens naturally as viruses circulate, researchers say. 'That’s what viruses do. That’s what scientists do,' said Ronald Corley, the chair of Boston University’s microbiology department and former director of NEIDL."

No, viruses are not intelligent agents, and do not perform experiments combining the genomes of two different microbes for the sake of "seeing what happens." And viruses don't have fancy technologies such as CRISPR allowing them to create exactly whatever Frankenstein-style microbe they wish to create.  The Washington Post article then tells us, "Experiments in the United States and the Netherlands created versions of the H5N1 influenza virus that could be more easily transmitted among ferrets," and that as a result of this "The National Science Advisory Board for Biosecurity warned of a possible 'unimaginable catastrophe.' " 

But then we have a virologist who makes it sound like the biggest problem is that people are worried about gene-splicing virologists unleashing such horrors on the world. The virologist says, "I knew the world was crazy, but I hadn’t exactly realized how crazy." But why would it be crazy to have such perfectly reasonable concerns? 

The Washington Post article then attempts to convince us that gene-splicing gain-of-function virus laboratories in the US are safe, telling us this:

"The NEIDL is basically a fortress. Hundreds of security cameras are sprinkled through the building, along with motion sensors and retina scanners...'I feel safer working in this building than being out on the streets walking around,' said Corley, suggesting he would be more likely to catch a bad virus outside than while working among pathogens in his laboratory."

So what is the idea here, that the security cameras and motion sensors and retina scanners are supposed to catch escaping invisible viruses? That's the funniest joke I've heard this month.  As for Corley's inner feelings, they don't have any weight in the world of science, where safety should be measured by things that are numerically quantifiable. 

A 2015 USA Today article entitled "Inside America's Secretive Biolabs" discussed many severe problems with safety in such pathogen research labs, telling us this:

"Vials of bioterror bacteria have gone missing. Lab mice infected with deadly viruses have escaped, and wild rodents have been found making nests with research waste. Cattle infected in a university's vaccine experiments were repeatedly sent to slaughter and their meat sold for human consumption. Gear meant to protect lab workers from lethal viruses such as Ebola and bird flu has failed, repeatedly.

A USA TODAY Network investigation reveals that hundreds of lab mistakes, safety violations and near-miss incidents have occurred in biological laboratories coast to coast in recent years, putting scientists, their colleagues and sometimes even the public at risk.

Oversight of biological research labs is fragmented, often secretive and largely self-policing, the investigation found. And even when research facilities commit the most egregious safety or security breaches — as more than 100 labs have — federal regulators keep their names secret.

Of particular concern are mishaps occurring at institutions working with the world's most dangerous pathogens in biosafety level 3 and 4 labs — the two highest levels of containment that have proliferated since the 9/11 terror attacks in 2001. Yet there is no publicly available list of these labs, and the scope of their research and safety records are largely unknown to most state health departments charged with responding to disease outbreaks. Even the federal government doesn't know where they all are, the Government Accountability Office has warned for years."

You should read the entire USA Today article. It will give you the chills. We read of the pathogen death of a researcher in a lab, and we read this:

"A lab accident is considered by many scientists to be the likely explanation for how an H1N1 flu strain re-emerged in 1977 that was so genetically similar to one that had disappeared before 1957 it looked as if it had been 'preserved' over the decades. The re-emergence 'was probably an accidental release from a laboratory source,' according to a 2009 article in the New England Journal of Medicine."

There is a system for classifying the biosafety of pathogen labs, with Level 4 being the highest level currently implemented. It is often claimed that Level 4 labs have the highest possible security. That is far from true. At Level 4 labs, workers arrive for shift work, going home every day, just like regular workers. It is easy to imagine a much safer system in which workers would work at a lab for an assigned number of days, living right next to the lab. We can imagine a system like this:

Level 5 Lab

Under such a scheme, a door system would prevent anyone from entering the quarantine area unless the person had just finished working for a Research Period in the green and red areas. Throughout the Research Period (which might be 2, 3 or 4 weeks), workers would work in the red area and live in the green area. Once the Research Period had ended, workers would move to the blue quarantine area for two weeks. Workers with any symptoms of an infectious disease would not be allowed to leave the blue quarantine area until the symptoms resolved.  
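
For concreteness, below is a minimal sketch (in Python) of the door-system logic just described; the 21-day Research Period is just one of the 2-to-4-week options mentioned above, and the code is only an illustration of the scheme, not a description of any existing lab system.

```python
from datetime import date, timedelta

# A minimal sketch of the "Level 5" door-system logic described above.
# The zone names follow the scheme in this post; the period lengths are assumptions.
RESEARCH_PERIOD = timedelta(days=21)    # time spent in the red (lab) and green (living) areas
QUARANTINE_PERIOD = timedelta(days=14)  # time spent in the blue quarantine area afterward

ALLOWED_TRANSITIONS = {
    "outside": {"green"},        # workers enter the living area to start a Research Period
    "green":   {"red", "blue"},  # living area connects to the lab and, at period's end, to quarantine
    "red":     {"green"},        # the lab connects only to the living area
    "blue":    {"outside"},      # quarantine is the only exit path
}

def may_enter_quarantine(research_start: date, today: date) -> bool:
    """The door to the blue area opens only after a full Research Period."""
    return today - research_start >= RESEARCH_PERIOD

def may_leave_quarantine(quarantine_start: date, today: date, has_symptoms: bool) -> bool:
    """The exit door opens only after the quarantine period, and only if symptoms have resolved."""
    return (today - quarantine_start >= QUARANTINE_PERIOD) and not has_symptoms

# Example: a worker who starts on January 1 may enter quarantine on January 22,
# and may leave on February 5 at the earliest, provided no symptoms are present.
print(may_enter_quarantine(date(2023, 1, 1), date(2023, 1, 22)))                       # True
print(may_leave_quarantine(date(2023, 1, 22), date(2023, 2, 5), has_symptoms=False))   # True
```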

There are no labs that implement such a scheme, which would be much safer than a Level 4 lab (the highest safety now used).  I would imagine the main reason such easy-to-implement safeguards have not been implemented is that gene-splicing virologists do not wish to be inconvenienced, and would prefer to go home from work each night like regular office workers.  We are all at peril while they enjoy such convenience. Given the power of gene-splicing technologies such as CRISPR, and the failure to implement tight-as-possible safeguards, it seems that some of today's pathogen gene-splicing is recklessly playing "megadeath Russian Roulette."   

hazards of science
Don't worry a bit -- their offices have security cameras

Even when they don't deal with pathogens, gene-splicing scientists may create enormous hazards. The book Altered Genes, Twisted Truth by Steven M. Druker is an extremely thorough look at potential hazards involving genetically modified organisms (GMOs). The book has been endorsed by more than 10 PhDs, some of whom are biologists. Here is a rather terrifying passage from page 192 of the book:

"Accordingly, several experts believe that these engineered microbes posed a major risk. Elaine Ingham, who, as an Oregon State professor, participated in the research that discovered those lethal effects, points out that because K. planticola are in the root system of all terrestrial plants, it is justified to think that the commercial release of the engineered strain would have endangered plants on a broad scale – and, in the most extreme outcome, could have destroyed all plant life on an entire continent, or even on the entire earth...Another scientist who thinks that a colossal threat was created is the renowned Canadian geneticist and ecologist David Suzuki. As he puts it, 'The genetically engineered Klebsiella could have ended all plant life on the continent.' ”

Tuesday, January 17, 2023

50 Types of Questionable Research Practices

A very naive assumption often made about research papers is that they must be good if they got published in a major science journal.  The truth is that leading science journals such as Cell very often publish research papers of extremely low quality, papers written by authors guilty of multiple types of Questionable Research Practices. Typically the authors of the papers and the referees judging whether the papers should be published are members of the same research community, where bad research habits may predominate.  The referees are unlikely to reject papers for committing sins that the referees themselves have committed in their own papers. 

Below is a list of 50 Questionable Research Practices that may be committed by researchers. 

  1. Not publishing (or writing up and submitting for publication) a study producing negative or null results.
  2. Not publishing (or writing up and submitting for publication) a study producing results conflicting with common beliefs or assumptions or the author's personal beliefs.
  3. Asking for authorship credit on a study or paper which you did not participate in.
  4. Allowing some other person to appear as an author of a paper which he did not substantially contribute to. 
  5. Fabrication of data, such as reporting observations that never occurred.
  6. Selectively deleting data to help reach some desired conclusion or a positive result, perhaps while using "outlier removal" or "qualification criteria" to try to justify such arbitrary exclusions, particularly when no such exclusions were agreed on before gathering data, or no such exclusions are justifiable. 
  7. Selectively reclassifying data to help reach some desired conclusion or a positive result.
  8. Concealing results that contradict your previous research results or your beliefs or assumptions.
  9. Modifying results or conclusions after being pressured to do so by a sponsor.
  10. Failing to keep adequate record of all data observations relevant to a study.
  11. Failing to keep adequate notes of a research process.
  12. Failing to describe in a paper the "trial and error" nature of some exploratory inquiry, and making it sound as if you had from the beginning some late-arising research plan misleadingly described in the paper as if it had existed before data was gathered. 
  13. Creating some hypothesis after data has been collected, and making it sound as if data was collected to confirm such a hypothesis (Hypothesizing After Results are Known, or HARKing).
  14. "Slicing and dicing" data by various analytical permutations, until some some "statistical significance" can be found (defined as p < .05), a practice sometimes called p-hacking. 
  15. Requesting from a statistician some analysis that produces "statistical significance," so that a positive result can be reported.  
  16. Using concepts, hypothetical ideas and theories you know came from other scholars, without mentioning them in a paper.
  17. Deliberately stopping the collection of data at some interval not previously selected for the end of data collection, because the data collected thus far met the criteria for a positive finding or a desired finding, and a desire not to have the positive result "spoiled" by collecting more data. 
  18. Failing to perform a sample size calculation to figure out how many subjects were needed for a good statistical power in a study claiming some association or correlation.
  19. Using study group sizes that are too small to produce robust results in a study attempting to produce evidence of correlation or causation rather than mere evidence of occasional occurrence. 
  20. Attempting to justify too-small study group sizes by appealing to typical study group sizes used by some group of researchers doing similar work, as if some standard was met, when it is widely known that such study group sizes are inadequate. 
  21. Use of unreliable and subjective techniques for measuring or recording data rather than more reliable and objective techniques (for example, using sketches rather than photographs, or attempting to measure animal fear by using subjective and unreliable judgments of "freezing behavior" rather than objective and reliable measurements of heart rate spikes). 
  22. Failing to publicly publish a hypothesis to be tested and a detailed research plan for gathering and interpreting data prior to the gathering of data, or the use of "make up the process as you go along" techniques that are never described as such. 
  23. Failure to follow a detailed blinding protocol designed to minimize the subjective recording and interpretation of data.
  24. Failing to use known observed facts and instead using speculative numbers (for example, using projected astronomical positions ages in the future rather than known astronomical positions, or "projected future body weight" rather than known current body weight).
  25. Making claims about research described in a paper that are not justified by any observations or work appearing in the paper.
  26. Giving a paper a title that is not justified by any observations or work appearing in the paper.
  27. "Math spraying": the heavy use of poorly documented equations involving mathematics that is basically impossible to validate because of its obscurity. 
  28. Making improper claims of scientific agreement on debatable topics, often with unjustified phrases such as "scientists agree" or "no serious scientist doubts" or claims using the ambiguous word "consensus"; or making unsupported assertions that some particular claim or theory is "well-established" or the "leading" explanation for some phenomenon.  
  29. Faulty quotation: writing as if some claim was established by some previous paper cited with a reference, when the paper failed to establish such a claim (the paper here, for example, found that 1 in 4 paper citations in marine biology are inappropriate). 
  30. Lazy quotation:  writing as if some claim was established by some previous paper cited with a reference, when the paper was not read or understood by those making the citation. 
  31. Including some chart or image or part of an image that did not naturally arise from your experimental or observational activities, but was copied from some other paper not mentioned or data arising from some different study you did. 
  32. Altering particular pixels of a chart or image to make the chart or image more suggestive of some research finding you are claiming.
  33. Failing to use control subjects in an experimental study attempting to show correlation or causal relation, or failure to have subjects perform control tasks.  In some cases separate control subjects are needed. For example, if I am testing whether some drug improves health, my experiment should include both subjects given the drug, and subjects not given the drug. In other cases mere "control tasks" may be sufficient. For example, if I am using brain scanning to test whether recalling a memory causes a particular region of the brain to have greater activation, I should test both tasks in which recalling memory is performed, and also "control tasks" in which subjects are asked to think of nothing without recalling anything. 
  34. Using misleading region colorization in a visual that suggests a much greater difference than the actual difference (such as showing in bright red some region of a brain where there was only a 1 part in 200 difference in a BOLD signal, thereby suggesting a "lighting up" effect much stronger than the data indicate).
  35. Failing to accurately list conflicts of interests of researchers such as compensation by corporations standing to benefit from particular research findings or owning shares or options of the stock of such corporations. 
  36. Failing to mention (in the text of a paper or a chart) that a subset of subjects were used for some particular part of an experiment or observation, giving the impression that some larger group of subjects was used. 
  37. Using misleading language suggesting to casual readers that the main study group sizes were much larger than the smallest study group sizes used (for example, claiming in an abstract that 50 subjects were tested, and failing to mention that the subjects were divided up into several different study groups, with most study groups being smaller than 10).  
  38. Mixing real data produced from observations with one or more artificially created datasets, in a way that may lead readers to assume that your artificially created data was something other than a purely fictional creation. 
  39. The error discussed in the scientific paper here ("Erroneous analyses of interactions in neuroscience: a problem of significance"), described as "an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05)."  The authors found this "incorrect procedure" occurring in 79 neuroscience papers they analyzed, with the correct procedure occurring in only 78 papers. 
  40. Making vague references in the main body of a paper to the number of subjects used (such as merely referring to "mice" rather than listing the number of mice), while only giving in some "supplemental information" document an exact statement of the number of subjects used.
  41. Using in a matter-of-fact way extremely speculative statements when describing items observed, such as using the extremely speculative term "engram cells" when referring to some cells being observed, or calling certain human subjects "fantasy-prone." 
  42. Exposing human research participants to significant risks (such as exposure to lengthy medically unnecessary brain scans) without honestly and fully discussing the possible risks, and getting informed consent from the subjects that they agree to being exposed to such risks. 
  43. Providing inaccurate information to human subjects (for example, telling them "they must continue" to perform some act when subjects actually have the freedom to not perform the act), or telling them inaccurate information about some medicine human subjects are given (such as telling subjects given a placebo that the pill will help with some medical problem). 
  44. Failing to treat human subjects in need of medical treatment for the sake of some double-blind trial in which half of the sick subjects are given placebos.  
  45. Assuming without verification that some human group instructed to do something (such as taking some pill every day) performed the instructions exactly. 
  46. Turning numerically continuous variables into discrete non-continuous categories (such as turning temperature readings spanning 50 degrees F into four categories of cold, cool, warm and hot). 
  47. Speaking as if changes in some cells or body chemicals or biological units such as synapses are evidence of a change produced by some experimentally induced experience, while ignoring that such cells or biological units or chemicals undergo types of constant change or remodeling that can plausibly explain the observed changes without assuming any causal relation to the experimentally induced experience. 
  48. Selecting some untypical tiny subset of a much larger set, and overgeneralizing what is found in that tiny subset, suggesting that the larger set has whatever characteristics were found in the tiny subset (a paper refers to "the fact that overgeneralizations from, for example, small or WEIRD [Western, Educated, Industrialized, Rich, and Democratic] samples are pervasive in many top science journals").
  49. Inaccurately calculating or overestimating statistical significance (a paper tells us "a systematic replication project in psychology found that while 97% of the original studies assessed had statistically significant effects, only 36% of the replications yielded significant findings," suggesting that statistical significance is being massively overestimated). 
  50. Inaccurately calculating or overestimating effect size.
bad science
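
To make items 14 and 17 concrete, below is a minimal simulation sketch; the sample sizes, batch size and number of simulated studies are illustrative assumptions, not figures from any particular paper. Both simulated groups are drawn from the same distribution, so every "significant" result is a false positive, and the simulation shows how stopping data collection as soon as p < .05 appears inflates the false-positive rate well above the nominal 5 percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping_trial(max_n=100, start_n=10, step=5, alpha=0.05):
    """Simulate item 17: keep adding subjects and re-testing, stopping as soon as p < alpha.
    Both groups come from the same distribution, so any 'significant' result is a false positive."""
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while len(a) <= max_n:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:
            return True            # researcher stops early and reports a "positive" finding
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))
    return False                   # honest null result after collecting all planned data

trials = 2000
false_positives = sum(optional_stopping_trial() for _ in range(trials))
print(f"False-positive rate with optional stopping: {false_positives / trials:.1%}")
# A single pre-planned test at alpha = 0.05 would yield about 5% false positives;
# re-testing after every batch and stopping early pushes the rate well above that.
```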

In some fields such as cognitive neuroscience, most papers are guilty of several of these Questionable Research Practices, often more than five or ten of them. In compiling this list I got some items from the paper "Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity."

Friday, January 13, 2023

It Was Like a Beggar Bragging About His Income 100 Reincarnations in the Future

In my post "Some Brain Wave Analysts Are Like 'Face of Jesus in My Toast' Claimants" I compared certain neuroscientists to someone who looks at his toast every day, year after year, looking for something that might look like the face of Jesus. A person looking for the face of Jesus in his toast might be frustrated after many days and years of such activity with no success. He might then resort to taking pictures of his toast every day, and playing around with image manipulation algorithms such as sharpening, contrast adjustment, color alteration, saturation adjustments, solarization, Gaussian blur, and so forth. Finally, after torturing the photographic data every day in many different ways, for hundreds or thousands of days, he might announce he has a photo that looks a little like the face of Jesus. 

Such "keep torturing the data until it confesses" tactics are extremely dubious. But they go on abundantly in neuroscience and fields such as astrophysics, evolutionary biology and cosmology. The modern neuroscientist has 1001 ways to play around with data until it rather seems to suggest some story to his liking. 

torture data until it confesses

Such "keep torturing the data until it confesses" funny business also goes on abundantly in the field of cosmology. We saw a big example of that recently in an article on the Big Think website, a site which  often gives us very bad examples of bad reasoning and misleading language. The article by a scientist is entitled "The Case for Dark Matter Has Strengthened." Nothing of the sort has happened, just something like "keep torturing the data until it confesses," or even worse.

The theory of dark matter arose because scientists observed stars rotating around the center of our galaxy at rates different from the rates predicted by gravitational theory. To try to resolve this discrepancy, astrophysicists created the theory of dark matter: the idea that each galaxy such as ours sits within a much larger cloud of invisible matter. The theory required very specific assumptions about the distribution of this dark matter: the idea that our disk-shaped galaxy is surrounded by a spherical halo of dark matter. 
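
To illustrate the discrepancy, here is a minimal calculation comparing what Newtonian gravity predicts for orbital speeds (under the crude simplifying assumption that the galaxy's visible mass is concentrated toward the center) with the roughly flat rotation speeds of about 200-240 kilometers per second that astronomers actually measure in the outer Milky Way; the visible-mass figure used is only an order-of-magnitude assumption.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1.0e41   # kg; rough order-of-magnitude assumption for the Milky Way's visible mass
KPC = 3.086e19       # meters per kiloparsec

# Keplerian prediction if essentially all visible mass sat inside each radius (a simplification):
for r_kpc in (5, 10, 20, 40):
    v_km_s = math.sqrt(G * M_VISIBLE / (r_kpc * KPC)) / 1000
    print(f"r = {r_kpc:>2} kpc: predicted ~{v_km_s:.0f} km/s")

# The predicted speeds fall off as 1/sqrt(r), while measured rotation speeds in the outer
# galaxy remain roughly flat (about 200-240 km/s); that mismatch is the discrepancy that
# motivated the dark matter hypothesis.
```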

For decades scientists "bet the farm" on the Lambda Cold Dark Matter theory, a move which made little sense. There were never any direct observations of any such thing as cold dark matter, so scientists had to claim it was invisible.  And even though cosmologists and astrophysicists believed in it with a fervor, cold dark matter never had any place in the Standard Model of Physics. How ironic that scientists often blast people for having faith in important invisible realities, when they have put such unquestioning faith in things they say are important, invisible and never directly observed: dark matter and dark energy.  Maybe their thinking is: "you can believe in important invisibles but only OUR important invisibles." 

One of the biggest reasons for rejecting this theory of dark matter is that the distribution of our galaxy's satellite galaxies does not match the distribution predicted by dark matter theory. Our Milky Way galaxy is surrounded by more than a dozen much smaller "dwarf galaxies." The dark matter theory predicts that such satellite galaxies should be randomly distributed in a spherical volume surrounding our galaxy. But instead our galaxy's satellite galaxies are found in a disk-like distribution, near the plane of our disk-like galaxy. The Big Think article confesses, "There has been one observation that is extremely difficult for the dark matter camp to explain: the distribution of small galaxies surrounding bigger ones."  

The article makes this confession:

"The Milky Way is a spiral galaxy, which means it looks a little like a spinning disk, about 100,000 light-years across and 12,000 light-years thick — essentially a cosmic pizza pan. This is the shape of the visible stars and galaxies. However, dark matter theory says that dark matter is essentially a big, spherical cloud, maybe 700,000 light-years across, with the Milky Way located at the center. Because dark matter is important in galaxy formation, dark matter theory suggests that the satellite galaxies of the Milky Way should also be spherically distributed around it. On the other hand, if dark matter isn’t real, and the correct explanation for speedily rotating galaxies is that the laws of physics must be modified, scientists predict that the satellite galaxies should orbit the Milky Way in roughly the same plane as the Milky Way — essentially extensions of the Milky Way itself. When astronomers measure the location of the 11 known satellite galaxies of the Milky Way, they find that they are located in the plane of the Milky Way. Furthermore, the observed configuration is very improbable from a dark matter point of view.   

Instead of favoring the dark matter theory, the positions of our galaxy's satellite galaxies favor a different theory, the theory of MOND (Modified Newtonian Dynamics), an alternate theory of gravity. So how does our Big Think scientist attempt to deal with this embarrassing situation? In a section most ridiculously titled "Another Win for Dark Matter," we read this (referring to two Leo galaxies that are among the 12+ satellite galaxies of our galaxy):

"Both Leo galaxies are currently located approximately in the plane of the Milky Way. However, the other, closer satellite galaxies are more spherically distributed, although not completely so. If the Leo satellites are excluded from the analysis, the data no longer strongly favors the modified physics hypothesis. Importantly, when the motion of the Leo galaxies is measured by the Gaia satellite, the authors found that their location in the plane of the Milky Way is a temporary one. When they project their location a billion years into the past or future (a blink of an eye, cosmologically speaking), these galaxies are no longer located in the galactic plane."

Here we have several ridiculous maneuvers or errors:

(1) The first ridiculous maneuver is an appeal to some analysis in which you simply ignore two of the 12+ satellite galaxies of our galaxy. That is utterly senseless. There are more than 12 of these satellite galaxies, and you should be paying attention to all of them. Arbitrarily excluding two of the 12+ dwarf galaxies from the analysis is like some male saying to his girlfriend, "If you exclude from your analysis my $150,000 college debt and my $35,000 credit card debt, you'll see my financial picture looks pretty good."

(2) The second ridiculous maneuver is making some projection of satellite galaxy positions a billion years into the past and a billion years into the future, in order to try to escape the embarrassing present data that contradicts the theory you are arguing for.  We don't know what the position of satellite galaxies will be a billion years from now, nor do we know what their position was a billion years ago. All we know is what their position is now, and that is all we should be considering. Asking someone to ignore the present position of the galaxies and consider some projected position a billion years from now is like some male (whose income is from street begging) asking his girlfriend to ignore his current financial condition, and to consider his financial condition 100 reincarnations in the future, when he may be a billionaire. It is also utterly deceptive to be referring to a billion years as "a blink of an eye." Even in cosmological terms, a billion years is one thirteenth of the age of the universe, something vastly different from "a blink of an eye."

(3) The author has erroneously claimed that the Milky Way has 11 satellite galaxies, even though the article here tells us that the Milky Way has at least 14 satellite galaxies, and the paper here gives in Figure 1 a chart listing the names, masses and luminosities of 18 satellite galaxies of the Milky Way, while stating "The Milky Way has at least twenty-three known satellite galaxies." The author refers to a paper here that arbitrarily chose to analyze only 11 of these satellite galaxies, apparently failing to recognize that the paper was analyzing only a subset of the Milky Way's satellite galaxies. 

The Big Think article shows an extreme example of some of the worst tendencies of today's scientists: their tendency to try to slice, dice, twist, shake, contort and distort beyond recognition observational data they don't like. It would be charitable to call such tactics "keep torturing the data until it confesses." A better description would be "keep torturing the data like crazy until you get the faintest hard-to-understand mumble or whisper, and then claim that sound you didn't understand was a confession."

This type of thing goes on all the time all over the place in the world of science:

  • What happens when an animal experimental study produces merely a null result? The data is "fixed" by throwing out certain observations, which is justified on the basis of "excluding outliers."
  • What happens when a human experimental study fails to produce anything but a null result? The data is "fixed" by throwing in some new "qualification criteria," which are used to exclude certain subjects whose data prevented the reporting of a positive result. 
  • What happens when a genome does not have the characteristics wanted for some evolutionary story line? The problem is "fixed" by replacing that genome with some "projected genome" using the guesswork called phylogenetics. 
  • What happens when astronomical objects have some position not matching what is predicted by some favored theory? The problem is "fixed" by replacing the current known positions with projected positions thousands, millions or billions of years into the past or future. 
  • What happens when a study fails to support the hypothesis it was trying to support? The problem is "fixed" by creating a new hypothesis in the middle of doing the study. This example of Questionable Research Practices (called HARKing or Hypothesizing After Results are Known) is not possible when a study is pre-registered or a "registered report." But most studies are not pre-registered or a "registered report."
  • What happens when a study looking for some effect fails to find any good evidence for the effect? The problem may be "fixed" by gathering more data, or trying a different set of test subjects. As soon as the effect barely shows up, the gathering of data is stopped, lest things be spoiled by having the "statistically significant" result disappear when still more data is gathered.  

In the paper "The Dubious Credibility of Scientific Studies" by Natalie Ferrante of Stanford University, we read this:

"The current process of undertaking, implementing, reviewing, and finally publishing a scientific study is riddled with flaws, as study results are subjected to many biases and interpretations at every level between inception and publication. As a result, when these studies finally reach the public, they are often depicted in ways that fail to reflect the genuine results and are at times utterly incorrect. Industries touting their products, scientists influenced by grants and prestige, reviewers adhering to personal
political agendas, and journalists pressed to sell papers all in turn contribute to the inherently skewed depiction of scientific results to the public. These factors have allowed for a highly unpredictable credibility in scientific reporting, an observation that has been highly overlooked and disregarded. The dissemination and publicity of this incorrect or skewed information, which is believed to be scientifically accurate, can have a detrimental effect on the public in their everyday lives."