
Our future, our universe, and other weighty topics


Friday, July 30, 2021

Swaggering Goofs of the Galileo Project Announcement

Harvard astronomy professor Avi Loeb recently announced a project he calls the Galileo Project. His announcement of the project in Scientific American is marked by a haughty pomposity that sometimes strays into outright error.

Things go wrong in the very title and subtitle of this announcement. The title is "Announcing a New Plan for Solving the Mystery of Unidentified Aerial Phenomena."  Since humans have been greatly puzzled by unidentified aerial phenomena for many decades, and have made no real progress in solving the problem, we should all be very humble before this great mystery.  It seems grandly presumptuous to announce "a plan for solving the mystery of unidentified aerial phenomena," just as it would sound grandly presumptuous to announce "a plan for solving the mystery of the universe's origin."  A better title would have been something like "Announcing a New Plan for Further Study of Unidentified Aerial Phenomena."

The subtitle is: "The newly organized Galileo Project will use a three-pronged approach to replace unreliable eyewitness reports with reproducible scientific observations."  The project announced is mainly some high-tech automated monitoring system. But why would anyone think that such a system would be a replacement for eyewitness reports, as if such reports would stop once such a system is set up? 

When we read the paragraphs of Loeb's announcement, we get some insight into his attitude towards eyewitness reports. It rather seems to be an attitude of snobbish contempt towards those who report seeing strange things with their eyes or who take simple photos or videos (what Loeb calls "low-quality instrumental data") rather than using fancy expensive high-tech equipment like astronomers use. 

It is easy to explain why this attitude makes no sense. Observations do not become more reliable when we use fancy equipment to make them. In fact, in some cases it seems the more complicated the equipment used, the higher the risk that something will go wrong.  A very simple point-and-shoot camera is almost foolproof, but there are 101 ways to go wrong if you are using some fancy camera with 200 features and 75 filters, or some fancy machine that has fifty little dials and buttons. So it simply isn't true that the fancier the equipment, the more reliable the observation. 

Loeb makes the statements below:

"In the courtroom,  eyewitness testimony can lead to a life sentence in jail. But in science, such testimony is of limited value. Science mandates quantitative measurements by instruments, removing the subjective impressions of humans from the balance scale of reliability."

No, science does not mandate "quantitative measurements by instruments." Some scientific studies use instruments to make quantitative measurements, but very many do not. Many scientific papers are based on the simple eyewitness testimony of the scientist or some other witness, or simple photographs that are not quantitative. A paper or statement does not become more scientific when it uses instruments to make quantitative measurements.  High-tech observations are very often subject to a thousand-and-one different interpretations, and you do not remove "the subjective impressions of humans" by using high-tech equipment. 

Any astronomer knows very well that we do not remove the subjective interpretations of humans when we do things such as use spectroscopy. The analysis of spectra is often highly subjective, since there are many different ways to interpret spectroscopy results, because of issues such as "spectral overlap" (in which signals from different metals or molecules appear in the same spot, making it hard to sort things out).  Similarly, the analysis of faintly observed things in telescopes is not at all something free from the subjective.  The perfect example is Loeb's very subjective interpretation of the faint photographic specks showing the distant object ‘Oumuamua. Most astronomers disagree with his subjective interpretation of that object as an extraterrestrial spaceship (for reasons such as those discussed here). 

In the previously quoted statement, Loeb rather seems to be insinuating that scientists follow evidence standards higher than those of courts.  But that isn't true. Courts have formal, elaborate standards of evidence such as the Federal Rules of Evidence used by US federal courts.  Scientists have no formal written standards of evidence, and they often seem to be accepting or excluding evidence based on capricious vacillating whims. 

Next Loeb states this:

"Similarly, one-time events—miracles, for example—do not have scientific credibility. Science rests on reproducible results that can be replicated by creating similar circumstances over and over again."

No, it is not true that one-time events do not have scientific credibility. Many things observed only one time do have scientific credibility when their occurrence is backed up by sufficient evidence. For example, each time a supernova explosion occurs, the explosion of that particular star occurs only once. To give another example, the eruption of Mt. St. Helens occurred only once, but such an eruption has perfectly good scientific credibility. Loeb is a professor of astronomy, which is not an experimental science, not a laboratory science, and is mostly not based on "reproducible results that can be replicated by creating similar circumstances over and over again." His claim that "a complete quantitative knowledge of the conditions in an experimental setup is a fundamental prerequisite for scientific data to be credible" is incorrect, because a large fraction of scientific data is not gathered in experiments but through non-experimental observation. 

What is really going on when scientists make statements like the ones just quoted, and similar statements Loeb makes trying to pretty much belittle all previous observations of UFOs (also called UAPs)?  What seems to be happening is that scientists are making excuses for scholarly indolence. They're making lame excuses for not studying some large body of evidence that they haven't studied. We should ignore such excuses, recognizing them as just reasons some people give for failing to seriously study some topic.  

So, for example, when Loeb tells us "the data on ‘Oumuamua was obtained through scientific observations on fully equipped state-of-the-art telescopes, whereas even the best UFO reports stem from a jittery camera on a fighter jet maneuvering along an unknown path," we should chuckle at this "holier-than-thou" comparison, because (1) the observations of ‘Oumuamua were just observations of a blurry little speck by a telescope seeing something at its observational limits, and (2) since Loeb is not an actual scholar of UFO reports, no one should think he is qualified to be telling us what are the "best UFO reports."  And when Loeb belittles military observations on the grounds that "military personnel have insufficient training in science and no authority over unexpected phenomena in the sky," it seems like just a lame excuse for ignoring observations by reliable witnesses, often involving good photographic evidence. 

The Galileo Project announced by Loeb consists of three things:

(1) Setting up some "high resolution multi-detector" machines that will scan the sky for unidentified phenomena, machines that may combine cameras and other scientific equipment such as telescopes, infrared sensors, radar or spectrographic equipment. 
(2) Some effort looking for more objects coming into the solar system from outside of the solar system, like the very rare ‘Oumuamua object. We can expect that little will come from this, because ‘Oumuamua is now out of range of our telescopes, and because objects like it are not expected to be seen more than very rarely (perhaps never again in the next ten years). 
(3) Some effort looking for mysterious satellites orbiting the earth, satellites created by extraterrestrials. There is no reason to believe any such satellites exist.  Looking (among the many orbiting satellites) for orbiting satellites from extraterrestrials makes about as much sense as looking in New York City for extraterrestrial impostors pretending to be humans. 

None of this is anything that should get anyone particularly excited. Currently more than a billion people carry cameras with them whenever they walk around outside, in the form of camera-equipped smartphones, and there are also very many millions of security cameras that monitor the outdoors 24 hours a day.  There seems to be little chance that some dramatic UFO would be detected by a small number of machines set up by this Galileo Project, and not by any other 24-hour camera or any other people photographing something they saw in the sky. 

There is no reason to think that some data from some Internet-connected observation machine will necessarily be better evidence than a good eyewitness report from reliable witnesses.  Internet-connected machines can be hacked, and the databases where they store data can be hacked; but the human mind cannot be hacked. The same hackers that are causing so much trouble for corporations these days might upload some fake UFO data to an Internet-connected observation machine or its database, for any number of reasons. 

There is a reason why an eyewitness testimony from an ordinary person might have just as much weight or more weight than something claimed by a scientist. Scientists have very often shown a tendency to report finding things that are not really there. This goes on all the time in the worlds of neuroscience and evolutionary biology, where scientists use all kinds of statistical tricks and wishful interpretations to summon up things that were probably not really there. The same thing can happen in cosmology and astronomy. An example is the report of phosphine in the atmosphere of Venus. It was made by one team of scientists, and quickly disputed by other scientists who said there was no actual evidence for phosphine in the atmosphere of Venus.  As I explain in this post, scientists have many ways of conjuring up phantasms of their own creation.  But such tricks are unknown to the ordinary man.  An ordinary person's report of seeing something much different from anything he had ever seen before may therefore have just as much weight or even more weight than some scientist's claim of finding something that showed up after elaborate statistical transformations or data massaging or wishful data analysis, particularly if the ordinary person had nothing to gain by making such a report.  Conversely, scientists often have much to gain by interpreting something in some dubious way, such as when a professor writes a book claiming some barely visible speck seen at the observational limits of telescopes is an extraterrestrial spaceship. 

A July document by Loeb has this laughable comment about the "high-resolution multi-detector" machines proposed for the Galileo Project: "A megapixel image of the surface of an unusual object will allow us to distinguish the label: 'Made in China' from the alternative: 'Made on Exo-Planet X' ".  So we'll be able to tell if something is from another planet because the UFO will have an English label telling us which planet it came from? Hilarious.  

The July document by Loeb that I quote is one that takes for granted that some closeup telescope photo of a UFO would allow us to tell whether it was from Earth or some other planet.  But there is no photo of a UFO that would allow us to know that it was from another planet. Drones are incredibly advanced these days, as we saw at the Olympics opening ceremony when an army of flying drones formed a globe shape with the earthly continents.  With drones being so advanced, humans can make almost any imaginable appearance for an object in the sky (for example, a huge flying saucer shape like the lifted-by-a-black-helicopter saucer shape that appeared in the sky during the 1984 Olympic closing ceremonies).   You could presumably tell whether something was from beyond Earth if you tracked motion that no object of earthly origin is capable of.  But telescopes aren't suitable for tracking incredibly fast-moving objects in the sky.   

It is the right of every person to make potentially important observations of mysterious phenomena. When scientists try to suggest that UFO observations don't count unless they are made by fancy expensive equipment only used by scientists,  it smells like an elitist move in which the privileged few try to keep power exclusively in their hands, wrongly denying a slice of power to the common masses.  It's like some guy at the New York Times saying, "It doesn't count if the news report came from a small town paper; it only counts if it originated  in the New York Times or the Washington Post." 

Strangely, referring to observations of Unidentified Aerial Phenomena,  Loeb says that the Galileo Project will "not seek data from government-owned sensors that were not designed for this purpose."  Ignoring observations of something from instruments that were not designed to see that particular thing is not a defensible principle.  It's like saying, "We should ignore exploding bombs seen by our security cameras, because our cameras were not designed to see exploding bombs."  Also, in his July document "Getting a Megapixel Image of UAP," Loeb refers to no instruments but telescopes that were themselves not designed for photographing UFOs or Unidentified Aerial Phenomena.

Loeb's subtitle brags about "reproducible scientific observations." "Reproducible observations" is a phrase correctly applied to laboratory experiments that produce the same results each time they are run. But UFOs appear only very rarely.  Some fancy machines dedicated to observing UFOs will not have much more of a tendency to produce "reproducible scientific observations" of UFOs than will some small-town club of ordinary persons photographing the sky each night from their back yard. 

According to Loeb, this Galileo Project may grow into an "AI/DL system" (an artificial intelligence/deep learning system), and we should be very excited because "ultimately, we may launch our AI/DL systems for interstellar travel towards distant destinations, such as habitable planets around other stars, where they could reproduce themselves with the help of accompanying 3-D printers."  This far-out fantasy is at least much more credible than a recent  Scientific American post by Loeb warning us of the danger of Earth being instantly destroyed by an attacking interstellar dark energy heat wave (a danger that he proposed that we reduce by entering into a treaty with nearby extraterrestrials, like someone who had forgotten that the 1939 Molotov–Ribbentrop Pact did not prevent the Soviet Union from being invaded in 1941). 

Loeb brags that "if there is something out there, we will find it." But it is by no means clear that people who are not scholars of UFO sightings will have a high chance of explaining UFO sightings just because they have fancier equipment.  The first sentence of the project goal statement of the Galileo Project manages to insinuate that no one has previously scientifically and systematically studied UFOs.  That's the kind of misimpression you may get when people who are not scholars of UFOs talk about UFOs. 

The idea of using telescopes to photograph UFOs may excite you  until you consider a few facts. It has been estimated that there are between 200,000 and 500,000 amateur astronomers in the US alone, and there may well be millions of such amateur astronomers in the world. Such people are typically equipped with telescopes that are linked to cameras. What's the first thing someone with a camera-linked backyard telescope is going to do when he sees a UFO in the sky? He'll zoom in and try to get a telescopic photo.  But we don't have much in the way of impressive UFO photos produced by telescopes. So it would seem that the odds are not very good for a project trying to use telescopes to produce a close-up photograph of a UFO. 

Is this how Ivy League professors view things?

Monday, July 26, 2021

How Many Are Put At Risk for Junk Brain Scan Experiments?

In brain-related experiments we very often see defective or questionable research practices. To give examples:

  • Scientists know that the most reliable way to do an experiment is to state in advance a detailed hypothesis, along with how data will be gathered and how data will be analyzed, using methods called "pre-registered studies" or "registered reports." But most experimental neuroscience studies do not follow such a standard, and instead follow a much less reliable "fishing expedition" technique, in which data is gathered, and then the experimenter is free to slice and dice the data in any way he wants, trying to prove any hypothesis he may dream up after collecting the data (a minimal simulation of this problem appears after this list). 
  • Because very many neuroscience observations are the kind of observations where subjective interpretations may be at play, a detailed and rigorous blinding protocol is an essential part of any reliable neuroscience experiment. But such a blinding protocol is rarely used, and in the minority of neuroscience experiments that claim to use blinding, the blinding will usually be only fragmentary and fractional. 
  • It is well-known that neuroscience experiments trying to establish correlations will not be reliable unless they use an adequate sample size. The minimum for a moderately reliable research result is 15 subjects per study group, with each mouse or person used in the study being one such subject. But neuroscience experiments commonly use much smaller study group sizes. It is extremely common to find that a neuroscience experiment used a study group size as small as 13 subjects or 11 subjects or 9 subjects or only 6 subjects. 
  • A web site describing the reproducibility crisis in science mentions a person who was told of a neuroscience lab  "where the standard operating mode was to run a permutation analysis by iteratively excluding data points to find the most significant result," and quotes that person saying that there was little difference between such an approach and just making up data out of thin air. 
  • A press release says this about brain scans: "Hariri said the researchers recognized that 'the correlation between one scan and a second is not even fair, it's poor.'...For six out of seven measures of brain function, the correlation between tests taken about four months apart with the same person was weak....Again, they found poor correlation from one test to the next in an individual. The bottom line is that task-based fMRI in its current form can't tell you what an individual's brain activation will look like from one test to the next, Hariri said....'We can't continue with the same old "hot spot" research,' Hariri said. 'We could scan the same 1,300 undergrads again and we wouldn't see the same patterns for each of them.'" The press release is talking about a scientific study by Hariri and others that can be read here.  The study is entitled, "What is the test-retest reliability of common task-fMRI measures? New empirical evidence and a meta-analysis." The study says, "We present converging evidence demonstrating poor reliability of task-fMRI measures...A meta-analysis of 90 experiments (N=1,008) revealed poor overall reliability."
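
To illustrate the "fishing expedition" problem described in the first bullet above, here is a minimal Python sketch of my own (purely illustrative, not drawn from any of the studies discussed): when an experimenter tests many after-the-fact hypotheses on data containing no real effect, some tests will cross the conventional p < .05 threshold by chance alone.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Purely random data: 20 subjects, 40 measured "brain variables," and one
# outcome score that has no real relationship to any of them.
n_subjects, n_variables = 20, 40
brain_data = rng.normal(size=(n_subjects, n_variables))
outcome = rng.normal(size=n_subjects)

# The "fishing expedition": correlate the outcome with every variable
# and keep whatever happens to cross p < .05.
false_alarms = []
for i in range(n_variables):
    r, p = stats.pearsonr(brain_data[:, i], outcome)
    if p < 0.05:
        false_alarms.append((i, round(r, 2), round(p, 3)))

# With 40 tests at the .05 level, about 2 spurious "hits" are expected
# even though nothing real is present in the data.
print(false_alarms)

A pre-registered study avoids this trap by committing to one hypothesis and one analysis before the data is ever seen.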

About 40% of neuroscience experiments involve rodents. This will come as a surprise to someone who reads the science news, where the headlines rarely refer to mice when announcing experiments that involved mice. When you study the appalling research practices so common in neuroscience experiments, and the very large prevalence of junk science, you may say something like, "It's largely a sham, but at least only mice or rats are being harmed."

But very many neuroscience experiments involve humans.  The experiments that involve humans may put patients at risk, for the sake of junk science results that do nothing to provide robust evidence for anything.

Let us consider fMRI studies. Neuroscientists love to do fMRI brain scan studies. Such scans are typically performed on healthy subjects, with there being no medical reason at all for the brain scan. 

It is a dogma among neuroscientists that fMRI scans are safe. But we should remember that neuroscientists are very dogmatic creatures who often repeat claims that are dubious and unproven (as you can tell by reading the posts on this blog).  Do we really know that fMRI scans are free of any risk?

One danger of fMRI scans is well-known: the risk of the very strong magnets used by such machines causing some metal object to be hurled at a high speed, causing injury or death.  In 2001 a six-year-old boy was killed in the US during an MRI scan, when the machine turned an oxygen canister into a flying projectile.  There is also the risk that the more powerful fMRI scans may raise the risk of cancer in the person getting the scan. 

In the wikipedia.org article for Functional Magnetic Resonance Imaging, we read the troubling passage below:

"Genotoxic (i.e., potentially carcinogenic) effects of MRI scanning have been demonstrated in vivo and in vitro, leading a recent review to recommend 'a need for further studies and prudent use in order to avoid unnecessary examinations, according to the precautionary principle'. In a comparison of genotoxic effects of MRI compared with those of CT scans, Knuuti et al. reported that even though the DNA damage detected after MRI was at a level comparable to that produced by scans using ionizing radiation (low-dose coronary CT angiography, nuclear imaging, and X-ray angiography), differences in the mechanism by which this damage takes place suggests that the cancer risk of MRI, if any, is unknown."

Below are some relevant research papers:

(1) Referring to cardiac magnetic resonance imaging (CMR), the 2015 study here ("Impact of cardiac magnetic resonance imaging on human lymphocyte DNA integrity") states, "The present findings indicate that CMR should be used with caution and that similar restrictions may apply as for X-ray-based and nuclear imaging techniques in order to avoid unnecessary damage of DNA integrity with potential carcinogenic effect."

(2) The 2009 study here ("Genotoxic effects of 3 T magnetic resonance imaging in cultured human lymphocytes") cautions about the use of a high-intensity ("3T and above") MRI, and states that "potential health risks are implied in the MRI and especially HF MRI environment due to high-static magnetic fields, fast gradient magnetic fields, and strong radiofrequency electromagnetic fields," also noting that "these results suggest that exposure to 3 T MRI induces genotoxic effects in human lymphocytes," referring to effects that may cause cancer. 

(3) The 2015 study here ("Biological Effects of Cardiac Magnetic Resonance on Human Blood Cells") found that "Unenhanced CMR is associated with minor but significant immediate blood cell alterations or activations figuring inflammatory response, as well as DNA damage in T lymphocytes observed from day 2 until the first month but disappearing at 1-year follow-up." The study found such worrisome results with the less-powerful 1.5T scanning, which is being gradually replaced with twice-as-powerful 3T scanning.

A paper tells us the following about the newer twice-as-powerful 3T MRI machines that have been replacing the older 1.5T MRI machines, suggesting their magnetic fields are much stronger than the strength needed to lift a car:

"The main magnetic field of a 3T system is 60,000 times the earth's magnetic field. The strength of electromagnets used to pick up cars in junk yards is about the field strength of MRI systems with field strengths from 1.5-2.0T. It is strong enough to pull fork-lift tires off of machinery, pull heavy-duty floor buffers and mop buckets into the bore of the magnet, pull stretchers across the room and turn oxygen bottles into flying projectiles reaching speeds in excess of 40 miles per hour."
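
The "60,000 times" figure checks out with back-of-the-envelope arithmetic, assuming the commonly cited value of roughly 50 microtesla for the strength of Earth's magnetic field at the surface:

# Back-of-envelope check of the quoted comparison (assumes Earth's
# surface magnetic field is roughly 50 microtesla, a commonly cited value).
earth_field_tesla = 50e-6       # about 0.5 gauss
scanner_field_tesla = 3.0       # a 3T MRI system

ratio = scanner_field_tesla / earth_field_tesla
print(f"3T scanner / Earth's field = {ratio:,.0f}x")   # prints 60,000x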

Let us look at an example of a morally dubious study recently in the news, a study entitled "Predicting learning and achievement using GABA and glutamate concentrations in human development."  It was a study that attempted to link levels of two brain chemicals (GABA and glutamate) to learning ability and math ability.  As so often happens, a study that failed to produce any impressive result was misleadingly represented in the press as if it had found something important. 

We can see from Figure 2 of the study that no strong correlation was found between levels of these brain chemicals and math ability. In the graphs of Figure 2 we see little circles that are all over the place on the graphs, indicating a lack of any strong correlation.   We read these results, mentioning math achievement (MA):

"In particular, the glutamate concentration in the IPS was negatively associated with MA in younger participants but positively associated with MA in mature participants (Fig 2A, β = .13, t(225) = 4.54, standard error (se) = .03, PHC0 < .0001, R2ADJ = .85, dR2ADJ = .01). In contrast, the opposite relationship was found in the same region with GABA, which was positively associated with MA in younger participants but negatively associated with MA in mature participants (Fig 2B, Î² = −.14, t(224) = −5.39, se = .03, PHC0 < .0001, R2ADJ = .85, dR2ADJ = .01). Concerning the MFG, glutamate concentration was negatively associated with MA in younger participants but positively associated with MA in mature participants (Fig 2C, β = .11, t(220) = 3.59, se = .03, PHC0 = .0004, R2ADJ = .85, dR2ADJ = .01)."

The β characters in the quote above refer to beta coefficients, which are not much different from correlation coefficients (a measure of how strong the correlation is between two things).  The results given for the beta coefficients are all weak. They are all less than .15 in absolute value, and some of the correlations are negative.  A strong beta coefficient is something higher than .5.  A press story on this paper glowingly describes an "association" between math ability and these two brain chemicals, conveniently failing to mention that the association reported was so weak and inconsistent that it was not robust evidence of any real causal relation.  Because it found associations so weak, and because of methodology problems discussed below, the GABA and glutamate study has failed to prove anything important. 
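
For readers unfamiliar with such numbers, here is a small Python sketch of my own (using synthetic data, not the study's) showing how little a standardized beta around .13 amounts to in practice, compared to a genuinely strong one around .8. For a single standardized predictor, the beta coefficient equals the ordinary correlation coefficient.

import numpy as np

rng = np.random.default_rng(1)
n = 225   # roughly the study's sample size

def standardized_beta(x, y):
    # With one standardized predictor, beta equals the Pearson correlation.
    return np.corrcoef(x, y)[0, 1]

x = rng.normal(size=n)

# A weak relationship (beta near .13): the predictor explains under
# 2% of the variance in the outcome.
y_weak = 0.13 * x + np.sqrt(1 - 0.13**2) * rng.normal(size=n)

# A strong relationship (beta near .8), shown for comparison.
y_strong = 0.8 * x + np.sqrt(1 - 0.8**2) * rng.normal(size=n)

print("weak beta:  ", round(standardized_beta(x, y_weak), 2))
print("strong beta:", round(standardized_beta(x, y_strong), 2))

A beta of .13 implies the brain chemical level explains less than 2 percent of the variance in math scores, which is why the reported "association" is so underwhelming.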

This GABA and glutamate study has used a large number of subjects, so it cannot be criticized for using too small a sample size. However, there are other problems in the study, including the following:

  • The study was not a pre-registered study that announced beforehand in detail what hypothesis would be tested, and how data would be gathered and analyzed, meaning it did not follow a "best practices" approach.
  • The study did not make direct measurements of GABA and glutamate levels, but made indirect estimates using a technique called magnetic resonance spectroscopy.  Estimates of trace chemicals such as GABA using spectroscopy are subjective, and may be unreliable, because of serious confounding factors such as "spectral overlap."  A study using retesting to test the reliability of such GABA estimates in two different brain regions found only "low-moderate" reliability in estimates involving one of these two regions. 
  • Chemicals such as GABA and glutamate fluctuate in the body from week to week. Readings from different parts of the brain may vary. There is no reason to think that a single brain scan  gives you a reliable indication of the average yearly level of GABA and glutamate in a subject. 
  • The study fails to mention any blinding protocol that was followed.  Following a carefully defined blinding protocol is essential for a study like this to be taken seriously as robust evidence for anything.  
  • Since some "math savant" humans have shown an ability to perform extremely complex math calculations at blazing near-instantaneous speed, and since neurotransmitters move around slowly in the brain, and since there is no evidence that taking GABA or glutamate supplements improves math ability, it never made any sense to suspect that estimates of GABA and glutamate would be well-correlated with math ability; and the study found no such good correlation.  A previous 2017 study on older people scanned with MRI machines had found no good correlation between cognition and GABA levels, finding only a very weak correlation, with an R-squared of merely .12. 
  • The study made an unnecessary involvement of children. The hypothesis of whether GABA and glutamate affects math ability could have been tested just as well using only adult subjects. 

The most troubling thing about the study is that it needlessly  subjected hundreds of children to high-field 3T magnetic resonance imaging. These were not children who had any medical need for such imaging, and the paper says these subjects were recruited, lured by very small monetary incentives.  The study tells us, "All MRI data were acquired using a 3T Siemens MAGNETOM Prisma MRI System equipped with a 32 channel receive-only head coil."  The study here cautions that "exposure to 3 T MRI induces genotoxic effects in human lymphocytes," referring to potentially cancer-causing effects of using 3T MRI scanners.  

How long were these children scanned using this high-powered 3T MRI scanner?  We are told "the imaging session lasted approximately 60" minutes. A typical MRI scan for medical reasons very often takes only about 15 minutes. The paper tells us in Supplemental Table 1 that the brain scans were done on 51 six-year-olds, 51 ten-year-olds, 50 14-year-olds, and 49 16-year-olds. The younger a person is, the more likely he is to be affected by possible cancer-causing things.  The subjects were induced to participate by giving them trifling sums of money such as 25 pounds (about 35 dollars), and given such low compensation we may assume that a large fraction of the participants were from impoverished families.  Will some of these children who participated in this poorly-designed insignificant study end up with cancer decades from now because they were subjected to 60 minutes of unneeded 3T MRI scanning, which "induces genotoxic effects" according to the previously cited paper?

We'll probably never know, because neuroscientists don't seem to keep track of the long-term health results of the people they have brain-scanned in their experiments. It's kind of a policy of "scan 'em and forget 'em." Our neuroscientists are fond of saying there is "no proof" that fMRI imaging can be harmful, but that's because they are not doing the long-term patient health followup tracking to determine whether fMRI imaging produces a greater risk of cancer over 30 years or 40 years. 


The GABA and glutamate brain-scanning study I have discussed was very unusual in having a high number of subjects (more than 200). What is very much more common in neuroscience studies is to do brain imaging experiments involving fewer than 15 subjects. Almost all such experiments are pretty worthless, because we should have no confidence in studies that used fewer than 15 subjects per study group (the chance of false alarms is too high when fewer than 15 subjects are used).  Very many people may have suffered a needless cancer health risk from participating in the usually worthless "fewer than 15 subjects" human experimental studies that are so common in modern neuroscience research. 
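
A minimal simulation (my own illustration, not taken from any paper) shows why such small study groups are so prone to false alarms: with only a handful of subjects, sizable correlations routinely appear in pure noise.

import numpy as np

rng = np.random.default_rng(2)

def spurious_rate(n_subjects, threshold=0.5, n_trials=5000):
    """Fraction of pure-noise 'experiments' whose sample correlation
    exceeds the threshold even though no real effect exists."""
    hits = 0
    for _ in range(n_trials):
        x = rng.normal(size=n_subjects)
        y = rng.normal(size=n_subjects)
        if abs(np.corrcoef(x, y)[0, 1]) > threshold:
            hits += 1
    return hits / n_trials

# The false-alarm rate falls sharply as the study group grows.
for n in (6, 9, 12, 15, 30):
    print(f"n={n}: |r| > .5 in about {spurious_rate(n):.0%} of noise runs")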

Don't put me down as being anti-fMRI (I've had an fMRI myself, after being advised by a doctor to do so).  In countless medical treatment cases, the benefits of an fMRI scan are greater than the small risks. But people should not be put at risk by getting unnecessary brain scans solely for the sake of poorly designed studies that fail to prove anything because they followed Questionable Research Practices. 

I am not at all suggesting anyone should avoid an fMRI scan when a doctor recommends such a thing as medically advisable. But it is rather clear that in their zeal to load up their resumes with more and more brain scanning studies, our neuroscientists are rounding up too many paid subjects for unnecessary and potentially harmful brain scans.  What is really tragic is that such a large fraction of experimental brain scan studies follow Questionable Research Practices so badly that they qualify as "junk science studies" failing to provide any robust evidence for anything important.  It seems that very often human research subjects may be needlessly put at increased risk of cancer and other health dangers by being brain-scanned in scanners such as 3T MRIs, merely so that neuroscientists can round up more subjects for badly designed studies that do nothing to advance science because they fall very short of meeting the standards of good experimental science.   

When neuroscientists say brain scans are safe, they are referring to how much health trouble is now observed in people whose brains are scanned. No one has done a 25-year longitudinal study on the topic of whether people whose brains were scanned with 3T MRIs have a higher chance of cancer 25 or 30 years in the future.  3T MRIs were only approved by the FDA in the year 2000, and the Siemens MAGNETOM Prisma MRI System used by the GABA and glutamate brain-scanning study was only approved in 2013.

A scientific paper states this, referring to 3T MRIs:

"An insufficient number of validated studies have been carried out to demonstrate the safety of high strength static magnetic field exposure (Shellock, 2009). While MRI has been used for many years in the clinic, at higher Tesla levels (over 3 Tesla) the technology is relatively novel. Even less information about potential negative health effects exists for specific populations such as pregnant women and children." 

If I were an ethical advisor asked to approve proposals for brain experiments, I would have the following rules:
  • I would never approve the use of human brain scanning for any experimental study that used fewer than 15 subjects for any of its study groups, because such studies are way too likely to produce false alarms. 
  • I would never approve the use of human brain scanning for any experimental study that had not published publicly a detailed research plan, including a precise hypothesis to be tested, along with a very exact and detailed description of how data would be gathered and analyzed. We should not be putting people at risk for studies that do not follow best practices. 
  • I would never approve the use of human brain scanning for any experimental study that had not published publicly a detailed blinding protocol to be followed, discussing exactly how blinding techniques would be used to reduce the risk of experimenter bias in which the experimenter "sees what he wants to see." We should not be putting people at risk for studies that do not follow best practices. 
  • I would insist that any consent form signed by a subject to be brain scanned would include a detailed discussion of the reasons why brain scanning might be potentially hazardous, with negative effects appearing far in the future, along with a fair discussion of the scientific literature suggesting such hazards. Currently a large fraction of such consent forms fail to frankly discuss such risk. 
  •  I would never approve the use of any brain scanning on children in an experiment that did not absolutely require the participation of children. 

I strongly advise all parents never to let their children participate in any brain scanning experimental study unless a doctor has told them that the brain scan is medically advisable solely for the health of the child.  I advise adults not to participate in any brain scanning experimental study unless they have read something that gives them warrant for believing that the experimenters are following best experimental practices, and that they will not be undergoing unnecessary health risks for the sake of some "bad practices" poorly designed "fishing expedition" experiment that does not advance human understanding.  If a neuroscientist looking for research subjects tells you that brain scans are perfectly safe, remember that many neuroscientists often dogmatically make claims that are unproven or doubtful, and often pretend to know things they do not actually know (see this site for very many examples). 

I also strongly advise anyone who participated in any brain scanning experiment to permanently keep very careful records of their participation, to find out and write down the name of the scientific paper corresponding to the study, to keep a copy of any forms they signed, and to keep a careful log of any health problems they have. Such information may be useful should such a person decide to file a lawsuit.

When we examine the history of MRI scans, we see a history of overconfidence, and authorities dogmatically asserting that "MRI scans are perfectly safe," when they did not actually know whether they were perfectly safe.  Not many years ago there arose the great "contrast agent" scandal.  Scientists began to learn that what are called "contrast agent" MRI scans (given to 30 million people annually) may not be so safe. In such "contrast agent" scans, a subject is given an injection that increases the visual contrast of the MRI scan.  For a long time, the main substance in such an injection was gadolinium.  A mainstream cancer web site states, "Tissue and autopsy reports have also confirmed that gadolinium can accumulate in the brain and other organs." The results can be a health disaster, as described here. A 2019 Science Daily story says, "New contrast agent could make MRIs safer," letting us know that many of them previously were not so safe. On the same Science Daily web site, we read a 2017 news story with the title "MRI contrast agents accumulate in the brain."  A 2020 paper ("Side Effect of Gadolinium MRI Contrast Agents") says this:

"Until recently, it was believed that gadolinium is effectively cleared within 24 hours after intravenous injection, and that it does not have any harmful effects on the human body. However, recent studies on animals and analyses of clinical data have indicated that gadolinium is retained in the body for many years post-administration, and may cause various diseases."

Neuroscientists extensively used such contrast agents (as described here), very often putting human subjects at risk for the sake of junk poorly designed studies falling far short of the best experimental practices. All the while,  many of our experts were making the untrue claim that "MRI scans are perfectly safe," a statement which was not clearly  true for the large fraction of MRI studies that used gadolinium contrast agents. 

A recent article gives us a clue as to why so many junk science studies are occurring. It seems the number of PhD's is quite a few times greater than the number of available tenure-track positions, creating a pressure for not-yet-tenured PhD's to compromise on research standards, so that they get more published papers and paper citations. The article says, "Fifty-eight per cent of respondents to the survey are aware of scientists feeling tempted or under pressure to compromise on research integrity and standards."

Thursday, July 22, 2021

Macroevolution Experiments Fail As Badly as Abiogenesis Experiments

Two of the biggest dogmas of biology professors are the dogma of abiogenesis (that life can naturally appear from non-life) and the dogma of macroevolution (the idea that dramatic very complex biological transitions can naturally occur, such as one-celled organisms gradually evolving to become large visible organisms, or dinosaurs evolving into birds, or ape-like or chimp-like organisms evolving into men).  There is no direct observational or experimental evidence for either one of these dogmas.  No one has ever observed life appear from non-life. No one has ever observed the occurrence of some large very complex biological innovation or transformation that could be called macroevolution.  

There does seem to be evidence that at certain points in natural history,  there started to exist new types of organisms with dramatic innovations that had never existed on Earth before.  But such evidence should not be called evidence for macroevolution, but merely evidence for biological innovation.  Just as the appearance of some new type of organism on Earth should not be called evidence for spaceships from other planets introducing such organisms, evidence for the appearance of some new type of organism should not be called evidence for macroevolution.  Since we have never observed either spaceships introducing new organisms to planet Earth or the arrival of new organisms on Earth through macroevolution, and since both of those ideas have great credibility problems, we are not entitled to cite mere biological innovation as evidence for either one of these claims. 

Many decades of experiments trying to reproduce the origin of life have provided zero evidence to support claims that life can naturally appear from non-life. No one has ever produced a living thing (no matter how small) in any experiment that started out with sterile materials that were not living. Nor have experimenters even been able to produce any of the building blocks of life in any experiment realistically simulating the early Earth.  The building blocks of visible living things are cells. The building blocks of microscopic unicellular life are things like proteins, DNA and RNA. 

Origin of life researchers have not got anywhere in explaining the origin of DNA, RNA or proteins.  They have not produced any such things in any experiments realistically simulating the early Earth. Moreover, such researchers have not even produced any of the building blocks of DNA, RNA or proteins in any experiments realistically simulating the early Earth.   There has been no real experimental progress in understanding life's origin.  The problem of explaining the random origin of the 50 or more types of proteins needed for the simplest living things is a difficulty as great as explaining how someone could dump on the ground a dumpster filled with Scrabble letters and have them form into fifty coherent instruction paragraphs.
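
The dumpster analogy can be made loosely quantitative with a toy calculation (my own illustration, with deliberately simplified numbers): the chance that a random draw of letters matches even one specific 500-letter instruction paragraph is unimaginably small.

from math import log10

# Toy version of the Scrabble analogy (illustrative numbers only):
# the probability that a random string of letters matches one
# specific 500-letter sequence, drawn from a 26-letter alphabet.
alphabet_size = 26
paragraph_length = 500

log_probability = -paragraph_length * log10(alphabet_size)
print(f"chance of matching one specific paragraph: about 1e{log_probability:.0f}")
# Prints roughly 1e-708, before even considering fifty such paragraphs.

Of course, many different letter sequences could count as "coherent instructions," just as many amino acid sequences can form a working protein; the toy number only illustrates the scale of the problem the analogy points to.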

Facing a complete failure of abiogenesis experiments, biologists, chemists and science journalists have resorted to spreading  misinformation on this topic. There has been more baloney written on this topic than almost any other scientific topic.  The main sin has been an inaccurate description of experiments that did not realistically simulate conditions on the early Earth. Such experiments have been inaccurately described as if they were realistic simulations of the early Earth.  Also, such irrelevant  experiments (irrelevant because of their failure to realistically simulate early Earth conditions) have been described as producing  "building blocks of life," even though the experiments merely produced building blocks of the building blocks of life. No one has produced a functional protein (one of the real building blocks of microscopic life) in any experiment starting without life, regardless of whether the experiment did or did not realistically simulate the early Earth. 

Experiments trying to produce macroevolution have all failed as miserably as the experiments trying to produce abiogenesis.  A 2016 scientific paper has the title "Experimental Macroevolution." But it's yet another case of a scientific paper with an inaccurate title. The paper discusses no real examples of macroevolution observed in experiments.  Predictably, the author tries to down-define the term macroevolution to make it apply to relatively minor things like the appearance of new types of cells or cell products such as  cnidocytes and stereom. Mere microscopic innovations are not examples of macroevolution, which refers to big things like major very complex visible structural innovations or intellectual capability innovations. 

 The same approach is taken repeatedly by those trying to cite evidence for the never-observed phenomenon of macroevolution.  They may try to down-define macroevolution as something that goes on whenever a new cell type appears or whenever a new species appears. That is not the correct definition of macroevolution, which the Merriam Webster dictionary defines as "evolution that results in relatively large and complex changes."  Many other authorities define macroevolution as something occurring above the species level. 

But just as the abject failure of abiogenesis experiments has been covered up with hype and deceptive descriptions, the abject failure of macroevolution experiments has been covered up with hype and misleading descriptions. We get an example of the latter in a recent news story attempting to persuade us that some progress has been made in understanding a jump from unicellular life to multicellular life. The fossil record yields no sizable record of multicellular life until about the time of the Cambrian Explosion at about 540 million years ago.  The fossil record seems to suggest that there was the most dramatic innovation of multicellular life at that time, with almost all of the animal phyla (the main divisions of living organisms) seemingly appearing in a fairly short amount of time.  

The recent news story was entitled "How evolution shifts from unicellular to multicellular life."   The story had a link to a university press release, which was entitled "Evolution in real time," and referred to a "surprisingly rapid transition from unicellular life to multicellular life."  But the study referred to by these articles did nothing at all to show any evolution from unicellular life to multicellular life, or any transition anything like the appearance of multicellular animals from unicellular life, or any kind of macroevolution. The experiment merely described some crummy little clumping in microscopic algae, a type of unicellular plant life. 

The experiment followed some algae in water for 500 generations. At the beginning some of the algae cells were clumped together. After 500 generations no new life form had emerged.  The unicellular algae did not at all evolve into any multicellular life form.  The study does not even claim that the little cell clumps had grown bigger after 500 generations. 

Totally failing to provide any evidence for macroevolution such as the appearance of some new multicellular organism, the experimenters tried to provide some evidence for a little unimpressive microevolution. We are told that the experimenters isolated some of the cells and "cell groups" (just cell clumps), and then did a little statistical analysis.  They claim that somewhere they found that after 500 generations some of these algae cells were clumping a little more.  Unfortunately, the experimenters make no mention at all of following any blinding protocol, and they fail to use the word "blind" or "blinding" in their paper.  This means that the extremely faint evidence of microevolution that they claim to have found through clump-counting may not even be robust evidence of microevolution. Counting the number of microscopic algae clumps moving around in water is a procedure where a lack of a blinding protocol might lead researchers to see whatever little difference they want to see. 

You can tell the experiment is basically a bust from a line in the press release. In the press release we read this: "They were able to demonstrate – in collaboration with a colleague from the Alfred Wegener Institute (AWI) – that the unicellular green algae Chlamydomonas reinhardtii, over only 500 generations, develops mutations that provide the first step towards multicellular life." Whenever someone claims to have merely made "the first step" towards some complex very-hard-to-achieve result, nine times out of ten they will actually have gotten basically nowhere moving towards such a result. For example, if someone claims that your vacant lot can naturally turn into a house if you just give it five years to evolve, and then a year later he shows you a little metal junk discarded on the vacant lot, and he tells you that this is "the first step" in the vacant lot evolving into a house, you should have no confidence at all in what the person is saying.  The experiment discussed in the press release has not shown us anything impressive, so we should doubt that any actual "first step" has been achieved, and we should recognize from this mere claim of a "first step" that the experiment produced nothing impressive.  

The very idea of trying to insinuate that some alleged increased clumping in plant microbes does something to explain the appearance of enormously organized visible animal organisms is as ridiculous as claiming that baked beans sticking together can explain how skyscrapers or aircraft carriers get built. Clumping is not construction. Clumping does not involve any organization, and does nothing to explain the appearance of very organized things. 

A recent paper attempts to persuade us that some artificial breeding experiment has given support for macroevolution. The paper incorrectly states that "many empirical studies have shown that there is a clear link between microevolutionary processes and resulting macroevolutionary patterns,"  and cites as its main example what it calls the "classic example" of "Sewall Wright's work on artificial selection in guinea pigs," saying, "Wright was able to breed laboratory populations with four toes."   

Guinea pigs are normally born with three toes on their back feet, but are sometimes naturally born with four toes on their back feet (the fourth tiny toe involving no improvement in function).  This is called polydactyly. Through artificial selection using inbreeding, you can allegedly create a breed that will always have four toes.  An animal hospital refers to such polydactyly as a "genetic defect, which may indicate other genetic health defects."  Another site says polydactyly "may require surgical repair for his [the guinea pig's] safety and comfort." Such four-toed guinea pigs are not actually an example of either macroevolution or the appearance of beneficial new features, and do not involve the appearance of any new complex useful innovation.  Sewall Wright's guinea pig experiments did nothing to provide experimental support for claims of macroevolution, and many guinea pigs with four toes on a back foot had already been observed before his experiment started. Trying to pass off an increase in four-toed guinea pigs by artificial selection as being experimental evidence for macroevolution is as misleading and laughable as trying to pass off clumping algae cells as evidence for macroevolution.

It is, in general, never valid to cite any experiment using artificial selection as evidence for any type of evolution, because such artificial selection (in which mating partners are chosen by the experimenter) is not a realistic simulation of natural evolution in which no such artificial selection would ever occur. But in their zeal to offer evidence for powers of so-called natural selection, scientists will sometimes resort to this cheat of citing artificial selection experiments that they should not be citing when discussing evidence for natural evolution.  

An experiment with fruit flies conducted since 1954 can be called an experiment relevant to macroevolution. Scientists at Kyoto University have maintained 1500 generations of these fruit flies in the dark, which (assuming roughly 20 years per human generation) is equivalent to maybe 30,000 years of human evolution.  The scientists call this line of fruit flies "Dark-fly." A paper on this line of fruit flies kept in the dark for 1500 generations tells us "Dark-fly has no apparent morphological features related to dark-adaptation."  1500 generations yielded no macroevolution. The paper then tells us about a grave error in the experiment which throws doubt on its showing much of anything:

"A serious problem of the long-term Dark-fly project in this regard is that the control sister flies were accidentally lost over the course of the 60 years of maintaining the Dark-fly line (Fuse et al. 2014). Therefore, it is impossible to precisely examine the genome evolution in the Dark-fly history and to accurately compare genomes and traits between Dark-fly and its control sisters."

Neither the paper nor a press release about it mention any real robust evidence of superior night-living function in this "Dark-fly" line of fruit flies bred for 1500 generations in darkness.  

A wikipedia.org article entitled "Experimental evolution" fails to give us any examples of experiments providing evidence for macroevolution, and does not even use the word "macroevolution." The experiments mentioning "selective breeding" are not actually evolution experiments, but experiments involving artificial selection. There is a mention of fruit flies that can survive better with less oxygen, but that's just microevolution, not macroevolution.   

The difference between prokaryotic cells and eukaryotic cells is so great that it has been compared to the difference between a one-room studio apartment and a billionaire's mansion.  But the Merriam-Webster dictionary defines macroevolution as "evolution that results in relatively large and complex changes." It is therefore doubtful that a transition between prokaryotic cells and eukaryotic cells could be classified as an example of macroevolution, since the latter term is often defined as the appearance of visible or large complex new biological innovations, and individual eukaryotic cells are neither large nor visible.  But since some may classify a transition from prokaryotic cells to eukaryotic cells as an example of macroevolution, let us consider experiments attempting to produce such a transition. 

All such experiments have failed. No one has ever been able to produce eukaryotic cells from a starting point of isolated water containing only prokaryotic cells. But that hasn't stopped science news sites from misleading us into thinking that some progress was made in some experiments.  

An example is the experiment discussed here. We hear about 12 years of effort, and 5 years of experimentation.  Some scientists got some primitive prokaryotic microbes through laborious mud extraction work undersea, and they then watched the microbes for years in a laboratory tube-shaped "methane-fed continuous-flow bioreactor system" (observing for "more than 2000 days" according to their paper).  They were probably hoping to see a transition from these primitive prokaryotic microbes to far more advanced eukaryotic microbes.  But they observed no such thing. 

What they were left with after these five years is (according to their preprint) "something with no visible organella-like structure."  So no transition occurred to anything like eukaryotic cells (which are rich in different types of organelles).  The main thing that the experimenters have to report is merely "morphological complexity – unique long, and often, branching protrusions."  But the abstract that uses this phrase does not state that this is a new feature that was not in the starting population of organisms. Since the microbes were bred for five years in some machine with the shape of a long tube (described here), we may suspect that these odd protrusions are effects produced by the strange unnatural environment the microbes were in. 

No evidence of macroevolution has been found, and the microevolution observed is not very impressive.  But in a press article hyping this experiment, we read an evolutionary biologist saying, "This is a monumental paper that reflects a tremendous amount of work and perseverance."  That's just yet another example of a scientist describing a "sow's ear" as a "silk purse," and speaking like someone struck gold when their result was more like dross. 

Humans have never observed either macroevolution or abiogenesis or anything like such things. Claims that macroevolution occurred have zero experimental support, just as claims that abiogenesis occurred have zero experimental support.  All we have in this area are experiments that failed to substantiate claims of macroevolution or abiogenesis, or that are irrelevant because they did not realistically simulate natural conditions.  But you would not recognize such failure if you relied on the huckster carnival-barker press reports of such experiments. 


Faced with the shortfall of experimental evidence for macroevolution, a scientist may make two defenses that are merely semantic trickery. The first defense might try to redefine macroevolution, defining it so weakly that something humans have observed might be dubiously classified as macroevolution. This is kind of like redefining "moon launch" to mean "launching something near the moon as it appears in the sky," and then using that definition to justify your claim that children can produce moon launches because they can throw balls in front of the moon. The second defense might be to define macroevolution so that there is no possibility of its observation. This defense has been used by some writers, who try to define macroevolution as "evolution occurring over geological scales." With that definition, someone can say, "Why, of course we have never observed macroevolution -- it only occurs over geological time periods."  

Both defenses are pretty futile, because neither stops critics from making the same complaints, using a term different from "macroevolution." For example, rather than using the term "macroevolution," a critic can instead use the term "massively innovative evolution," and complain that humans have never observed massively innovative evolution, so there is no warrant for believing it ever occurred, particularly given the impossibility of producing massively innovative evolution through mere changes in DNA molecules that do not actually specify anatomical structures but merely low-level chemical information (contrary to frequent misstatements on this topic).