
Our future, our universe, and other weighty topics


Friday, June 29, 2018

Specious Spin of the "Easy Life" Crowd

What we may call the “easy life” crowd is a group that wants to persuade us that it's easy for life (primitive or advanced) to appear by chance. A few weeks ago the “easy life” crowd was telling us that “building blocks of life” had been found on Mars, and that this greatly increased the chance that life once existed on that planet. However, as discussed here, the actual organic molecules found on Mars were neither the building blocks of life nor the building blocks of the building blocks of life.

This week the “easy life” crowd is at it again. For one thing, they are telling us that the “building blocks of life” have been found on Enceladus, a moon of Saturn. A page on Fox News has the headline “Scientists have found the 'building blocks for life' on Saturn's moon Enceladus.” But this claim is as inaccurate as the claim about building blocks of life on Mars.

The claims are based on a scientific paper that does not claim to have discovered any molecules with a molecular weight much larger than about 200 atomic mass units. The building blocks of life are things such as proteins and nucleic acids. The average protein has hundreds of amino acids, and an amino acid has a molecular weight of about 100 atomic mass units. So a protein has a molecular weight of about 20,000 or more. That's 100 times bigger than the molecules detected on Enceladus. 

Enceladus (Credit: NASA)

Until some scientist reports detecting something with a molecular weight of more than 10,000, no one should be claiming that “building blocks of life” have been detected on Enceladus. Although the organic molecules detected on Enceladus have a molecular weight about the same as that of an amino acid (one of the building blocks of proteins), the exact nature of these molecules is unknown, so we don't know whether any of them is actually an amino acid. So it cannot truly be said that even the building blocks of the building blocks of life have been detected on Enceladus.
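
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python, using the same rough figures given above (an amino acid of about 100 atomic mass units, and a protein of about 200 such units on the low end):

    # Rough comparison: a small protein versus the largest molecules
    # reported at Enceladus. All figures are approximate.
    amino_acid_mass = 100        # molecular weight of an amino acid, in atomic mass units
    residues_per_protein = 200   # "hundreds of amino acids"; a low-end figure
    protein_mass = amino_acid_mass * residues_per_protein   # about 20,000 amu
    largest_detected = 200       # approximate upper mass of the detected molecules
    print(protein_mass // largest_detected)   # -> 100: a protein is ~100 times heavier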

This week's other example of specious spin by the “easy life” crowd comes in the form of an article in Science magazine entitled “The momentous transition to multicellular life may not have been so hard after all.”

The article begins by stating the following:

Billions of years ago, life crossed a threshold. Single cells started to band together, and a world of formless, unicellular life was on course to evolve into the riot of shapes and functions of multicellular life today, from ants to pear trees to people. It's a transition as momentous as any in the history of life, and until recently we had no idea how it happened.

The statement “until recently we had no idea how it happened” in reference to the appearance of multicellular life will come as a surprise to those who have been told for many decades by scientists that Darwin's theory of natural selection explained this and all other biological innovations occurring after the origin of life. So after many decades of telling us that the appearance of multicellular life could be explained by mere natural selection and random mutations, now scientists are claiming “until recently we had no idea how it happened.” Very fishy indeed.

Our Science magazine article then states this about the origin of multicellularity:

Now, Nagy and other researchers are learning it may not have been so difficult after all. The evidence comes from multiple directions.

So what is this evidence? The first bit of alleged evidence that multicellularity was easy is described as follows: “The evolutionary histories of some groups of organisms record repeated transitions from single-celled to multicellular forms, suggesting the hurdles could not have been so high.” There is no actual fossil evidence of any such transitions, and the article gives no example to back up this claim. Even if we assume that there were multiple cases of life transitioning from single-celled life to multicellular life, this would not show that the transition was easy or anything other than fantastically improbable. When we see two examples of something that seems fantastically improbable, that does not show such a thing is easy, although it may show that more than luck is involved. For example, if you ask me to guess a 15-digit number you are thinking of, and I do that successfully not once but twice, that may suggest that something more than luck is involved (such as ESP), but it does not at all suggest that it is easy to guess randomly chosen 15-digit numbers.
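
The arithmetic behind that analogy is easy to verify; here is a minimal sketch (the numbers belong to the analogy, not to any biology):

    # Chance of randomly guessing a 15-digit number, once and twice.
    p_one_guess = 1 / 10**15           # one chance in a quadrillion
    p_two_guesses = p_one_guess ** 2   # one chance in 10^30
    print(p_one_guess)    # 1e-15
    print(p_two_guesses)  # 1e-30: two successes hint at something besides luck,
                          # but do nothing to show that guessing is "easy"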

Here according to the article is the second item of evidence that the appearance of multicellularity was easy: “Genetic comparisons between simple multicellular organisms and their single-celled relatives have revealed that much of the molecular equipment needed for cells to band together and coordinate their activities may have been in place well before multicellularity evolved.” But this is no evidence that the appearance of multicellular macroscopic life was easy. It's kind of like arguing that it's easy for a tornado to blow through an auto parts store and assemble an automobile, because many of the parts needed for the automobile are lying around in the auto parts store.

Here according to the article is the third item of evidence that the appearance of multicellularity was easy: “And clever experiments have shown that in the test tube, single-celled life can evolve the beginnings of multicellularity in just a few hundred generations—an evolutionary instant.” The article gives a visual of the type of experiments it is talking about. They are experiments in which a few microscopic cells are seen to clump together to make an equally microscopic blob consisting of a few cells adhering together. But such experiments do nothing at all to suggest that it might be easy for microscopic life to evolve into visible macroscopic life such as fishes, trilobites and crabs. You can compare such experiments to someone waving around a big magnet at an auto parts store. After getting a few random auto parts stuck to his magnet, the person might say, “You see – it's easy to form a car by random accumulations of auto parts.” But such a stunt would not at all prove such a thing.

On the basis of these pathetically weak evidence claims, the Science magazine article claims “multicellularity comes so easy.” No, the appearance of multicellular organisms such as the many that appeared suddenly in the Cambrian Explosion is a vast explosion of information that is not explained by orthodox theories in biology. Big life forms may exist all over our galaxy and the universe, but not if only random, mindless processes are involved. 

Monday, June 25, 2018

The Building Blocks of Bad Science Literature

Most scientific articles and papers contain good, solid information. But our news outlets often give us misleading articles on scientific topics. Such articles may be based on poor reporting of sound scientific studies, but often the problem lies both in the article promoting a scientific study and in the study itself. Let us look at the various tricks that help to build up this type of misinformation, a topic that is also discussed in this post.

Building Block #1: The Bunk “Click Bait” Article Headline

When it comes to reporting scientific studies, shameless hyping and unbridled exaggeration are very common, and outright lying is not very uncommon. Research that only very weakly suggests a possibility may be trumpeted as dramatic proof of that possibility, or the research may be described with some claim completely at odds with what it actually found. It's pretty much an “anything goes” type of situation.

Why does this happen? It all boils down to money. The way large modern web sites work financially is that the more people access a particular page, the more money the web site gets from its online ads. So large web sites have a financial motive to produce “click bait” stories. The more sensational the headline, the more likely someone is to click on it, and the more money the web site will make.

A recent outrageous example of such a bunk headline was “Fingerprints of Martian Life,” on the web site of Air and Space magazine published by the Smithsonian. The article merely reported on some carbon compounds found on Mars, compounds that were neither the building blocks of life nor the building blocks of the building blocks of life.

Building Block #2: The Misleading Below-the-Headline Teaser

Sometimes a science article will have a short unpretentious title, but below the title or headline we will see some dubious teaser text making a claim that is not substantiated by the article. An example is this 2007 article that appeared in Scientific American. Below the unpretentious title of “The Memory Code,” we had teaser text telling us, “Researchers are closing in on the rules that the brain uses to lay down memories.” This claim was false in 2007, and in 2018 there is still no scientist who has any understanding of how a brain could physically store episodic memories or conceptual memories as brain states or neural states.

Building Block #3: The Underpowered Study with a Too-Small Sample Size

A rule of thumb in animal studies is that at least 15 animals should be used in each study group (including the control group) in order to have moderately compelling evidence without a high chance of a false alarm. This guideline is very often ignored in scientific studies that use a much smaller number of animals. In this post I give numerous examples of MIT memory studies that were guilty of such a problem.

The issue was discussed in a paper in the journal Nature Reviews Neuroscience entitled “Power failure: why small sample size undermines the reliability of neuroscience.” The paper tells us that neuroscience studies tend to be unreliable because they use too small a sample size. When the sample size is too small, there is too high a chance that the effect reported by a study is just a false alarm.

The paper received widespread attention, but did little or nothing to change practices in neuroscience. A 2017 follow-up paper found that "concerns regarding statistical power in neuroscience have mostly not yet been addressed." 
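
As a rough illustration of the problem, here is a sketch using the power calculator in the statsmodels Python library (the effect size and group sizes here are hypothetical, chosen only to show how statistical power collapses as groups shrink):

    # Statistical power of a two-sample t-test at various group sizes.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for n in (5, 10, 15, 30):
        power = analysis.power(effect_size=0.8, nobs1=n, alpha=0.05)
        print(n, round(power, 2))
    # Even for a large effect (d = 0.8), a study with only 5 animals per group
    # detects the effect well under half the time, so its "positive" findings
    # carry a correspondingly higher risk of being false alarms.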

Building Block #4: The Weak-Study Reference

A weak-study reference occurs when one scientific paper or article refers to some previously published paper, without including the appropriate caveats or warnings about problems in that paper. For example, one scientist may have a “low statistical power” too-small-sample study suggesting that some gene manipulation causes rats to be smarter. He may then try to bolster his paper by referring to some previous study that claimed something similar, while completely failing to mention that the previous study was also a low-statistical-power, too-small-sample study.

Building Block #5: The Misleading Brain Visual

Except for activity in the auditory cortex or the visual cortex, a brain scan will typically show regional differences of only 1% or less. As the post here shows with many examples, typical brain scan studies show a difference of only half of one percent or less when studying various types of thinking and recall. Now imagine you show a brain visual that honestly depicts these tiny differences. It will be a very dull visual, as all of the brain regions will look the same color.

But very many papers have been published that show such results in a misleading way. You may see a brain that is entirely gray, except for a few small regions in bright red. Those bright red regions are the areas of the brain that had half of one percent more activity. Such a visual is very misleading, giving readers the entirely inaccurate impression that some region of the brain showed much more activity during some mental activity.
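
The trick is easy to reproduce. The sketch below (hypothetical data; the numpy and matplotlib libraries are assumed) renders the same half-percent difference twice: once on an honest full-range color scale, and once with the color range squeezed tightly around the difference, which makes a 0.5% blip glow bright red:

    # Same data, two color scales: one honest, one that exaggerates a 0.5% difference.
    import numpy as np
    import matplotlib.pyplot as plt

    activity = np.full((20, 20), 100.0)   # a fake "brain" with uniform activity
    activity[8:12, 8:12] = 100.5          # one small region is 0.5% more active

    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.imshow(activity, cmap="hot", vmin=0, vmax=100.5)     # honest scale: looks uniform
    ax1.set_title("Full scale")
    ax2.imshow(activity, cmap="hot", vmin=99.9, vmax=100.5)  # stretched scale: region "lights up"
    ax2.set_title("Stretched scale")
    plt.show()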

Building Block #6: The Dubious Chart

There are a wide variety of misleading charts or graphs that show up in the scientific literature. One example is a "Composition of the Universe" pie chart that tells you the universe is made up of 71.4% dark energy and 24% dark matter. Since neither dark energy nor dark matter has been discovered, no one has any business putting up such pie charts, which imply some state of knowledge that mankind does not have. 

Another example of a dubious chart is the kind that shows the ancestry of humans from more primitive species. If I do a Google image search for “human evolution tree,” I see various charts trying to put fossils into a tree of human evolution. But the charts are speculative, particularly all their little lines suggesting particular paths of ancestry. The recent book “Almost Human” by two paleontologists tacitly admits this, for in its chart of hominid evolution we see four nodes with question marks, indicating hypothetical ancestors that have not been found. There is also inconsistency in these charts. Some of them list Australopithecus africanus as a direct ancestor of humans, while the “Almost Human” chart does not. Some list Homo erectus as a direct ancestor of humans, while others indicate Homo erectus was not a direct ancestor. Given all the uncertainties, the best way to do such a chart is the way shown below: simply show the different fossil types and when they appeared in the fossil record, without making speculative assumptions about paths of ancestry. But almost never do we see a chart presented in such a way.

From a page at Britannica.com


Building Block #7: The Dubious Appeal to Popular Assumptions

Often a scientific study will suggest something that makes no sense unless some particular unproven assumption is true. The paper will try to lessen this problem by making some claim such as “It is generally believed that...” or “Most scientists believe...” Such claims are almost never backed up by specific references showing that the claim of support is true. For example, a scientific paper may say, “Most neuroscientists think that memories are stored in synapses,” but the paper will almost never do something such as citing an opinion poll of neuroscientists showing what percentage of them actually believe such a thing.

Building Block #8: The Dubious Causal Inference

Experiments and observations rarely produce a result in which a causal explanation “cries out” from the data. A much more common situation is that something is observed, and the reason or reasons for such a thing may be unknown or unclear. But a scientist will not be likely to get a science paper published if it has a title such as “My Observations of Chemicals Interacting” or “Some Brain Scans I Took” or “Some Things I Photographed in Deep Space.” The scientist will have a much higher chance of getting his paper published if he has some story line he can sell, such as “x causes y” or “x may raise the risk of y.”

But a great deal of the causal suggestions made in scientific papers are not warranted by the experimental or observational data the paper describes. A scientific paper states the following: “Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered too strong for their strength of causal inference.” In other words, a full one third of scientific papers are doing things such as suggesting that one particular thing causes another particular thing, when their data does not support such statements.

Such causal claims are often made by studies that were never designed to test the causal relation they suggest.

Building Block #9: The Dubious Discounting of Alternate Explanations

When a scientist suggests one particular thing causes another particular thing, he or she often has to say a bad word or two about some alternate causal explanation. Sometimes this involves putting together some kind of statistical analysis designed to show that some alternate explanation is improbable. The paper will sometimes state that such an analysis shows the alternate explanation is “disfavored.”

Often, this type of analysis can be very dubious because of its bias. If scientist W is known to favor explanation X for observation Y rather than explanation Z, such a scientist's analysis of why explanation Z does not work may have little value. What often goes on is that “strawman assumptions” are made about the alternate explanation. To discredit alternate explanation Z, a scientist may assume some particular version of that explanation that is particularly easy to knock down, rather than some more credible version of the explanation which would not be as easy to discredit.

Building Block #10: The Slanted Amateurish “Monte Carlo” Simulation

A Monte Carlo simulation is a computer program created to show what might happen under chance conditions. Many a scientific study will include (in addition to the main observational results reported) some kind of Monte Carlo simulation designed to back up the claims of the study. But there are two reasons why such studies are often of doubtful value. The first is that they are typically programmed not by professional software programmers, but by scientists who occasionally do a little programming on the side. The reliability of such efforts is often no greater than what you would get if you let your plumber do your dental work. Another reason why such studies are often of doubtful value is that it is very easy to do a computer simulation showing almost anything you want to show, just by introducing subtle bias into the programming code.

Virtually always the scientists who publish these Monte Carlo simulations do not publish the programming source code that produced the simulation, often because that would allow critics to discover embarrassing bugs and programming biases. The rule of evaluation should be “ignore any Monte Carlo simulation results if the source code was not published.”
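
It takes remarkably little to slant such a simulation. In the toy sketch below (hypothetical; the numpy library is assumed), a simulation of coin flips is rerun with the probability of heads quietly nudged from 0.50 to 0.52, and the reported frequency of an "unlikely" outcome more than doubles; nothing in a published summary would reveal the nudge unless the source code were released:

    # Monte Carlo estimate of P(at least 60 heads in 100 coin flips).
    import numpy as np
    rng = np.random.default_rng(0)

    def estimate(p_heads, trials=100_000):
        flips = rng.random((trials, 100)) < p_heads
        return (flips.sum(axis=1) >= 60).mean()

    print(estimate(0.50))  # fair coin: roughly 0.03
    print(estimate(0.52))  # subtly biased coin: roughly 0.07
    # The 0.52 is invisible to any reader who sees only the published result.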


Building Block #11: The Cherry-Picked Data

The term "cherry picking" refers to presenting, discussing or thinking about data that supports your hypothesis or belief, while failing to present, discuss or think about data that does not support your hypothesis or belief. One type of cherry picking goes on in many scientific papers: in the paper a scientist may discuss only his or her results that support the hypothesis claimed in the paper title, failing to discuss (or barely mentioning) results that do not support his hypothesis. For example, if the scientist did some genetic engineering to try to make a smarter mouse, and did 10 tests to see whether the mouse was smarter than a normal mouse, we may hear much in the paper about 2 or 3 tests in which the genetically engineered mouse did better, but little or nothing of 4 or 5 tests in which the genetically engineered mouse did worse. 



A very different type of cherry-picking occurs in another form of scientific literature: science textbooks. For many decades, biology and psychology textbook writers have been notorious cherry pickers of observational results that seem to back up prevailing assumptions. The same writers will give little or no discussion of observations and experiments that conflict with prevailing assumptions. And so you will read very little or nothing in your psychology textbook about decades of solid experimental research backing up the idea that humans have paranormal abilities; you will read nothing about many interesting cases of people who functioned well despite losing half, most, or almost all of their brains due to surgery or disease; and you will read nothing about a vast wealth of personal experiences that cannot be explained by prevailing assumptions. Our textbook writer has cherry picked the data to be presented to the reader, not wanting the reader to doubt prevailing dogmas such as the dogma that the mind is merely the product of the brain.


Building Block #12: The All-But-Indecipherable Speculative Math

Thousands of papers in theoretical physics are littered with all-but-unintelligible mathematical equations. The meaning of a complex math equation can always be made clear, if a paper documents the terms used. For example, if I use the equation f = (G*m1*m2)/r^2, I can have some lines underneath the equation specifying that m1 and m2 are the masses of two bodies, G is the universal gravitational constant, f is the gravitational force between the bodies, and r is the distance between them.
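
The same documentation habit carries over directly to code; here is the gravity example as a minimal Python sketch, with every symbol documented (the masses and distance are approximate Earth-Moon values, used only for illustration):

    # Newton's law of gravitation, with every symbol documented.
    G = 6.674e-11    # universal gravitational constant, m^3 kg^-1 s^-2
    m1 = 5.972e24    # mass of the first body (Earth), kg
    m2 = 7.348e22    # mass of the second body (Moon), kg
    r = 3.844e8      # distance between the two bodies, m
    f = (G * m1 * m2) / r**2   # gravitational force between the bodies, newtons
    print(f)         # about 2e20 newtons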

But thousands of theoretical physics papers are filled with much more complex equations that are not explained in such a way. The author provides no clarification of the symbols being used. We may wonder whether deliberate obscurity is the goal of such authors, and we can compare them to the Roman Catholic priests who for centuries would deliberately recite the Mass in Latin rather than in a language people could understand.


Building Block #13: Data Dredging

Data dredging refers to techniques such as (1) getting some body of data to yield some particular result that it does not naturally yield, a result you would not find if the data was examined in a straightforward manner, and (2) comparing some data with some other bodies of data until some weak correlation is found, possibly by using various transformations, manipulations and exclusions that increase the likelihood that such a correlation will show up.  An example may be found in this paper, where the authors do a variety of dubious statistical manipulations to produce some weak correlations between a body of genetic expression data and a body of brain wave data, which should not even be compared because the two sets of data were taken from different individuals.  
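
A sketch of how easily dredging manufactures correlations (hypothetical; the numpy library is assumed): compare thousands of random "gene expression" series against one random "brain wave" series, and dozens of apparent hits emerge from pure noise:

    # Spurious correlations from comparing many random series to one target.
    import numpy as np
    rng = np.random.default_rng(2)

    brain_waves = rng.normal(size=50)       # a random "brain wave" series
    genes = rng.normal(size=(10_000, 50))   # 10,000 random "gene expression" series
    correlations = np.array([np.corrcoef(g, brain_waves)[0, 1] for g in genes])
    print((abs(correlations) > 0.4).sum())  # dozens of |r| > 0.4 "hits" from noise alone
    # Add transformations, manipulations, and exclusions, and the count only grows.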


Building Block #14: Tricky Mixing of the Factual and the Speculative

An extremely common practice of weak science literature is to mix up references to factual observations and references to speculative ideas, without labeling the speculative parts as being speculations. This is often done with a ratio such as ten or twenty factual statements to each speculative statement.  When the reader hears the speculation (which is not described as such), there will be a large chance that he will interpret it as something factual, as the previous ten or twenty statements (and the next ten or twenty statements) are factual. 


Building Block #15: The Pedantic Digressions of Little Relevance

Another extremely common practice of weak science literature is to pile up digressions that may impress the reader but are not very relevant to the topic under discussion.  Such a practice is extremely common when the topic under discussion is one that scientists do not understand. So, for example, a scientist discussing how humans retrieve memories (something scientists do not at all understand) may give all kinds of discussions of assorted research, with much of the research discussed having little relevance to the question at hand. But many a reader will see all this detail, and then think, "Oh, he understands this topic well."


Building Block #16: The Suspicious Image

This article states, "35,000 papers may need to be retracted for image doctoring, says new paper." It refers to this paper, which begins by stating, "The present study analyzed 960 papers published in Molecular and Cellular Biology (MCB) from 2009-2016 and found 59 (6.1%) to contain inappropriately duplicated images." 


Building Block #17: The Shady Work Requested from a Statistician

A page on the site of the American Council on Science and Health is entitled "1 in 4 Statisticians Say They Were Asked to Commit Scientific Fraud." The page says the following:


A stunning report published in the Annals of Internal Medicine concludes that researchers often make "inappropriate requests" to statisticians. And by "inappropriate," the authors aren't referring to accidental requests for incorrect statistical analyses; instead, they're referring to requests for unscrupulous data manipulation or even fraud.

Thursday, June 21, 2018

They Wanted an Accidental Universe, Not a Beautiful One

An article in the New York Times discusses the failure of attempts to find evidence for the physics theory called supersymmetry:

“These are difficult times for the theorists,” Gian Giudice, the head of CERN’s theory department, said. “Our hopes seem to have been shattered. We have not found what we wanted.” What the world’s physicists have wanted for almost 30 years is any sign of phenomena called supersymmetry, which has hovered just out of reach like a golden apple, a promise of a hidden mathematical beauty at the core of reality.

Physicist Sabine Hossenfelder (who blogs at the Backreaction blog) has recently published a book entitled Lost in Math: How Beauty Leads Physics Astray. In the book she pushes the same claim she has advanced on her blog: that physicists have gone astray because they've been so interested in pushing theories describing some undiscovered underlying beauty in nature. The short summary of the book at the site offering it for sale includes this statement: “The belief in beauty has become so dogmatic that it now conflicts with scientific objectivity: observation has been unable to confirm mindboggling theories, like supersymmetry or grand unification, invented by physicists based on aesthetic criteria.” You can read an excerpt from the book here. In that excerpt she states, “Lost in Math is the story of how aesthetic judgment drives contemporary research.” She quotes a physicist to back up this claim:

“We cannot give exact mathematical rules that define if a theory is attractive or not,” says Gian Francesco Giudice. “However, it is surprising how the beauty and elegance of a theory are universally recognized by people from different cultures. When I tell you, ‘Look, I have a new paper and my theory is beautiful,’ I don’t have to tell you the details of my theory; you will get why I’m excited. Right?” ….“Most of the time it’s a gut feeling,” he says, “nothing that you can measure in mathematical terms: it is what one calls physical intuition. There is an important difference between how physicists and mathematicians see beauty. It’s the right combination between explaining empirical facts and using fundamental principles that makes a physical theory successful and beautiful.”

But though this particular quote might seem to back up her thesis, Hossenfelder's “quest for beauty” thesis seems to be a dubious one in the sense that it does not accurately describe why physicists spent so much time on unsuccessful efforts such as the theory of supersymmetry. I think the real story is: they wanted an accidental universe, not a beautiful one. 

You can get a better feel for the motivation for a theory such as supersymmetry by reading a recent glum essay at the Aeon site, one entitled “Going nowhere fast.” The essay states the following:

Behind the question of mass, an even bigger and uglier problem was lurking in the background of the Standard Model: why is the Higgs boson so light? In experiments it weighed in at 125 times the mass of a proton. But calculations using the theory implied that it should be much bigger – roughly ten million billion times bigger, in fact....Quantum fluctuations of ultra-heavy particle pairs should have a profound effect on the Higgs boson, whose mass is very sensitive to them....One logical option is that nature has chosen the initial value of the Higgs boson mass to precisely offset these quantum fluctuations, to an accuracy of one in 10^16. However, that possibility seems remote at best, because the initial value and the quantum fluctuation have nothing to do with each other. It would be akin to dropping a sharp pencil onto a table and having it land exactly upright, balanced on its point. In physics terms, the configuration of the pencil is unnatural or fine-tuned. Just as the movement of air or tiny vibrations should make the pencil fall over, the mass of the Higgs shouldn’t be so perfectly calibrated that it has the ability to cancel out quantum fluctuations. However, instead of an uncanny correspondence, maybe the naturalness problem with the Higgs boson could be explained away by a new, more foundational theory: supersymmetry.

This illuminating excerpt gives you the real scoop on supersymmetry. It was a theory designed to “explain away” a case of fine-tuning in nature, something that seemed as precise as a pencil balancing on its tip. Why would someone want to explain away such fine-tuning? It wouldn't be because they wanted to make things “more beautiful,” for by removing such an elegant case of fine-tuning it would be like robbing nature of something impressive and beautiful. But people might wish to get rid of such a case of fine-tuning if they wanted the universe to seem more accidental, and less like some product of deliberate purpose.




The motivation of wanting to make the universe seem more accidental is something that can be called an ideological preference. Hossenfelder's book would have a more accurate subtitle if it were “How Ideological Preferences Lead Physics Astray” rather than “How Beauty Leads Physics Astray.” The same ideological preferences have led countless physicists and cosmologists to write thousands of speculative papers about inflation theories designed to explain away fine-tuning at the universe's very beginning, fine-tuning that seems more precise than that involving the Higgs boson.

None of these attempts to explain away fine-tuning in nature has been substantiated by confirming evidence. Another problem is that such theories do not actually reduce the amount of fine-tuning in nature – they merely “rob Peter to pay Paul,” reducing fine-tuning in one place at the cost of adding lots of fine-tuning in other places. For example, supersymmetry postulates lots of “superpartner” particles that happen to have masses and charges exactly matching those of known particles, but if such coincidences happened in nature, each would be an additional case of fine-tuning. Similarly, the "cosmic inflation" theories of exponential expansion in the universe's first second get rid of some cases of fine-tuning, but require just as much or more fine-tuning in the form of many postulated parameters that must themselves be fine-tuned.

Another problem with any such attempt is that it will merely address one case of fine-tuning, when there is fine-tuning all over the place in both physics and biology. As the previously cited Aeon piece tells us:

Perhaps the bleakest sign of a flaw in present approaches to particle physics is that the naturalness problem isn’t confined to the Higgs boson. Calculations tell us that the energy of empty space (inferred from cosmological measurements to be tiny) should be huge. This would make the outer reaches of the universe decelerate away from us, when in fact observations of certain distant supernovae suggest that the outer reaches of our universe are accelerating. Supersymmetry doesn’t fix this conflict.

The writer fails to mention that this “huge” energy of empty space should also be sufficient to prevent our very existence, so we have in this matter one of the many “why are things so well-arranged so that we can exist” issues. There are countless other such fine-tuning issues in physics, as well as countless cases of fine-tuning in biology that are poorly explained by existing theories.

We may therefore compare the supersymmetry theorist to a person standing beneath a swarm of UFOs, who tries unsuccessfully to explain one UFO in the sky as a swarm of bees that happened to have a disk shape, while failing to mention the forty other UFOs in the same sky, some larger than the one he tried to explain.

The author of the Aeon essay notes the effort he has put into chasing the "superpartners" of supersymmetry theory. He states:

For the first 20 years of my scientific career, I cut my teeth on figuring out ways to detect the presence of superpartners in LHC data. Now I’ve all but dropped it as a research topic.

Since undiscovered physics might take any of 1,000,000,000,000 different configurations, the scientist who assumes that the one unobserved configuration he dreamed up is actually going to appear is rather like some interstellar astronaut who makes a drawing of an extraterrestrial life form that he might see on a planet, and who then expects to see exactly that life form in front of him when he opens up the door of his landing craft. 

Sunday, June 17, 2018

Brains Do Not Work Harder or Look Different During Thinking or Recall

There have been many studies of how a brain looks when particular activities such as thinking or recall occur. Such studies will typically attempt to find some region of the brain that shows greater activity when some mental activity occurs. No matter how slight the evidence is that some particular region is being activated more strongly, that evidence will be reported as a “neural correlate” of the activity.
  
But rather than focusing on a question such as “which brain region showed the most difference” during some activity, we should look at some more basic and fundamental questions. They are:

  1. Does the brain actually look different when it is being scanned while people are doing some mental activity requiring thought or memory recall?
  2. Does the brain actually become more active while people are doing some mental activity requiring thought or memory recall?

If you were to ask the average person by how much brain activity increases during some activity such as problem solving or recall, he might guess 25 or 50 percent, based on all those visuals we have seen showing brain areas “lighting up” during certain activities. But as discussed here, such visuals are misleading, using a visual exaggeration technique that is essentially “lying with colors.” The key term for getting a precise handle on how much brain activity increases is “percent signal change.” While the visual and auditory cortex regions of the brain (involved in sensory perception) may show changes of much more than 1 percent, this technical document tells us that “cognitive effects give signal changes on the order of 1% (and larger in the visual and auditory cortices).” A similar generalization is made in this scientific discussion, where it says, based on previous results, that “most cognitive experiments should show maximal contrasts of about 1% (except in visual cortex).”
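
For readers unfamiliar with the term, percent signal change is simply the task-related change in the measured signal expressed relative to baseline; a minimal sketch (the signal values are hypothetical, and only the ratio matters):

    # Percent signal change: how far a signal rises above its baseline.
    def percent_signal_change(task_signal, baseline_signal):
        return (task_signal - baseline_signal) / baseline_signal * 100

    print(percent_signal_change(1005.0, 1000.0))  # -> 0.5, i.e. half of one percent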

A PhD in neurophysiology states the following:

Those beautiful fMRI scans are misleading, however, because the stark differences they portray are, in fact, minuscule fluctuations that require complex statistical analysis in order to stand out in the pictures. To date, the consensus is that "thinking" has a very minor impact on overall brain metabolism.

You can get some exact graphs showing these signal changes by doing Google searches with phrases such as “neural correlates of thinking, percent signal change” or “neural correlates of recollection, percent signal change.” Let's look at some examples, starting with recollection or memory retrieval.
  
  • This brain scan study was entitled “Working Memory Retrieval: Contributions of the Left Prefrontal Cortex, the Left Posterior Parietal Cortex, and the Hippocampus.” Figure 4 and Figure 5 of the study show that none of the memory retrievals produced more than a .3 percent signal change, so they all involved signal changes of no more than about 1 part in 333.
  • In this study, brain scans were done during recognition activities, looking for signs of increased brain activity in the hippocampus, a region of the brain often described as some center of brain memory involvement. But the percent signal change is never more than .2 percent, that is, never more than 1 part in 500.
  • The paper here is entitled “Functional-anatomic correlates of remembering and knowing.” It includes a graph showing a percent signal change in the brain during memory retrieval that is no greater than .3 percent, about 1 part in 333.
  • The paper here is entitled “The neural correlates of specific versus general autobiographical memory construction and elaboration.” It shows various graphs showing a percent signal change in the brain during memory retrieval that is no greater than .07 percent, less than 1 part in 1000.
  • The paper here is entitled “Neural correlates of true memory, false memory, and deception." It shows various graphs showing a percent signal change during memory retrieval that is no greater than .4 percent, 1 part in 250.
  • This paper did a review of 12 other brain scanning studies pertaining to the neural correlates of recollection. Figure 3 of the paper shows an average signal change for different parts of the brain of only about .4 percent, 1 part in 250.
  • This paper was entitled “Neural correlates of emotional memories: a review of evidence from brain imaging studies.” We learn from Figure 2 that none of the percent signal changes were greater than .4 percent,  1 part in 250.
  • This study was entitled “Sex Differences in the Neural Correlates of Specific and General Autobiographical Memory.” Figure 2 shows that none of the differences in brain activity (for men or women) involved a percent signal change of more than .3 percent or 1 part in 333.

Now let's look at brain scan studies showing brain activity during activities such as thinking, problem solving, and imagination. 

  • This brain scanning study was entitled “Neural Correlates of Human Virtue Judgment.” Figure 3 shows that none of the regions showed a percent signal change of more than 1 percent, and almost all showed a percent signal change of only about .25 percent (1 part in 400).
  • This brain scanning study examined the neural correlates of angry thinking. Table 4 shows that none of the regions studied showed a percent signal change of more than 1.31 percent.
  • This brain scanning study was entitled “Neural Activity When People Solve Verbal Problems with Insight.” Figure 2 shows that none of the problem-solving activity produced a percent signal change in the brain of more than .3 percent or about 1 part in 333.
  • This study is entitled “Aha!: The Neural Correlates of Verbal Insight Solutions.” Figure 1 shows that none of the brain regions studied had a positive percent signal change of more than .3 percent, or about 1 part in 333. Interestingly, one of the brain regions studied had a negative percent signal change of .4 percent, greater in magnitude than any of the positive percent signal changes.
  • This brain scanning paper is entitled “Neural Correlates of Evaluations of Lying and Truth-Telling in Different Social Contexts.” Figure 3 shows that none of this evaluation activity produced more than a .3 percent signal change in the brain, or about 1 part in 333.
  • This brain scanning paper is entitled "In the Zone or Zoning Out? Tracking Behavioral and Neural Fluctuations During Sustained Attention." It tracked brain activity during a mental task requiring attention. The paper's figures show various signal changes in the brain, but none greater than .09 percent, less than 1 part in 1000. 
  • This brain scanning paper is entitled "Neuronal correlates of familiarity-driven decisions in artificial grammar learning." The paper's figures show various signal changes in the brain, but none greater than 1 percent.
  • This brain scanning study is entitled, "Neural correlates of evidence accumulation in a perceptual decision task." The paper's figures show various signal changes in the brain, but none greater than .6 percent, less than 1 part in 150. 
  • This brain scanning study was entitled, “Neural correlates of the judgment of lying: A functional magnetic resonance imaging study.” We learn from Figure 3 that none of the judgment activity produced a percent signal change in the brain of more than .2 percent or 1 part in 500.

These studies can be summarized like this: during memory recall, thinking, and problem solving, the brain does not look any different, and does not work any harder. The tiny differences that show up in these studies are so small they can be characterized as “no significant difference.” You certainly wouldn't claim that an employee was working harder on a day when you detected that he expended merely half of one percent more energy, by working two minutes longer that day. And we shouldn't say the brain is working harder merely because some part of it was detected using only half of one percent more energy, only one part in 200.

As for whether the brain looks different during thinking or memory recall, based on the numbers in the studies above, it would seem that someone looking at a real-time fMRI scanner would be unable to detect a change in activity when someone was thinking or recalling something. Brain scan studies have the very bad habit of giving us “lying with color” visuals that may show some region of the brain highlighted in a bright color, when it merely displayed a difference of activity of about 1 part in 200. But the brain would not look that way if you looked at a real-time fMRI scan of the brain during thinking. Instead, all of the regions would look the same color (with the exception of visual and auditory cortex regions that would show a degree of activity corresponding to how much a person was seeing or hearing). So we can say based on the numbers above that the brain does not look different when you are thinking or recalling something. 

A 1 percent difference cannot even be noticed by the human eye. If I show you two identical-looking photos of a woman, and ask you whether there is any difference, you would be very unlikely to say "yes" if there was merely a 1% difference (such as a width of 200 pixels in one photo and a width of 202 pixels in the second photo). So given the differences discussed above (all 1 percent or less, and most less than half of one percent), it is correct to say that brains do not look different when they are thinking or remembering. 

The relatively tiny variation of the brain during different cognitive activities is shown by the graph below, which helps to put things in perspective. The graphed number for the brain (.5 percent) is just barely visible on the graph.



When you run, the heart gives a very clear sign that it is involved, and a young man running very fast may have his heart rate increase by 300%. The pupil of the eye gives a very clear sign that it is involved with vision, because the pupil changes from a size of 1.5 millimeters to 8 millimeters depending on how much light is coming into the eye. That's a difference of more than 500%. But when you think or remember, the brain gives no clear sign at all that it is the source of your thoughts or that memories are being stored in it or retrieved from it. The tiny variations seen in brain scans are no greater than we would expect from random variations in the brain's blood flow, even if the brain did not produce thought and did not store memories. You could find the same level of variation if you were to do fMRI scans of the liver while someone was thinking or remembering.

Concerning glucose levels in the brain, an article in Scientific American tells us that a scientist "remains unconvinced that any one cognitive task measurably changes glucose levels in the brain or blood." According to a scientific paper, "Attempts to measure whole brain changes in blood flow and metabolism during intense mental activity have failed to demonstrate any change." Another paper states this: "Clarke and Sokoloff (1998) remarked that although '[a] common view equates concentrated mental effort with mental work...there appears to be no increased energy utilization by the brain during such processes' (p. 664)." 

The reality that the brain does not work harder and does not look different during thinking or recollection may be shocking to those who have assumed that the brain is the source of thinking and the storage place of human memories. But to those who have studied the numerous reasons for rejecting such dogmas, this reality will not be surprising at all. To a person who has studied and considered the lack of any viable theory of permanent memory storage in the brain (discussed here), and the lack of any viable theory of how a brain could instantly retrieve memories (discussed here), and the lack of any theory explaining how a brain could store abstract thoughts as neuron states,  it should not be surprising at all to learn that brains do not work harder or look different when you are thinking or recalling.  The facts discussed here conflict with the dogmas that brains generate thoughts and store memories. If the brain did such things, we would expect brains to work harder during such activities.  

We know for sure that there is a simple type of encoding that goes on in human cells: the encoding needed to implement the genetic code, so that nucleotide base pairs in DNA can be successfully translated into the corresponding amino acids that combinations of the base pairs represent.  To accomplish this very simple encoding,  the human genome has 620 genes for transfer RNA.  But imagine if human brains were to actually encode human experiential and conceptual memories, so that such things were stored in brains. This would be a miracle of encoding many, many times more complicated than the simple encoding that the genetic code involves. Such an encoding would require thousands of dedicated genes in the human genome. But the human genome has been thoroughly mapped, and no such genes have been found.  This is an extremely powerful reason for rejecting the dogma that brains store human experiential and conceptual memories. 
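
To see how simple the genetic code's encoding scheme really is, here is a toy sketch that translates a short DNA sequence into amino acids (only four of the 64 codons are included, for brevity):

    # A toy fragment of the genetic code: DNA codons -> amino acids.
    CODON_TABLE = {
        "ATG": "Met",   # methionine (the start codon)
        "TTT": "Phe",   # phenylalanine
        "GGC": "Gly",   # glycine
        "TAA": "STOP",  # a stop codon
    }

    dna = "ATGTTTGGCTAA"
    codons = [dna[i:i+3] for i in range(0, len(dna), 3)]
    print([CODON_TABLE[c] for c in codons])   # ['Met', 'Phe', 'Gly', 'STOP']
    # The complete mapping has only 64 entries; encoding conceptual and episodic
    # memories as neural states would require a vastly more elaborate scheme.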

A good rule to follow is a principle I call "Nobel's Razor." It is the principle that you should believe in scientific work that has won Nobel prizes, but often be skeptical of scientist claims that do not correspond to Nobel prize wins.  No scientist has ever won a Nobel prize for any work involving memory, or any work backing up the claim that brains generate thoughts or store memories. 

Postscript: If you do a Google search for "genes for memory encoding," you will see basically no sign that any such things have been discovered, other than one of those hyped-up press stories with the inaccurate headline "100 Genes Linked to Memory Encoding." The story refers to a scientific study described in this paper, entitled "Human Genomic Signatures of Brain Oscillations During Memory Encoding." The very dubious methodology of the authors was to get data on gene expression and try to see how much it correlated with oscillations in brain waves. Out of more than 10,000 genes studied, the authors found about 100 which they claim were correlated with these brain wave oscillations. They state, "We were successful in identifying over 100 correlated genes and the genes identified here are among the first genes to be linked to memory encoding in humans." But the correlations reported are weak, with most of these 100 genes correlating no more strongly than .1 or .2 (by comparison, a perfect correlation is 1.0, and a fairly strong correlation is .5). The results reported do not seem any stronger than you would expect to get by chance. Even if there were no causal relation between gene expression and brain waves, if you compared gene expression in more than 10,000 genes to brain waves, you might find purely by chance a tiny fraction (such as 1% of the genes) that would look weakly correlated to brain waves (or to any other random data, such as stock market ups and downs). In one of the spreadsheet tables you can download from the study, there is a function listed for each of these roughly 100 genes, and in each case it is a function other than memory encoding. So such genes cannot be any of the thousands of dedicated genes that would have to exist purely for the sake of translating complex conceptual, verbal, and episodic memories into neural states, and vice versa, if a brain stored memories. No such genes have been identified in the genome.

The paper confesses, "All gene expression data are derived from different individuals than the ones that participated in the iEEG study." This means the paper is an absurdity. It is looking for correlations between gene expression data measured in one set of individuals and brain wave data measured in an entirely different set of individuals at a different time. That makes no more sense than looking for a correlation between meal consumption on a 2016 women's softball team and tooth decay rates on a 2017 men's football team.

Wednesday, June 13, 2018

Real Physics Versus String Theory's Chimerical “Landscape”

At Quanta magazine, there is an article written by string theorist Robbert Dijkgraaf. It's a batty-sounding piece entitled “There Are No Laws of Physics. There's Only the Landscape.” To explain the goofy reasoning behind this piece, I have to give a little background on the history of string theory. String theory started out as a speculative exercise to try to unify two types of physics theories that seem in conflict with one another: general relativity (which deals with large-scale phenomena such as solar systems and galaxies) and quantum mechanics (which deals with the subatomic world). The hope of string theorists was that they could find one set of equations that would uniquely describe a reality in which general relativity and quantum mechanics would exist harmoniously. To try to reach this goal, the string theorists had to engage in all kinds of imaginative flights of fancy, such as imagining extra dimensions beyond the four we observe.

Eventually it was found that it was not at all true that string theory predicted only a reality like the one we have. It was found that there could be something like 10^500 possible universes in which something like string theory could be true, each with a different set of characteristics. Did the string theorists then give up? No, they invented the name “the landscape” to describe this imaginary set of possible universes. They then started speaking as if speculating about this “landscape” were some proper business of physicists.


This makes no sense. It is the business of science to study reality, not imaginary possibilities. The only case in which it makes sense to study an imaginary universe would be to throw some light upon our own universe. For example, I might study a hypothetical universe in which the proton charge is not the very exact opposite of the electron charge, as it is in our universe. This might shed some light on how fine-tuned our universe is. But except for rare cases like these, it is a complete waste of time to speculate about imaginary hypothetical universes.

String theorists like Dijkgraaf use various verbal tricks to fool us into thinking that they are doing something more scientific than Tolkien spinning stories about the imaginary landscapes of Middle Earth. One trick they use is to call their imaginary universes “models.” Such a word is inappropriate when talking about any speculation about a hypothetical universe different from the one we observe, such as a universe with different laws or different fundamental constants. In science a model is a simplified representation of a known physical reality. So, for example, the Bohr solar system model of the atom is an example of a model. But you are not creating a model when you imagine some unobserved universe with different laws. That's just imaginative speculation, not model building.

Another trick used by Dijkgraaf is to try to make it sound like the weird speculations of string theory have become accepted by most physicists. For example, he writes the following:

The current point of view can be seen as the polar opposite of Einstein’s dream of a unique cosmos. Modern physicists embrace the vast space of possibilities and try to understand its overarching logic and interconnectedness. From gold diggers they have turned into geographers and geologists, mapping the landscape in detail and studying the forces that have shaped it. The game changer that led to this switch of perspective has been string theory.

With this kind of talk you would think that string theory has taken over physics, wouldn't you? But that didn't happen. During the late 1980's there was talk about how string theory was going to take over physics. But as we can see in the diagram below (from a physics workshop), the popularity of string theory has plunged since about 1990. String theory is now like a weird little cult in the world of theoretical physicists, not at all something which most physicists endorse.
physics papers
Another verbal trick used by Dijkgraaf is to use metaphors that might make us think that string theory is talking about something more substantial than speculations about where angels fly about or how long ghosts haunt a house. So in the quote above he compares string theorists to geographers and geologists and gold diggers and mappers, who are all hard-headed down-to-earth people who deal with solid physical reality. But string theorists are not at all like such people, since string theorists deal so much with imaginary universes for which there is no evidence.

Dijkgraaf also tries to insinuate that the so-called models of string theory (a plethora of imaginary universes) are “results of modern quantum physics.” He states the following:

First of all, the conclusion that many, if not all, models are part of one huge interconnected space is among the most astonishing results of modern quantum physics. It is a change of perspective worthy of the term “paradigm shift.”

A result in physics is something established by observation or experiments. Quantum mechanics has results, but string theory has no results. It has merely ornate speculations. You don't get a paradigm shift by speculating about the unobserved.

What Dijkgraaf conveniently fails to tell us is that string theory has been a complete bust on the experimental and observational side. String theory is based on another theory called supersymmetry. Attempts to find the particles predicted by supersymmetry have repeatedly failed. There is no evidence for any version of string theory.

As for Dijkgraaf's claim that “there are no laws of physics, there's only the landscape,” this seems to be a bad case of confusing reality and the imaginary. The “landscape” of string theory is imaginary, but the laws of physics are realities that our existence depends on every minute. For example, if there were no laws of electromagnetism, none of us would last even 30 seconds. Claims like “there are no laws of physics” by a string theorist suggest that string theory is just an aberrant set of science-flavored speculations out of touch with reality. You might call it runaway tribal folklore, the tribe in question being a small subset of the physicist community. And when a string theorist speaks about an imaginary group of possible universes (what string theorists call the landscape), and says that scientists are “mapping the landscape in detail and studying the forces that have shaped it,” as if the imaginary “landscape” were real, it again seems to be a case of confusing the real and the imaginary. Similarly, a Skyrim player might get so lost in the video game's fantasy that he might say, “There's no America, there's only Tamriel.”

Given such string theorists, it's hardly a surprise that nbcnews.com has a story entitled “Why some scientists say physics has gone off the rails.” In that story the cosmologist Neil Turok is quoted:

"All of the theoretical work that's been done since the 1970s has not produced a single successful prediction," says Neil Turok, director of the Perimeter Institute for Theoretical Physics in Waterloo, Canada. "That's a very shocking state of affairs."