
Our future, our universe, and other weighty topics


Tuesday, July 30, 2019

Integrated Information Theory Is an Explanatory Flop

Neuroscientists have no credible explanations for the most important mental phenomena such as consciousness and memory. All that scientists have in this regard are some mangy speculations that don't hold up to scrutiny. I have often written about the deficiencies of neuroscientists' theories of memory. I will now look at the poor quality of one of the theories that is put forth as a “theory that might explain consciousness”: what is called integrated information theory.

Before discussing the theory, I must discuss the general problem with any type of theory postulating that consciousness or human mentality is a product of the brain. We realize this huge problem when we consider that a brain is something material or physical, but consciousness or mentality is something immaterial or mental. There seems nothing wrong with the general idea that a physical cause might produce a physical effect, and we know of many examples of physical causes producing physical effects (such as tsunamis producing flooding or lightning producing electrocution). There also seems nothing wrong with the general idea of a mental cause producing a mental effect (for example, the mental idea or knowledge that you are going to soon die may produce the mental effect of anxiety). But there does seem something wrong with the idea of a physical cause producing a mental phenomenon such as consciousness or imagination or understanding. To many a philosopher, the idea that some particular arrangement of cells might cause an idea to pop up seems no more reasonable than the idea that some arrangement of thoughts in your mind might conjure up a physical object.

Now I will discuss integrated information theory. The theory is presented in this paper, and summarized in this wikipedia.org article. You can expand the visual below to see some reasoning that is at the core of integrated information theory (the visual is from the wikipedia.org article). The arrangement mirrors what is presented in the paper on integrated information theory. In the left column is a list of “axioms” about consciousness, an axiom being something that is self-evidently true. In the visual, these “axioms” are described as “essential properties of every experience.” The first one is evident: that “consciousness exists.”


[Visual: integrated information theory, from the wikipedia.org article]


But others in this list of “axioms” are not so evident at all. The second “axiom” is that “Consciousness is structured: each experience is composed of phenomenological distinctions.” Most experiences do consist of multiple aspects, but it is quite possible to have an experience that is not structured, and is not composed of multiple aspects. Lying motionless on a bed with my eyes closed, I can think of nothing but the blackness of outer space (or think of  absolutely nothing at all). There is nothing structured about that, and it doesn't consist of multiple aspects.  On the right side of the visual are what are called “postulates.” These are described as “properties that physical systems (elements in a state) must have to account for an experience.”

What is going on here is just a big example of begging the question, of assuming what was supposed to be proved. The creator of this scheme has simply taken it for granted that some physical system with some set of properties can give rise to a conscious experience. Before looking at such a set of properties, we should reject the underlying assumption. It actually seems there is no physical system that can account for conscious experiences. We can think of no reason why neurons or any groups of neurons should give rise to conscious experience or self-hood.

I can give an analogy for the type of reasoning that is going on here. Suppose someone were to start trying to explain rock levitation by making a list of the set of properties that a rock levitation incantation would need to succeed. The person might start listing properties such as (1) a specification of which rock to levitate; (2) a specification of how high the rock should be levitated; (3) an appeal to some deity or to spirits of the dead. He might then claim that his rock levitation incantation met all of these properties, and that this explains how he was able to levitate a rock. But before giving much scrutiny to this “set of properties that a rock levitation incantation would need to have,” we should veto such a list at the very beginning, saying, “It's not fair to present such a list unless you have first proven that rocks can be levitated by incantations.” And similarly, to the person who would start listing “properties that physical systems (elements in a state) must have to account for an experience,” we should not at all concede the possibility of such a thing, but demand first that someone show why we should believe that a physical system could ever account for a conscious experience.

I won't go too much into the details of the “postulates” in the second column of the visual, other than to note that they are as doubtful as some of the axioms in the first column from which these postulates supposedly derive. The paper consists largely of specialized jargon and doubtful mathematics, perhaps to provide some imposing sounds and sparkles to impress the easily impressed.

Integrated information theory claims that all systems with integrated information have some level of consciousness, and that the brain has high consciousness because it has lots of integrated information in the form of stored memories. But there is actually no evidence that integrated information exists in the brain. The claim that memories are stored in brains is simply a speech custom of scientists, a dogma they keep stating without any proof. There are extremely strong reasons for thinking that this dogma cannot be true. They include the following:

  1. Synapses (the supposed storage place of memories) are made of proteins with an average lifetime of only a few weeks, which is only a thousandth of the maximum length of time that humans can remember things.
  2. There is so much noise in synapses and neurons that accurate recall of large amounts of information should be impossible if you remembered by reading things from your brain.
  3. There is no credible theory of how a brain could instantly retrieve a memory, such as when you instantly recall information about someone after merely hearing their name. Finding such a memory instantly in the brain would be like instantly finding a needle in a haystack. 
  4. No one has discovered any actual example of learned knowledge or episodic memories in any bit of brain tissue.
  5. Brains seem to have neither a mechanism for writing a memory nor a mechanism for reading a memory.
  6. No one has conceived of a detailed theory explaining how human knowledge could ever be translated into neural states so that it could be stored in brains.

We know there is genome information in each neuron, but that is not integrated information. It's just massively redundant information, with each DNA molecule in a neuron duplicating the same information. As for the claim of integrated information theory that all systems with integrated information are partially conscious, such a claim has absurd implications, such as the implication that your thermostat must be kind of conscious or that your smartphone must be kind of like a person.

Another defect of integrated information theory is that it makes no sense to try to present a "theory of consciousness" in isolation, because the thing that needs to be explained is human mentality in all its aspects, and consciousness is only one of those aspects.  Given all the many aspects and capabilities of the human mind (including memory, imagination, volition, emotion, abstract reasoning, understanding, self-hood, and many others),  trying to explain the human mind with a mere "theory of consciousness" is rather like advancing a theory of the earth's origin which merely explains the earth's shape rather than also explaining the earth's mass, position and composition.  

I can imagine a computer exercise that helps to illustrate how there is no sense at all in the idea that integrated information produces consciousness. Let us imagine that you have a website that is rather like Wikipedia. Suppose the website consists of 10 million pages, each of which has text scanned in from an old encyclopedia. Now, you might want to make this information more integrated. So you might write a computer program that scans through all these web pages, creating hyperlinks that add the integration. After the program finishes running on all these pages, there would then be many millions of hyperlinks integrating the information. So, for example, whenever a reader came to a page with a title of “World War II,” all of the people names, place names, and weapon names would appear as hyperlinks that a reader can click to go to a page discussing that particular person, place or weapon. And in each of those articles there would be a link back to the general article on World War II.

Now, imagine that after testing this program, you then run it on all of your 10 million web pages, creating millions of hyperlinks, and vastly increasing the amount of integrated information. According to integrated information theory, you then would have made your website more conscious than it was before, because now the information is a lot more integrated. But that's nonsensical. There is not the slightest reason to suppose that your website would be any more conscious than it was before. And if you had a massive website with a trillion pages, and you ran such a program to create a quadrillion hyperlinks, creating a vast mountain of integrated information, there would still not be the slightest reason for thinking that this vast leap in integrated information would have made your website the slightest bit more conscious than it was before.
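The hyperlinking program imagined above can be sketched in a few lines. This is a minimal toy version, not any real program: the page titles, the sample texts, and the function names (`add_hyperlinks`, `count_links`) are all invented for illustration. It wraps each mention of another page's title in a hyperlink, and then counts the links, a crude tally of how "integrated" the pages have become.

```python
def add_hyperlinks(pages):
    """Return a new dict of pages in which every occurrence of another
    page's title has been replaced with an HTML hyperlink to that page."""
    linked = {}
    for title, text in pages.items():
        for other in pages:
            # Link mentions of other pages, but not a page's own title.
            if other != title and other in text:
                text = text.replace(
                    other, '<a href="/{0}">{0}</a>'.format(other))
        linked[title] = text
    return linked

def count_links(pages):
    """Count the hyperlinks across all pages -- a crude measure of how
    'integrated' the information is."""
    return sum(text.count('<a href=') for text in pages.values())

# A two-page "encyclopedia" with invented contents, for illustration only.
pages = {
    "World War II": "Churchill led Britain during the war.",
    "Churchill": "Churchill was prime minister in World War II.",
}
linked = add_hyperlinks(pages)
print(count_links(pages))   # prints 0: no links before running the program
print(count_links(linked))  # prints 2: each page now links to the other
```

Run on a trillion pages instead of two, the same program would create a vast web of cross-references, but the only thing that changes is an integer returned by `count_links`; nothing in the process gives the pages the slightest trace of experience.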

Scott Aaronson (not to be confused with the cartoonist Scott Adams) has stated the following about integrated information theory (IIT):

"In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly 'conscious' at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data.  Moreover, IIT predicts not merely that these systems are 'slightly' conscious (which would be fine), but that they can be unboundedly more conscious than humans are."

Information and integrated information are things that can be produced by conscious agents.  Neither information nor integrated information do anything to explain why consciousness exists. Similarly, people make art, but art doesn't do anything to explain why people exist.  

You may object to what I have stated about memory and the brain, pointing out that a few days ago there was an article in the esteemed journal Nature which stated the following:

"Researchers know that memories are encoded in the mammalian brain when the strength of the connection between neurons increases. That connection strength is determined by the amount of a particular type of receptor found at the synapse."

This sounds very confident, but when we read the quote in context, we should lose all confidence in the claim. Here is the full quote:

"Researchers know that memories are encoded in the mammalian brain when the strength of the connection between neurons increases. That connection strength is determined by the amount of a particular type of receptor found at the synapse. Known as AMPA receptors, the presence of these structures must be maintained for a memory to remain intact. 'The problem, Hardt says, 'is that none of these receptors are stable. They are moved in and out of the synapse constantly and turn over in hours or days.' "

The latter part of the quote should cause us to lose all confidence in the first part of the quote.  Given such instability in synapses, they cannot be the storage place for memories that last for decades.  The author of the article is a science journalist, and science journalists have a history of uncritically regurgitating dubious claims by theorists and researchers. 

Rather than making that factually inaccurate claim that "researchers know that memories are encoded in the mammalian brain when the strength of the connection between neurons increases," along with the claim that this occurs at synapses, our science journalist should have noted that this very month in the journal Nature there appeared a scientific paper disputing such a claim. The paper stated the following: "There are nonetheless both theoretical arguments and experimental data against the idea that long-term memory resides in synapses with learning-altered weights," and also, "Whether or not memory is necessarily stored at synapses is still unclear." 
