
Our future, our universe, and other weighty topics


Sunday, November 13, 2016

Epigenetics Cannot Fix the “Too-Slow Mutations” Problem

Recently in Aeon magazine there was an article entitled “Unified Theory of Evolution” by the biologist Michael Skinner. The article starts out by pointing out some problems in Neo-Darwinism, the idea that natural selection and random mutations explain changes in species or the origin of species. The article says this:

One problem with Darwin’s theory is that, while species do evolve more adaptive traits (called phenotypes by biologists), the rate of random DNA sequence mutation turns out to be too slow to explain many of the changes observed...Genetic mutation rates for complex organisms such as humans are dramatically lower than the frequency of change for a host of traits, from adjustments in metabolism to resistance to disease. The rapid emergence of trait variety is difficult to explain just through classic genetics and neo-Darwinian theory.... And the problems with Darwin’s theory extend out of evolutionary science into other areas of biology and biomedicine. For instance, if genetic inheritance determines our traits, then why do identical twins with the same genes generally have different types of diseases? And why do just a low percentage (often less than 1 per cent) of those with many specific diseases share a common genetic mutation? If the rate of mutation is random and steady, then why have many diseases increased more than 10-fold in frequency in only a couple decades? How is it that hundreds of environmental contaminants can alter disease onset, but not DNA sequences? In evolution and biomedicine, the rates of phenotypic trait divergence is far more rapid than the rate of genetic variation and mutation – but why?

As interesting as these examples are, they are merely the tip of the iceberg if you are talking about cases in which biological functionality arises or appears too quickly to be accounted for by assuming random mutations. The main case of such a thing is the Cambrian Explosion, a sudden proliferation of new animal forms in the fossil record about 540 million years ago, in which a large fraction of the existing phyla appear within a geologically brief window. Instead of a slow progression in which more complex things appear gradually over a span of hundreds of millions of years, we see in the fossil record many dramatically new types of animals appearing quite suddenly.

The other main case of functionality appearing too quickly to be accounted for by random mutations is the relatively sudden appearance of the human intellect. The human population about 1 million years ago was very small. This article tells us that 1.2 million years ago there were fewer than 30,000 individuals in the population. The expected number of new mutations arising in a population each generation is roughly proportional to the population size, which means the smaller the population, the fewer new mutations appear in it. So when the population size is very small, the predicted supply of new mutations is very low. But suddenly, about 100,000 or 200,000 years ago, humanity seems to have gained a dramatic increase in brain power and intellectual functionality. Such a thing is hard to plausibly explain by mutations, given the very small number of mutations that should have occurred in such a small population.
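To get a rough feel for why a small population supplies so few mutations, consider a back-of-the-envelope sketch. All of the figures below (per-base mutation rate, genome size, population sizes) are illustrative assumptions rather than quotations from any particular study; the point is only that the expected supply of brand-new mutations each generation scales with the number of individuals, so a tiny population has very little raw material for selection to work with.

```python
# Rough illustration: the expected supply of brand-new mutations per generation
# scales with population size. All numbers are illustrative assumptions.

MUTATION_RATE_PER_BASE = 1.2e-8    # assumed per-site, per-generation mutation rate
GENOME_SIZE = 3.2e9                # assumed haploid genome size in base pairs
PLOIDY = 2                         # two copies of the genome per individual

def new_mutations_per_generation(population_size):
    """Expected number of brand-new mutations arising across the whole population."""
    per_newborn = MUTATION_RATE_PER_BASE * GENOME_SIZE * PLOIDY
    return population_size * per_newborn

for n in (30_000, 300_000, 3_000_000):
    print(f"population {n:>9,}: about {new_mutations_per_generation(n):,.0f} new mutations per generation")
```

With figures of this order each newborn carries only on the order of 70 brand-new mutations, so halving the population halves the expected mutation supply for natural selection to sift through.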

But Skinner tries to suggest there is something that might help fix this “too-slow mutations” problem in Neo-Darwinism. The thing he suggests is epigenetics. But this suggestion is mainly misguided. Epigenetics cannot do the job, because it is merely a kind of “thumbs up or thumbs down” type of system relating to existing functionality, not something for originating new functionality.

Skinner defines epigenetics as “the molecular factors that regulate how DNA functions and what genes are turned on or off, independent of the DNA sequence itself.” One of the things he mentions is DNA methylation, “in which molecular components called methyl groups (made of methane) attach to DNA, turning genes on or off, and regulating the level of gene expression.” Gene expression refers to whether, and how strongly, a particular gene is actually used in the body.

The problem, however, with epigenetics is that it does not consist of detailed instructions or even structural information. Epigenetics is basically just a bunch of “on/off” switches relating to information in DNA.
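Here is a minimal sketch of that “switches, not instructions” point, using made-up gene names purely for illustration. The epigenetic layer in this toy model is just a set of boolean flags attached to genes the genome already contains; flipping a flag can silence or reactivate an existing gene, but no combination of flag-flips can conjure up a gene that is not already there.

```python
# Toy model: epigenetic marks as on/off flags layered over a fixed set of genes.
# The gene names are hypothetical placeholders, not real loci.

genome = {"gene_A": "...ATCGGC...", "gene_B": "...TTAGCA...", "gene_C": "...GGCATT..."}

# Methylation-like marks: True = silenced ("red X"), False = expressed ("green check")
silenced = {"gene_A": False, "gene_B": True, "gene_C": False}

def expressed_genes():
    return [gene for gene in genome if not silenced[gene]]

print(expressed_genes())      # ['gene_A', 'gene_C']

silenced["gene_C"] = True     # an epigenetic change: an existing gene gets switched off
print(expressed_genes())      # ['gene_A']

# There is no flag we can flip to produce a "gene_D" the genome does not contain:
# the switches only regulate information that is already there.
```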

Here is an analogy. Imagine there is a shelf of library books at a public library. A librarian might use colored stickers to encourage readers to read some books, and avoid other books. So she might put a little “green check” sticker on the spines of some books, and a little “red X” sticker on the spines of other books. The “green check” sticker would recommend a particular book, while the “red X” sticker would recommend that you avoid it.


Perhaps such stickers would have a great effect on which books were taken out by library patrons. Such stickers are similar to what is going on with epigenetics. Just as the “red X” sticker would instruct a reader to avoid a particular book, an epigenetic molecule or molecules may act like a flag telling the body not to use a particular gene.

But these little “green check” and “red X” markers would not explain any sudden burst of information that seemed to appear in too short a time. For example, suppose there was a big earthquake at 10:00 AM, and then at 11:00 AM there appeared a book on the library shelf telling all about this earthquake, describing every detail of it and its effects. No explanation involving these little “green check” and “red X” stickers could possibly account for such an “information appearing too fast” paradox.

Similarly, epigenetics may help explain whether some piece of functionality is or is not used by a species, but it does nothing to explain how that functionality appeared so fast. Epigenetics is making some valuable and interesting additions to our biological knowledge, but it does nothing to solve the problem of biological information appearing far too quickly to be accounted for by assuming random mutations.

Another analogy we can use for epigenetics is what programmers call “commenting out” code. Given some software system such as a smartphone app, it is often easy for a programmer to turn off particular features by “commenting out” the parts of the code that implement them. So the following is a quite plausible conversation between a manager and a programmer:

Manager: Wow, the app looks much different now. Some of the buttons that used to be there are no longer there, and two of the tabs have disappeared. How did you do that so quickly?
Programmer: It was easy. I just “commented out” some of the code.

Such “commenting out” of features is similar to gene expression modification produced by epigenetics, in which there's a “let's not use this gene” type of thing going on. But the following is a conversation that would never happen.

Manager: Wow, the app looks much different now. I see there's now some buttons that lead you to new pages the app never had before, which do stuff that the app could never do before. How did you do that so fast?
Programmer: It was easy. I just “commented out” some of the code.

The programmer would be lying if he said this, because you cannot produce new functionality by commenting out code. Similarly, some new biological functionality cannot be explained merely by postulating some epigenetic switch that causes some existing gene not to be expressed. That's like commenting out code, which subtracts functionality rather than adding it.
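As a concrete (and purely hypothetical) illustration of the asymmetry those two conversations point at: commenting out a line can make an existing feature vanish instantly, but no amount of commenting, in or out, can make a program do something it has no code for.

```python
# A hypothetical toy "app" with two features, used only to illustrate the point.

def show_weather():
    print("Today's weather: sunny")

def show_calendar():
    print("Your next appointment: 3 PM")

def main():
    show_weather()
    # show_calendar()   # "commented out": the calendar feature instantly disappears

main()

# Un-commenting the line restores the old feature, but nothing we comment out
# (or back in) will give the app a brand-new feature, such as stock quotes;
# that would require writing new code that does not yet exist.
```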

I can give Skinner credit for raising some interesting questions, but he does little to answer them. The problem remains that biological information has appeared way too rapidly for us to plausibly explain it by random mutations.

For every case in which random mutations produce a beneficial effect, there are many cases in which they produce a harmful effect. Long experiments exposing fruit flies to high levels of mutation-causing radiation have not produced any new species or viable structural benefits, but have produced only harm. We have so far zero cases of species that have been proven to have arisen from random mutations, and we also have zero cases of major biological systems or appendages that have been proven to have arisen from random mutations. So why do our scientists keep telling us that 1001 wonderful biological innovations were produced by random mutations?

It's rather like this. Imagine Rob Jones and his family get wonderful surprise gifts on their doorstep every Christmas, left by an anonymous giver. Now suppose there is someone on their street named Mr. Random. Mr. Random behaves like this: (1) if you invite him into your home, he makes random keystrokes on whatever computer document you were writing; (2) if you eat at his house, he'll give you probably-harmful soup made from random stuff he got from random spots in his house and backyard, including his bathroom and garage; (3) if you knock on his door and ask Mr. Random for a cup of sugar, he'll give you some random white substance, maybe sugar or maybe plaster powder or rat poison. Now imagine how silly it would be if Rob Jones were to look at those fine Christmas gifts on his doorstep, and say to himself: Let me guess who left these – it must have been Mr. Random!

Wednesday, November 9, 2016

The Best Planet to Colonize in Case of an Apocalypse Is...Earth

All of those who regard the 2016 presidential election as one of the great disasters of modern times may take slight consolation in the thought that there are much bigger disasters we could have suffered. Our planet could have been hit by a comet or an asteroid. A solar flare could have caused an electromagnetic pulse effect that could have wiped out all our electricity. The Yellowstone Park super-volcano could have erupted, burying much of North America in ash. Or a nuclear war could have started.

There are some who argue along the following lines:

We're in a cosmic shooting gallery. A comet or an asteroid could hit us at any time. Then there's the threat of nuclear war, not to mention the eventual ruinous effects of global warming. How can we protect ourselves from the risk of extinction posed by such hazards? We must go to Mars! The sooner we get started on Mars colonization, the better.

But there are some reasons for doubting that Mars colonization is our best bet to avoid the threat of extinction. One problem is the risk of a Mars landing failing. This risk seems very large in light of the fact that the European Space Agency spent many millions on a Mars lander that recently crashed on Mars, resulting in the total loss of the lander. We never see movies with a plot like this:

An asteroid is discovered in space, heading for collision with our planet. The world rushes to put together a Mars spaceship. Heroic astronauts set out on the long voyage to Mars, which they hope to colonize. When they try to land, things don't go right, and their lander crashes and burns.

But such an outcome is a distinct possibility. And what about the radiation hazard, both on Mars and during the flight to Mars? Space is filled with deadly cosmic rays, and it is very hard to build a spaceship that fully protects against such radiation. By the time astronauts get to Mars, they might have damaged brains, with the disastrous effects described in my science fiction story Mars Peril. Another possibility is that by the time the astronauts got to Mars, the radiation during the voyage over may have caused harmful mutations. Such mutations might show up as birth defects in the first generation of children born on Mars.

Then there is the fact that once astronauts got to Mars, they might still suffer great hazard from radiation. This is because the very thin atmosphere of Mars does a poor job of shielding the surface from radiation.

If we are faced with an apocalyptic threat, it would seem there is a better option than rushing to colonize Mars. The better option is to stay right here on Earth, and build underground “Earth colonies” capable of surviving any type of disaster on the surface of our planet.

It's easy to imagine a type of structure that would work well and be fairly easy to build. The algorithm could go something like this:
  1. Create a rectangular hole in the ground 30 meters deep and 20 meters wide, dumping all of the dug dirt on the sides of this hole.
  2. Drop at the bottom of this hole a steel structure about 20 meters wide.
  3. Add on top of the structure 1 or more excavation chutes allowing access to the surface.
  4. Add some solar panels that could work rather like the periscopes of submarines, capable of being withdrawn deep below the ground during times of surface upheaval, or pushed above the ground when the air above the shelter is relatively calm.
  5. Dump all of the excavated dirt on top of the steel structure.
  6. Then clear off some dirt corresponding to the top of the excavation chute and the top of the periscope-style solar panels.
If loaded with sufficient water and food, along with fuel and generators, such a structure could provide shelter for a decade or more, even on a planet that was being pulverized by a comet, an asteroid, or a nuclear war. The building of such structures could be facilitated by digging robots, which would be relatively easy to make.

Of course, such structures would be too hard to make to save a large fraction of humanity in case of an apocalyptic event. But they would serve well to preserve a small fraction of the human race to ride out the years of environmental hell caused by the apocalypse. In most cases of apocalyptic events, the destructive surface effects will last only several years before things start to slowly normalize.

Given the radiation problem on Mars, it might be necessary to build underground Mars bases to protect Martian colonists from cosmic rays. When you go to the Wikipedia article on “Colonization of Mars,” you immediately see a drawing of a proposed Mars base that is largely underground. But if you're going to be building underground structures, why not just build them here on Earth? Don't answer, “Because you could use fancy hydroponic technology to grow crops underground,” because the same technology could be used in underground shelters on Earth. For the cost of one Mars mission moving 40 colonists to Mars, you could probably build underground shelters for 100,000 humans.
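For what it is worth, here is the kind of back-of-the-envelope comparison being gestured at. Every figure below is an invented, illustrative assumption (no official cost estimate is being quoted); the only point is that sheltering a person underground on Earth plausibly costs orders of magnitude less than transporting and sheltering that same person on Mars.

```python
# Back-of-the-envelope comparison. ALL figures are invented, illustrative
# assumptions, not quotations from any agency or study.

mars_mission_cost = 100e9            # assumed cost of one crewed Mars colonization mission, in dollars
mars_colonists = 40                  # colonists moved by that mission (the post's example)
shelter_cost_per_occupant = 1e6      # assumed cost to build and stock underground shelter space per person

cost_per_mars_colonist = mars_mission_cost / mars_colonists
people_sheltered_for_same_budget = mars_mission_cost / shelter_cost_per_occupant

print(f"Assumed cost per Mars colonist: ${cost_per_mars_colonist:,.0f}")
print(f"People sheltered underground for the same budget: {people_sheltered_for_same_budget:,.0f}")
```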

You might think that people would go crazy living underground, but it is easy to imagine some tricks that could be used to make things tolerable. For example, we can imagine a large central room with a dome-shaped ceiling. Using projections, lighting tricks, and some vegetation, such a room could be made to simulate being outdoors during various times of day and various seasons, providing a somewhat outdoorsy ambiance to people sheltering underground.  

I'll admit that underground shelters on Earth have zero glamour, which makes them different from the high glamour of a Mars colonization mission. But in terms of bang-for-the-buck, terrestrial underground shelters beat shelters on Mars hands down.

Another idea for coping with an apocalypse (without going to Mars) is the idea of recolonization stations that I discuss here. This is the idea of putting up specially designed space stations intended to be occupied for a decade or more, with the inhabitants of the station then returning to our planet, using escape capsules built into the station. This would not be as cost-effective as underground shelters, but would probably still be much less expensive than trying to colonize Mars.

A recolonization station 

Saturday, November 5, 2016

How Could 50-Year-Old Memories Be Stored in the Shifting Sands of Synapses?

I may compare the mighty fortress of materialism to a castle that is under siege. The walls that protect the castle are being breached by a wide variety of findings and observations, such as findings about cosmic fine-tuning and observations such as near-death experiences. So how does a dedicated defender of this materialist paradigm try to defend his besieged castle? He builds new walls to shore up the castle's defenses. But sometimes these walls are built of pure speculation, and are therefore no more solid than walls of gossamer, the stuff that spider webs are made of.

Some examples can be found in the world of neurological theory. One breach in the wall around the castle of materialism is the fact of rapid molecular turnover in synapses. The most popular theory about the storage of memories in the brain has been that memories are stored as what are called synaptic weights. But those weights are built from protein molecules, and those molecules have short lifetimes of only a few weeks. This paper finds that synaptic proteins turn over at a rate of 0.7% per hour, which works out to roughly 17% per day. Such turnover (which involves the replacement of protein molecules at a rapid rate) should quickly erase any information stored in synapses.
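To make the arithmetic explicit: if the quoted 0.7%-per-hour figure is taken at face value, and turnover is treated as a simple exponential loss with nothing preserving the stored information (an idealization), then after about a month less than 1 percent of the original molecules would remain. A minimal sketch:

```python
# Surviving fraction of the original synaptic protein molecules, assuming a
# constant 0.7%-per-hour turnover rate and no information-preserving replacement
# (an idealized exponential decay).

HOURLY_TURNOVER = 0.007

def surviving_fraction(days):
    return (1 - HOURLY_TURNOVER) ** (days * 24)

for days in (1, 7, 30):
    print(f"after {days:>2} days: {surviving_fraction(days) * 100:.2f}% of the original molecules remain")
```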

Based on what we know about molecular turnover, it would seem that the brain should not be able to store memories for longer than a month. But instead humans can remember things well for 50 years or longer. How can that possibly be, if memories are stored in synapses?

The issue is a crucial one, because if your brain is incapable of storing your very long-term memories, it means they must exist elsewhere, presumably in some place (such as a human soul or some spiritual reality or infrastructure) incompatible with materialistic assumptions.

There are a few theories designed to overcome this molecular turnover difficulty, theories that are called synaptic plasticity maintenance theories. But these theories are super-speculative affairs that strain credulity in many ways. One idea is the idea of bistable or bimodal synapses. The speculation goes something like this:

Maybe there is some set of molecules that acts like an “on/off switch,” in the sense that there can be two different states. So when you learn something, maybe some molecules that act like an on/off switch get switched on. Then when some of those molecules start to disappear, because of rapid molecular turnover, maybe there's some kind of “feedback mechanism” that switches the set of molecules back to its original state. So it's kind of like a little square on an SAT test form that is penciled in, and then, after the rain washes away the pencil mark, something acts to restore the little pencil mark that was filled in.

This ornate speculation is extremely unbelievable. There is also no good evidence that it is true, and the evidence actually stands against such an idea. Below is a quote from a scientific paper:

Despite extensive investigation, empirical evidence of a bistable distribution of two distinct synaptic weight states has not, in fact, been obtained....In addition, a demonstration of synaptic bistability would require not only finding two distinct synaptic strength states but also finding that a set of different protocols for LTP induction (e.g., different patterns of stimuli, or localized application of pharmacological agents) commonly switched synaptic weights between the same two stable states. Such a demonstration has not been attempted. In addition, modeling suggests that stochastic fluctuations of macromolecule numbers within a small volume such as a spine head are likely to destabilize steady states of biochemical positive feedback loops, causing randomly timed state switches.... The weight distribution of Song et al... is based on measurements of several hundred excitatory postsynaptic potential (EPSP) amplitudes and appears to particularly disfavor the bimodal hypothesis.

What the above paragraph is basically saying is that there is no good evidence for the idea of bistable or bimodal synapses, and that there is good evidence for rejecting such an idea. There is an additional reason for rejecting the idea, not mentioned in the quote above. It is the fact that complex information like human memories could never be biologically stored in a system consisting of nothing but binary “on/off” switches of the kind the bistable or bimodal synapses idea imagines.

Take the simple example of a visual image you see and remember. Each pixel or dot that makes up the image has a color value. Such a color value could conceivably be represented biologically by a synapse strength that varies across a range such as 1 to 16 or 1 to 256, but it could not be represented by a setup consisting of mere “on/off” switches. Computers can store information in binary form, but no plausible mechanism has been described by which a natural biological system could store memories in a binary form.
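Returning to the “stochastic fluctuations” objection raised in the quoted paper, that objection can be illustrated with a toy simulation. The sketch below is not any published model; it is just a minimal self-activating positive-feedback loop, simulated with a standard Gillespie-style algorithm, with parameters invented by hand. Whether and how often the switch flips in a given run depends on those toy parameters and on the run length, but the general point stands: when molecule counts are as small as those in a spine head, nothing pins such a chemical switch permanently in one state.

```python
import random

# Toy Gillespie-style simulation of a self-activating "bistable switch" molecule
# confined to a tiny volume. All parameters are invented for illustration; this
# is not a model of any specific synaptic protein.

B, V, K, H, D = 0.2, 4.0, 15.0, 4, 0.1   # basal production, feedback strength, Hill constant, Hill exponent, decay rate

def production_rate(n):
    # Positive feedback: the more molecules present, the faster new ones are made.
    return B + V * n**H / (K**H + n**H)

def count_state_switches(n0, t_max=50_000.0, threshold=12):
    n, t = n0, 0.0
    switches, in_high_state = 0, n0 > threshold
    while t < t_max:
        up, down = production_rate(n), D * n
        t += random.expovariate(up + down)               # waiting time until the next reaction
        n += 1 if random.random() < up / (up + down) else -1
        if (n > threshold) != in_high_state:             # crossed between the low and high states
            in_high_state = not in_high_state
            switches += 1
    return switches

random.seed(1)
print("state switches, starting low :", count_state_switches(n0=2))
print("state switches, starting high:", count_state_switches(n0=40))
```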

So the ultra-speculative idea of bistable or bimodal synapses is a bust as an explanation for how your brain might be able to store memories for 50 years despite rapid molecular turnover. Another major attempt to explain such a thing is a cluster theory of synaptic stability. The theory was presented in this paper. The speculative idea is that replacement molecules are much more likely to be inserted at spots near where information is already stored.

Imagine a very simple case of a 10 by 10 grid of 100 cells, in which some information is stored in the grid. Imagine there is a black square shape stored in the center of this grid, formed by an 8 by 8 block of 64 black cells. Then imagine the grid cells are being randomly erased by molecular turnover. But suppose that, instead of the replacement marks landing at random positions, new black marks are much more likely to be inserted next to cells that are already black. Then the stored information might decay less rapidly.

That doesn't sound too unreasonable. But when we take a look at the details of the paper, we find that it makes quite an outrageous assumption. It states, “The primary effect of this implementation is that the insertion probability at a site with many neighbors (within a cluster or on its boundary) is orders of magnitude higher than for a site with a small number of neighbors.” The phrase “orders of magnitude” means something like 100 times, 1000 times, or 10,000 times. To assume that is to “cook the books” in a quite ridiculous way, like some gambler assuming that he will be 1000 times more successful than the gambler next to him at the roulette table. There is no reason why we would see such gigantic discrepancies in the positions where molecular turnover occurred.

The paper shows simulations designed to show that under these absurdly implausible assumptions, information could be preserved despite molecular turnover. The simulations show a little 7 by 7 square grid evolving over 1000 time steps. Information is preserved over 1000 time steps, but is not preserved over 2000 time steps.

There are three problems here: (1) the assumption that insertion occurs “orders of magnitude” more often near existing storage receptors is absurdly unrealistic and biased; (2) the simulation period is too short, leaving the author with the claim merely that such a “meta-stable network” could last as long as a year (we actually need something that would store information for 50 years); (3) the information being tested in the simulation is too simple, being the type of simplest-possible shape (a square) that works best under such a simulation, rather than a more complicated shape that would tend not to work as well.

If a similar simulation were attempted with a more complicated shape, such as a Y shape or a P shape, the information would not be well-preserved.  Of course, what we actually store in our memories is vastly more complicated than such test shapes.  
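A toy simulation along these lines makes the point easy to check. The sketch below is emphatically not the published model; it is a simplified stand-in with invented parameters, in which each turnover event removes one occupied grid cell and re-inserts a new one with probability heavily weighted toward cells that already have occupied neighbors. It simply reports how much of each original pattern still overlaps itself after heavy turnover, so a compact square can be compared against a thin, irregular “Y” shape.

```python
import random

# Toy turnover simulation on a 10 by 10 grid, loosely in the spirit of the
# cluster model discussed above. This is NOT the published model; the bias
# factor and step count are invented for illustration.

SIZE = 10
BIAS = 50.0    # extra insertion weight per occupied neighbor (an "orders of magnitude" style bias)

def neighbors(r, c):
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr or dc) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]

def turnover_step(occupied):
    occupied.remove(random.choice(sorted(occupied)))           # turnover: one unit is lost
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if (r, c) not in occupied]
    weights = [1.0 + BIAS * sum(site in occupied for site in neighbors(r, c)) for r, c in empties]
    occupied.add(random.choices(empties, weights=weights)[0])  # biased re-insertion

def surviving_overlap(pattern, steps=2000):
    occupied = set(pattern)
    for _ in range(steps):
        turnover_step(occupied)
    return len(occupied & pattern) / len(pattern)

square = {(r, c) for r in range(1, 9) for c in range(1, 9)}                  # a compact 8x8 block
y_shape = ({(r, 4) for r in range(4, 9)} | {(r, r) for r in range(5)}
           | {(r, 8 - r) for r in range(5)})                                 # a thin, irregular "Y"

random.seed(0)
print("square : fraction of original cells still present:", round(surviving_overlap(square), 2))
print("Y shape: fraction of original cells still present:", round(surviving_overlap(y_shape), 2))
```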

There is another reason why these synaptic plasticity maintenance theories are futile. Even if you were somehow to explain how information could be preserved despite rapid molecular turnover in synapses, you would still have the problem of instability on a larger structural level.

We are told that what are called dendritic spines are storehouses of synaptic strengths. A person believing memories are stored in synapses will sometimes think of these dendritic spines as being rather like words in a paragraph, words storing our memories.



 The little "leaves" shown here are dendritic spines

But how long do these dendritic spines last? In the hippocampus of the brain, they last for less than two months. In the cortex of the brain, they last for less than two years. This study found that dendritic spines in the hippocampus last for only about 30 days. This study found that dendritic spines in the cortex of mouse brains have a half-life of only 120 days.
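Taking the cortical half-life figure at face value, the arithmetic is stark. A minimal sketch, assuming simple exponential loss with a 120-day half-life and no replacement process that preserves the stored information (an idealization): roughly 1.5 percent of the original spines would remain after two years, and effectively none after 50 years.

```python
# Fraction of an original population of dendritic spines still present after a
# given time, assuming exponential loss with a 120-day half-life and no
# replacement process that preserves stored information (an idealization).

HALF_LIFE_DAYS = 120

def spines_remaining(years):
    return 0.5 ** (years * 365 / HALF_LIFE_DAYS)

for years in (1, 2, 10, 50):
    print(f"after {years:>2} years: {spines_remaining(years):.2e} of the original spines remain")
```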

So it's rather like this. Imagine you are driving on the highway, and you see a car filled with papers, with papers blowing out of the car's open windows at a constant rate. That is like the information loss that would seem to be caused by rapid molecular turnover. Then suppose you watch the car crash into a tree and burst into flames. That's like the loss of information caused by the short lifetimes of dendritic spines. Now suppose someone creates a theory that maybe someone in the car copied down the information from all the papers blowing out the windows. That utterly contrived and implausible theory is like the synaptic plasticity maintenance theories I have described. But such theoretical ingenuity is futile, because it cannot explain how the information could be preserved after the car crashes and burns. Similarly, the synaptic plasticity maintenance theories are futile because they can't explain how we could have memories lasting for 50 years despite dendritic spines that last no longer than two years.

Our scientists need to wake up and smell the coffee of their own research findings about the brain. Such findings imply that 50-year-old memories cannot be stored in synapses, and (as discussed here) there's no other plausible storage place where the brain could store them. Our minds must therefore be something much more than just the product of our brains. My 50-year-old memories may be stored in a soul, or some other mysterious consciousness reality, but they cannot be stored in my synapses, which are not a stable platform for permanent information storage. 

Postscript: I mentioned here the short lifetimes of dendritic spines, but didn't mention that synapses themselves don't last very long. Below is a quote from a scientific paper:

A quantitative value has been attached to the synaptic turnover rate by Stettler et al (2006), who examined the appearance and disappearance of axonal boutons in the intact visual cortex in monkeys.. and found the turnover rate to be 7% per week which would give the average synapse a lifetime of a little over 3 months.

 After writing this post, I found the important scientific paper "The demise of the synapse as the locus of memory: A looming paradigm shift?" by Patrick C. Trettenbrein.  This scientist makes quite a few points similar to my points in this post, and then makes this very astonishing confession: "To sum up, it can be said that when it comes to answering the question of how information is carried forward in time in the brain we remain largely clueless." 

Tuesday, November 1, 2016

What Is Increasing Human Intelligence?

The Flynn effect is a well-documented, gradual increase in performance on intelligence tests. The increase has been roughly three IQ points per decade, and has seemingly been occurring since at least the 1930s. Some of the implications are startling -- for example, that before many decades have passed, the average person living will have an intelligence level comparable to that of the average Harvard freshman today.

There have been various attempts to naturally explain the Flynn effect, but none have been very convincing. One attempted explanation has been that nutrition has been better in recent decades. But James Flynn has pointed out that there was a steady growth of IQ scores among Dutch people between 1952 and 1982, even though those taking the test around 1962 should have suffered from worsened nutrition as children during World War II (there was a Dutch famine in 1944). Also, there is little evidence that very many US children suffered from malnutrition between 1940 and 1970. So it's not like we can say, “Only in recent decades have American children started to eat properly.”

People trying to account for the Flynn effect usually consider only the period from 1930 onward. But the apparent increase in human intelligence since 1930 may be only one facet of a larger mystery of unaccountable increases in human intelligence, a mystery that may stretch back many thousands of years.

Consider the blossoming of human intelligence that occurred long ago. Rather suddenly, humans started to grow crops, and not too long after that, humans started to create cities. Before long we had things such as the glories of Greek sculpture, the philosophy of Plato, the mathematical works of Euclid, and the architectural and organizational achievements of the Roman Empire. But we cannot explain the intelligence behind such things by using natural selection as an explanation. As Alfred Russel Wallace (the co-founder of the theory of natural selection) pointed out in the nineteenth century, natural selection can only explain features that are needed for an organism to survive in the wild.

So we have a general mystery that exceeds the mystery of the Flynn effect. The mystery is: why has human intelligence increased at various times in ways we cannot explain?

I will now suggest a highly unconventional hypothesis to explain the Flynn effect. The Flynn effect may be evidence that human intelligence is being gradually increased by some external reality that is a partial or major source of human consciousness. Such a reality may be gradually increasing human intelligence to help us cope with an increasingly complex world, or to help us fulfill some human destiny that requires greater human intelligence.

To understand this hypothesis, I must first explain what is meant by neural reductionism, and why there are strong reasons for rejecting this assumption. Neural reductionism is a theory of the mind and brain that you have probably heard advanced many times. It is the idea that mind or intelligence is purely a product of the brain. A neural reductionist believes that your brain generates your intelligence rather like your liver secretes bile.

But there are quite a few reasons for doubting this simple theory. One reason is that we cannot plausibly account for very long-term human memories through any neurological explanation, mainly because (as discussed here) very rapid molecular turnover in the brain should make the brain an unsuitable substrate for a 50-year storage of memories, or even a storage of memories lasting longer than two years. Another reason is that the physician John Lorber documented many cases of people who functioned well even though the great majority of their brains were destroyed by disease. Another reason is that neural reductionism cannot account for psychic phenomena such as near-death experiences, out-of-body experiences, and extra-sensory perception (the latter something very well-demonstrated in convincing laboratory experiments such as those done by Professor Joseph Rhine, which have never been successfully debunked).

Suppose we consider possibilities other than neural reductionism. It may be that our consciousness and intelligence come largely or mainly from some mysterious external reality outside of our brains.

Let's imagine a 10-year-old child who enters some data into a smartphone. If you ask the child where that data is stored, the child will say something like: “Why in the smartphone, of course – where else could it be?” But depending on the smartphone app being used, the data may not be stored in the smartphone. It could be the app uses a wi-fi connection to connect to an external web server that connects to a relational database server which stores the data the child has entered into the smartphone. But the child knows nothing of such an unseen infrastructure, so of course when she is asked where her data is stored, she answers that it is in the smartphone. Similarly, our long-term memories and intelligence may depend on some mysterious consciousness and information storage infrastructure that is outside of our brains. But since a scientist knows nothing of such an infrastructure, when he is asked where our memories are stored, he answers: “In the brain, of course – where else could it be?”
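For readers who want the analogy spelled out, here is a tiny hypothetical sketch of such a client. The URL and function names are invented for illustration; the point is that the note the user types never lives on the device at all, yet from the user's point of view it is simply “in the phone.”

```python
import json
import urllib.request

# Hypothetical sketch of a note-taking client that stores nothing on the device.
# The server URL and its API are invented purely for illustration.

SERVER = "https://example.com/api/notes"    # hypothetical remote storage service

def save_note(user_id, text):
    payload = json.dumps({"user": user_id, "text": text}).encode()
    request = urllib.request.Request(SERVER, data=payload,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)          # the note is written to the remote database

def fetch_notes(user_id):
    with urllib.request.urlopen(f"{SERVER}?user={user_id}") as response:
        return json.loads(response.read())   # retrieved from the server, not from the phone

# To the user the note seems to be "in the phone," but wipe the phone and the
# note is still retrievable, because the durable copy lives on the remote server.
```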

And if the child uses her smartphone to do a math problem, and you ask her, “Where was the intelligence that helped you do that?” the child will answer, “In the smartphone, of course – where else could it be?” But the actual facility that helped her may be a remote Google server thousands of miles away. Similarly, if a scientist is asked where the intelligence that led him to ponder some cosmic mystery came from, he will answer, “In the brain, of course – where else could it be?” But our minds may depend on some mysterious intelligence infrastructure outside of our bodies.


Once we start thinking along these lines, that our intelligence may come partially or largely from some mysterious external reality outside of our brains, we have opened the door to a radical new hypothesis to explain the Flynn effect. The hypothesis is this: some external reality that is the source of our consciousness may be deliberately causing a gradual increase in human intelligence. This may be to help us cope with an increasingly complex world. Or it may be to help us fulfill some great destiny that is planned for humanity. The same reality may have increased human intelligence at the time when humans first started to build cities.

The idea that human intelligence is being gradually increased by some mysterious external power seems like a reasonable hypothesis. But there may be one thing that argues against the very idea that human intelligence is gradually increasing.

I refer, of course, to the inane drivel that is the 2016 American presidential campaign. 

Postscript: The average person probably has the idea that the human brain evolved until it was large enough to allow for thinking and spirituality, at which point human culture and religion first started to emerge. The link here reports "anatomically modern" human skull remains dated to 160,000 years ago, with what a scientist describes as "full-fledged Homo sapiens features."  But humans did not start engaging in symbolic behavior until about 80,000 years ago. How do we explain that? It's as if humans got some big intelligence boost 80,000 years after hominid brains reached their largest size.