Thursday, February 6, 2014

Singularity Near? Hardware Says “Yes,” Software Says “No”

According to futurists such as Ray Kurzweil, around the year 2045 we will see an intelligence explosion in which artificial intelligence vastly exceeds human intelligence. This predicted time is known as the Singularity. Kurzweil argues that science and technology are progressing at an accelerating rate, and that in light of this acceleration it makes sense to expect the Singularity by the year 2045.

The main argument given for this technological singularity is based on the rapid rate of progress in hardware. For decades, Moore's Law has held true: roughly every 18 months, the number of transistors that can be put on a chip doubles. Because of the hardware advances described by Moore's Law, the hand-held device in your pocket has more computing power than a refrigerator-sized computer had decades ago.

However, there are some reasons why we should be skeptical about using Moore's Law as a basis for concluding that a technological singularity is a few decades away. One reason is that Moore's Law may stop working in future years, as we reach limits of miniaturization. But a bigger reason is that a technological singularity would require not just our hardware but also our software to increase in power by a thousand-fold or more – and our software is not progressing at a rate remotely close to the very rapid rate described by Moore's Law.

How fast is the rate of progress in software? It is relatively slow. We have nothing like a situation in which our software gets twice as good every 18 months. Software seems to progress at a rate that is only a small fraction of the rate at which hardware progresses.

Let's look at the current state of software and software development. Are we seeing many breathtaking breakthroughs that revolutionize the field? No, we aren't. What's surprising is how relatively little things have changed in recent years. The most popular languages used for software development are Java, C#, C, HTML, JavaScript, PHP, Python, Perl, and Ruby. All of these languages were developed in the 1980s and 1990s (although C# wasn't released until the year 2000). Not even 10 percent of programming is done in a language developed in the past 12 years (I'm not counting jQuery, which is a JavaScript library). As for data manipulation languages, everyone is still using mainly SQL, a miniature language whose origins go back to the 1970s. The process of creating a program has not changed much in 25 years: type some code, run it through a compiler, fix any errors, build the program, and see if it does what you were trying to make it do.
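To see how little the data-manipulation layer has changed, consider that the SQL a developer writes today would look entirely familiar to a developer from decades ago. A minimal sketch using Python's built-in sqlite3 module (the table and data here are invented for illustration):

```python
# SQL written in 2014 looks much as it did decades ago.
# The "books" table and its rows are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE books (title TEXT, year INTEGER)")
cur.executemany("INSERT INTO books VALUES (?, ?)",
                [("Manuscript A", 2012), ("Manuscript B", 2014)])
cur.execute("SELECT title FROM books WHERE year >= ?", (2013,))
rows = cur.fetchall()
print(rows)  # → [('Manuscript B',)]
conn.close()
```

Nothing in those statements would have surprised a database programmer in 1990.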

Does this give you any reason for thinking that software is a few decades away from all the monumental breakthroughs needed for anything like a technological singularity? It shouldn't.

To get some kind of idea about why hardware advances alone won't give us what we need for a technological singularity, let's imagine some software developers working in New York City. The developers are hired by a publishing firm that is tired of paying human editors to read manuscripts. The developers are told to develop software that will be able to read in a word-processing file and determine whether the manuscript is a likely best seller. Suppose that the publishing firm tells the developers that they will have access to basically unlimited computer speed, unlimited disk storage, and unlimited random-access memory.

Would this hardware bonanza mean that the developers would be able to quickly accomplish their task? Not at all. In fact, the developers would hardly know where to begin. After doing the easiest part (something that rejects manuscripts having many grammatical or spelling errors), the developers would be stuck. They would know that the work ahead of them would be almost like constructing the Great Pyramid of Cheops. First they would have to construct one layer of functionality that would be hard to create. Then they would have to build upon that layer another layer of functionality that would be even harder. Then there would be several other layers of functionality that would need to be built – each building upon the previous layer, and each layer far harder to create than the previous one. That is how things work when you are accomplishing a difficult software task. All of this would take tons of time and manual labor. Having unlimited memory and super-fast hardware would help very little. As the developers reached the top of this pyramid, they would find themselves with insanely hard tasks to accomplish, even given all the previous layers that had been built.
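The "easiest part" mentioned above might look something like the sketch below: a crude filter that rejects a manuscript whose rate of unrecognized words exceeds a threshold. The tiny word list and the threshold are invented for illustration; a real system would use a full dictionary.

```python
# A naive first-layer filter: reject text whose rate of unknown words
# exceeds a threshold. The word list and threshold are hypothetical.
KNOWN_WORDS = {"the", "cat", "sat", "on", "mat", "a", "dog", "ran"}

def reject_for_spelling(text, max_error_rate=0.2):
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return False
    errors = sum(1 for w in words if w not in KNOWN_WORDS)
    return errors / len(words) > max_error_rate

print(reject_for_spelling("The cat sat on the mat."))   # → False
print(reject_for_spelling("Teh czt szt onn teh mzt."))  # → True
```

Notice that this bottom layer is trivial, yet every layer above it – judging plot, character, and marketability – has no comparably simple recipe, no matter how fast the hardware running it.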

The following visual illustrates the difficulty of achieving a technological singularity in which machine intelligence exceeds human intelligence. For the singularity to happen, each pyramid must be built. But each pyramid is incredibly hard to build, mainly because of monumental software requirements. The bottom layer of each pyramid is hard to build; the second layer is much harder; and the third layer is incredibly hard. By the time you get up to the fourth and fifth layers, you have tasks that seem to require many decades or centuries to accomplish.

[Image: pyramid diagram – technological singularity]


Now some may say that things will get a lot faster, because computers will start generating software themselves. It is true that there are computer programs called code generators capable of creating other computer programs (I've written several such tools myself). But as a general rule code generators aren't good for doing really hard programming. Code generators tend to be good only for doing fairly easy programming that requires lots of repetitive grunt work (something like making various input forms after reading a database schema). Really hard programming requires a level of insight, imagination, and creativity that is almost impossible for a computer program to produce. If we ever produce artificial intelligence rivaling human intelligence, then there will be a huge blossoming of computers that create code for other computers. But it's hard to see how automatic code generation will help us very much in getting to such a level in the first place.
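The kind of "grunt work" generation described above can be sketched in a few lines: read a schema and emit an HTML input form. The schema format here is invented for illustration, but it conveys why this is mechanical work rather than creative work.

```python
# A sketch of a simple code generator: given a (hypothetical) table
# schema, emit an HTML input form. Purely repetitive translation work.
def generate_form(table, columns):
    lines = [f'<form action="/{table}" method="post">']
    for name, sql_type in columns:
        input_type = "number" if sql_type == "INTEGER" else "text"
        lines.append(f'  <label>{name}</label>')
        lines.append(f'  <input type="{input_type}" name="{name}">')
    lines.append('  <input type="submit">')
    lines.append('</form>')
    return "\n".join(lines)

html = generate_form("books", [("title", "TEXT"), ("year", "INTEGER")])
print(html)
```

The generator works because every decision it makes is a mechanical lookup from the schema; nothing in it resembles the insight needed for the hard layers of the pyramid.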

There was a lot of interest several years back in using genetic algorithms to get computer programs to generate software. Not much has come from such an approach. Genetic algorithms have not proved to be a very fruitful way to squeeze creativity out of a computer.
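A minimal sketch shows both how a genetic algorithm works and why it yields little creativity: the classic toy version evolves a random string toward a fixed target, but the fitness function already encodes the answer. The target, alphabet, and parameters below are chosen purely for illustration.

```python
# A toy genetic algorithm: evolve a random string toward a known target.
# The fitness function already "knows" the answer, which hints at why
# this approach produces search, not creativity.
import random

TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(0)
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Keep the top 10 unchanged; fill the rest with mutated copies of them.
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(90)]

best = max(population, key=fitness)
print(generation, best)
```

The algorithm climbs toward the target quickly, but only because we told it exactly what "better" means in advance – which is precisely what we cannot do for genuinely hard software.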

Will quantum computers help this “slow software progress” problem? Absolutely not. Quantum computers would just be lightning-fast hardware. To achieve a technological singularity, there would still be an ocean of software work to be done, even if you have infinitely fast computers. 

Will we be able to overcome this difficulty by scanning the human brain, somehow figuring out how the software in our brain works, and then using that knowledge to create software for our machines? That's the longest of long shots. No one really has any clear idea of how thoughts or knowledge or rules are represented in the brain, and there is no reason to think we will be able to unravel that mystery in this century.

To summarize, hardware is getting faster and faster, but software is still stumbling along pretty much as it has for the past few decades, without any dramatic change. To get to a singularity, you need both hardware and software to make a kind of million-mile journey. Hardware may well be able to make such a journey within a few decades. But I see little hope that software will be able to make such a journey within this century.

If we ever get to the point where most software is developed in a fundamentally different way – some way vastly more productive than typing code line by line – then we may be able to say that we're a few decades away from a technological singularity.
