Our complex society
increasingly depends on software code, and that code is growing ever
more complex and unmanageable. In large companies it is already
common for there to be large software systems that no single person
understands well. In a system with more than 100,000 lines of code,
one person will often know certain aspects of the system and someone
else will know other parts of it; but there is no person who
understands the full system.
As demand for
software functionality grows, software engineers sometimes resort to
using code generators: software tools that can quickly generate many
lines of code, code that is often very hard for a human to read. By
using a code generator, a developer may add 10,000 lines of new code
to a software system in minutes, without understanding what was
generated. A rough rule of thumb among programmers is: if I didn't
write the code, I don't understand it.
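To give a flavor of how fast such code accumulates, here is a minimal sketch in Python (a hypothetical generator with made-up field names, not the output of any real tool):

    FIELDS = ["customer_id", "order_date", "ship_state"]  # imagine hundreds more

    def generate_accessors(fields):
        # Emit one getter function per field; with hundreds of fields,
        # thousands of lines appear that no human ever wrote by hand.
        lines = []
        for f in fields:
            lines.append(f"def get_{f}(record):")
            lines.append(f"    return record.get('{f}')")
            lines.append("")
        return "\n".join(lines)

    print(generate_accessors(FIELDS))  # this output gets pasted into the system

Multiply the field list by a few hundred, and the system quietly gains 10,000 lines that its own developer has never read.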
Many advanced
computer programs use what are called neural networks, or deep
learning. When such code is used, the resulting software is largely
incomprehensible to humans. Decisions end up being driven by
extremely complex data, often distributed across many layers of a
network. In complex implementations, neither the computer itself nor
any human can say how the data determines the decision. It's what
programmers call a “black box.”
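To see why, here is a minimal sketch of how such a network reaches a decision (hypothetical, with random numbers standing in for trained weights):

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((64, 100))   # layer 1: 6,400 learned numbers
    W2 = rng.standard_normal((1, 64))     # layer 2: 64 more

    def decide(x):
        hidden = np.maximum(0, W1 @ x)    # no single unit has a human meaning
        score = W2 @ hidden
        return "approve" if score.item() > 0 else "deny"

    x = rng.standard_normal(100)          # some input, say a loan application
    print(decide(x))                      # the "why" is smeared across 6,464 numbers

There is no line anywhere in that program that states the rule being applied; the decision simply is the arithmetic.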
There is a strong
possibility of a future complexity crisis in which humans find they
have created software systems of unfathomable intricacy that they can
no longer understand. We can imagine a certain level of complexity –
call it Complexity Level 10 – that is the most complex level that
any human can understand. It is all too possible that humans might
build their way up to Complexity Level 11 or Complexity Level 12 or
some higher level. There would then be a possibility of an “overshoot
and collapse” situation, in which computer systems around the
world start to break down because they have become too complex for
anyone to understand, maintain or fix.
You don't have to
have lots of bugs for a complex system to fail. NASA's Mars Climate
Orbiter was lost in 1999 because one piece of ground software
reported thruster data in imperial units when the rest of the system
expected metric units. In a case like that, it wasn't good enough
that 99.999% of the code worked right.
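The flavor of such a failure can be sketched in a few lines of Python (illustrative only; this is not the actual flight or ground software):

    LBF_S_TO_N_S = 4.448222  # newton-seconds per pound-force second

    def report_impulse(impulse_lbf_s):
        # The errant line: the raw imperial value is passed onward to a
        # system that expects metric units.
        return impulse_lbf_s  # BUG: should be impulse_lbf_s * LBF_S_TO_N_S

    print(report_impulse(100.0))  # downstream code misreads this as 100 N*s

One wrong line, and a spacecraft is gone.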
On May 6, 2010 there
occurred an event called the Flash Crash, in which the stock market
underwent a trillion-dollar dip: starting around 2:32 PM, the Dow
Jones Industrial Average fell nearly 1,000 points in a matter of
minutes. By the end of the day, the market had largely recovered. The
dramatic dip of the Flash Crash was apparently caused by program
trading, in which investment portfolios are controlled by extremely
complicated computer programs. No one is exactly sure why the Flash
Crash occurred. It seems to be an example of complex computer
programs acting in an unpredictable manner. We can only wonder
whether some future version of the Flash Crash may bring down the
financial system, or perhaps the electrical grid.
Some people are not
worried about such a possibility, because they think that
super-intelligent computers will fill in the gap. The idea can be
stated like this:
Sure,
software code will become too complex for humans to understand; but
that's no problem because our ever-more-brilliant computers will be
able to understand that code. Our computers will probably take over
the job of writing and maintaining their own software, freeing us
humans from such burdens.
But I believe we
should reject the idea that computers will become smart enough to
understand their own software code. Computers process information
and crunch numbers, but they do not currently understand a single
thing. A computer may be able to tell us
instantly when Abraham Lincoln was born, but no computer has any real
understanding of what a birth is, what a day is, what a human is, or
who Abraham Lincoln was. There is no reason to think that any future
advances will somehow give computers the understanding they now lack.
A computer that does not understand anything will not suddenly be
able to understand a little bit if we add some more lines of software
code or some more chips or processors. Thinking that a computer will
one day have understanding once you add faster processors or more
lines of code is like thinking that one day, when you get a much
better TV, you'll be able to have a child fathered by your favorite
TV character.
It seems, then, that
we will not be saved from a software complexity crisis by computers
that understand software code that has progressed beyond human
understanding. A software complexity crisis will be worsened by
short-sighted programming managers who demand more and more features
be added to software, regardless of how this makes the code more and
more difficult to maintain. We can compare such managers to real
estate developers who keep yelling, “Higher, higher, higher!” at
their architects, without worrying that their buildings might
collapse because they are built too high.
The risks from such
a software complexity crisis are great. Imagine it is the year 2030,
and you are a typical computer programmer. Computer systems around
the world may be undergoing more and more breakdowns, and your job is
to fix one of them. You take a look at the software code, and see
before you an ocean of unfathomable intricacy, perhaps a million
lines of hard-to-read code. You ask yourself: how on earth did
something like this ever come into existence? It's like the tangled
jungle of complexity that is the US Tax Code, but much worse. After
looking at just a little of the software, you feel like some ordinary
person reading a 50-page scientific paper on quantum mechanics. You
know your choice: either admit to your boss that you are hopelessly
in over your head, or cross your fingers and attempt some “blind
fix,” rather like a layman walking into a nuclear power plant and
trying to bring down rising core temperatures by fiddling with some
of the dials.
[Image: The agony of code too complex for you to understand]
Then imagine such a
situation happening in 10,000 different offices, to 50,000 “in over
their heads” programmers, and you have a taste of the software
complexity crisis that may lie ahead. I mentioned the possibility of
the financial system or the power grid failing because of such a crisis.
Another possibility is that we may upgrade nuclear weapon systems so
that they are centered around computer systems that become way too
complex to maintain or understand. A single fault in such a system
might cause a nuclear war. The movie Fail Safe depicted such a
catastrophe being triggered by the failure of a small electronic
unit, but the same thing might happen because of a single errant
line of software code. Will some
nuclear holocaust one day occur because of some computer code that
grew too complex to be manageable?