By John Silveira

Issue #109 • January/February 2008

I once wrote a science fiction novel, titled The Perfect Defense, that I never tried to sell; its first chapter appeared in the premier issue of BHM in 1989. The plot was set in the future. Mankind’s computers had revolted and were chasing what was left of humanity across intergalactic space in an effort to annihilate humans completely. The reason the computers took over, as one of the characters notes, was that, “We never had a good definition of life and didn’t realize, until it was too late, that even the earliest computers were life-forms of a sort. We just couldn’t see it because we defined life as ‘organic’ and intelligence as being comparable to ours. We had no useful definition of consciousness, until the machines revolted and began to kill us by the billions. We were wearing blinders, until it was too late.”

In my novel humanity survives, but only because humans had devised “the perfect defense.” But that’s just a novel. In reality, I’m not sure we’d survive if future computers became sentient, smart, and nasty.

How far-fetched is this?

In the past I’ve written about the possibility of global disasters that could come our way in the form of an asteroid or meteor impact, an exploding supervolcano, a worldwide pandemic, a nuclear war, and other events that could bring down civilization or even precipitate the extinction of mankind. I’ve always thought these things were possible, though not necessarily probable, within my lifetime.

Threats from computers may sound a bit far-fetched, like something out of science fiction. But people as eminent as the great British physicist Stephen Hawking have considered this question, wondering what the consequences would be if computers one day surpassed us in intellect. Another man, Bill Joy, co-founder and chief scientist of Sun Microsystems, wrote of the same prospects in the March 2000 issue of Wired. He was, frankly, pessimistic.

In 2000, the Singularity Institute for Artificial Intelligence (SIAI) was founded to examine this potential problem. SIAI’s goal is to ensure that powerful computers and computer programs are not dangerous to humanity if or when they’re created. I looked to see whether SIAI is made up of quacks and kooks, but many of the names associated with it are definitely not quacky or kooky, which lends the organization credibility.

In an article in the June 21, 1999 issue of Business Week, Otis Port wrote about the possibility of producing neurosilicon computers. They would be hybrid “biocomputers” that mate living nerve cells, or neurons, with silicon circuits.

Still sound far-fetched? Groundwork was laid for this at places like Georgia Tech and the Institute of Mathematical Sciences in Madras, India, among others. Initially, the experiments used neurons from “lower” life-forms such as spiny lobsters and mussels. But eventually the scientists built artificial neurons from electronic parts bought at Radio Shack, and those artificial neurons succeeded in fooling the real neurons into accepting them as other “real” neurons. In other words, they had created a synthetic, though primitive, nervous system.
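For the curious, here is a rough sense of what a “neuron” amounts to in this kind of work. The sketch below is a minimal “leaky integrate-and-fire” neuron written in Python, a standard textbook abstraction rather than the actual Radio Shack circuitry those researchers built; every parameter value in it is invented purely for illustration.

# A minimal leaky integrate-and-fire neuron: the membrane voltage
# leaks back toward its resting level, accumulates incoming current,
# and "fires" a spike when it crosses a threshold.
# All parameter values are illustrative, not taken from the experiments.

REST = -65.0       # resting membrane potential, in millivolts
THRESHOLD = -50.0  # voltage at which the neuron fires
RESET = -70.0      # voltage the neuron is reset to after firing
LEAK = 0.1         # fraction of the gap to rest closed each time step

def step(voltage, input_current):
    """Advance the model one time step; return (new_voltage, spiked)."""
    voltage += LEAK * (REST - voltage) + input_current
    if voltage >= THRESHOLD:
        return RESET, True   # fire a spike and reset
    return voltage, False

# Drive the neuron with a steady input current and print its spike train.
v = REST
for t in range(100):
    v, spiked = step(v, 2.0)
    if spiked:
        print(f"spike at step {t}")

Run with a steady input current, the model’s voltage climbs, crosses the threshold, and emits a regular train of “spikes”—in broad strokes, the behavior the artificial neurons had to mimic convincingly enough to fool their biological neighbors.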

Is a computer that really thinks even possible? We don’t know. But since the middle of the 20th century, predictions have been made about when we would finally create an “intelligent” computer. In the 1960s, estimates were that we’d have one within 20 years. As far back as 1950, computer genius Alan Turing estimated we’d have one by the year 2000. But the years have come and gone and, though we have faster computers, we don’t seem appreciably closer to a “thinking” and “conscious” computer. Then, of course, there are others who, for one reason or another, say it will never happen. Maybe they’re right.

But the principal problem with answering the question of whether it’s possible for a computer to think is that not only do we not yet know what makes our own brains work, we don’t even know what consciousness is. Some people in the field believe consciousness doesn’t actually exist; it’s just an illusion—whatever that means.

But let’s take the scenario where we create a computer that runs on software sophisticated enough that it can finally “think.” What happens then?

Movie computers like the HAL 9000 in 2001: A Space Odyssey, the Nestor NS-5 named Sonny in I, Robot, and Joshua in WarGames had human attributes, including human needs and desires. That’s because those movies and novels aren’t really about computers, but about us. If machines were to gain self-consciousness, they most likely wouldn’t be like us at all.

And what happens if a powerful sentient computer develops some kind of “survival instinct”? (We don’t know what causes that, either.) Would such a computer think of us as friends? Gods? The enemy? What if it either didn’t like us or perceived us as a threat? Imagine what would happen if a computer tied into the Internet, our defense systems, and millions of other computers around the world, one that could think faster than any person ever has, decided it didn’t like us. Or didn’t want us around. We’d probably never even see it coming, particularly if we didn’t recognize it as an intelligence with a survival instinct in the first place.

I’m not a technophobe or trying to cause undue alarm, but these are some of the things I think about when I’m trying to get to sleep at night. I’m an insomniac, so I do lots of thinking before I get to sleep.

I’ve been pushing other plausible threats to humanity toward the back of my mind as I consider the possibility of a future computer threat. Things like asteroids and comets, supervolcanoes, disease, and World War III, all of which I’ve written about before, would leave survivors. I’m not so sure computers would.

Look at that computer sitting atop your desk tonight: That may one day be the enemy.
