Benevolence is really an emotion, just as anger and enjoyment are.
We have no idea how to program a computer (or an AI, which is a computer program possibly controlling some hardware) to have feelings. We have no idea how to program a computer to be self-aware, or to care about anything.
There is a debate about this: some computer scientists think that we’ll eventually be able to program AIs to have feelings and be self-aware, and some don’t.
And if they do become self-aware, would that be bad? More than a few scientists think so.
Stephen Hawking, for example, has stated: “I think the development of full artificial intelligence could spell the end of the human race.”
Bill Gates, the founder of Microsoft, is equally concerned: "I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned."
And this is what Elon Musk, CEO of Tesla, a US car-maker, said: "I think we should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out."
Basically, these gentlemen are worried about a “Rise of the Machines”, or “SkyNet”, or “Terminators” run amok. Seriously.
I’m in the opposite camp. So far, no matter how fast a computer computes, and no matter how cleverly it is programmed, there is not even a semblance of emotion or self-awareness. So it is not possible to extrapolate when AIs will be programmed to achieve these things, if ever.
Yes, I know you’ve heard that such AIs will be created by 2045 or some such year, but these are guesses without a factual foundation.
Consider what Richard Socher has to say. He is the chief scientist at software maker Salesforce and a computer science lecturer at Stanford. Socher is ideally positioned to resolve the debate between the humanists and the Terminators. He’s firmly in the humanist camp. “There’s no reason right now to be worried about self-conscious AI algorithms that set their own goals and go crazy,” he says. “It’s just that there’s no credible research path, currently, towards that. We’re making a huge amount of progress and we don’t need that kind of hype to be excited about current AI.”
Let’s look at self-driving cars. Commercial production of such vehicles is expected to begin in just a few years. GM says it will have completely self-driving vehicles commercially available by 2020.
Programming these vehicles is a monumental task. Their top priority is to get their occupants safely from point A to point B, so they will be programmed to avoid crashes. And if they are in a situation in which they must hit either a dog or a human pedestrian, they will be programmed not to hit the human.
Are self-driving cars programmed to care about humans? No, because they don’t ‘care’ about anything. If someone changes their programming to hit a human rather than a dog, that is what they will do.
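To make this concrete, here is a toy sketch in Python of what such a rule might look like. Everything in it (the HARM_RANK table, the choose_swerve_target function) is hypothetical, invented purely for illustration; real autonomous-vehicle software is vastly more complicated, but the principle is the same.

    # Toy illustration only: these names are hypothetical, and real
    # self-driving software is far more complex than a lookup table.

    # Higher rank = try harder to avoid hitting it.
    HARM_RANK = {"human": 2, "dog": 1, "traffic cone": 0}

    def choose_swerve_target(unavoidable):
        """If a collision is unavoidable, hit the lowest-ranked obstacle."""
        return min(unavoidable, key=lambda obstacle: HARM_RANK[obstacle])

    print(choose_swerve_target(["dog", "human"]))   # -> dog

    # Nothing here 'cares'. Swap the two numbers and the same machine
    # 'chooses' the human instead, exactly as described above.
    HARM_RANK["human"], HARM_RANK["dog"] = 1, 2
    print(choose_swerve_target(["dog", "human"]))   # -> human

The car spares the human only because 2 is greater than 1 in that table. That is all the ‘care’ there is.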
Furthermore, we already have military drones that can be ordered by a person to drop a bomb. Again, the drone doesn’t understand or care what it’s doing. Could someone program a tough military robot to kill the enemy? Certainly. But it would not be because the robot ‘wants’ to kill.
We can program what looks like benevolence to us. For instance, if you observe self-driving cars long enough, you might think they were programmed to be benevolent, because they appear to be careful with human life. But they are not benevolent, nor are they hostile. They are just machines.
--------------------------------------------------------
Tim Farage is a Professor and Graduate Advisor in the Computer Science Department at The University of Texas at Dallas. The views expressed herein are those of the author. He writes about mathematics, computer science, physics, the reconciliation between science and spirituality, Intelligent Design, and the application of Natural Law to our various systems such as education, government and economics.
6 comments:
I found that essay of certain value, Tim--in that I had no prior experience or knowledge about AIs, other than in the most general terms. Thanks.
You're welcome. Now you don't need to worry about Terminators ruining your day.
Tim,
Thanks for the blog.
I truly enjoy it.
What puzzles me is how someone so handsome can write these thought-provoking articles.
I write good blog posts in spite of being handsome.
Thank you for publishing this fantastic article.
I've been reading for a while but I've never actually left a comment.
I've bookmarked your site and shared this on Twitter.
Thanks again for a really good article!