Thursday, December 21, 2017

Is it possible to program benevolence into an AI?

Benevolence is really an emotion, just as anger, enjoyment, and the rest are.

We have no idea how to program a computer (or an AI, which is a computer program, possibly controlling some hardware) to have feelings. We have no idea how to program a computer to be self-aware, or to care about anything.

There is a debate about this. Some computer scientists think that we’ll eventually be able to program AIs to have feelings and be self-aware; other computer scientists don’t think so.

And if they do become self-aware, would that be bad? More than a few scientists think so.

Stephen Hawking, for example, has stated: “I think the development of full artificial intelligence could spell the end of the human race.”

Bill Gates, the founder of Microsoft, is equally concerned: "I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned."

And this is what Elon Musk, CEO of Tesla, a US car-maker, said: "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out."

Basically, these gentlemen are worried about a “Rise of the Machines”, or “Skynet”, or “Terminators” run amok. Seriously.

I’m in the opposite camp. So far, no matter how fast a computer computes, and no matter how cleverly it is programmed, there is not even a semblance of emotion or self-awareness. So it is not possible to extrapolate when AIs will be programmed to achieve these, if ever.

Yes, I know you’ve heard that such AIs will be created by 2045 or some such year, but these are guesses without a factual foundation.

Consider what Richard Socher has to say. He is the chief scientist at software maker Salesforce and a computer science lecturer at Stanford. Socher is ideally positioned to resolve the debate between the humanists and the Terminators. He’s firmly in the humanist camp. “There’s no reason right now to be worried about self-conscious AI algorithms that set their own goals and go crazy,” he says. “It’s just that there’s no credible research path, currently, towards that. We’re making a huge amount of progress and we don’t need that kind of hype to be excited about current AI.”

Let’s look at self-driving cars. Commercial production of such vehicles is expected to begin in just a few years. GM says they will have completely self-driving vehicles commercially available by 2020.

Programming these vehicles is a monumental task. Their top priority is to get their occupants safely from point A to point B. So they will be programmed to avoid getting into crashes. And if they are in a situation in which they are going to hit either a dog or a human pedestrian, they will be programmed to not hit the human.

Are self-driving cars programmed to care about humans? No, because they don’t ‘care’ about anything. If someone changes their programming to hit a human rather than a dog, that is what they will do.
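To make the point concrete, here is a minimal sketch of what such rule-based priorities might look like. This is hypothetical illustration, not any real vehicle’s code: the names, the options, and the cost numbers are all invented. The point is that the behavior comes entirely from an explicit cost table a programmer wrote, not from the machine “caring”; flip the numbers in the table and the car obediently does the opposite.

```python
# Hypothetical sketch of rule-based collision priorities.
# Not any real vehicle's code -- the cost table is invented for illustration.

def choose_action(options):
    """Pick the steering option whose programmed harm cost is lowest."""
    # Higher number = worse outcome. The car never "prefers" anything;
    # it just minimizes a cost a human assigned.
    harm_cost = {"human": 3, "animal": 2, "property": 1, "none": 0}
    action, _ = min(options, key=lambda opt: harm_cost[opt[1]])
    return action

# Each option pairs a steering choice with what that path would hit.
options = [("stay_course", "human"),
           ("swerve_left", "animal"),
           ("swerve_right", "property")]
print(choose_action(options))  # -> swerve_right
```

Swap the costs of "human" and "animal" and the same code hits the human instead, which is exactly the point above: the machine has no stake in either outcome.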

Furthermore, we already have military drones that can be ordered by a person to drop a bomb. Again, the drone doesn’t understand or care what it’s doing. Could someone program a tough, military robot to kill the enemy? Certainly. But it will not be because the robot ‘wants’ to kill.

We can program what looks like benevolence to us. For instance, if you observe self-driving cars long enough, you might think they were programmed to be benevolent, because they appear to be careful with human life. But they are not benevolent, nor are they hostile.

They are just machines.


--------------------------------------------------------


Tim Farage is a Professor and Graduate Advisor in the Computer Science Department at The University of Texas at Dallas. The views expressed herein are those of the author. He writes about mathematics, computer science, physics, the reconciliation between science and spirituality, Intelligent Design, and the application of Natural Law to our various systems such as education, government and economics.

Friday, August 18, 2017

Will mankind survive overpopulation, resource shortage and climate change?

I’m going to reduce the anxiety in your life. Here’s how:
  • Overpopulation is not a problem
  • We’re not running out of resources
  • “Climate Change” may be disruptive, and it may even be better for humanity
Let’s take these one at a time.
  • Overpopulation
In 2017, the world population is about 7.5 billion. Here is a UN graph of the projected population through 2100:


So it seems Earth’s population will top out this century between 9 and 10 billion people. Modern countries can easily feed, clothe, and house their citizens. Developing countries have a problem because their governments are corrupt and don’t allow their citizens freedom, especially free markets. As countries modernize, they get richer. Notice the growth of China, for instance, because its government has been allowing free enterprise.
  • Natural Resources
Reports that we are running out of natural resources have appeared for decades, perhaps even a century. Instead, we keep finding more and more natural resources. Fracking has allowed countries to inexpensively obtain more natural gas than we thought possible. And China has many untapped resources.

Resources such as aluminum, iron, manganese, chromium, etc., never get used up - they don’t leave the Earth. With the exponential advances in technology and robotics, we’ll be able to recycle these and never run out of them.

Similarly, water never leaves the Earth. It just takes technology and energy to make it potable.

The only natural resources we’ll eventually run out of are the fossil fuels - and we shouldn’t need them for energy by the end of this century, because of renewables and nuclear power.
  • “Climate Change”
Here’s a NASA graph of the current rate of climate change. This does not depend on the many non-validated climate models. It is actual satellite data:

Note the trend line of 0.16C per decade. This comes out to 1.6C per century, less than 2C. Now, with technological advances still increasing quickly, does it seem like we can’t handle a 2C increase over 100 years?

Actually, the Earth is getting greener because we are emitting more CO2 into our atmosphere. Who knows what the optimal plant/human/animal CO2 level is?

And if you’re worried about sea-level increases, here’s another graph for you from NASA:

So this gives a rate of 3.16 mm per year. That turns out to be about 12.4 inches per century, or about 1.2 inches per decade. Not quite the “20 feet” that Al Gore said would happen soon. So none of these concern me much.
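The unit conversions behind the two rates cited above are simple enough to check in a few lines. This just redoes the arithmetic on the NASA figures quoted in this post (3.16 mm of sea-level rise per year, 0.16C of warming per decade); nothing here is new data.

```python
# Sanity-checking the rate conversions from the NASA figures cited above.
MM_PER_INCH = 25.4

# Sea level: 3.16 mm/year, converted to inches per century and per decade.
sea_level_mm_per_year = 3.16
inches_per_century = sea_level_mm_per_year * 100 / MM_PER_INCH
inches_per_decade = inches_per_century / 10
print(round(inches_per_century, 1))  # -> 12.4
print(round(inches_per_decade, 1))   # -> 1.2

# Temperature: 0.16 C/decade, converted to degrees per century.
temp_c_per_decade = 0.16
print(round(temp_c_per_decade * 10, 1))  # -> 1.6
```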

What does concern me is man’s inhumanity to man. This is unlikely to destroy a significant fraction of humanity. But it sure does make life miserable for a large portion of humanity. 

-----------------------------------------------------------------------

You are welcome to comment upon this blog entry and/or to contact him at tfarage@hotmail.com