It should not be a surprise that the final lecture in the series “The Emergence of Intelligent Machines: Challenges and Opportunities” would deal with philosophical questions about moral responsibility.

The series has dealt with what life will be like when robots and artificial intelligences become common. We may soon be allowing machines to make life-and-death decisions, so it makes sense to think about what values we program into them.
Joseph Halpern, professor of computer science, concluded the series on May 1 with “Moral Responsibility, Blameworthiness and Intention: In Search of Formal Definitions,” a talk that dwelt more on philosophy than robotics.

It all begins with the much-discussed “Trolley Problem.” A runaway trolley is careening downhill toward a switch you control. If you send it to the left, five people on the track will be killed; if you send it to the right, just one person will die. In a variation, there is only one track, with five people on it, and a large man sitting on a bridge above it. If you push him off, he will land in front of the trolley and derail it, saving the five people farther ahead. Analyzing this leads to questions about intention and blame: you didn’t intend to kill the man; you intended to save the five people.

Questions like these are becoming practical as the technology advances. Laws have been proposed that would require self-driving cars to be programmed to choose property damage over injury to people. Another proposal is that a car should avoid running into a group of pedestrians even if doing so kills its own passenger. In surveys, most people thought this was a good idea, but many said they wouldn’t buy such a car.

In Japan, with its aging population, robots have been proposed to help care for the elderly. Under Asimov’s Second Law, a robot must obey human commands, except where obedience would conflict with the First Law against harming a human. What happens if someone asks a robot to help them commit suicide?

We may not agree, Halpern concluded, but we must reach a consensus about what sort of autonomy we give our machines. “Don’t leave it up to the experts,” he said.

The lecture series, although open to the public, was part of a course, CS 4732, “Ethical and Social Issues in AI.”

Halpern and Bart Selman, professor of computer science and co-creator of the course and lecture series, are co-principal investigators of the Center for Human-Compatible Artificial Intelligence, a nationwide research effort to ensure that future artificial intelligence systems act in ways aligned with human values.


Source: nanowerk
