Are machines capable of making moral decisions?
Researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute are working with the U.S. Navy to answer this question.
They are exploring whether robots can learn right from wrong, and the consequences of each, an ability that would be valuable on the battlefield.
In one scenario, a robot medic is responsible for helping wounded soldiers. It is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it?
If the machine stops, a new set of questions arises. The robot assesses the Marine's condition and determines that, unless it applies traction, internal bleeding in the leg could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to inflict that pain, even in the interest of the Marine's well-being?
According to a Tufts University press release, the plan to develop moral robots begins with isolating the essential elements of human moral competence through theoretical and empirical research. Based on those results, the team will develop formal, verifiable frameworks for modeling human-level moral reasoning. The team will then implement the corresponding mechanisms for moral competence in a computational architecture.
The goal is to create a completely autonomous moral robot. According to Selmer Bringsjord, head of the Cognitive Science Department at RPI, all robot decisions would automatically go through at least a preliminary, lightning-quick ethical check using simple logics inspired by today's most advanced artificially intelligent and question-answering computers. If that check reveals a need for deep, deliberate moral reasoning, such reasoning would be fired inside the robot, using newly invented logics tailor-made for the task.
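The two-tier design Bringsjord describes, a fast screen applied to every decision plus deeper deliberation only when the screen flags a conflict, can be pictured with a minimal sketch. The Python below is purely illustrative: the project works with formal logics rather than hand-written rules, and the Action fields, the specific rules, and the function names are assumptions made for this example, not part of the researchers' architecture.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A candidate action the robot is about to take (illustrative fields)."""
    name: str
    causes_harm: bool            # e.g., applying traction causes pain
    prevents_greater_harm: bool  # e.g., traction prevents a fatal bleed
    violates_order: bool         # e.g., aborting the medication delivery


def quick_ethical_check(action: Action) -> bool:
    """Tier 1: lightning-quick screen using simple rules.

    Returns True if the action is clearly unproblematic and may proceed
    immediately; False if it needs deeper deliberation.
    """
    return not (action.causes_harm or action.violates_order)


def deliberate_moral_reasoning(action: Action) -> bool:
    """Tier 2: slower, deliberate reasoning, invoked only when the quick
    check flags a conflict. Here, a toy rule: causing harm is permissible
    only if it prevents a greater harm.
    """
    if action.causes_harm and not action.prevents_greater_harm:
        return False
    return True


def decide(action: Action) -> bool:
    """Route every decision through the two-tier pipeline."""
    if quick_ethical_check(action):
        return True
    return deliberate_moral_reasoning(action)


if __name__ == "__main__":
    traction = Action("apply traction", causes_harm=True,
                      prevents_greater_harm=True, violates_order=False)
    print(decide(traction))  # True: pain is permitted to prevent a fatal bleed
```

In the field-hospital scenario above, the quick check flags "apply traction" because it causes harm, and the deliberative tier then permits it because the harm prevents a greater one, mirroring the escalation from fast screening to deep moral reasoning that Bringsjord describes.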