The World Commission on the Ethics of Scientific Knowledge and Technology of UNESCO (COMEST) recently published a report on the ethical issues raised by robotics.
Robotic technologies are evolving quickly and becoming increasingly “autonomous”. This raises the question: who exactly should bear ethical and moral responsibility for a robot's behavior?
Robotic technologies were originally intended primarily for military and industrial purposes, but today they are employed in a wide range of applications: transport, education, health, personal assistance, domestic robotics, and more.
Furthermore, robotics increasingly relies on artificial intelligence based on algorithms. So-called “cognitive” robots are capable of learning from past experience and recalibrating their own algorithms accordingly. Because their behavior is therefore not completely predictable, it calls for serious ethical reflection.
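A minimal sketch (not from the report) may help show why such learned behavior is hard to predict: even a very simple agent that recalibrates its decision rule from experience ends up with preferences that depend on the particular history it happened to see, not just on its code. The class and reward model below are illustrative assumptions only.

```python
import random

# Hypothetical sketch: a "cognitive" agent that recalibrates its own
# decision rule from past experience. After training, its choices depend
# on the reward history it observed, so its behavior cannot be fully
# predicted from the source code alone.
class LearningAgent:
    def __init__(self, actions, learning_rate=0.1, exploration=0.1):
        self.values = {a: 0.0 for a in actions}  # estimated value per action
        self.lr = learning_rate
        self.eps = exploration

    def act(self):
        # Occasionally explore at random; otherwise exploit the best estimate.
        if random.random() < self.eps:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Recalibrate: nudge the action's estimated value toward the observed reward.
        self.values[action] += self.lr * (reward - self.values[action])

agent = LearningAgent(["slow", "fast"])
for _ in range(100):
    a = agent.act()
    reward = random.gauss(1.0 if a == "slow" else 0.5, 1.0)  # noisy feedback
    agent.learn(a, reward)
print(agent.values)  # learned preferences vary from run to run
```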
For example, with autonomous cars, the main ethical issue involves the decision-making process. How should the vehicle be programmed to behave in the event of an unavoidable accident? Should it try to minimize the number of casualties, even if it means sacrificing the occupants, or should it protect them at all costs? Should these issues be regulated by law, standards or codes of conduct?
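To make the dilemma concrete, here is a hypothetical sketch, in no way a real vehicle API, of the two programming choices the question describes. The `Maneuver` structure, casualty counts, and policy names are invented for illustration; the point is only that the two policies can select different actions in the same situation.

```python
from dataclasses import dataclass

# Illustrative assumption: each available maneuver comes with estimated
# casualty counts for occupants and bystanders.
@dataclass
class Maneuver:
    name: str
    occupant_casualties: int
    bystander_casualties: int

def minimize_total_casualties(options):
    # Utilitarian policy: fewest casualties overall, occupants included.
    return min(options, key=lambda m: m.occupant_casualties + m.bystander_casualties)

def protect_occupants(options):
    # Occupant-first policy: protect passengers, then minimize other harm.
    return min(options, key=lambda m: (m.occupant_casualties, m.bystander_casualties))

options = [
    Maneuver("swerve into barrier", occupant_casualties=1, bystander_casualties=0),
    Maneuver("stay in lane", occupant_casualties=0, bystander_casualties=2),
]
print(minimize_total_casualties(options).name)  # -> "swerve into barrier"
print(protect_occupants(options).name)          # -> "stay in lane"
```

The two policies diverge on the same inputs, which is precisely why the report asks whether law, standards, or codes of conduct should settle the choice rather than each manufacturer.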
The report proposes reflecting on ethical values such as human dignity, autonomy, privacy, beneficence, and justice, as well as on the ethical principles of “do not harm” and responsibility.
Some extracts from the report:
“Dignity is inherent to human beings, not to machines or robots. Therefore, robots and humans are not to be confused even if an android robot has the seductive appearance of a human, or if a powerful cognitive robot has learning capacity that exceeds individual human cognition. Robots are not humans; they are the result of human creativity and they still need a technical support system and maintenance in order to be effective and efficient tools or mediators.”
“The ‘do not harm’ principle is a red line for robots. As many technologies, a robot has the potentiality for ‘dual-use’. Robots are usually designed for good and useful purposes (to diminish harmfulness of work, for example), to help human beings, not to harm or kill them (…) If we are morally serious about this ethical principle, then we have to ask ourselves whether armed drones and autonomous weapons should be banned.”
“Deterministic robots, and even sophisticated cognitive robots, cannot take any ethical responsibility, which lies with the designer, manufacturer, seller, user and the State. Therefore, human beings should always be in the loop and find ways to control robots by different means (e.g. traceability, off-switch, etc) in order to maintain human moral and legal responsibility.”
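The control means the report mentions, traceability and an off-switch, can be pictured with a short sketch. Everything below is a hypothetical illustration, not a real robotics API: the wrapper logs every action (traceability) and checks a human-controlled stop flag before acting (off-switch).

```python
import time

# Illustrative sketch only: keeping a human "in the loop" by wrapping a
# robot's control path with an auditable log and an emergency stop.
class SupervisedController:
    def __init__(self):
        self.enabled = True   # the human-controlled off-switch
        self.log = []         # traceability: a record of every action taken

    def emergency_stop(self):
        self.enabled = False  # a human operator can halt the robot at any time

    def execute(self, action):
        if not self.enabled:
            raise RuntimeError("robot halted by human operator")
        self.log.append((time.time(), action))  # keep an auditable trail
        print(f"executing: {action}")

controller = SupervisedController()
controller.execute("move forward")
controller.emergency_stop()
# controller.execute("move forward")  # would now raise RuntimeError
```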
The report’s human-in-the-loop approach is consistent with recommendations currently being drafted by European and national authorities, which specify that robots should not operate independently of human control, nor be allowed to direct changes in human labor.