A new paper has been published presenting the results of the Moral Machine experiment, a survey that asked people how self-driving cars should prioritise lives in variations of the classic ‘trolley problem’.
The researchers analysed responses from 2.3 million people around the world. Some interesting findings:
- Most people, regardless of culture or demographics, spared humans over pets and groups of people over individuals.
- Nations fell into three broad clusters: West, East, and South. Nations in geographic or cultural proximity usually displayed similar moral preferences.
- Residents’ opinions correlated with their country’s social and economic characteristics.
- For example, participants from collectivist cultures like China and Japan were less likely to spare the young over the old.
These results may have implications for how self-driving cars are programmed, and regulated, in different countries.
Read the full article in Nature: Self-driving car dilemmas reveal that moral choices are not universal
Analysis:
It is interesting to note that autonomous vehicles may have to be programmed differently to reflect the moral norms of the societies in which they operate. This survey shows that the rules are not universal across cultures. Policymakers and the industry could hold further public discussions to understand how the public might respond to different designs of the technology and to different policies. At a broader level, the study reveals the need to think more carefully about the ethics of artificial intelligence, as there is no straightforward way to define a code of ethics.
Some commentators are skeptical about how useful the study results are for shaping policy and the development of autonomous vehicles. A law professor remarked that, in real life, there would be few instances where a vehicle would have to choose between hitting two different types of people.
Watch this video to learn how the researchers designed the experiment and what they found:
Questions for further personal evaluation:
- Why is there a need for a code of ethics for artificial intelligence?
- Do you think this sort of experiment is useful in the discussion? Why or why not?
Useful vocabulary:
- ‘collectivist culture’: a culture that emphasizes family and group goals above individual needs or desires.
Picture credits: https://www.itchronicles.com/artificial-intelligence/the-impact-of-driverless-technology-on-independent-driving-jobs/attachment/close-up-of-a-businesspersons-hand-giving-car-key-to-robot-on-grey-background/
