The rise of self-driving cars promises a revolution in transportation, but it also raises complex ethical questions, particularly: how should self-driving cars be programmed to make decisions in unavoidable accident scenarios? A recent study delves into public perception of these moral algorithms, revealing a striking paradox between what we expect of autonomous vehicles in general and what we would personally choose.
Researchers presented participants with ethical dilemmas: scenarios in which a self-driving car must choose between two harmful outcomes, such as swerving into a barrier to save multiple pedestrians at the cost of the car’s occupant. The scenarios varied in several details, including the number of lives saved, whether the decision was made by the car’s computer or a human driver, and whether the participant imagined themselves as the occupant or as a bystander.
The study’s findings indicated broad agreement that self-driving vehicles should be programmed to minimize casualties, a seemingly utilitarian approach. This endorsement came with a caveat, however: participants expressed less confidence that autonomous vehicles would actually be programmed this way in practice, and, more tellingly, preferred that others use utilitarian self-driving cars rather than choosing such vehicles for themselves.
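To make the utilitarian notion concrete, here is a minimal sketch of such a decision rule: pick whichever manoeuvre causes the fewest casualties, with no regard for who the casualties are. This is an illustration only, not the study’s code; the class, function, and scenario names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A candidate manoeuvre and the casualties it would cause."""
    name: str
    occupant_deaths: int
    pedestrian_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.occupant_deaths + self.pedestrian_deaths


def utilitarian_choice(actions: list[Action]) -> Action:
    """Pick the action that minimizes total casualties, whoever they are."""
    return min(actions, key=lambda a: a.total_deaths)


# The classic dilemma: stay the course and hit several pedestrians,
# or swerve into a barrier and sacrifice the single occupant.
dilemma = [
    Action("stay_course", occupant_deaths=0, pedestrian_deaths=10),
    Action("swerve_into_barrier", occupant_deaths=1, pedestrian_deaths=0),
]
print(utilitarian_choice(dilemma).name)  # -> swerve_into_barrier
```

A self-protective car would differ only in its objective, for example by weighting occupant deaths far more heavily than pedestrian deaths; the paradox the study uncovers is about which objective people actually want in their own car.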
This reveals a significant ethical paradox: while people conceptually agree with programming self-driving cars to prioritize the greater good, even at the cost of the occupant’s life, they are personally less inclined to own or ride in a vehicle programmed with such self-sacrificing algorithms. This raises critical questions about the practical implementation and public acceptance of ethical frameworks in autonomous vehicle programming.
Beyond this core dilemma, the researchers highlight further complexities. Consider situations involving uncertainty: should a car swerve to avoid a motorcycle if saving its own passenger is less certain than saving the motorcyclist? Should its programming differ when children are on board, given their longer life expectancy and their lack of agency in being there? And if manufacturers offer different “moral algorithm” options, does the buyer bear responsibility for the algorithm’s decisions?
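One way to frame the uncertainty question is as a comparison of expected harm: weight each possible casualty by the probability that a given manoeuvre actually causes it. The sketch below extends the earlier rule in that direction; the probabilities and the child-weighting factor are purely illustrative assumptions, not figures from the paper.

```python
def expected_casualties(outcomes: list[tuple[float, float]]) -> float:
    """Expected harm for a manoeuvre, given (probability, weighted_deaths) pairs."""
    return sum(p * deaths for p, deaths in outcomes)


# Illustrative numbers only: swerving endangers the car's passenger with one
# probability, staying the course endangers the motorcyclist with another.
swerve = expected_casualties([(0.3, 1.0)])  # 30% chance the passenger dies
stay = expected_casualties([(0.8, 1.0)])    # 80% chance the motorcyclist dies
print("swerve" if swerve < stay else "stay")  # a purely utilitarian rule swerves here

# The "children on board" question amounts to asking whether some lives should
# carry extra weight in the sum, e.g. a factor greater than 1 for a child passenger.
CHILD_WEIGHT = 1.5  # hypothetical weighting, not something the study proposes
swerve_with_child = expected_casualties([(0.3, CHILD_WEIGHT)])
print(swerve_with_child < stay)  # the weighting changes the numbers, not the dilemma
```

Every constant in such a calculation is a moral choice rather than an engineering one, which is why the authors argue these algorithms call for experimental ethics rather than ad hoc defaults.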
These are not just theoretical thought experiments. As autonomous technology rapidly advances and millions of self-driving cars move closer to reality, the ethical programming of these vehicles demands serious and urgent consideration. Understanding how self-driving cars are programmed to behave ethically is not just a technical challenge but a societal imperative for a safe and morally consistent future of transportation.
Reference: Bonnefon, Shariff & Rahwan, “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?”, arxiv.org/abs/1510.03346