In preparation for an upcoming talk and effort that touches on ethics regarding autonomous systems, such as vehicles, but not necessarily restricted to them, I had reason to look at the now-classic 'trolley problem', which is nicely covered in some detail in the Wikipedia entry on the Trolley Problem:
Implications for autonomous vehicles:
Problems analogous to the trolley problem arise in the design of software to control autonomous cars.[12] Situations could occur in which a potentially fatal collision appears to be unavoidable, but in which choices made by the car's software, such as whom or what to crash into, can affect the particulars of the deadly outcome. For example, should the software value the safety of the car's occupants more, or less, than that of potential victims outside the car?[33][34][35][36][37] ... "
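As a toy illustration, and purely hypothetical rather than any vendor's actual logic, that "more, or less" question can be framed as a single weight in a cost function over candidate maneuvers. A minimal Python sketch, with made-up maneuvers and harm estimates:

from dataclasses import dataclass

# Hypothetical sketch: choosing among unavoidable-collision maneuvers by
# minimizing weighted expected harm. All names and numbers here are invented
# for illustration; this is not any real vehicle's decision logic.

@dataclass
class Maneuver:
    name: str
    occupant_harm: float   # estimated harm to people inside the car (0..1)
    external_harm: float   # estimated harm to people outside the car (0..1)

def choose_maneuver(options, occupant_weight=1.0):
    # occupant_weight > 1 values occupants more than outsiders; < 1 values
    # them less. Setting this weight is exactly the trolley-style question.
    return min(options, key=lambda m: occupant_weight * m.occupant_harm + m.external_harm)

options = [
    Maneuver("brake straight", occupant_harm=0.2, external_harm=0.6),
    Maneuver("swerve left",    occupant_harm=0.6, external_harm=0.1),
]
print(choose_maneuver(options, occupant_weight=1.0).name)  # swerve left  (0.7 < 0.8)
print(choose_maneuver(options, occupant_weight=3.0).name)  # brake straight (1.2 < 1.9)

The sketch's only point is that the ethical debate collapses into the value of one tunable parameter, and someone has to choose it.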
See also the MIT work called the Moral Machine (https://en.wikipedia.org/wiki/Moral_Machine):
A platform called Moral Machine[38] was created by MIT Media Lab to allow the public to express their opinions on what decisions autonomous vehicles should make in scenarios that use the trolley problem paradigm. Analysis of the data collected through Moral Machine showed broad differences in relative preferences among different countries.[39] Other approaches make use of virtual reality to assess human behavior in experimental settings.[40][41][42][43] However, some argue that the investigation of trolley-type cases is not necessary to address the ethical problem of driverless cars, because the trolley cases have a serious practical limitation: they would need to be a top-down plan in order to fit the current approaches to addressing emergencies in artificial intelligence.[44] ... "
Sunday, April 19, 2020