In our early work in AI to implement expertise, we always included what were called 'certainty' measures, sometimes estimated with a crowdsourcing method, sometimes from gathered statistics or expert opinion. The resulting logic then attached a measured certainty to a goal (a small sketch of that idea follows the excerpt below). Here is a very important concept that is rarely considered:
Giving algorithms a sense of uncertainty could make them more ethical
Algorithms are best at pursuing a single mathematical objective—but humans often want multiple incompatible things.
by Karen Hao in MIT Technology Review (source: technical paper)
Algorithms are increasingly being used to make ethical decisions. Perhaps the best example of this is a high-tech take on the ethical dilemma known as the trolley problem: if a self-driving car cannot stop itself from killing one of two pedestrians, how should the car’s control software choose who lives and who dies?
In reality, this conundrum isn’t a very realistic depiction of how self-driving cars behave. But many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs. Assessment tools currently used in the criminal justice system must weigh risks to society against harms to individual defendants; autonomous weapons will need to weigh the lives of soldiers against those of civilians.
The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like “freedom” and “well-being,” a satisfactory mathematical solution doesn’t always exist..."
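To make my opening remark concrete, here is a minimal sketch of how such certainty measures were often handled, assuming MYCIN-style certainty factors, one classic scheme from early expert systems. The function and numbers are illustrative, not from the article or the underlying paper.

```python
# Minimal sketch of MYCIN-style certainty factors (CFs): one classic way
# early expert systems attached a measured certainty to a goal.
# Names and values are illustrative.

def combine_cf(cf_a: float, cf_b: float) -> float:
    """Combine two certainty factors (each in [-1, 1]) bearing on the same goal."""
    if cf_a >= 0 and cf_b >= 0:      # both rules support the goal
        return cf_a + cf_b * (1 - cf_a)
    if cf_a < 0 and cf_b < 0:        # both rules count against the goal
        return cf_a + cf_b * (1 + cf_a)
    # Mixed evidence: scale by distance from certainty.
    # (CFs of exactly +/-1 are treated as already certain.)
    return (cf_a + cf_b) / (1 - min(abs(cf_a), abs(cf_b)))

# Two rules each lend partial support to the same diagnosis:
print(combine_cf(0.6, 0.4))  # 0.76 -- stronger than either rule alone
```

The design point of the combination rule is that independent partial evidence strengthens belief in a goal without ever quite reaching certainty.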
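And in the spirit of the headline, here is one toy way to give an algorithm a "sense of uncertainty" about its objective. This is a hedged sketch of the general idea, not the method from the underlying paper: instead of committing to a single fixed trade-off between competing objectives, sample many plausible weightings and check whether the preferred option is stable. The options and scores below are made up for illustration.

```python
# Toy sketch (not the paper's method): rather than fix one trade-off
# weight between two competing objectives, sample many plausible
# weightings and tally which option each one prefers.
import random

# Hypothetical options scored on two competing objectives
# (higher is better on both), e.g. benefit to society vs. harm
# avoided for the individual. Numbers are invented.
options = {
    "A": (0.9, 0.2),
    "B": (0.6, 0.7),
    "C": (0.3, 0.9),
}

def preferred(weight: float) -> str:
    """Best option under a single weighted-sum objective."""
    return max(options, key=lambda k: weight * options[k][0]
                                      + (1 - weight) * options[k][1])

random.seed(0)
wins = {k: 0 for k in options}
for _ in range(10_000):
    wins[preferred(random.random())] += 1  # sample the trade-off weight

print(wins)  # no option dominates across weightings
```

When the tally is split like this, the single weighted-sum objective is revealing that the decision hinges on a trade-off humans never pinned down, and the system can flag the choice for review rather than silently committing to one answer.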