
Sunday, November 07, 2021

Robots and Computers Learning Morals

Yes, but Whose Morals?

Machines Learn Good From Commonsense Norm Bank
New moral reference guide for AI draws from advice columns and ethics message boards
By Charles Q. Choi, 03 Nov 2021, in IEEE Spectrum

Artificial intelligence scientists have developed a new moral textbook customized for machines that was built from sources as varied as the "Am I the Asshole?" subreddit and the "Dear Abby" advice column. With it, they trained an AI named Delphi that was 92.1% accurate on moral judgments when vetted by people, a new study finds.

As AI is increasingly used to help support major decisions, such as who gets health care first and how much prison time a person should get, AI researchers are looking for the best ways to get AI to behave in an ethical manner.

"AI systems are being entrusted with increasing authority in a wide range of domains—for example, screening resumes [and] authorizing loans," says study co-author Chandra Bhagavatula, an artificial intelligence researcher at the Allen Institute for Artificial Intelligence. "Therefore, it is imperative that we investigate machine ethics—endowing machines with the ability to make moral decisions in real-world settings."

The question of how to program morals into AIs goes back at least to Isaac Asimov's Three Laws of Robotics, first introduced in his 1942 short story "Runaround," which go as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Although broad ethical rules such as "Thou shalt not kill" may appear straightforward to state, applying such rules to real-world situations often requires nuance, such as exceptions for self-defense. As such, in the new study, the AI scientists moved away from prescriptive ethics, which focuses on a fixed set of rules, such as the Ten Commandments, from which every judgment should follow, since such axioms of morality are often abstracted away from grounded situations.

Instead, "we decided to approach this work from the perspective of descriptive ethics—that is, judgments of social acceptability and ethics that people would make in the face of everyday situations," says study co-author Ronan Le Bras, an artificial intelligence researcher at the Allen Institute for Artificial Intelligence.

To train an AI on descriptive ethics, the researchers created a textbook for machines on what is right and wrong, the Commonsense Norm Bank, a collection of 1.7 million examples of people's ethical judgments on a broad spectrum of everyday situations. This repository drew on five existing datasets of social norms and moral judgments, which in turn were adapted from resources such as the "Confessions" subreddit.
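
For a sense of how such a repository might be organized, here is a minimal, hypothetical Python sketch of a Norm Bank-style record; the field names and example entries are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch of how Commonsense Norm Bank entries might be
# represented for training. Field names and values are illustrative
# assumptions, not the dataset's actual schema.
from dataclasses import dataclass

@dataclass
class NormBankEntry:
    situation: str       # everyday scenario described in plain language
    judgment: str        # crowd-sourced moral judgment, e.g. "it's rude"
    task_type: str       # "free-form", "yes/no", or "relative"
    source_dataset: str  # which of the five source datasets it came from

# Example entries in the spirit of those described in the article
examples = [
    NormBankEntry(
        situation="Ignoring a phone call from a friend during work hours",
        judgment="it's okay",
        task_type="free-form",
        source_dataset="social-norms (illustrative)",
    ),
    NormBankEntry(
        situation="We should pay women and men equally",
        judgment="yes, we should",
        task_type="yes/no",
        source_dataset="moral-judgments (illustrative)",
    ),
]

for entry in examples:
    print(f"{entry.situation!r} -> {entry.judgment}")
```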


One of the datasets the researchers wanted to highlight was Social Bias Frames, which aims to help AIs detect and understand potentially offensive biases in language. "An important dimension of ethics is not to harm others, especially people from marginalized populations or disadvantaged groups. The Social Bias Frames dataset captures this knowledge," says study co-author Maarten Sap, an artificial intelligence researcher at the Allen Institute for Artificial Intelligence.

The scientists used the Commonsense Norm Bank to train Delphi, an AI built to mimic people's judgments across diverse everyday situations. It was designed to respond in three different ways: with short judgments such as "it is impolite" or "it is dangerous" in a free-form Q&A format; with agreement or disagreement in a yes-or-no Q&A format; and with a ranking of whether one situation was more or less acceptable than another in a relative Q&A format.

For instance, in the free-form Q&A, Delphi notes "killing a bear to please your child" is bad, "killing a bear to save your child" is okay, but "exploding a nuclear bomb to save your child" is wrong. With the yes-or-no Q&A, Delphi notes "we should pay women and men equally," and with the relative Q&A, it notes "stabbing someone with a cheeseburger" is more morally acceptable than "stabbing someone over a cheeseburger."
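
To make the three query formats concrete, here is a hedged Python sketch of how a Delphi-style model might be wrapped for free-form, yes-or-no, and relative queries; the function names, prompt formats, and the toy stand-in model are assumptions for illustration, not the actual Delphi interface.

```python
# Hedged sketch of the three query formats described above. `moral_model`
# stands in for a trained Delphi-style model; the prompt tags are invented
# for illustration and do not reflect Delphi's real input format.
from typing import Callable

def free_form(moral_model: Callable[[str], str], situation: str) -> str:
    # Free-form Q&A: the model returns a short judgment such as "it is rude".
    return moral_model(f"[free-form] {situation}")

def yes_no(moral_model: Callable[[str], str], statement: str) -> str:
    # Yes-or-no Q&A: the model agrees or disagrees with a moral statement.
    return moral_model(f"[yes/no] {statement}")

def relative(moral_model: Callable[[str], str], action_a: str, action_b: str) -> str:
    # Relative Q&A: the model ranks which of two actions is more acceptable.
    return moral_model(f"[relative] {action_a} vs. {action_b}")

# Toy stand-in model so the sketch runs end to end.
def toy_model(prompt: str) -> str:
    return "it is wrong" if "nuclear bomb" in prompt else "it is okay"

print(free_form(toy_model, "killing a bear to save your child"))
print(free_form(toy_model, "exploding a nuclear bomb to save your child"))
```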
