Very good piece. A difficult problem. Yes, as we get closer to embedding ethics in systems and machines, how do we address this? By just warning the humans in the loop, or can we actually close the loop to include ethical reasoning? As we progress, that reasoning has to happen more and more quickly.
Automating ethics
Machines will need to make ethical decisions, and we will be responsible for those decisions.
By Mike Loukides, O'Reilly
We are surrounded by systems that make ethical decisions: systems approving loans, trading stocks, forwarding news articles, recommending jail sentences, and much more. They act for us or against us, but almost always without our consent or even our knowledge. In recent articles, I've suggested the ethics of artificial intelligence itself needs to be automated. But my suggestion ignores the reality that ethics has already been automated: merely claiming to make data-based recommendations without taking anything else into account is an ethical stance. We need to do better, and the only way to do better is to build ethics into those systems. This is a problematic and troubling position, but I don't see any alternative.
The problem with data ethics is scale. Scale brings a fundamental change to ethics, and not one that we're used to taking into account. That's important, but it's not the point I'm making here. The sheer number of decisions that need to be made means that we can't expect humans to make those decisions. ...
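As a way of picturing what "closing the loop" might mean in practice, here is a minimal, purely illustrative sketch in Python: an automated loan-approval pipeline where an ethical check runs on every decision and only flagged cases are escalated to a human reviewer. None of the names (Applicant, EthicsPolicy, score_applicant, the thresholds) come from the article; they are assumptions made up for the sketch, not a real or recommended design.

```python
# Hypothetical sketch: ethical reasoning embedded in the automated loop,
# rather than a human reviewing every decision. All names and thresholds
# are illustrative assumptions, not from the article.

from dataclasses import dataclass


@dataclass
class Applicant:
    income: float
    debt: float
    group: str  # protected attribute, retained here only for auditing


@dataclass
class Decision:
    approved: bool
    escalate_to_human: bool
    reason: str


def score_applicant(a: Applicant) -> float:
    """Stand-in for a trained credit model; returns a 0..1 approval score."""
    return max(0.0, min(1.0, (a.income - a.debt) / max(a.income, 1.0)))


class EthicsPolicy:
    """Encodes the constraints the system enforces on every automated decision."""

    def __init__(self, approve_threshold: float = 0.6, grey_zone: float = 0.1):
        self.approve_threshold = approve_threshold
        self.grey_zone = grey_zone
        self.approvals_by_group: dict[str, list[bool]] = {}

    def review(self, a: Applicant, score: float) -> Decision:
        approved = score >= self.approve_threshold
        # Borderline scores are escalated to a human instead of decided silently.
        borderline = abs(score - self.approve_threshold) < self.grey_zone
        # Track group-level approval rates so disparities can trigger an audit.
        self.approvals_by_group.setdefault(a.group, []).append(approved)
        reason = "borderline score" if borderline else "clear score"
        return Decision(approved=approved, escalate_to_human=borderline, reason=reason)


def decide(a: Applicant, policy: EthicsPolicy) -> Decision:
    return policy.review(a, score_applicant(a))


if __name__ == "__main__":
    policy = EthicsPolicy()
    d = decide(Applicant(income=80_000, debt=20_000, group="A"), policy)
    print(d)  # Decision(approved=True, escalate_to_human=False, reason='clear score')
```

The point of the sketch is only the shape of the loop: the ethical check runs inside the automated path at full scale, and human attention is reserved for the cases the policy itself flags.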
Friday, April 26, 2019