After a Year of Tech Scandals, Our 10 Recommendations for AI (Outline Overview)
Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI
Today the AI Now Institute publishes our third annual report on the state of AI in 2018, including 10 recommendations for governments, researchers, and industry practitioners.
It has been a dramatic year in AI. From Facebook potentially inciting ethnic cleansing in Myanmar, to Cambridge Analytica seeking to manipulate elections, to Google building a secret, censored search engine for the Chinese market, to anger over Microsoft contracts with ICE, to multiple worker uprisings over conditions in Amazon’s algorithmically managed warehouses — the headlines haven’t stopped. And these are just a few examples among hundreds.
At the core of these cascading AI scandals are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and existing regulatory frameworks fall well short of what’s needed. As the pervasiveness, complexity, and scale of these systems grow, this lack of meaningful accountability and oversight — including basic safeguards of responsibility, liability, and due process — is an increasingly urgent concern.
Read the full report from the AI Now Institute.