See some previous work on this: proofs for specific goals and contexts.
To Build Trust In Artificial Intelligence, IBM Wants Developers To Prove Their Algorithms Are Fair
by Dan Robitzski in Futurism.com
We trust artificial intelligence algorithms with a lot of really important tasks. But they betray us all the time. Algorithmic bias can lead to over-policing in predominantly black areas; the automated filters on social media flag activists while allowing hate groups to keep posting unchecked.
As the problems caused by algorithmic bias have bubbled to the surface, experts have proposed all sorts of solutions on how to make artificial intelligence more fair and transparent so that it works for everyone.
These range from subjecting AI developers to third-party audits, in which an expert would evaluate their code and source data to make sure the resulting system doesn’t perpetuate society’s biases and prejudices, to developing tests to make sure that an AI algorithm doesn’t treat people differently based on things like race, gender, or socioeconomic class. ...
Tuesday, December 04, 2018