Thursday, December 19, 2019

An AI Transparency Risk Paradox

Risk is not thought of formally enough. The article points out that current AI methods need more data, but more data creates higher risk. In our own work in this area we looked at the valuation of data, but also at the cost and risk of storing it, moving it around, and sharing it with suppliers and tech vendors. Assets can have negative value.

The AI Transparency Paradox
By Andrew Burt  HBR 

In recent years, academics and practitioners alike have called for greater transparency into the inner workings of artificial intelligence models, and for many good reasons. Transparency can help mitigate issues of fairness, discrimination, and trust — all of which have received increased attention. Apple’s new credit card business has been accused of sexist lending models, for example, while Amazon scrapped an AI tool for hiring after discovering it discriminated against women.

At the same time, however, it is becoming clear that disclosures about AI pose their own risks: Explanations can be hacked, releasing additional information may make AI more vulnerable to attacks, and disclosures can make companies more susceptible to lawsuits or regulatory action.

Call it AI’s “transparency paradox” — while generating more information about AI might create real benefits, it may also create new risks. To navigate this paradox, organizations will need to think carefully about how they’re managing the risks of AI, the information they’re generating about these risks, and how that information is shared and protected. ... "