Agreed, the process is still unclear.
Enterprises Don't Know What to Buy for Responsible AI (Dark Reading)
Organizations are struggling to procure appropriate technical tools to address responsible AI, such as consistent bias detection in AI applications.
The potential for artificial intelligence (AI) is growing, but technology that relies on real-life personal data requires responsible use of that technology, says the International Association of Privacy Professionals (IAPP).
"It is clear frameworks enabling consistency, standardization, and responsible use are key elements to AI's success," the IAPP wrote in its recent "Privacy and AI Governance Report."
The use of AI is predicted to grow by more than 25% each year for the next five years, according to PricewaterhouseCoopers. Responsible AI is a technological practice centered around privacy, human oversight, robustness, accountability, security, explainability, and fairness. However, according to the IAPP report, 80% of surveyed organizations have yet to formalize the choice of tools to assess the responsible use of AI. Organizations find it difficult to procure appropriate technical tools to address privacy and ethical risks stemming from AI, the IAPP states.
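The report cites bias detection as one of the technical tools organizations struggle to procure. As a minimal sketch of what such a check can look like (not a tool named in the article), the following computes demographic parity difference, a basic fairness metric: the gap in positive-outcome rates between two groups. All data here is hypothetical.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups.

    0.0 means parity; larger values indicate greater disparity.
    """
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model predictions (e.g., loan approvals) split by a
# protected attribute -- purely illustrative data:
approvals_group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 = 0.75 approved
approvals_group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 = 0.375 approved

gap = demographic_parity_difference(approvals_group_a, approvals_group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A real responsible-AI toolchain would run checks like this (and richer metrics such as equalized odds) consistently across applications, which is the standardization the IAPP report says most organizations have not yet formalized.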
While organizations have good intentions, they do not have a clear picture of which technologies will get them to responsible AI. In 80% of surveyed organizations, guidelines for ethical AI are almost always limited to high-level policy declarations and strategic objectives, the IAPP says.