Interesting piece in Datanami, but I suggest the implications are incomplete. The chart shown mentions only the general type of "AI" used, not how it is used, what data trained it, the completeness or bias of that data, or the contextual implications of how the system was applied. All of this feeds AI application hysteria. Yes, I do believe a C-level exec should understand which kinds of AI are being used, but there is much more to it than that. Classical statistical forecasting can be as easily misapplied as many forms of AI.
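To make that last point concrete, here is a minimal hypothetical sketch (the data and scenario are invented, not from the article): an ordinary least-squares trend line fit to a series that is clearly saturating, then naively extrapolated. Nothing about the math is wrong; the misapplication is in ignoring the shape of the data.

```python
# Hypothetical example: monthly signups that plateau near 100.
# A linear trend fit is mathematically valid but the wrong model here.
data = [20, 45, 65, 78, 87, 92, 95, 97, 98, 99]

n = len(data)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(data) / n

# Ordinary least-squares slope and intercept, computed by hand.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, data))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Extrapolating 12 periods past the data predicts continued linear
# growth, even though the series has visibly flattened near 100.
forecast = intercept + slope * (n - 1 + 12)
print(round(forecast, 1))  # roughly 207 -- more than double the plateau
```

The forecast overshoots the apparent ceiling by a factor of two, which is exactly the kind of error no amount of "explainability" labeling on a chart would surface without asking how the model was applied.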
How Does Your AI Work? Nearly Two-Thirds Can’t Say, Survey Finds
By Alex Woodie
Nearly two-thirds of C-level AI leaders can’t explain how specific AI decisions or predictions are made, according to a new survey on AI ethics by FICO, which says there is room for improvement.
FICO hired Corinium to query 100 AI leaders for its new study, called “The State of Responsible AI: 2021,” which the credit-scoring company released today. While there are some bright spots in terms of how companies are approaching ethics in AI, the potential for abuse remains high.
For example, only 22% of respondents have an AI ethics board, according to the survey, suggesting the bulk of companies are ill-prepared to deal with questions about bias and fairness. Similarly, 78% of survey-takers say it’s hard to secure support from executives to prioritize ethical and responsible use of AI.
More than two-thirds of survey-takers say the processes they have in place to ensure AI models comply with regulations are ineffective, while nine out of 10 leaders who took the survey say inefficient monitoring of models presents a barrier to AI adoption.
There is a general lack of urgency to address the problem, according to FICO’s survey: staff working in risk and compliance, IT, and data analytics show a high rate of awareness of ethics concerns, but executives generally do not.
Government regulations of AI have generally trailed adoption, especially in the United States, where a hands-off approach has largely been the rule (apart from existing regulations in financial services, healthcare, and other fields).
Source: FICO’s “The State of Responsible AI: 2021”
Seeing as the regulatory environment is still developing, it’s concerning that 43% of respondents in FICO’s study said “they have no responsibilities beyond meeting regulatory compliance to ethically manage AI systems whose decisions may indirectly affect people’s livelihoods,” such as audience segmentation models, facial recognition models, and recommendation systems, the company said. ...