
Saturday, April 27, 2019

The Risks of Artificial Intelligence

Having now been involved in many applications of AI-oriented methods, I found it rare that there were not risks in their application. We saw them all: legal exposure, regulatory penalties, loss of public trust, and shifts in context that made the results wrong. Because cognitive AI always, to some degree, mimics human decision making or cognitive faculties, we always included risk analyses. Depending on the nature of the problem, these could be extensive in-place testing, direct comparisons to other methods, exposure to teams of users or consumers, or formal risk models.
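To make one of those checks concrete, here is a minimal Python sketch (mine, not from the post or the McKinsey article) of a direct comparison between an AI-oriented model and a simpler baseline, plus a crude simulation of the kind of context shift that can make results wrong. The model choices, the synthetic data, and the size of the drift are all illustrative assumptions.

# Minimal sketch: compare an AI-oriented model against a simpler baseline
# on held-out data, then check how the AI model behaves when the input
# context drifts. Data, models, and drift amount are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "historical" data with two informative features.
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simpler, well-understood baseline vs. a more complex "AI" model.
baseline = LogisticRegression().fit(X_train, y_train)
ai_model = GradientBoostingClassifier().fit(X_train, y_train)

print("baseline accuracy :", accuracy_score(y_test, baseline.predict(X_test)))
print("AI model accuracy :", accuracy_score(y_test, ai_model.predict(X_test)))

# Simulate a shift in context: the first feature drifts after deployment,
# so relationships learned from historical data no longer hold.
X_shifted = X_test + np.array([1.5, 0.0])
print("AI model accuracy after drift:",
      accuracy_score(y_test, ai_model.predict(X_shifted)))

The point of such a comparison is not the exact numbers, but that a routine side-by-side check against a known method, repeated after plausible context changes, surfaces degradation before it becomes a legal, regulatory, or trust problem.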

As the article suggests, the risks need to be confronted, yet this is rarely done for many kinds of analytics. Because the intent is often to use these methods predictively, the assumption is that they will enable, enhance, or even replace human decision making, so you have to understand the implications of that. If your system is working with human resources, you further need to consider the risk of how those teams will work: the human, machine, and combined elements of such an intelligence will behave in different, sometimes unexpected ways.

McKinsey provides a good article:

Confronting the risks of artificial intelligence, in McKinsey Quarterly
By Benjamin Cheatham, Kia Javanmardian, and Hamid Samandari

With great power comes great responsibility. Organizations can mitigate the risks of applying artificial intelligence and advanced analytics by embracing three principles.

Artificial intelligence (AI) is proving to be a double-edged sword. While this can be said of most new technologies, both sides of the AI blade are far sharper, and neither is well understood.

Consider first the positive. These technologies are starting to improve our lives in myriad ways, from simplifying our shopping to enhancing our healthcare experiences. Their value to businesses also has become undeniable: nearly 80 percent of executives at companies that are deploying AI recently told us that they’re already seeing moderate value from it. Although the widespread use of AI in business is still in its infancy and questions remain open about the pace of progress, as well as the possibility of achieving the holy grail of “general intelligence,” the potential is enormous. McKinsey Global Institute research suggests that by 2030, AI could deliver additional global economic output of $13 trillion per year.

Yet even as AI generates consumer benefits and business value, it is also giving rise to a host of unwanted, and sometimes serious, consequences. And while we’re focusing on AI in this article, these knock-on effects (and the ways to prevent or mitigate them) apply equally to all advanced analytics. The most visible ones, which include privacy violations, discrimination, accidents, and manipulation of political systems, are more than enough to prompt caution. More concerning still are the consequences not yet known or experienced. Disastrous repercussions—including the loss of human life, if an AI medical algorithm goes wrong, or the compromise of national security, if an adversary feeds disinformation to a military AI system—are possible, and so are significant challenges for organizations, from reputational damage and revenue losses to regulatory backlash, criminal investigation, and diminished public trust. ... "
