
Tuesday, April 21, 2020

Adversarial Attack Risks

Interesting case study; stability is not a common measure for this kind of system.

How Adversarial Attacks Could Destabilize Military AI Systems

Adversarial attacks threaten the safety of AI and robotic technologies. Can we stop them?
By David Danks

This piece was written as part of the Artificial Intelligence and International Stability Project at the Center for a New American Security, an independent, nonprofit organization based in Washington, D.C. Funded by Carnegie Corporation of New York, the project promotes thinking and analysis on AI and international stability. Given the likely importance that advances in artificial intelligence could play in shaping our future, it is critical to begin a discussion about ways to take advantage of the benefits of AI and autonomous systems, while mitigating the risks. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Artificial intelligence and robotic technologies with semi-autonomous learning, reasoning, and decision-making capabilities are increasingly being incorporated into defense, military, and security systems. Unsurprisingly, there is increasing concern about the stability and safety of these systems. In a different sector, runaway interactions between autonomous trading systems in financial markets have produced a series of stock market “flash crashes,” and as a result, those markets now have rules to prevent such interactions from having a significant impact.

Could the same kinds of unexpected interactions and feedback loops lead to similar instability with defense or security AIs? ...
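The flash-crash mechanism the article points to is easy to see in a toy model. The sketch below is mine, not from the article: two momentum-following bots each sell in proportion to the price drop they just observed, so their combined orders amplify a tiny initial shock into a runaway decline until a circuit breaker halts trading. The 10% halt threshold, the bots' sensitivities, and the initial shock are all illustrative numbers, not real market rules.

def simulate(steps=50, breaker_pct=0.10, shock=0.001):
    """Toy flash crash: interacting momentum traders plus a circuit breaker."""
    start = price = 100.0
    prev = price
    for t in range(steps):
        # Each bot reacts to the price move it just observed.
        drop = max(0.0, (prev - price) / prev)
        # Two momentum bots sell in proportion to the observed drop;
        # their combined orders deepen the drop the other reacts to next step.
        sell_a = 1.5 * drop
        sell_b = 1.0 * drop
        prev = price
        price *= 1 - (sell_a + sell_b + (shock if t == 0 else 0.0))
        if (start - price) / start >= breaker_pct:
            print(f"t={t}: circuit breaker trips at {price:.2f} "
                  f"({100 * (start - price) / start:.1f}% below open)")
            return
    print(f"no halt; final price {price:.2f}")

simulate()

The point is the structure, not the numbers: each agent's locally reasonable response to the other's behavior is what destabilizes the system as a whole, which is exactly the worry when the interacting agents are defense or security AIs rather than trading bots.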
