
Tuesday, June 27, 2023

One Key Challenge for Diplomacy on AI: China’s Military Does Not Want to Talk

Read the Andreessen piece as well; it relates to this.

One Key Challenge for Diplomacy on AI: China’s Military Does Not Want to Talk

Commentary by Gregory C. Allen

Published May 20, 2022

Over the past 10 years, artificial intelligence (AI) technology has become increasingly critical to scientific breakthroughs and technology innovation across an ever-widening set of fields, and warfare is no exception. In pursuit of new sources of competitive advantage, militaries around the world are working to accelerate the integration of AI technology into their capabilities and operations. However, the rise of military AI has brought with it fears of a new AI arms race and a potential new source of unintended conflict escalation. In the May/June 2022 issue of Foreign Affairs, Michael C. Horowitz, Lauren Kahn, and Laura Resnick Samotin write:

The United States, then, faces dueling risks from AI. If it moves too slowly, Washington could be overtaken by its competitors, jeopardizing national security. But if it moves too fast, it may compromise on safety and build AI systems that breed deadly accidents. Although the former is a larger risk than the latter, it is critical that the United States take safety concerns seriously.

Such fears are not entirely unfounded. Machine learning, the technology paradigm at the heart of the modern AI revolution, brings with it not only opportunities for radically improved performance, but also new failure modes. When it comes to traditional software, the U.S. military has decades of institutional muscle memory related to preventing technical accidents, but building machine learning systems that are reliable enough to be trusted in safety-critical or use-of-force applications is a newer challenge. To its credit, the Department of Defense (DOD) has devoted significant resources and attention to the problem: partnering with industry to make commercial AI test and evaluation capabilities more widely available, announcing AI ethics principles and releasing new guidelines and governance processes to ensure their robust implementation, updating longstanding DOD system safety standards to pay extra attention to machine learning failure modes, and funding a host of AI reliability and trustworthiness research efforts through organizations like the Defense Advanced Research Projects Agency (DARPA).

However, even if the United States were somehow to successfully eliminate the risk of AI accidents in its own military systems—a bold and incredibly challenging goal, to be sure—it still would not have solved risks to the United States from technical failures in Russian and Chinese military AI systems. What if a Chinese AI-enabled early warning system erroneously announces that U.S. forces are launching a surprise attack? The resulting Chinese strike—wrongly believed to be a counterattack—could be the opening salvo of a new war.

In recognition of this risk, the National Security Commission on Artificial Intelligence recommended in its March 2021 final report that the DOD engage in diplomacy with the Chinese military to “discuss AI’s impact on crisis stability.” More recently, Ryan Fedasiuk wrote in last month’s Foreign Policy that “it is more important than ever that the United States and China take steps to mitigate existential threats posed by AI accidents.”

It is not only Americans who have written about the need for a diplomatic dialogue on this subject. In 2020, Zhou Bo, a senior colonel in the People’s Liberation Army (PLA), wrote an op-ed in the New York Times in which he argued,

As China’s military strength continues to grow, and it closes the gap with the United States, both sides will almost certainly need to put more rules in place, not only in areas like antipiracy or disaster relief—where the two countries already have been cooperating—but also regarding space exploration, cyberspace and artificial intelligence. ...
