But what should further limit decisions made by humans alone? Should an algorithm or bot indicate the likely consequences?
Should Algorithms Control Nuclear Launch Codes? The U.S. Says No
From Wired, February 23, 2023
U.S. military leaders have often said a human will remain “in the loop” for decisions about the use of deadly force by autonomous weapon systems. However, the official policy does not require this to be the case.
Last Thursday, the U.S. State Department outlined a new vision for developing, testing, and verifying military systems—including weapons—that make use of artificial intelligence (AI).
The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents an attempt by the U.S. to guide the development of military AI at a crucial time for the technology. The document does not legally bind the U.S. military, but the hope is that allied nations will agree to its principles, creating a kind of global standard for building AI systems responsibly.
Among other things, the declaration states that military AI needs to be developed in accordance with international law, that nations should be transparent about the principles underlying their technology, and that high standards should be implemented for verifying the performance of AI systems. It also says that humans alone should make decisions about the use of nuclear weapons.