Well, we all need explainable methods, and that always means explainable in a context. It reminds me of the After Action Review (AAR). We used it to support our results and their outcomes, and to learn about the next outcome. It's about context, and context is about the metadata used in the specific decision process involved.
What Is Explainable AI and Why Does the Military Need It?
By Benjamin Powers in Medium
Last summer, the Defense Science Board’s report on autonomy found that investing in artificial intelligence (AI) warfare is a crucial part of maintaining the United States’ national security and military capability. As the report reads, “It should not be a surprise when adversaries employ autonomy against U.S. forces.” In other words, AI warfare is likely on the horizon; it’s just a matter of who gets there first.
This immediately sparks dystopian and apocalyptic reactions from most people, who may envision a Terminator-esque system that will at some point choose to overthrow its human masters. But don’t worry. We aren’t there just yet. The report concludes that “autonomy will deliver substantial operational value across an increasingly diverse array of DoD missions, but the DoD must move more rapidly to realize this value.” Meaning that while the value of autonomy is clear from a military perspective, the Department of Defense has to devote more money and time to realize its full potential — and do so quickly.
Those robots would be a result of artificial general intelligence (AGI), which is only a small area of research within AI that works on neural evolution and, perhaps in time, the creation of sentient machines. Much more prevalent, however, are machine learning (a computer’s ability to learn without being explicitly programmed) and neural nets (computer systems modeled on the human brain and nervous system), which are drawn upon to augment human decision-making capabilities. Indeed, the Department of Defense is charging ahead with Project Maven, which established an Algorithmic Warfare Cross-Functional Team to have computers and neural nets lead the hunt for Islamic State militants in Iraq and Syria. The project synthesizes hundreds of hours of aerial surveillance video into actionable intelligence, which is then reviewed by analysts.
The thing is that we often don’t really know why AI makes the decisions or recommendations it does. While the computing capacity of AI expands on an almost daily basis, the study of how to make machine learning explain its decision-making process to a human has languished. So, while AI might recommend a target or offer up what it deems important intelligence footage, it can’t tell the military why. The extent of an explanation currently may be, “There is a 95 percent chance this is what you should do,” but that’s it.
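To make that concrete, here is a minimal sketch (not from the article) of the opacity problem it describes: a black-box classifier reports a bare confidence score, and a post-hoc technique such as permutation importance is one generic way to ask "why." The dataset, feature names, and model choice are all illustrative assumptions, using scikit-learn.

```python
# A minimal sketch of the opacity problem: a black-box classifier reports a
# confidence score but gives no reasons. Synthetic data; hypothetical feature names.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)

# The model's "explanation" today: a bare probability, nothing more.
sample = X[:1]
print("P(target) = %.2f" % model.predict_proba(sample)[0, 1])

# One post-hoc way to ask "why": permutation importance over the input features.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feat_a", "feat_b", "feat_c", "feat_d"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```

Even this only says which inputs mattered on average, not the reasoning behind any single recommendation, which is the gap the article points to.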
This is why the Defense Advanced Research Projects Agency (DARPA) launched a call last year for proposals as part of its newly created Explainable Artificial Intelligence (XAI) program. The project’s goal is to develop a variety of explainable machine learning models while maintaining their prediction accuracy, and to enable human users to understand, trust, and manage the artificially intelligent partners being developed. After fielding hundreds of proposals, the XAI program settled on 12 that would make up the various areas of focus under the XAI umbrella. (DARPA puts out a call for nascent programs and then funds them for an amount of time under the program umbrella.)
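One common technique in this vein, offered here only as a hedged illustration and not as a description of any DARPA-funded approach, is to fit an interpretable surrogate (such as a shallow decision tree) to mimic a black-box model and then measure how much accuracy the transparent stand-in gives up. All names and numbers below are assumptions for the sketch.

```python
# Generic surrogate-model sketch: train a shallow decision tree to imitate a
# black-box model, then compare accuracy and print the tree's readable rules.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

black_box = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

# Train the surrogate on the black box's own predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X_train, black_box.predict(X_train))

print("black-box accuracy:", black_box.score(X_test, y_test))
print("surrogate accuracy:", surrogate.score(X_test, y_test))
print(export_text(surrogate))  # human-readable rules standing in for an "explanation"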