Back to our maintenance problem. Unless you can explain a result, it is hard to maintain it as its context changes. Trust, too, is lost when the methods are indistinguishable from magic.
Inside DARPA's Push to Make Artificial Intelligence Explain Itself
The Wall Street Journal; Sara Castellanos and Steven Norton
The U.S. Defense Advanced Research Projects Agency (DARPA) is coordinating a project in which 100 researchers at more than 30 universities and private institutions are seeking to create deep-learning artificial intelligences (AIs) that can explain their decision-making to humans. DARPA program manager David Gunning says this advance is crucial as AI becomes more deeply entrenched in everyday life and a greater level of trust between humans and machines must be nurtured. Participants have spent the project's first phase working on focus areas of their choosing, and in the second phase each institution will be assigned one of two "challenge problems" to address. The challenges will either involve using AI to classify events in multimedia, or training a simulated autonomous system to conduct a series of missions. The final result will be a set of machine-learning methods and user interfaces that public- or private-sector groups could use to construct their own explainable AI systems. ...
Friday, August 11, 2017