Intriguing idea. Includes video.
A New Approach to Understanding How Machines Think
From Quanta Magazine. Link to full article.
Been Kim and colleagues at Google Brain have developed a system she calls a "translator for humans" that lets people ask questions of an artificial intelligence.
Google Brain research scientist Been Kim is developing a way to ask a machine learning system how much a specific, high-level concept went into its decision-making process.
"If a doctor told you that you needed surgery, you would want to know why — and you'd expect the explanation to make sense to you, even if you'd never gone to medical school. Been Kim, a research scientist at Google Brain, believes that we should expect nothing less from artificial intelligence. As a specialist in "interpretable" machine learning, she wants to build AI software that can explain itself to anyone.
Since its ascendance roughly a decade ago, the neural-network technology behind artificial intelligence has transformed everything from email to drug discovery with its increasingly powerful ability to learn from and identify patterns in data. But that power has come with an uncanny caveat: The very complexity that lets modern deep-learning networks successfully teach themselves how to drive cars and spot insurance fraud also makes their inner workings nearly impossible to make sense of, even for AI experts. If a neural network is trained to identify patients at risk for conditions like liver cancer and schizophrenia — as a system called "Deep Patient" was in 2015, at Mount Sinai Hospital in New York — there's no way to discern exactly which features in the data the network is paying attention to. That "knowledge" is smeared across many layers of artificial neurons, each with hundreds or thousands of connections. ..."
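For readers curious how such a "translator" might work in practice, here is a minimal sketch of the concept-scoring idea the article alludes to (Kim's published technique is called TCAV, Testing with Concept Activation Vectors). This is an illustrative assumption, not Google Brain's actual code: the inputs `concept_acts`, `random_acts`, and `grad_logit_acts` are hypothetical arrays holding a layer's activations for concept images, activations for random images, and gradients of a class logit with respect to those activations.

```python
# Sketch of a TCAV-style concept score (illustrative only; hypothetical inputs).
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Fit a linear separator in activation space; its normal is the concept vector."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_[0]
    return v / np.linalg.norm(v)

def tcav_score(grad_logit_acts, cav):
    """Fraction of inputs whose class logit increases along the concept direction."""
    directional_derivs = grad_logit_acts @ cav
    return float(np.mean(directional_derivs > 0))
```

Roughly, a score near 1 would suggest the chosen concept (say, "stripes") consistently pushes the network toward the chosen class (say, "zebra"), while a score near 0.5 would suggest the concept plays little role in that decision.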
Sunday, January 13, 2019