The piece below really struck me. It is very old-school AI, reaching back to methods used in Watson's Jeopardy! run. Conclusions, and even sub-conclusions, are rarely precise answers; they need to carry a certainty factor (CF). All of the old AI systems embedded certainties. The trouble is, will these methods still converge the way the newer methods do?
Our own research on real-world problems showed that convergence was not always certain; the need came up in recent modeling work, which is what led me to the Google research. I am glad to see this being brought out, though it may slow progress in AI. This is a big development, but uncertainty must be considered.
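For readers who have not met certainty factors: the classic expert systems of the MYCIN era attached a CF in [-1, 1] to every conclusion and combined the factors from independent rules with a simple algebra. Below is a minimal Python sketch of that combination rule; it is my own illustration, not code from any system mentioned in the article.

    def combine_cf(cf1, cf2):
        # MYCIN-style combination of two certainty factors, each in [-1, 1].
        # Two positive pieces of evidence reinforce each other...
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        # ...two negative pieces do the same in the other direction...
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        # ...and mixed evidence partially cancels.
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # Two rules each support a diagnosis with moderate confidence:
    print(combine_cf(0.6, 0.4))   # 0.76 -- stronger than either alone
    print(combine_cf(0.6, -0.4))  # 0.33 -- contradicting evidence weakens it

Note that the conclusion never reaches certainty; evidence only pushes the CF toward 1 or -1, which is exactly the kind of self-doubt the article describes.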
Google and Others Are Building AI Systems That Doubt Themselves
AI will make better decisions by embracing uncertainty. By Will Knight, in Technology Review:
The most powerful approach in AI, deep learning, is gaining a new capability: a sense of uncertainty.
Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.
Deep learning, which involves feeding example data to a large and powerful neural network, has been an enormous success over the past few years, enabling machines to recognize objects in images or transcribe speech almost perfectly. But it requires lots of training data and computing power, and it can be surprisingly brittle.
Somewhat counterintuitively, this self-doubt offers one fix. The new approach could be useful in critical scenarios involving self-driving cars and other autonomous machines. ...
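The excerpt does not spell out the mechanics, and the Uber and Google framework work goes well beyond this, but one widely used way to give an ordinary deep network a confidence estimate is Monte Carlo dropout: leave dropout switched on at prediction time, run many stochastic forward passes, and read the spread of the outputs as uncertainty. Here is a hedged PyTorch sketch of that general idea; the model, data, and sample count are made up for illustration.

    import torch
    import torch.nn as nn

    # A small classifier with a dropout layer (toy architecture).
    model = nn.Sequential(
        nn.Linear(4, 32), nn.ReLU(), nn.Dropout(p=0.5),
        nn.Linear(32, 3),
    )

    def mc_dropout_predict(model, x, n_samples=100):
        # Run several stochastic forward passes with dropout left active,
        # then report the mean prediction and its spread.
        model.train()  # keeps dropout on; safe here since there is no BatchNorm
        with torch.no_grad():
            probs = torch.stack([
                torch.softmax(model(x), dim=-1) for _ in range(n_samples)
            ])
        return probs.mean(dim=0), probs.std(dim=0)

    x = torch.randn(1, 4)
    mean, spread = mc_dropout_predict(model, x)
    print("prediction:", mean)
    print("uncertainty:", spread)

A large spread across the passes is the network's way of saying it should doubt itself, which is the behavior a self-driving system could use to fall back to a safe action.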