Beyond just recognition, on to recognizing and reasoning. I'd like to see this in a business context that includes complex quantitative optimization. In some of our own work we typically built a representative quant model, then had humans do the associated reasoning and the 'fuzzy' decisions, effectively crowd-sourcing the logic among experts. Does the external memory represent a context? Will this take deep learning beyond recognition?
In Nature: Google's AI Reasons Its Way Around the London Underground
Artificial-intelligence systems known as neural networks can recognize images, translate languages and even master the ancient game of Go. But their limited ability to represent complex relationships between data or variables has prevented them from conquering tasks that require logic and reasoning. ...
In a paper published in Nature on 12 October, the Google-owned company DeepMind in London reveals that it has taken a step towards overcoming this hurdle by creating a neural network with an external memory. The combination allows the neural network not only to learn, but to use memory to store and recall facts to make inferences like a conventional algorithm. This in turn enables it to tackle problems such as navigating the London Underground without any prior knowledge and solving logic puzzles. Though solving these problems would not be impressive for an algorithm programmed to do so, the hybrid system manages to accomplish this without any predefined rules.
Although the approach is not entirely new (DeepMind itself reported attempting a similar feat in a preprint in 2014), "the progress made in this paper is remarkable", says Yoshua Bengio, a computer scientist at the University of Montreal in Canada. ...
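As a rough, self-contained sketch of the idea the excerpt describes, a network reading from and writing to an external memory through differentiable, content-based addressing, here is a toy NumPy illustration. This is not DeepMind's code; the function names and the simple erase/add write rule are illustrative assumptions only.

```python
# Minimal sketch (assumed, not DeepMind's implementation) of an external memory
# that a neural network could read with differentiable, content-based addressing.
# Memory is an N x W matrix; a read is a soft weighting over its rows.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def cosine_similarity(memory, key):
    # Similarity of the query key against every memory row.
    num = memory @ key
    denom = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return num / denom

def content_read(memory, key, beta):
    # beta (key strength) sharpens or flattens the attention over rows.
    weights = softmax(beta * cosine_similarity(memory, key))
    return weights @ memory, weights   # read vector is a weighted blend of rows

def write(memory, weights, erase, add):
    # Differentiable write: each row is partially erased, then added to,
    # in proportion to its write weight.
    memory = memory * (1 - np.outer(weights, erase))
    return memory + np.outer(weights, add)

# Toy usage: store two "facts" as vectors, then recall the one closest to a query.
M = np.zeros((4, 3))                       # 4 slots, width-3 vectors
M = write(M, np.array([1., 0, 0, 0]), np.ones(3), np.array([1., 0, 0]))
M = write(M, np.array([0., 1, 0, 0]), np.ones(3), np.array([0., 1, 0]))
read_vec, w = content_read(M, key=np.array([0.9, 0.1, 0.0]), beta=10.0)
print(np.round(read_vec, 2))               # ≈ [1. 0. 0.]: the stored fact nearest the key
```

Because every operation here is smooth, the read and write weightings could in principle be produced by a trained controller network, which is what lets such a hybrid system learn when to store and recall a fact rather than follow predefined rules.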