In support of reading Rebooting AI, I have been exploring where implementations of Cyc have taken place. We saw it demonstrated in the 80s and have followed it here since. There are some mentions of applications like Glaxo and the Cleveland Clinic in the Wikipedia article https://en.wikipedia.org/wiki/Cyc, but not much mention of its use by Lucid.ai. Is it possible to deliver a complete and usable common-sense knowledge base, even for domain areas? Where now is deep understanding? The quotes below are interesting, but dated. Our own use of machine learning before it was sexy keeps me a student.
An AI with 30 Years’ Worth of Knowledge Finally Goes to Work
An effort to encode the world’s knowledge in a huge database has sometimes seemed impractical, but those behind the technology say it is finally ready.
by Will Knight, Technology Review, Mar 14, 2016
Having spent the past 31 years memorizing an astonishing collection of general knowledge, the artificial-intelligence engine created by Doug Lenat is finally ready to go to work.
Lenat’s creation is Cyc, a knowledge base of semantic information designed to give computers some understanding of how things work in the real world.
Cyc has been given many thousands of facts, including lots of information that you wouldn’t find in an encyclopedia because it seems self-evident. It knows, for example, that Sir Isaac Newton is a famous historical figure who is no longer alive. But more important, Cyc also understands that if you let go of an apple it will fall to the ground; that an apple is not bigger than a person; and that a person cannot throw an apple into space.
And now, after years of work, Lenat’s system is being commercialized by a company called Lucid.
“Part of the reason is the doneness of Cyc,” explains Lenat, who left his post as a professor at Stanford to start the project in late 1984. “Not that there’s nothing else to do,” he says. But he notes that most of what is left to be added is relevant to a specific area of expertise, such as finance or oncology. ... "
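The sort of self-evident assertions the excerpt describes can be sketched as a toy knowledge base. To be clear, this is a hypothetical Python illustration, not Cyc's actual CycL representation; every name, fact, and rule here is made up:

```python
# Toy common-sense knowledge base: facts as triples, with 'isa' inheritance
# and a hand-written rule. This is NOT CycL; everything here is invented.

FACTS = {
    ("Isaac Newton", "isa", "historical-figure"),
    ("historical-figure", "isa", "person"),
    ("apple", "isa", "physical-object"),
}

def holds(kb, subj, pred, obj):
    """Check a triple, following 'isa' links transitively."""
    if (subj, pred, obj) in kb:
        return True
    # Inherit via isa: if subj isa o and (o, pred, obj) holds, it holds for subj.
    for (s, p, o) in kb:
        if s == subj and p == "isa":
            if holds(kb, o, pred, obj):
                return True
    return False

# Common-sense rule: historical figures are, by definition, no longer alive.
def alive(kb, x):
    return not holds(kb, x, "isa", "historical-figure")

print(holds(FACTS, "Isaac Newton", "isa", "person"))  # True, via inheritance
print(alive(FACTS, "Isaac Newton"))                   # False
```

The point of the sketch is only that common sense looks like a mass of small facts plus inheritance and rules, which is exactly what makes "doneness" of such a base so hard to reach.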
Monday, December 16, 2019
Monday, October 21, 2019
We Can't Trust Deep Learning Alone
It's roughly the 65th anniversary of the proposal of AI. Time to rethink the broad idea. More comments on a book I have been reading: Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus. I am a practitioner in the space who has built many systems of this type, but I remain a proponent of the view that we must combine deep learning with logic-processing (or classical) AI.
We used learning in such systems; it was not deep, but it did contain and update the knowledge needed to make decisions. How can we make AI both broad and robust? Today we have other ideas that can help us build logical models of things, like Business Process Models and RPA. Minsky's Society of Mind is mentioned as a broad template.
Here is an interview in Technology Review on the idea:
Gary Marcus, a leader in the field, discusses how we could achieve general intelligence—and why that might make machines safer. by Karen Hao in MIT Technology Review
Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise. ..."
Finished; I like the thoughts provided. The book sets the stage. Read it. My only disappointment is that, though the book provides an excellent argument for why, it does not provide a good recommendation for how we should proceed. I have always thought there were hints in the context of 'causality' that might help. Now reading Judea Pearl's "The Book of Why: The New Science of Cause and Effect" on that topic.
Friday, October 11, 2019
Review of Gary Marcus' Rebooting AI: Building Artificial Intelligence We Can Trust
Just reading this book. I found the non-technical overview/review below quite useful. I mostly agree with it, and the outline of chapters is useful. Saving it here for my own reference. Buy a copy.
Prof Kenneth Forbus review of Gary Marcus' book: Rebooting AI: Building Artificial Intelligence We Can Trust
Franz
Wednesday, October 09, 2019
Causation to Provide the Why
Basic causation is a great start. Why did this happen? We are asking it all the time; it's one of the basic knowledge-processing capabilities that lead to learning. We can figure out the answer by observation, combining observations to build rules of operation. Or we can be taught specific rules, or even imprecise rules of thumb, to help us process knowledge. These must include things like causation and space and time relationships. It is this kind of knowledge we need for general AI: not just more data, and more than just unstructured data. It's about combining all the learning experiences we have into an interacting, data-rich architecture we can use. Like the direction of the below:
An AI Pioneer Wants His Algorithms to Understand the 'Why'
Will Knight, in Wired
Yoshua Bengio, a researcher at the University of Montreal in Canada who is co-recipient of the 2018 ACM A.M. Turing Award for contributions to the development of deep learning, thinks artificial intelligence will not realize its full potential until it can move beyond pattern recognition and learn more about cause and effect, which would make existing AI systems smarter and more efficient. A robot that understands dropping things causes them to break, for example, would not need to toss dozens of vases onto the floor to see what happens to them. Bengio is developing a version of deep learning that can recognize simple cause-and-effect relationships. His team used a dataset that maps causal relationships between real-world phenomena in terms of probabilities. The resulting algorithm essentially forms a hypothesis about which variables are causally related, and then tests how changes to different variables fit the theory. ... "
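The hypothesize-and-test idea in the excerpt can be illustrated with a minimal simulation. This is a hypothetical sketch, not Bengio's algorithm or dataset: we generate data where A causes B, then check which causal direction survives a simulated intervention:

```python
# Sketch of intervention-based causal direction testing. Ground truth
# (hidden from the "learner"): A causes B. All numbers are illustrative.
import random

random.seed(0)

def observational(n):
    data = []
    for _ in range(n):
        a = random.gauss(0, 1)
        b = 2 * a + random.gauss(0, 0.1)  # B is generated from A
        data.append((a, b))
    return data

def intervene_on_b(n):
    # do(B): set B by fiat; A keeps its natural distribution, unlinked from B.
    return [(random.gauss(0, 1), random.uniform(-3, 3)) for _ in range(n)]

def corr(pairs):
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / n
    vb = sum((b - mb) ** 2 for _, b in pairs) / n
    return cov / (va * vb) ** 0.5

obs = observational(2000)
intv = intervene_on_b(2000)

# Hypothesis "B causes A" predicts the A-B dependence survives do(B); it doesn't.
print(f"observational corr: {corr(obs):.2f}")   # strong
print(f"under do(B) corr:   {corr(intv):.2f}")  # near zero
```

Under do(B) the A–B correlation collapses, which is evidence against "B causes A"; a real system would score many candidate variable relationships this way rather than tossing dozens of vases on the floor.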
Saturday, October 05, 2019
Rebooting AI: The Future of General, Trustable AI
In the process of reading Gary Marcus and Ernest Davis' book: Rebooting AI: Building Artificial Intelligence We Can Trust. Nicely done, starting with a history of AI and its challenges, and then a real pushback on what AI needs to do to be really useful. Skeptics will like it, but it's also for those interested in where AI is headed. I have always been a proponent of mixing classic AI methods with 'deep learning', and have lived through its evolution to the current state. While it's true that deep learning can solve some narrow and complex problems, it is not well suited to the complexity of business, or even of very real-life problems. AI needs better understanding, transparency, and trustability. A good book so far that's worth a read, addressing a fundamental problem. I will follow with a more complete impression when done.
Monday, September 30, 2019
Building More General, Trustable AI: Deeper Understanding?
I have just been thinking about the idea of what is called 'deep understanding' here, that is, more generally applicable AI. I agree that deep learning is impressive, but still very narrow. I don't agree that deep understanding, a more general AI, would necessarily make AI safer; it could make it less transparent, prone to tricks and misuse, and dangerous.
Book: Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus and Ernest Davis. Reading ...
We can’t trust AI systems built on deep learning alone
Gary Marcus, a leader in the field, discusses how we could achieve general intelligence—and why that might make machines safer. by Karen Hao in Technology Review
Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise.
Marcus, a neuroscientist by training who has spent his career at the forefront of AI research, cites both technical and ethical concerns. From a technical perspective, deep learning may be good at mimicking the perceptual tasks of the human brain, like image or speech recognition. But it falls short on other tasks, like understanding conversations or causal relationships. To create more capable and broadly intelligent machines, often referred to colloquially as artificial general intelligence, deep learning must be combined with other methods. ... "
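A minimal sketch of what "combined with other methods" can mean in practice: a learned perception module (stubbed here with fixed scores) hands soft labels to a symbolic layer that applies explicit rules on top. All names, scores, and thresholds below are invented for illustration:

```python
# Minimal neuro-symbolic sketch: perception produces confidences,
# a rule layer derives conclusions. Everything here is a stand-in.

def perceive(image_id):
    """Stand-in for a trained classifier: returns label -> confidence."""
    return {"cat": 0.80, "dog": 0.15, "car": 0.05}  # pretend network output

RULES = [
    # (conclusion, premise label, minimum confidence)
    ("animal", "cat", 0.5),
    ("animal", "dog", 0.5),
    ("vehicle", "car", 0.5),
]

def infer(scores):
    """Symbolic layer: derive conclusions the classifier never saw as labels."""
    conclusions = set()
    for conclusion, premise, threshold in RULES:
        if scores.get(premise, 0.0) >= threshold:
            conclusions.add(conclusion)
    return conclusions

print(infer(perceive("img-001")))  # {'animal'}
```

The design point is that the rules are inspectable and editable, which is where the transparency and trust the book asks for would come from; the learning handles perception, the logic handles what follows from it.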
