New to me, worth understanding in useful contexts ...
Neurosymbolic AI, by Don Monroe
Communications of the ACM, October 2022, Vol. 65 No. 10, Pages 11-13, DOI: 10.1145/3554918
The ongoing revolution in artificial intelligence (AI)—in image recognition, natural language processing and translation, and much more—has been driven by neural networks, specifically many-layer versions known as deep learning. These systems have well-known weaknesses, but their capability continues to grow, even as they demand ever more data and energy. At the same time, other critical applications need much more than just powerful pattern recognition, and deep learning does not provide the sorts of performance guarantees that are customary in computer science.
To address these issues, some researchers favor combining neural networks with older tools for artificial intelligence. In particular, neurosymbolic AI incorporates the long-studied symbolic representation of objects and their relationships. A combination could be assembled in many different ways, but so far, no single vision is dominant.
The complementary capabilities of such systems are frequently likened to psychologist Daniel Kahneman's human "System 1," which, like neural networks, makes rapid, heuristic decisions, and the more rigorous and methodical "System 2." "The field is growing really quickly, and there's a lot of excitement," said Swarat Chaudhuri of the University of Texas at Austin. Even though "neural networks are going to become ubiquitous, even more than they are today," he said, "not all of computer science is going to be replaced by deep learning."
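The article does not spell out any particular architecture, but a minimal, hypothetical sketch may make the combination concrete: a neural model plays the fast, System 1-like role of turning raw input into soft labels, while explicit if-then rules play the System 2-like role of reasoning over those labels. All names here (neural_perception, RULES, symbolic_reasoning) are illustrative assumptions, and the neural component is stubbed with fixed scores so the example runs as written.

# Illustrative neurosymbolic sketch (not from the article): a neural model
# supplies soft perceptual labels; a symbolic layer applies explicit if-then
# rules on top of them.

def neural_perception(image):
    """Stand-in for a trained network: returns label -> confidence scores."""
    # In a real system this would be a forward pass through a deep model.
    return {"cat": 0.92, "dog": 0.05, "on_sofa": 0.88}

# Symbolic layer: explicit, human-readable if-then rules over detected facts.
RULES = [
    # (condition over current facts, fact to derive)
    (lambda facts: "cat" in facts and "on_sofa" in facts, "indoor_scene"),
    (lambda facts: "indoor_scene" in facts, "no_leash_required"),
]

def symbolic_reasoning(scores, threshold=0.5):
    """Turn confident detections into facts, then forward-chain the rules."""
    facts = {label for label, p in scores.items() if p >= threshold}
    changed = True
    while changed:  # repeat until no rule adds anything new
        changed = False
        for condition, conclusion in RULES:
            if conclusion not in facts and condition(facts):
                facts.add(conclusion)
                changed = True
    return facts

if __name__ == "__main__":
    scores = neural_perception(image=None)   # perception (System 1-like)
    print(symbolic_reasoning(scores))         # reasoning (System 2-like)
    # -> {'cat', 'on_sofa', 'indoor_scene', 'no_leash_required'}

Real systems assemble the two parts in many other ways (for example, using symbolic constraints during training rather than at inference time); this sketch only illustrates the general division of labor.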
A Long History
In the early years of artificial intelligence, researchers had high hopes for symbolic rules, such as simple if-then rules and higher-order logical statements. Although some experts, such as Doug Lenat at Cycorp, still hold hopes for this strategy to impart common sense to AI, the collection of rules needed is widely regarded as impractically large. "If you try to encode all human knowledge manually, we know that's not possible. That has been tried and failed," said Asim Munawar, a program director of neurosymbolic AI at IBM.
Neural networks also fell short of their aspirations in the 1980s and '90s, and artificial intelligence entered a long "winter" of reduced interest and funding. This situation changed a decade ago, however, largely due to the availability of enormous training datasets and massive computing power. Recent architectural innovations, notably attention and transformers, have driven further advances, such as the uncannily plausible text generation by OpenAI's large language model, GPT-3.
Deep learning does surprisingly well at generalizing, for reasons that are only partly understood. Despite impressive successes on average, these systems still make odd errors when presented with novel examples that do not fit the patterns they infer from the training data. Errors can also be induced by maliciously altered data, sometimes in ways essentially imperceptible to people.
In addition, neural networks can unintentionally enshrine racial, gender, and other biases present in their training data. Thus, for ethical and safety reasons, users in medical, financial, legal, and military applications often expect an explanation of how a network reached its conclusion.
In spite of widespread concerns, these problems are not actually "limitations of deep learning systems," said Yann LeCun, chief AI scientist and a vice president at Meta, speaking of the widely used supervised learning paradigm. LeCun, who shared the 2018 ACM A.M. Turing Award with fellow deep learning pioneers Geoffrey Hinton and Yoshua Bengio, believes that if users adopt "self-supervised learning, things that are not trained for a given task but are trained generically, a lot of those problems will essentially disappear." (LeCun regards explainability as a "non-issue.") ...