
Sunday, January 03, 2021

Insights for AI from the Human Mind

Good thoughts by Gary Marcus on the difficulty of creating intelligence, even though we have a very rich model, the human mind, to test against.

Insights for AI from the Human Mind   By Gary Marcus, Ernest Davis

Communications of the ACM, January 2021, Vol. 64 No. 1, Pages 38-41, DOI: 10.1145/3392663

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.

Marvin Minsky, The Society of Mind

Artificial intelligence has recently beaten world champions in Go and poker and made extraordinary progress in domains such as machine translation, object classification, and speech recognition. However, most AI systems are extremely narrowly focused. AlphaGo, the champion Go player, does not know that the game is played by putting stones onto a board; it has no idea what a "stone" or a "board" is, and would need to be retrained from scratch if you presented it with a rectangular board rather than a square grid.

To build AIs able to comprehend open text or power general-purpose domestic robots, we need to go further. A good place to start is by looking at the human mind, which still far outstrips machines in comprehension and flexible thinking.

Here, we offer 11 clues drawn from the cognitive sciences—psychology, linguistics, and philosophy.

No Silver Bullets

All too often, people have propounded simple theories that allegedly explained all of human intelligence, from behaviorism to Bayesian inference to deep learning. But, quoting Firestone and Scholl [4], "there is no one way the mind works, because the mind is not one thing. Instead, the mind has parts, and the different parts of the mind operate in different ways: Seeing a color works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion."

The human brain is enormously complex and diverse, with more than 150 distinctly identifiable brain areas; approximately 86 billion neurons of hundreds if not thousands of different types; trillions of synapses; and hundreds of distinct proteins within each individual synapse.

Truly intelligent and flexible systems are likely to be full of complexity, much like brains. Any theory that proposes to reduce intelligence down to a single principle—or a single "master algorithm"—is bound to fail.

Rich Internal Representations

Cognitive psychology often focuses on internal representations, such as beliefs, desires, and goals. Classical AI did likewise; for instance, to represent President Kennedy's famous 1963 visit to Berlin, one would add a set of facts such as part-of(Berlin, Germany) and visited(Kennedy, Berlin, June 1963). Knowledge consists in an accumulation of such representations, and inference is built on that bedrock; it is trivial on that foundation to infer that Kennedy visited Germany.
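To make that concrete, here is a minimal Python sketch of the classical approach (our illustration, not the authors' code): facts stored as explicit tuples, plus one hand-written rule that propagates visits through part-of relations. The fact store and rule names are assumptions for illustration, not a real knowledge-representation library.

    # Toy fact store in the classical-AI style: each fact is an explicit tuple.
    FACTS = {
        ("part-of", "Berlin", "Germany"),
        ("visited", "Kennedy", "Berlin", "June 1963"),
    }

    def infer_visits(facts):
        """Derive visited(person, region, date) whenever the person
        visited a place that is part-of that region."""
        derived = set()
        for fact in facts:
            if fact[0] == "visited":
                _, person, place, date = fact
                for other in facts:
                    if other[0] == "part-of" and other[1] == place:
                        derived.add(("visited", person, other[2], date))
        return derived

    print(infer_visits(FACTS))
    # {('visited', 'Kennedy', 'Germany', 'June 1963')}

Because every fact is explicit, the inference that Kennedy visited Germany falls out of a three-line rule.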

Currently, deep learning tries to fudge this, with a bunch of vectors that capture a little bit of what's going on, in a rough sort of way, but that never directly represent propositions at all. There is no specific way to represent visited(Kennedy, Berlin, 1963) or part-of(Berlin, Germany); everything is just rough approximation. Deep learning currently struggles with inference and abstract reasoning because it is not geared toward representing precise factual knowledge in the first place. Once facts are fuzzy, it is difficult to get reasoning right. The much-hyped GPT-3 system [1] is a good example of this [11]. The related system BERT [3] is unable to reliably answer questions like "if you put two trophies on a table and add another, how many do you have?" ... (much more follows)
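A toy illustration of the point (again our sketch, not from the article), using random stand-in word vectors: a simple pooled-vector sentence representation is order-insensitive, so it cannot even distinguish who visited whom.

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in word embeddings; a real system would learn these.
    emb = {w: rng.normal(size=8) for w in ("kennedy", "visited", "berlin")}

    def sentence_vector(words):
        # Order-insensitive pooling, as in simple averaged embeddings.
        return np.mean([emb[w] for w in words], axis=0)

    a = sentence_vector(["kennedy", "visited", "berlin"])
    b = sentence_vector(["berlin", "visited", "kennedy"])
    print(np.allclose(a, b))  # True: the argument structure is lost

Sequence models do better than this bag-of-vectors caricature, but the authors' point stands: nowhere in the vectors is there an explicit slot for the predicate and its arguments.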
