Thoughtful Piece
What AI Still Doesn't Know How to Do, The Wall Street Journal, July 18, 2022
A few weeks ago a Google engineer got a lot of attention for a dramatic claim: He said that the company's LaMDA system, an example of what's known in artificial intelligence as a large language model, had become a sentient, intelligent being.
Large language models like LaMDA or San Francisco-based OpenAI's rival GPT-3 are remarkably good at generating coherent, convincing writing and conversations—convincing enough to fool the engineer. But they use a relatively simple technique to do it: The models see the first part of a text that someone has written and then try to predict which words are likely to come next. If a powerful computer does this billions of times with billions of texts generated by millions of people, the system can eventually produce a grammatical and plausible continuation to a new prompt or a question.
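The idea of continuing a text by predicting likely next words can be shown with a toy sketch. The snippet below counts which word follows which in a tiny made-up corpus and extends a prompt by sampling likely successors; the corpus and the `continue_prompt` helper are illustrative assumptions, and real systems like LaMDA and GPT-3 learn these statistics with large neural networks over billions of texts rather than a simple count table.

```python
# Toy next-word prediction: build a bigram count table from a tiny corpus,
# then continue a prompt by repeatedly sampling a likely successor word.
# This only illustrates the idea; it is not how LaMDA or GPT-3 are built.
import random
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each preceding word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def continue_prompt(prompt, length=8, seed=0):
    """Extend a prompt by sampling next words in proportion to their counts."""
    random.seed(seed)
    words = prompt.split()
    for _ in range(length):
        counts = successors.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_prompt("the cat"))
# prints something like: "the cat sat on the mat . the dog sat"
```

Scaled up from a handful of sentences to billions of documents, and from word counts to learned neural-network weights, this same predict-the-next-word loop is what produces the fluent continuations described above.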
It's natural to ask whether large language models like LaMDA (short for Language Model for Dialogue Applications) or GPT-3 are really smart—or just double-talk artists in the tradition of the great old comedian Prof. Irwin Corey, "The World's Foremost Authority." (Look up Corey's routines of mock erudition to get the idea.) But I think that's the wrong question. These models are neither truly intelligent agents nor deceptively dumb. Intelligence and agency are the wrong categories for understanding them.
Instead, these AI systems are what we might call cultural technologies, like writing, print, libraries, internet search engines or even language itself. They are new techniques for passing on information from one group of people to another. Asking whether GPT-3 or LaMDA is intelligent or knows about the world is like asking whether the University of California's library is intelligent or whether a Google search "knows" the answer to your questions. But cultural technologies can be extremely powerful—for good or ill.
From The Wall Street Journal