We worked with Stanford in the early days of AI. Can't we all get along? I like the idea of using foundational definitions to decide what to emphasize next.
A Stanford Proposal Over AI's 'Foundations' Ignites Debate
Wired, September 15, 2021
Last month, Stanford researchers declared that a new era of artificial intelligence had arrived, one built atop colossal neural networks and oceans of data. They said a new research center at Stanford would build and study these "foundation models" of AI.
Critics of the idea surfaced quickly—including at the workshop organized to mark the launch of the new center. Some object to the limited capabilities and sometimes freakish behavior of these models; others warn of focusing too heavily on one way of making machines smarter.
"I think the term 'foundation' is horribly wrong," Jitendra Malik, a professor at UC Berkeley who studies AI, told workshop attendees in a video discussion.
Malik acknowledged that one type of model identified by the Stanford researchers—large language models that can answer questions or generate text from a prompt—has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence like interaction with the physical world.