A good, cautious point is made here. But I would respond that in most cases the AI is assisting a human, or doing something narrower than what a human does today, only faster and without many kinds of human-like errors. So the "human-like" bar is still a useful general measure, but it is nowhere close to general intelligence. See the pointer to the NYU paper below.
Why ‘human-like’ is a low bar for most AI projects
Artificial Intelligence – The Next Web (TNW), by Tristan Greene
Awww, look! It thinks it's people!
Show me a human-like machine and I’ll show you a faulty piece of tech. The AI market is expected to eclipse $300 billion by 2025. And the vast majority of the companies trying to cash in on that bonanza are marketing some form of “human-like” AI. Maybe it’s time to reconsider that approach.
The big idea is that human-like AI is an upgrade. Computers compute, but AI can learn. Unfortunately, humans aren't very good at the kinds of tasks a computer makes sense for, and AI isn't very good at the kinds of tasks that humans are. That's why researchers are moving away from development paradigms that focus on imitating human cognition.
A pair of NYU researchers recently took a deep dive into how humans and AI process words and word meaning. Through the study of "psychological semantics," the duo hoped to explain the shortcomings of machine learning systems in the natural language processing (NLP) domain, according to a study they published on arXiv: https://arxiv.org/pdf/2008.01766.pdf ...
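For context on what it means for a machine to "process word meaning": most ML-based NLP systems represent a word distributionally, by the statistics of the words that co-occur with it, rather than by anything like human psychological semantics. The toy Python sketch below is my own illustration, not from the paper (the corpus, the `cooc` table, and the `cosine` helper are all invented for the demo); it shows one well-known consequence of purely distributional meaning: antonyms such as "hot" and "cold" appear in identical contexts, so the model scores them as near-synonyms.

```python
# Toy illustration (not from the NYU paper): distributional word meaning.
# A word's vector is just the counts of the words that co-occur with it.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the coffee is hot".split(),
    "the tea is hot".split(),
    "the coffee is cold".split(),
    "the tea is cold".split(),
]

# Build sparse co-occurrence vectors (context window = whole sentence).
cooc = defaultdict(Counter)
for sentence in corpus:
    for word in sentence:
        for other in sentence:
            if other != word:
                cooc[word][other] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# "hot" and "cold" occur in exactly the same contexts in this corpus, so a
# purely distributional model treats the antonyms as near-synonyms, while a
# human would never confuse them.
print(cosine(cooc["hot"], cooc["cold"]))    # 1.0 on this toy corpus
print(cosine(cooc["hot"], cooc["coffee"]))  # 0.8: merely related words
```

Real systems use learned embeddings rather than raw counts, but they inherit the same distributional assumption, which is one reason their notion of "meaning" diverges from the human one the researchers set out to study.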