Nicely put piece that connects with the current state of the technology and shows some of the challenges. I have been involved in a number of attempts at including common sense in reasoning, without general success. This ties back to the need for strong context-based reasoning made in the last post. Most succinctly, it's knowledge and context with a causal engine.
"The quest for artificial common sense," by Samuel Flender in TowardsDataScience
On July 19th, a blog post titled ‘Feeling unproductive? Maybe you should stop overthinking.’ appeared online. The 1000-word self-help article explains that overthinking is the enemy of our creativity, and advises us to be more in the moment:
“In order to get something done, maybe we need to think less. Seems counter-intuitive, but I believe sometimes our thoughts can get in the way of the creative process. We can work better at times when we ‘tune out’ the external world and focus on what’s in front of us.”
The post was written by GPT-3, OpenAI's massive 175-billion-parameter neural network trained on nearly half a trillion words. UC Berkeley student Liam Porr merely wrote the title and let the algorithm fill in the text. A 'fun experiment', to see whether the AI could fool people or not. Indeed, GPT-3 hit a nerve: the post was up-voted to the top of Hacker News.
There's a paradox, then, with today's AI. While some of GPT-3's writings arguably meet the Turing test criterion — convincing people that it is human — it fails spectacularly at the simplest tasks. AI researcher Gary Marcus asked GPT-2, the precursor to GPT-3, to complete the following sentence: ...