Showing posts with label Marvin Minsky.

Sunday, January 03, 2021

Insights for AI from the Human Mind

Good thoughts by Gary Marcus on the difficulty of creating intelligence, even though we have very rich models around us to test against.

Insights for AI from the Human Mind   By Gary Marcus, Ernest Davis

Communications of the ACM, January 2021, Vol. 64 No. 1, Pages 38-41  10.1145/3392663

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.

Marvin Minsky, The Society of Mind

Artificial intelligence has recently beaten world champions in Go and poker and made extraordinary progress in domains such as machine translation, object classification, and speech recognition. However, most AI systems are extremely narrowly focused. AlphaGo, the champion Go player, does not know that the game is played by putting stones onto a board; it has no idea what a "stone" or a "board" is, and would need to be retrained from scratch if you presented it with a rectangular board rather than a square grid.

To build AIs able to comprehend open text or power general-purpose domestic robots, we need to go further. A good place to start is by looking at the human mind, which still far outstrips machines in comprehension and flexible thinking.

Here, we offer 11 clues drawn from the cognitive sciences—psychology, linguistics, and philosophy.

No Silver Bullets

All too often, people have propounded simple theories that allegedly explained all of human intelligence, from behaviorism to Bayesian inference to deep learning. But, quoting Firestone and Scholl [4], "there is no one way the mind works, because the mind is not one thing. Instead, the mind has parts, and the different parts of the mind operate in different ways: Seeing a color works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion."

The human brain is enormously complex and diverse, with more than 150 distinctly identifiable brain areas; approximately 86 billion neurons of hundreds if not thousands of different types; trillions of synapses; and hundreds of distinct proteins within each individual synapse.

Truly intelligent and flexible systems are likely to be full of complexity, much like brains. Any theory that proposes to reduce intelligence down to a single principle—or a single "master algorithm"—is bound to fail.

Rich Internal Representations

Cognitive psychology often focuses on internal representations, such as beliefs, desires, and goals. Classical AI did likewise; for instance, to represent President Kennedy's famous 1963 visit to Berlin, one would add a set of facts such as part-of (Berlin, Germany), and visited (Kennedy, Berlin, June 1963). Knowledge consists in an accumulation of such representations, and inference is built on that bedrock; it is trivial on that foundation to infer that Kennedy visited Germany.
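As a toy sketch of this classical style (mine, not the authors'): facts can be stored as predicate tuples, and the Kennedy-visited-Germany inference becomes a trivial lookup. The fact names follow the article; the function is illustrative.

```python
# Toy classical-AI fact store: facts are (predicate, args...) tuples.
facts = {
    ("part-of", "Berlin", "Germany"),
    ("visited", "Kennedy", "Berlin", "June 1963"),
}

def visited_country(person, country):
    """Infer visited(person, country) from visited(person, city)
    combined with part-of(city, country)."""
    for fact in facts:
        if fact[0] == "visited" and fact[1] == person:
            city = fact[2]
            if ("part-of", city, country) in facts:
                return True
    return False

print(visited_country("Kennedy", "Germany"))  # True
```

The point is how little machinery the inference needs once the facts are represented precisely: it is a direct membership test, not an approximation.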

Currently, deep learning tries to fudge this, with a bunch of vectors that capture a little bit of what's going on, in a rough sort of way, but that never directly represent propositions at all. There is no specific way to represent visited (Kennedy, Berlin, 1963) or part-of (Berlin, Germany); everything is just rough approximation. Deep learning currently struggles with inference and abstract reasoning because it is not geared toward representing precise factual knowledge in the first place. Once facts are fuzzy, it is difficult to get reasoning right. The much-hyped GPT-3 system [1] is a good example of this [11]. The related system BERT [3] is unable to reliably answer questions like "if you put two trophies on a table and add another, how many do you have?" ... '   (much more follows)

Monday, October 21, 2019

We Can't Trust Deep Learning Alone

It's roughly the 65th anniversary of the proposal of AI. Time to rethink the broad idea. More comments on a book I have been reading: Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus. I am a practitioner in the space who has built many systems of this type, and I remain convinced that we must combine deep learning with logic-based (classical) AI.

We used learning in such systems; it was not deep learning, but it did contain and update the knowledge needed to make decisions. How can we make AI both broad and robust? Today we have other ideas that can help us build logical models of things, like Business Process Models and RPA. Minsky's Society of Mind is mentioned as a broad template.
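A minimal sketch of what combining deep learning with logic can mean in practice, assuming a learned model that only scores perceptions while an explicit rule layer makes the final call. The classifier here is a stub standing in for a trained network; all names and thresholds are hypothetical, not from the book.

```python
# Hypothetical hybrid pattern: learned perception + symbolic decision rules.

def learned_classifier(image_features):
    # Stand-in for a deep net: returns label probabilities.
    # A real system would run inference on a trained model here.
    return {"stop_sign": 0.92, "billboard": 0.08}

def decide(image_features, speed_kmh):
    probs = learned_classifier(image_features)
    # Symbolic layer: explicit, auditable rules over the net's fuzzy output.
    if probs.get("stop_sign", 0.0) > 0.5 and speed_kmh > 0:
        return "brake"
    return "continue"

print(decide(None, 40))  # brake
```

The appeal of this split is governance: the learned part handles perception it is good at, while the rules that carry consequences stay inspectable and updatable, much like the knowledge bases in the older systems described above.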

Here is an interview in Technology Review on the idea:

Gary Marcus, a leader in the field, discusses how we could achieve general intelligence—and why that might make machines safer.   by Karen Hao  in MIT Technology Review

Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise. ..."

Finished it; I like the thoughts provided. The book sets the stage. Read it. My only disappointment is that though the book provides an excellent argument for why, it does not provide a good recommendation for how we should proceed. I have always thought there were hints in the context of 'causality' that might help. Now reading Judea Pearl's "The Book of Why: The New Science of Cause and Effect" on that topic.

Wednesday, February 10, 2016

Machine Intelligence in the Workplace

From the Cisco Blog: some interesting thoughts on intelligence and an interview with Marvin Minsky. I note the comment about Minsky saying big business is stalling progress, but I don't recall him being unwilling to take money from business. Sure, there are different goals, challenges, and methods. Governance is also very different. But we funded him and many AI enterprises of the time.

Rowan Trollope writes:
" ... As an inventor and engineer myself, I get it. Big business can get in the way of good science. But it doesn’t have to be that way. I joined Cisco because I felt that solving some of these very hard problems would be easiest from inside a company with both tremendous resources and a passion for innovation. Resources and passion for innovation are the key words. Three years later, my experience here proves that hypothesis right. We are using our resources and our commitment to solve some hard problems. Others are too; I am impressed by the real progress in Machine Intelligence made at the likes of Google and Facebook, Apple and even IBM. ...  " 

Tuesday, January 26, 2016

Marvin Minsky and AI

I see that Marvin Minsky, AI pioneer, died a few days ago. " ... American cognitive scientist in the field of artificial intelligence (AI), co-founder of the Massachusetts Institute of Technology's AI laboratory, and author of several texts on AI and philosophy ..." He inspired us in the late 80s, both in using AI in industry and in applying neural nets for pattern recognition in consumer studies. That foundational neural network research led to deep learning.

We visited the AI lab he founded at MIT. Students of that lab staffed our group. We met a few times, spoke at some of the same conferences, and received some of his critical and impish humor along the way. Minsky's First Law: Words should be your servants, not your masters. ... Minsky's Second Law: Don't just do something. Stand there. ... The Edge does a nice job in Remembering Minsky. His WP article.

Saturday, January 18, 2014

Marvin Minsky Honored

And more about AI in the MIT News. Marvin Minsky, another inspiration for us in our AI days, has been honored for lifetime achievement. I believe that some of his seminal work has yet to be fully appreciated for its role in leading to automated expertise delivery. " ... Minsky reconfirmed his conviction that one day we will develop machines that will be as smart as humans. But he added “how long this takes will depend on how many people we have working on the right problems. Right now there is a shortage of both researchers and funding.” ... '

Sunday, September 22, 2013

Reviving Artificial Intelligence at MIT

More on MIT's new center.   Is AI and its application to business back?

" ... A new interdisciplinary research center at MIT, funded by the National Science Foundation, aims at nothing less than unraveling the mystery of intelligence.

Artificial-intelligence research revives its old ambitions

The birth of artificial-intelligence research as an autonomous discipline is generally thought to have been the monthlong Dartmouth Summer Research Project on Artificial Intelligence in 1956, which convened 10 leading electrical engineers — including MIT’s Marvin Minsky and Claude Shannon — to discuss “how to make machines use language” and “form abstractions and concepts.” A decade later, impressed by rapid advances in the design of digital computers, Minsky was emboldened to declare that “within a generation ... the problem of creating ‘artificial intelligence’ will substantially be solved.” ... '

Tuesday, September 07, 2010

Robotics Newsletter

NASA has a new robotics newsletter, via IEEE. In the most recent edition they point to a piece by Marvin Minsky on Telepresence. We met with Minsky and read lots of his works in the late 80s while working with artificial intelligence.

Saturday, January 20, 2007

Marvin Minsky Interview


Marvin Minsky interviewed in Discover. Minsky has been one of the leading thinkers in the world of artificial intelligence for years. His book The Society of Mind was instrumental in scoping out a design outline of intelligence. His new book, The Emotion Machine, is another great read in this area, addressing how systems can include emotions. He has influenced both fiction and technology over the last thirty years.