
Sunday, January 17, 2021

Improving Language AI

Improving, heating up, getting more useful.  But I still think there are considerable problems, not only with interpreting and delivering language, but also with managing multi-turn conversations that refer to particular and changing context.  This is the nature of human intelligence.  Siri, Echo, Google and Watson quickly reveal they cannot do this, so they rely on us to fill in the unknowns.  Is it really improving?
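To make the point concrete, here is a toy sketch of my own (not from the article below) of the kind of referent tracking this requires; the class and the naive resolution strategy are purely illustrative:

# Toy illustration: tracking what "it" refers to across turns,
# the kind of changing context assistants often lose.
class DialogueContext:
    def __init__(self):
        self.entities = []  # entities in order of mention, most recent last

    def mention(self, entity):
        self.entities.append(entity)

    def resolve(self, pronoun):
        # Naive strategy: a pronoun refers to the most recent mention.
        return self.entities[-1] if self.entities else None

ctx = DialogueContext()
ctx.mention("the thermostat")
ctx.mention("the living-room lamp")
print(ctx.resolve("it"))  # -> 'the living-room lamp'
# A real assistant must also handle topic shifts, plural referents, and
# corrections ("no, I meant the thermostat"), which is exactly where
# today's systems fall back on asking us to fill in the unknowns.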

Language AI is really heating up   in VentureBeat   By Pieter Buteneers, Sinch

January 17, 2021 10:25 AM

In just a few short years, deep learning algorithms have evolved to be able to beat the world’s best players at board games and recognize faces with the same accuracy as a human (or perhaps even better). But mastering the unique and far-reaching complexities of human language has proven to be one of AI’s toughest challenges.

Could that be about to change?

The ability of computers to effectively understand all human language would completely transform how we engage with brands, businesses, and organizations across the world. Nowadays most companies don’t have time to answer every customer question. But imagine if a company really could listen to, understand, and answer every question — at any time, on any channel. My team is already working with some of the world’s most innovative organizations and their ecosystems of technology platforms to embrace the huge opportunity that exists to establish one-to-one customer conversations at scale. But there’s work to do.

It took until 2015 to build an algorithm that could recognize faces with an accuracy comparable to humans. Facebook’s DeepFace is 97.4% accurate, just shy of the 97.5% human performance. For reference, the FBI’s facial recognition algorithm only reaches 85% accuracy, meaning it is still wrong in more than one out of every seven cases. ... "
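A quick check of that arithmetic: 100% - 85% leaves a 15% error rate, or roughly one error in every 6.7 cases (1/0.15 ≈ 6.7), which is indeed more than one in seven.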


Sunday, January 03, 2021

Insights for AI from the Human Mind

Good thoughts by Gary Marcus, aimed at the difficulty of creating intelligence, even though we have very rich models all around us, human minds, that we can test against.

Insights for AI from the Human Mind   By Gary Marcus, Ernest Davis

Communications of the ACM, January 2021, Vol. 64 No. 1, Pages 38-41, DOI: 10.1145/3392663

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.

Marvin Minsky, The Society of Mind

Artificial intelligence has recently beaten world champions in Go and poker and made extraordinary progress in domains such as machine translation, object classification, and speech recognition. However, most AI systems are extremely narrowly focused. AlphaGo, the champion Go player, does not know that the game is played by putting stones onto a board; it has no idea what a "stone" or a "board" is, and would need to be retrained from scratch if you presented it with a rectangular board rather than a square grid.

To build AIs able to comprehend open text or power general-purpose domestic robots, we need to go further. A good place to start is by looking at the human mind, which still far outstrips machines in comprehension and flexible thinking.

Here, we offer 11 clues drawn from the cognitive sciences—psychology, linguistics, and philosophy.

No Silver Bullets

All too often, people have propounded simple theories that allegedly explained all of human intelligence, from behaviorism to Bayesian inference to deep learning. But, quoting Firestone and Scholl,4 "there is no one way the mind works, because the mind is not one thing. Instead, the mind has parts, and the different parts of the mind operate in different ways: Seeing a color works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion."

The human brain is enormously complex and diverse, with more than 150 distinctly identifiable brain areas; approximately 86 billion neurons of hundreds if not thousands of different types; trillions of synapses; and hundreds of distinct proteins within each individual synapse.

Truly intelligent and flexible systems are likely to be full of complexity, much like brains. Any theory that proposes to reduce intelligence down to a single principle—or a single "master algorithm"—is bound to fail.

Rich Internal Representations

Cognitive psychology often focuses on internal representations, such as beliefs, desires, and goals. Classical AI did likewise; for instance, to represent President Kennedy's famous 1963 visit to Berlin, one would add a set of facts such as part-of (Berlin, Germany), and visited (Kennedy, Berlin, June 1963). Knowledge consists in an accumulation of such representations, and inference is built on that bedrock; it is trivial on that foundation to infer that Kennedy visited Germany.
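As a concrete illustration (my own sketch, not the authors'), that style of propositional representation, and the trivial inference it supports, might look like this in a few lines of Python:

# Explicit propositions, in the classical-AI style the article describes.
facts = {
    ("part-of", "Berlin", "Germany"),
    ("visited", "Kennedy", "Berlin", "June 1963"),
}

def visited_country(person, country):
    # Infer visited(person, country) from visited(person, city, date)
    # plus part-of(city, country).
    for fact in facts:
        if fact[0] == "visited" and fact[1] == person:
            city = fact[2]
            if ("part-of", city, country) in facts:
                return True
    return False

print(visited_country("Kennedy", "Germany"))  # True -- trivial on this foundation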

Currently, deep learning tries to fudge this, with a bunch of vectors that capture a little bit of what's going on, in a rough sort of way, but that never directly represent propositions at all. There is no specific way to represent visited (Kennedy, Berlin, 1963) or part-of (Berlin, Germany); everything is just rough approximation. Deep learning currently struggles with inference and abstract reasoning because it is not geared toward representing precise factual knowledge in the first place. Once facts are fuzzy, it is difficult to get reasoning right. The much-hyped GPT-3 system1 is a good example of this.11 The related system BERT3 is unable to reliably answer questions like "if you put two trophies on a table and add another, how many do you have?" ... "  (much more follows)

Monday, December 07, 2020

What Makes Robust AI?

Much enjoyed Gary Marcus' writing on the essence of robust AI, most recently from his book, 'Rebooting AI'.  Now see more from him on 'Four Steps Towards Robust AI'.  See more about this in ZDNet, and also at Garymarcus.com

Monday, March 30, 2020

Hybrid AI Examined

Big proponent of the idea.   Neural methods solve specific problems well, yet we solve many other problems symbolically and logically.  Math gives us solutions via algorithms, but the applied use of those algorithms is logically driven.   The next AI decade should seek the power of both kinds of method.

The case for hybrid artificial intelligence  By Ben Dickson in bdTechtalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Deep learning, the main innovation that has renewed interest in artificial intelligence in the past years, has helped solve many critical problems in computer vision, natural language processing, and speech recognition. However, as deep learning matures and moves from the peak of hype to the trough of disillusionment, it is becoming clear that it is missing some fundamental components.

This is a reality that many of the pioneers of deep learning and its main component, artificial neural networks, have acknowledged in various AI conferences in the past year. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, the three “godfathers of deep learning,” have all spoken about the limits of neural networks.

The question is, what is the path forward?

At NeurIPS 2019, Bengio discussed system 2 deep learning, a new generation of neural networks that can handle compositionality, out-of-distribution generalization, and causal structures. At the AAAI 2020 Conference, Hinton discussed the shortcomings of convolutional neural networks (CNNs) and the need to move toward capsule networks.

But for cognitive scientist Gary Marcus, the solution lies in developing hybrid models that combine neural networks with symbolic artificial intelligence, the branch of AI that dominated the field before the rise of deep learning. In a paper titled “The Next Decade in AI: Four Steps Toward Robust Artificial Intelligence,” Marcus discusses how hybrid artificial intelligence can solve some of the fundamental problems deep learning faces today.

Connectionists, the proponents of pure neural network–based approaches, reject any return to symbolic AI. Hinton has compared hybrid AI to combining electric motors and internal combustion engines. Bengio has also shunned the idea of hybrid artificial intelligence on several occasions.

But Marcus believes the path forward lies in putting aside old rivalries and bringing together the best of both worlds.   .... " 
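To give a concrete feel for what a hybrid might look like, here is a minimal neuro-symbolic sketch of my own; the stubbed "network" and the function names are illustrative assumptions, not anything from Marcus's paper:

def neural_reader(image):
    # Stand-in for a trained network that maps an image to a symbol
    # with a confidence score; in a real system this would be a CNN.
    fake_outputs = {"img_2": ("2", 0.97), "img_3": ("3", 0.95)}
    return fake_outputs[image]

def symbolic_add(a_img, b_img):
    a, conf_a = neural_reader(a_img)
    b, conf_b = neural_reader(b_img)
    # Perception is fuzzy; the arithmetic over the decoded symbols is exact.
    return int(a) + int(b), min(conf_a, conf_b)

print(symbolic_add("img_2", "img_3"))  # -> (5, 0.95)

The division of labor is the point: the learned component handles noisy perception, while the symbolic component guarantees the reasoning step, something neither approach does well alone.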

Sunday, February 02, 2020

Can an AI System Reinvent Physics?

The idea has been kicked around a bit.  I see Gary Marcus is one of the authors; I have much enjoyed his book on the current limitations of AI.  See my review at the tag below.  So the answer is: if we include a rather narrow range of predictions, based on lots of data, we might get some predictive results that look like 'laws'.   But will they be useful broadly, and testable in broader contexts?

Are Neural Networks About to Reinvent Physics?
The revolution of machine learning has been greatly exaggerated.
By   Gary Marcus and Ernest Davis in Nautil.us

Can AI teach itself the laws of physics? Will classical computers soon be replaced by deep neural networks? Sure looks like it, if you’ve been following the news, which lately has been filled with headlines like, “A neural net solves the three-body problem 100 million times faster: Machine learning provides an entirely new way to tackle one of the classic problems of applied mathematics,” and “Who needs Copernicus if you have machine learning?”. The latter was described by another journalist, in an article called “AI Teaches Itself Laws of Physics,” as a “monumental moment in both AI and physics,” which “could be critical in solving quantum mechanics problems.”

The trouble is, the authors have given no compelling reason to think that they could actually do this.

None of these claims is even close to being true. All derive from just two recent studies that use machine learning to explore different aspects of planetary motion. Both papers represent interesting attempts to do new things, but neither warrants the excitement. The exaggerated claims made in both papers, and the resulting hype surrounding them, are symptoms of a tendency among science journalists—and sometimes scientists themselves—to overstate the significance of new advances in AI and machine learning.

As always, when one sees large claims made for an AI system, the first question to ask is, “What does the system actually do?”  .... "

Monday, October 21, 2019

We Can't Trust Deep Learning Alone

It's roughly the 65th anniversary of the founding proposal of AI.   Time to rethink the broad idea.   More comments on a book I have been reading: Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus.  I am a practitioner in the space who has built many systems of this type, but remain convinced that we must combine deep learning with logic processing (or classical) AI.

We used learning in such systems; it was not deep, but it did contain and update the knowledge needed to make decisions.   How can we make AI both broad and robust?  Today we have other ideas that can help us build logical models of things, like business process models and RPA.  Minsky's Society of Mind is mentioned as a broad template.

Here is an interview in MIT Technology Review on the idea:

Gary Marcus, a leader in the field, discusses how we could achieve general intelligence—and why that might make machines safer.   by Karen Hao  in MIT Technology Review

Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise. ..."

Finished; I like the thoughts provided.   The book sets the stage.  Read it.  My only disappointment is that though the book provides an excellent argument for why, it does not provide a good recommendation of how we should proceed.   I always thought there were hints in the notion of 'causality' that might help.  Now reading Judea Pearl's  "The Book of Why: The New Science of Cause and Effect" on that topic.

Friday, October 11, 2019

Review of Gary Marcus' Rebooting AI: Building Artificial Intelligence We Can Trust

Just reading this book.   Found the non-technical overview/review below quite useful.   I mostly agree with it, and the outline of chapters is useful.   Saving it here for my own reference.   Buy a copy.

Prof. Kenneth Forbus' review of Gary Marcus' book: Rebooting AI: Building Artificial Intelligence We Can Trust


Saturday, October 05, 2019

Rebooting AI: The Future of General, Trustable AI

In the process of reading Gary Marcus and Ernest Davis'  book:  Rebooting AI: Building Artificial Intelligence We Can Trust.   Nicely done, starting with a history of AI and its challenges, and then a real pushback on what AI needs to do to be really useful.  Skeptics will like it, but it's also for those interested in where AI is headed.  I have always been a proponent of mixing classic AI methods with 'deep learning', and have lived through its evolution to the current state.  While it's true that deep learning can solve some narrow and complex problems, it is not well suited to the complexity of business, or even of real-life, problems.  AI needs better understanding, transparency and trustability.    A good book so far that's worth a read, addressing a fundamental problem.   Will follow with a more complete impression when done.

Monday, September 30, 2019

Building More General, Trustable AI: Deeper Understanding?

I have just been thinking about the idea of what is called 'deep understanding' here, that is, more generally applicable AI.    I agree that deep learning is impressive, but still very narrow.  I don't agree that deep understanding, or more general AI, would necessarily make AI safer; it could make it less transparent, more prone to tricks and misuse, and more dangerous.

Book:  Rebooting AI,  Building Artificial Intelligence we can Trust   By Gary Marcus and Ernest Davis  Reading ...

We can’t trust AI systems built on deep learning alone 
Gary Marcus, a leader in the field, discusses how we could achieve general intelligence—and why that might make machines safer.   by Karen Hao  in Technology Review 

Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise.

Marcus, a neuroscientist by training who has spent his career at the forefront of AI research, cites both technical and ethical concerns. From a technical perspective, deep learning may be good at mimicking the perceptual tasks of the human brain, like image or speech recognition. But it falls short on other tasks, like understanding conversations or causal relationships. To create more capable and broadly intelligent machines, often referred to colloquially as artificial general intelligence, deep learning must be combined with other methods. ... "

Friday, September 13, 2019

Finances of AI Research: DeepMind

Are such losses in leading-edge tech significant, or to be expected?

DeepMind's Losses and the Future of Artificial Intelligence by Gary Marcus in Wired

Alphabet’s DeepMind unit, conqueror of Go and other games, is losing lots of money. Continued deficits could imperil investments in AI.  Alphabet’s DeepMind lost $572 million last year. What does it mean?

DeepMind, likely the world’s largest research-focused artificial intelligence operation, is losing a lot of money fast, more than $1 billion in the past three years. DeepMind also has more than $1 billion in debt due in the next 12 months.

Does this mean that AI is falling apart?

Gary Marcus is founder and CEO of Robust.AI and a professor of psychology and neural science at NYU. He is the author, with Ernest Davis, of the forthcoming Rebooting AI: Building Artificial Intelligence We Can Trust.

Not at all. Research costs money, and DeepMind is doing more research every year. The dollars involved are large, perhaps more than in any previous AI research operation, but far from unprecedented when compared with the sums spent in some of science’s largest projects. The Large Hadron Collider costs something like $1 billion per year and the total cost of discovering the Higgs Boson has been estimated at more than $10 billion. Certainly, genuine machine intelligence (also known as artificial general intelligence), of the sort that would power a Star Trek–like computer, capable of analyzing all sorts of queries posed in ordinary English, would be worth far more than that. .... "