Showing posts with label Judea Pearl. Show all posts

Monday, October 21, 2019

We Can't Trust Deep Learning Alone

It's roughly the 65th anniversary of the proposal of AI. Time to rethink the broad idea. More comments on a book I have been reading: Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus. I am a practitioner in the space who has built many systems of this type, and I remain a proponent of combining Deep Learning with logic processing (or classical) AI.

We used learning in such systems; it was not deep, but it did contain and update the knowledge needed to make decisions. How can we make AI both broad and robust? Today we have other ideas that can help us build logical models of things, like Business Process Models and RPA. Minsky's Society of Mind is mentioned as a broad template.

Here is his interview in Technology Review on the idea:

Gary Marcus, a leader in the field, discusses how we could achieve general intelligence—and why that might make machines safer.   by Karen Hao  in MIT Technology Review

"Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise. ..."

Finished; I like the thoughts provided. The book sets the stage. Read it. My only disappointment is that though the book provides an excellent argument for why, it does not provide a good recommendation of how we should proceed. I always thought there were hints in the context of 'causality' that might help. Now reading Judea Pearl's "The Book of Why: The New Science of Cause and Effect" on that topic.

Thursday, February 28, 2019

Data, Data Science, Causal Thinking

"The Seven Tools of Causal Inference, with Reflections on Machine Learning," by ACM  A.M. Turing Award recipient Judea Pearl, describes tools that overcome obstacles to human-level machine intelligence. Pearl delivers a message to machine-learning and AI experts in an original video at bit.ly/2GUEyJW.

Excerpt from the long paper, ultimately positioning our challenge:

Key insights:

- Data Science is a two-body problem, connecting data and reality, including the forces behind the data.

- Data Science is the art of interpreting reality in the light of data, not a mirror through which data sees itself from different angles.

- The ladder of causation is the double helix of causal thinking, defining what can and cannot be learned about actions and about worlds that could have been. ...   "
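To make the first two rungs of that ladder concrete, here is a toy sketch (all the probabilities are invented for illustration) of how what we see, P(Y|X), and what happens when we act, P(Y|do(X)), can disagree once a confounder is in play:

```python
# Rungs 1 vs 2 of the ladder of causation, on a toy model Z -> X, Z -> Y, X -> Y.
# All probabilities are made up for illustration.

P_Z1 = 0.5                                    # confounder Z
P_X1_given_Z = {1: 0.8, 0: 0.2}               # P(X=1 | Z=z)
P_Y1_given_XZ = {(1, 1): 0.9, (1, 0): 0.5,    # P(Y=1 | X=x, Z=z)
                 (0, 1): 0.7, (0, 0): 0.1}

def p_z(z):
    return P_Z1 if z == 1 else 1 - P_Z1

# Rung 1 (seeing): P(Y=1 | X=1), weighting by P(z | X=1) via Bayes' rule
p_x1 = sum(P_X1_given_Z[z] * p_z(z) for z in (0, 1))
p_z_given_x1 = {z: P_X1_given_Z[z] * p_z(z) / p_x1 for z in (0, 1)}
seeing = sum(P_Y1_given_XZ[(1, z)] * p_z_given_x1[z] for z in (0, 1))

# Rung 2 (doing): P(Y=1 | do(X=1)) by the adjustment formula, weighting by P(z)
doing = sum(P_Y1_given_XZ[(1, z)] * p_z(z) for z in (0, 1))

print(seeing, doing)   # 0.82 vs 0.70: association overstates the effect here
```

The only difference between the two sums is the weight on z, which is exactly the point of the rung-1/rung-2 distinction.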

Monday, February 25, 2019

Judea Pearl on Causal Inference

Back to the need for better integrated inference. It's not enough to just find patterns; we need to insert them into our cognitive work.

"The Seven Tools of Causal Inference, with Reflections on Machine Learning," by ACM A.M. Turing Award recipient Judea Pearl, describes tools that overcome obstacles to human-level machine intelligence. Pearl delivers a message to machine-learning and AI experts in an original video at bit.ly/2GUEyJW.

Full paper in the Communications of the ACM. Technical, but it contains excellent overview pieces that are essential to understanding the future of AI beyond Deep Learning. And it makes my point above.

Ultimately this makes the case for connecting any kind of analytics (like machine learning) to human augmentation and interaction.

Sunday, September 30, 2018

Judea Pearl on Causality Tools and Machine Learning

Causality scientist Judea Pearl looks at some of the limitations of machine learning systems. A technical paper, but a useful scan for practitioners.

The seven tools of causal inference with reflections on machine learning

The seven tools of causal inference with reflections on machine learning, Pearl, CACM 2018

With thanks to @osmandros for sending me a link to this paper on twitter.

In this technical report Judea Pearl reflects on some of the limitations of machine learning systems that are based solely on statistical interpretation of data. To understand why? and to answer what if? questions, we need some kind of a causal model. In the social sciences and especially epidemiology, a transformative mathematical framework called ‘Structural Causal Models’ (SCM) has seen widespread adoption. Pearl presents seven example tasks which the model can handle, but which are out of reach for associational machine learning systems.  ... "
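A minimal sketch of the kind of "what if?" question an SCM handles and a purely associational model cannot (the structural equations here are invented for illustration): a unit-level counterfactual, computed by the standard abduction-action-prediction recipe.

```python
# Toy structural causal model: X = U_x ; Y = 2*X + U_y
# (the equations and coefficient are invented for illustration)

def counterfactual_y(x_obs, y_obs, x_new):
    """What would Y have been for this specific unit, had X been x_new?"""
    u_y = y_obs - 2 * x_obs   # 1. abduction: recover this unit's noise U_y
    # 2. action: replace the equation for X with X = x_new (the do-operator)
    # 3. prediction: recompute Y under the modified model, keeping the same noise
    return 2 * x_new + u_y

# We observed X=1, Y=3, so U_y must be 1; had X been 2, Y would have been 5.
print(counterfactual_y(1, 3, 2))   # 5
```

No amount of curve fitting over (X, Y) pairs answers this, because the question is about one unit under an intervention that never happened.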

Thursday, September 27, 2018

What is AI? How should we do Future Research for Application?

An excellent look at what has been achieved in AI, and how it relates to what we call intelligence. Are we taking the right approach to look for yet better solutions? Nicely presented, with good descriptions of the dilemma of how research and results are funded and perceived by the public, academia, and industry. I particularly like that he does not abandon 'classical' AI in favor of neural approaches. A supporting, non-technical summary video on human vs. animal intelligence: https://vimeo.com/288403370

ACM Summary:  The recent successes of deep learning have revealed something very interesting about the structure of our world, yet this seems to be the least pursued and talked about topic today.

In AI, the key question today is not whether we should use model-based or function-based approaches but how to integrate and fuse them so we can realize their collective benefits.

We need a new generation of AI researchers who are well versed in and appreciate classical AI, machine learning, and computer science more broadly while also being informed about AI history.

Adnan Darwiche discusses "Human-Level Intelligence or Animal-Like Abilities?" 

Communications of the ACM, October 2018, Vol. 61 No. 10, Pages 56-67
10.1145/3271625

"The vision systems of the eagle and the snake outperform everything that we can make in the laboratory, but snakes and eagles cannot build an eyeglass or a telescope or a microscope." —Judea Pearl

The recent successes of neural networks in applications like speech recognition, vision, and autonomous navigation have led to great excitement by members of the artificial intelligence (AI) community, as well as by the general public. Over a relatively short time, by the science clock, we managed to automate some tasks that have defied us for decades, using one of the more classical techniques due to AI research.

The triumph of these achievements has led some to describe the automation of these tasks as having reached human-level intelligence. This perception, originally hinted at in academic circles, has gained momentum more broadly and is leading to some implications. For example, some coverage of AI in public arenas, particularly comments made by several notable figures, has led to mixing this excitement with fear of what AI might bring us all in the future (doomsday scenarios).

 Moreover, a trend is emerging in which machine learning research is being streamlined into neural network research, under its newly acquired label "deep learning." This perception has also caused some to question the wisdom of continuing to invest in other machine learning approaches or even other mainstream areas of AI (such as knowledge representation, symbolic reasoning, and planning). ... " 

Saturday, May 26, 2018

Addressing Causation vs Curve Fitting

The below comes from Inference.vc. It addresses some of the comments in Judea Pearl's recent post, commented on here. Beyond the first few paragraphs it is thoughtful but very technical.

ML beyond Curve Fitting: An Intro to Causal Inference and do-Calculus 

You might have come across Judea Pearl's new book, and a related interview which was widely shared in my social bubble. In the interview, Pearl dismisses most of what we do in ML as curve fitting. While I believe that's an overstatement (it conveniently ignores RL, for example), it's a nice reminder that most productive debates are often triggered by controversial or outright arrogant comments. Calling machine learning alchemy was a great recent example. After reading the article, I decided to look into his famous do-calculus and the topic of causal inference once again.

Again, because this has happened to me semi-periodically. I first learned do-calculus in a (very unpopular but advanced) undergraduate course on Bayesian networks. Since then, I have re-encountered it every 2-3 years in various contexts, but somehow it never really struck a chord. I always just thought "this stuff is difficult and/or impractical" and eventually forgot about it and moved on. I never realized how fundamental this stuff was, until now.

This time around, I think I fully grasped the significance of causal reasoning and I turned into a full-on believer. I know I'm late to the game but I almost think it's basic hygiene for people working with data and conditional probabilities to understand the basics of this toolkit, and I feel embarrassed for completely ignoring this throughout my career.

In this post I'll try to explain the basics, and convince you why you should think about this, too. If you work on deep learning, that's an even better reason to understand this. Pearl's comments may be unhelpful if interpreted as contrasting deep learning with causal inference. Rather, you should interpret them as highlighting causal inference as a huge, relatively underexplored application of deep learning. Don't get discouraged by causal diagrams looking a lot like Bayesian networks (not a coincidence, seeing as they were both pioneered by Pearl): they don't compete with deep learning, they complement it. .... "
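To see the curve-fitting vs. causal-inference gap in a few lines (my own toy model and numbers, not from the post): simulate confounded data, then compare the naive conditional estimate with a backdoor-adjusted estimate of P(Y=1 | do(X=1)).

```python
import random

random.seed(0)

# Hypothetical generative model: confounder Z -> X and Z -> Y, plus X -> Y.
def simulate(n=100_000):
    data = []
    for _ in range(n):
        z = random.random() < 0.5
        x = random.random() < (0.7 if z else 0.3)
        p_y = {(1, 1): 0.8, (1, 0): 0.4, (0, 1): 0.6, (0, 0): 0.2}[(x, z)]
        data.append((z, x, random.random() < p_y))
    return data

data = simulate()

def prob(event, given=lambda r: True):
    rows = [r for r in data if given(r)]
    return sum(event(r) for r in rows) / len(rows)

# Naive "curve fit": P(Y=1 | X=1) -- biased by the confounder (truth here is 0.68)
naive = prob(lambda r: r[2], lambda r: r[1])

# Backdoor adjustment: sum_z P(Y=1 | X=1, Z=z) * P(Z=z)  (truth here is 0.60)
adjusted = sum(
    prob(lambda r: r[2], lambda r, z=z: r[1] and r[0] == z) * prob(lambda r: r[0] == z)
    for z in (False, True)
)
print(round(naive, 3), round(adjusted, 3))
```

The data alone cannot tell you which estimate to use; that decision comes from the causal diagram, which is the whole point of the do-calculus machinery.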

Sunday, May 20, 2018

Judea Pearl Criticizes Machine Learning

Quite an interesting view. Bayesian networks in particular have offered a more broadly insightful and transparent approach to modeling than machine learning. But machine learning, especially deep learning, can target narrower problems more specifically.

How a Pioneer of Machine Learning Became One of Its Sharpest Critics
Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can't compute cause and effect.

 By Kevin Hartnett in The Atlantic

Artificial intelligence owes a lot of its smarts to Judea Pearl. In the 1980s he led efforts that allowed machines to reason probabilistically. Now he’s one of the field’s sharpest critics. In his latest book, The Book of Why: The New Science of Cause and Effect, he argues that artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.

Three decades ago, a prime challenge in artificial-intelligence research was to program machines to associate a potential cause to a set of observable conditions. Pearl figured out how to do that using a scheme called Bayesian networks. Bayesian networks made it practical for machines to say that, given a patient who returned from Africa with a fever and body aches, the most likely explanation was malaria. In 2011 Pearl won the Turing Award, computer science’s highest honor, in large part for this work.
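The malaria story is essentially a maximum-posterior query over candidate causes. Here is a toy version (the priors and likelihoods are made up, purely illustrative, not medical):

```python
# Hypothetical priors over causes, for a patient just back from an endemic region
prior = {'malaria': 0.30, 'flu': 0.10, 'other': 0.60}

# Hypothetical likelihood of observing "fever and body aches" under each cause
likelihood = {'malaria': 0.9, 'flu': 0.8, 'other': 0.05}

# Bayes' rule: P(D | evidence) is proportional to P(evidence | D) * P(D)
joint = {d: prior[d] * likelihood[d] for d in prior}
total = sum(joint.values())
posterior = {d: joint[d] / total for d in joint}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))   # malaria comes out as the most likely explanation
```

A real Bayesian network factors this computation over a graph of many variables, but the evidence-to-explanation step is this same rule.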

But as Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently.  .... " 


Wednesday, October 26, 2016

Judea Pearl on Engines of Evidence

A favorite researcher on the topic.  How do we understand how evidence models results?  In the Edge: 

Engines of Evidence,  A Conversation With Judea Pearl
A new thinking came about in the early '80s when we changed from rule-based systems to a Bayesian network. Bayesian networks are probabilistic reasoning systems. An expert will put in his or her perception of the domain. A domain can be a disease, or an oil field—the same target that we had for expert systems. 

The idea was to model the domain rather than the procedures that were applied to it. In other words, you would put in local chunks of probabilistic knowledge about a disease and its various manifestations and, if you observe some evidence, the computer will take those chunks, activate them when needed and compute for you the revised probabilities warranted by the new evidence.

It's an engine for evidence. It is fed a probabilistic description of the domain and, when new evidence arrives, the system just shuffles things around and gives you your revised belief in all the propositions, revised to reflect the new evidence.         
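Pearl's "engine for evidence" in miniature (one hypothesis node, two conditionally independent findings, every number invented): each arriving piece of evidence just re-shuffles the belief by Bayes' rule.

```python
# One hypothesis node D with two conditionally independent findings.
# Pairs are (P(e | D), P(e | not D)) -- all numbers hypothetical.
LIKELIHOODS = {'cough': (0.8, 0.2), 'fever': (0.7, 0.1)}

def update(belief, finding):
    """Revise P(D) after observing one finding, by Bayes' rule."""
    l_d, l_nd = LIKELIHOODS[finding]
    num = belief * l_d
    return num / (num + (1 - belief) * l_nd)

belief = 0.10                      # prior P(D)
belief = update(belief, 'cough')   # about 0.308 after the first finding
belief = update(belief, 'fever')   # about 0.757 after the second
print(round(belief, 3))
```

The local chunks of knowledge (the likelihood pairs) are authored once; the engine activates whichever are needed as evidence arrives.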

JUDEA PEARL, professor of computer science at UCLA, has been at the center of not one but two scientific revolutions. First, in the 1980s, he introduced a new tool to artificial intelligence called Bayesian networks. This probability-based model of machine reasoning enabled machines to function in a complex, ambiguous, and uncertain world. Within a few years, Bayesian networks completely overshadowed the previous rule-based approaches to artificial intelligence.

Leveraging the computational benefits of Bayesian networks, Pearl realized that the combination of simple graphical models and probability (as in Bayesian networks) could also be used to reason about cause-effect relationships. The significance of this discovery far transcends its roots in artificial intelligence. His principled, mathematical approach to causality has already benefited virtually every field of science and social science, and promises to do more when popularized. 

He is the author of Heuristics; Probabilistic Reasoning in Intelligent Systems; and Causality: Models, Reasoning, and Inference. He is a winner of the Turing Award.  .... "

Wednesday, December 31, 2014

Judea Pearl Keynote Address

Keynote Lecture at the 2014 BayesiaLab User Conference

September 23, 2014, Los Angeles, California

From Bayesian Networks to Causal and Counterfactual Reasoning  a talk by Judea Pearl.

"... The development of Bayesian Networks, so people tell me, marked a turning point in the way uncertainty is handled in computer systems. For me, this development was a stepping stone towards a more profound transition, from reasoning about beliefs to reasoning about causal and counterfactual relationships. In this talk, I will survey the milestones of this journey, and  summarize the practical and conceptual problems that we can solve today and could not address two decades ago. ... "

Registration required.

Sunday, November 16, 2014

Thinking Causality in Science and Statistics

I have been looking back to understand how AI has changed since the 90s, when we worked with rule-based expert systems. One book that addresses some of the changes is Judea Pearl's Causality: Models, Reasoning, and Inference. Now over a decade old, it contains some interesting gems. It deals with the mixing of knowledge in diagrams and equations, and develops approaches that have evolved into the now commonly used Bayesian networks. More on his site.

There is also a copy of a lecture that Pearl gave at the time, The Art and Science of Cause and Effect, originally an epilogue in the book. It is now available free at the link. It deals with the interesting concept of causality, which is remarkably complex. The idea is essential in working with engineered systems, avoided in the physical sciences, and almost always warned against in statistics. The article examines why, and poses some remedies. I disagree with some of his early historical views (causation was not discovered at the time of Galileo), but the lecture is still an excellent read.

Consider also how Big Data methods have backed off the need for strict causation requirements.

Tuesday, August 14, 2012

An Introduction to Bayesian Networks

I may have included this in a previous post, but it is worth repeating. A good basic introduction to Bayesian network methods from Conrady Science, which integrates some writings by Judea Pearl. Essential reading if you are interested in intelligence-rich techniques that can be readily integrated with real-time decision making. In our own experience we used simple rule-based layers that led to specific Bayesian networks to solve complex problems. I can also see this method integrated with numerical optimization techniques and then plugged into process-flow implementations.
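A minimal sketch of the layered approach described above (the routing rules, model names, and probabilities are all hypothetical): a crisp rule layer selects which small probabilistic model to consult, and that model then answers the query.

```python
# Rule layer: crisp facts route the case to one of several small probabilistic models.
def route(case):
    if case.get('traveled'):
        return 'tropical'
    return 'general'

# Each "model" is collapsed here to a prior and one evidence-likelihood pair
# (P(e | D), P(e | not D)) -- a stand-in for a full Bayesian network.
MODELS = {
    'tropical': {'prior': 0.30, 'lik': (0.9, 0.1)},
    'general':  {'prior': 0.05, 'lik': (0.6, 0.2)},
}

def query(model_name, evidence_seen):
    """Posterior P(D) from the selected model, given one binary finding."""
    m = MODELS[model_name]
    l_d, l_nd = m['lik']
    if not evidence_seen:
        l_d, l_nd = 1 - l_d, 1 - l_nd
    num = m['prior'] * l_d
    return num / (num + (1 - m['prior']) * l_nd)

case = {'traveled': True, 'fever': True}
model = route(case)
print(model, round(query(model, case['fever']), 3))
```

Keeping the rule layer crisp and the probabilistic layer small is what makes this combination practical in a real-time decision flow.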

Tuesday, March 27, 2012

Judea Pearl Wins Turing Award

For contributions to artificial intelligence in the area of uncertainty and causal reasoning. More details about his work here. His excellent book Causality: Models, Reasoning, and Inference "... won the 2001 Lakatos Award from the London School of Economics and Political Science 'for an outstanding significant contribution to the philosophy of science.' ..."