Published June 29, 2021
By Daniel McDuff, Principal Researcher; Yale Song, Senior Researcher; Sai Vemprala, Senior Researcher; Vibhav Vineet, Senior Researcher; Shuang Ma, Senior Researcher; Ashish Kapoor, Partner Research Manager
The ability to reason about causality and ask "what would happen if…?" is one property that sets human intelligence apart from artificial intelligence. Modern AI algorithms perform well on clearly defined pattern recognition tasks but fall short of generalizing the way human intelligence can. This often leads to unsatisfactory results on tasks that require extrapolation beyond the training examples, such as recognizing events or objects in contexts that differ from the training set. To address this problem, we have built a high-fidelity simulation environment, called CausalCity, designed for developing algorithms that improve causal discovery and counterfactual reasoning in AI.
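To make the "what would happen if…?" framing concrete, here is a toy illustration of our own (not anything from the paper): a tiny hand-built causal model in which rain causes a wet road, which lengthens braking distance. Observing the outcome tells us what did happen; rerunning the model with the cause changed answers the counterfactual question. The variable names and numbers are invented for illustration.

```python
# Toy causal model (illustrative only): rain -> wet road -> braking distance.
def braking_distance(rain: bool) -> float:
    wet_road = rain                # rain causes a wet road
    base = 20.0                    # assumed dry-road braking distance, in meters
    factor = 1.5 if wet_road else 1.0  # wet roads lengthen braking distance
    return base * factor

observed = braking_distance(rain=True)         # what actually happened: 30.0 m
counterfactual = braking_distance(rain=False)  # "what if it hadn't rained?": 20.0 m
print(observed, counterfactual)
```

A pattern recognizer trained only on rainy-day data could predict `observed` well, yet it has no way to answer the counterfactual without some model of the underlying cause-and-effect structure, which is the kind of reasoning CausalCity is built to help develop.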
To understand the problem better, imagine we developed a self-driving car confined to the streets of a neighborhood in Arizona, with few pedestrians, wide, flat roads, and street signs written in English. If we deployed that car on the narrow, busy streets of Delhi, where street signs are written in Hindi, pattern recognition alone would be insufficient to operate safely: the patterns in our "training set" would be very different from the deployment context. Yet humans adapt so quickly to situations they haven't previously observed that someone with an Arizona state-issued driving license is allowed to drive a car in India.
In our recent paper, "CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning," we take a closer look at this problem and propose a new high-fidelity simulation environment.