Showing posts with label Common Sense. Show all posts

Thursday, June 08, 2023

Fostering AI Common Sense: The need for Critical Thinking and Healthy Skepticism

Very important, from SAS:

Fostering AI common sense: The need for critical thinking and healthy skepticism

by REGGIE TOWNSEND on MAY 25, 2023 

As AI rapidly advances over the next several years, I’ve been fortunate to have an active role in helping to guide a responsible path forward when it comes to technology’s impact on our daily lives. Currently, this role includes serving as Vice President for the SAS Data Ethics Practice, as an EqualAI board member and as a member of the National Artificial Intelligence Advisory Committee (NAIAC).

The acceleration of AI development and application has incredible potential for supercharging our decision making and democratizing access to technology. However, it also carries the risks of spreading misinformation, fomenting division and perpetuating historical injustices. Because of these pitfalls, promoting “AI common sense” among the public is essential: encouraging a basic understanding of AI's benefits, its limits, and the vulnerabilities it might exploit or create. In other words, an understanding of how AI affects one’s well-being.

I like to compare AI to electricity. Most of us don’t have a detailed understanding of how electrons, transformers and grounding wires work, but we all get the basics: We plug something into an outlet, and it powers our devices, appliances, etc. We have a common understanding of basic electrical safety as well. We keep implements and hands away from outlets, and we don’t let electric devices or wires touch water. Though we likely came to a more advanced understanding of these rules in science class, they comprise a general electricity “common sense” most of us learn prior to any formal schooling.

AI common sense would include a general understanding of AI's functions and risks at a basic level, especially as AI capabilities multiply. It’s easy to get lost in conversations around machine learning, neural networks and large language models. Still, everyday users don’t need to be familiar with these terms to be aware of AI’s impact on their daily lives, including the potential dangers.

Here are some ways we can foster AI common sense as the technology becomes more prevalent in our lives:

Recognizing human nature and AI

In today's fast-moving tech landscape, it's easy to be swept away by the allure of AI's capabilities. However, we must recognize that AI systems are created by humans, which means they can carry human biases and limitations with them. These biases can manifest in the data used to train AI, leading to potential discrimination or unfair treatment. For example, AI algorithms used in hiring processes may inadvertently favor certain demographics over others if trained on biased data.

Though learned bias can be pervasive in AI implementations, it isn’t unsolvable. Responsible developers and innovators are working to mitigate inequity in AI systems by approaching the issue from all directions: training models with broad, inclusive and diverse data; testing models for disparate impact across different groups and regularly monitoring them for drift over time; instituting skills-based “blind hiring” for development teams; and combining humans and technology to form a system of checks and balances that can override unintended bias.
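One of the checks listed above, testing models for disparate impact across groups, can be made concrete with a simple comparison of favorable-outcome rates. A toy sketch follows; the function name, the sample data, and the 0.8 rule-of-thumb threshold are illustrative, not taken from any particular vendor's toolkit:

```python
from collections import defaultdict

def disparate_impact_ratio(outcomes, groups):
    """Ratio of favorable-outcome rates: lowest group rate / highest group rate.
    A common rule of thumb flags ratios below 0.8 for review."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        total[group] += 1
        favorable[group] += outcome  # outcome: 1 = favorable, 0 = not
    rates = {g: favorable[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-model decisions for two applicant groups
outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact_ratio(outcomes, groups))  # ≈ 0.67, below the 0.8 rule of thumb
```

A check like this is only a starting point; as the paragraph above notes, it needs to be rerun regularly to catch drift over time.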

While these efforts are being made to reduce bias, acknowledging the potential for imperfect judgment in AI systems remains critical to fostering AI common sense and helping users understand the potential for risks and inaccuracies.

Combating automation bias

Automation bias occurs when people trust automated systems, like AI, over their own judgment, even when the system is wrong. There is a common assumption that machines don’t make careless errors as humans do. We’re inclined to trust a calculator's results, because it’s an objective machine. But AI tools go far beyond addition and subtraction. In fact, AI purists would argue that addition and subtraction are prescriptive or rules-based, whereas AI is predictive in nature. Though it seems minor, the distinction is important because it increases the probability that AI can replicate biases from past data, make false connections, or “hallucinate” information that doesn’t exist but seems reasonable to a reader.

This overreliance on AI can have severe consequences. In health care, a doctor might rely on an AI system to diagnose a patient, despite evidence contradicting the AI's recommendation. By recognizing this bias, we can encourage individuals to question AI systems and seek alternative perspectives, thus reducing the risk of harmful outcomes. Some trustworthy AI platforms have “explainability” features to help mitigate this challenge by providing additional reasons and context for why an AI model produced what it did.
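The “explainability” idea mentioned above can be illustrated in miniature: instead of reporting only a score, report how much each input contributed to it. The tiny linear model and its features below are invented for illustration; real explainability tooling works on far more complex models:

```python
# Toy "explainability": score a loan applicant with a small linear model
# and report each feature's contribution, so a user can see *why* the
# model decided what it did. (Weights and features are hypothetical.)
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 3.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score = {score:.2f}")  # score = 1.30
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {c:+.2f}")  # largest contributions first
```

Even this crude breakdown gives a doctor, loan officer or job applicant something concrete to question, which is exactly the check on automation bias the paragraph above calls for.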

Promoting critical thinking

Encouraging a culture of inquiry and curiosity can help individuals better understand the real-world impact of AI technologies. Enhancing our critical thinking skills and maintaining a healthy skepticism about AI systems is crucial to promoting AI common sense. This means questioning AI-generated results, recognizing possible limitations in the underlying data and being aware of potential biases in the algorithms. The axiom “trust but verify” should guide AI interactions until they are repeatedly proven accurate and effective, especially in high-risk scenarios.

This critical thinking approach can empower individuals to make informed decisions and better understand the limitations of AI systems. For example, users of AI-generated news should be aware of the potential for inaccuracies or misleading information and should verify claims from multiple sources. With generative applications like Dall-E and Midjourney already capable of photorealistic images virtually indistinguishable from reality, we should all be inclined to question incendiary or controversial pictures until we can confirm their veracity with corroborating evidence, like consistent images from multiple angles and trustworthy first-person reporting.   ... ' 

Saturday, March 25, 2023

Large Language Models: A Cognitive and Neuroscience Perspective

Irving does an excellent review and links to much work about LLMs (large language models) and other topics that are now much in the news. Below is an intro; I plan to read all the articles pointed to at the link. A considerable weakness in the current directions? Implications of all this to follow.

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.  By Irving Wladawsky-Berger  March 23, 2023

Large Language Models: A Cognitive and Neuroscience Perspective

Over the past few decades, powerful AI systems have matched or surpassed human levels of performance in a number of tasks such as image and speech recognition, skin cancer classification, breast cancer detection, and highly complex games like Go. These AI breakthroughs have been based on increasingly powerful and inexpensive computing technologies, innovative deep learning (DL) algorithms, and huge amounts of data on almost any subject. More recently, the advent of large language models (LLMs) is taking AI to the next level. And, for many technologists like me, LLMs and their associated chatbots have introduced us to the fascinating world of human language and cognition.

I recently learned the difference between form, communicative intent, meaning, and understanding from “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data,” a 2020 paper by linguistics professors Emily Bender and Alexander Koller. These linguistic concepts helped me understand the authors’ argument that “in contrast to some current hype, meaning cannot be learned from form alone. This means that even large language models such as BERT do not learn meaning; they learn some reflection of meaning into the linguistic form which is very useful in applications.”

A few weeks ago, I came across another interesting paper, “Dissociating Language and Thought in Large Language Models: a Cognitive Perspective,” published in January 2023 by principal authors linguist Kyle Mahowald and cognitive neuroscientist Anna Ivanova, with four additional co-authors. The paper nicely explains how the study of human language, cognition and neuroscience sheds light on the potential capabilities of LLMs and chatbots. Let me briefly discuss what I learned.

“Today’s large language models (LLMs) routinely generate coherent, grammatical and seemingly meaningful paragraphs of text,” said the paper’s abstract. “This achievement has led to speculation that these networks are — or will soon become — thinking machines, capable of performing tasks that require abstract knowledge and reasoning. Here, we review the capabilities of LLMs by considering their performance on two different aspects of language use: formal linguistic competence, which includes knowledge of rules and patterns of a given language, and functional linguistic competence, a host of cognitive abilities required for language understanding and use in the real world.”

The authors point out that there’s a tight relationship between language and thought in humans. When we hear or read a sentence, we typically assume that it was produced by a rational person based on their real world knowledge, critical thinking, and reasoning abilities. We generally view other people’s statements not just as a reflection of their linguistic skills, but as a window into their mind. .... '


Tuesday, February 14, 2023

Can an AI System Exhibit Commonsense Intelligence?

Irving Wladawsky-Berger looks at this quest and links to lots of work in the area; below a quick intro, much more at the link.

Can an AI System Exhibit Commonsense Intelligence?

“One of the fundamental limitations of AI can be characterized as its lack of commonsense intelligence: the ability to reason intuitively about everyday situations and events, which requires rich background knowledge about how the physical and social world works,” wrote University of Washington professor Yejin Choi in “The Curious Case of Commonsense Intelligence,” an essay published in the Spring 2022 issue of Dædalus. “Trivial for humans, acquiring commonsense intelligence has been considered a nearly impossible goal in AI,” added Choi.   ... '   (Continues)  ... 

Sunday, October 09, 2022

Common Sense

Makes much sense.

A Common-Sense Test for AI Could Lead to Smarter Machines

By The Next Web

September 29, 2022

AI with common sense could mean big things for humans, for example, better customer service, better performance in autonomous cars, and better military decisions about life and death.

Today's artificial intelligence (AI) systems are quickly evolving to become humans' new best friend. We now have AIs that can craft award-winning whiskey, write poetry, and help doctors perform extremely precise surgical operations.

But one thing they cannot do—which is, on the surface, far simpler than all those other things—is use common sense. Common sense is different from intelligence in that it is usually something innate and natural to humans that helps them navigate daily life; it cannot really be taught.

From The Next Web

View Full Article  

Monday, September 19, 2022

One Man's Dream of Fusing A.I. With Common Sense

Back to the tough problem

ACM NEWS

One Man's Dream of Fusing A.I. With Common Sense    By The New York Times, August 29, 2022

David Ferrucci, who led the team that built IBM's famed Watson computer, was elated when it beat the best-ever human "Jeopardy!" players in 2011, in a televised triumph for artificial intelligence.

But Dr. Ferrucci understood Watson's limitations. The system could mine oceans of text, identify word patterns and predict likely answers at lightning speed. Yet the technology had no semblance of understanding, no human-style common sense, no path of reasoning to explain why it reached a decision.

Eleven years later, despite enormous advances, the most powerful A.I. systems still have those limitations.  ...   Full Text 

David Ferrucci sees the work he did on IBM’s Watson computer as a “small part” of A.I.’s potential. ... 

From The New York Times

Saturday, August 27, 2022

On Solving the AI Common Sense Problem

Not yet, when?   What will it take?   Some thoughts, but not enough.  

The Common Sense in Context Problem 

By TechTalks, August 9, 2022

Ronald Jay Brachman is director of the Jacobs Technion-Cornell Institute at Cornell Tech and co-author of the book, Machines Like Us.

In recent years, deep learning has taken great strides in some of the most challenging areas of artificial intelligence (AI); however, some problems remain unsolved. Deep-learning systems are poor at handling novel situations, they require enormous amounts of data to train, and they sometimes make weird mistakes. Some scientists believe these problems will be solved by creating larger neural networks trained on bigger datasets. Others think that what the field of AI needs is a little bit of human "common sense."

In an interview, Brachman discusses what common sense is and is not, why machines do not have it, and how "knowledge representation" can steer the AI community in the right direction.

From TechTalks

Friday, July 15, 2022

LeCun on Vision for the next Generation of AI

Bold view? Is common sense near? More detail and a link to the talk below.

Yann LeCun's Bold New Vision for the Future of AI  By MIT Technology Review, June 28, 2022

Yann LeCun, chief scientist at Meta's artificial intelligence (AI) lab and one of the world's most influential AI researchers, has a bold new vision for the next generation of AI. In a draft document shared with MIT Technology Review, LeCun sketches out an approach that he thinks will one day give machines the common sense they need to navigate the world.

"Getting machines to behave like humans and animals has been the quest of my life," he says. LeCun thinks that animal brains run a kind of simulation of the world, which he calls a world model.

From MIT Technology Review

View Full Article (May Require Paid Registration)  

"This idea that we're going to just scale up the current large language models and eventually human-level AI will emerge—I don't believe this at all, not for one second." -Yann LeCun

Monday, November 01, 2021

More Need for Common Sense

Good piece in Wired on the need for 'common sense' reasoning to bring human intelligence to AI. Even a mention of CYC, and the years of work that have gone into including human ideas. But I have not heard of any cases where it, or its successor Lucid, has suddenly sprung out with human common sense insight. Some say that CYC just produced a very big, very brittle knowledge graph. It just found lots and lots of patterns, sometimes useful, often not. It's often not about the fact that patterns are there, but how humans can assemble them in useful ways.

How to Teach Artificial Intelligence Some Common Sense
We’ve spent years feeding neural nets vast amounts of data, teaching them to think like human brains. They’re crazy-smart, but they have absolutely no common sense. What if we’ve been doing it all wrong?

Five years ago, the coders at DeepMind, a London-based artificial intelligence company, watched excitedly as an AI taught itself to play a classic arcade game. They’d used the hot technique of the day, deep learning, on a seemingly whimsical task: mastering Breakout,1 the Atari game in which you bounce a ball at a wall of bricks, trying to make each one vanish.

1 Steve Jobs was working at Atari when he was commissioned to create 1976’s Breakout, a job no other engineer wanted. He roped his friend Steve Wozniak, then at Hewlett-­Packard, into helping him.

Deep learning is self-education for machines; you feed an AI huge amounts of data, and eventually it begins to discern patterns all by itself. In this case, the data was the activity on the screen—blocky pixels representing the bricks, the ball, and the player’s paddle. The DeepMind AI, a so-called neural network made up of layered algorithms, wasn’t programmed with any knowledge about how Breakout works, its rules, its goals, or even how to play it. The coders just let the neural net examine the results of each action, each bounce of the ball. Where would it lead?

To some very impressive skills, it turns out. During the first few games, the AI flailed around. But after playing a few hundred times, it had begun accurately bouncing the ball. By the 600th game, the neural net was using a more expert move employed by human Breakout players, chipping through an entire column of bricks and setting the ball bouncing merrily along the top of the wall.  ... " 
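The learn-from-each-bounce loop described above can be sketched in miniature. DeepMind's actual system was a deep Q-network over raw pixels; the toy below substitutes a tiny table and a five-cell corridor, but the update rule, learning only from the observed result of each action, is the same idea:

```python
import random

# Toy stand-in for Breakout: a 5-cell corridor where the agent must walk
# right to reach a reward. Tabular Q-learning replaces DeepMind's deep
# network; nothing about the goal or rules is programmed in advance.
N_STATES, ACTIONS = 5, [0, 1]  # 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.3

random.seed(0)
for episode in range(300):
    s, steps = 0, 0
    while s < N_STATES - 1 and steps < 100:
        steps += 1
        # Mostly act greedily, sometimes explore at random
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Learn only from the observed outcome of each action
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])  # learned values grow toward the rewarding end
```

As in the Breakout story, early episodes flail, and competence emerges only after many repetitions of the same observe-act-update loop.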

Monday, October 04, 2021

Common Sense is Hard

A key consideration .... 

An AI expert explains why it’s hard to give computers something you take for granted: Common sense

August 17, 2021, 8:09am EDT. Author: Mayank Kejriwal

Research Assistant Professor of Industrial & Systems Engineering, University of Southern California

Disclosure statement: Mayank Kejriwal receives funding from DARPA.

Imagine you’re having friends over for lunch and plan to order a pepperoni pizza. You recall Amy mentioning that Susie had stopped eating meat. You try calling Susie, but when she doesn’t pick up, you decide to play it safe and just order a margherita pizza instead.

People take for granted the ability to deal with situations like these on a regular basis. In reality, in accomplishing these feats, humans are relying on not one but a powerful set of universal abilities known as common sense.

As an artificial intelligence researcher, my work is part of a broad effort to give computers a semblance of common sense. It’s an extremely challenging effort.

Quick – define common sense

Despite being both universal and essential to how humans understand the world around them and learn, common sense has defied a single precise definition. G. K. Chesterton, an English philosopher and theologian, famously wrote at the turn of the 20th century that “common sense is a wild thing, savage, and beyond rules.” Modern definitions today agree that, at minimum, it is a natural, rather than formally taught, human ability that allows people to navigate daily life.

Common sense is unusually broad and includes not only social abilities, like managing expectations and reasoning about other people’s emotions, but also a naive sense of physics, such as knowing that a heavy rock cannot be safely placed on a flimsy plastic table. Naive, because people know such things despite not consciously working through physics equations.

Common sense also includes background knowledge of abstract notions, such as time, space and events. This knowledge allows people to plan, estimate and organize without having to be too exact.

Common sense is hard to compute  ... ' 

Monday, December 14, 2020

Quest for More Common Sense: Less Thinking?

Nicely put piece that connects with the current state of the technology, and shows some of the challenges. Have been involved in a number of attempts at including common sense in reasoning, without general success. Back to our need for strong context-based reasoning made in the last post. Most succinctly, it's knowledge and context with a causal engine.

The quest for artificial common sense   By Samuel Flender in TowardsDataScience

On July 19th, a blog post titled ‘Feeling unproductive? Maybe you should stop overthinking.’ appeared online. The 1000-word self-help article explains that overthinking is the enemy of our creativity, and advises us to be more in the moment:

“In order to get something done, maybe we need to think less. Seems counter-intuitive, but I believe sometimes our thoughts can get in the way of the creative process. We can work better at times when we ‘tune out’ the external world and focus on what’s in front of us.”

The post was written by GPT-3, OpenAI's massive 175-billion-parameter neural network trained on nearly half a trillion words. UC Berkeley student Liam Porr merely wrote the title, and let the algorithm fill in the text. A 'fun experiment', to see whether the AI could fool people or not. Indeed, GPT-3 hit a nerve: the post was up-voted to the top of Hacker News.

There’s a paradox, then, with today’s AI. While some of GPT-3’s writings arguably meet the Turing test criterion — convincing people that it is human — it fails spectacularly at the simplest tasks. AI researcher Gary Marcus asked GPT-2, the precursor to GPT-3, to complete the following sentence: ... ' 

Sunday, October 25, 2020

Learning Common Sense from Animals

The common sense problem.   Intriguing view.  Not enough details, but has links to related academic papers.

Researchers suggest AI can learn common sense from animals   By Khari Johnson (@kharijohnson), October 25, 2020, in VentureBeat

AI researchers developing reinforcement learning agents could learn a lot from animals. That’s according to recent analysis by Google’s DeepMind, Imperial College London, and University of Cambridge researchers assessing AI and non-human animals.

In a decades-long venture to advance machine intelligence, the AI research community has often looked to neuroscience and behavioral science for inspiration and to better understand how intelligence is formed. But this effort has focused primarily on human intelligence, specifically that of babies and children.

“This is especially true in a reinforcement learning context, where, thanks to progress in deep learning, it is now possible to bring the methods of comparative cognition directly to bear,” the researchers’ paper reads. “Animal cognition supplies a compendium of well-understood, nonlinguistic, intelligent behavior; it suggests experimental methods for evaluation and benchmarking; and it can guide environment and task design.”

DeepMind introduced some of the first forms of AI that combine deep learning and reinforcement learning, like the deep Q-network (DQN) algorithm, a system that played numerous Atari games at superhuman levels. AlphaGo and AlphaZero also used deep learning and reinforcement learning to train AI to beat a human Go champion and achieve other feats. More recently, DeepMind produced AI that automatically generates reinforcement learning algorithms.  ... "

Friday, October 23, 2020

Towards Artificial Common Sense

The key part of AI we don't know how to do yet. Good overview of the current state and directions. What most all of us consider the important starting point for useful intelligence. It often also includes the ability to explain why and how it came to a conclusion.

Seeking Artificial Common Sense   By Don Monroe  in CACM

Communications of the ACM, November 2020, Vol. 63 No. 11, Pages 14-16, 10.1145/3422588

Although artificial intelligence (AI) has made great strides in recent years, it still struggles to provide useful guidance about unstructured events in the physical or social world. In short, computer programs lack common sense.

"Think of it as the tens of millions of rules of thumb about how the world works that are almost never explicitly communicated," said Doug Lenat of Cycorp, in Austin, TX. Beyond these implicit rules, though, commonsense systems need to make proper deductions from them and from other, explicit statements, he said. "If you are unable to do logical reasoning, then you don't have common sense."

This combination is still largely unrealized; in spite of impressive recent successes of machine learning in extracting patterns from massive data sets of speech and images, they often fail in ways that reveal their shallow "understanding." Nonetheless, many researchers suspect hybrid systems that combine statistical techniques with more formal methods could approach common sense.

Importantly, such systems could also genuinely describe how they came to a conclusion, creating true "explainable AI" (see "AI, Explain Yourself," Communications 61, 11, Nov. 2018).   ... " 

Saturday, June 06, 2020

It's about Common Sense

Have repeated this many times; it was clear in the late 80s when we built systems that could solve a problem, but not implement it among decision makers. It matters for general AI, as well as for installation of any system that interacts with humans, whether directly or indirectly through results. Looking for more out of the Allen Institute for AI. Here a short introduction to the problem, again:

ACM NEWS
Giving AI Common Sense  By Bennie Mols

Senior Research Manager Yejin Choi says the Allen Institute for AI is teaching neural networks representations of common-sense knowledge and reasoning.

At the Allen Institute for AI (https://allenai.org/) in Seattle, computer scientist Yejin Choi is leading project Mosaic, which aims to teach machines common-sense knowledge and reasoning, one of the hardest and longest-standing challenges in the field of artificial intelligence (AI).

Choi, senior research manager, leads the project, which started in 2018 and recently delivered its first results. Choi is also an associate professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington in Seattle.

What is your definition of common sense?

Common sense is about the basic level of practical knowledge and reasoning that concerns everyday situations and events. This is knowledge that is commonly shared among most people. Most 10-year-old kids possess it, but it is very hard for machines. For example: don't leave the door of the fridge open too long, because the food will go bad. Or: if I drop my mug of coffee on the floor, the floor will get wet and the mug might break.

Common sense knowledge is not just about the physical world, but also about the social world. "If Kate smiles, she is probably happy."  .... '

Saturday, May 30, 2020

Why is AI so Confused by Language?

From the Elemental Blog, well worth reading through there:

Why is AI so confused by language? It’s all about mental models.  By David Ferrucci

In my last post, I shared some telling examples where computers failed to understand what they read. The errors they made were bizarre and fundamental. But why? Computers are clearly missing something, but can we more clearly pin down what?

Let’s examine one specific error that sheds some light on the situation. My team ran an experiment where we took the same first-grade story I discussed last time, but truncated the final sentence:

Fernando and Zoey go to a plant sale. They buy mint plants. They like the minty smell of the leaves.

Fernando puts his plant near a sunny window. Zoey puts her plant in her bedroom. Fernando’s plant looks green and healthy after a few days. But Zoey’s plant has some brown leaves.

“Your plant needs more light,” Fernando says.

Zoey moves her plant to a sunny window. Soon, ___________.

[adapted from ReadWorks.org]

Then we asked workers on Amazon Mechanical Turk to fill in the blank. Here’s what the workers suggested:

Saturday, May 09, 2020

New Tries at Common Sense Reasoning

We saw it at the very beginning: common sense reasoning is the key part of creating the most useful and powerful kinds of AI. There have been many attempts to do this; we tested a number, but they did not pass our tests. The excerpt from a non-technical article in Wired below describes the challenge. Looking forward to seeing more.

Watson's Creator Wants to Teach AI a New Trick: Common Sense
David Ferrucci built a computer that mastered Jeopardy. Since then, he's been attacking a more challenging task.   .... '

Ferrucci and his company, Elemental Cognition, hope to fix a huge blind spot in modern AI by teaching machines to acquire and apply everyday knowledge that lets humans communicate, reason, and navigate our surroundings. We use common sense reasoning so often, and so easily, that we barely notice it.

Ernest Davis, a professor at NYU who has been studying the problem for decades, says common sense is essential for advancing everything from language understanding to robotics. It is “central to most of what we want to do with AI,” he says.  .... "

Also see Elemental's blog.

Saturday, February 15, 2020

Machines Understanding Language

Have seen a number of claims recently about how much machine understanding of human language has advanced. Here is a contrary view. It's all back to the basics of common sense.

Artificial Intelligence / Machine Learning
AI still doesn’t have the common sense to understand human language
Natural-language processing has taken great strides recently—but how much does AI really understand of what it reads? Less than we thought.  ... 
by Karen Hao

Monday, December 02, 2019

Machines Perceiving Common Sense Physics

Having machines understand common sense 'physics' may well be a way to train them to do general AI. How humans learn is a potential model: infant cognition, and the interpretation of surprise as a measure of learning, a signal measure for understanding knowledge. Here pointers to research at MIT.

Helping machines perceive some laws of physics
Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI.

Rob Matheson | MIT News Office

Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when they do something unexpected, such as disappearing in a sleight-of-hand magic trick.

Now MIT researchers have designed a model that demonstrates an understanding of some basic “intuitive physics” about how objects should behave. The model could be used to help build smarter artificial intelligence and, in turn, provide information to help scientists understand infant cognition.

The model, called ADEPT, observes objects moving around a scene and makes predictions about how the objects should behave, based on their underlying physics. While tracking the objects, the model outputs a signal at each video frame that correlates to a level of “surprise” — the bigger the signal, the greater the surprise. If an object ever dramatically mismatches the model’s predictions — by, say, vanishing or teleporting across a scene — its surprise levels will spike.
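The surprise signal described above amounts to prediction error: predict where an object should be, compare with what is observed, and spike when they diverge. A minimal sketch of that idea follows; ADEPT itself uses learned object dynamics over video, not this toy constant-velocity predictor:

```python
def predict_next(position, velocity):
    # Naive physics: objects keep moving at constant velocity
    return position + velocity

def surprise(observed, predicted):
    # Larger prediction error = larger surprise signal
    return abs(observed - predicted)

# An object moving right at 1 unit per frame...
surprises = []
pos, vel = 0.0, 1.0
for obs in [1.0, 2.0, 3.0, 9.0]:  # ...until it "teleports" on the last frame
    pred = predict_next(pos, vel)
    surprises.append(surprise(obs, pred))
    pos = obs  # keep tracking from the observed position

print(surprises)  # [0.0, 0.0, 0.0, 5.0] -- the spike marks the implausible event
```

The flat readings followed by a spike mirror the behavior described in the article: low surprise for plausible motion, a jump when an object vanishes or teleports.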

In response to videos showing objects moving in physically plausible and implausible ways, the model registered levels of surprise that matched levels reported by humans who had watched the same videos.  

“By the time infants are 3 months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport,” says first author Kevin A. Smith, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and a member of the Center for Brains, Minds, and Machines (CBMM). “We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes.”  ..... "

Wednesday, October 09, 2019

Causation to Provide the Why

Basic causation is a great start. Why did this happen? We are doing it all the time. It's one of our basic knowledge processing capabilities that lead to learning. We can figure out the answer by observation and by combining observations to build rules of operation. Or we can be taught specific rules, or even imprecise rules of thumb, to help us process knowledge. Combining things we have observed or not. They must include things like causation, and space and time relationships. It is this kind of knowledge we need to do general AI. Not just more data. It's more than just unstructured data. It's about combining all the learning experiences we have into an interacting, data-rich architecture that we can use. Like the direction of the below:

An AI Pioneer Wants His Algorithms to Understand the 'Why'
  Will Knight, in Wired

Yoshua Bengio, a researcher at the University of Montreal in Canada who is co-recipient of the 2018 ACM A.M. Turing Award for contributions to the development of deep learning, thinks artificial intelligence will not realize its full potential until it can move beyond pattern recognition and learn more about cause and effect, which would make existing AI systems smarter and more efficient. A robot that understands dropping things causes them to break, for example, would not need to toss dozens of vases onto the floor to see what happens to them. Bengio is developing a version of deep learning that can recognize simple cause-and-effect relationships. His team used a dataset that maps causal relationships between real-world phenomena in terms of probabilities. The resulting algorithm essentially forms a hypothesis about which variables are causally related, and then tests how changes to different variables fit the theory.  ... " 
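The intuition behind the excerpt — that cause and effect can be distinguished by what happens under intervention — can be shown with a toy structural model. This is not Bengio's algorithm, just a minimal sketch: in a world where X causes Y, forcing X to a value shifts Y, but forcing Y to a value leaves X untouched.

```python
import random

def generate(n, do_x=None, do_y=None):
    """Toy structural model where X causes Y (Y = 2*X + noise).

    do_x / do_y implement interventions that override a variable's
    normal mechanism, in the spirit of Pearl's do-operator.
    """
    data = []
    for _ in range(n):
        x = random.gauss(0, 1) if do_x is None else do_x
        y = 2 * x + random.gauss(0, 0.1) if do_y is None else do_y
        data.append((x, y))
    return data

def mean(values):
    return sum(values) / len(values)

random.seed(0)
# Intervening on X propagates downstream: Y moves to about 2 * 3 = 6.
y_after_do_x = mean([y for _, y in generate(1000, do_x=3.0)])
# Intervening on Y does nothing to X: X stays near its mean of 0.
x_after_do_y = mean([x for x, _ in generate(1000, do_y=3.0)])
```

An algorithm like the one described can exploit exactly this asymmetry: only the correct causal hypothesis stays consistent when variables are changed.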

Friday, September 13, 2019

Build AI we can Trust

A long-time question. It's the old problem of 'common sense' reasoning, and usually a step above common sense .... maybe call it 'light reasoning', as the article below suggests. Humans do that kind of reasoning all the time, say the kind you can do in your head, or that needs a few lines on a piece of paper. We sometimes do one-level analogies, but rarely more. Surely we should expect that capability of a machine. And often a question requires a determination of risk in context to produce a useful and correct answer.

How to Build Artificial Intelligence We Can Trust    By The New York Times

Artificial intelligence has a trust problem. We are relying on A.I. more and more, but it hasn't yet earned our confidence.

Tesla cars driving in Autopilot mode, for example, have a troubling history of crashing into stopped vehicles. Amazon's facial recognition system works great much of the time, but when asked to compare the faces of all 535 members of Congress with 25,000 public arrest photos, it found 28 matches, when in reality there were none. A computer program designed to vet job applicants for Amazon was discovered to systematically discriminate against women. Every month new weaknesses in A.I. are uncovered.

The problem is not that today's A.I. needs to get better at what it does. The problem is that today's A.I. needs to try to do something completely different.

In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality.  ....  "

Saturday, January 12, 2019

Speech to Dialog

Been lately experimenting with the resolution of dialogue using AI-style analysis. It is still surprising how primitive what can be done today remains. What are the key aspects of dialogue understanding and resolution in context? What elements of common-sense adaptation to implied goals?

In the Data Driven Investor:  

Making the Leap from Speech to Dialogue: The Challenge for Human to Machine Communication

Daily Wisdom
Robots are everywhere and doing virtually everything. We have even begun conversing with them in situations that are beginning to resemble interpersonal communication. Right now these spoken dialogue systems (SDS) tend to be limited to a “command-based” approach, which can be seen with a number of recently introduced commercial implementations, like Apple’s Siri for the iOS, Amazon’s Echo/Alexa, and the social robot Jibo.

The command-based approach to SDS design works reasonably well, as it predetermines much of the semantic context, communicative structure, and social variables by keeping conversational interactions within manageable boundaries. Yet, the development of more robust SDS will rely not only on advancements in engineering, but will also require better understanding and modeling of the actual mechanisms and operations of human-to-human communicative behaviors.... "