
Thursday, June 08, 2023

Fostering AI Common Sense: The need for Critical Thinking and Healthy Skepticism

Very important, from SAS.

Fostering AI common sense: The need for critical thinking and healthy skepticism

by REGGIE TOWNSEND on MAY 25, 2023 

As AI rapidly advances, I’ve been fortunate to have an active role in helping to guide a responsible path forward when it comes to technology’s impact on our daily lives. Currently, this role includes serving as Vice President for the SAS Data Ethics Practice, as an EqualAI board member and as a member of the National Artificial Intelligence Advisory Committee (NAIAC).

The acceleration of AI development and application has incredible potential for supercharging our decision making and democratizing access to technology. However, it also carries the risks of spreading misinformation, fomenting division and perpetuating historical injustices. Because of these pitfalls, promoting “AI common sense” among the public is essential: encouraging a basic understanding of AI’s benefits and limits, and of the vulnerabilities it might exploit or create. In other words, an understanding of how AI affects one’s well-being.

I like to compare AI to electricity. Most of us don’t have a detailed understanding of how electrons, transformers and grounding wires work, but we all get the basics: We plug something into an outlet, and it powers our devices, appliances, etc. We have a common understanding of basic electrical safety as well. We keep implements and hands away from outlets, and we don’t let electric devices or wires touch water. Though we likely came to a more advanced understanding of these rules in science class, they comprise a general electricity “common sense” most of us learn prior to any formal schooling.

AI common sense would include a basic understanding of AI’s functions and risks, especially as AI capabilities multiply. It’s easy to get lost in conversations around machine learning, neural networks and large language models. Still, everyday users don’t need to be familiar with these terms to be aware of AI’s impact on their daily lives, including the potential dangers.

Here are some ways we can foster AI common sense as the technology becomes more prevalent in our lives:

Recognizing human nature and AI

In today's fast-moving tech landscape, it's easy to be swept away by the allure of AI's capabilities. However, we must recognize that AI systems are created by humans, which means they can carry human biases and limitations with them. These biases can manifest in the data used to train AI, leading to potential discrimination or unfair treatment. For example, AI algorithms used in hiring processes may inadvertently favor certain demographics over others if trained on biased data.

Though learned bias can be pervasive in AI implementations, it isn’t unsolvable. Responsible developers and innovators are working to mitigate inequity in AI systems by approaching the issue from all directions: training models with broad, inclusive and diverse data; testing models for disparate impact across different groups and regularly monitoring them for drift over time; instituting skills-based “blind hiring” for development teams; and combining humans and technology to form a system of checks and balances that can override unintended bias.
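
As a rough illustration of what testing for disparate impact can look like in practice, here is a minimal Python sketch. It is an illustrative addition, not a description of any particular vendor's tooling: the hiring-model data, column names and the four-fifths (80%) flag threshold are all assumptions made for the example.

```python
# Minimal sketch of a disparate impact check; illustrative only.
# Assumes binary predictions and a demographic "group" column; the
# four-fifths (80%) threshold is a common rule of thumb, not a rule
# the article itself prescribes.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Ratio of the lowest group's positive-prediction rate to the highest's."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.min() / rates.max()

# Hypothetical hiring-model output: 1 = recommended for interview.
preds = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "recommended": [1, 1, 0, 1, 0, 0],
})

ratio = disparate_impact_ratio(preds, "group", "recommended")
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Possible disparate impact: ratio = {ratio:.2f}")
```

A check like this is only a first-pass signal; in practice, teams pair it with the ongoing drift monitoring and human review described above.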

Even as these efforts advance, acknowledging the potential for imperfect judgment in AI systems remains critical to fostering AI common sense and helping users understand the risks and inaccuracies that remain.

Combating automation bias

Automation bias occurs when people trust automated systems, like AI, over their own judgment, even when the system is wrong. There is a common assumption that machines don’t make careless errors the way humans do. We’re inclined to trust a calculator’s results because it’s an objective machine. But AI tools go far beyond addition and subtraction. In fact, AI purists would argue that addition and subtraction are prescriptive, or rules-based, whereas AI is predictive in nature. Though it seems minor, the distinction is important: because AI is predictive, it can replicate biases from past data, make false connections, or “hallucinate” information that doesn’t exist but seems reasonable to a reader.
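
To make the prescriptive-versus-predictive distinction concrete, here is a small Python sketch, an illustration added for this point rather than anything from the article: a rules-based calculation is exact by construction, while a model trained on past data returns an estimate that can be confidently wrong.

```python
# Illustrative contrast between rules-based and predictive computation;
# the toy model below is a hypothetical stand-in, not any real system.
from sklearn.linear_model import LogisticRegression

# Prescriptive: a calculator applies a fixed rule, so the answer is
# exact by definition.
def add(a: float, b: float) -> float:
    return a + b

assert add(2, 2) == 4  # always true, by the rule itself

# Predictive: a model generalizes from past data, so its output is an
# estimate that can absorb patterns (or biases) in the training data
# and still be wrong on a new case.
X_train = [[1.0], [2.0], [3.0], [4.0]]
y_train = [0, 0, 1, 1]
model = LogisticRegression().fit(X_train, y_train)

prob = model.predict_proba([[2.5]])[0][1]
print(f"Estimated probability of class 1: {prob:.2f}")  # an estimate, not a fact
```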

This overreliance on AI can have severe consequences. In health care, a doctor might rely on an AI system to diagnose a patient, despite evidence contradicting the AI's recommendation. By recognizing this bias, we can encourage individuals to question AI systems and seek alternative perspectives, thus reducing the risk of harmful outcomes. Some trustworthy AI platforms have “explainability” features to help mitigate this challenge by providing additional reasons and context for why an AI model produced what it did.
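
As one hedged sketch of what an explainability feature might surface: for a linear model, each feature’s contribution to a prediction’s log-odds is simply its coefficient times its value, which gives a reader some context for why the model produced what it did. The clinical feature names and data below are hypothetical, chosen only to echo the health care example.

```python
# Minimal sketch of per-prediction explainability for a linear model.
# For logistic regression, feature i contributes coef_i * x_i to the
# log-odds, so features can be ranked by how much they pushed one
# prediction. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["symptom_score", "age", "lab_result"]
X = np.array([[3.0, 50.0, 1.2],
              [1.0, 30.0, 0.4],
              [4.0, 65.0, 2.0],
              [0.5, 25.0, 0.3]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

patient = np.array([2.5, 60.0, 1.5])
contributions = model.coef_[0] * patient  # per-feature push on the log-odds
for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.2f}")
```

Richer explainability methods exist for nonlinear models, but even a ranking like this gives a doctor something concrete to question before deferring to the system.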

Promoting critical thinking

Encouraging a culture of inquiry and curiosity can help individuals better understand the real-world impact of AI technologies. Enhancing our critical thinking skills and maintaining a healthy skepticism about AI systems are crucial to promoting AI common sense. This means questioning AI-generated results, recognizing possible limitations in the underlying data and being aware of potential biases in the algorithms. The axiom “trust but verify” should guide AI interactions until they are repeatedly proven accurate and effective, especially in high-risk scenarios.

This critical thinking approach can empower individuals to make informed decisions and better understand the limitations of AI systems. For example, users of AI-generated news should be aware of the potential for inaccuracies or misleading information and should verify claims from multiple sources. With generative applications like DALL-E and Midjourney already capable of producing photorealistic images virtually indistinguishable from reality, we should all be inclined to question incendiary or controversial pictures until we can confirm their veracity with corroborating evidence, like consistent images from multiple angles and trustworthy first-person reporting. ...
