We have known this for a long time, but only now that approaches like machine learning have produced some surprising results are we ready for more contextually useful advances.
Ushering in the Third Wave of AI, by Tolga Kurtoglu, Head of Global Research at Xerox, in TechRadar
Breaking down the language barrier between AI and humans
Today, artificial intelligence (AI) helps you shop, suggests what music to listen to and what shows to watch, connects you with friends on social media, and even drives your car.
As more companies focus their efforts on AI-based solutions, 2020 is shaping up to be a turning point as we begin to witness the third wave of AI, when AI systems not only learn and reason as they encounter new tasks and situations, but can also explain their decision-making.
Where We Are Now
The first wave of AI focused on enabling reasoning over narrowly defined problems, but lacked any learning capability and handled uncertainty poorly. Financial products like TurboTax and QuickBooks, for example, can take information from a situation where rules have previously been defined and work through it to achieve a desired outcome. However, they are unable to operate beyond those predefined rules.
The second wave, which we are in the midst of right now, is AI with nuanced classification and prediction capabilities, but no contextual capability and minimal reasoning capability. Major machine-learning-based AI platforms like IBM's Watson and Salesforce's Einstein are good examples: they can synthesize large amounts of data to provide insights and answers, but cannot truly understand or explain how they arrived at those answers.
The Third Wave: AI That Understands and Reasons in Context
So how do we progress from the second wave to the third wave?
The starting point is to make AI explainable and more transparent. AI algorithms will interact with humans across an ever-growing range of industries, in our homes, our cars, and our clothing. If they continue to evolve as black boxes, concerns about transparency, and eventually about trust, will only grow. To avoid these concerns, these systems should be transparent enough to explain their work: the assumptions they made, the different options they considered, and ultimately why they arrived at the answer they provided.
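The article does not prescribe a technique, but one common, minimal form of the transparency it describes is a model that reports not just an answer but each factor's contribution to it. The sketch below is purely illustrative (the loan-approval feature names and weights are invented for the example), using a linear model where such a breakdown is trivial to compute:

```python
# Illustrative sketch only: a self-explaining prediction. The weights encode
# the model's "assumptions"; the per-feature contributions show why it
# produced the answer it did. All names and numbers are hypothetical.

def predict_with_explanation(weights, bias, features):
    """Return a prediction plus each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-approval model.
weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
score, why = predict_with_explanation(
    weights, bias=0.1,
    features={"income": 5.0, "debt": 2.0, "years_employed": 3.0})

print(f"score = {score:.2f}")          # the answer
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")       # why: each factor's contribution
```

Real second-wave systems (deep networks) are far harder to decompose this way, which is exactly the gap the third wave is meant to close.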
Right now, there is an opportunity to establish trust between the system and the human. Explainability and transparency are the starting point for something much broader: collaboration between humans and computers to solve the world's most complex problems.