Building Trustable AI at Kellogg
Artificial intelligence is here to stay. Machines are getting smarter, faster, and are poised to play ever greater roles in our healthcare, our education, our decision-making, our businesses, our news, and our governments.
Humans stand to gain from AI in a number of ways. But AI also has the potential to replicate or exacerbate long-standing biases. As machine learning has matured beyond simpler task-based algorithms, it has come to rely more heavily on deep-learning architectures that pick up on relationships that no human could see or predict. These algorithms can be extraordinarily powerful, but they are also “black boxes” where the inputs and the outputs may be visible, but how exactly the two are related is not transparent.
Given this complexity, bias can creep into an algorithm’s outputs without its designers intending it, or even knowing it is there. So perhaps it is unsurprising that many people are wary of the power vested in machine-learning algorithms.
Inhi Cho Suh, General Manager, IBM Watson Customer Engagement, and Florian Zettelmeyer, a professor of marketing at Kellogg and chair of the school’s marketing department, are both invested in understanding how deep-learning algorithms can identify, account for, and reduce bias.
The pair discuss the social and ethical challenges machine learning poses, as well as the more general question of how developers and companies can go about building AI that is transparent, fair, and socially responsible.
This interview has been edited for length and clarity.