
ACM NEWS

'Let a Thousand AIs Bloom'
By Bennie Mols

Commissioned by CACM Staff, May 4, 2023

Data science and philosophy professor David Danks.

"Computer science students don’t need to become ethicists, and philosophy students don’t need being able to write code, but we need to teach them how to collaborate and understand each other."

Credit: DavidDanks.org

The field of artificial intelligence (AI) has been dominated by the deep learning approach in recent years, and there is some concern that this focus may be limiting progress in the field. David Danks, a professor of data science and philosophy at the University of California, San Diego, advocates for more diversity in AI research or, as he puts it, "let a thousand AIs bloom."

Bennie Mols interviewed Danks at the 2023 AAAS Annual Meeting in Washington, D.C.

What has led you to the conclusion that there is too little diversity in the AI field?

We have seen enormous advances in the ability of AI, and in particular deep learning, to predict, classify, and generate what we might think of as the surface features of the world. These successes rest on two fundamental preconditions that don't always hold: having a measurement of what matters, and being able to define what counts as success. Deep learning can do amazing things, but what worries me is that it crowds everything else out.
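(To make those two preconditions concrete, here is a minimal sketch, not Danks's own example: a tiny logistic-regression classifier in Python in which (1) what matters has already been reduced to measured numbers, and (2) "success" is nothing more than a single loss value that training drives down. The data and names are invented for illustration.)

import numpy as np

rng = np.random.default_rng(0)

# (1) Measurement: the world arrives pre-reduced to a numeric feature matrix.
X = rng.normal(size=(200, 2))              # measured features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # a target we know how to measure

w, b = np.zeros(2), 0.0

def loss(w, b):
    # (2) Success: defined entirely by this one number (cross-entropy).
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

for _ in range(500):                       # training = driving the loss down
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

print(f"final loss: {loss(w, b):.3f}")     # 'success' is just a small number

(When either precondition fails, when what matters is not measured, or success resists a single number, this whole recipe has nothing to optimize.)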

Such as…

We have struggled to come up with AI systems that can discover the underlying structure of the world: things that show up in the data but are not defined by the data. One reason we are struggling to develop more trustworthy and value-centered AI is that trust and values are fundamentally not things we know how to express numerically.

Can you give an example?

It is difficult to figure out what counts as success for a self-driving car. Sure, we want to drive safely, but what counts as driving safely is very context-dependent. It depends on social norms, it depends on the weather, it depends on situations that suddenly arise on the road. As soon as there is an unusual context, self-driving cars can't reason their way out the way a human driver can.
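(A toy illustration of that context-dependence, my own and purely hypothetical: a rule that encodes "safe" as a fixed number handles the contexts its author anticipated and silently fails in the one they did not.)

def is_safe_speed(speed_kmh: float, context: dict) -> bool:
    limit = 100.0                      # 'safe' baked in as a fixed threshold
    if context.get("weather") == "rain":
        limit *= 0.7                   # hand-written exception for rain
    return speed_kmh <= limit

print(is_safe_speed(95.0, {"weather": "rain"}))  # False: rain was anticipated
print(is_safe_speed(95.0, {"weather": "snow"}))  # True: snow was not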

What is your proposal for increasing the diversity of AI research?

First, people need to realize that there are problems we are not considering because of the focus on deep learning. Deep learning is not good at symbolic reasoning, not good at planning, and not good at reconciling conflicts between multiple agents that have different values. We need to let a thousand AIs bloom because those problems demand different frameworks.
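(As a contrast with deep learning, here is a minimal sketch of one of the other frameworks Danks names: symbolic reasoning by forward chaining over explicit if-then rules. The rules and facts are invented for illustration; the point is that conclusions follow from stated premises rather than from a numeric loss.)

rules = [
    ({"rain", "night"}, "low_visibility"),
    ({"low_visibility"}, "reduce_speed"),
    ({"reduce_speed", "highway"}, "increase_following_distance"),
]

def forward_chain(facts: set) -> set:
    derived = set(facts)
    changed = True
    while changed:                     # apply rules until nothing new follows
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain({"rain", "night", "highway"})))
# ['highway', 'increase_following_distance', 'low_visibility',
#  'night', 'rain', 'reduce_speed']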

Second, funding agencies in particular should be supporting the work that companies don't want to support. Right now, most companies are putting most of their effort into deep learning.

Third, I also think that there is an enormous opportunity for entrepreneurs to identify problems that deep learning is not going to solve and come up with new methods and new systems. If I were an entrepreneur, I would stay far away from deep learning, because I am not going to compete with the big tech companies.

How do you, as a philosopher, look at the recent hype around ChatGPT and similar large language models?

For me, the most interesting aspect is that ChatGPT is calling into question how deep a lot of our human conversation actually is. So much of human language seems to be highly predictable or ritualized; anybody who has taught classes for some years knows this. There are times when you just walk into the classroom and start talking on autopilot. I came to realize that my own speech is not nearly as profound as I might have thought it was.

Does ChatGPT have consequences for your way of teaching?

I put all the assignments for my classes through ChatGPT to see how it would perform, and it did very badly. ChatGPT is particularly good at giving a Wikipedia-level summary of a topic, but it is bad at reasoning: drawing inferences, reaching logical conclusions, and constructing good arguments. ChatGPT might actually push teachers to make better assignments by avoiding assignments that it can answer well.

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.
