Wednesday, October 19, 2022

Thinking Deep Learning


Deep Learning is Human, Through and Through, By Bennie Mols

Commissioned by CACM Staff, October 18, 2022

It was 10 years ago, in 2012, that deep learning made its breakthrough, when an innovative algorithm for classifying images based on multi-layered neural networks suddenly turned out to do spectacularly better than all algorithms before it. That breakthrough has led to deep learning's adoption in domains like speech and image recognition, automatic translation and transcription, and robotics.

As deep learning was embedded into ever-more everyday applications, more and more examples of what can go wrong also surfaced: artificial intelligence (AI) systems that discriminate, confirm stereotypes, make inscrutable decisions and require a lot of data and sometimes also a huge amount of energy.

In this context, the 9th Heidelberg Laureate Forum organized a panel discussion on the applications and implications of deep learning for an audience of some 200 young researchers from more than 50 countries. The panel included Turing Award recipients Yoshua Bengio, Yann LeCun, and Raj Reddy, 2011 ACM Prize in Computing recipient Sanjeev Arora, and researchers Shannon Vallor, Been Kim, Dina Machuve, and Shakir Mohamed. Katherine Gorman moderated the discussion.

Meta chief AI scientist Yann LeCun turned out to be the most optimistic of the panelists: "There have been lots of claims that deep learning can't do this or that, and most of these claims have been proven false after a few years of more work. The last five years, deep learning has been able to do things that none of us imagined it was going to be able to do, and the progress is accelerating."

As an example, LeCun said that Facebook, owned by Meta, now automatically detects 96% of all hate speech, whereas about four years ago that was only 40%. He attributes the improvement to deep learning. "We are bombarded with enormous amounts of information every day, and this is only getting worse. We are going to need even more automated systems that allow us to sift through this information."

Shannon Vallor, a professor focused on the Ethics of Data and AI at the U.K.'s University of Edinburgh, objected to LeCun's idea that technology just moves forward as if it has a will of its own, and that society simply has to adapt. "That is precisely how we got into certain problems. Technology can take many forked paths, and people decide which of the forked paths are optimal to follow. Deep learning systems are through and through artefacts that humans build and deploy, according to their own values, incentives, and power structures, and therefore we are still fully responsible for them."

One of the criticisms of deep learning is that, while it is good at pattern recognition, it is currently not suited to logical reasoning, whereas good old symbolic AI is. However, both Bengio and LeCun saw no reason why deep learning systems cannot be made to reason. As Bengio observed, "Humans also use some kind of neural nets in their brains, and I believe that there are ways to get to human-like reasoning with deep learning architectures."

However, Bengio added that he doesn't think just scaling up present-day neural nets will be sufficient. "I believe that we can take much more inspiration from biology and human intelligence in order to bridge the gap between current AI and human intelligence."

It is not just that deep learning cannot reason yet, but also that we cannot reason about deep neural networks, added Sanjeev Arora, a theoretical computer scientist at Princeton University. Said Arora, "We need more understanding of what is going on inside the black box of deep learning systems, and that is what I am trying to do."

Raj Reddy was the panelist who has been involved in the AI community by far the longest, since the 1960s, when he did his Ph.D. research with AI pioneer John McCarthy. Reddy sees the glass as half-full instead of half-empty: "One important application of deep learning is helping the people at the bottom of the societal pyramid. There are about two billion people in the world who cannot read or write. All kinds of language technologies, like speech recognition and translation, are now good enough to be used. I have worked in this area for almost 60 years and I didn't think such technologies would become practical in my lifetime. Ten years from now, even an illiterate person will be able to read any book, watch any movie, and have a conversation with anyone, anywhere in the world, in their native language."

However, dealing with smaller languages is still an unsolved problem for deep learning technologies, as much less data is available for them. Dina Machuve, a data science consultant, remarked that in Africa alone, 2,000 languages are spoken for which no AI technologies are available. She stressed the importance of going into a community to see what would work for its members, so in looking for deep learning applications for Africa, Machuve concentrated on image applications. As a result, "We have developed early detection systems for poultry diseases and crop diseases based on image recognition."