
Tuesday, July 12, 2022

IWB Discusses AI Foundation Models

 Nicely done intro from IWB's blog.  

Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

Foundation Models: AI’s Exciting New Frontier    click through for many more links ... 

Over the past decade, powerful AI systems have matched or surpassed human levels of performance in a number of specific tasks such as image and speech recognition, skin cancer classification and breast cancer detection, and highly complex games like Go. These AI breakthroughs have been based on deep learning (DL), a technique loosely based on the network structure of neurons in the human brain that now dominates the field. DL systems acquire knowledge by being trained with millions to billions of texts, images and other data instead of being explicitly programmed.

These task-specific DL systems have generally relied on supervised learning, a training method where the data must be carefully labelled (e.g., cat, not-cat), thus requiring a big investment of time and money to produce a model that's narrowly focused on a specific task and can't be easily repurposed. The rising costs of training ever-larger, narrowly focused DL systems have prompted concerns that the technique was running out of steam.

Foundation models promise to get around these DL concerns by bringing to the world of AI the reusability and extensibility that have been so successful in IT software systems, from operating systems like iOS and Android to the growing number and variety of internet-based platforms.

“AI is undergoing a paradigm shift with the rise of models that are trained on broad data at scale and are adaptable to a wide range of downstream tasks,” says On the Opportunities and Risks of Foundation Models, a recent report by the Center for Research on Foundation Models, an interdisciplinary initiative in the Stanford Institute for Human-Centered Artificial Intelligence (HAI) founded in 2021 to make fundamental advances in the study, development, and deployment of foundation models. Foundation models aim to replace the task-specific models that have dominated AI over the past decade with models that are trained on huge amounts of unlabeled data and can then be adapted to many different tasks with minimal fine-tuning. Current examples of foundation models include large language models like GPT-3 and BERT.
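The "adapt with minimal fine-tuning" workflow can be sketched in a few lines: one large pretrained model is frozen and reused across tasks, and only a tiny task-specific head is trained per downstream task. Everything here is a toy stand-in, not a real foundation model: the embedding function is hypothetical, and the perceptron-style head is chosen only to keep the sketch self-contained.

```python
def pretrained_embed(text):
    """Frozen 'foundation' feature extractor (a toy stand-in for a real model)."""
    return (
        sum(c.isupper() for c in text) / max(len(text), 1),  # uppercase ratio
        text.count("!") / max(len(text), 1),                 # exclamation ratio
    )

def fine_tune(examples, lr=1.0, epochs=50):
    """Train only a small linear head; the embedder itself stays untouched."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:        # label: 1 = shouting, 0 = calm
            x = pretrained_embed(text)
            score = w[0] * x[0] + w[1] * x[1] + b
            pred = 1.0 if score > 0 else 0.0
            err = label - pred              # perceptron-style update
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

# Downstream task: detect "shouting" with only four labelled examples.
head_w, head_b = fine_tune([
    ("HELLO THERE!!", 1), ("STOP NOW!", 1),
    ("good morning", 0), ("see you later", 0),
])

def classify(text):
    x = pretrained_embed(text)
    score = head_w[0] * x[0] + head_w[1] * x[1] + head_b
    return "shouting" if score > 0 else "calm"

print(classify("GO AWAY!!"))     # -> shouting
print(classify("nice weather"))  # -> calm
```

The point of the design is in the split: the expensive, broadly trained component is built once and shared, while each new task needs only a handful of labelled examples to tune its small head.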

Shortly after GPT-3 went online in 2020, its creators at the AI research company OpenAI discovered that not only could GPT-3 generate whole sentences and paragraphs in English in a variety of styles, but it had developed surprising skills at writing computer software even though the training data was focused on the English language, not on examples of computer code. But, as it turned out, the vast number of Web pages used in its training included many examples of computer programming accompanied by descriptions of what the code was designed to do, thus enabling GPT-3 to teach itself how to program. GPT-3 can also generate legal documents, like licensing agreements or leases, as well as documents in a variety of other fields.

“At the same time, existing foundation models have the potential to accentuate harms, and their characteristics are in general poorly understood,” warns the Stanford report. A major finding of the 2022 AI Index Report was that while large language models like GPT-3 are setting new records on technical benchmarks, they are also more prone to reflect the biases that may be present in their training data, including racist, sexist, extremist, and other harmful language, as well as overtly abusive language patterns and harmful ideologies.  ...
