
Friday, June 17, 2022

Can We Trust Our Impressive AI Language Models?

As usual an excellent, heavily linked, considered, and serious piece by Irving; below is just an intro, do link through...

Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects .... 

Can We Trust Our Impressive AI Language Models?

One of the key findings of the 2022 AI Index Report was that large language models (LLMs) are setting records on technical benchmarks thanks to advances in deep neural networks and computational power that allows them to be trained using huge amounts of data. LLMs are now surpassing human baselines in a number of complex language tasks, including English language understanding, text summarization, natural language inference, and machine translation.

A.I. Is Mastering Language. Should We Trust What It Says?, a recent NY Times Magazine article by science writer Steven Johnson, took a close look at one such LLM, the Generative Pre-Trained Transformer 3, generally referred to as GPT-3. GPT-3 was created by the AI research company OpenAI. It's been trained with over 700 gigabytes of data from across the web, along with a large collection of text from digitized books. "Since GPT-3's release, the internet has been awash with examples of the software's eerie facility with language - along with its blind spots and foibles and other more sinister tendencies," said Johnson. ....
