
Tuesday, April 18, 2023

Ian Hogarth Says Slow Down in The Financial Times

He writes: We must slow down the race to God-like AI

I’ve invested in more than 50 artificial intelligence start-ups. What I’ve seen worries me

Ian Hogarth, APRIL 13 2023


The writer of this essay is an investor and co-author of the annual “State of AI” report

On a cold evening in February I attended a dinner party at the home of an artificial intelligence researcher in London, along with a small group of experts in the field. He lives in a penthouse apartment at the top of a modern tower block, with floor-to-ceiling windows overlooking the city’s skyscrapers and a railway terminus from the 19th century. Despite the prime location, the host lives simply, and the flat is somewhat austere.

During dinner, the group discussed significant new breakthroughs, such as OpenAI’s ChatGPT and DeepMind’s Gato, and the rate at which billions of dollars have recently poured into AI. I asked one of the guests who has made important contributions to the industry the question that often comes up at this type of gathering: how far away are we from “artificial general intelligence”? AGI can be defined in many ways but usually refers to a computer system capable of generating new scientific knowledge and performing any task that humans can.

Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press. The important question has always been how far away in the future this development might be. The AI researcher did not have to consider it for long. “It’s possible from now onwards,” he replied.

This is not a universal view. Estimates range from a decade to half a century or more. What is certain is that creating AGI is the explicit aim of the leading AI companies, and they are moving towards it far more swiftly than anyone expected. As everyone at the dinner understood, this development would bring significant risks for the future of the human race. “If you think we could be close to something potentially so dangerous,” I said to the researcher, “shouldn’t you warn people about what’s happening?” He was clearly grappling with the responsibility he faced but, like many in the field, seemed pulled along by the rapidity of progress.

When I got home, I thought about my four-year-old who would wake up in a few hours. As I considered the world he might grow up in, I gradually shifted from shock to anger. It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight. Did the people racing to build the first real AGI have a plan to slow down and let the rest of the world have a say in what they were doing? And when I say they, I really mean we, because I am part of this community.

My interest in machine learning started in 2002, when I built my first robot somewhere inside the rabbit warren that is Cambridge university’s engineering department. This was a standard activity for engineering undergrads, but I was captivated by the idea that you could teach a machine to navigate an environment and learn from mistakes. I chose to specialise in computer vision, creating programs that can analyse and understand images, and in 2005 I built a system that could learn to accurately label breast-cancer biopsy images. In doing so, I glimpsed a future in which AI made the world better, even saving lives. After university, I co-founded a music-technology start-up that was acquired in 2017.

Since 2014, I have backed more than 50 AI start-ups in Europe and the US and, in 2021, launched a new venture capital fund, Plural. I am an angel investor in some companies that are pioneers in the field, including Anthropic, one of the world’s highest-funded generative AI start-ups, and Helsing, a leading European AI defence company. Five years ago, I began researching and writing an annual “State of AI” report with another investor, Nathan Benaich, which is now widely read. At the dinner in February, significant concerns that my work has raised in the past few years solidified into something unexpected: deep fear.

A three-letter acronym doesn’t capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI. A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it. To be clear, we are not here yet. But the nature of the technology means it is exceptionally difficult to predict exactly when we will get there. God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race.

Recently the contest between a few companies to create God-like AI has rapidly accelerated. They do not yet know how to pursue their aim safely and have no oversight. They are running towards a finish line without an understanding of what lies on the other side.

How did we get here? The obvious answer is that computers got more powerful. The chart below shows how the amount of data and “compute” — the processing power used to train AI systems — has increased over the past decade and the capabilities this has resulted in. (“Floating-point Operations Per Second”, or FLOPS, is the unit of measurement used to calculate the power of a supercomputer.) This generation of AI is very effective at absorbing data and compute. The more of each that it gets, the more powerful it becomes.


2012 vs 2022 (figures from the chart in the original article):

Compute used to train largest AI model
- 2012: 1e+16 FLOPS (10,000,000,000,000,000)
- 2022: 1e+24 FLOPS (1,000,000,000,000,000,000,000,000)

Data consumed by largest AI model
- 2012: ImageNet, a dataset of 15mn labelled images (150GB)
- 2022: datasets of more than 2bn images, or much of the text on the internet (estimated at 10,000GB*)

Capabilities of largest AI models
- 2012: can recognise images at “beginner human” level; superhuman at chess
- 2022: superhuman or high-human at a wide variety of games (Go, Diplomacy, StarCraft II, poker etc); human-level at 150 reasoning and knowledge tasks; passes the US Medical Licensing Exam and the Bar Exam; displays complex capabilities like power-seeking and deceiving humans; can self-improve by “reasoning” out loud; can write 40 per cent of the code for a software engineer

The compute used to train AI models has increased by a factor of one hundred million in the past 10 years. We have gone from training on relatively small datasets to feeding AIs the entire internet. AI models have progressed from beginners — recognising everyday images — to being superhuman at a huge number of tasks. They are able to pass the bar exam and write 40 per cent of the code for a software engineer. They can generate realistic photographs of the pope in a down puffer coat and tell you how to engineer a biochemical weapon.
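To make the arithmetic concrete, here is a minimal Python sketch using the compute figures and dataset sizes quoted in the chart above; the variable names are illustrative, and the dataset sizes are the article’s own estimates.

# Figures quoted in the chart above (the article's estimates).
compute_2012 = 1e16      # FLOPS used to train the largest AI model in 2012
compute_2022 = 1e24      # FLOPS used to train the largest AI model in 2022
data_2012_gb = 150       # ImageNet: ~15mn labelled images
data_2022_gb = 10_000    # estimated size of the largest 2022 training sets

# 1e24 / 1e16 = 1e8, i.e. the hundred-million-fold increase in compute.
print(f"Compute growth: {compute_2022 / compute_2012:.0e}x")
# 10,000 / 150 is roughly a 67-fold increase in raw data volume.
print(f"Data growth: {data_2022_gb / data_2012_gb:.0f}x")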

There are limits to this “intelligence”, of course. As the veteran MIT roboticist Rodney Brooks recently said, it’s important not to mistake “performance for competence”. In 2021, researchers Emily M Bender, Timnit Gebru and others noted that large language models (LLMs) — AI systems that can generate, classify and understand text — are dangerous partly because they can mislead the public into taking synthetic text as meaningful. But the most powerful models are also beginning to demonstrate complex capabilities, such as power-seeking or finding ways to actively deceive humans.

Consider a recent example. Before OpenAI released GPT-4 last month, it conducted various safety tests. In one experiment, the AI was prompted to find a worker on the hiring site TaskRabbit and ask them to help solve a Captcha, the visual puzzles used to determine whether a web surfer is human or a bot. The TaskRabbit worker guessed something was up: “So may I ask a question? Are you [a] robot?” ...
