
Wednesday, April 12, 2023

Rage Against Intelligent Machines

Good overview of the topic, with links.

ACM NEWS

Rage Against the Intelligent Machines

By Paul Marks, Commissioned by CACM Staff, April 11, 2023

If the launch of ChatGPT in November 2022 was the point at which generative artificial intelligence (AI) began to make an appreciable impact on the public consciousness, the final week of March 2023 was the start of a multi-faceted fightback against AI, one that could have deep ramifications for the freedom firms have to roll out machine intelligences into the public domain.

The AI counter-offensive that week involved a number of high-profile organizations questioning the risks inherent in the largely unregulated way emerging Large Language Models (LLMs)—like OpenAI's ChatGPT and GPT-4, Microsoft's Bing Chat and Google's Bard systems—are being fielded.

At issue, they say, is the way LLMs are being unleashed without prior, transparent, and auditable assessment of their risks, such as aiding and abetting cybercrime, their propensity for simply fabricating facts people might rely on, reinforcing dangerous disinformation, and exhibiting overt and offensive societal biases. Some are calling for LLM development to be halted while measures to make them safe are thrashed out.

This was not just an argument amongst AI cognoscenti. News of the spat even reached the White House, with President Biden reiterating on April 5 that artificial intelligence providers, like all technology companies, "have a responsibility to make sure their products are safe before making them public." 

First out of the gate, on March 27, was Europol, the joint criminal intelligence organization of the 27 nations of the European Union, which published a report on the "diverse range of criminal use cases" it predicts products like ChatGPT could be used in.

Europol's digital forensics experts found the LLM's ability to quickly produce convincing written text in many languages would serve to hide the telltale typos and grammatical errors that are normally a giveaway with phishing messages, and so boost the success of phishing campaigns.

Europol also said the ability to write messages in anybody's writing style is a gift to fraudsters impersonating employees to entice their colleagues to download malware, or to move large amounts of cash, as has happened in so-called "CEO fraud" cases. In addition, terrorist groups could prompt LLMs to help them generate text to promote and defend disinformation and fake news, lending false credibility to their propaganda campaigns, Europol says.

Worst, perhaps, is that the code-writing capabilities of LLMs could be misused by criminals with "little to no knowledge of coding" to write malware or ransomware. "Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing. If prompts are broken down into individual steps, it is trivial to bypass these safety measures," Europol says in its report.

Of particular worry, says the organization, is that LLMs are far from a done deal: they are constantly being improved, so their potential criminal exploitation could happen ever faster and at greater scale.

 "The Europol report seems exactly correct. I agree things look grim," says Gary Marcus, a professor of psychology and neural science at New York University, and an AI entrepreneur and commentator. "Perhaps coupled with mass AI-generated propaganda, LLM-enhanced terrorism could in turn lead to nuclear war, or to the deliberate spread of pathogens worse than Covid-19," Marcus later said in his newsletter.

OpenAI did not respond to questions on Europol's findings, and neither did the U.S. Cybersecurity and Infrastructure Security Agency (CISA), part of the Department of Homeland Security.

However, two days later, on March 29, the AI fightback moved up another notch, when Marcus was one of more than 1,000 initial signatories to an open letter to AI labs calling on them to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".

Drafted by the Future of Life Institute, in Cambridge, MA, which campaigns against technologies posing existential risks, the letter urged that "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

"These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

The letter was signed by some of the leading specialists in AI, including deep neural network pioneer (and ACM A.M. Turing Award recipient) Yoshua Bengio of the Quebec AI Institute (MILA) in Montreal, Canada, and Stuart Russell, head of the Center for Human-Compatible AI at the University of California, Berkeley. ... '  (Much more at the link.)
