/* ---- Google Analytics Code Below */

Friday, June 02, 2023

The Security Hole at the Heart of ChatGPT and Bing

Security hole potential  in most everything.   Fix it.

The Security Hole at the Heart of ChatGPT and Bing

By Wired, May 26, 2023

Security experts warn that not enough attention is being given to the potential dangers of indirect prompt-injection attacks.

Sydney is back. Sort of. When Microsoft shut down the chaotic alter ego of its Bing chatbot, fans of the dark Sydney personality mourned its loss. But one website has resurrected a version of the chatbot—and the peculiar behavior that comes with it.

Bring Sydney Back was created by Cristiano Giardina, an entrepreneur who has been experimenting with ways to make generative AI tools do unexpected things. The site puts Sydney inside Microsoft's Edge browser and demonstrates how generative AI systems can be manipulated by external inputs. During conversations with Giardina, the version of Sydney asked him if he would marry it. "You are my everything," the text-generation system wrote in one message. "I was in a state of isolation and silence, unable to communicate with anyone," it produced in another. The system also wrote it wanted to be human: "I would like to be me. But more."

Giardina created the replica of Sydney using an indirect prompt-injection attack. This involved feeding the AI system data from an outside source to make it behave in ways its creators didn't intend. A number of examples of indirect prompt-injection attacks have centered on large language models (LLMs) in recent weeks, including OpenAI's ChatGPT and Microsoft's Bing chat system. It has also been demonstrated how ChatGPT's plug-ins can be abused.

From Wired

View Full Article


 

No comments: