Complying with Microdirectives
Representatives of OpenAI declined to comment on companies' privacy concerns.
Generative AI tools such as OpenAI’s ChatGPT have been heralded as pivotal for the world of work, but the technology is creating a formidable challenge for corporate America.
Proponents of OpenAI's ChatGPT and other generative artificial intelligence tools contend that they can boost workplace productivity by automating certain tasks and assisting with problem-solving, but some corporate leaders have banned their use over concerns about exposing sensitive company and customer information.
These leaders worry that employees could enter proprietary or sensitive data into the chatbot, that the data could then be added to the database used to train it, and that hackers or competitors could later prompt the chatbot to reveal that information.
A post on OpenAI's website said a private mode allows ChatGPT users to keep their prompts out of its training data.
Massachusetts Institute of Technology's Yoon Kim said that while such leakage is technically possible, guardrails implemented by OpenAI prevent ChatGPT from using sensitive prompts in its training data.
Kim added that the vast amount of data ChatGPT needs to learn would make it difficult for hackers to extract proprietary data entered as a prompt.
From The Washington Post