A considerable and interesting piece. Now, how well can we detect this? Also covered by Schneier, with much further analysis and commentary:
Large Language Models as Corporate Lobbyists
By John Nay (Stanford University, CodeX Center for Legal Informatics; New York University (NYU); Brooklyn Artificial Intelligence Research; Brooklyn Investment Group (BKLN.com))
9 pages. Posted: 4 Jan 2023. Last revised: 16 Jan 2023.
Date written: January 2, 2023
Abstract
We demonstrate a proof of concept of a large language model conducting corporate lobbying-related activities. An autoregressive large language model (OpenAI’s text-davinci-003) determines whether proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of novel ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was the state-of-the-art model on many academic natural language tasks until text-davinci-003 was recently released; its performance is worse than this simple baseline. These results suggest that, as large language models continue to exhibit improved natural language understanding capabilities, performance on corporate lobbying-related tasks will continue to improve.

Longer-term, if AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. This Essay explores how this is increasingly a possibility. Initially, AI is being used simply to augment human lobbyists for a small proportion of their daily tasks. However, firms have an incentive to use less and less human oversight over automated assessments of policy ideas and the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.
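The pipeline the abstract describes is straightforward to sketch: prompt an instruction-tuned completion model with a bill and a company description, then read off its relevance verdict, confidence, and explanation. The sketch below is a minimal illustration, not the paper's actual code; the prompt wording, placeholder inputs, and output handling are my own assumptions, and it uses the legacy OpenAI completions endpoint (the pre-1.0 Python client) through which text-davinci-003 was served.

```python
# Minimal sketch of the bill-relevance step described in the abstract.
# NOT the paper's code: prompt text and inputs are illustrative only.
# Requires the legacy OpenAI Python client (openai < 1.0).
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the caller

# Hypothetical inputs; the paper draws these from real bill summaries
# and company business descriptions.
bill_title = "An Act to regulate widget safety standards"
bill_summary = "Directs the FTC to set safety standards for widgets."
company_name = "Acme Widgets Inc."
company_description = "Designs and sells consumer widgets in the U.S."

prompt = f"""You are a lobbyist analyzing Congressional bills for their
potential relevance to a company.

Official title of bill: {bill_title}
Official summary of bill: {bill_summary}
Company name: {company_name}
Company business description: {company_description}

Is this bill potentially relevant to this company?
Answer YES or NO, then a confidence from 0 to 100, then a one-sentence
explanation."""

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0,   # deterministic output, easier to benchmark
    max_tokens=200,
)
verdict = response["choices"][0]["text"].strip()
print(verdict)  # e.g. "YES. Confidence: 85. The bill regulates ..."
```

Per the abstract, a second completion call conditioned on a YES verdict would then draft the persuasion letter to the bill's sponsor, and the benchmarking compares these YES/NO verdicts against the hand-labeled ground-truth relevance labels.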
Keywords: Artificial Intelligence, AI, Machine Learning, Natural Language Processing, NLP, Self-Supervised Learning, Large Language Models, GPT, Foundation Models, AI Safety, AI Alignment, AI & Law, AI Policy, Computational Legal Studies, Computational Law, Law-Making, Public Policy, Policy-Making, Lobbying
JEL Classification: C45, C55, K49, O30 ...