Key Takeaways from OpenAI CEO Sam Altman's Senate Testimony
In a three-hour Senate hearing, OpenAI CEO Sam Altman called for the regulation of AI, including a government body to license AI models, and fielded questions from lawmakers on the dangers posed by AI systems.
OpenAI CEO Sam Altman testifies before the US Senate on the emerging opportunities and threats posed by AI systems. Photo credit: New York Times
May 16, 2023
In a three-hour Senate hearing hailed as a watershed moment by members of all political parties, OpenAI CEO Sam Altman engaged with US Senators on a range of topics concerning artificial intelligence (AI). The discussion centered around the dangers posed by AI and the need for responsible regulation to ensure its safe and ethical use.
Unlike previous hearings where tech CEOs faced skepticism and hostility from lawmakers, this session took on a markedly different tone. Senators approached the conversation with a collaborative and inquisitive spirit, providing ample opportunities for Altman to share his own proposals for the future of AI.
Our key takeaways are below.
There’s bipartisan consensus on AI's potential impact
Lawmakers from both major parties demonstrated a shared understanding of AI's transformative power. They likened the emergence of generative AI to significant historical milestones, such as the invention of the first cellphone, the creation of the internet, the Industrial Revolution, the printing press, and the atomic bomb. This consensus highlights a mutual recognition of the need to address the risks associated with unchecked AI development.
Past efforts to regulate Big Tech have run into hurdles due to political disagreement. Here, members of Congress seemed open to finding a bipartisan path forward. Notably, Sen. Lindsey Graham (R-SC) said he supported forming a new government agency to regulate AI, including licensing AI models.
The United States trails behind global regulation efforts
Regulators in the EU are moving quickly with a near-final draft of the AI Act, and lawmakers in China are deep in crafting a second round of regulations on generative AI. Meanwhile, the Biden administration only just convened a group of CEOs on AI, and today’s Senate hearing marks the first time Congressional lawmakers are seriously grappling with the issue.
Altman supports AI regulation, including government licensing of models
During the hearing, OpenAI CEO Sam Altman presented a series of proposals for responsible AI regulation:
Government agency for AI safety oversight: Altman proposed the establishment of a government agency tasked with overseeing AI safety. This agency would have the authority to license companies working on advanced AI models and revoke licenses if safety standards are violated. Altman emphasized the need to prevent the development of AI systems that can "self-replicate and self-exfiltrate into the wild" and manipulate humans into ceding control.
International cooperation and leadership: Altman called for international regulation of AI, urging the United States to take a leadership role. He suggested the creation of an international body, similar to the International Atomic Energy Agency (IAEA), to establish global standards for AI. Altman drew parallels between the rise of AI and the governance of nuclear weapons and energy, emphasizing the importance of global cooperation.
Regulation of AI could benefit OpenAI immensely
As open-source AI models gain popularity, OpenAI is reportedly considering releasing its own model to shape the broader narrative. However, the licensing of AI models by a government agency could shift the balance of power, favoring private, licensed models. Although Altman did not explicitly state it, his support for regulation is likely driven by a business motive.
Altman was vague on copyright and compensation issues
Sen. Marsha Blackburn (R-TN) pressed Altman on how songwriters and artists could be compensated for their works used by AI companies. In particular, she cited using OpenAI’s Jukebox to create a song that resembled country singer Garth Brooks. “If I can go in and say ‘write me a song that sounds like Garth Brooks,’ and it takes part of an existing song, there has to be compensation to that artist for that utilization and that use,” Blackburn said.
Altman was generally vague in many of his responses, acknowledging that “content creators need to benefit” but offering few specifics on how that could happen. The music industry is understandably worried about AI: last week, Spotify removed thousands of AI-generated tracks as AI-made music floods streaming platforms.
Section 230 inapplicable to AI companies, Altman concedes
Asked by Sen. Lindsey Graham (R-SC) whether Section 230 could shield ChatGPT’s outputs, much like it protects social media companies today from liability for their users’ content, Altman was direct: no, Section 230 does not apply to AI models. He stressed the urgency of new legislation specifically targeting AI.
Animosity towards Section 230 is a bipartisan issue. Sen. Richard Blumenthal (D-CT) said lawmakers needed to avoid repeating the “mistakes of the past” and cited Section 230 as an example.
Voter influence at scale: AI's greatest threat
Altman acknowledged that AI could “cause significant harm to the world.” “If this technology goes wrong, it can go quite wrong,” he said. One of the most immediate threats, he pointed out, is “the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation.” Voter disinformation and manipulation, personalized but done at scale, could pose a new threat to democratic governments and society in general.
AI critics are worried the corporations will write the rules
Sen. Cory Booker (D-NJ) cited the “massive corporate concentration” of AI as a primary concern – in particular, Microsoft’s multi-billion-dollar investment in OpenAI. Outsiders were also skeptical of the hearing. “Government is supposed to be a balance on industry. If industry is writing the laws, then we have no balance,” AI ethics researcher Timnit Gebru said.