Key Takeaways from OpenAI CEO Sam Altman's Senate Testimony
In a three-hour Senate hearing, OpenAI CEO Sam Altman called for the regulation of AI, the formation of a government body to license AI models, and addressed a number of questions from lawmakers on the dangers posed by AI systems.
OpenAI CEO Sam Altman testifies before the US Senate on the emerging opportunities and threats posed by AI systems. Photo credit: New York Times
May 16, 2023
In a three-hour Senate hearing hailed as a watershed moment by members of all political parties, OpenAI CEO Sam Altman engaged with US Senators on a range of topics concerning artificial intelligence (AI). The discussion centered around the dangers posed by AI and the need for responsible regulation to ensure its safe and ethical use.
Unlike previous hearings where tech CEOs faced skepticism and hostility from lawmakers, this session took on a markedly different tone. Senators approached the conversation with a collaborative and inquisitive spirit, providing ample opportunities for Altman to share his own proposals for the future of AI.
Our key takeaways are below.
There’s bipartisan consensus on AI's potential impact
Lawmakers from both major parties demonstrated a shared understanding of AI's transformative power. They likened the emergence of generative AI to significant historical milestones, such as the invention of the first cellphone, the creation of the internet, the Industrial Revolution, the printing press, and the atomic bomb. This consensus highlights a mutual recognition of the need to address the risks associated with unchecked AI development.
Past efforts to regulate Big Tech have run into hurdles due to political disagreement. Here, members of Congress seemed open to finding a bipartisan path forward. Notably, Sen. Lindsey Graham (R-SC) said he supported forming a new government agency to regulate AI, including licensing AI models.
The United States trails behind global regulation efforts
Regulators in the EU are moving quickly with a near-final draft of the AI Act, and lawmakers in China are deep into crafting a second round of regulations on generative AI. Meanwhile, the Biden administration only just convened a group of CEOs on AI, and this Senate hearing marked the first time Congressional lawmakers have seriously grappled with the issue.
Altman supports AI regulation, including government licensing of models
During the hearing, OpenAI CEO Sam Altman presented a series of proposals for responsible AI regulation:
Government agency for AI safety oversight: Altman proposed the establishment of a government agency tasked with overseeing AI safety. This agency would have the authority to license companies working on advanced AI models and revoke licenses if safety standards are violated. Altman emphasized the need to prevent the development of AI systems that can "self-replicate and self-exfiltrate into the wild" and manipulate humans into ceding control.
International cooperation and leadership: Altman called for international regulation of AI, urging the United States to take a leadership role. He suggested the creation of an international body, similar to the International Atomic Energy Agency (IAEA), to establish global standards for AI. Altman drew parallels between the rise of AI and the governance of nuclear weapons and energy, emphasizing the importance of global cooperation.
Regulation of AI could benefit OpenAI immensely
As open-source AI models gain popularity, OpenAI is reportedly considering releasing an open-source model of its own to shape the broader narrative. However, the licensing of AI models by a government agency could shift the balance of power, favoring private, licensed models. Although Altman did not explicitly state it, his support for regulation is likely driven in part by a business motive.
Altman was vague on copyright and compensation issues
Sen. Marsha Blackburn (R-TN) pressed Altman on how songwriters and artists could be compensated for their works used by AI companies. In particular, she cited using OpenAI’s Jukebox to create a song that resembled country singer Garth Brooks. “If I can go in and say ‘write me a song that sounds like Garth Brooks,’ and it takes part of an existing song, there has to be compensation to that artist for that utilization and that use,” Blackburn said.
Altman was generally vague in many of his responses, acknowledging that “content creators need to benefit” but offering few specifics on how that could happen. The music industry is understandably worried about AI: Spotify removed thousands of AI-generated tracks just last week as AI-made music floods streaming platforms.
Section 230 inapplicable to AI companies, Altman concedes
Asked by Sen. Lindsey Graham (R-SC) whether Section 230 could shield ChatGPT’s outputs, much like how it protects social media companies today from liability for their users’ content, Altman was direct: no, Section 230 isn’t applicable to AI models. Altman stressed the urgency of new legislation specifically targeting AI.
Animosity towards Section 230 is a bipartisan issue. Sen. Richard Blumenthal (D-CT) said lawmakers needed to avoid repeating the “mistakes of the past” and cited Section 230 as an example.
Voter influence at scale: AI's greatest threat
Altman acknowledged that AI could “cause significant harm to the world.” “If this technology goes wrong, it can go quite wrong,” he said. One of the most immediate threats, he pointed out, is “the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation.” Voter disinformation and manipulation, personalized but done at scale, could pose a new threat to democratic governments and society in general.
AI critics are worried the corporations will write the rules
Sen. Cory Booker (D-NJ) cited the “massive corporate concentration” of AI as a primary concern, in particular Microsoft’s multi-billion dollar investment in OpenAI. Outsiders were also skeptical of the hearing. “Government is supposed to be a balance on industry. If industry is writing the laws, then we have no balance,” AI ethics researcher Timnit Gebru said.