Leading AI Language Models Fall Short of Upcoming EU Regulations, Stanford Study Warns
The world's leading AI language models could fail to meet the EU's new AI Act, facing significant regulatory risks and potential heavy fines. In particular, open-source models could face downstream regulatory risk from their deployment.
The EU's AI Act could create difficulties for generative AI models and compliance, a Stanford study warns. Photo illustration: Pixabay / Artisana
The top ten AI language models may not meet the upcoming EU AI Act's stringent requirements, a Stanford study reveals.
The AI Act, the world's first comprehensive set of AI regulations, could result in heavy fines for non-compliant AI providers, reshaping global AI practices.
Open-source and closed-source models would face different compliance challenges under the AI Act, highlighting the catch-up the AI industry will have to play in the face of new regulations.
June 22, 2023
Sounding a note of caution, a team of Stanford researchers warned that the world's ten leading AI language models are poised to fail the stringent standards laid out by the European Union's forthcoming AI Act. Should they not meet the regulations, their developers could face significant regulatory risks and potentially heavy financial penalties.
The EU’s AI Act, approved in a parliamentary vote on June 14th, is currently on the pathway to becoming official law. As the world’s first comprehensive set of AI regulations, it stands to impact over 450 million individuals, while also serving as an example that nations such as the US and Canada are likely to draw inspiration from in crafting their own AI regulations.
Implications for Foundation Models
Despite recent clarifications that exempt foundation models like GPT-4 from the "high-risk" AI category, generative AI models are still subject to an array of requirements under the AI Act. These include mandatory registration with relevant authorities and essential transparency disclosures, areas where many models fall short.
The price of non-compliance is hefty: fines could exceed €20,000,000 or amount to 4% of a company's worldwide revenue. Furthermore, open-source generative AI models are required to meet the same standards as their closed-source counterparts, raising questions within the open-source community about legal risk and exposure.
The Non-compliant Landscape
In the study, researchers evaluated ten leading AI models against the draft AI Act's 12 fundamental compliance requirements. Alarmingly, most models scored less than 50% in overall compliance.
Notably, closed-source models like OpenAI's GPT-4 only garnered 25 out of a possible 48 points. Google's PaLM 2 fared slightly better with a score of 27, while Cohere’s Command LLM managed just 23. Anthropic’s Claude languished near the bottom with a meager 7 points.
On the other hand, the open-source model BLOOM, hosted by Hugging Face, performed the best, securing 36 points. However, other open-source models, such as Meta's LLaMA and Stable Diffusion v2, achieved just 21 and 22 points respectively.
Noteworthy Patterns and Observations
A notable trend emerged from the aggregate results: open-source models generally outperformed closed-source models in several critical areas, including data sources transparency and resource utilization. Conversely, closed-source models excelled in areas such as comprehensive documentation and risk mitigation.
However, the study points out significant areas of uncertainty. One such area is the murky "dimensions of performance" for complying with the numerous requirements set forth in the AI Act. Moreover, the question of enforcement remains unresolved, and the researchers warn that a lack of technical expertise could hinder the EU’s ability to regulate these foundation models effectively.
The Way Forward
Despite these concerns, the researchers advocate for the implementation of the EU's AI Act. They argue it would "act as a catalyst for AI creators to collectively establish industry standards that enhance transparency" and bring about "significant positive change in the foundation model ecosystem."