EU's AI Act: Stricter Rules for Chatbots on the Horizon
The European Union is preparing new AI regulations that could impact development and deployment, requiring companies like OpenAI to disclose their use of copyrighted material. As the AI Act evolves, chatbots may face increased scrutiny and transparency requirements.
The EU's headquarters in Brussels.
🧠 Stay Ahead of the Curve
- The EU is developing new AI regulations that could require companies like OpenAI to disclose their use of copyrighted material.
- This development highlights the growing concern surrounding AI safety, transparency, and responsible deployment in the EU.
- Stricter regulations could shape the future of AI governance, affecting innovation and the way AI platforms operate globally.
April 14, 2023
The European Union is preparing new regulations that could significantly impact the development and deployment of artificial intelligence (AI) platforms, according to the Financial Times. As discussions continue in Brussels regarding the proposals in the comprehensive Artificial Intelligence Act, sources indicate that the forthcoming regulation may require companies like OpenAI to disclose their use of copyrighted material in training their AI.
"High Risk" Chatbots in Focus of the AI Act
Central to the EU's AI Act is a four-tiered classification system that measures the risk AI technology could pose to an individual's health, safety, or fundamental rights. The risk levels are unacceptable, high, limited, and minimal, each of which triggers different regulatory requirements.
The rapid rise of generative AI technology has caught the attention of lawmakers due to its powerful capabilities and widespread adoption, prompting individual EU member countries to take action. Italy recently banned ChatGPT, citing alleged privacy violations, while Germany's commissioner for data protection is considering a similar ban. Following Italy's announcement, data protection authorities in France and Ireland consulted with the Italian data regulator to discuss their stance on ChatGPT.
Although the AI Act has not yet passed, preliminary comments from European parliament members suggest they may view generative AI art platforms, such as Stable Diffusion, and chatbots like OpenAI’s ChatGPT, as potentially hazardous innovations. In February, lead lawmakers on the AI Act proposed classifying AI platforms that use Large Language Models (LLMs) to generate text outputs without human supervision as high-risk.
Specific Proposals to Regulate Chatbots under the AI Act
Lawmakers are integrating new proposals into the AI Act to directly address the rise of sophisticated chatbots and LLMs.
A key proposal would compel developers of AI platforms like ChatGPT to disclose if they used copyrighted material to train their AI models. As previously reported, OpenAI has declined to share details on the training of GPT-4, much to the disappointment of AI researchers advocating for greater transparency.
Another proposal under consideration would require AI chatbots to inform human users that they are not conversing with another human. With instances of people forming attachments to chatbots and some even believing they are sentient, lawmakers argue that such disclosure is a fundamental first step.
Regulatory Changes on the Horizon, but Not Imminent
The AI Act was first introduced in 2021, and the recent debate over additional chatbot regulations suggests the process of finalizing the law will not conclude until at least 2024. In the interim, individual EU member states continue crafting their own policies, creating a complex web of governance criteria for companies like OpenAI to navigate.
Dragoș Tudorache, a member of the European Parliament leading negotiations on the AI Act, underscored the importance of regulations for ensuring safe deployment. "It is a pioneering technology, and we need to harness it, which means putting rules in place," he said. "Self due diligence by companies is not enough."