AI Researchers Voice Disappointment at GPT-4’s Lack of Openness
While GPT-4 is attracting praise for its capabilities, AI researchers are notably disappointed with how few details OpenAI is sharing about how its latest AI works.
Key details of how GPT-4 works remain under lock and key, to the disappointment of the AI research community. Illustration: Artisana
GPT-4’s launch is attracting criticism from numerous AI researchers who are disappointed at the lack of details shared by OpenAI about the model’s inner workings
In a reversal of its previous stance, OpenAI cites the “competitive landscape” and “safety implications” as reasons for withholding details
At a time when numerous companies are in an arms race to deploy AI, the lack of third-party access to examine an AI model’s underlying foundation may have unknown long-term consequences
March 16, 2023
GPT-4’s launch is making waves in the technology community, but not everyone is excited about its debut. Notably, the AI research community is expressing broad disappointment at OpenAI’s closed-off attitude towards sharing details about GPT-4, including how the model was trained, methods used to create it, its architecture and model size, hardware it was trained on, and more.
While OpenAI released a 98-page technical report detailing various aspects of GPT-4, one section notably states:
Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
As The Verge reports, numerous members of the AI research community are not happy about this stance.
Ben Schmidt, VP of Information Design at Nomic AI, tweeted the following:
I think we can call it shut on 'Open' AI: the 98 page paper introducing GPT-4 proudly declares that they're disclosing *nothing* about the contents of their training set. pic.twitter.com/dyI4Vf0uL3
— Ben Schmidt / @benmschmidt@vis.social (@benmschmidt) March 14, 2023
William Falcon, CEO of Lightning AI, told VentureBeat:
I think what’s bothering everyone is that OpenAI made a whole paper that’s like 90-something pages long. That makes it feel like it’s open-source and academic, but it’s not. They describe literally nothing in there. When an academic paper says benchmarks, it says ‘Hey, we did better than this and here’s a way for you to validate that.’ There’s no way to validate that here.
And David Picard, an AI researcher at École des Ponts ParisTech, tweeted similar disappointment:
Please @OpenAI change your name ASAP. It's an insult to our intelligence to call yourself "open" and release that kind of "technical report" that contains no technical information whatsoever. https://t.co/WdXAq4a309
— David Picard (@david_picard) March 14, 2023
Ultimately, these researchers argue, OpenAI’s decision to limit details on GPT-4 may be practical for its business in the short term but may lead to long-term challenges as power in the AI world becomes increasingly centralized.
And as the capabilities of AI rapidly improve and multiple companies race to deploy generative AI as fast as possible, it’s not clear that the companies who own the technology are prioritizing the concerns of academic researchers.