Tracking the Explosive World of Generative AI

AI Researchers Voice Disappointment at GPT-4’s Lack of Openness

While GPT-4 is attracting praise for its capabilities, AI researchers are notably disappointed with how few details OpenAI is sharing about how its latest AI works.

Key details of how GPT-4 works remain under lock and key, to the disappointment of the AI research community. Illustration: Artisana

🧠 Stay Ahead of the Curve

  • GPT-4’s launch is attracting criticism from numerous AI researchers who are disappointed at the lack of details shared by OpenAI around the model’s inner workings

  • In a reversal of its previous stance, OpenAI cites “competitive landscape” and “safety implications” as reasons why it chose to withhold details

  • At a time when numerous companies are in an arms race to deploy AI, the lack of third-party access to examine an AI model’s underlying foundation may have unknown long-term consequences

By Michael Zhang

March 16, 2023

GPT-4’s launch is making waves in the technology community, but not everyone is excited about its debut. Notably, the AI research community is expressing broad disappointment at OpenAI’s closed-off attitude towards sharing details about GPT-4, including how the model was trained, methods used to create it, its architecture and model size, hardware it was trained on, and more.

While OpenAI released a 98-page technical report detailing various aspects of GPT-4, a section notably explains:

Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

As The Verge reports, numerous members of the AI research community are unhappy with this stance.

Ben Schmidt, VP of Information Design at Nomic AI, criticized the decision on Twitter.

William Falcon, CEO of Lightning AI, told VentureBeat:

I think what’s bothering everyone is that OpenAI made a whole paper that’s like 90-something pages long. That makes it feel like it’s open-source and academic, but it’s not. They describe literally nothing in there. When an academic paper says benchmarks, it says ‘Hey, we did better than this and here’s a way for you to validate that.’ There’s no way to validate that here.

And David Picard, an AI researcher at École des Ponts ParisTech, voiced similar disappointment on Twitter.

Ultimately, these researchers argue, OpenAI’s decision to limit details about GPT-4 may be practical for business in the short term but could create long-term challenges as power in the AI world becomes increasingly centralized.

And as the capabilities of AI rapidly improve and multiple companies compete in an arms race to deploy generative AI as fast as possible, it’s not clear that the companies who own the technology are prioritizing the considerations of academic researchers.
