Leaked Google Memo Claiming “We Have No Moat, and Neither Does OpenAI” Shakes the AI World
A leaked Google memo ignites debate as it claims open-source AI could outpace tech giants' proprietary models, raising concerns about OpenAI and Google’s future as they develop closed AI models.
Google's future could be under threat from open source AI models, a leaked memo claims. Photo credit: Artisana.
🧠 Stay Ahead of the Curve
- Google engineer Luke Sernau's leaked memo claims open-source AI may eventually outpace proprietary models from Google, Meta, and OpenAI.
- The memo has ignited debate over AI's competitive moats, highlighting rapid, cost-effective open-source innovation compared with the closed models built by Google and OpenAI.
- The rise of open-source AI raises concerns about responsible model release and potential misuse for criminal purposes, challenging safety commitments made by AI companies.
May 05, 2023
A leaked Google memo, claiming that "we have no moat, and neither does OpenAI," has stirred a heated debate within the technology community. Penned by Google engineer Luke Sernau, the memo argues that despite significant investments by Google, Meta, and OpenAI in generative AI chatbots, open-source alternatives may rapidly outpace them all.
Who is Luke Sernau?
The memo was published anonymously by the blog SemiAnalysis, but Bloomberg later identified Google senior engineer Luke Sernau as its author. Sernau's LinkedIn profile shows a background in mathematics; he has worked at Google as a senior engineer since March 2019, and previously spent four years at Meta working on automated insights and machine learning infrastructure.
Why is the memo so earth-shattering?
Sernau posits that the continued development of proprietary models may render Google and OpenAI irrelevant as open-source models make rapid strides. He cites recent examples, such as language models running on phones, multimodal models training in under an hour, and personalized AI fine-tuning on laptops, as early signs of how quickly the open-source side of generative AI is moving.
This paradigm of affordable, rapid development is the opposite of the closed system in which OpenAI and Google have developed their language models. As we previously reported, OpenAI burned $540 million in 2022 as it developed and launched ChatGPT.
The genesis of this open-source innovation can be traced back to the leak of Meta’s LLaMA language model. The leaked version of LLaMA was relatively unsophisticated and lacked key features, Sernau pointed out, but within a month of release numerous improvements had been made. The collective effort of “an entire planet's worth of free labor” poured into innovating on LLaMA has enabled incredible progress.
Vicuna-13B: 90% of ChatGPT, and a sign of what’s to come
Sernau references a chart from the researchers behind Vicuna-13B, an open-source chatbot fine-tuned from Meta's LLaMA-13B. “Preliminary evaluation using GPT-4 as a judge shows Vicuna-13B achieves more than 90% quality of OpenAI ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90% of cases,” the researchers concluded. Notably, this progress was achieved by training Vicuna on 70,000 user-shared ChatGPT conversations, for a total training cost of just $300.
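For readers curious how “GPT-4 as a judge” works, here is a minimal sketch of a pairwise comparison, assuming the openai Python client as it existed at the time; the prompt wording and the judge function are illustrative placeholders, not the Vicuna team's actual evaluation harness:

```python
# Hypothetical GPT-4-as-judge sketch; not the Vicuna team's actual harness.
# Requires the OPENAI_API_KEY environment variable to be set.
import openai

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Ask GPT-4 to compare two model answers to the same question."""
    prompt = (
        f"Question: {question}\n\n"
        f"Assistant A: {answer_a}\n\n"
        f"Assistant B: {answer_b}\n\n"
        "Rate each answer's helpfulness and accuracy on a scale of 1-10, "
        "then state which assistant answered better overall."
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic scoring
    )
    return response["choices"][0]["message"]["content"]
```

Scoring with a strong model rather than human raters is part of what kept the Vicuna evaluation cheap, though the researchers themselves describe it as only a preliminary measure.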
Vicuna-13B's $300 price tag was possible thanks to a vastly cheaper fine-tuning mechanism called low-rank adaptation (LoRA). This method, which Google and OpenAI do not currently use, enables stackable fine-tuning rather than expensive re-training. Re-training, Sernau notes, also throws away prior pretraining and the iterative improvements made on top of it.
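To make the mechanism concrete, here is a minimal sketch of LoRA fine-tuning, assuming Hugging Face's transformers and peft libraries; the model name and hyperparameters are illustrative placeholders, not details from the memo:

```python
# Minimal LoRA fine-tuning sketch using Hugging Face's peft library.
# Model name and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b")

# LoRA freezes the base weights and trains only small low-rank update
# matrices injected into the attention projections.
config = LoraConfig(
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # which layers get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

Because only the tiny adapter weights are trained, adapters can be saved, shared, and layered on top of earlier ones, which is what makes fine-tuning “stackable” in Sernau's sense.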
What’s especially notable, Sernau points out, is that Vicuna-13B was released just three weeks after the leak of LLaMA-13B – an extraordinarily short timeframe that highlights the speed of open-source innovation. This pace of progress isn’t dissimilar to what has happened in the image generation space, Sernau argues, where Stable Diffusion’s open-source roots have enabled it to outpace OpenAI’s Dall-E. “Having an open model led to product integrations, marketplaces, user interfaces, and innovations that didn’t happen for Dall-E,” he notes.
Want to try Vicuna-13B? An online demo is available here.
A wake up call for Google's AI strategy
Sernau’s memo seems intended to serve as a wake-up call for Google’s AI strategy. He argues that “directly competing with Open Source is a losing proposition,” and that Google should pivot to an open-source offering that owns the ecosystem, much like its strategy behind Chrome and Android.
“The more tightly we control our models, the more attractive we make open alternatives,” Sernau concludes. “Google should establish itself a leader in the open source community, taking the lead by cooperating with, rather than ignoring, the broader conversation.”
Skeptics are not sure open source will win
The memo’s central argument has divided the technology community, with some cheering the viewpoint on. “It would be unexpected and kind of glorious if Google were the first FAANG to get totally decimated by technological competition,” tweeted one observer.
But others are not so sure. Stability AI’s CEO Emad Mostaque chimed in: “While this article fits with much of our thesis I think it has a misunderstanding of what moats actually are. It is [very] difficult to build a business with innovation as a moat, base requirement is too high. Data, distribution, great product are moats.” Microsoft and OpenAI may very well thrive, he concluded, because “the ecosystem building around OpenAI plugins is fantastic and they are leveraging Microsoft for distribution while building their own and getting super interesting data.”
Startup investor and advisor Elad Gil shared similar thoughts, recalling his days at Google when social networks were taking the world by storm. “I always remember when I was at Google in the mid 2000s and social was happening. I was at a meeting where many people (now known as social "experts") were pounding the table saying social products had no moat and would never be sticky,” he said. Ultimately, this viewpoint was proven wrong.
Raj Singh, a startup founder now working at Mozilla, also disagreed with the no moat thesis. “Owning the AI developer platform relationship which OpenAI is doing is the moat. It’s the same moat MS had with Windows developers. It’s the same moat AWS has with cloud developers.”
Security and responsible AI implications
If there is one thing everyone can agree on, however, it’s that responsible release of AI may no longer be possible as personalized, open-source models circulate in the wild. The emergence of open-source models as a highly viable alternative comes at an inopportune time for the White House, which convened leaders from Google, Microsoft, OpenAI, Anthropic, and more this week to discuss the safety and security of AI products. Open-source models that can rapidly be customized for any purpose, including criminal intent, will be difficult to regulate.
“The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Vice President Kamala Harris said in a statement. That commitment, many experts conclude, may no longer be possible to keep.