Google Researchers Unleash AI Performance Breakthrough for Mobile Devices
A new breakthrough from Google AI researchers dramatically boosts the performance of generative AI models on mobile devices, moving us closer to a future where sophisticated AI can run on smartphones.
Google AI researchers have achieved blazing-fast results for running Stable Diffusion and other AI models on mobile phones. Photo illustration: Artisana
Google AI researchers reveal a breakthrough method for accelerating generative AI models on mobile devices, with impressive results in their test of Stable Diffusion.
The breakthrough could enable complex generative AI models to run efficiently on smartphones, enhancing user experience and device capabilities.
A future where sophisticated AI applications become widely accessible and integrated into everyday handheld electronics is increasingly possible as performance improves.
April 25, 2023
As generative AI models captivate users with their ability to create poems, images, and even videos, the computational resources needed to generate this content present a challenge for smaller devices. Consequently, many generative AI models are run on either cloud-based systems or high-powered computers.
Efficiently running generative AI models on mobile devices would mark a significant advancement in capabilities. In a recent paper, a team of Google AI researchers unveiled a method to dramatically improve the performance of the Stable Diffusion 1.4 model on mobile phones. One Samsung test device was able to generate images in under 12 seconds. Remarkably, this approach can be applied to other large diffusion models, potentially accelerating generative AI performance across a wide range of mobile devices.
The Breakthrough: A GPU-Optimized Shader
Central to the research breakthrough is a series of optimizations that take advantage of a mobile phone's Graphics Processing Unit (GPU). The researchers crafted a specialized, GPU-optimized shader (a program that the GPU runs to produce rendered output) that executes in a single pass multiple intermediate steps typically required by the Stable Diffusion model. They built several GPU-specific optimizations into the shader, enabling improvements such as:
- Executing multiple computations within a single draw call
- Enhancing parallelism by performing calculations in blocks
- Optimizing threading and memory cache management during shader execution
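The core idea behind these fusions, collapsing several passes over the data into one so intermediate results never round-trip through memory, can be illustrated with a minimal sketch. This is a conceptual NumPy analogy, not the researchers' shader code, and the normalize-then-activate pipeline here is a hypothetical stand-in for the model's actual intermediate steps:

```python
import numpy as np

def unfused(x):
    # Two separate passes: the first writes a full intermediate
    # buffer that the second must read back, analogous to two
    # separate GPU draw calls with a memory round-trip between them.
    y = (x - x.mean()) / (x.std() + 1e-5)  # pass 1: normalize
    return np.tanh(y)                      # pass 2: activation

def fused(x):
    # One pass: both steps are computed together, so the
    # intermediate result never lands in a separate buffer.
    mean, std = x.mean(), x.std()
    return np.tanh((x - mean) / (std + 1e-5))

x = np.random.rand(1024).astype(np.float32)
assert np.allclose(unfused(x), fused(x), atol=1e-5)
```

On a GPU, where memory bandwidth rather than arithmetic is often the bottleneck, eliminating that intermediate read/write is what makes such fusion pay off.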
These combined optimizations yielded impressive results: overall image generation time decreased by 52% and 33% on a Samsung S23 Ultra and an iPhone 14 Pro, respectively, with memory usage also significantly reduced. As mobile phones typically have between 4 GB and 8 GB of RAM, efficient memory management is crucial for running generative AI models on smaller devices.
The GPU optimizations demonstrate strong potential for the future of generative AI models on compact devices. While the Stable Diffusion model has around 1 billion parameters, other large diffusion models like OpenAI's DALL-E boast 3.5 billion parameters. Parameter count, which refers to the number of adjustable settings a model has to learn from its training dataset, generally correlates with the complexity and power of an AI model, as well as its computational demands.
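To see why the 4 GB to 8 GB RAM budget of a typical phone is tight, a rough back-of-the-envelope estimate of weight storage alone helps. This is an illustration assuming half-precision (2-byte) weights, not figures from the paper:

```python
# Rough memory footprint of model weights alone, assuming
# half-precision (fp16) storage: 2 bytes per parameter.
BYTES_PER_PARAM_FP16 = 2

def weight_gb(num_params):
    """Gigabytes needed just to hold the weights at fp16."""
    return num_params * BYTES_PER_PARAM_FP16 / 1024**3

print(f"~1.0B params (Stable Diffusion): {weight_gb(1.0e9):.1f} GB")
print(f"~3.5B params (DALL-E):           {weight_gb(3.5e9):.1f} GB")
```

At fp16, a 1-billion-parameter model already needs close to 2 GB just for its weights, before counting activations and the rest of the system; a 3.5-billion-parameter model would consume most or all of a phone's RAM, which is why careful memory management matters as much as raw speed.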
Ultimately, the researchers emphasize that their breakthrough extends beyond merely enhancing the speed of the Stable Diffusion model. "These improvements expand the applicability of generative AI and elevate the overall user experience across a diverse range of devices," they conclude, hinting at a future where sophisticated AI models can operate swiftly on handheld electronics.