4 Million Accounts Compromised by Fake ChatGPT App
More than 4 million accounts have been compromised by an imposter ChatGPT app, raising cybersecurity concerns and spotlighting the potential dangers of generative AI's widespread popularity.
ChatGPT scams continue to draw in users, with one fake app compromising 4 million accounts. Open source photo.
🧠 Stay Ahead of the Curve
- A counterfeit ChatGPT app compromised over 4 million accounts, stealing credentials and bypassing two-factor authentication.
- The breach highlights the security risks posed by fake AI applications amid ChatGPT's unprecedented popularity.
- The incident underscores the need for stronger security measures and scrutiny as adoption of AI tools accelerates.
April 17, 2023
A fake ChatGPT application has compromised the accounts of more than 4 million users, an investigation by security firm Cyberangel has revealed. Distributed as both a Chrome extension and Windows desktop software, the counterfeit tool steals user credentials and bypasses two-factor authentication on the affected accounts.
For Facebook users, the damage has already led to the viral TikTok hashtag, #LilyCollinsHack. The fake application locks users out of their Facebook accounts and changes their name and user profile to resemble Lily Collins, the actress from the hit Netflix series “Emily in Paris.”
Cyberangel's investigation into the stolen data, accessed via an unsecured public database, revealed its stunning scope: 4 million stolen credentials in total, including over 6,000 corporate accounts, 7,000 VPN logins that could grant access to secure corporate networks, and customer logins for a wide range of software services.
ChatGPT’s Popularity Masks Criminal Schemes
Since its debut in November, ChatGPT has set records for having the fastest-growing user base of any website. By some estimates, ChatGPT gained one million users in its first week, crossing 100 million active monthly users within two months of launch.
This incredibly rapid adoption has inspired a gold rush to capitalize on ChatGPT’s popularity, attracting both well-intentioned and nefarious actors. Within days of launch, users had reverse-engineered ChatGPT’s web API and were offering native iPhone apps that imitate the ChatGPT experience.
Internet forums are flooded with users asking “how to access ChatGPT,” and software developers have been quick to release thousands of tools that utilize ChatGPT’s API, exploring the wide-ranging applications of generative AI technology. Many of these are Chrome plugins, native mobile apps, and desktop applications, allowing users to interact with ChatGPT beyond just OpenAI’s website.
Currently, three of the top twelve free productivity apps on Apple’s App Store are ChatGPT apps, many with confusing names like “Chat AI Chatbot Assistant Plus” and descriptions that don't clearly indicate their third-party nature. In-app purchases for “Plus” and “Pro” subscription tiers in these apps resemble OpenAI’s own ChatGPT Plus paid tier.
Corporations are Scrambling to Keep Up
ChatGPT’s surging popularity and the proliferation of imitator third-party apps have created a security headache for corporations, many of whose workers have unofficially begun adopting ChatGPT.
In April, the Economist Korea reported that Samsung had placed new limits on using ChatGPT after discovering employees had leaked sensitive source code and meeting notes to the chatbot. ChatGPT's data policy also states that unless users explicitly opt out, it uses their prompts to train its models, raising concerns that sensitive information could be incorporated into future versions of the chatbot.
And in February, Amazon's lawyers cautioned employees after identifying instances of ChatGPT-generated text “closely” resembling internal company data. OpenAI has refused to disclose what training data was used to build GPT-4, raising concern among AI and security researchers.
As enthusiasm for ChatGPT and AI technology continues to grow, corporations face an ongoing challenge to balance security with the adoption of innovative tools. And OpenAI itself could be vulnerable to cyber attacks or simple mishaps; in March, an OpenAI bug accidentally revealed users' chat histories to other users, leading CEO Sam Altman to apologize and explain that the company felt “awful” about what had happened.