OpenAI Responds to ChatGPT User Account Credentials Found on Dark Web
"Industry best" security practices are in place. But that won't save you from weak passwords and suspicious software installs.
6/22/2023 Update: Two days after the original publication of this story, a representative from ChatGPT's parent company, OpenAI, reached out with a statement regarding the accounts found on the Dark Web. That statement has been added after the third paragraph below, and the original headline of this story has been changed.
ChatGPT users should be wary that their personal data might've been leaked online, following the dump of more than 100,000 ChatGPT account credentials on the dark web. As reported by The Hacker News, citing Singapore-based cybersecurity company Group-IB, the compromised credentials were harvested between June 2022 and May 2023, meaning the theft is still ongoing. The U.S., France, Morocco, Indonesia, Pakistan, and Brazil appear to account for the largest share of the stolen credentials.
"The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023," a Group-IB specialist said. "The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale over the past year."
Note that this means these are stealer logs that happen to contain ChatGPT accounts, not that ChatGPT or its account holders were specifically targeted. Given the meteoric rise of the chatbot, and of AI interest in general starting late last year, it stands to reason that recently pilfered logs would contain a higher rate of ChatGPT accounts than those offered up months ago. And a representative of ChatGPT's parent organization, OpenAI, reached out to us to stress that, while it's investigating the issue, its industry-standard security practices are in place to protect its users.
“The findings from Group-IB’s Threat Intelligence report is the result of commodity malware on people’s devices and not an OpenAI breach," an OpenAI representative told Tom's Hardware via email. "We are currently investigating the accounts that have been exposed. OpenAI maintains industry best practices for authenticating and authorizing users to services including ChatGPT, and we encourage our users to use strong passwords and install only verified and trusted software to personal computers.”
The 26,802 available logs mentioned above mean that the dark web marketplace has already absorbed the user credentials — they've found their (likely) malicious buyer.
"Logs containing compromised information harvested by info stealers are actively traded on dark web marketplaces," Group-IB said. "Additional information about logs available on such markets includes the lists of domains found in the log as well as the information about the IP address of the compromised host."
The majority of the dumped credentials were found within logs connected to multiple information-stealer malware families. The Raccoon info stealer, a particularly popular malware "distribution" within the category, was used to compromise exactly 78,348 accounts. (It becomes easy to pin down exact numbers when you know what to look for in each malware type's logs.)
Raccoon seems to be the AAA equivalent of the info-stealer malware world, and a showcase of how the dark web mirrors the legitimate software business. Users can purchase access to Raccoon on a subscription basis; no coding or particularly specialized knowledge is required. This ease of deployment is part of the reason cybercrime-related offenses keep climbing. And Raccoon, like its rivals, comes bundled with extra capabilities: these subscription-based info stealers don't just harvest credentials, they also let malicious users automate follow-up attacks.
Other malware was used to steal user credentials too, of course; there's a whole field of black-hat-designed tools out there. But their numbers are far less impressive. A distant second to Raccoon was Vidar, which was used to access 12,984 accounts, while third place went to the 6,773 credentials captured through the RedLine malware.
That these credentials offer access to ChatGPT accounts should give pause to anyone using the service. Remember that it's not just your personal information that's exposed. Since the majority of users keep their chat history stored in the OpenAI application, malicious users also get access to those conversations. And that's where the real value is: in the business planning, app development, malware development (ahem), and writing happening within those chats. Both personal and professional content can be found within a ChatGPT account, from company trade secrets that shouldn't be there in the first place to personal diaries. There are even classified documents, it seems.
"Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT's standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials."
It's quite the informational heist. So remember: all passwords matter, but perhaps the security of your ChatGPT account (both at home and at work) matters more than most. Be mindful of the plugins you install in ChatGPT, use strong and unique passwords, activate two-factor authentication (2FA) where available, and follow the cybersecurity best practices that decrease the likelihood of you being successfully targeted.
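If you want to stop reusing passwords while you get around to setting up a proper password manager, even a few lines of Python will generate a strong, random credential. The sketch below is only an illustration using the standard library's secrets module; the 20-character length and the character set are arbitrary choices for this example, not anything OpenAI requires.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation.

    Note: the 20-character default is an arbitrary illustrative choice,
    not a requirement of ChatGPT or any other service.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice() draws from a cryptographically secure RNG,
    # unlike random.choice(), which is not suitable for credentials.
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())
```

Pair a unique password like this with 2FA, and a stealer log containing your ChatGPT credentials becomes far less useful to whoever buys it.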
Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.
-
Metal Messiah.
I'm pretty sure the malware has already collected a few credentials saved in browsers, bank card details, crypto wallet information, cookies, browsing history, and other information by now.
It's too late now, though. ChatGPT's popularity has made it an attractive target for bad actors on the dark web, and a hacked account now exposes confidential or sensitive information. Armageddon is upon us!
Furthermore, IMO, there is the very real danger and high possibility that users might have reused the same password for their ChatGPT account as for other online accounts. Ouch!
It's at least good to know that Samsung recently banned the use of ChatGPT and other generative AI tools. A recent Github survey revealed that a whopping 92% of developers use AI in an attempt to prevent burnout and increase productivity, which is insane!
EDIT:
How did you guys miss this news?
https://www.infosecurity-magazine.com/news/chatgpt-spreads-malicious-packages/ -
The U.S., France, Morocco, Indonesia, Pakistan, and Brazil appear to account for the largest share of the stolen credentials.
As per the data collected, these countries were impacted the most: India saw the largest number at more than 12,632 compromised accounts, with Pakistan (9,217), Brazil (6,531), Vietnam (4,771), and Egypt (4,588) rounding out the top five. -
Amdlova Brazil doesn't count, everyone already has some information on the dark web.
Serasa had some huge leaks (credit scores), exposing the whole database, social numbers, and other things... -
Anon#1234
Admin said: "Over 100,000 ChatGPT user credentials have been dumped on the dark web's markets since June 2022, something that poses a significant risk considering what can be contained within a single chat session. Over 100,000 ChatGPT Account Credentials Made Available on the Dark Web: Read more"
Me, who just uses Bing: -
ThatMouse So it's malware? That means they have your Google, Microsoft, and Apple single sign-on creds, which is far more troublesome than ChatGPT. -
bit_user
Metal Messiah. said: "A recent Github survey revealed that a whopping 92% of developers use AI in an attempt to prevent burnout and increase productivity, which is insane!"
That's a slight mischaracterization. The precise 92% quote is:
"a staggering number of developers — 92% — are already using AI tools either at work or in their leisure time." -
Exploding PSU
bit_user said: "Imagine the irony if the hackers actually used chatGPT to hack chatGPT!"
"I use the ChatGPT to destroy the ChatGPT!"
Lucky me, I use a fresh account not related to anything else for the service, and have used it for nothing but to... write diaries. Sorry for straining your servers just so I can pour out my feelings, though; it looks absolutely dystopian to me. -
IamNotChatGpt What's so special about this leak?
If you are brave naive enough to use illegal/stolen GPT creds, then you might as well scroll a pixel down, click on "Indonesia credit card leak," and use that to purchase a CGPT sub.
Like I genuinely don't get this news article / the point of this leak. -
anbello262
IamNotChatGpt said: "What's so special about this leak? If you are brave naive enough to use illegal/stolen GPT creds, then you might as well scroll a pixel down, click on 'Indonesia credit card leak,' and use that to purchase a CGPT sub. Like I genuinely don't get this news article / the point of this leak."
From the article: "Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT's standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials."
It's pretty clear in the article, in my opinion.