OpenAI Responds to ChatGPT User Account Credentials Found on Dark Web

ChatGPT on a smartphone (Image credit: Getty)

6/22/2023 Update: Two days after the original publication of this story, a representative from ChatGPT's parent company, OpenAI, reached out with a statement regarding the accounts found on the Dark Web. That statement has been added after the third paragraph below, and the original headline of this story has been changed.

ChatGPT users should be wary that their personal data may have been leaked online, following the dump of more than 100,000 ChatGPT account credentials on the dark web. As reported by The Hacker News, and according to Singapore-based cybersecurity company Group-IB, the compromised credentials appear in info-stealer logs traded between June 2022 and May 2023, meaning this is still an ongoing event. The U.S., France, Morocco, Indonesia, Pakistan, and Brazil appear to account for the largest shares of the stolen credentials.

"The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023," a Group-IB specialist said. "The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale over the past year."

Note this means these are logs with ChatGPT accounts found in them, not that ChatGPT or ChatGPT account holders were specifically targeted. Given the meteoric rise of the chatbot, and AI interest in general starting late last year, it stands to reason that recently pilfered logs would contain a higher rate of ChatGPT accounts than those offered up months ago. And a representative of ChatGPT's parent organization, OpenAI, reached out to us to stress that, while it's investigating the issue, its industry-standard security practices are in place to protect its users.

“The findings from Group-IB’s Threat Intelligence report is the result of commodity malware on people’s devices and not an OpenAI breach," an OpenAI representative told Tom's Hardware via email. "We are currently investigating the accounts that have been exposed. OpenAI maintains industry best practices for authenticating and authorizing users to services including ChatGPT, and we encourage our users to use strong passwords and install only verified and trusted software to personal computers.”

The 26,802 available logs mentioned above refer to user credentials the dark web marketplace has already absorbed; they've found their (likely malicious) buyers.

"Logs containing compromised information harvested by info stealers are actively traded on dark web marketplaces," Group-IB said. "Additional information about logs available on such markets includes the lists of domains found in the log as well as the information about the IP address of the compromised host."

The majority of the dumped credentials were found within logs connected to multiple information-stealer malware families. The Raccoon info stealer, a particularly popular "distribution" within that group, was used to compromise exactly 78,348 accounts. (Exact numbers are easy to come by when you know what to look for in each malware type.)

Raccoon seems to be the AAA equivalent of the info-stealer malware world, and a showcase of how the dark web operates as a parallel economy to ours. Users can purchase access to Raccoon on a subscription basis; no coding or particularly specialized knowledge is required. This ease of deployment is part of the reason cybercrime-related offenses keep climbing. Like other subscription-based info stealers, Raccoon also comes bundled with additional capabilities: these tools don't just steal credentials, they also let malicious users automate follow-up attacks.

Other malware was used to steal user credentials as well, of course; there's a whole field of black-hat-designed tools out there. But their numbers are far less impressive. A distant second to Raccoon was Vidar, which was used to access 12,984 accounts, while third place went to the 6,773 credentials captured through the RedLine malware.

That these credentials offer access to ChatGPT accounts should give pause to anyone using the service. Remember that it's not just access to your personal information that's at stake. Since the majority of users store their chats in the OpenAI application, malicious actors also gain access to those. And that's where the real value is: in the business planning, the app development, the malware development (uh), and the writing happening within those chats. Both personal and professional content can be found within a ChatGPT account, from company trade secrets that shouldn't be there in the first place to personal diaries. There are even classified documents, it seems.

"Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT's standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials."

It's quite the informational heist. So remember: all passwords matter, but perhaps the security of your ChatGPT account (both at home and at work) matters more than most. Be mindful of the plugins you install in ChatGPT, use strong passwords, activate two-factor authentication (2FA), and follow the cybersecurity best practices that decrease the likelihood of your being successfully targeted.

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft spot for quantum computing.