A hacker claimed to have stolen personal details from millions of OpenAI accounts, but researchers are skeptical, and the company is investigating.
OpenAI says it is investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts and put them up for sale on a dark web forum.
The pseudonymous hacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the full dataset was being offered for sale "for just a few dollars."
"I have more than 20 million gain access to codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus concurs."
If legitimate, this would be the third major security incident for the AI company since the public release of ChatGPT. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, a simpler bug involving jailbreaking prompts allowed hackers to obtain the personal data of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack took place. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the alleged sample of login credentials.
At least two were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, adding: "We have not seen any evidence that this is connected to a compromise of OpenAI systems to date."
The scope of the alleged breach raised concerns because of OpenAI's massive user base. Millions of users worldwide rely on the company's tools, such as ChatGPT, for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, commercial projects, and other sensitive information.
Until there's a final report, some precautionary steps are always a good idea:
- Go to the "Configurations" tab, log out from all connected gadgets, and enable two-factor authentication or 2FA. This makes it virtually impossible for a hacker to gain access to the account, even if the login and passwords are jeopardized.
- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This makes it easier to spot and prevent fraud.
- Always monitor the conversations stored in the chatbot's memory, and be alert to any phishing attempts. OpenAI does not ask for personal details, and any payment update is always handled through the official OpenAI.com site.