A hacker claimed to have stolen private details from millions of OpenAI accounts, but researchers are skeptical, and the company is investigating.
OpenAI says it's investigating after a hacker claimed to have stolen login credentials for 20 million of the AI company's user accounts and put them up for sale on a dark web forum.
The pseudonymous breacher posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by Gbhackers, the full dataset was being offered "for just a few dollars."
"I have more than 20 million gain access to codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus concurs."
If legitimate, this would be the third major security incident for the AI company since the release of ChatGPT to the public. In 2023, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, a simpler bug involving jailbreaking prompts allowed hackers to obtain the personal information of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack took place. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted too."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the alleged sample of login credentials.
At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, adding: "We have not seen any proof that this is connected to a compromise of OpenAI systems to date."
The scope of the alleged breach raised concerns given OpenAI's massive user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, business projects, and other sensitive information.
Until there's a final report, some precautionary measures are always recommended:
- Go to the "Settings" tab, log out from all connected devices, and enable two-factor authentication (2FA). This makes it practically impossible for a hacker to access the account, even if the login and password are compromised.
- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This way, it is easier to spot and prevent fraudulent charges.
- Always keep an eye on the conversations stored in the chatbot's memory, and be aware of any phishing attempts. OpenAI does not ask for any personal information, and any payment update is always handled through the official OpenAI.com link.