OpenAI was hacked, revealing internal secrets and raising national security concerns — year-old breach wasn't reported to the public

A hacker breached OpenAI’s internal messaging systems early last year and stole details about how the company's technologies work from discussions among employees. Although the hacker did not access the systems that house key AI technologies, the incident raised significant security concerns within the company and even prompted questions about U.S. national security, reports the New York Times.

The breach occurred in an online forum where employees discussed OpenAI's latest technologies. While the systems where the company keeps its training data, algorithms, results, and customer data were not compromised, some sensitive information was exposed. In April 2023, OpenAI executives disclosed the incident to employees and the board but chose not to make it public, reasoning that no customer or partner data had been stolen and that the hacker was likely an individual without government ties. But not everyone was happy with the decision.

Leopold Aschenbrenner, a technical program manager at OpenAI, criticized the company's security measures, suggesting they were inadequate to prevent foreign adversaries from accessing sensitive information. He was later dismissed for leaking information, a move he claims was politically motivated. Despite Aschenbrenner's claims, OpenAI maintains that his dismissal was unrelated to his concerns about security. The company acknowledged his contributions but disagreed with his assessment of its security practices. 

The incident heightened fears about potential links to foreign adversaries, particularly China. However, OpenAI believes its current AI technologies do not pose a significant national security threat. Still, leaking them to Chinese specialists could arguably help them advance their own AI technologies faster.

In response to the breach, OpenAI, just like other companies, has been enhancing its security measures. For example, OpenAI and others have added guardrails to prevent misuse of their AI applications. Also, OpenAI has established a Safety and Security Committee, including former NSA head Paul Nakasone, to address future risks.  

Other companies, including Meta, are making their AI designs open source to foster industry-wide improvements, though this also makes the technologies available to American adversaries such as China. Studies conducted by OpenAI, Anthropic, and others indicate that current AI systems are not more dangerous than search engines.

Federal and state regulations are being considered to control the release of AI technologies and impose penalties for harmful outcomes. For now, however, these measures look more like a precaution, as experts believe the most serious risks from AI are still years away.

Meanwhile, Chinese AI researchers are quickly advancing, potentially surpassing their U.S. counterparts. This rapid progress has prompted calls for tighter controls on AI development to mitigate future risks.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • CmdrShepard
    Still, leaking them to Chinese specialists could arguably help them advance their own AI technologies faster.
    If anyone actually bothered to read some research papers on AI, they would have known by now that many (if not most) authors of those research papers are Chinese nationals -- there's nothing to leak when they are the ones leading and publishing the AI research.
  • WINTERLORD
    Although I'm a proponent of privacy and such, I really think these big AI companies should do their own security. However, I think there should be a second line of defense, and that the government should help ensure these companies are secure, either through FISA or something similar running in the background.
  • bit_user
    The article said:
    Also, OpenAI has established a Safety and Security Committee, including former NSA head Paul Nakasone, to address future risks.
    When I read this part, I had a major flashback to the security team in the mini-series Devs (2020).
    https://www.imdb.com/title/tt8134186/
    OpenAI seems to have several parallels to the fictional company at the center of that series, other than the fact that they're dealing with quantum computing and not AI.
  • bit_user
    CmdrShepard said:
    If anyone actually bothered to read some research papers on AI, they would have known by now that many (if not most) authors of those research papers are Chinese nationals -- there's nothing to leak when they are the ones leading and publishing the AI research.
    But I think OpenAI isn't publishing its research, so we don't know how far ahead of academia they are.
  • jp7189
    CmdrShepard said:
    If anyone actually bothered to read some research papers on AI, they would have known by now that many (if not most) authors of those research papers are Chinese nationals -- there's nothing to leak when they are the ones leading and publishing the AI research.
    Quantity does not equal quality or meaningful advances. I'm not necessarily being specific to Chinese research, but with all the attention AI is getting, it's harder to find the jewels in a sea of meaningless regurgitation. Find a good repo on github today and tomorrow it'll have countless forks.
  • zsydeepsky
    jp7189 said:
    Quantity does not equal quality or meaningful advances. I'm not necessarily being specific to Chinese research, but with all the attention AI is getting, it's harder to find the jewels in a sea of meaningless regurgitation. Find a good repo on github today and tomorrow it'll have countless forks.

    Quality-wise, the current best open-source LLM on the Hugging Face leaderboard is Qwen2, which is from the Chinese company Alibaba:
    https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
  • bit_user
    zsydeepsky said:
    Quality-wise, the current best open-source LLM on the Hugging Face leaderboard is Qwen2, which is from the Chinese company Alibaba:
    https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
    I don't know if they still do, but they at least used to have some R&D offices (including for AI) in the USA.
  • vanadiel007
    Just look at the other AI article: China has been filing way more patents for AI than anybody else over the past 10 years.
    They are likely well ahead of other major players.

    In my opinion, rather than seeing them as the enemy, we should see them as an ally. We are fooling ourselves if we think we can sanction them into "submission" as most things we use on a daily basis are made in China.
    We have seen this during Covid, when the world came tumbling down on us because we could not produce the basic things we needed, and Covid prevented us from shipping them in like we usually do.

    We should not make the same mistake with AI, and end up with a toddler AI version while China has the adult AI version.

    And it's already happening with EVs as we speak, where China can offer them for a quarter of the cost that domestic producers are offering theirs for. We will be left behind technologically if we do not work with them.
  • bit_user
    vanadiel007 said:
    In my opinion, rather than seeing them as the enemy, we should see them as an ally.
    Just because you're nice to someone doesn't make them a friend. Turning a blind eye to IP theft and trade practices like dumping doesn't mean they'll allow you to do the same. It's just seen as a sign of weakness and makes you a target ripe for exploitation.

    vanadiel007 said:
    We are fooling ourselves if we think we can sanction them into "submission"
    That's not the only outcome. Every time there's an article about sanctions leaks, people seem all too ready to decry the sanctions as pointless and ineffective, but I doubt the sanctions would have so many detractors if they weren't actually having an effect.

    vanadiel007 said:
    most things we use on a daily basis are made in China.
    It didn't use to be that way, and it needn't be in the future.

    vanadiel007 said:
    We should not make the same mistake with AI, and end up with a toddler AI version while China has the adult AI version.
    It'll be another TikTok situation, where they keep their crown jewels locked up tight and merely rent them to us - perhaps even in some impaired capacity. They won't be giving them away, or even selling them at a price worth paying.

    vanadiel007 said:
    And it's already happening with EVs as we speak, where China can offer them for a quarter of the cost that domestic producers are offering theirs for.
    Because of dumping, and because they sewed up the rare earth metals supply & processing chain.
  • jp7189
    zsydeepsky said:
    Quality-wise, the current best open-source LLM on huggingface leaderboard is Qwen2, which is from Chinese company Alibaba:
    https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
    Yes, indeed it is. I was disputing the "quantity" part of the original argument without disputing the country of origin part.