Samsung Fab Workers Leak Confidential Data While Using ChatGPT

(Image credit: Samsung)

After Samsung Semiconductor allowed its fab engineers to use ChatGPT for assistance, some began using it to quickly fix errors in their source code, leaking confidential information such as notes from internal meetings and data related to fab performance and yields in the process. The company now plans to develop its own ChatGPT-like AI service for internal use, but for now it limits questions submitted to the service to 1,024 bytes, Economist reports.

So far, Samsung Semiconductor has recorded three instances of ChatGPT use that led to data leaks. While three may not seem like a lot, they all happened over the course of 20 days, so the situation is quite disturbing.

In one case, a Samsung Semiconductor employee submitted the source code of a proprietary program to ChatGPT to fix errors, which essentially disclosed the code of a top-secret application to an artificial intelligence run by an external company.

The second case was perhaps even more alarming: another employee entered test patterns meant to identify defective chips and asked ChatGPT to optimize them. These test sequences are strictly confidential, and optimizing them, possibly by reducing their number, can speed up silicon test and verification procedures, which significantly cuts costs.

Yet another employee used the Naver Clova application to convert a meeting recording into a document, then submitted it to ChatGPT to prepare a presentation.

These actions clearly put confidential information at risk, prompting Samsung to warn its employees about the dangers of using ChatGPT. Samsung Electronics informed its executives and employees that data entered into ChatGPT is transmitted to and stored on external servers, making it impossible for the company to retrieve it and increasing the risk of confidential information leaking. While ChatGPT is a powerful tool, the data users feed it can be used to further train the service, so sensitive information can end up exposed to third parties, which is unacceptable in the highly competitive semiconductor industry.

The conglomerate is currently preparing protective measures to prevent similar accidents in the future. If another incident occurs, even after implementing emergency information protection measures, access to ChatGPT may be blocked on the company network. Still, it's clear that things like generative AI and various AI-enabled electronic design automation tools are an important aspect of the future of chip production.

When asked about the information leakage incident, a Samsung Electronics spokesperson declined to confirm or deny the information, as it was an internal matter.

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • PlaneInTheSky
    Samsung Semiconductor let its fab engineers use ChatGPT for assistance, they started using it to quickly fix errors in their source code.

Jesus. Samsung engineers letting ChatGPT do the code might explain all the issues with Samsung SSDs as of late.
    Reply
  • atomicWAR
    PlaneInTheSky said:
Jesus. Samsung engineers letting ChatGPT do the code might explain all the issues with Samsung SSDs as of late.

    Ok that was a little funny...
    Reply
  • jkflipflop98
    They don't have their own internal AI? Amateurs. . .
    Reply
  • JamesJones44
How bad are their software developers that ChatGPT can be used to fix the code? I've played around with ChatGPT code generation; while decent, it will easily miss things that will cause problems. If ChatGPT is fixing their code, that is a very scary sign of the state of it.
    Reply
  • fritzo
    JamesJones44 said:
How bad are their software developers that ChatGPT can be used to fix the code? I've played around with ChatGPT code generation; while decent, it will easily miss things that will cause problems. If ChatGPT is fixing their code, that is a very scary sign of the state of it.
Where did it say that ChatGPT actually fixed the code? I couldn't see that information. Just trying to fix code will ofc automatically import the code. I think you are jumping to a false conclusion. The tone is somewhat aggressive. Do you not like Samsung and like to point a finger at the company?
    Reply
  • JamesJones44
    fritzo said:
Where did it say that ChatGPT actually fixed the code? I couldn't see that information. Just trying to fix code will ofc automatically import the code. I think you are jumping to a false conclusion. The tone is somewhat aggressive. Do you not like Samsung and like to point a finger at the company?

    From the article

    In one case, a Samsung Semiconductor employee submitted source code of a proprietary program to ChatGPT to fix errors, which essentially disclosed the code of a top-secret application to an artificial intelligence run by an external company.
    Reply
  • PEnns
    Well, that says lots of things about Samsung...none are good!
    Reply
  • ottonis
One would think that IT engineers would be aware of basic rules on how to protect themselves and their company from data leakage.
    What were they thinking?
    Someone at OpenAI could make a fortune by selling such information to the next best bidder.
    Reply
  • OneMoreUser
While Samsung sends its engineers on security courses and boosts its internal rules, it had better also make sure no one is using the "advanced" spell checker in anything Chrome-based, as that means edited text being sent out to Google or whoever runs the advanced spell-checker engine.
    Reply
  • Why_Me
    Good for Italy ^^

    https://www.bbc.com/news/technology-65139406
    ChatGPT banned in Italy
    Reply