The Samsung ChatGPT Incident

Case Study

How Samsung’s employees exposed sensitive data and what we can learn from it

What Happened

In April 2023, Samsung discovered that employees had been using ChatGPT to help with work tasks—and in the process, leaked sensitive proprietary data to OpenAI’s servers.

Less than 20 days after allowing ChatGPT access, Samsung experienced three separate incidents in which engineers exposed confidential company information.

3 separate data leak incidents
Fewer than 20 days after allowing ChatGPT
0 oversight or governance controls

The Three Data Leaks

What employees actually did with ChatGPT

Incident #1

Employee Role: Semiconductor Engineer

What they did: Pasted source code for semiconductor equipment into ChatGPT to help identify and fix errors

What was exposed: Proprietary source code for equipment used in Samsung’s chip manufacturing process

Their intent: The engineer wanted help debugging code faster

The result: Confidential IP sent to OpenAI’s servers with no NDAs, no data residency controls, and no ability to delete it

Incident #2

Employee Role: Hardware Engineer

What they did: Used ChatGPT to optimize code for internal testing programs used in hardware quality assurance

What was exposed: Internal testing program source code, including hardware specifications and quality control processes

Their intent: The engineer wanted to improve testing efficiency

The result: Trade secrets exposed to a third-party AI with no contractual protections

Incident #3

Employee Role: Meeting Participant

What they did: Recorded a confidential internal meeting and fed the transcript to ChatGPT to generate meeting notes

What was exposed: Confidential business discussions, strategic plans, and internal decision-making

Their intent: The employee wanted to save time on documentation

The result: Sensitive business intelligence sent to an external AI service without approval

Samsung’s Response

Immediate Ban

Samsung immediately banned ChatGPT and other generative AI tools on company devices and networks. Employees were prohibited from using AI tools for work purposes.

Internal AI Development

Rather than rely on public AI tools, Samsung announced plans to develop its own internal AI system with proper data controls and security measures.

Policy & Training

New AI usage policies were drafted, employee training was mandated, and stricter data handling protocols were implemented across the organization.

Key Lessons for Healthcare

What the Samsung incident teaches us about shadow AI risk in healthcare

1. It Happens Fast

Samsung experienced three separate leaks in less than 20 days. Shadow AI risk doesn’t accumulate slowly—it’s immediate. Every day without governance is a day of exposure.

Healthcare Implication:

In healthcare, this isn’t source code—it’s PHI. Patient names, diagnoses, treatments. The exposure is even more serious and the regulatory consequences are severe.

2. Intent Doesn’t Matter

None of these Samsung employees were malicious. They were trying to do their jobs better and faster. But intent doesn’t change the outcome—confidential data was still exposed.

Healthcare Implication:

A well-meaning physician pasting patient notes into ChatGPT to save time creates the same HIPAA violation as intentional data theft. Compliance doesn’t care about intent.

3. Smart People Make Mistakes

These were Samsung engineers—highly educated, tech-savvy professionals. They still didn’t understand the risk or think through the implications.

Healthcare Implication:

Clinical staff, even brilliant ones, aren’t cybersecurity experts. Expecting them to intuitively know ChatGPT has no BAA and retains data is unrealistic. You need controls, not just training.

4. You Can’t Retrieve the Data

Once data is sent to ChatGPT, OpenAI retains it for training unless you have a specific enterprise agreement. Samsung couldn’t undo the exposure.

Healthcare Implication:

PHI sent to ChatGPT is gone. You can’t take it back. You can’t delete it. You can only hope OpenAI’s privacy practices hold up. That’s not a compliance strategy.

5. Bans Aren’t Sustainable

Samsung’s ban was reactive, not strategic. While the company develops internal AI (which will take years), employees still need AI to compete. Shadow usage likely continues.

Healthcare Implication:

Healthcare can’t wait years for ‘perfect’ internal AI solutions. You need governance and enablement now, not prohibition and delay.

Different Context, Different Solution

Samsung’s response made sense for them
Healthcare needs a different approach

Samsung’s Approach: ban public AI tools outright and spend years building an internal replacement.

Healthcare’s Better Path:

Discover shadow AI immediately

Deploy a PHI protection layer (a minimal sketch follows below)

Enable AI with governance controls

Get to compliance in weeks, not years
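
For illustration only, here is a minimal Python sketch of what a “PHI protection layer” in front of an external AI service can look like in principle. Everything in it, including the regex patterns and the hypothetical redact_phi and send_to_ai functions, is a simplification invented for this example rather than a description of any particular product; a production layer would need far stronger PHI detection, real audit logging, and policy enforcement.

```python
import re

# Illustrative (and deliberately incomplete) patterns for PHI-like strings.
# A real PHI protection layer would use much more robust detection
# (named-entity recognition, institution-specific MRN formats, dates, etc.).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}


def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace PHI-like matches with placeholders and report what was found."""
    findings = []
    redacted = text
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[REDACTED-{label.upper()}]", redacted)
    return redacted, findings


def send_to_ai(prompt: str) -> str:
    """Gate every outbound prompt: redact PHI-like content before anything
    leaves the organization's boundary."""
    safe_prompt, findings = redact_phi(prompt)
    if findings:
        # In a real deployment this would go to an audit log or SIEM,
        # and policy might block the request entirely instead of redacting.
        print(f"PHI-like content redacted before sending: {findings}")
    # Placeholder for the actual call to an approved AI service.
    return f"(forwarded to approved AI service) {safe_prompt}"


if __name__ == "__main__":
    print(send_to_ai(
        "Summarize: patient John Doe, MRN 00123456, phone 555-123-4567, "
        "presented with chest pain."
    ))
```

The design point is the same one the Samsung incidents illustrate: the check has to sit in the path of the request, so protection does not depend on each employee remembering the policy.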

Ready to Discover Shadow AI?

The first step to AI governance is knowing what’s already being used. Our Shadow AI Risk Check provides a complete picture of your exposure in 60 minutes.
