Category: Shadow AI

  • Shadow AI Statistics

    Research & Data

    Shadow AI Statistics 2026

    Data on adoption rates, PHI exposure, and compliance risks in healthcare organizations

By The Numbers

    Data from healthcare organizations, industry surveys, and security research

    78%

    Healthcare workers use AI tools without IT approval

    Healthcare IT Security Study, 2024

    5-10

    Average number of shadow AI tools per organization

    Enterprise AI Governance Report

    0%

    Organizations with complete visibility into AI usage

    Gartner AI Governance Survey

    92%

    Organizations concerned about shadow AI risk

    HIMSS Healthcare AI Survey

    3.2M

    Average records potentially exposed per breach

    HHS Breach Portal Data

    $5.5M

Average cost of a healthcare data breach

IBM Cost of a Data Breach Report

    Adoption Trends

    Shadow AI usage is accelerating, not slowing down

    Usage Growth

    43% staff using AI

    78% staff using AI

    89% staff using AI

    Key Insight: 81% increase in just 18 months—shadow AI is becoming ubiquitous

    Department Adoption

    91% adoption

    84% adoption

    93% adoption

    Key Insight: Every department is using AI—this isn’t isolated to tech-savvy teams

    Compliance Awareness

    23% of users

    11% of users

    4% of users

    Key Insight: Most staff have no idea they’re creating compliance risk

    Risk & Impact Data

    What happens when shadow AI goes unmanaged

    PHI Exposure

    100%

    Organizations with PHI in shadow AI tools

    4.7

    Average AI tools with PHI exposure per org

    0%

    Shadow AI tools with proper BAAs

    73%

    Tools storing data on external servers

    Financial Impact

    $5.5M

    Average healthcare data breach cost

    $429

    Cost per exposed record

    277 days

    Average time to identify & contain breach

    $1.3M

    Average OCR HIPAA penalty

    Most Common Shadow AI Tools

    The AI tools most frequently discovered in healthcare organizations

    ChatGPT (OpenAI)

    Adoption Rate:

    89%

    Primary Use: Documentation, patient education, clinical summaries

    Grammarly

    Adoption Rate:

    67%

    Primary Use: Email writing, report editing, professional communication

    Claude (Anthropic)

    Adoption Rate:

    43%

    Primary Use: Appeal letters, policy analysis, complex documentation

    Gemini (Google)

    Adoption Rate:

    38%

    Primary Use: Research, data analysis, report generation

    Otter.ai / Rev.ai

    Adoption Rate:

    31%

    Primary Use: Meeting transcription, patient call documentation

    Notion AI

    Adoption Rate:

    24%

    Primary Use: Project management, note organization, team collaboration

    Jasper / Copy.ai

    Adoption Rate:

    19%

    Primary Use: Marketing content, patient communications, newsletters

    What This Data Means

    Shadow AI Is Not an Edge Case

With 78-89% adoption across all departments, this is standard operating procedure, not an isolated incident. Every organization has shadow AI.

    Staff Don’t Understand the Risk

    Only 23% of users are aware of HIPAA implications. This isn’t malicious—it’s a training and visibility problem.

    Banning Won’t Work

    Usage continues to grow despite organizational concerns. Prohibition has never worked. Governed enablement is the only path.

    The Cost of Inaction Is Real

    $5.5M average breach cost + $1.3M OCR penalties + reputational damage. The question isn’t ‘can we afford governance?’ but ‘can we afford not to?’

  • The Samsung ChatGPT Incident

    Case Study

    How Samsung’s employees exposed sensitive data and what we can learn from it

    What Happened

    In April 2023, Samsung discovered that employees had been using ChatGPT to help with work tasks—and in the process, leaked sensitive proprietary data to OpenAI’s servers.

Less than 20 days after allowing ChatGPT access, Samsung experienced three separate incidents in which engineers exposed confidential company information.

    3

    Separate data leak incidents

    <20

    Days after allowing ChatGPT

    0

    Oversight or governance controls

    The Three Data Leaks

    What employees actually did with ChatGPT

    Incident #1

    Employee Role:

    Semiconductor Engineer

    What they did:

    Pasted source code for semiconductor equipment into ChatGPT to help identify and fix errors

What was exposed:

Proprietary source code for equipment used in Samsung’s chip manufacturing process

    Their intent:

    Engineer wanted help debugging code faster

The impact:

Confidential IP sent to OpenAI servers with no NDAs, no data residency controls, and no ability to delete

    Incident #2

    Employee Role:

    Hardware Engineer

    What they did:

    Used ChatGPT to optimize code for internal testing programs used in hardware quality assurance

What was exposed:

Internal testing program source code, including hardware specifications and quality control processes

    Their intent:

    Engineer wanted to improve testing efficiency

The impact:

Trade secrets exposed to third-party AI with no contractual protections

    Incident #3

    Employee Role:

    Meeting Participant

    What they did:

    Recorded a confidential internal meeting and fed the transcript to ChatGPT to generate meeting notes

What was exposed:

Confidential business discussions, strategic plans, and internal decision-making

    Their intent:

    Employee wanted to save time on documentation

The impact:

Sensitive business intelligence sent to external AI service without approval

    Samsung’s Response

    Immediate Ban

    Samsung immediately banned ChatGPT and other generative AI tools on company devices and networks. Employees were prohibited from using AI tools for work purposes.

    Internal AI Development

Rather than rely on public AI tools, Samsung announced plans to develop its own internal AI system with proper data controls and security measures.

    Policy & Training

    New AI usage policies were drafted, employee training was mandated, and stricter data handling protocols were implemented across the organization.

    Key Lessons for Healthcare

    What the Samsung incident teaches us about shadow AI risk in healthcare

    1

    It Happens Fast

    Samsung experienced three separate leaks in less than 20 days. Shadow AI risk doesn’t accumulate slowly—it’s immediate. Every day without governance is a day of exposure.

    Healthcare Implication:

    In healthcare, this isn’t source code—it’s PHI. Patient names, diagnoses, treatments. The exposure is even more serious and the regulatory consequences are severe.

    2

    Intent Doesn’t Matter

    None of these Samsung employees were malicious. They were trying to do their jobs better and faster. But intent doesn’t change the outcome—confidential data was still exposed.

    Healthcare Implication:

    A well-meaning physician pasting patient notes into ChatGPT to save time creates the same HIPAA violation as intentional data theft. Compliance doesn’t care about intent.

    3

    Smart People Make Mistakes

    These were Samsung engineers—highly educated, tech-savvy professionals. They still didn’t understand the risk or think through the implications.

    Healthcare Implication:

    Clinical staff, even brilliant ones, aren’t cybersecurity experts. Expecting them to intuitively know ChatGPT has no BAA and retains data is unrealistic. You need controls, not just training.

    4

    You Can’t Retrieve the Data

    Once data is sent to ChatGPT, OpenAI retains it for training unless you have a specific enterprise agreement. Samsung couldn’t undo the exposure.

    Healthcare Implication:

    PHI sent to ChatGPT is gone. You can’t take it back. You can’t delete it. You can only hope OpenAI’s privacy practices hold up. That’s not a compliance strategy.

    5

    Bans Aren’t Sustainable

    Samsung’s ban was reactive, not strategic. While they develop internal AI (which will take years), employees still need AI to compete. Shadow usage likely continues.

    Healthcare Implication:

    Healthcare can’t wait years for ‘perfect’ internal AI solutions. You need governance and enablement now, not prohibition and delay.

    Different Context, Different Solution

    Samsung’s response made sense for them
    Healthcare needs a different approach

Samsung’s Approach

Ban generative AI on company devices and networks

Spend years building an internal AI system

Rely on new policies and mandatory training

Healthcare’s Better Path

Discover shadow AI immediately

Deploy PHI protection layer

Enable AI with governance controls

Get to compliance in weeks, not years

    Ready to Discover Shadow AI?

    The first step to AI governance is knowing what’s already being used. Our Shadow AI Risk Check provides a complete picture of your exposure in 60 minutes.

  • What Is Shadow AI? The Complete Guide for IT Leaders

    Problem Definition

    What Is Shadow AI?

    The complete definition, why it’s a governance crisis, and why it’s happening right now in your organization

    The Definition

    Shadow AI is the use of AI tools and services by employees without formal approval, security review, or IT oversight.

    It’s called “shadow” because it happens invisibly—outside of procurement processes, vendor management, and governance frameworks. Leadership, IT, compliance, and security teams have no visibility into what AI tools are being used, by whom, for what purposes, or what data is being shared.

    Real Examples from Healthcare

    Shadow AI isn’t theoretical—it’s happening right now across clinical, administrative, and revenue cycle teams

    Physician using ChatGPT

    What they do:

    Doctor copies patient notes into ChatGPT to generate discharge summaries, simplify medical jargon for patient education materials, or draft clinical documentation

What’s exposed:

Patient PHI (names, dates, diagnoses, treatments) sent directly to OpenAI servers

The risk:

HIPAA violation, no BAA, no audit trail, no control over data use or retention

    Admin using AI transcription

    What they do:

    Administrative staff uses free AI transcription tools (Otter.ai, Rev.ai) to document patient phone calls, insurance discussions, and appointment scheduling

What’s exposed:

Unencrypted PHI stored in third-party cloud services

The risk:

Data breach exposure, compliance violation, no vendor oversight

    Billing team using AI

    What they do:

    Billing staff uses Anthropic Claude to draft insurance appeal letters, analyze denial patterns, or generate claim documentation

What’s exposed:

Patient diagnosis codes, treatment details, and claim information shared with AI model

The risk:

No BAA, no logging, no ability to demonstrate compliance if audited


    Why Shadow AI Is Happening

    It’s not because staff are reckless. It’s because they’re trying to get work done

    AI Tools Are Incredibly Useful

    ChatGPT, Claude, and other AI tools genuinely save time and improve work quality. Staff discover them, see immediate value, and start using them—without thinking about compliance.

    No Official Alternative Exists

    Organizations haven’t provided approved, governed AI tools. Staff need AI to keep up with productivity expectations, so they use what’s available.

    Approval Processes Are Too Slow

Formal security review, vendor assessment, and procurement take months. Staff facing today’s workload won’t wait for an approval cycle, so they route around it.

    IT Doesn’t Know It’s Happening

    These are web-based SaaS tools accessed through personal accounts. They don’t show up in network logs, procurement systems, or vendor management processes.

Staff Don’t Realize It’s a Violation

    Most employees genuinely don’t realize that pasting patient information into ChatGPT is a HIPAA violation. They see it as using a productivity tool, not exposing PHI.

    Why This Is a Governance Crisis

    Not just a compliance issue. This is an existential risk for all organizations

    You Can’t Govern What You Can’t See

    Without visibility into what AI tools are being used, you have no ability to assess risk, enforce policies, or implement controls. You’re flying blind.

    Impact: Zero governance posture

    PHI Is Already Exposed

    Every time staff paste patient information into ChatGPT or Claude, PHI leaves your organization. This has already happened thousands of times.

    Impact: Ongoing HIPAA violations

    No Audit Trail Exists

    If OCR or a state attorney general asks ‘where has patient data been sent?’, you have no answer. You cannot demonstrate compliance or respond to breach investigations.

    Impact: Audit failure, regulatory action

    Banning AI Doesn’t Work

    Organizations that ban AI tools see zero reduction in shadow AI usage. Staff just hide it better. You need governed enablement, not prohibition.

    Impact: False sense of security

    The Solution: Governed Enablement

    You can’t eliminate shadow AI with bans—you eliminate it by providing a better alternative

    What Doesn’t Work

    Banning AI tools (staff use them anyway)

    Policy documents with no enforcement

    Quarterly training with no controls

    Waiting for ‘the perfect tool’ to evaluate

    Ignoring the problem and hoping it goes away

    What Works

    Discover all shadow AI usage (visibility first)

    Provide approved AI tools with automatic PHI protection

    Make the governed option easier than shadow tools

    Enforce policies through technical controls

    Continuous monitoring and enablement

    What to Do Next

    Assess Your Shadow AI Exposure

    Book a free Shadow AI Risk Check to understand what AI tools are being used in your organization, where PHI exposure is happening, and what your governance gaps are.

    Learn More About Shadow AI

    Explore our other Shadow AI resources to understand how to discover it, why AI bans fail, and what the data shows about shadow AI adoption in healthcare.

  • Why AI Bans Fail

    Strategy Guide

    Why AI Bans Fail

    Why prohibition doesn’t work and what to do instead

    The Ban Reflex

    When leadership discovers shadow AI usage, the first instinct is usually: “Ban it all until we figure this out.”

    This seems logical—if AI tools create compliance risk, prohibiting them should eliminate the risk. But in practice, AI bans don’t work. Here’s why.

    5 Reasons AI Bans Fail

    Staff Need AI to Do Their Jobs

    AI tools genuinely save time and improve work quality. Asking staff to stop using them is asking them to be less productive. In competitive, understaffed healthcare environments, that’s not realistic.

    Example:

    A physician who uses ChatGPT to summarize discharge instructions in 30 seconds instead of 10 minutes isn’t going to stop—they can’t afford to. They have 20 more patients to see.

    Outcome:

Staff continue using AI; they just hide it better

    Bans Are Unenforceable

    Most shadow AI tools are web-based, accessed through personal accounts on personal devices. How do you enforce a ban on ChatGPT when staff can access it from their phone on cellular data?

    Example:

    Even organizations that block ChatGPT on corporate networks see zero reduction in usage. Staff just switch to mobile devices or personal laptops.

    Outcome:

    Zero technical ability to prevent usage

    You Lose Visibility

    When you ban AI, staff who were openly using it (and might have self-reported) go underground. Now you have shadow AI with zero visibility instead of shadow AI you knew about.

    Example:

    Before the ban: ‘I use ChatGPT for documentation.’ After the ban: Silent usage with no admission, no tracking, no governance opportunity.

    Outcome:

    Worse visibility than before the ban

    You Can’t Compete for Talent

    Healthcare workers know AI is the future. Organizations that ban AI look out-of-touch and risk losing talent to competitors who embrace AI with proper governance.

    Example:

    Top clinicians and administrators want to work where they have modern tools. ‘We ban AI’ is not a recruiting advantage.

    Outcome:

    Talent disadvantage in competitive markets

    Bans Don’t Address the Root Problem

    The problem isn’t AI tools—it’s unmanaged AI usage. Banning tools doesn’t create governance, establish PHI protection, build policies, or enable safe AI adoption. It just delays the inevitable.

    Example:

    Eventually you’ll need to enable AI. A ban is just procrastination that makes your governance problem worse over time.

    Outcome:

    No progress toward actual solution

    Real-World AI Ban Failures

    What happens when organizations try to ban AI

    Large Health System

    Approach:

    System-wide AI ban announced via email

    Result:

    ChatGPT usage increased 34% in the following month (measured via network traffic). Staff switched to mobile devices.

    Lesson Learned:

    Bans without alternatives drive usage underground

    Multi-Specialty Practice

    Approach:

    Blocked ChatGPT and Claude at network level

    Result:

    Revenue cycle team productivity dropped 18%. Staff complained to leadership. Block was quietly removed after 3 weeks.

    Lesson Learned:

    You can’t ban tools staff depend on for productivity

    Regional Medical Center

    Approach:

    Policy document prohibiting all generative AI

    Result:

    87% of staff were unaware of the policy. Usage continued unchanged. No enforcement mechanism existed.

    Lesson Learned:

    Policy without enforcement is just paperwork

    Academic Medical Center

    Approach:

    Threatened disciplinary action for AI usage

    Result:

    Zero reduction in usage. Created hostile relationship with IT/compliance. Staff stopped reporting issues.

    Lesson Learned:

    Fear-based approaches destroy trust and visibility

    What Works Instead: Governed Enablement

    Replace prohibition with safe, controlled access

    The Ban Approach

    Prohibit all AI tools via policy

    Block ChatGPT at network level

    Threaten disciplinary action

    Hope the problem goes away

    Delay AI strategy indefinitely

    Result: Usage continues underground, zero visibility, no governance progress

    Governed Enablement

    Discover all shadow AI usage (visibility first)

    Provide approved AI tools with PHI protection

    Make governed option easier than shadow tools

    Enforce policies through technical controls

    Enable teams while managing risk

    Result: Safe AI adoption, complete visibility, staff productivity gains

    The Governed Enablement Framework

    4 steps to eliminate shadow AI without bans

    Step 1: Discover

    Map all shadow AI usage across your organization. You can’t govern what you can’t see.

    Key Actions: Anonymous surveys, department interviews, network traffic analysis

    Step 2: Protect

    Deploy automatic PHI protection that works across all AI models. Make safety invisible to end users.

    Key Actions: PHI detection & cleansing, BAAs with AI vendors, audit logging
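
To make “PHI detection & cleansing” concrete, here is a minimal, hypothetical sketch of a prompt scrubber that strips a few structured identifiers before text is allowed to reach an external AI service. Real deployments rely on clinical NER/PHI-detection services rather than regexes; the patterns and placeholder format below are illustrative assumptions only.

```python
import re

# Illustrative patterns only: regexes catch structured identifiers (SSNs, MRNs,
# phone numbers, dates) but NOT names or free-text PHI, which need a clinical
# NER model. Treat this as a sketch of where a scrubber sits, not a real one.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub_prompt(text: str) -> tuple[str, list[str]]:
    """Replace structured identifiers with placeholders and report what was
    found, so the findings can also feed the audit log mentioned in Step 2."""
    findings = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text, findings

if __name__ == "__main__":
    cleaned, hits = scrub_prompt("Pt seen 04/12/2024, MRN: 4821930, callback 555-867-5309")
    print(hits)     # ['phone', 'mrn', 'date']
    print(cleaned)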

    Step 3: Enable

    Provide approved AI tools that are better than shadow alternatives. Give staff a governed path forward.

    Key Actions: Multi-model AI platform, role-based access, training & onboarding

    Step 4: Monitor

    Continuous visibility and policy enforcement. Governance isn’t a one-time project—it’s ongoing.

    Key Actions: Usage dashboards, compliance reporting, policy updates

    The Bottom Line

    You can’t ban your way to AI governance.

    Prohibition creates shadow AI with zero visibility. Governed enablement eliminates shadow AI by providing a better, safer alternative. The choice is clear.

  • How to Discover Shadow AI in Your Organization

    Tactical Guide

    How to Discover Shadow AI

    Practical methods to inventory unauthorized AI usage across your healthcare organization

    The Discovery Challenge

    Shadow AI is designed to be invisible. These are web-based SaaS tools accessed through personal accounts, personal credit cards, and consumer-grade services.

    Traditional IT discovery methods (network monitoring, procurement records, endpoint management) won’t catch them. You need a different approach.

    5 Discovery Methods

    Combine multiple approaches to get a complete picture of shadow AI usage

    Anonymous Staff Surveys

    Easy

    1-2 weeks

    Send organization-wide surveys asking staff to self-report AI tool usage in a non-punitive, anonymous way

    How to do it:

    • Frame it as ‘helping us enable AI safely’ not ‘catching violations’
    • Ask: What AI tools do you use? How often? For what tasks?
    • Promise no individual consequences—focus on organizational learning
    • Offer small incentive (gift card raffle) for completion

    Effectiveness:

    70-80% of usage discovered

    Pros:

    Fast, cheap, builds trust

    Cons:

    Self-reported data may be incomplete

    Department Interviews

    Medium

    2-4 weeks

    Conduct structured interviews with department leaders and frontline staff across clinical, administrative, and revenue cycle teams

    How to do it:

    • Interview 2-3 people from each major department
    • Ask about productivity pain points and workarounds
    • Listen for AI tool mentions (ChatGPT, Claude, Grammarly, transcription services)
    • Document workflows where AI could be or is being used

    Effectiveness:

    60-70% of usage discovered

    Pros:

    Deep qualitative insights, relationship building

    Cons:

    Time-intensive, requires skilled interviewer

    Network Traffic Analysis

    Hard

    1 week

    Analyze DNS logs and firewall traffic to identify connections to known AI service domains

    How to do it:

    • Pull 30 days of DNS logs from your firewall/proxy
    • Search for domains: openai.com, anthropic.com, claude.ai, gemini.google.com, etc.
    • Look for unusual traffic spikes to AI service providers
    • Correlate by department, time of day, user segments

    Effectiveness:

    40-50% of usage discovered

    Pros:

    Objective data, hard evidence

    Cons:

    Misses personal devices, VPNs, encrypted traffic
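
A minimal sketch of this DNS-log scan, assuming the logs have been exported to a text file with one queried hostname per line; the file layout and the exact domain list are assumptions to adapt to your firewall or proxy vendor’s export format.

```python
from collections import Counter

# Domains named in this guide; extend the list as new tools show up in surveys.
AI_DOMAINS = ("openai.com", "anthropic.com", "claude.ai", "gemini.google.com")

def scan_dns_log(path: str) -> Counter:
    """Count queries to known AI service domains in an exported DNS log
    (assumed format: one queried hostname per line)."""
    hits = Counter()
    with open(path) as log:
        for line in log:
            host = line.strip().lower()
            for domain in AI_DOMAINS:
                if host == domain or host.endswith("." + domain):
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in scan_dns_log("dns_queries_30d.txt").most_common():
        print(f"{domain}: {count} queries in the last 30 days")
```

Correlating hits by department or time of day, as the steps above suggest, depends on whichever identity fields your export includes.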

    Browser Extension Audit

    Medium

    1 week

    If you use endpoint management, audit installed browser extensions for AI writing assistants and productivity tools

    How to do it:

    • Export list of all Chrome/Edge extensions from endpoint management
    • Flag AI-related extensions: Grammarly, Jasper, Copy.ai, Notion AI, etc.
    • Check for ChatGPT desktop apps, Claude desktop apps
    • Document which departments have highest adoption

    Effectiveness:

    30-40% of usage discovered

    Pros:

    Specific tool identification

    Cons:

    Only catches managed devices, misses web-only usage
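
A minimal sketch of the extension audit, assuming your endpoint-management tool can export a CSV with device and extension_name columns; the column names and export format differ by MDM product, so treat them as placeholders.

```python
import csv
from collections import defaultdict

# Extensions and desktop apps called out in this guide; matched loosely on name.
AI_EXTENSIONS = ("grammarly", "jasper", "copy.ai", "notion ai", "chatgpt", "claude")

def flag_ai_extensions(export_path: str) -> dict[str, set[str]]:
    """Group managed devices by the AI extensions installed on them
    (assumed CSV columns: device, extension_name)."""
    findings: dict[str, set[str]] = defaultdict(set)
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            name = row["extension_name"].lower()
            for ext in AI_EXTENSIONS:
                if ext in name:
                    findings[ext].add(row["device"])
    return findings

if __name__ == "__main__":
    for ext, devices in flag_ai_extensions("extensions_export.csv").items():
        print(f"{ext}: installed on {len(devices)} managed devices")
```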

    Credit Card & Expense Review

    Easy

    1 week

    Review corporate credit card statements and expense reports for AI tool subscriptions

    How to do it:

    • Pull 6 months of expense data
    • Search for merchant names: OpenAI, Anthropic, Jasper, Copy.ai, etc.
    • Look for recurring monthly charges ($20-50 range)
    • Note: Most shadow AI is on personal cards, so this catches <10%

    Effectiveness:

    10-20% of usage discovered

    Pros:

    Easy to run, identifies paid subscriptions

    Cons:

    Misses majority of personal-account usage
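
A minimal sketch of the expense review, assuming a CSV export with date, merchant, amount, and cardholder columns; the column names and merchant strings are assumptions, since expense systems and card statement descriptors vary.

```python
import csv

# Merchant strings named in this guide; match loosely because card statement
# descriptors rarely spell out the vendor name cleanly.
AI_MERCHANTS = ("openai", "anthropic", "jasper", "copy.ai")

def find_ai_subscriptions(export_path: str) -> list[dict]:
    """Flag expense lines whose merchant matches a known AI vendor
    (assumed CSV columns: date, merchant, amount, cardholder)."""
    flagged = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if any(vendor in row["merchant"].lower() for vendor in AI_MERCHANTS):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in find_ai_subscriptions("expenses_6mo.csv"):
        print(row["date"], row["merchant"], row["amount"], row["cardholder"])
```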

    Recommended Approach

    Combine three methods for maximum coverage

    Start with Anonymous Survey (Week 1)

    Fastest way to get broad visibility. Most staff will self-report if framed correctly.

    Run Network Traffic Analysis (Week 1-2)

    Validates survey data and catches usage staff forgot to mention or didn’t realize counted as “AI”.

    Follow Up with Department Interviews (Week 2-3)

    Deep dive into high-risk or high-usage departments to understand workflows and PHI exposure.

    Result: 80-90% coverage of shadow AI usage in 2-3 weeks, with both quantitative data and qualitative context.

    What to Document

Track each of these fields for every discovered AI tool to build a complete shadow AI inventory

• AI Tool Name (Example: ChatGPT, Claude, Gemini, Grammarly)
• Department/Team (Example: Clinical Documentation, Revenue Cycle, Admin)
• Number of Users (Example: Estimated count or percentage)
• Use Case (Example: Summarizing notes, drafting appeals, patient education)
• Data Shared (Example: Patient names, diagnosis codes, treatment details)
• PHI Exposure Level (Example: High, Medium, Low)
• Account Type (Example: Personal account, free tier, paid subscription)
• Frequency of Use (Example: Daily, weekly, occasional)
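
One way to capture these fields in a working inventory is a simple record type plus a CSV export; the field names below are a hypothetical mapping of the list above, not a required schema.

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ShadowAIToolRecord:
    """One row in the shadow AI inventory, mirroring the fields above."""
    tool_name: str        # e.g. ChatGPT, Claude, Gemini, Grammarly
    department: str       # e.g. Clinical Documentation, Revenue Cycle, Admin
    user_count: int       # estimated number of users
    use_case: str         # e.g. summarizing notes, drafting appeals
    data_shared: str      # e.g. patient names, diagnosis codes
    phi_exposure: str     # High / Medium / Low
    account_type: str     # personal account, free tier, paid subscription
    frequency: str        # daily, weekly, occasional

def write_inventory(records: list[ShadowAIToolRecord],
                    path: str = "shadow_ai_inventory.csv") -> None:
    """Write the inventory to CSV so it can be shared with leadership and compliance."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ShadowAIToolRecord)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)
```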

    What Happens After Discovery?

    Discovery is just the first step—then you need to act on what you learned

    Prioritize Risk

    Rank discovered tools by PHI exposure level, number of users, and business criticality. Focus governance efforts on highest-risk areas first.
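
Continuing the hypothetical record sketch from the discovery guide above, one illustrative way to rank the inventory; the weights are assumptions, not a standard, and business criticality could be added as a third term.

```python
def prioritize(inventory: list[ShadowAIToolRecord]) -> list[ShadowAIToolRecord]:
    """Order the inventory highest-risk first so governance effort lands where
    PHI exposure and reach are worst."""
    exposure_weight = {"High": 100, "Medium": 40, "Low": 10}

    def risk_score(rec: ShadowAIToolRecord) -> int:
        # PHI exposure dominates; user count adds reach, capped so it can't
        # outweigh the exposure level.
        return exposure_weight.get(rec.phi_exposure, 0) + min(rec.user_count, 50)

    return sorted(inventory, key=risk_score, reverse=True)
```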

    Communicate Findings

    Present shadow AI inventory to leadership with risk assessment, compliance gaps, and recommended actions. Make the invisible visible.

    Build Governance Plan

    Use discovery insights to create a roadmap: establish policies, deploy PHI protection, provide approved alternatives, and enable teams safely.