Insurance Spoke

Shadow AI in Insurance

Adjusters, underwriters, and producers are using ChatGPT and Claude without IT, compliance, or legal approval. Here is the scale, the risk, and the response that works.

What Is Shadow AI in Insurance?

Shadow AI is the use of generative AI tools — ChatGPT, Claude, Copilot, free image generators, browser extensions — by employees without IT, compliance, or legal approval.

In insurance, that means adjusters drafting denial letters in ChatGPT, underwriters running risk profiles through public LLMs, and agents using AI to summarize policyholder calls. None of it shows up in your AI inventory because none of it was approved.

The Scale of the Problem

Recent enterprise-wide surveys find unauthorized AI usage among more than 75 percent of employees

78% of U.S. workers who use AI on the job report using unapproved tools (WalkMe / Propeller, 2025)

80%+ of workers report using unapproved AI tools (UpGuard, 2024)

38% of employees have shared confidential or sensitive data with AI tools without approval (CybSafe / NCA)

4 insurance roles drive most of the shadow AI: claims, underwriting, producers, and fraud

Where Shadow AI Shows Up in Your Organization

Four roles dominate the unauthorized usage data inside insurance carriers

Claims

Claims Adjusters

Drafting denial language, summarizing recorded statements, generating settlement scripts.

Why it matters: sensitive policyholder claim details and PII are routinely pasted into public LLMs.

Underwriting

Underwriters

Summarizing risk reports, drafting decline rationales, cleaning up loss runs.

Why it matters: ECDIS (external consumer data and information sources) and applicant financial and health data are exposed.

Distribution

Producers and Agents

Drafting client emails, generating quote explanations, summarizing carrier policy language.

Why it matters: PII and proposed coverage details leak to third-party model providers.

SIU

Fraud Investigators

Generating SIU narrative sections, brainstorming red flags, drafting investigation summaries.

Why it matters: investigation details are exposed, including suspect names and confidential evidence.

Why the Standard Response (Ban It) Fails

Bans push usage onto personal devices and networks where you lose visibility entirely

1

Productivity pressure makes bans untenable

Adjusters and underwriters are measured on cycle time and book growth. AI materially improves their output. A ban puts staff in direct conflict with their own performance incentives, and they route around it.

What happens:

Adjuster benchmarks reward fast cycle times. AI cuts drafting time in half.

Why it backfires:

Staff move to personal phones or hotspots. Same data exposure, zero audit trail.

2

Personal-device workaround compounds the problem

Block AI on corporate devices and staff use personal phones. You now have zero visibility, zero audit trail, and the same client data exposure, just invisible.

What happens:

Phone-based ChatGPT with a copy-paste of the claim narrative.

Why it backfires:

Off-channel exposure that is invisible to your supervision and data-loss systems.

3

Talent risk in an AI-native market

AI-native firms attract top analysts and adjusters. If your firm bans the tools they were trained on, your recruiting pipeline narrows and your retention erodes.

What happens:

Candidates expect approved AI assistants as part of standard tooling.

Why it backfires:

Recruiting yield drops; retention erodes, especially among high performers.

The Insurance-Specific Risks Bans Do Not Solve

Shadow AI fails the documentation tests in every major insurance regime by definition

NAIC

NAIC Model Bulletin on AI

AI used in claims and underwriting falls under the AI Systems (AIS) Program governance requirements in the Model Bulletin, adopted by 24 states plus DC. Shadow AI is, by definition, not in the AIS Program.

See: /insurance/naic-model-bulletin-ai/

NYDFS

Circular Letter No. 7 (2024)

Explicitly requires fairness testing and third-party oversight of AI used in underwriting and pricing. Shadow tools fail both tests automatically.

Why it matters: Third-party tools with no contractual data-handling terms cannot pass NYDFS exam scrutiny.

Colorado

SB21-169 and Reg 10-1-1

Require documented governance of ECDIS and algorithms to prevent unfair discrimination. Shadow tools produce no documentation at all.

Why it matters: No model lineage, no fairness testing, no governance record.

Data Security

NAIC Model #668

Requires nonpublic information to be covered by the licensee's written Information Security Program. Public LLMs do not sit inside that program.

See: /insurance/insurance-data-privacy-ai/

What Governed AI Looks Like Instead

Three components, in order

1

Discovery first

Inventory what is actually in use. Surveys, network telemetry, expense-report review, and structured staff interviews — not a memo asking people to confess.
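
One concrete way to seed that inventory is a pass over web-proxy or secure-gateway logs for traffic to known AI-tool domains. The sketch below is a minimal illustration under stated assumptions: a CSV log export with "user" and "host" columns (a hypothetical schema) and a short, non-exhaustive domain list, both placeholders to adapt to your own stack.

    # Minimal sketch: count requests to known generative-AI domains per user.
    # Assumes a proxy-log CSV export with "user" and "host" columns; the
    # schema and the domain list below are illustrative placeholders.
    import csv
    from collections import Counter

    AI_DOMAINS = {
        "chat.openai.com", "chatgpt.com", "claude.ai",
        "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
    }

    def scan_proxy_log(path: str) -> Counter:
        """Count AI-tool requests per user in a proxy log export."""
        hits: Counter = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                host = row["host"].lower().removeprefix("www.")
                if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                    hits[row["user"]] += 1
        return hits

    if __name__ == "__main__":
        for user, count in scan_proxy_log("proxy_export.csv").most_common(10):
            print(f"{user}\t{count} AI-tool requests")

Telemetry like this only catches corporate-network usage; it will not see the personal-device workaround described above, which is why surveys and structured interviews belong in the same pass.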

2

Sanctioned platform

A multi-model, governed AI platform that gives adjusters, underwriters, and agents the same speed benefits with PII / PHI redaction, audit logs, and a BAA where required.
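
As one illustration of what the redaction layer does, here is a minimal pattern-based masking sketch that could run before a prompt leaves the carrier's boundary. The patterns and the POL- policy-number format are assumptions for illustration; real platforms typically layer an NER model and domain dictionaries on top of rules like these.

    # Minimal sketch: mask pattern-bound PII before a prompt reaches a model.
    # Patterns are illustrative; the POL- policy-number format is hypothetical.
    import re

    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
        "POLICY_NO": re.compile(r"\bPOL-\d{6,10}\b"),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with typed placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    prompt = "Claimant Jane Roe, SSN 123-45-6789, policy POL-0042917."
    print(redact(prompt))
    # Claimant Jane Roe, SSN [SSN], policy [POLICY_NO].

Note that the claimant's name survives the pass: rules alone miss free-text PII, which is one reason a governed platform pairs redaction with audit logs rather than relying on either alone.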

3

Enablement and policy

Role-based training, a written AI usage policy, and an annual review tied to your AIS Program documentation. See /insurance/insurance-ai-enforcement/ for what audit readiness looks like.

Shadow AI in Insurance — FAQ

What is shadow AI in insurance?

Shadow AI is the use of generative AI tools (ChatGPT, Claude, Copilot, browser extensions, image generators) by adjusters, underwriters, producers, and other insurance staff without approval from IT, compliance, or legal. It creates unsanctioned data exposure and regulatory blind spots.

How widespread is shadow AI inside insurance carriers?

Enterprise-wide research suggests that roughly 78 percent of workers who use AI on the job rely on unapproved tools, and 38 percent of employees have shared confidential data through them. Insurance shows the same pattern; document-heavy roles like claims and underwriting drive adoption faster than IT can sanction tools.

Why doesn't banning AI work in insurance?

Bans push usage onto personal devices and networks where the carrier loses visibility entirely. The data exposure continues; the audit trail disappears. Banning AI also does not satisfy NAIC Model Bulletin governance requirements — regulators expect documented oversight, not the absence of usage.

What regulations apply to shadow AI in insurance?

The NAIC Model Bulletin on the Use of AI Systems by Insurers (Dec. 2023, adopted in 24 states plus DC), NAIC Model #668 (Insurance Data Security Model Law), state-specific rules like NYDFS Circular Letter No. 7 and Colorado Reg 10-1-1, and HIPAA for health insurers. Shadow AI fails every documentation requirement in these regimes by definition.

Discover the Shadow AI Inside Your Carrier

Free 6-week Shadow AI Risk Check. Inventory the unsanctioned tools in use, quantify the regulatory exposure, and build the sanctioned alternative.