Insurance Data Privacy and AI
A single underwriting AI call can fall under three privacy regimes at once. Ignore any of them and you create exposure in all of them.
Three Privacy Regimes Converge on Insurance AI
A single underwriting AI call inside a U.S. insurance carrier can fall under three different privacy regimes at once — NAIC Model #668, HIPAA, and state consumer privacy laws. An AI deployment that ignores any of the three creates exposure in all three.
The Three Privacy Regimes
Different statutes, different obligations — same AI workload
NAIC Model #668
Insurance Data Security Model Law. Applies to all licensees handling nonpublic information. Adopted in roughly 24 states.
Why it matters: Every AI tool touching nonpublic information sits inside the licensee's written Information Security Program.
HIPAA Privacy & Security Rules
Health insurance carriers, HMOs, Medicare and Medicaid managed-care, and employer group health plans are HIPAA covered entities.
Why it matters: AI vendors handling PHI are business associates and must execute BAAs.
CCPA / CPRA, CPA, VCDPA…
California, Colorado, Virginia, Connecticut, Utah, and the growing list of state consumer-data laws apply to insurance just like every other industry holding PII.
Why it matters: These laws vary in their treatment of automated decision-making and in the consumer rights they grant.
NAIC Model #668 — The Data Security Baseline
What the Insurance Data Security Model Law requires of every licensee
Written Information Security Program
Every licensee maintains an ISP covering nonpublic information across systems, vendors, and processes.
Why it matters: AI vendors are not exempt — they belong in the same program.
Risk Assessments
Ongoing risk assessments of nonpublic information assets, including AI-enabled workflows.
Why it matters: Annual cycle; documentation must survive examination.
Third-Party Service Provider Oversight
Vendors handling nonpublic information are inside the program — contractual data-handling terms, security assessments, and ongoing monitoring required.
Why it matters: Shadow AI vendors are unreviewed third parties handling nonpublic information.
Cybersecurity Event Investigation and Notification
Investigation procedures and notification to the commissioner no later than 72 hours after determining a cybersecurity event has occurred.
Why it matters: AI-driven exposures count; you need detection on the AI surface.
HIPAA — The Layer Health Insurers Cannot Ignore
Health insurance carriers, HMOs, Medicare and Medicaid managed-care programs, and employer group health plans are HIPAA covered entities by definition. That means AI vendors handling PHI are business associates, and a BAA is required before any PHI flows into the AI vendor's systems.
Public LLMs like consumer ChatGPT cannot satisfy this — they explicitly disclaim BAA coverage.
The proposed 2025 HIPAA Security Rule update (NPRM issued January 6, 2025) tightens encryption, risk management, and resilience expectations — all of which apply to AI systems processing PHI.
Where AI Breaks Privacy If You Are Not Careful
Four common shadow-AI failure modes that violate one or more regimes
Prompt logging on public LLMs
Most public LLMs retain prompts by default. Every adjuster's ChatGPT session that contains policyholder PII becomes vendor-side data exposure.
Why it matters: Outside your ISP, outside your BAA chain, outside any auditable retention policy.
Training on customer inputs
Some LLM providers train on user inputs unless explicitly opted out — making your nonpublic information part of someone else's model.
Why it matters: Once data is absorbed into training, you cannot claw it back — there is no mechanism to delete it from the model.
No BAA = HIPAA violation
For health insurers, using a public LLM with PHI is a HIPAA breach the moment the data crosses the API boundary.
Why it matters: OCR investigations do not require an external breach — the missing safeguard is itself the violation.
No third-party oversight
Every shadow AI vendor is a third party handling nonpublic information without ever having been reviewed.
Why it matters: #668 third-party service-provider obligations apply whether or not you know about the vendor.
The Privacy Governance Framework That Works
Five sequenced steps from inventory to documented controls
Inventory every AI tool that touches nonpublic information
Sanctioned and shadow. The list is the foundation for every regime's documentation requirement.
For health insurers — contractually require BAAs from every AI vendor
No BAA, no PHI access. Make this the gate, not a checkbox.
Implement PII / PHI redaction at the input layer
Data is cleansed before it reaches any LLM, not after. The model never sees identifiers.
Add AI vendors to the #668 third-party service provider inventory
Risk assessments, contractual confidentiality terms, ongoing monitoring on the same cadence as your other vendors.
Document consumer-rights workflows for AI-driven decisions
Some states require explanation of automated decisions. Build the workflow before someone exercises the right.
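Step three above — redaction at the input layer — can be sketched in a few lines. This is a minimal, illustrative filter only: the patterns, labels, and the POL- policy-number format are assumptions, and production deployments layer NER-based detection (to catch names and free-text identifiers that regexes miss) on top of patterns like these.

```python
import re

# Hypothetical identifier patterns; a real program would tune these to its
# own data and add NER for names, addresses, and diagnoses.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "POLICY_NO": re.compile(r"\bPOL-\d{6,10}\b"),  # assumed policy-number format
}

def redact(text: str) -> str:
    """Replace each detected identifier with a typed placeholder
    before the prompt leaves your boundary for any LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Claimant SSN 123-45-6789, policy POL-4471820, callback 555-867-5309."
print(redact(prompt))
# The model receives typed placeholders instead of raw identifiers.
```

The design point is the placement, not the patterns: redaction runs before the API boundary, so even a logging or training misconfiguration on the vendor side exposes only placeholders.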
Insurance Data Privacy and AI — FAQ
Is NAIC Model #668 about AI?
No. NAIC Model #668 is the Insurance Data Security Model Law — a cybersecurity and data-security regime. The NAIC's AI-specific instrument is the December 2023 Model Bulletin on the Use of AI Systems by Insurers. AI tools handling nonpublic information sit inside both regimes simultaneously.
Are health insurance carriers HIPAA-covered entities?
Yes. HIPAA's covered-entity definition expressly includes health plans (insurers, HMOs, Medicare and Medicaid managed-care, employer group health plans). AI vendors that handle PHI on a health insurer's behalf are business associates and must execute BAAs.
Can health insurers use public ChatGPT with claim narratives?
No. Public ChatGPT does not provide a BAA and disclaims HIPAA coverage in its terms of service. Pasting any PHI into public consumer LLMs is a HIPAA breach the moment the data crosses the API boundary. Health insurers need enterprise-grade tooling with BAAs in place.
Do state privacy laws affect insurance AI?
Yes, though the GLBA exemption shields insurance from some state privacy law obligations on nonpublic personal information already governed by GLBA. Residual obligations exist around automated decision-making transparency and consumer rights workflows, and several states (notably California SB 1120) have insurance-specific AI restrictions.
Related Resources
Continue across the silo or bridge to a core hub
NAIC Model Bulletin on AI
How #668 (data security) and the Model Bulletin (AI governance) complement each other
State Insurance AI Enforcement
What examiners want on third-party vendor due diligence
Shadow AI in Insurance
Where unsanctioned AI use breaks #668 third-party vendor controls
HIPAA & AI Compliance
The HIPAA layer health insurers can't ignore — BAAs, audit logs, breach notification
SOC 2 and HIPAA AI Platforms
Platform-level controls that satisfy both #668 and HIPAA simultaneously
Build a Privacy-First AI Program
A free Shadow AI Risk Check covers your AI inventory, BAA gaps, #668 third-party vendor file, and PII redaction posture.