Insurance Spoke

State Insurance Department AI Enforcement

The question is no longer whether your AI deployments will be examined. It is which department will examine them first, and what they will find.

Enforcement Has Shifted From "Wait and See" to Active Priority

For most of the past decade, AI in insurance lived in a regulatory gray zone. That is over.

As of late 2025, 24 states plus the District of Columbia have adopted the NAIC Model Bulletin on the Use of AI Systems by Insurers — typically with little or no material change from the December 2023 NAIC text. Several have moved beyond adoption to active examination.

The Enforcement Landscape

Four jurisdictions are setting the bar

24 + DC: states with the NAIC Model Bulletin adopted as of late 2025
NY CL-7: NYDFS Circular Letter No. 7, the most-cited state rule on AI in underwriting and pricing
CO 10-1-1: most prescriptive on documentation; broadest line-of-business coverage
CA SB 1120: first state-level substantive prohibition on AI medical-necessity denials

Four Jurisdictions Setting the Bar

Each has translated the NAIC framework into examination expectations

New York

NYDFS Circular Letter No. 7

Effective July 11, 2024. The most-cited state-level rule on AI in underwriting and pricing. Requires written governance of artificial intelligence systems (AIS) and external consumer data and information sources (ECDIS), fairness testing, actuarial validity, transparency to applicants, and third-party vendor oversight.

Why it matters: NYDFS examiners now treat AIS / ECDIS as a market conduct focus.

Connecticut

Bulletin MC-25

Effective February 26, 2024. Adopted the NAIC Model Bulletin and explicitly named AI Systems as a market conduct examination focus.

Why it matters: Carriers operating in CT should expect AI governance documentation requests in their next exam cycle.

Colorado

Regulation 10-1-1

Governance and risk-management framework required by December 1, 2024 for life insurers, with auto and health plans phased in through October 15, 2025. SB21-169 is the underlying anti-discrimination statute.

Why it matters: Most prescriptive documentation rules; broadest line-of-business coverage in the country.

California

SB 1120

Effective September 28, 2024. AI cannot be the sole decision-maker for medical-necessity denials by health plans or disability insurers. Human clinician decision-making required.

Why it matters: First state-level substantive prohibition on AI use, not just process governance.

What a Market Conduct Exam With AI in Scope Looks Like

Examiners typically request these seven artifacts, in this order

1. The AIS Program document

Written governance, scope, risk-tiering of every AI System, and a named governance owner.

2. The AI System inventory

Every model — internal, vendor, ECDIS — with risk classification.

3. Fairness testing reports

Pre-deployment and ongoing testing, with mitigation documentation where disparate impact was identified.

4. Third-party vendor due diligence files

Business associate agreements (BAAs) where applicable, security assessments, and data lineage documentation.

5. Decision logs and audit trails

A sample of AI-driven decisions with full traceability — user, model version, inputs, output, override.

6. Consumer disclosure documentation

For states that require AI use to be disclosed to applicants (e.g., NYDFS CL-7).

7. Adverse action procedures

How human override works, and how staff are trained on when to override.
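The traceability elements in item 5 can be sketched as a single log record. This is a minimal illustration, not a regulator-mandated schema: the class name, field names, and example values are assumptions chosen to cover the elements examiners sample for (user, model version, inputs, output, override).

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical decision-log record. Field names are illustrative;
# the point is that every AI-driven decision is reconstructable:
# who triggered it, which model version ran, on what inputs,
# what it returned, and whether a human overrode it.
@dataclass
class DecisionLogEntry:
    decision_id: str
    timestamp: str          # ISO 8601, UTC
    actor: str              # user or service account that triggered the decision
    model_name: str
    model_version: str      # pin the exact production version
    inputs: dict            # features / ECDIS fields presented to the model
    output: str             # e.g. "approve", "refer", "deny"
    human_override: bool = False
    override_reason: str = ""

    def to_json(self) -> str:
        # Stable key order makes logs diff-friendly for auditors.
        return json.dumps(asdict(self), sort_keys=True)

entry = DecisionLogEntry(
    decision_id="D-2025-000123",
    timestamp=datetime.now(timezone.utc).isoformat(),
    actor="underwriter_42",
    model_name="uw_risk_score",
    model_version="3.1.4",
    inputs={"credit_tier": "B", "territory": "CT-06"},
    output="refer",
    human_override=True,
    override_reason="Applicant documentation supports standard tier",
)
print(entry.to_json())
```

Writing each entry as one self-describing JSON line keeps the trail queryable for the sampling exercise examiners actually run: pull N decisions, replay who/what/when for each.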

Private Litigation Runs Parallel to Regulatory Exams

Huskey v. State Farm (N.D. Ill., filed 2022) is the leading active algorithmic-discrimination class action against an insurer, alleging that State Farm's claims algorithms produced disparate scrutiny and delayed payments for Black homeowners. The case has survived motion practice and remains in litigation as of 2026.

The litigation theory — disparate impact from algorithmic decisions — applies equally in underwriting. Carriers facing market conduct exam findings on fairness testing should expect class-action plaintiffs to use those same findings as the basis for civil claims.

Audit-Readiness Checklist

Carriers operating in the four jurisdictions above should be able to produce, on request

All of the following are required.

Written AIS Program

Covers AI development, acquisition, and use across the licensee.

AI System Inventory

With risk-tier classifications for every model and ECDIS source.

Annual Fairness Testing

Reports for every AI System affecting consumer outcomes.

Vendor BAAs

Business associate agreements for every AI vendor handling PHI on behalf of a health insurer.

Model #668 Vendor File

Third-party service provider inventory under the NAIC Insurance Data Security Model Law (#668).

Decision Logs

Retained per applicable record-retention rules (HIPAA requires six years).

Human-Override Procedures

Documented escalation for adverse AI decisions, with named owners.

Applicant Disclosure

Language where state rules require AI / ECDIS use to be disclosed to applicants.

Staff Training Records

Covering AI governance roles, with completion tracked and refreshed annually.

State Insurance AI Enforcement — FAQ

Which states have adopted the NAIC Model Bulletin on AI?

By late 2025, 24 states plus DC had adopted the bulletin. Adopters include Connecticut, Vermont, Rhode Island, Pennsylvania, Kentucky, Maryland, Nebraska, Oklahoma, North Carolina, Massachusetts, Delaware, New Jersey, and Hawaii, among others. Most adoptions track the NAIC text closely.

Is AI a market conduct exam focus?

Yes in several states. Connecticut's Bulletin MC-25 explicitly states that AI Systems will be a focus of market conduct examinations. NYDFS has signaled the same for Circular Letter No. 7 compliance. Carriers operating in those states should expect AI governance documentation requests in their next exam cycle.

What is California SB 1120 and who does it apply to?

SB 1120 (signed September 28, 2024) restricts AI in utilization review for health plans and disability insurers in California. It requires human clinician decision-making for medical-necessity denials. AI alone cannot deny coverage — a substantive prohibition, not just process governance.

What documents will a state insurance examiner ask for on AI?

Examiners typically request the written AIS Program, the AI System inventory with risk tiers, fairness testing reports, third-party vendor due diligence files, decision logs and audit trails, consumer disclosure documentation, and adverse action procedures. Missing any of these creates exam findings.

Get Audit-Ready Before Your Next Market Conduct Exam

A free Shadow AI Risk Check audits your AIS Program, vendor file, fairness-testing posture, and decision-log readiness in six weeks.