State Insurance Department AI Enforcement
The question is no longer whether your AI deployments will be examined. It is which department will examine them first, and what they will find.
Enforcement Has Shifted From "Wait and See" to Active Priority
For most of the past decade, AI in insurance lived in a regulatory gray zone. That is over.
As of late 2025, 24 states plus the District of Columbia have adopted the NAIC Model Bulletin on the Use of AI Systems by Insurers — typically with little or no material change from the December 2023 NAIC text. Several have moved beyond adoption to active examination.
The Enforcement Landscape
Four Jurisdictions Setting the Bar
Each has translated the NAIC framework into examination expectations
NYDFS Circular Letter No. 7
Effective July 11, 2024. The most-cited state-level rule on AI in underwriting and pricing. Requires written governance of AI systems (AIS) and external consumer data and information sources (ECDIS), fairness testing, actuarial validity, transparency to applicants, and third-party vendor oversight.
Why it matters: NYDFS examiners now treat AIS / ECDIS as a market conduct focus.
Connecticut Bulletin MC-25
Effective February 26, 2024. Adopted the NAIC Model Bulletin and explicitly named AI Systems as a market conduct examination focus.
Why it matters: Carriers operating in CT should expect AI governance documentation requests in their next exam cycle.
Colorado Regulation 10-1-1
Governance and risk-management framework required by December 1, 2024 for life insurers, with auto and health plans phased in through October 15, 2025. SB21-169 is the underlying anti-discrimination statute.
Why it matters: Most prescriptive documentation rules; broadest line-of-business coverage in the country.
California SB 1120
Signed September 28, 2024; effective January 1, 2025. AI cannot be the sole decision-maker for medical-necessity denials by health plans or disability insurers. A human clinician must make the final determination.
Why it matters: First state-level substantive prohibition on AI use, not just process governance.
What a Market Conduct Exam With AI in Scope Looks Like
Examiners typically request these seven artifacts, in this order
The AIS Program document
Written governance covering scope, risk-tiering of every AI System, and a named governance owner.
The AI System inventory
Every model — internal, vendor, ECDIS — with risk classification.
Fairness testing reports
Pre-deployment and ongoing testing, with mitigation documentation where disparate impact was identified.
Third-party vendor due diligence files
Business associate agreements (BAAs) where applicable, security assessments, data lineage documentation.
Decision logs and audit trails
Sample of AI-driven decisions with full traceability — user, model version, inputs, output, override.
Consumer disclosure documentation
For states that require AI use to be disclosed to applicants (e.g., NYDFS CL-7).
Adverse action procedures
How human override works, and staff training on when to exercise it.
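The decision-log artifact above can be sketched as a minimal append-only record. This is an illustrative schema, not one mandated by any regulator; every field name and value here is an assumption chosen to mirror the traceability elements examiners sample (user, model version, inputs, output, override).

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    """One AI-driven decision, with the traceability fields examiners sample."""
    decision_id: str
    timestamp: str          # ISO 8601, UTC
    user: str               # human or service account that invoked the model
    model_name: str
    model_version: str
    inputs: dict            # features presented to the model
    output: str             # model recommendation
    human_override: bool    # whether a human reversed the recommendation
    final_decision: str

def log_decision(entry: DecisionLogEntry) -> str:
    """Serialize to one JSON line, suitable for append-only audit storage."""
    return json.dumps(asdict(entry), sort_keys=True)

# Hypothetical underwriting decision where a human overrode the model
entry = DecisionLogEntry(
    decision_id="D-0001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    user="underwriter_42",
    model_name="uw_risk_score",
    model_version="2.3.1",
    inputs={"age_band": "35-44", "territory": "CT-04"},
    output="refer",
    human_override=True,
    final_decision="approve",
)
print(log_decision(entry))
```

The point of the flat, one-line-per-decision format is that a sampled decision can be reproduced end to end without joining across systems.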
Private Litigation Runs Parallel to Regulatory Exams
Huskey v. State Farm (N.D. Ill., 2022) is the leading live algorithmic discrimination class action against an insurer, alleging State Farm's claims algorithms produced disparate scrutiny and delayed payments for Black homeowners. The case has survived motion practice and remains in litigation as of 2026.
The litigation theory — disparate impact from algorithmic decisions — applies equally in underwriting. Carriers facing market conduct exam findings on fairness testing should expect class-action plaintiffs to use those same findings as the basis for civil claims.
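As a concrete illustration of the disparate-impact theory, one widely used screening metric is the adverse impact ratio, borrowed from the EEOC's four-fifths rule in employment testing. No state bulletin mandates this particular metric, and the approval counts below are hypothetical; it is shown only to make the fairness-testing concept tangible.

```python
def adverse_impact_ratio(favorable_a: int, total_a: int,
                         favorable_b: int, total_b: int) -> float:
    """Ratio of the protected group's favorable-outcome rate to the
    reference group's rate. Values below ~0.8 are a common screening flag."""
    rate_a = favorable_a / total_a   # protected group
    rate_b = favorable_b / total_b   # reference group
    return rate_a / rate_b

# Hypothetical underwriting approvals: 60/100 vs. 90/100 applicants
ratio = adverse_impact_ratio(60, 100, 90, 100)
print(f"{ratio:.2f}")  # 0.67, below the 0.8 screening threshold
```

A ratio below the threshold does not prove discrimination; it is the trigger for the mitigation documentation that examiners expect to see alongside the testing report.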
Audit-Readiness Checklist
Carriers operating in the four jurisdictions above should be able to produce, on request
Written AIS Program
Covers AI development, acquisition, and use across the licensee.
AI System Inventory
With risk-tier classifications for every model and ECDIS source.
Annual Fairness Testing
Reports for every AI System affecting consumer outcomes.
Vendor BAAs
For every AI vendor handling PHI on behalf of a health insurer.
Vendor File
Third-party service provider inventory under NAIC Model #668.
Decision Logs
Retained per applicable record-retention rules (HIPAA requires six years).
Human-Override Procedures
Documented escalation for adverse AI decisions, with named owners.
Applicant Disclosure
Language where state rules require AI / ECDIS use to be disclosed to applicants.
Staff Training Records
Covering AI governance roles, completion tracked, refreshed annually.
State Insurance AI Enforcement — FAQ
Which states have adopted the NAIC Model Bulletin on AI?
By late 2025, 24 states plus DC had adopted the bulletin. Adopters include Connecticut, Vermont, Rhode Island, Pennsylvania, Kentucky, Maryland, Nebraska, Oklahoma, North Carolina, Massachusetts, Delaware, New Jersey, and Hawaii, among others. Most adoptions track the NAIC text closely.
Is AI a market conduct exam focus?
Yes in several states. Connecticut's Bulletin MC-25 explicitly states that AI Systems will be a focus of market conduct examinations. NYDFS has signaled the same for Circular Letter No. 7 compliance. Carriers operating in those states should expect AI governance documentation requests in their next exam cycle.
What is California SB 1120 and who does it apply to?
SB 1120 (signed September 28, 2024) restricts AI in utilization review for health plans and disability insurers in California. It requires human clinician decision-making for medical-necessity denials. AI alone cannot deny coverage — a substantive prohibition, not just process governance.
What documents will a state insurance examiner ask for on AI?
Examiners typically request the written AIS Program, the AI System inventory with risk tiers, fairness testing reports, third-party vendor due diligence files, decision logs and audit trails, consumer disclosure documentation, and adverse action procedures. Missing any of these creates exam findings.
Related Resources
Continue across the silo or bridge to a core hub
NAIC Model Bulletin on AI
The AIS Program — the document examiners ask for first
AI Claims Processing Governance
Where Huskey v. State Farm and the claims-side disparate-impact theory live
AI Underwriting Compliance
ECDIS, fairness testing, and the four exam artifacts that get you ready
OCR AI Enforcement
The federal-enforcement parallel from healthcare — what state regulators learned from OCR
AI Observability and Compliance
Decision logs and audit trails — the technical layer of exam readiness
Get Audit-Ready Before Your Next Market Conduct Exam
A free Shadow AI Risk Check audits your AIS Program, vendor file, fairness-testing posture, and decision-log readiness in six weeks.