AuthenTech AI

Shadow AI in Law Firms

Associates draft briefs in ChatGPT. Paralegals classify privilege through Claude. Partners summarize depositions in Gemini. None of it shows up on the firm's tool inventory.


What Is Shadow AI in a Law Firm?

Shadow AI is the use of generative AI tools — ChatGPT, Claude, Gemini, free image generators, browser extensions — by attorneys and staff without firm IT, GC, or risk-management approval.

In law firms, that means associates drafting briefs in ChatGPT, partners summarizing depositions through public LLMs, and paralegals using AI to scrub document productions. None of it shows up on the enterprise tool inventory because none of it went through enterprise procurement.

How Fast Adoption Is Outrunning Governance

Thomson Reuters Institute 2025 Generative AI in Professional Services Report

26% of legal organizations actively used GenAI in 2025 (up from 14% in 2024).
78% of law-firm respondents expect GenAI to be central within five years.
52% of legal organizations have no GenAI policy.
64% have received no GenAI training.

Where Shadow AI Shows Up Inside a Firm

Four roles dominate unauthorized usage data inside law firms

Associates

Drafting and Research

ChatGPT and Claude used to draft briefs, generate research memos, summarize cases.

Why it matters: Privileged client materials, work product, and case theory end up in public LLM logs.

Paralegals

Document Review

AI used to summarize document productions, classify privilege, generate review tags.

Why it matters: Client documents and privilege determinations leak into vendor-side data.

Partners

Litigation Workflow

Summarizing depositions, drafting demand letters, generating settlement scripts.

Why it matters: Witness names, case theories, and privileged communications cross the API boundary.

BD / Marketing

Business Development

Drafting pitches, summarizing prospective client materials, comparing competitor profiles.

Why it matters: Conflicts data and pricing strategy leak to third-party AI providers.

The Three Risk Vectors That Make Legal Different

Generic shadow-AI exposure compounded by privilege, sanctions, and ethics-opinion risk

Privilege

Attorney-Client Privilege

ABA Formal Opinion 512 (July 29, 2024) interprets Rule 1.6 to require evaluation of disclosure risk before inputting client information into a GenAI tool, and informed consent where retention or training is permitted.

Why it matters: ABA Jurimetrics (Spring 2024): submission to a public LLM whose terms permit retention likely waives privilege.

Sanctions

Hallucinations

Attorneys have been sanctioned for AI-fabricated citations in at least five matters: Mata v. Avianca, People v. Crabill, Park v. Kim, Wadsworth v. Walmart, and Goldberg Segalla (Dec. 2025).

Why it matters: Sanctions have issued in federal and state courts, at trial and on appeal; supervising partners are now sanctioned alongside the filing attorney.

Ethics

State Bar Opinions

California, Florida Op. 24-1, NYC Bar Formal Op. 2024-5, DC Bar Op. 388, and Texas Op. 705 all impose competence, confidentiality, supervision, and candor duties on GenAI use.

Why it matters: Shadow tools fail the competence duty (no understanding of the tool) and the confidentiality duty (no vendor due diligence).

Why Bans Do Not Fix It

ABA Op. 512 makes the duty to govern AI — not the duty to forbid AI — the Rule 1.1 obligation

1

Bans push usage onto personal devices and connections

Block public LLMs at the network layer and attorneys use personal devices on personal connections. The data exposure continues; the audit trail disappears.

What happens:

Associate's phone with ChatGPT and a copy-paste of the brief outline.

Why it backfires:

Same Rule 1.6 confidentiality risk, zero supervisory visibility.

2

Bans fail under ABA Op. 512's competence framework

Rule 1.1 imposes an affirmative duty to gain reasonable understanding of GenAI tools used in legal practice. A ban does not satisfy that duty — it avoids the question.

What happens:

Partner tells associates "do not use AI" but never trains them on what AI does or does not do.

Why it backfires:

Firm fails the competence duty regardless of whether the tools are technically blocked.

3

Talent and client pressure make bans untenable

AI-native firms attract top associates and clients increasingly expect AI-enabled work. A firm that bans the tools narrows its recruiting pipeline and erodes its competitive position.

What happens:

Lateral candidates ask about AI tooling during interviews. Clients ask about AI use in matter staffing.

Why it backfires:

Ban-driven attrition while exposure continues through workarounds.

The Governance Approach That Works

Four sequenced steps that satisfy ABA Op. 512's competence, confidentiality, and supervision duties

1

Inventory first

Discover what is actually in use through network telemetry, billing-system review (some firms reimburse personal AI subscriptions), and structured staff interviews.

2

Enterprise-grade sanctioned tooling

Westlaw AI / CoCounsel, Lexis+ AI, Harvey, or a governed multi-model platform with contractual confidentiality terms.

3

Written GenAI policy

Covers allowed tools, prohibited inputs (client identifiers, privileged communications without consent), supervision expectations, and verification requirements.

4

Training under Rule 1.1

ABA Op. 512 makes attorney AI competence a Rule 1.1 duty. Annual training on tool capabilities, limits, and verification protocols.
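The inventory step above often starts with network telemetry. A minimal sketch of what that first pass can look like, assuming a CSV proxy-log export with `user` and `host` columns; the domain list here is illustrative, not exhaustive, and real proxies and firewalls expose far richer telemetry than this.

```python
import csv
import io
from collections import Counter

# Domains associated with consumer GenAI tools (illustrative, not exhaustive).
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
}

def count_genai_hits(log_rows, domain_field="host"):
    """Tally requests to known GenAI domains from an iterable of
    dict-style proxy-log rows (e.g. csv.DictReader output)."""
    hits = Counter()
    for row in log_rows:
        host = row.get(domain_field, "").lower()
        if host in GENAI_DOMAINS:
            hits[host] += 1
    return hits

# Example: a two-column CSV export (user, host) from a web proxy.
sample = io.StringIO(
    "user,host\n"
    "a.smith,chatgpt.com\n"
    "a.smith,claude.ai\n"
    "b.jones,chatgpt.com\n"
    "c.lee,westlaw.com\n"
)
print(count_genai_hits(csv.DictReader(sample)))
# → Counter({'chatgpt.com': 2, 'claude.ai': 1})
```

Domain counts only prove traffic exists; pairing the same tally with the `user` column shows which practice groups are driving usage, which is what the structured staff interviews then follow up on.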

Shadow AI in Law Firms — FAQ

What is shadow AI in a law firm?

Shadow AI is the use of generative AI tools (ChatGPT, Claude, Gemini, etc.) by attorneys and staff without firm IT, GC, or risk-management approval. It creates unsanctioned data exposure, ethics-rule violations, and privilege-waiver risk.

How widespread is GenAI use in law firms today?

Thomson Reuters Institute's 2025 survey found 26% of legal organizations actively using GenAI (up from 14% in 2024), with 78% of law-firm respondents expecting GenAI to be central within five years. But 52% of legal organizations have no GenAI policy and 64% provide no training — most usage is happening outside any governance framework.

Can a firm ban ChatGPT and call the problem solved?

No. Bans push usage onto personal devices and connections, where the firm loses visibility entirely. Bans also fail under ABA Formal Opinion 512, which interprets the duty of competence (Rule 1.1) as requiring lawyers to understand and govern AI tools used in their practice — not to forbid them outright.

What is the difference between sanctioned GenAI and shadow GenAI in legal practice?

Sanctioned tooling has contractual confidentiality terms with the vendor, no training on firm inputs, documented data handling, and is covered by the firm's written GenAI policy. Shadow tools have none of those — they are consumer-grade products whose terms typically permit prompt retention and may permit training, creating the confidentiality and privilege risks ABA Op. 512 was written to address.

Related Resources

Continue across the silo or bridge to a core hub

Attorney-Client Privilege and AI: whether inputting client information into ChatGPT waives privilege.

ABA Formal Opinion 512: how seven Model Rules apply to lawyers' GenAI use.

AI Hallucinations in Legal Practice: five sanctioned attorneys since Mata v. Avianca and the two-layer defense.

Shadow AI Hub: the cross-industry primer on unsanctioned AI use and the demand it signals.

Why AI Bans Fail: why blocking AI on corporate devices does not protect the firm.

Discover the Shadow AI Inside Your Firm

Free Shadow AI Risk Check inventories the unsanctioned tools in use, maps the privilege and ethics exposure, and builds the sanctioned alternative.

© 2026 AuthenTech AI, LLC. All rights reserved.

7300 State Highway 121, Suite 300 McKinney TX 75070