Shadow AI in Law Firms
Associates draft briefs in ChatGPT. Paralegals classify privilege through Claude. Partners summarize depositions in Gemini. None of it shows up on the firm's tool inventory.
What Is Shadow AI in a Law Firm?
Shadow AI is the use of generative AI tools — ChatGPT, Claude, Gemini, free image generators, browser extensions — by attorneys and staff without firm IT, GC, or risk-management approval.
In law firms, that means associates drafting briefs in ChatGPT, partners summarizing depositions through public LLMs, and paralegals using AI to scrub document productions. None of it shows up on the enterprise tool inventory because none of it was procured through enterprise channels.
How Fast Adoption Is Outrunning Governance
26% of legal organizations are actively using GenAI (up from 14% in 2024), yet 52% have no GenAI policy and 64% provide no training (Thomson Reuters Institute, 2025 Generative AI in Professional Services Report).
Where Shadow AI Shows Up Inside a Firm
Four workflows dominate unauthorized-usage data inside law firms
Drafting and Research
ChatGPT and Claude used to draft briefs, generate research memos, summarize cases.
Why it matters: privileged client materials, work product, and case theory end up in public LLM logs.
Document Review
AI used to summarize document productions, classify privilege, generate review tags.
Why it matters: client documents and privilege determinations leak into vendor-side data stores.
Litigation Workflow
Summarizing depositions, drafting demand letters, generating settlement scripts.
Why it matters: witness names, case theories, and privileged communications cross the API boundary.
Business Development
Drafting pitches, summarizing prospective client materials, comparing competitor profiles.
Why it matters: conflicts data and pricing strategy leak to third-party AI providers.
The Three Risk Vectors That Make Legal Different
Generic shadow-AI exposure compounded by privilege, sanctions, and ethics-opinion risk
Attorney-Client Privilege
ABA Formal Opinion 512 (July 29, 2024) interprets Rule 1.6 to require evaluation of disclosure risk before inputting client information into a GenAI tool, and informed consent where retention or training is permitted.
Why it matters: per Jurimetrics (ABA, Spring 2024), submission to a public LLM whose terms permit retention likely waives privilege.
Hallucinations
Five named attorneys (Mata v. Avianca, People v. Crabill, Park v. Kim, Wadsworth v. Walmart, Goldberg Segalla Dec. 2025) have been sanctioned for AI-fabricated citations.
Why it matters: Federal and state, trial and appellate; supervising partners now sanctioned alongside the filer.
State Bar Opinions
California, Florida Op. 24-1, NYC Bar Formal Op. 2024-5, DC Bar Op. 388, and Texas Op. 705 all impose competence, confidentiality, supervision, and candor duties on GenAI use.
Why it matters: shadow tools fail the competence duty (no one understands the tool) and the confidentiality duty (no one vetted its data handling).
Why Bans Do Not Fix It
ABA Op. 512 makes governing AI, not forbidding it, the Rule 1.1 obligation
Bans push usage onto personal devices and connections
Block public LLMs at the network layer and attorneys use personal devices on personal connections. The data exposure continues; the audit trail disappears.
What happens:
Associate's phone with ChatGPT and a copy-paste of the brief outline.
Why it backfires:
Same Rule 1.6 confidentiality risk, zero supervisory visibility.
Bans fail under ABA Op. 512's competence framework
Rule 1.1 imposes an affirmative duty to gain reasonable understanding of GenAI tools used in legal practice. A ban does not satisfy that duty — it avoids the question.
What happens:
Partner tells associates "do not use AI" but never trains them on what AI does or does not do.
Why it backfires:
Firm fails the competence duty regardless of whether the tools are technically blocked.
Talent and client pressure make bans untenable
AI-native firms attract top associates and clients increasingly expect AI-enabled work. A firm that bans the tools narrows its recruiting pipeline and erodes its competitive position.
What happens:
Lateral candidates ask about AI tooling during interviews. Clients ask about AI use in matter staffing.
Why it backfires:
Ban-driven attrition while exposure continues through workarounds.
The Governance Approach That Works
Four sequenced steps that satisfy ABA Op. 512's competence, confidentiality, and supervision duties
Inventory first
Discover what is actually in use: network telemetry, expense and billing review (some firms unknowingly reimburse personal AI subscriptions), and structured staff interviews.
Enterprise-grade sanctioned tooling
Westlaw AI / CoCounsel, Lexis+ AI, Harvey, or a governed multi-model platform with contractual confidentiality terms.
Written GenAI policy
Covers allowed tools, prohibited inputs (client identifiers, privileged communications without consent), supervision expectations, and verification requirements.
Training under Rule 1.1
ABA Op. 512 makes attorney AI competence a Rule 1.1 duty. Annual training on tool capabilities, limits, and verification protocols.
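The "inventory first" step can be partially automated. A minimal sketch, assuming the firm's web proxy emits CSV logs with `user` and `host` columns and using a hypothetical, non-exhaustive list of consumer GenAI domains (both the log format and the domain list are illustrative assumptions, not a specific firm's configuration):

```python
import csv
from collections import Counter

# Illustrative list of consumer GenAI endpoints to flag; extend as needed.
GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai",
}

def inventory_genai_hits(proxy_log_path):
    """Count hits per (user, domain) in a CSV proxy log with
    'user' and 'host' columns (an assumed log format)."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match the domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits
```

Telemetry of this kind catches only on-network, on-device usage; it is the starting point for the expense review and staff interviews, not a substitute for them.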
Shadow AI in Law Firms — FAQ
What is shadow AI in a law firm?
Shadow AI is the use of generative AI tools (ChatGPT, Claude, Gemini, etc.) by attorneys and staff without firm IT, GC, or risk-management approval. It creates unsanctioned data exposure, ethics-rule violations, and privilege-waiver risk.
How widespread is GenAI use in law firms today?
Thomson Reuters Institute's 2025 survey found 26% of legal organizations actively using GenAI (up from 14% in 2024), with 78% of law-firm respondents expecting GenAI to be central within five years. But 52% of legal organizations have no GenAI policy and 64% provide no training — most usage is happening outside any governance framework.
Can a firm ban ChatGPT and call the problem solved?
No. Bans push usage onto personal devices and connections, where the firm loses visibility entirely. Bans also fail under ABA Formal Opinion 512, which interprets the duty of competence (Rule 1.1) as requiring lawyers to understand and govern AI tools used in their practice — not to forbid them outright.
What is the difference between sanctioned GenAI and shadow GenAI in legal practice?
Sanctioned tooling has contractual confidentiality terms with the vendor, no training on firm inputs, documented data handling, and is covered by the firm's written GenAI policy. Shadow tools have none of those — they are consumer-grade products whose terms typically permit prompt retention and may permit training, creating the confidentiality and privilege risks ABA Op. 512 was written to address.
Related Resources
Continue across the silo or bridge to a core hub
Attorney-Client Privilege and AI
Whether inputting client information into ChatGPT waives privilege
Read article →

ABA Formal Opinion 512
How seven Model Rules apply to lawyers' GenAI use
Read article →

AI Hallucinations in Legal Practice
Five sanctioned attorneys since Mata v. Avianca and the two-layer defense
Read article →

Shadow AI Hub
The cross-industry primer on unsanctioned AI use and the demand it signals
Read article →

Why AI Bans Fail
Why blocking AI on corporate devices does not protect the firm
Read article →

Discover the Shadow AI Inside Your Firm
Free Shadow AI Risk Check inventories the unsanctioned tools in use, maps the privilege and ethics exposure, and builds the sanctioned alternative.