Attorney-Client Privilege and AI
A partner pastes a client memo into ChatGPT. An associate uploads a contract to Claude. Each action raises the same question — does that input waive privilege?
The Question Every Firm Has — and Few Have Answered
Does inputting client information into a third-party AI service waive attorney-client privilege?
The current answer — likely yes if the tool's terms permit retention or training, absent client consent or an enterprise-grade agreement with adequate confidentiality protections. No published U.S. court has squarely held this yet, but every major ethics authority that has addressed the question has reached the same conclusion.
The ABA Framework — Formal Opinion 512 (July 2024)
Opinion 512 reads Rule 1.6 to require three things before client information enters a GenAI tool
Evaluate the Risk
Evaluate the risk that information relating to the representation will be disclosed to or accessed by others outside the firm before inputting client information into a GenAI tool.
Obtain Informed Consent
Obtain informed client consent before inputting client-identifying information into a self-learning or third-party-hosted GenAI tool that may use the input for training or has inadequate retention terms.
Investigate the Tool
Investigate the tool's terms of service, data handling, and training practices as part of the Rule 1.1 competence duty.
The Privilege-Waiver Analysis — Jurimetrics, Spring 2024
ABA scholarship in Jurimetrics reaches the cleanest analytic conclusion to date. Attorney-client privilege protects only communications kept confidential. Submitting a privileged communication to a public LLM whose terms permit retention or training is a voluntary disclosure to a third party. And voluntary disclosure likely waives the privilege as to those communications.
Case law has not caught up to the technology yet, but the framework is a direct application of well-established privilege doctrine. A counterparty discovering AI use in a deposition and arguing waiver is a legitimate, predictable litigation tactic.
Where State Bars Have Landed
The opinions converge on the same set of duties — due diligence, client consent, verification, supervision
California — Practical Guidance (Nov. 16, 2023)
First state-level guidance. Lawyers must understand the GenAI tools they use, protect confidential information, verify outputs, and disclose AI use to clients when material.
Florida — Bar Opinion 24-1 (Jan. 19, 2024)
Florida lawyers may use GenAI but must safeguard confidentiality, supervise the technology like a nonlawyer assistant, ensure billing reflects actual costs, and comply with advertising rules.
New York City Bar — Formal Opinion 2024-5
Existing duties of competence, confidentiality, supervision, and candor extend fully to GenAI. Requires reasonable verification and informed-consent analysis before inputting client information.
D.C. Bar — Ethics Opinion 388 (April 2024)
Guidance on competent and confidential use of GenAI, aligning with ABA, California, and Florida positions.
Texas — Opinion 705 (Feb. 2025)
Lawyers must maintain human oversight of GenAI work product, verify all citations, protect confidential information, and bill reasonably for AI-assisted work.
ABA — Formal Opinion 512 (Jul. 2024)
The de facto national framework. Interprets Rule 1.6 across consent, due diligence, and Rule 1.1 competence duties. State opinions either adopt it or react to it.
The Terms-of-Service Problem
Three contractual provisions determine whether a tool can satisfy Rule 1.6
No Training on Firm Inputs
Enterprise-grade tooling typically guarantees this; consumer-grade tooling typically does not.
Why it matters: Without this, every firm prompt becomes vendor-side training data.
Retention Limits
Short retention windows, customer-controlled deletion, no human review of inputs.
Why it matters: Indefinite retention is functionally equivalent to vendor archival of privileged communications.
Subprocessor Controls
Transparency on who else touches the data, with appropriate confidentiality terms downstream.
Why it matters: Subprocessor opacity makes Rule 1.6 due diligence impossible.
The Enterprise vs. Public Boundary
Public ChatGPT, public Claude, and public Gemini fail on at least one of those three provisions by default. Enterprise versions of the same products materially improve the analysis. Legal-specific products (Westlaw AI / CoCounsel, Lexis+ AI, Harvey) typically include the necessary contractual terms.
Every firm needs to review the actual agreement before deploying. The tier the tool falls into is what drives the Rule 1.6 analysis — not the brand on the box.
Controls Every Firm Needs
Five operational requirements that translate the ethics framework into practice
Written GenAI policy
Classifies tools by tier and identifies acceptable inputs per tier. Distinguishes enterprise-grade from consumer-grade explicitly.
Engagement letter language
Addresses GenAI use and obtains consent appropriate to the firm's tooling. Blanket consent in the engagement letter is defensible for enterprise tooling with adequate terms.
Tool inventory with diligence files
Every enterprise tool has a contractual diligence file. Subprocessor lists, retention terms, training opt-outs preserved for audit.
Verification workflow
AI output is checked before client delivery. Citation verification, factual accuracy, and confidentiality screening before anything leaves the firm.
Training under Rule 1.1
Every attorney handling client information understands the tools' capabilities, limits, and known failure modes. Annual refresh required.
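The verification workflow among the controls above can likewise be sketched as a pre-delivery gate. The check names are invented for this sketch, and each check is assumed to be performed and recorded by a human reviewer.

```python
# Hypothetical pre-delivery gate for AI-assisted work product.
# Each required check mirrors one item in the verification workflow:
# citation verification, factual accuracy, confidentiality screening.
REQUIRED_CHECKS = ("citations_verified", "facts_verified", "confidentiality_screened")

def ready_for_client(review: dict[str, bool]) -> bool:
    """AI output leaves the firm only when every required check is recorded True."""
    return all(review.get(check, False) for check in REQUIRED_CHECKS)

# Confidentiality screening not yet recorded, so this draft is held back.
draft_review = {"citations_verified": True, "facts_verified": True}
```

The design choice is a default-deny gate: a missing check blocks delivery rather than being assumed complete.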
Attorney-Client Privilege and AI — FAQ
Does using ChatGPT with client information waive attorney-client privilege?
Likely yes if the tool's terms permit retention or training, absent client consent or an enterprise-grade agreement with adequate confidentiality protections. No published U.S. court has squarely held this yet, but ABA Formal Opinion 512, NYC Bar Formal Opinion 2024-5, California's Practical Guidance, and ABA scholarship in Jurimetrics all reach this conclusion.
What does ABA Formal Opinion 512 require regarding client confidentiality and AI?
Lawyers must evaluate the risk of disclosure before inputting client information into a GenAI tool, obtain informed client consent before using self-learning or third-party-hosted tools with inadequate retention terms, and investigate the tool's terms of service and data handling as part of the Rule 1.1 competence duty.
Can a firm use consumer ChatGPT with client matters?
Generally not without specific client consent. Consumer ChatGPT's terms permit retention and may permit training on inputs, making any client-information input arguably a voluntary disclosure that waives privilege. ChatGPT Enterprise, with proper contractual terms, materially improves the analysis.
What is the difference between enterprise-grade and consumer-grade GenAI tools for confidentiality purposes?
Enterprise-grade tools typically include contractual prohibitions on training, customer-controlled retention, transparency on subprocessors, and audit-friendly data handling. Consumer-grade tools typically default to indefinite retention and (in some cases) opt-out training. Rule 1.6 analysis turns largely on which tier the tool falls into.
Has any U.S. court held that AI use waives privilege?
Not squarely, as of May 2026. The authority is ethics-opinion-based rather than caselaw-based. But the analytic framework is direct application of well-established privilege doctrine, and discovery disputes over AI use are an emerging litigation surface.
Related Resources
Continue across the silo or bridge to a core hub
ABA Formal Opinion 512
Rule 1.6 confidentiality and the consent framework Op. 512 created
Shadow AI in Law Firms
How unsanctioned AI use compounds privilege exposure
AI Hallucinations in Legal Practice
Rule 3.3 candor obligations when AI fabricates citations
Multi-Model AI Access
Enterprise-grade multi-model access with contractual confidentiality terms
SOC 2 AI Platforms
Platform-level controls that satisfy Rule 1.6 without per-matter consent
Build a Privilege-Protective GenAI Program
Free Shadow AI Risk Check audits your tool tier, your engagement letter language, your verification workflow, and your Rule 1.1 training posture.