AI & Confidentiality

Privilege, Confidentiality, and AI: What Your Firm Actually Needs to Worry About

Why most AI risk at law firms is a Rule 1.6 question, plus a use-classification worksheet for matching each AI use to its required controls.

When law firms talk about AI risk, the word "privilege" comes up often. It is usually the wrong frame. Privilege — the attorney-client communication doctrine, or work product protection — is a relatively narrow evidentiary concept. Confidentiality is much broader. And confidentiality is where the actual exposure lives for most firms using AI tools today.

Getting this distinction right is not academic. It determines which questions your firm needs to be asking, which policies you need to write, and which AI tools require more scrutiny than others.

The Distinction That Matters

Attorney-client privilege

Privilege protects confidential communications between an attorney and client made for the purpose of obtaining or providing legal advice. It is an evidentiary doctrine: it protects those communications from compelled disclosure in legal proceedings. Privilege can be waived — generally by voluntary disclosure to a third party outside the privilege relationship.

Whether submitting client communications to an AI tool constitutes a waiver of privilege is a live legal question. Courts have not uniformly addressed it, jurisdictions vary, and no broad rule can be reliably stated here. This requires attorney judgment applied to the specific facts, tool, and use case — and in sensitive matters, careful analysis before submission.

Confidentiality under Rule 1.6

Model Rule 1.6 is far broader than privilege. It covers all information relating to the representation of a client — regardless of whether the information is privileged, regardless of whether the client shared it in confidence, and regardless of the source. A fact learned from a public filing about your client is still covered by Rule 1.6. So is information a third party shared in the context of your representation.

When you submit that information to an AI tool, you are disclosing information relating to the representation of a client to a third-party system. Whether that disclosure is permitted, and what precautions are required, is a Rule 1.6 question — not primarily a privilege question.

What This Looks Like in Practice

An attorney drafting a demand letter pastes the key facts from a client matter — parties, dates, opposing conduct, claimed damages — into a general-purpose AI chatbot to get a draft. The AI produces something useful. The confidentiality question does not arise from the output. It arises from the input. Those client facts were submitted to a third-party system operating under that tool's terms of service. Whether the attorney understood what those terms permit — whether the firm has a policy governing this use — is the issue that ABA Formal Opinion 512 and Rule 1.6 require the firm to have resolved before it happens.

Most of the actual risk does not live in sophisticated privilege analysis. It lives in this moment: an attorney or paralegal using a general-purpose AI tool with client facts because the firm has not yet established which tools are approved, for which uses, under what conditions.

ABA Formal Opinion 512: The Five Duties

ABA Formal Opinion 512 is the most current formal ABA guidance on attorney use of generative AI. It identifies five professional duties as they apply to AI use — and for each, a specific gap a firm without AI governance is likely carrying.

Competence
What it requires for AI use: Attorneys must understand what the tool does and its known failure modes — including that AI systems generate plausible-sounding but incorrect citations.
Governance gap if unaddressed: AI-generated citations used without independent verification. Output relied on in matters the attorney has not checked substantively.

Confidentiality
What it requires for AI use: Attorneys must verify what the tool does with submitted information: training use, retention, vendor staff access, and data-handling terms must all be confirmed before client information is submitted.
Governance gap if unaddressed: Client facts submitted to unapproved tools under terms that permit training use, indefinite retention, or vendor staff access with no contractual restrictions.

Supervision
What it requires for AI use: Non-attorney staff using AI tools require attorney supervision under Rules 5.1 and 5.3. The policy must state who supervises what, and at what review threshold.
Governance gap if unaddressed: Paralegals and intake staff using AI tools with client information without defined attorney oversight or review standards.

Fees
What it requires for AI use: AI that substantially reduces time on billed work raises fee-reasonableness and transparency questions. Firms should address how AI use affects billing practices.
Governance gap if unaddressed: No position on whether AI efficiency is passed to clients, absorbed by the firm, or reflected in rates — leaving each attorney to handle billing questions individually.

Communication
What it requires for AI use: Engagement agreements should address AI use. Clients may need to be informed that AI tools are used in their representation. Candor obligations apply when AI-generated content appears in filings, with jurisdiction-specific disclosure rules varying.
Governance gap if unaddressed: No stated position on client disclosure; engagement letters silent on AI; no protocol for jurisdiction-specific court disclosure requirements.

These are not questions with universal answers. They require attorney judgment applied to the specific firm, tool, and use case. But they are the right questions — and a firm that cannot answer them for the AI tools currently in use has a governance gap that needs closing.

Where the Actual Risk Is

Submitting client information to any AI tool — through a prompt, a chat interface, or a document upload — is a disclosure to a third-party system. For unmanaged consumer accounts on general-purpose AI tools, the analysis is generally unfavorable: consumer-tier terms typically permit the provider to use inputs for model training and improvement, retention is at the provider's discretion, there are no firm-level administrative controls, and there are no negotiated data-handling terms. Business or enterprise accounts on the same platforms may address some of these concerns — but the tier label is not the clearance. What matters is whether specific provisions are actually in place: training-use exclusion confirmed in contract, retention and deletion terms verified, a data processing addendum executed. ABA Formal Opinion 477R provides the underlying framework: reasonable precautions under Rule 1.6(c) require a fact-specific analysis of the information's sensitivity, the tool's data practices, and the protections actually in place.
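The point that the tier label is not the clearance can be sketched in code: clearance turns on each provision actually being verified, never on the plan name. This is a minimal hypothetical illustration; the `VendorReview` fields and `is_cleared` function are assumptions for the sketch, not a prescribed checklist.

```python
from dataclasses import dataclass

# Hypothetical record of what has actually been verified for a tool,
# kept separate from whatever the vendor calls the plan.
@dataclass
class VendorReview:
    tool: str
    tier_label: str                  # vendor's marketing label only
    training_excluded: bool          # training-use exclusion confirmed in contract
    retention_terms_verified: bool   # retention and deletion terms verified
    dpa_executed: bool               # data processing addendum executed

def is_cleared(review: VendorReview) -> bool:
    """Clearance depends on verified provisions, never on the tier label."""
    return (review.training_excluded
            and review.retention_terms_verified
            and review.dpa_executed)

# An "enterprise" plan with nothing verified is still not cleared:
unverified = VendorReview("ExampleAI", "enterprise", False, False, False)
verified = VendorReview("ExampleAI", "enterprise", True, True, True)
print(is_cleared(unverified))  # False
print(is_cleared(verified))    # True
```

Note that `tier_label` never appears in `is_cleared`: the check passes or fails on the contractual provisions alone, which is the whole point of the paragraph above.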

AI note-takers and transcription tools require separate treatment. Beyond the standard vendor data-handling questions, they involve two distinct concerns that do not apply to other AI tools. First, recording law: recording a conversation requires consent under applicable state law, and state laws vary significantly — some require all-party consent, others require only one-party consent. The consent requirement applies regardless of whether the recording is processed by AI. Second, professional disclosure: whether clients should be informed that AI transcription is in use is a professional responsibility question independent of recording-law compliance. The communication duty in Opinion 512 is relevant here. Disclosure is generally the professionally appropriate default, particularly for early-stage client calls. These tools must be reviewed before client use — not after the firm discovers informal use has been ongoing.

AI Use Risk-Classification Worksheet

Not every AI use presents the same confidentiality risk. The worksheet below maps four use types against their information classification, required controls, and key questions — so the firm can match the level of scrutiny to the actual exposure before any tool is deployed.

Use this when evaluating a new AI use or reviewing whether existing uses are appropriately controlled. Uses involving client information on an approved tool require vendor diligence before approval (see ai-tool-due-diligence). Uses involving client information on an unapproved tool are prohibited without escalation.

Administrative / internal
(firm policies, marketing, scheduling, professional development, non-matter staff communications)
Information in scope: No client information.
Minimum controls required: General tool approval; no DPA required.
Key questions before use: Is any client information in scope, even incidentally? Would output be used in client work?
Where to go next: firm-ai-policy for approval process and tool list.

Sanitized / client-adjacent
(legal research on hypothetical facts, template drafting, generic outline generation without matter-specific content)
Information in scope: Non-identifying; no matter-specific content.
Minimum controls required: General approval; explicit prohibition on including client-identifying or matter-specific content in prompts or uploads.
Key questions before use: Is any identifying information present, even inadvertently? Is this use type documented in the approved-use list?
Where to go next: ai-use-by-role for role-specific limits on sanitized use.

Client information — approved tool
(matter drafting with client facts, document review, transcript summarization, client-communication drafting with matter context)
Information in scope: Client-identifying; matter-specific.
Minimum controls required: Vendor diligence completed; DPA executed; training-use exclusion confirmed in writing; attorney supervision of AI output before use.
Key questions before use: Has this specific tool and plan passed full vendor diligence? Who reviews AI output before it is used or submitted? Is this use documented in the approved-tool list?
Where to go next: ai-tool-due-diligence for the diligence checklist.

Client information — unapproved tool
(consumer accounts on general-purpose AI tools, personal accounts, any tool not on the firm's approved list)
Information in scope: Any client information.
Minimum controls required: Prohibited. No exceptions without affirmative approval from the policy owner, with specific use and tool documented.
Key questions before use: N/A — escalate to the policy owner before proceeding.
Where to go next: firm-ai-policy for the exception and escalation process.
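To make the classification mechanical rather than ad hoc, the four use types can be encoded as a simple lookup that maps a proposed use to its minimum controls before any tool is touched. This is an illustrative sketch only; the category keys, function name, and control strings are assumptions that mirror the worksheet, not a real firm's policy.

```python
# Illustrative encoding of the worksheet: each use type maps to its
# minimum required controls, matching the four categories above.
CONTROLS = {
    "administrative": [
        "general tool approval; no DPA required",
    ],
    "sanitized": [
        "general approval",
        "explicit prohibition on client-identifying or matter-specific content",
    ],
    "client_info_approved_tool": [
        "vendor diligence completed",
        "DPA executed",
        "training-use exclusion confirmed in writing",
        "attorney supervision of AI output before use",
    ],
    "client_info_unapproved_tool": [
        "PROHIBITED - escalate to the policy owner before proceeding",
    ],
}

def required_controls(client_info: bool, client_adjacent: bool,
                      tool_approved: bool) -> list[str]:
    """Map a proposed AI use to the worksheet category it falls in."""
    if client_info:
        key = ("client_info_approved_tool" if tool_approved
               else "client_info_unapproved_tool")
    else:
        key = "sanitized" if client_adjacent else "administrative"
    return CONTROLS[key]

# A consumer chatbot (unapproved tool) fed client facts lands in the
# prohibited category regardless of how useful the output would be:
print(required_controls(client_info=True, client_adjacent=True,
                        tool_approved=False))
```

The design choice worth noting: the first question asked is whether client information is in scope at all, which matches the worksheet's ordering and keeps the prohibited path unreachable by accident from the sanitized one.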

What This Worksheet Connects To

The worksheet defines the classification system. The rest of the AI & Confidentiality cluster operationalizes it: ai-tool-due-diligence covers the vendor review required before approving any tool for client information; ai-use-by-role maps which roles may perform which uses; firm-ai-policy covers the governance structure — the approval process, tool list, supervision standards, and policy maintenance — that makes the classification system functional rather than theoretical. The ABA's 2024 AI adoption data suggests attorney comfort with AI is growing while firm-level policies remain significantly underdeveloped. The gap between individual attorney adoption and formal firm governance is where most actual risk lives.

This article is not legal advice and does not constitute legal or ethics guidance. The rules and opinions referenced are provided for educational orientation only. Rules of professional conduct vary by jurisdiction; ABA formal opinions address the Model Rules, which individual states may have adopted with modifications. Whether submitting content to a specific AI tool affects privilege is a fact-specific legal question — the analysis in this article is conceptual orientation, not a legal conclusion on any particular use. Firms should consult qualified legal ethics counsel before making policy decisions about AI use. Songbird Strategies is a consulting firm, not a law firm. See Sources & Notes for the primary authority cited.

Can Your Firm Place Every Current AI Use in the Right Row?

If you cannot state which tools are approved for which use categories — or if no one has run the vendor diligence, drafted the policy, or defined the supervision standard — the classification framework above describes a gap, not a system. The operational work that closes those gaps is what the rest of this cluster covers.

See the Legal AI Matrix →
Book a Free Strategy Call

30 minutes. No sales pitch.