The intake coordinator drafts a response to a new inquiry using a free AI tool she found online. The language sounds professional and she's sent versions of this message a hundred times — she doesn't think twice. A few offices down, a junior associate pastes case background into a different AI tool to orient first-pass research on a novel question. Down the hall, a partner drops a few paragraphs of strategic analysis into a chat window to see if the tool offers a different angle on a difficult negotiation.
The firm has one AI policy. It says: "Use AI tools responsibly and protect client confidentiality." That policy governs all three of those situations — and operationally covers none of them.
A blanket AI policy fails in both directions. Blanket prohibition pushes use underground. Blanket permission leaves firms unable to answer basic questions: which tools are handling client information, under what contractual terms, and who reviews output before it goes anywhere. Neither is a governance model.
ABA Formal Opinion 512 (2024) identifies supervision as one of the core duties attorneys must satisfy in AI use — and that obligation extends to attorney oversight of non-attorney staff AI use. Making that obligation operational requires a framework, not a sentence.
Why One Policy Fails: Three Variables That Title Alone Can't Resolve
A job title is a starting point, not a complete answer. Safe AI use in a law firm depends on three things working together:
- Role — who is doing the work, what supervisory responsibilities they carry, and what their competence level is for evaluating AI output in that domain
- Task and data type — whether the task involves legal analysis versus formatting, and whether the data involved is fully internal, client-adjacent, client-specific facts, or privileged communications
- Tool class — whether the tool is a verified enterprise product with negotiated data-handling terms, a firm-approved tool, or a consumer product whose terms may permit training on submitted content
A senior partner pasting client strategy into a consumer AI tool is not "safe" because of their title. A paralegal using an approved enterprise tool to draft a scheduling email is not "risky" because they're not an attorney. The intersection of role, task, and tool determines the actual risk picture — and what makes a policy either enforceable or decorative.
The Governance Framework That Makes a Permission Matrix Work
A matrix without a governance structure is just a table. Before applying the framework below, the firm needs:
- A written policy that specifically names approved tools, approved use categories, and prohibited uses — not "use AI responsibly."
- A designated owner responsible for approving new tools, handling exceptions, and updating the policy as tools and terms change. In most firms, this is a managing partner or operations lead, not a committee with no accountable member.
- An approval process for new tools. Any tool introduced to the firm — by any staff member — should complete a basic diligence step before touching client information. The diligence framework is covered separately.
- Supervision structures adapted for AI work product. Review thresholds should be specific: not "attorney review before sending" but "supervising attorney reviews all AI-generated client communications before they leave the office." The review obligation under Rules 5.1 and 5.3 does not change because AI produced the first draft.
- A mechanism for matter-specific restrictions. Some clients, engagement agreements, or court rules impose AI use restrictions that override firm defaults. The policy needs a way to document and communicate those restrictions at the matter level, not just at the firm level.
Firms with clean AI governance are not necessarily the most restrictive ones. They are the ones where the rules are specific enough to follow, the approved tools are appropriate for the use cases, and someone is accountable for keeping the policy current.
How to Use This Matrix
For any AI use decision, work through five questions:
- Who is using the tool? Identify the role and the supervisory chain above it.
- What is the task? Administrative or formatting? First-draft generation? Legal research or analysis? Client-facing deliverable?
- What data is involved? No client data? Sanitized or general? Client-specific facts? Privileged communications?
- Is the tool approved for this combination? Has it been evaluated for the data type and task category involved?
- What is the review path? What review is required before this output is relied on, filed, or sent externally?
If questions 4 and 5 cannot be answered clearly for a given combination, that use should be paused until the answers exist — not defaulted to "probably fine."
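For firms that track approvals in an intake or matter-management system, the five-question check can be made explicit rather than left to individual judgment. The sketch below is illustrative only — every role name, task category, tool tier, and review path is a placeholder, not a real firm's approved list — but it captures the key design decision: any combination not explicitly approved defaults to "pause," never to "probably fine."

```python
# Hypothetical sketch of the five-question check as an explicit lookup.
# All role names, task categories, data types, and tool tiers below are
# illustrative placeholders, not an actual approved-use list.

APPROVED = {
    # (role, task, data, tool) -> required review path (question 5)
    ("paralegal", "document_drafting", "client_facts", "enterprise"):
        "supervising attorney review before client or court",
    ("admin", "scheduling", "no_client_data", "approved"):
        "supervisor review before content reaches a client",
}

def ai_use_decision(role, task, data, tool):
    """Return the review path for an approved combination (questions 1-4),
    or 'pause' when questions 4 and 5 have no documented answer."""
    review = APPROVED.get((role, task, data, tool))
    if review is None:
        return "pause"  # unapproved combinations never default to permitted
    return review
```

The point of the structure is the default: a combination missing from the table is paused until someone with authority adds it, which is the enforcement mechanism the prose version of a policy usually lacks.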
Role-Based Permission Framework
The matrix below summarizes the framework across five primary role categories. It is a policy starting point and issue-spotting tool, not a substitute for a written firm policy or attorney judgment on specific facts.
| Role | Allowed use categories | Prohibited or requires escalation | Tool requirement | Client data rule | Review required |
|---|---|---|---|---|---|
| Admin & Intake Staff | Scheduling; appointment reminders; standard correspondence templates; formatting; intake form language drafting | Answering substantive client questions; generating anything resembling legal analysis or advice; using consumer tools with any client data | Approved tools only; consumer or free-tier tools prohibited for any client-information use | Client information only in approved tools with verified data-handling terms | Attorney or supervisor review required before AI-generated content reaches any client |
| Paralegals | Document drafting (demand letters, correspondence, discovery); records summarization; timeline and exhibit organization; deposition preparation materials | Independent legal analysis or case strategy conclusions; treating AI-generated citations as verified; client-facing use without attorney sign-off | Approved enterprise tools for client matters; no new tools without firm diligence step | Client matter facts only in approved tools; escalate before submitting privileged communications | Supervising attorney reviews before any work product reaches client or court; attorney verifies all AI-generated citations before reliance |
| Junior Associates | Research orientation and first-pass research (with independent primary-source verification); first drafts of routine documents; comparative analysis under direct supervision | AI-generated research without independent primary-source verification; AI-generated argument or analysis in filings without senior review; client-facing use without attorney sign-off | Approved tools only; discuss tool selection with supervising attorney for new matter types | Client facts only in approved tools; consumer tools prohibited for any client matter work | Supervising attorney reviews AI-generated research and drafts before external use; independent citation verification required for all AI-generated legal authorities |
| Senior Associates & Counsel | Contract review and analysis; research on novel issues; complex document drafting; deposition and hearing preparation; supervising AI use by junior staff | Skipping independent verification of AI-generated citations; delivering AI-assisted work to clients without a review pass; using unapproved tools for any client matter | Approved tools for client work; judgment required on tool scope for specific matter types | Client facts in approved tools only; judgment required on scope of client information shared with any tool | Self-review required before client delivery; accountable under Rule 5.1 for reviewing AI-assisted work from supervised staff |
| Partners & Leadership | Strategy analysis; business development support; policy drafting; staying current on practice area developments; approving tools and setting governance standards | Bypassing firm AI policy for personal convenience; approving new tools without completing the firm's diligence process; delivering AI-generated client advice without independent review | Same as senior associates for personal use; accountable for firm-level tool approval standards | Same as senior associates for personal use; accountable for defining firm-wide data handling rules | Accountable for the governance model itself; self-review required before client delivery; responsible for supervision structure across all roles |
Role-Specific Risk Notes
| Role | Key risk pattern | Most common failure mode | Supervision rule |
|---|---|---|---|
| Admin & intake staff | Consumer-tool convenience. Staff use familiar free tools without awareness that data terms have not been reviewed. | Client names or matter facts entered into a consumer AI product under training-permitting terms — not malicious, just unconsidered. | Approved-tool list is not optional. Attorney or supervisor reviews before any AI-generated content reaches a client. Policy orientation at onboarding. |
| Paralegals | Verification gap. AI produces confident-sounding citations that may be incorrect; paralegal cannot independently evaluate. | AI-generated legal citation treated as verified and passed to attorney, who reviews quickly without primary-source check. | Supervising attorney reviews all work product. All AI-generated legal citations require independent primary-source verification before any reliance. Standard must be stated explicitly, not assumed. |
| Junior associates | Two overlapping risks: errors hard to catch without substantive expertise; AI shortcuts work that develops independent legal judgment. | AI-generated analysis treated as reliable in a practice area the associate hasn't yet developed judgment to evaluate. Development lag from consistent AI substitution for formative work. | Supervising attorney specifies which tasks benefit from AI assistance, which require full independent work, and how to flag uncertainty. Supervision model addresses development question directly. |
| Senior associates & counsel | Efficiency overconfidence. AI-assisted work receives lighter review than a scratch draft would — review obligation doesn't change based on how the draft was generated. | AI-generated brief section delivered to client after cursory review. Rule 5.1 responsibility for supervised staff AI output not factored into the review workflow. | Self-review before client delivery. Rule 5.1 accountability for AI-assisted work from supervised staff — structured review workflow, not just availability for questions. |
| Partners & leadership | Governance gap. AI use is underway across the firm while the governance model is still being drafted — or delegated without real accountability (ABA Formal Opinion 512). | Policy exists as a document but is not operational — no named owner, no tool approval process, no supervision structure adapted for AI work product. Roughly 30% of attorneys already use AI tools (ABA 2024 survey). | Accountable for the governance model itself: written policy, named owner, approval process, supervision thresholds. Cannot bypass firm policy for personal convenience. |
Special Situations That Change the Default
Lateral hires
Lateral attorneys and staff bring AI habits from prior firms. Prior-firm approvals do not transfer. Firm AI policy orientation should be part of lateral onboarding — not a policy document handed over, but a direct conversation about which tools are approved, which are not, and why. For anyone coming from a firm with materially different AI practices, a check-in on tool use in the first 30 days is a reasonable standard.
Institutional clients with AI restrictions
Large corporate clients, government entities, and healthcare organizations increasingly include AI use requirements in outside counsel guidelines. When a client's engagement terms impose AI restrictions, those restrictions override firm defaults for that matter. Matter-specific AI restrictions should be documented in the matter management system and communicated to everyone on the matter at the outset — not discovered mid-representation.
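The precedence rule described above — matter-level restrictions override firm defaults — is simple enough to encode directly in a matter-management system. A minimal sketch, with hypothetical matter IDs and use-category names:

```python
# Hypothetical sketch: matter-specific AI restrictions override firm defaults.
# Matter IDs, use categories, and status strings are illustrative placeholders.

FIRM_DEFAULTS = {"generative_ai_drafting": "allowed_with_review"}

MATTER_RESTRICTIONS = {
    # Populated at intake from the client's outside counsel guidelines,
    # not discovered mid-representation.
    "2024-0117": {"generative_ai_drafting": "prohibited"},
}

def effective_policy(matter_id, use_category):
    """Matter-level terms win over firm defaults; unknown categories pause."""
    matter_terms = MATTER_RESTRICTIONS.get(matter_id, {})
    if use_category in matter_terms:
        return matter_terms[use_category]
    return FIRM_DEFAULTS.get(use_category, "pause")
```

Whatever system holds this data, the design choice is the same: the lookup checks the matter record before the firm default, so a client's engagement terms cannot be silently overridden by the general policy.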
Contract attorneys and law clerks
Contract attorneys and law clerks should be supervised like junior associates unless the supervising attorney has specifically assessed otherwise. They should not introduce AI tools on their own authority, and their work product is subject to the same review thresholds as other supervised staff. This is particularly important when contract attorneys work remotely or on short-term engagements where direct supervision may be lighter.
Firms entering new practice areas
When a firm expands into a new practice area, AI-generated legal research carries elevated risk: the supervising attorneys may not have enough substantive background to identify AI errors in that domain. AI-assisted research in new practice areas should be treated as orientation material — not as a shortcut to competence the firm hasn't yet developed — with more careful independent primary-source verification required. The diligence burden is higher, not lower, when the supervising attorney is also learning the area.
For a deeper look at how to build the firm-level policy that governs these role permissions, see How to Create a Firm AI Policy That Gets Used.
This article is operational guidance and does not constitute legal advice or a formal ethics opinion. The framework and matrix presented here are starting points — not substitutes for a written firm policy reviewed by qualified ethics counsel. Professional responsibility obligations vary by jurisdiction. Some states have issued AI-specific guidance that supplements or differs from the ABA's position. Attorneys with specific questions about technology use, confidentiality, or AI governance should consult their bar's ethics hotline or qualified ethics counsel.