A three-attorney litigation firm has an AI policy. It was drafted by the managing partner, reviewed by outside counsel, and emailed to the full staff six months ago. No one has read it since. The paralegal has been using a free AI tool to draft intake summaries for three months — the tool is not on any approved list, its terms have not been reviewed, and nobody knows what it does with the client names and matter details that have been submitted to it. When asked, the managing partner says the firm has "covered that" in the policy. The paralegal has never looked at the policy.
That gap — between a policy that exists and a policy that governs — is where most AI risk at law firms actually lives. It is not a documentation problem. It is a design problem.
Most firms handling this question fall into one of three positions: no AI policy at all; a policy that prohibits so much that staff route around it quietly; or a policy that exists in a document nobody applies. All three produce the same practical result — people inside the firm making individual AI decisions without shared guidance, without defined approval processes, and without a clear sense of what the firm considers acceptable. The ABA's 2024 technology survey found roughly 30% of attorneys are already using AI tools in practice (ABA 2024). That does not include the paralegals, assistants, and intake staff at those same firms using tools on their own initiative — often without attorney visibility.
Why Firm AI Policies Fail
The failure patterns are consistent enough to name:
- Too vague to apply. "Use AI responsibly and in compliance with all applicable obligations" is technically accurate and practically useless. A staff member who reads it knows no more about what they can and cannot do than before.
- Too restrictive to follow. Blanket prohibition on AI tool use — or effective prohibition on any client-adjacent AI use — produces covert workarounds. If the policy is so broad that staff cannot use tools that genuinely help without violating it, they will use the tools and say nothing.
- Written at the firm level, not the role level. A policy that treats every person in the firm identically misses the ways AI risk actually differs by role. The managing partner drafting a client strategy memo is in different territory from an intake coordinator summarizing an inquiry. A policy that ignores those distinctions leaves the people with the most consequential AI decisions without the most relevant guidance.
- Silent on supervision and review. ABA Formal Opinion 512 (2024) addresses multiple duties implicated by generative AI use — including competence, confidentiality, supervision, fees, and communication with clients, with candor considerations arising in certain contexts (ABA FO 512). A policy that covers approved tools but says nothing about how AI-assisted work product is reviewed, what the supervision standard is for non-attorney AI use, or how AI use should be disclosed where required is incomplete in ways that matter professionally.
- No approval process for new tools. AI tools change rapidly. A policy that names approved tools today but has no mechanism for evaluating new ones will be outdated before the year is out.
- Not rolled out — just filed. A policy emailed as a PDF attachment and never mentioned again has roughly the same effect on behavior as any other unread attachment: compliance is variable at best. How the firm uses the policy is the policy.
The Policy Architecture: What a Usable Firm AI Policy Contains
A usable policy is not a long policy. Staff should be able to summarize it in conversation and apply it to a new situation without rereading the document. The 10-section skeleton below is the compact architecture every firm AI policy should follow. Fill each section in with your firm's actual tools, roles, and approved uses; the structure itself is not optional.
| # | Section | What it must address |
|---|---|---|
| 1 | Scope | What tools this policy covers — general-purpose AI assistants, AI features in practice management software, AI drafting tools, AI note-takers and transcription services. Define scope explicitly; tools omitted from the definition are treated as unregulated. |
| 2 | Approved tools | Current approved tools by name, plan/tier, and approved use category. Who may use each tool (firm-wide, role-specific, attorney-only). Whether client information is permitted in each tool. |
| 3 | Prohibited uses | Consumer tools prohibited for any client-identifying information. Uses prohibited regardless of tool — substantive legal advice to clients generated without attorney review, outputs filed in court without independent citation verification. Unapproved tools prohibited for client matters without exception. |
| 4 | Data rules | What categories of information may and may not enter AI tools. Minimum rule: client-identifying information and matter-specific facts are prohibited in unapproved consumer tools. Which approved tools have enterprise or no-retention terms that make them appropriate for client-adjacent work. |
| 5 | Role-based rules | What each role may and may not use AI for. See the matrix below. This section need not reproduce the full matrix — but it must state explicitly that use rules differ by role and refer to the governing framework. |
| 6 | Review and supervision | What review is required before AI-assisted work product reaches a client. Attorneys are responsible for verifying AI-generated content before use in client matters. Supervising attorneys are responsible for non-attorney AI use in their matters. Whether AI use in a matter must be documented. |
| 7 | New-tool approval | How a staff member or attorney requests evaluation of a new tool. What information is required. Who reviews. What counts as approval. What is permitted while approval is pending. See the approval workflow below. |
| 8 | Escalation path | Who to ask when a situation is not covered by the policy. How quickly to expect a response. What to do in the meantime. Without this, staff either proceed without guidance or do nothing. |
| 9 | Disclosure | When and how AI use should be disclosed to clients. The policy should not resolve the legal question — it should require attorneys to evaluate the disclosure question in their practice context and establish a path for seeking guidance when the answer is unclear. |
| 10 | Owner and cadence | Who owns the policy. Annual review of all approved tools and policy provisions. Trigger-based re-evaluation when vendor terms change materially, when bar guidance is updated, or when the firm's use expands to new tool categories. |
How to Apply the Policy: A Five-Step Decision Path
The policy skeleton above works when staff can apply it quickly to any specific AI use. The five-step decision path:
- Identify the role. Who is doing this task? What does the policy say about that role?
- Classify the task. Internal/administrative? Client-adjacent? Client-information use? Prohibited category?
- Classify the data. No client data? Sanitized? Client-specific facts? Privileged communications?
- Confirm the tool. Is this tool on the approved list for this task and data type?
- Determine the review path. Is attorney review required before this output is used, filed, or sent? If the tool is not approved, escalate before use — not after.
If steps 4 and 5 cannot be answered with the current policy, those are the gaps the policy needs to fill. A policy that cannot answer those two questions for the firm's most common AI use cases is not operational yet.
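For firms that route these questions through an intake form or a simple internal tool, the five-step path can be encoded as a lookup against the approved-tool list. The sketch below is illustrative only: the tool names, roles, and task and data categories are hypothetical placeholders standing in for a firm's actual approved list, not recommendations.

```python
# Illustrative sketch of the five-step decision path as a data check.
# All role names, task categories, and tool entries below are
# hypothetical placeholders -- substitute your firm's policy values.

APPROVED_TOOLS = {
    "enterprise-assistant": {        # placeholder name
        "roles": {"attorney", "paralegal"},
        "tasks": {"internal", "client-adjacent", "client-information"},
        "client_data": True,         # no-retention terms verified
    },
    "drafting-helper": {             # placeholder name
        "roles": {"attorney", "paralegal", "admin"},
        "tasks": {"internal", "client-adjacent"},
        "client_data": False,
    },
}

def decision_path(role, task, data, tool):
    """Return (allowed, review_note) for a proposed AI use.
    role/task/data correspond to steps 1-3; tool to step 4;
    the review note to step 5."""
    # Step 4 (part one): an unapproved tool is an escalation, not a use.
    if tool not in APPROVED_TOOLS:
        return (False, "Unapproved tool: escalate to the policy owner before use.")
    entry = APPROVED_TOOLS[tool]
    # Steps 1-2: is this tool approved for this role and task category?
    if role not in entry["roles"]:
        return (False, "Tool not approved for this role.")
    if task not in entry["tasks"]:
        return (False, "Tool not approved for this task category.")
    # Step 3: client-specific or privileged data needs verified data terms.
    if data in {"client-facts", "privileged"} and not entry["client_data"]:
        return (False, "Client information not permitted in this tool.")
    # Step 5: determine the review path before the output is used.
    if task == "internal" and data == "none":
        return (True, "Standard self-review.")
    return (True, "Attorney review required before output reaches a client or court.")
```

The same mapping fits a one-page chart on a wall just as well as code; the point is that steps 4 and 5 must be answerable mechanically from the policy, without a judgment call by the staff member.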
Role-Based Permission Framework
A firm AI policy that applies the same rules to everyone will either be too restrictive for the lower-risk roles or too permissive for the higher-risk ones. The matrix below summarizes the minimum differentiation a firm AI policy should reflect. It is a starting framework, not a substitute for a written policy or attorney judgment on specific facts.
| Role | Allowed use categories | Requires escalation or prohibited | Tool requirement | Review required |
|---|---|---|---|---|
| Admin & Intake Staff | Scheduling; standard correspondence templates; formatting; intake form language | Answering substantive client questions; drafting legal analysis; consumer tools with any client data | Approved tools only; consumer tools prohibited for client-information use | Attorney or supervisor review before AI-generated content reaches any client |
| Paralegals & Legal Assistants | Document drafting; records summarization; deposition prep materials; exhibit and timeline organization | Independent legal analysis; treating AI citations as verified; client-facing use without attorney sign-off | Approved enterprise tools for client matters; no new tools without firm diligence step | Supervising attorney reviews all work product before client or court use; attorney verifies all AI-generated citations |
| Attorneys | Research orientation; drafting; contract review; deposition prep; supervising AI use by staff in their matters | AI-generated citations without independent primary-source verification; delivering AI-assisted work without a review pass; unapproved tools for any client matter | Approved tools for client work; judgment required on scope of client information shared with any tool | Self-review before client delivery; accountable for reviewing AI-assisted work from supervised staff under Rule 5.1 |
| Leadership & Policy Owner | Strategy; business development; policy drafting; approving tools; setting and maintaining governance standards | Approving tools without completing diligence; bypassing policy for personal convenience; AI-generated client advice without independent review | Same as attorneys for personal use; accountable for firm-level tool approval standards and data rules | Accountable for governance model; self-review required before client delivery; responsible for supervision structure across all roles |
The role-based AI use framework in this Insights series covers each role's specific permitted uses, risk patterns, and supervision requirements in greater depth. The firm AI policy should be consistent with that framework.
New-Tool Approval Workflow
A policy with no mechanism for evaluating new tools is a policy with an expiration date built in. The workflow below is a practical process, not a procurement bureaucracy.
- Request. Any staff member or attorney may request evaluation of a new tool. The request goes to the policy owner and includes: tool name and URL, proposed use case (specific task), data sensitivity involved (will client information be submitted?), and proposed users.
- Initial review. Policy owner reviews vendor terms for: data retention, model training on submitted data, enterprise or no-retention configuration availability, data residency, and access controls. For tools that will handle client information, this review involves confirming the answers in plan-specific documentation or in writing from the vendor — not from marketing materials.
- Attorney consultation. If client information use is proposed, the policy owner consults a supervising attorney before approval. The attorney confirms: the use case is appropriate for client information, the data terms are acceptable, and the review standard is feasible in daily practice.
- Decision. One of three outcomes: (a) Approved — documented scope of approved use, data category permitted, and role(s) authorized; (b) Approved with conditions — specific restrictions noted; (c) Not approved — reason documented so the requestor understands what would need to change.
- Pending rule. While review is pending, the tool is restricted to internal or administrative use only. Client information does not go in until approval is documented.
- Post-approval monitoring. Policy owner tracks vendor term changes at renewal. Any material change to data handling, retention terms, or training use triggers re-evaluation before continued client-information use. Annual review of all approved tools.
The approval process should be simple enough that requesting evaluation is easier than quietly using an unapproved tool. If it takes more than a week to get a response, staff will route around it.
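Firms that log tool requests in a shared tracker can mirror the workflow above as a simple record with a small set of statuses. The sketch below is illustrative; the field names, statuses, and example values are hypothetical placeholders, not a prescribed system.

```python
# Illustrative sketch of a new-tool approval record for a shared tracker.
# Field names and statuses are hypothetical placeholders -- adapt them
# to your firm's actual workflow.
from dataclasses import dataclass, field

STATUSES = ("pending", "approved", "approved_with_conditions", "not_approved")

@dataclass
class ToolRequest:
    tool_name: str
    url: str
    use_case: str                  # specific task, not "general use"
    client_data_involved: bool     # True triggers attorney consultation
    proposed_users: list
    status: str = "pending"
    conditions: list = field(default_factory=list)
    decision_reason: str = ""

    def permitted_use_now(self) -> str:
        """What is allowed while the request sits in its current status."""
        if self.status == "pending":
            # Pending rule: internal/administrative use only, no client data.
            return "internal-only"
        if self.status in ("approved", "approved_with_conditions"):
            return "per documented scope"
        return "none"

    def decide(self, status: str, reason: str = "", conditions: list = None):
        """Record one of the three documented outcomes, with the reason."""
        if status not in STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.status = status
        self.decision_reason = reason
        self.conditions = conditions or []
```

The same fields fit a spreadsheet row equally well. What matters is that every request carries a documented scope, a status, and a reason, and that the pending state defaults to internal-only use rather than silence.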
Rolling Out the Policy: What Makes It Stick
A policy that is written, reviewed, and emailed as a PDF will not change behavior. A policy that is explained in the context of each role's daily work, referenced at onboarding, and applied visibly in tool approval decisions will. The six-step rollout sequence below is the difference between a governance document and an operational one.
| # | Step | What it requires | Common failure mode |
|---|---|---|---|
| 1 | Inventory current use | Before drafting, identify what AI tools are actually in use across every role — including tools staff adopted without firm approval. What purposes are they being used for? What data categories are going in? This inventory shapes a policy calibrated to real use, not hypothetical use. | Drafting the policy before this step, then discovering the policy doesn't address the most common existing uses. |
| 2 | Draft using the policy skeleton | Use the 10-section skeleton above to draft. Fill in the specific approved tools, roles, data rules, and approval process for your firm. Policy should answer the 5-step decision path for your most common AI use cases without requiring follow-up clarification. | Generic or template policy pasted in without adaptation; vague sections that don't answer practical questions. |
| 3 | Collect staff input before finalizing | Brief conversations with paralegals, intake staff, and associates before the draft is finalized. Specifically: where do the rules create ambiguity in daily work? What use cases are not clearly covered? This step surfaces implementation friction before rollout rather than after. | Policy finalized without input from the people who will live with it; workarounds appear within weeks of rollout. |
| 4 | Attorney review of final policy | Attorney with governance responsibility reviews the data rules, supervision standards, and disclosure section. Confirm the policy is consistent with current bar guidance and that the supervision requirements for non-attorney AI use are operational, not aspirational. | Supervision section that names a standard but provides no mechanism for actually meeting it; disclosure section that acknowledges uncertainty without a defined escalation path. |
| 5 | Training rollout by role group | 30 minutes per role group. Cover: what the policy requires of that role specifically, the 5-step decision path with real examples from that role's work, approved tools for that role, the approval process for new tools, and the escalation path. Not a PDF — a conversation. New staff receive this at onboarding, not as an attachment. | Policy emailed as a document with no accompanying explanation. No role-specific framing. Staff read it once, or don't read it at all. |
| 6 | Establish maintenance cadence | Set annual policy review on the calendar. Name the trigger conditions for earlier review (vendor term changes, material bar guidance update, new tool category, firm practice area expansion). Assign the policy owner who will confirm those triggers are caught and acted on. A policy with no review schedule is a policy in slow decay. | No scheduled review; policy owner unclear; vendor terms change and nobody notices until the firm is already out of compliance with its own rules. |
Role-specific training with concrete examples
Thirty minutes per role group is sufficient for rollout training. The format that works: explain what the policy requires of that role specifically, walk through the five-step decision path with real examples from that role's typical work, and cover the approval process and escalation path.
Three examples that make the policy concrete in training:
- Intake coordinator drafting a scheduling email. Using an approved AI tool to draft a confirmation message with no client-specific matter facts — this is allowed without individual attorney review. Using a free tool outside the approved list to do the same task — this is not allowed, even though the risk seems low. The logic is not that the task is risky; it is that the data terms of unapproved tools have not been evaluated, so the firm cannot assess the risk at all.
- Paralegal summarizing a deposition transcript. This task involves client-specific matter facts. It requires an approved enterprise tool with a no-retention configuration verified and in use. The resulting summary requires attorney review before it is used in the matter — not because the paralegal cannot produce a good summary, but because the attorney's professional responsibility for that material does not transfer to the AI tool or to the paralegal.
- Attorney using AI to draft a section of a brief. Allowed in an approved tool. Every AI-generated draft is the attorney's work product — reviewed and verified to the same standard as if the attorney had written it from scratch. AI-generated case citations require independent primary-source verification before filing. The draft does not go to the client or the court based on how polished it looks; it goes when the attorney has reviewed it substantively.
The escalation path has to be used visibly
If staff never see anyone use the escalation path — if no question is ever surfaced and resolved — the mechanism becomes theoretical. When real questions come in, surface the answer (without naming individuals) in a way that normalizes the process. "Someone asked whether this use was covered; here is how we thought about it." A living governance document is more useful than a liability artifact.
Onboarding integration
New staff and new attorneys should receive the AI policy in onboarding — as a conversation, not just a document. Habits form early. The earlier the framework is established, the less likely it is that informal practices become entrenched before governance catches up.
The Three Most Common Policy Mistakes
Writing the policy before taking inventory. A policy drafted without knowing how AI is currently being used inside the firm may address uses no one is engaged in while missing the ones already happening daily. The first step is not drafting — it is a brief internal inventory: what tools is each team already using, for what purposes, and with what information? That answer shapes a policy calibrated to the firm's actual situation.
Writing it without input from the people who will follow it. The places where a policy fails in daily use are almost always visible to the staff members working within it and rarely visible to the people writing it from a governance position. Brief conversations with paralegals, intake staff, and junior associates before the policy is finalized will surface the use cases the policy did not anticipate and the implementation frictions that will produce workarounds.
Treating it as finished. An AI policy needs a defined review trigger — both a regular cadence and a mechanism that fires when vendor terms, bar guidance, or available tools change materially. A policy with no review mechanism ages out of relevance faster than almost any other governance document the firm maintains.