A managing attorney at a three-person family law practice tries an AI tool to draft a response to a difficult client inquiry. The output is good. She mentions it to her paralegal. The paralegal starts using the same tool to draft intake summaries. Three weeks later, the tool is part of daily workflow. No one has read the terms of service. No one has confirmed what happens to the summaries being submitted — summaries that include client names, matter facts, and financial details. The unwritten policy is: it seemed to work, nobody stopped it, so it must be okay.
That is not governance. That is drift.
A large firm with a dedicated innovation team, in-house security counsel, and a vendor evaluation process can afford to tighten governance after tools are already in use. A small firm — where the managing attorney is also the policy owner, the implementer, the primary user, and the person who will deal with the consequences — cannot. The exposure from a single misconfigured or misused tool is proportionally much larger when the firm has no procurement team, no internal security review, and limited capacity to manage the fallout.
That asymmetry does not mean small firms should adopt AI slowly; most AI tools that genuinely reduce repetitive work are worth using. What it does mean is that small firms need a sharper, more actionable threshold — not a six-month governance committee, but a set of conditions a managing attorney can evaluate before deciding whether a tool is ready for real work.
The 7-Condition Pre-Adoption Threshold
These are the minimum conditions that should be true before any AI tool is used in real work at a small firm — including client communication, intake, drafting, and matter-related tasks. Work through each condition before adoption, not after the tool is already embedded.
| # | Condition | Met | Partly / Unknown | Not met |
|---|---|---|---|---|
| 1 | Specific named use case defined. Can you state in one sentence what specific task this tool will do in this firm? | □ | □ | □ |
| 2 | Data handling confirmed for that use case. Do you know what the vendor does with submitted data — retention, model training, access, and data residency — for the plan and configuration you will use? | □ | □ | □ |
| 3 | Output is genuinely reviewable. Given how this tool will be used in daily practice — volume, pace, format — will the responsible attorney be able to substantively verify the output before it is relied on? | □ | □ | □ |
| 4 | Named owner with clear responsibility. Is one specific person accountable for knowing the tool's current terms, deciding if use expands, and making the call to stop if needed? | □ | □ | □ |
| 5 | Everyone who uses it knows the boundaries. Have staff been explicitly told what the tool can and cannot be used for, whether client information may go into it, and what to do if they are unsure? | □ | □ | □ |
| 6 | Failure consequences are acceptable for this use case. If this tool produces wrong output in daily use, are the consequences acceptable given the review standard the firm can realistically maintain? | □ | □ | □ |
| 7 | Acceptable exit path exists. If the tool turns out to be wrong for this firm, or the vendor changes its terms, can the firm stop using it without a workflow crisis? | □ | □ | □ |
What the Assessment Results Mean
The worksheet above produces one of four approval states. Be honest — "probably fine" is not "Met."
- All 7 Met: Approved — proceed with the defined use case and scope. Document the approval so the owner can revisit it when terms change or use expands.
- All 7 Met, but only for low-risk tasks: Approved for internal-only or sanitized use. Client-information use is not approved until condition 2 is fully confirmed for that data type.
- Any "Not Met" on conditions 1–4: Not approved. Conditions 1–4 are threshold conditions — a gap in any of them means the foundation for safe adoption is not in place yet. Resolve before proceeding.
- Multiple "Partly / Unknown" answers: Not approved. "Unknown" is the same as "Not Met" for purposes of client-information use. Unknowns need to be resolved, not assumed away.
The most common result for a tool that was adopted informally and is already in daily use: conditions 1, 4, 5, and 7 are "Partly / Unknown," and condition 2 was never confirmed. That is the drift pattern described in the opening — and it means going back to resolve what should have been resolved before adoption.
What Each Condition Requires
1. A specific named use case
"It might be useful for various things" is not a use case — and it is the most common failure at small firms where tools get adopted from personal experimentation rather than a defined need. Before adoption, the managing attorney should be able to state in one sentence what specific task this tool will do. "Drafting first-pass demand letters in PI matters" is a use case. "Helping with legal work" is not. The use case definition is what makes conditions 2, 3, and 6 answerable at all.
2. Acceptable data handling for the use case
This condition requires knowing the answers — not assuming them. ABA Formal Opinion 477R requires attorneys to conduct a fact-specific analysis of information sensitivity and the protections available when processing client information, not a checkbox review of whether the vendor claims to be secure. Consumer tools and enterprise tiers of the same product can have materially different data terms. Read the plan-specific documentation. For tools that will handle client information, the retention and training-use questions are threshold questions, not preferences.
3. Output the responsible attorney can realistically review
Review is not the same as reading. ABA Formal Opinion 512 makes clear that the attorney's professional responsibility for the work does not transfer to the AI tool. For review to be substantive, the attorney needs enough context, time, and expertise to catch the errors that tool is capable of making — including errors that look correct. A 40-page AI-assisted research memo reviewed in 10 minutes is not reviewed. Before adoption, ask honestly: given how this tool will be used in practice, will review be real?
4. A named owner with clear responsibility
In a solo or small firm, this is almost always the managing attorney. The owner is accountable for: knowing the vendor's current data terms, deciding whether use has expanded beyond what was originally evaluated, and making the call if the tool needs to stop. "The whole firm uses it" is not an ownership structure. One person needs to be accountable — and that accountability cannot be assumed; it has to be stated.
5. Staff know what the tool is and is not for
Use limits that exist only in the managing attorney's head are not enforced. Small firms are particularly vulnerable here because ad hoc tool adoption often begins with one person and spreads through informal mention rather than deliberate rollout. If paralegals or assistants will use the tool, they need to know — explicitly — what can go into it, what cannot, and what to do if they are unsure. This is a brief conversation at adoption, not a 30-page policy document.
6. Acceptable failure consequences
The stakes are not uniform across use cases. If the tool drafts a scheduling email with an error, the consequence is minor and easily corrected. If the tool produces a wrong factual summary in a client letter, or misstates a deadline in a court filing, the consequences are different. The threshold the firm applies should reflect the stakes of the specific use case, not just "AI in general." For high-stakes use cases, the review standard in condition 3 and the failure tolerance in condition 6 need to be honestly reconciled.
7. An acceptable exit path
A tool that becomes so embedded in daily workflow that the firm cannot stop using it without a crisis was adopted without adequate caution. This condition is particularly relevant for small firms where there is no IT team to manage a transition. Introduce tools as a parallel resource first — not as an immediate replacement — until they have been evaluated in actual use. A clear exit path is not a sign of pessimism; it is proof that the tool was adopted deliberately.
Use Classification: What Level of Approval Each Category Requires
Not all uses carry the same risk. The threshold above applies to every AI adoption decision — but different use categories require different levels of confidence on condition 2 specifically.
| Use category | Examples | Starting point? | Minimum required |
|---|---|---|---|
| Internal admin use | Scheduling, firm templates, internal memos, meeting agendas | Yes — lowest risk; best place to start | All 7 conditions; data handling is simpler (no client data) |
| Sanitized / public information use | Firm bio, marketing copy, public-document summaries, general legal research on non-matter questions | Yes — low risk | All 7 conditions; no client-identifiable information submitted |
| Client-adjacent use | Intake form language, generic matter templates, standard client communication formats (before client-specific facts are added) | Proceed carefully — no client-specific facts until the tool has been reviewed | All 7 conditions; condition 2 confirmed for the plan and configuration in use |
| Client-information use | Matter summaries, case-specific drafting, deposition prep, intake summaries with client facts | Higher bar — do not start here | All 7 conditions; condition 2 fully confirmed; no-retention or enterprise configuration verified and in use; attorney review standard defined for every output type |
| Prohibited without escalation | Answering substantive client questions in consumer tools; inputting privileged communications without reviewed terms; using any tool rejected under condition 2 | No | Escalate — do not use without formal review and approval |
For small firms evaluating AI for the first time, the internal admin and sanitized use categories are the right starting zone. Developing a practical understanding of a tool's reliability and limitations in lower-stakes contexts before extending it to client work is not excessive caution — it is how informed adoption works.
How to Verify: Where the Answers Actually Come From
Condition 2 (data handling) is the one most often left unresolved. Vendor marketing does not answer it. Here is where to verify:
- Provider terms of service and privacy policy. Look specifically for language about data retention, model training on user inputs, and data access by vendor employees. Consumer and enterprise tiers of the same product frequently have different terms — confirm which applies to the plan and configuration you are actually using.
- Plan-specific documentation. Some vendors publish separate enterprise data terms or data processing agreements. If those documents exist, they govern — not the general privacy policy. Read them.
- Vendor security or trust documentation. SOC 2 assurance reports, ISO 27001 certifications, and similar documents address infrastructure practices. They do not answer the training-use question. Security documentation and data terms both matter; neither substitutes for the other.
- Direct inquiry to vendor support — confirmed in writing. If the documentation is unclear, ask. A sales call answer is not verification. Get the response in writing. If the vendor cannot or will not clearly answer whether submitted data is used for model training, treat that as an unresolved condition 2.
- Executed contract terms if applicable. If the firm has negotiated a contract with the vendor, confirm the specific provisions you need — no training on submitted data, retention limits, deletion timelines — are in the executed document, not just in the sales pitch.
Stop Signs: When Not to Adopt
These are automatic stops for client-information use — not reasons to be cautious, but reasons not to approve:
- Cannot confirm what happens to submitted data. If the documentation is unclear and direct inquiry does not produce a clear written answer, data handling is unresolved. Unresolved data handling is a stop for client-information use.
- No enterprise or no-retention configuration available. A consumer tool without a no-retention option is not appropriate for client information regardless of other features.
- Vendor terms permit training on submitted data without an opt-out. Verify this specifically. Terms that allow the vendor to use submitted data to improve its models — without an opt-out for the plan in use — are a material risk for client information.
- No one in the firm will own it after adoption. A tool with no named owner accumulates risk without accountability. If ownership is not clear before adoption, it will not become clear after.
- The output cannot be genuinely reviewed in normal use. If the pace, volume, or format of the use case makes substantive review unlikely in practice, the supervision standard is not being met. The tool is producing false efficiency.
- The tool is being adopted because "everyone is using it." Peer adoption is not a governance rationale. What works for another firm's use case, data posture, and review standards does not automatically transfer.
- The tool has no defined exit path. A tool introduced without one should be run in parallel with existing workflow until it has been evaluated in real use — not adopted as a replacement before that evaluation is complete.
What Small Firms Get Wrong That Larger Firms Typically Catch
The failure patterns at small firms are different from those at larger firms — not because small-firm attorneys are less careful, but because the operating environment creates specific blind spots.
- Adoption from personal experimentation, not evaluation. "I tried it and it was impressive" is a description of a demo environment. The relevant question — will this tool produce reliable output in this firm's specific use case, consistently, under normal conditions — is not answerable from a solo test run.
- Use-case creep without re-evaluation. A tool adopted for internal drafting expands naturally into client-adjacent work and then into matter-specific work without anyone making an explicit decision at each step. Each expansion is a new adoption decision. Condition 2 in particular needs to be re-evaluated when the data sensitivity changes.
- Vendor terms reviewed at adoption, never again. Vendors update terms. A no-retention configuration that was confirmed at signup may be modified at renewal. The tool owner's job includes knowing when terms change — not just at the initial evaluation.
- No accountability when the managing attorney is also the primary user. The person who chose the tool and uses it daily is often not the best person to assess whether the tool should be stopped. Small firms should build this accountability explicitly — even if the managing attorney is the only realistic candidate, making the obligation explicit matters.
For the specific vendor-evaluation questions to ask about any tool's data handling and contractual posture, the AI tool due diligence checklist covers the full evaluation in detail. For firms building a governance framework across multiple tools and roles, building a firm AI policy that actually gets used is the natural next step.