Most law firm AI adoption problems are not discovered during tool evaluation. They are discovered after — when someone realizes that client documents were submitted to a tool under consumer terms, or that staff have been using a free AI product for intake summaries for months without anyone reviewing what the tool does with the information.
Consider a common pattern: a paralegal starts using a free AI assistant to draft intake-note summaries, finds it useful, and mentions it to colleagues. By the time anyone reviews the tool's terms, it has been in routine use for three months — on client names, matter descriptions, and preliminary facts — under consumer terms that permit the provider to use submitted content to improve its underlying model. Nobody at the firm consciously accepted those terms. Nobody rejected them either. The review never happened.
The diligence framework below is designed to be worked through before a firm uses any AI tool that will handle client information. Some questions are answered by reading vendor documentation. Some require direct vendor inquiry. A few require reviewing contract terms. All of them matter.
The underlying authority is ABA Formal Opinion 512 (generative AI) and ABA Formal Opinion 477R (cloud services and electronic communications), which together establish that attorneys must take reasonable precautions when using third-party tools with client information — and that reasonable precautions require actually understanding what the tool does with that information.
Classify the Use Before Running the Checklist
Not every AI use carries the same risk. Before applying the full diligence process, classify the use:
Administrative / no client information
Internal firm work with no client-identifying content: drafting internal policies, generating marketing copy, staff communications, scheduling, non-matter research. Rule 1.6 is not implicated in the same way. These uses can typically be approved with lighter review focused on general tool fitness and firm-level policy compliance.
Sanitized / client-adjacent
Work adjacent to client matters but containing no client-identifying or matter-specific information: researching a legal issue using hypothetical facts, drafting templates without matter context. Key requirement: the use must actually stay sanitized — no client names, no matter facts, no identifying context. Usage standards should make this line explicit.
Client information in a reviewed and approved tool
Matter-specific drafting, document review, intake processing with matter facts, transcript analysis with client context, and similar uses with actual client information. This category requires the full diligence process described below — completed and documented — before the tool is used.
Unapproved or unmanaged use — prohibited
Any use of a tool not yet reviewed and approved for client-information work. Should be prohibited by default. Exceptions require a completed diligence review and affirmative approval before use begins — not after.
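The classification gate above can be expressed as a rough sketch. This is an illustrative model only — the category names, the `tool_approved_for_client_info` flag, and the returned review levels are hypothetical, not drawn from any specific firm's policy:

```python
from enum import Enum

class UseCategory(Enum):
    ADMINISTRATIVE = "administrative"          # no client-identifying content
    SANITIZED = "sanitized"                    # client-adjacent, no matter facts
    CLIENT_INFORMATION = "client_information"  # matter-specific content
    UNAPPROVED = "unapproved"                  # tool not yet reviewed at all

def required_review(category: UseCategory, tool_approved_for_client_info: bool) -> str:
    """Map a classified use to the review required before the tool is used."""
    if category is UseCategory.UNAPPROVED:
        return "prohibited: complete diligence review and approval before use"
    if category is UseCategory.CLIENT_INFORMATION:
        if not tool_approved_for_client_info:
            return "full diligence process required before use"
        return "approved: use within the documented approval scope"
    # Administrative and sanitized uses get lighter review.
    return "lighter review: general tool fitness and firm policy compliance"
```

The point the sketch makes is ordering: the prohibition on unapproved tools is checked first, before any question about how useful the intended use might be.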
Category 1: Data Retention
Does the tool retain the information submitted to it? If so, for how long, under what defaults, and can the firm change them?
Some tools retain inputs indefinitely by default. Others retain them for a rolling window. Business or enterprise accounts may offer shorter retention defaults, zero-retention configurations, or on-demand deletion — but these are often not enabled by default and may require specific plan tiers, settings changes, or contractual provisions. Understanding what the retention default actually is — for the specific account type and configuration the firm is using — is the baseline question. If information is retained, it can be subpoenaed, breached, or accessed by the vendor.
What to verify: The actual retention period and default for the plan/configuration in use. Whether deletion-on-demand is available and how it works. Whether the firm's data-handling agreement specifies retention limits. Consumer accounts often have public terms that are broad and not negotiable; business accounts may have more specific terms but those terms should be read, not assumed.
Category 2: Training Data Use
Is information submitted to this tool used to train or improve the underlying model — by default, or under any available configuration?
This is the most consequential question for many firms and the one most often skipped. Consumer-tier accounts on many general-purpose AI tools use inputs for model improvement by default. Business or enterprise agreements typically contractually exclude this — but exclusion should be verified, not assumed. The concern under Rule 1.6 is that client information incorporated into training data could, in theory, influence outputs available to other users of the same model.
What to verify: Explicit contractual language excluding use of customer inputs for training. "We may use data to improve our services" is typically not sufficient — this language is broad enough to include model training. Look for explicit exclusion language in the data-handling agreement, not in the general terms of service where such language often does not appear.
Category 3: Data Residency and Vendor Staff Access
Where is data processed and stored? Who at the vendor organization can access it, under what conditions?
For most firms, the access question is more immediately relevant than residency: can vendor employees read the content submitted to the tool? Under what circumstances — troubleshooting, quality review, product development? With what safeguards? Residency matters more for firms representing clients in regulated industries or with specific geographic data requirements.
What to verify: Clear statements on where data is processed and by which subprocessors. For access: commitments that customer data is not accessed by vendor employees except for specific limited purposes, with the firm's awareness or consent where appropriate. Vague claims about security culture are not a substitute for specific access restrictions in vendor documentation or contract terms.
Category 4: Contract Controls and Confidentiality Commitments
Does the firm have a written agreement with this vendor that creates the confidentiality protections it actually needs?
The question is not whether a document called a "Data Processing Agreement" exists. The question is whether the firm has a written contract or agreement package that actually creates the protections required. Depending on the vendor, that might be a Data Processing Addendum, an Enterprise Terms of Service with specific data-handling provisions, a Master Service Agreement with confidentiality obligations, or a negotiated instrument. The label matters less than the contents.
ABA Formal Opinion 477R identifies the availability of confidentiality protections with the third-party provider as one of the factors attorneys must consider when using cloud-based tools with client information. "What the public terms of service say" is not a satisfactory answer for client-information use — those terms are written for the vendor's benefit, not the customer's.
The protections the firm needs from its agreement with any AI vendor:
- Confidentiality obligations running from the vendor to the firm
- Documented restrictions on how the firm's data is processed and used
- Security obligations appropriate to the sensitivity of client information
- Visibility into subprocessors the vendor uses and controls over what they can access
- Deletion and return of data on request and at contract end
- Limits on vendor support or engineering staff access to firm content
- Defined incident notice and breach-handling obligations, with timelines
- Audit or review rights, where the sensitivity of the use warrants them
A vendor that cannot offer a written instrument covering these basics — or one that requires individual attorneys to accept public terms with no negotiated protections — is generally not an appropriate vendor for client-information use.
Category 5: Vendor Security Posture
Has the vendor's security program been independently assessed? Can it demonstrate that assessment?
The question is not whether the vendor claims to take security seriously. The question is whether there is third-party validation of its security controls. SOC 2 Type II is the relevant assurance report for most SaaS legal tools: it is a third-party examination of the vendor's controls against specified Trust Services Criteria over a defined audit period — not a certification, but a rigorous independent assessment with documented findings. "SOC 2 compliant" is not equivalent to a completed SOC 2 Type II report, and Type I (which reflects controls at a point in time, not over a period) is weaker.
ISO 27001 is an independently certified information security management standard and represents a higher level of formal commitment to security program maturity. For vendors handling significant volumes of sensitive data, it is a meaningful additional indicator.
What to verify: A SOC 2 Type II report or equivalent assurance artifact covering the relevant service period. The scope of the report — does it cover the specific product/infrastructure the firm will use? A vendor that cannot produce any third-party security assessment warrants serious scrutiny before use with client information.
Category 6: Auditability and Supervision Controls
Can the firm track what information was submitted to the tool, by whom, and with what result?
Attorney supervision obligations under Rules 5.1 and 5.3 apply to AI tool use. If non-attorney staff are using an AI tool with client information, the supervising attorney is responsible for ensuring that use is appropriate. That supervision is difficult to exercise if the firm has no visibility into who is using the tool and what is being submitted.
What to verify: Audit logging of user activity at the organizational level. Ability to set usage policies, access restrictions, and approved-use parameters at the firm level. Role-based access controls. The presence of organizational admin controls is one of the clearer signals distinguishing tools designed for professional-firm use from tools designed for individual consumers.
Special Consideration: AI Meeting and Transcription Tools
AI meeting transcription and summarization tools are among the most rapidly adopted AI products at law firms. They carry a combination of considerations that do not apply to other AI tools — and those considerations should be addressed separately before any such tool is used in connection with client matters.
Recording law and consent. Recording a conversation — whether by traditional recording device or AI transcription — requires consent under applicable state law. State recording laws vary significantly: some require only one-party consent; others require all parties to consent. The consent requirement applies regardless of whether the recording is processed by AI. Before using any AI transcription tool in client calls, the firm must confirm which law governs and whether required consent has been obtained.
Client disclosure and professional responsibility. Separate from recording-law compliance, there is a professional practice question: should clients be informed that AI transcription is in use for their calls? ABA Formal Opinion 512's communication duty is relevant here. Even where recording consent is not legally required for all parties, disclosure to the client that AI transcription is in use is generally the professionally appropriate default — and appropriate to reflect in engagement letters or retainer agreements where the firm uses transcription tools routinely.
Vendor handling of transcripts. Meeting transcripts are concentrated client information: parties, matter context, strategic discussion, client communications. The vendor review process described in this article applies here exactly as it does for any other AI tool processing client data. Consumer-tier transcription tools typically store transcript content on vendor servers under whatever terms govern the product configuration in use. Those terms should be verified using the same criteria as any other client-information tool.
Tool approval status. A transcription tool used in client calls must have been reviewed and approved under the firm's AI/tool approval process before use — not after the firm discovers it has been in use informally. The recording-law and professional-responsibility dimensions make pre-use review particularly important for this tool category.
The Approval Workflow
Diligence answers need to result in a documented approval decision, not just a private conclusion. The six steps below constitute a workable approval sequence for any AI tool being considered for client-information use.
| # | Step | What happens / required to advance | Who owns it | Stop condition |
|---|---|---|---|---|
| 1 | Intake & classification | Tool is identified. Classify the intended use: administrative, sanitized/client-adjacent, or client-information. If client-information use is intended, proceed to full review. If not, apply lighter review focused on policy compliance. | Requestor + AI governance owner | Prohibited use category → stop, do not proceed. |
| 2 | Diligence checklist | Run the full checklist: training exclusion, retention terms, vendor staff access, DPA existence, security assurance (SOC 2 Type II or equivalent), organizational admin controls. Document answers, not summaries. | AI governance owner (+ IT where applicable) | Training exclusion not confirmed in writing → stop. No DPA available or offered → stop. |
| 3 | Contract review | Confirm the DPA, Enterprise Terms, or equivalent instrument covers: confidentiality obligations, processing restrictions, security commitments, subprocessor controls, deletion/exit terms, breach notice timeline. | Attorney or qualified reviewer | Required provisions absent and not negotiable → stop. |
| 4 | Escalation check | Flag any open items: regulated-industry client considerations not addressed, unusually broad retention terms, unclear staff-access restrictions, vendor declined to confirm a material item. These require resolution or documented acceptance before approval. | AI governance owner | Unresolved material escalation item → do not approve pending resolution. |
| 5 | Documented approval decision | Record the approval: tool name, approved use scope, date, reviewer, basis for approval, any conditions or restrictions. Add to the firm's approved-tool list. Approval is scoped — a tool approved for administrative use is not approved for client-information use without a separate review. | Named AI governance owner (not "the firm") | No named owner for the approval record → not approved. |
| 6 | Re-review scheduling | Set a re-review date (minimum: 12 months out). Record the trigger conditions that would require an earlier reassessment. The approval owner is responsible for confirming the re-review runs on schedule. | AI governance owner | No re-review date set → approval is incomplete. |
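The stop conditions in the table can be sketched as sequential gates. Everything here is a hypothetical model of the six-step sequence — the record fields and the 12-month default are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DiligenceRecord:
    """Hypothetical record of diligence answers for one tool under review."""
    tool_name: str
    training_exclusion_confirmed_in_writing: bool   # step 2 stop condition
    dpa_or_equivalent_available: bool               # step 2 stop condition
    required_contract_provisions_present: bool      # step 3 stop condition
    open_escalation_items: list = field(default_factory=list)  # step 4
    approval_owner: str = ""                        # a named person, not "the firm"
    approved_scope: str = ""

def approval_decision(r: DiligenceRecord, approval_date: date):
    """Return (approved, reason, re_review_date), checking stops in workflow order."""
    if not r.training_exclusion_confirmed_in_writing:
        return (False, "stop: training exclusion not confirmed in writing", None)
    if not r.dpa_or_equivalent_available:
        return (False, "stop: no DPA or equivalent instrument offered", None)
    if not r.required_contract_provisions_present:
        return (False, "stop: required contract provisions absent", None)
    if r.open_escalation_items:
        return (False, "pending: " + "; ".join(r.open_escalation_items), None)
    if not r.approval_owner:
        return (False, "not approved: no named owner for the approval record", None)
    # Step 6: approval is incomplete without a scheduled re-review (min. 12 months).
    return (True, f"approved for scope: {r.approved_scope}",
            approval_date + timedelta(days=365))
```

Note that a passing decision still carries a re-review date — the sketch mirrors the rule that an approval with no re-review scheduled is incomplete.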
Approval Is Not Permanent: Ongoing Monitoring
Provider terms change without advance notice. Product configurations are updated. A training exclusion that existed in one product version may not survive a platform update. A retention default that was 30 days may become 90. A firm's use cases expand — a tool approved for administrative use gets deployed for matter-specific work without a fresh review.
The firm should establish a reassessment trigger: at minimum, annually for all approved client-information tools, and immediately when a vendor announces material changes to its terms, data practices, or product architecture. The attorney who owns the approval also owns the periodic check that the controls approved at the start are still in place.
Re-review checklist — triggers that require immediate reassessment (in addition to the annual cycle):
- Vendor announces changes to its terms of service, privacy policy, or data-handling practices
- Vendor announces a material product architecture update, platform migration, or change in AI model infrastructure
- Firm expands the tool's use scope beyond the approved category (e.g., administrative use extended to client-information use, or addition of a new matter type)
- Security incident reported at the vendor — breach, unauthorized access, or significant vulnerability disclosed
- New ABA formal opinion, state bar guidance, or relevant regulatory requirement affecting AI tool use at law firms
- Vendor is acquired, merges, or undergoes significant ownership or operational change
- Firm's client base changes in a way that alters the sensitivity profile — e.g., addition of regulated-industry or government clients
The re-review does not require repeating the full diligence process if no material terms have changed. It requires confirming that the controls documented at approval are still in place and that no trigger condition has occurred without the firm's awareness. Firms that do not establish this cycle discover term changes after they have already been in effect for months.
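The trigger logic above — annual by default, immediate when any trigger fires — can be sketched in a few lines. The trigger names are shorthand for the bullet list and are illustrative:

```python
from datetime import date

# Shorthand flags for the re-review triggers listed above (illustrative names).
TRIGGERS = frozenset({
    "vendor_terms_changed",
    "vendor_architecture_change",
    "use_scope_expanded",
    "vendor_security_incident",
    "new_bar_guidance",
    "vendor_ownership_change",
    "client_sensitivity_profile_changed",
})

def re_review_due(next_scheduled: date, today: date, fired_triggers: set) -> bool:
    """Due on the scheduled annual date, OR immediately when any trigger fires."""
    unknown = fired_triggers - TRIGGERS
    if unknown:
        raise ValueError(f"unrecognized trigger(s): {sorted(unknown)}")
    return today >= next_scheduled or bool(fired_triggers)
```

The design choice worth noting: triggers and the calendar are independent conditions joined by OR, so a fired trigger forces reassessment no matter how recently the annual review ran.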
When the Answer Should Stop You
Unmanaged consumer accounts on general-purpose AI tools should generally be presumed inappropriate for client information — not because consumer products are technically inferior, but because consumer terms are written for individual users, not professional service firms with confidentiality obligations. No training exclusion, no data-handling agreement, no admin controls, no organizational audit logging. The use may be convenient; the terms are not right for the use case.
Business or enterprise accounts on the same platforms may address these gaps — but only if the relevant provisions are actually confirmed. The tier label does not do that work. A firm that assumes enterprise plan means client-information-safe, without verifying the specific terms and settings, has closed the process without completing it.
The practical failure mode is rarely a conscious decision to use the wrong tool. It is individuals adopting useful tools, nobody running the approval process, and the firm discovering the gap afterward. A policy that names which tools are approved — and requires review before any new tool touches client information — prevents most of this.
Due Diligence Checklist
For each AI tool under review for client-information use, confirm:
- Is training/model-improvement use of customer inputs contractually excluded?
- Is the retention period for submitted content defined and acceptable? Is deletion-on-demand available?
- Is there a signed or available data-handling agreement (DPA, Enterprise Terms, or equivalent) that covers the protections listed above?
- Does the agreement include confidentiality obligations, processing restrictions, and security commitments specific to the firm's data?
- Are subprocessors identified? Is the firm's data subject to onward transfer restrictions?
- Is vendor staff access to firm content limited and defined in the agreement?
- Does the vendor have a SOC 2 Type II assurance report (or equivalent third-party security assessment) covering the relevant service?
- Does the firm have organizational admin controls — audit logging, role-based access, usage policy enforcement?
- Is there a defined breach and incident notice obligation in the agreement, with a specific timeline?
- Are deletion, return, and exit terms defined for contract end?
- If the tool involves meeting transcription: has recording-law consent been confirmed? Is client disclosure policy established?
- Is the approval documented, with a named owner and a scheduled re-review date?
A "no" or "unverified" on training exclusion, data-handling agreement, or retention terms is typically sufficient reason not to use the tool for client information, regardless of how capable the tool is. Those three are the minimum threshold. The rest of the checklist determines depth of scrutiny beyond the threshold.
This article is not legal advice and does not constitute legal or ethics guidance. The framework above is an operational guide based on published ABA guidance, not a legal opinion on what any specific firm's obligations require. Rules of professional conduct vary by jurisdiction; ABA formal opinions address the Model Rules, which individual states may have adopted with modifications. Firms should consult qualified legal ethics counsel before making policy decisions about AI use. Songbird Strategies is a consulting firm, not a law firm. See Sources & Notes for the primary authority cited.