Platform Strategy & Implementation

Is Your Legal Tech Implementation Actually Working? A Post-Go-Live Audit

A 12-item audit, five failure mode diagnoses, and a repair-vs-replace decision test for firms six months past go-live.

You bought a platform. It went live. Nine months later, half the staff are using it the way it was intended, a third have turned it into an expensive contact database, and two people are still running parallel spreadsheets because the transition "never quite finished." No one is complaining loudly. Adoption is drifting.

The platform is probably not broken. The implementation likely failed — or never fully happened. These are different problems with different fixes.

A 14-attorney litigation firm implemented a new case management platform eight months ago. Data migration completed on schedule. The vendor ran a three-hour training the week before go-live — against a calendar that included two active trial closes. By November, the associate who volunteered as implementation owner had rotated to a new matter load. Partners used the platform for matter review. Paralegals entered data inconsistently. Three timekeepers were still logging hours in a shared spreadsheet "until they figured out the billing workflow." No one had defined what good adoption looked like at 30 days, much less 90.

The platform was not the problem. Clio's 2022 Legal Trends research found that satisfaction with legal technology correlates strongly with whether the firm adapted its workflows to use the platform consistently — not with which product was purchased. The same platform, properly implemented, would have worked. Improperly implemented, it is a monthly expense with growing resentment attached.

The Five Failure Modes

Not every implementation fails for the same reason. The five failure modes below are distinct — and identifying which one (or which combination) explains your situation is what determines the fix.

FM1 — Bought for features, not workflows
  Signal you're in it: Staff cannot explain which firm process the tool replaced or improved. The configuration does not match how the firm actually works.
  Where the fix starts: Map the current workflow against the tool's capability. Identify the mismatch. Redesign the workflow or acknowledge the scope limits.

FM2 — No designated owner
  Signal you're in it: No one is currently accountable for adoption metrics. Questions go unanswered. The implementation has "paused." This is the most common failure mode in mid-size firm implementations.
  Where the fix starts: Designate a specific person — not a committee — with authority to make configuration decisions. Define their mandate and a 60-day goal.

FM3 — Data migrated, processes not
  Signal you're in it: The platform holds data, but workflows still run in email, spreadsheets, or prior systems. The tool is used reactively, not as the system of record.
  Where the fix starts: Audit which workflows run inside vs. outside the platform. Build transition documentation for each workflow still running outside.

FM4 — Trained on the tool, not the workflow
  Signal you're in it: Staff know where features are but use them differently from each other. Intake or matter workflow varies by person.
  Where the fix starts: Document the firm's intended process inside the tool. Retrain on that process using real firm scenarios — not the vendor's generic tutorial.

FM5 — Went live on a bad window
  Signal you're in it: The first two weeks were chaotic. Staff reverted immediately. The "we'll get back to it" moment never came.
  Where the fix starts: The initial window cannot be undone. Run a fresh adoption push: a protected two-week focus window, clean re-training, the owner re-activated, and adoption metrics set in advance.

The 12-Item Post-Go-Live Audit

Use the audit below to assess your current implementation. Answer each item Yes / Partially / No. The scoring guide follows.

1. Workflow mapping before purchase
   "Yes" looks like: The process this tool affects was documented before the platform was selected.
   If No or Partially → FM1: Bought for features, not workflows.

2. Named owner accountable today
   "Yes" looks like: A specific person — not a committee — is responsible for adoption outcomes right now and is tracking metrics.
   If No or Partially → FM2: No designated owner.

3. System configured before training began
   "Yes" looks like: Staff were not trained on an unconfigured system; templates, automations, and matter types were built and tested first.
   If No or Partially → FM3 and FM4.

4. Training covered the firm's workflows, not just the product
   "Yes" looks like: Training used the firm's actual practice scenarios, not the vendor's generic tutorial.
   If No or Partially → FM4: Trained on the tool, not the workflow.

5. Go-live window was protected
   "Yes" looks like: Go-live was scheduled in a low-volume period, and the implementation owner was available for the first two weeks.
   If No or Partially → FM5: Went live on a bad window.

6. Adoption metrics defined before go-live
   "Yes" looks like: The firm specified what "good" looked like at 30 and 90 days before go-live — not after.
   If No or Partially → Success criteria are missing, and remediation is unfocused.

7. 30-day review completed and acted on
   "Yes" looks like: Adoption data was formally reviewed at 30 days, and gaps were addressed within that window.
   If No or Partially → Deferred problems have compounded.

8. Platform used for its intended workflows
   "Yes" looks like: Staff are using the platform for the specific workflows it was purchased to support — not just as a data repository.
   If No or Partially → Identify which workflows are not adopted and why.

9. Shadow systems eliminated
   "Yes" looks like: Parallel processes — spreadsheets, shared drives, email chains — have been retired for the workflows the platform was meant to replace.
   If No or Partially → FM3: Data migrated, processes not.

10. Current adoption rate known
    "Yes" looks like: Someone can state the percentage of staff using the platform as intended today — an actual number, not a guess.
    If No or Partially → You cannot manage what is not measured.

11. Staff can articulate the "why"
    "Yes" looks like: Staff can explain why this platform was chosen over the previous process — what it does better and why the old method was retired.
    If No or Partially → FM2, or a social adoption gap.

12. Failure mode diagnosis complete
    "Yes" looks like: The firm has identified which of the five failure modes explains the current gaps.
    If No or Partially → Remediation remains unfocused.

Scoring

  • 10–12 Yes: The implementation is structurally sound. Work the specific No/Partially items — each maps directly to a failure mode with a defined fix.
  • 7–9 Yes: The implementation is partially complete. Identify your No/Partially items, note which failure mode each maps to, and work through the remediation guide below in that order.
  • Below 7 Yes: The implementation has fundamental structural gaps. Before deciding to replace the platform, designate a named owner and give them 60 days to address the identified failure modes against defined metrics. Do not conclude the platform is wrong until the implementation has actually been run.

Failure Mode to Remediation

FM1 — Features, not workflows
  How to confirm it: Staff cannot name which firm process the tool replaced; the configuration doesn't match how the firm actually operates.
  Remediation: Document the current workflow for each process the tool was meant to affect. Map it against the tool's actual configuration. Identify mismatches and either redesign the workflow or scope the tool to what it actually fits.
  Timeline: 2–4 weeks.

FM2 — No designated owner
  How to confirm it: No one is accountable for adoption metrics right now; implementation decisions are unmade; the rollout has "paused."
  Remediation: Name a specific person with authority to make configuration decisions. Define their mandate, adoption target, and a 60-day check-in. Make accountability visible — the owner should be able to report the adoption rate on request.
  Timeline: 1 week to designate; 60 days to results.

FM3 — Data migrated, processes not
  How to confirm it: The platform holds data, but workflows still run in email, spreadsheets, or prior tools; the system is looked up, not used.
  Remediation: List every workflow that was supposed to move into the platform. For each one still running outside: build transition documentation, set a migration date, and retire the parallel tool on that date.
  Timeline: 3–6 weeks per workflow cluster.

FM4 — Tool training, not workflow training
  How to confirm it: Staff know where features live but use them differently; intake or matter workflow varies by person — five different approaches to the same task.
  Remediation: Write the firm's intended process — step by step, in your own language, not the vendor's tutorial. Retrain each role on that process using real firm scenarios. Keep the written process available as a reference.
  Timeline: 2–3 weeks.

FM5 — Bad go-live window
  How to confirm it: The first two weeks were chaotic; staff reverted to prior tools immediately; "we'll get back to it" never happened.
  Remediation: The original window cannot be undone. Run a structured re-launch: (1) pick a low-volume two-week window, (2) re-run role-specific workflow training, (3) make the implementation owner available daily for questions, (4) define the adoption target before the window opens, (5) retire parallel tools on day one of the window.
  Timeline: a two-week focused push.

The Repair-vs-Replace Decision

The instinct to replace a platform that isn't working is understandable — and occasionally correct. But replacing a platform that failed due to implementation problems produces the same outcome on a new platform. And in virtually every case, a migration costs more in time, disruption, and money than a targeted implementation restart (ABA, 2024).

Before concluding you need a new platform, work through the three questions below in order. The first "replace" answer you reach ends the test.

Question 1: Has the platform been properly implemented — meaning a named owner ran the full rollout, staff were trained on the firm's workflows, and adoption was measured at 30 and 90 days?
  If No → Repair. The platform hasn't been given a fair test. Run the implementation before evaluating the platform.
  If Yes → proceed to Question 2.

Question 2: Is the platform architecturally capable of supporting the firm's core workflows, or is the mismatch fundamental and beyond what configuration can fix?
  If the mismatch is fundamental (e.g., a document management tool being used as case management) → Replace. Configuration cannot fix an architectural mismatch.
  If it is a configuration mismatch (the tool can do what's needed but isn't set up to) → Repair. Fix the configuration and re-run the implementation.

Question 3: Has a named owner with real authority run a targeted 60-day implementation restart — with defined adoption metrics — and adoption is still below 50%?
  If No (it hasn't been tried) → Repair. Run the 60-day restart before drawing conclusions.
  If Yes (tried with real accountability and metrics, and adoption is still below 50%) → Replace. The case for replacement is now well-founded.

The time already spent on the current platform is not a reason to stay. The migration cost is. Firms that conflate those two make worse decisions in both directions.

This article reflects Songbird Strategies' operational observations from working with law firms on platform selection and implementation. It is not legal advice. Claims referencing survey data are sourced to published reports; firm-level observations represent practitioner judgment. See Sources & Notes for full source documentation.

Know Which Failure Mode You're In?

The audit above identifies the gap. The repair-vs-replace test tells you whether you're looking at a recoverable implementation or a platform mismatch. If you scored below 7 — or if the adoption rate is unknown — the gap is almost certainly recoverable, but closing it requires a named owner with a defined mandate and 60 days. A strategy call can identify which failure mode applies and what a targeted restart looks like.

Find the Right Platform →
Book a Free Strategy Call

30 minutes. No sales pitch.