What Change Management Actually Looks Like
When someone brings up “change management models,” especially around implementing new software or digital systems, it can sound super theoretical. But if you’ve ever been in the room when a team gets handed a new ticketing tool or CRM, you know the real change management isn’t on slides—it’s in Google Calendars, buried Slack threads, and the confusion on Linda’s face when her tool sidebar disappears overnight.
At its core, change management is the intentional planning and execution of how people adapt to a shift—usually tech-driven—at work. Models like ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement) and Kotter’s 8 Steps are commonly thrown around. But those are only useful if your team can actually slot their daily habits into those phases. Otherwise, you’re stuck with nicely labeled buckets that don’t reflect the fact that your senior developer refuses to open another app unless someone threatens their coffee budget.
Here’s where digital adoption comes in. Tools like WalkMe or Pendo step in to create interactive product experiences that guide users during onboarding. But unless you’ve mapped the right model to the actual friction points (and I mean literally, which screen someone rage-quits on), you’re just swapping out one complex tool for another.
Here’s a quick visual breakdown of what each model targets:
| Model | Focus Area | Typical Use Case |
|---|---|---|
| ADKAR | Individual behavior change | Training teams on new tools |
| Kotter’s 8-Step | Organizational vision and urgency | Company-wide system migrations |
| Lewin’s Unfreeze-Change-Refreeze | Psychological readiness and reinforcement | Long-term process reengineering |
So if you’re working on a rollout, don’t just pick a model—start from users’ friction points, not upstream frameworks that assume consensus exists.
The bottom line is, skip the jargon and anchor your model choice around real user pain.
How to Map Change Models to Digital Rollouts
This is where a lot of teams mess it up. They pick a change model because it looks solid on paper, not because it reflects the chaos that actually happens after a tool rollout. For any model to work, it needs to be mapped onto the digital product’s interactions, not just a pretty roadmap on Confluence.
Let’s walk through what that mapping looks like. Say you’re introducing a new inventory system across five warehouses. You’ve chosen the Kotter model (because of its eight sequential steps), but here’s how you’d translate that into actual on-screen behavior:
- Step 1 – Create urgency: Run site-level dashboards showing manual entry errors to make the pain visible.
- Step 2 – Form a coalition: Recruit floor managers from each warehouse to test the new interface and log UX blockers.
- Step 3 – Create a vision: Design a clickthrough prototype and circulate it—not a 12-slide strategy doc.
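One way to keep that translation honest is to store the mapping as data the rollout team can audit. Here is a minimal Python sketch: the step names follow Kotter, but every action, owner, and field name is an illustrative assumption, not a standard schema.

```python
# Hypothetical mapping of Kotter steps to concrete in-app actions.
# Step names follow Kotter; actions, owners, and keys are illustrative.
KOTTER_ROLLOUT_MAP = {
    1: {"step": "Create urgency",
        "in_app_action": "error-rate dashboard shared per site",
        "owner": "ops"},
    2: {"step": "Form a coalition",
        "in_app_action": "floor managers log UX blockers in a shared board",
        "owner": "warehouse leads"},
    3: {"step": "Create a vision",
        "in_app_action": "clickthrough prototype circulated for comment",
        "owner": "product"},
}

def next_unowned_step(rollout_map):
    """Return the first step number missing an owner, or None if all are covered."""
    for num in sorted(rollout_map):
        if not rollout_map[num].get("owner"):
            return num
    return None
```

The point of the audit function is that a step without a named owner is a step that will silently stall.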
Each step needs an equivalent action inside the digital adoption landscape. If your tool doesn’t let you attach walkthroughs to new releases (Pendo, for example, lets you do this dynamically), then your change momentum dies when users get stuck on step two of a form.
When ADKAR is mapped, that often means creating “nudges” inside the app—for example, showing a tooltip when a user repeatedly visits old layouts. Reinforcement, the last stage in ADKAR, can be handled with badges, feedback surveys at repeat intervals, or heatmaps on feature usage.
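That “old layout” nudge boils down to a threshold rule over a user’s screen-visit events. A toy sketch, assuming a simple event stream of screen names; the screen names and the threshold of three are made-up values:

```python
def should_nudge(visit_log, legacy_screen, threshold=3):
    """Fire a tooltip nudge once the user has opened the legacy screen
    `threshold` or more times. visit_log is a list of screen names in
    visit order (an assumed event-stream shape, not a real tool's API)."""
    return visit_log.count(legacy_screen) >= threshold
```

In a real adoption platform this rule would live in its segmentation or activation settings; the logic is the same either way.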
In essence, the model only matters if your toolset can adapt to it.
Finally, the mistake I see most often is moving too fast between steps, or never checking whether users are actually at the step you think they’re at. You implemented “Ability”? Show me the training completion dashboard that backs that up.
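A concrete way to enforce that check is to gate each stage on its backing data. A minimal sketch for “Ability”, where the 80% completion threshold and the data shape are assumptions, not part of ADKAR itself:

```python
def ability_confirmed(completions, team_size, min_rate=0.8):
    """Only mark the ADKAR 'Ability' stage done once enough of the team
    has finished training. completions: set of user ids who completed;
    min_rate is an arbitrary bar, tune it per rollout."""
    if team_size == 0:
        return False
    return len(completions) / team_size >= min_rate
```

The same gate pattern applies to any stage: pick one measurable signal and refuse to advance without it.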
At the end of the day, alignment between model steps and what users actually do is what makes or breaks adoption.
Choosing the Right Model for Your Team
Not every team needs Lewin’s unfreeze-refreeze approach—and not every rollout will benefit from trying to push a vision down people’s throats. The choice comes down to three questions:
- Are we introducing this system to lots of frontline users?
- Is the resistance mostly behavioral or structural?
- Do we need short-term compliance or long-term transformation?
If you’re rolling out to customer support agents who’ve been in Zendesk for a decade and are now being shifted to customer.io, you’re going to deal with behavioral inertia, not strategic confusion. Here, ADKAR typically outperforms Kotter. The focus on ability (do they know how) and reinforcement (what keeps them there?) is what makes or breaks day-to-day productivity.
But if leadership is trying to push a Salesforce rollout companywide, Kotter’s steps—starting with urgency and vision—give a better scaffold for comms strategy, milestones, and phased exec buy-in. You can’t just throw WalkMe tooltips at executives and hope they emotionally engage with the CRM.
This also happens when smaller companies copy enterprise change models—which usually overcomplicate what could’ve been resolved with a Figma walkthrough and one live Q&A.
The wrong fit wastes time, frustrates the team, and leaves behind bitter Slack threads about “who approved this unnecessary change.”
Overall, the smartest model is the one that fits your scale, your tech literacy, and your org’s appetite for feedback loops.
Digital Tools That Support Change Models
If you’ve picked a model and you’re halfway competent with your rollout timeline, your next headache is tooling. Because let’s be honest: you can’t implement ADKAR or Kotter using PDF attachments and hope it scales.
Tools like WalkMe, Pendo, and Whatfix allow you to overlay customized guidance on top of the app interface. This is huge when you’re handling “Ability” or “Knowledge” phases—stuff like contextual onboarding, tooltips that change per role, or step-by-step walkthroughs after a new deployment.
But here’s where things get messy. Just because a tool offers change model integrations doesn’t mean it respects them. I’ve seen clients set up WalkMe flows with five onboarding steps, only for most users to rage-click through by the third screen. Why? Because that step ignored whether users had even reached the “Desire” phase yet (in ADKAR terms).
This is where integration matters. If your apps like Salesforce or Jira don’t integrate properly with these adoption frameworks, you’re going to spend more time chasing UI bugs than helping users adapt. I once saw a Whatfix install where the overlay arrows pointed to the wrong button due to a permissions mismatch. It broke trust instantly.
Also, avoid relying fully on automation. Human feedback loops are part of the model—run weekly short polls, watch behavior via session recordings, and adjust your tooltip sequence accordingly. In one rollout I watched, the real reason adoption tanked was because half the team had dark mode enabled, which broke the widget text contrast. 🫠
So yes—pick the right tool. But internal testing and feedback-hunting should never be optional.
To wrap up, your tool should amplify behavior change—not just decorate the interface.
Tracking Progress Through the Change Stages
Implementing a change model without measurement is like driving blindfolded while your GPS says, “You should be fine.” Most teams forget to map each phase—“Awareness,” “Ability,” or “Urgency”—to actual behaviors they can track.
For teams using ADKAR, measurement often happens through a mix of surveys, behavior analytics, and simple checklists:
- Awareness: Check email open rates or track onboarding completion time in the LMS.
- Desire: Poll users after intro calls: “Do you want this change?”
- Knowledge: Examine how often your walkthroughs are skipped. High skip rates = low engagement.
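The skip-rate check in the last bullet is easy to compute from raw walkthrough events. A small sketch, assuming events arrive as (user, action) pairs; the action names are invented for illustration:

```python
def skip_rate(events):
    """events: list of (user_id, action) tuples where action is
    'started', 'completed', or 'skipped' (assumed event vocabulary).
    Returns the fraction of started walkthroughs that were skipped."""
    started = sum(1 for _, action in events if action == "started")
    skipped = sum(1 for _, action in events if action == "skipped")
    return skipped / started if started else 0.0
```

Anything above roughly half is a sign the walkthrough is landing before users have any reason to care.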
With Kotter’s model, you’re dealing with bigger organizational changes, so try these:
- Create Urgency: Do Slack threads start surfacing the existing system’s flaws?
- Build Short-Term Wins: Show dashboard metrics climbing (fewer support tickets, faster workflows).
- Anchor Changes: How often do new hires get trained in the new tool versus old workarounds?
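For “Build Short-Term Wins,” one simple, auditable signal is a sustained drop in support tickets. A sketch, with the three-week window as an arbitrary assumption:

```python
def short_term_win(weekly_tickets, window=3):
    """True if support-ticket counts have fallen for `window` consecutive
    weeks: one measurable stand-in for a Kotter 'short-term win'."""
    recent = weekly_tickets[-(window + 1):]
    if len(recent) < window + 1:
        return False  # not enough history to call it a trend
    return all(recent[i + 1] < recent[i] for i in range(window))
```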
But here’s where it gets tricky. Metrics often lie. A walkthrough might show a high completion rate while users are simply clicking through it to get it out of the way. Ask yourself: what are they actually doing differently as a result?
Use heatmap tools like Hotjar (or FullStory for enterprise setups) to see whether behavior is aligning with your model stage. If people are hovering around old workflows daily, your change adoption isn’t sticking—regardless of how many badges your LMS gave them.
The bottom line is, good tracking makes each step visible. Otherwise, you’re just hope-planning.
Why Most Change Models Fail Post-Launch
Even with the best tools and crisply organized models, a surprising number of change initiatives crash and burn within weeks. Why? Because people stop managing the model after the launch phase.
ADKAR’s final R—“Reinforcement”—often gets treated like an optional dessert. Teams figure installing the tool was the main job. But unless you’re periodically running behavior checks and adapting onboarding content, user habits slide fast.
Another big reason? Lack of feedback loops. I’ve seen dashboards where everyone checks “onboarding completion” but no one links those to error rates or support tickets. If your reporting tool stops at who clicked “Done,” you’re missing whether their job got easier.
This also happens when teams split ownership mid-implementation. A change manager runs rollout planning, then tosses ongoing monitoring to IT or ops without enough briefings. Now no one’s owning Reinforcement activities.
Plus, context shifts. New hires come in. Managers change strategies. If you’re not revisiting the model steps based on shifting org context, your early momentum gets stale fast.
Finally, there’s what I call the “numb user syndrome”—where people build tolerance to walkthroughs and notifications and start ignoring them. If you don’t test variants, adjust timing, or reduce dependency on in-product popups, user blindness becomes your new enemy.
To sum up, what breaks a model isn’t tools or frameworks—it’s disappearing after launch day and hoping for the best.
Best Practices for Future-Proofing Digital Adoption
The model you choose shouldn’t just work for this one software rollout—it should become a reusable backbone. And that means documenting everything, leaving hooks for future teams, and maintaining user trust.
First up: modular tool setups. Don’t hardcode walkthroughs or tooltips. Use tools that update content dynamically based on user roles or previous clicks (Pendo and Whatfix do this quite well). This saves future teams from reinventing the wheel.
Second: build a feedback infrastructure. We built a recurring survey inside Slack that tagged users based on app usage segments. That way we avoided the angry “Why do I keep getting this?” DMs and got the right user pain insights weekly.
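The segment-tagging part of that setup is just a bucketing step over usage counts. A simplified sketch; the cutoffs and segment names are invented for illustration, and a real version would read segments from your analytics tool:

```python
def survey_recipients(usage, low_cutoff=2, high_cutoff=10):
    """Split users into segments by weekly session count so each group
    gets a survey phrased for its usage level. usage: {user: sessions}.
    Cutoffs are arbitrary examples."""
    segments = {"dormant": [], "casual": [], "power": []}
    for user, sessions in usage.items():
        if sessions < low_cutoff:
            segments["dormant"].append(user)
        elif sessions < high_cutoff:
            segments["casual"].append(user)
        else:
            segments["power"].append(user)
    return segments
```

Dormant users get a “what’s blocking you?” question; power users get feature-depth questions. Same survey cadence, different pain probed.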
Third: train for skills, not steps. Walkthroughs should teach patterns—where to find help, where settings usually are—not just next-next-click-here. That’s what builds resilience during future UI changes.
Fourth: visually map each model phase to your adoption timeline in tools like Whimsical or Miro. That way, when things go wrong, you can pinpoint which phase got skipped and why.
Finally, measure outcome over completion. Completion just means they saw it. Outcome means it made something easier. Your KPIs need to be about task speed, error drops, re-engagement—not just walkthrough ends or login count.
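Measuring outcome over completion can be as simple as comparing task metrics before and after rollout. A sketch, assuming lower-is-better metrics like task time and error count (the metric names are examples, not a standard):

```python
def outcome_delta(before, after):
    """Compare task metrics before/after rollout. Each dict maps a
    metric name to its value; a positive ratio means improvement for
    lower-is-better metrics (task seconds, error count). Metrics with
    no 'after' value or a zero baseline are skipped."""
    return {k: (before[k] - after[k]) / before[k]
            for k in before if k in after and before[k]}
```

If every walkthrough shows 95% completion but `outcome_delta` hovers near zero, you decorated the interface without changing behavior.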
As a final point, future-proofing works when reinforcement becomes habit—not rescue.