Making AI Adoption Work: A Practical Series - Part 2
Welcome to FullStack HR, and an extra welcome to the 102 people who have signed up since last week.
If you haven’t yet subscribed, join the 10,000+ smart, curious HR folks by subscribing here:
Happy Tuesday,
For the first time this fall, things are slowing down a bit for me. I’m not doing seven lectures this week, for example, as I’ve done the last three (!!) weeks.
But the highlight of last week was a full-day workshop on how AI could change processes. Fifty people participated, and they gave the day a 4.84 score on a scale of 0 to 5 (!!!).
We really made it practical, concrete and viable. Way beyond just “this is how you can use ChatGPT.”
I love doing these. It takes a lot of energy and preparation, but having a room that engaged and eager to truly work with AI adoption is insanely fun!
Want a workshop like this for your organization?
I’m opening up my calendar for a few strategy calls in late October/ early November.
Book 15 minutes here and I’ll walk you through:
What a full-day workshop looks like (and why it gets 4.84/5 ratings)
How we make it concrete and applicable to your specific processes.
Whether it’s the right fit for where your organization is right now.
No pressure. Just a conversation about what’s possible.
Grab a time that works for you →
Now on to the article!
This is Part 2 of Making AI Adoption Work: A Practical Series.
In Part 1, we covered the critical foundations, including leadership buy-in, knowledge types, structure, and ownership. If you don’t have those in place, stop reading and go back. Seriously.
But let’s say you’ve got the foundations. Leadership is on board. You’ve got the mandate and structure. Now what?
Now comes the part most organizations completely screw up (yes, screw up): Building competence across your organization.
The System for Building AI Competence That Actually Sticks
Most organizations think buying licenses equals adoption. Buy Copilot for everyone, send a 60-minute intro email, and boom. Done and dusted.
This costs organizations massive amounts in wasted license fees, low adoption rates, and missed productivity gains. I’ve seen it repeatedly: organizations invest tens of thousands in tools but nothing in training, then wonder why adoption sits at 4.76%.
It’s a fundamental flaw thinking tools drive adoption. They don’t.
Competence and skills drive adoption.
Research shows, and look, I’ve seen this across dozens of organizations now, that having a manager who uses AI is the single biggest predictor of employee adoption. Not tool choice. Not fancy features. Whether your boss uses it.
Step 1: Choose Your Tool Intentionally (Don’t Just Take the Default)
“Okay, so we do talk about tools? You just said tools don’t drive adoption.” True. But you still need a tool to drive adoption forward: no tool, no adoption. So choosing the right one is the foundation for building competence. Choose wrong here, and you’ll be fighting uphill.
Most organizations take the lazy option. “We’ve got Microsoft licenses, we get Copilot bundled, let’s just use that.” I get it. It’s there. It’s “secure.” It’s in your existing environment.
But here’s what I think you should do instead:
Evaluate multiple models for your specific use cases. What gives you the most bang for the buck? Not what’s convenient. What solves your problems best?
Also look at user data:
How many actually use Copilot at work if you already have it?
Look at outgoing DNS traffic on your Wi-Fi network: is chatgpt.com visited more often than Copilot’s domain? (Strong hint about what people are actually using.)
How many use their Outlook and Office products fully?
What’s realistic adoption if you put effort behind it?
Do a small pilot on another tool. Evaluate properly. Make active choices, not passive ones.
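If your network gear can export DNS query logs, the comparison above takes only a short script. Here’s a minimal sketch, assuming a hypothetical CSV export with one queried domain per row in a `domain` column (the domain-to-tool mapping is illustrative, not exhaustive):

```python
from collections import Counter
import csv

# Illustrative mapping from domains to AI tools. Extend as needed;
# these are well-known public domains, but which ones matter to you
# depends on the tools under evaluation.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "openai.com": "ChatGPT",
    "copilot.microsoft.com": "Copilot",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
}

def count_ai_traffic(csv_path: str) -> Counter:
    """Tally DNS queries per AI tool, matching exact domains and subdomains."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().rstrip(".")
            for suffix, tool in AI_DOMAINS.items():
                if domain == suffix or domain.endswith("." + suffix):
                    counts[tool] += 1
                    break
    return counts
```

The output won’t tell you who is using what, only aggregate volume, which is exactly the signal you want at this stage.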
On the security question everyone obsesses about: You can sign DPAs (Data Processing Agreements) with most major providers now. ChatGPT Enterprise and Google’s Gemini can meet the same security requirements you get with Microsoft; “security” is often a lazy excuse from IT departments.
And yes, I know I’m pretty laissez-faire on this. I don’t think letting people use what they want is dangerous. But organizations vary wildly here. Some are super strict about only using Copilot. (Though when it comes down to it, they don’t actually follow up on it at all, more on this in Part 3.)
The point: Choose the model that’s most competent for solving your problems. Don’t just take the default because it feels safe.
Step 2: Build Your Training Tier System (Managers First, Always)
This is the diagnostic that determines success or failure.
If you’re going to invest in the more expensive models, a couple of hundred bucks per person per year, it’s absolutely insane not to invest in training people how to use them. That’s just throwing money away.
Ask three key questions:
Who needs to use this most effectively?
Who influences adoption across the organization?
What’s the minimum training that creates actual competence?
And in my book, as per usual, start with managers.
This is absolutely the single most important success factor. We know from studies that having a manager who uses AI dramatically increases employee adoption.
If your manager is using AI, it’s 4x more likely that you, as an employee, will use AI too. So start with managers: get them competent and using AI, so that when employees get training, they’re learning from someone who knows the tools and can guide and coach them on usage. Part 3 will be dedicated solely to managers and their adoption, but let’s cover the groundwork here.
How do you train people, then? I’m a big advocate for two-part manager training: workshops of minimum 3 hours total, ideally 2x3 hours:
Part 1: Building your own competence in using AI
Part 2: How to support your team in using AI
You can weave this into existing leadership programs, but managers go first. When they’ve gone through it, then you train employees. Here you can weave managers in as co-trainers or have them run sessions themselves. That builds even more credibility in the organization.
Step 3: Invest Real Time (Not Token Gestures)
Different groups need different time investments to build real competence.
For managers (the critical group):
Minimum: 2x3 hours to get real traction
Follow-up: 6 sessions over 6 months (one per month)
This gives you real effect, not just surface enthusiasm
For employees:
Minimum: 3 hours to get somewhere useful
Better: 3x3 hours if you want a real effect
For organizations going all-in: Look at Atlassian. They trained all 14,000 employees. 8,000 in tech, 6,000 in other functions. Everyone got minimum 2x3 hours. Everyone was required to do it.
Why? They see this as business-critical competence for their future. When you see it from that perspective - that it’s business-decisive for succeeding in the future - then of course people need to work with this.
The common thread: This isn’t optional professional development. It’s core competence for how work gets done now.
Step 4: Calculate Your ROI (It’s Easier Than You Think)
All of this, the tools and the training, require investments. And we need to recoup that investment somehow, I’m fully aware of that.
I once calculated what you need to recoup the investment: if you increase efficiency by 0.5% for one employee, you pay back the entire cost of a premium license like ChatGPT Enterprise over a year.
That’s how little productivity gain you need.
Add training costs - say 6 hours per employee at their hourly rate. Find 1-2% efficiency gain over a full year? You’ve paid back the training investment too.
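The back-of-the-envelope math above can be captured in a tiny payback calculator. A minimal sketch; the salary, license price, and working-hours figures in the example are illustrative assumptions, not numbers from the article:

```python
def months_to_payback(
    annual_salary: float,      # fully loaded annual cost of the employee
    license_cost_year: float,  # premium license, e.g. a few hundred dollars
    training_hours: float,     # e.g. 6 hours per person
    efficiency_gain: float,    # e.g. 0.01 for a 1% gain
) -> float:
    """Months until efficiency gains cover license + training costs."""
    hourly_rate = annual_salary / 1800          # assume ~1800 working hours/year
    training_cost = training_hours * hourly_rate
    total_investment = license_cost_year + training_cost
    monthly_gain = (annual_salary * efficiency_gain) / 12
    return total_investment / monthly_gain

# Illustrative: $60,000 salary, $300/year license, 6 hours of training,
# 1% efficiency gain -> roughly 10 months to payback.
```

Note that at a 0.5% gain with no training cost, the same formula gives about 12 months for a $300 license on a $60,000 salary, which matches the license-pays-for-itself-in-a-year claim.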
Starting point: Organization hesitant about training costs
Key calculation: 6 hours training per person, premium licenses
Actual results: 2% measured efficiency gains in first quarter
Lessons learned: Training investment paid back in under 4 months
Once people are actually trained, they find use cases you never thought of. The ROI compounds faster than your conservative estimates.
These efficiency levels aren’t hard to reach either. With a decent trainer? Pretty straightforward.
Step 5: Choose Your Trainer Intentionally (External vs Internal)
Focus on getting it done well, not just getting it done.
I’m an external trainer, so obviously, I think there’s value in bringing one in like me. (And I do train mostly managers these days!) But I also understand if it’s a cost question. As said above, you can pay back that cost pretty easily if you lift efficiency just a couple of percentage points.
But here’s what’s worth considering: It needs to be good. That’s what matters.
Set clear requirements and expectations:
What should the training lead to?
What’s the goal?
What should people feel and be able to do after?
Precise ways to measure training quality:
Can participants apply it to real work immediately?
Do they understand when to use AI vs when not to?
Can they troubleshoot their own problems?
For internal trainers: Make sure they’ve built deep competence themselves first. I’ve worked with organizations that thought they had good internal capability, then discovered it was surface-level. People were using ChatGPT a bit, but didn’t have the fundamental understanding to teach organizational use effectively.
For external trainers: Look for people who can connect to organizational context, not just explain how the tech works. That’s the difference between training that creates change and training that creates enthusiasm that fizzles.
When Adoption Still Doesn’t Happen (The Foundation Problem)
The most common roadblock: You’ve trained people, they seemed engaged, but three months later, usage is at 10%.
This means your foundation from Part 1 wasn’t solid enough. Usually it’s that managers got trained but aren’t actually using it themselves.
Why it happens: Training created knowledge but not behavior change. If managers don’t use it visibly and regularly, employees won’t either. Doesn’t matter how good the training was.
I learned this the hard way. Worked with an organization that said they had good foundational understanding. I took them at their word. Turned out they were much earlier than they assessed themselves to be. We had to go back and rebuild basics.
Recovery steps:
Assess honestly - do managers actually use it daily?
If not, why? Lack of practical examples? Too abstract?
Run follow-up sessions focused purely on application to their actual work
Have managers share specific use cases in leadership meetings
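For the honest assessment in the first recovery step, a usage export helps more than gut feeling. A minimal sketch, assuming a hypothetical export from your tool’s admin console that maps each licensed user to their most recent activity date:

```python
from datetime import date, timedelta

def weekly_active_rate(last_active: dict[str, date], today: date) -> float:
    """Share of licensed users active in the last 7 days.

    `last_active` maps each licensed user to their most recent
    AI-tool activity date (hypothetical admin-console export).
    """
    if not last_active:
        return 0.0
    cutoff = today - timedelta(days=7)
    active = sum(1 for d in last_active.values() if d >= cutoff)
    return active / len(last_active)
```

Run it separately for the manager group and the employee group: if manager usage is the lower number, you’ve found your foundation problem.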
If adoption is low, you’re discovering that your foundation wasn’t as solid as you thought. That’s valuable information. Better to know now and fix it than pretend it’s working.
Your Path Forward
Build your competence systematically in this sequence:
Choose your tool intentionally - Evaluate, don’t just take the default
Design your tier system - Managers first, always
Invest real time - Minimum 3 hours, better is 6+ over time
Calculate ROI - It’s easier than you think (0.5-2% efficiency per employee)
Get training that’s good - Set clear requirements for the trainer.
Each step builds on the previous. Skip step 1 and you’re training on the wrong tool. Skip step 2 and you miss the critical adoption lever. Skip step 3 and training is too shallow to stick. Skip step 4 and you can’t justify continued investment. Skip step 5 and you waste everyone’s time.
The foundation from Part 1 means nothing if people don’t actually know how to use the tools. This is where adoption lives or dies.
The next part will cover more about managers and their importance in all of this. We’ve alluded to parts of this already today, but next week will go even more in-depth and include a couple of practical workshops you can run.
Hit reply if you’re stuck on any of these steps!