What 50+ HR leaders are saying about their AI reality
The universal feeling of being "late" to the AI party
Welcome to FullStack HR, and an extra welcome to the 12 people who have signed up since last edition.
If you haven’t yet subscribed, join the 9600+ smart, curious and like-minded future of work people by subscribing here:
One of the most common problems I see organisations struggle with is keeping up with what’s happening in the AI space.
The AI & HR Monthly Update. Each month we meet live for 60 minutes. I unpack the latest tools, cases, and risks. You leave with clear actions for your people agenda.
Past attendees tell me the sessions cut through the noise and helped them internalise the gains of AI faster.
Pick six or ten meetings, starting in September. The price is only $395 per month.
Unlimited seats (!) mean your whole HR team and curious managers can join at no extra cost.
Recordings and live Q&A included.
No travel, just one hour online.
Only three companies can join this batch.
Deadline is 15 Aug.
First come, first served.
Happy Tuesday,
Let’s round off the summer with three takeaways on the state of AI adoption right now. Being out and about in this crazy AI landscape lets me talk to a lot of people. This is a broad summary based on those conversations and the organizations I’ve worked with during the first six months of the year, from small municipalities to large global companies.
Everyone feels behind
Almost everyone I talk to has a constant feeling of being “late.” That isn’t strange, given how fast things move, and yes, most organizations are not AI-native (very few are), but in general they sit at roughly the same level.
Many feel uncertain about what to do, how to do it, and when to do it, and most still stick their heads in the sand on AI literacy training, assuming people will figure it out. Even organizations that see themselves at the forefront sometimes lack a basic understanding of how LLMs work and do not use GenAI to its full potential, usually because of limited training.
My two cents: the organisations that figure out how to do AI literacy training at scale, and connect it to the point below, will come out of this change stronger.
The benefits of AI are tied to the individual, not the organization.
Most white-collar workers use AI in some way, but the gains today go to the individual, not the organization. If a task that once took me four hours now takes 20 minutes, the three hours and 40 minutes saved go back to me, not the organization. That looks different for different people: for some, it means a longer lunch break; for others, it means taking on more tasks and becoming high performers. But isn’t that good for the org?
It is, yet if you want an AI-first organization, you need people to share how they achieve this. Many don’t because sharing removes their edge and risks their high-performer status. My manager still thinks preparing a presentation takes me four hours and is amazed that I can do that, talk to managers, and onboard a new colleague all in those same four hours. I get rewarded for it.
I think the big shift we’ll see in the coming months is organizations trying to regain control and bring these new ways of working, and the wins that come with them, back to the organizational level. Once again, a crucial part of this is training and education, especially at the upper levels of an organisation and for managers. They need the ability to lead and delegate work in this new reality where people have access to the models.
Outdated information leads to outdated decisions.
GPT-4o powers Copilot and the free version of ChatGPT, so it’s a model most people have used. But here’s the catch: 4o is not the most capable model. If you judge AI by Copilot or free ChatGPT, you miss potential. ChatGPT o3 and Gemini 2.5 Pro are more capable, knowledgeable, and reliable. They still hallucinate, but far less than before.
Keeping up with models and putting them to good use takes constant work (hence my offer above), yet it’s vital. More important: don’t lock yourself into just one model. Balancing a preferred model with testing others is tricky. Organizations at the front now run several models at once, or provide their own interface where users can choose which model to use. It is genuinely hard, because the old way of buying software and services was to pick one product and stick with it. That mindset doesn’t serve us well in the age of GenAI, where models change constantly.
There’s also a fairly widespread belief that these models are “unsafe,” and many people operate under that assumption. But if you have a Microsoft Copilot license from work, it’s as safe as the rest of your Microsoft suite.
The discussion resembles the one we had when the internet was new and your parents told you that the minute you submitted your credit card details online, they would be stolen. Over time, we learned what was safe and what wasn’t.
I suspect we will learn here as well, over time.
That’s the summary, and I hope it gives you some food for thought!