"Treat AI as if it were human because, in many ways, it behaves like one."
Welcome to the 197 new FullStack HR readers who joined last week!
The ambition is high with this newsletter - to be the guide for organisations through the AI transformation that is happening!
If you aren't yet subscribed, join the other like-minded people in this free newsletter by subscribing below:
This article is part of a paid collaboration with Sana.ai, which made my attendance possible. Thank you, Sana!
Unleash holds a special place for me. My very first professional HR conference, almost exactly 15 years ago, was the event that would eventually become what we now know as Unleash. Back then it had a different name. Back then it was in Amsterdam. And back then, the world of HR looked nothing like it does today.
This year, I went back to Las Vegas. And I came home with a lot to think about.
Why I went
The primary reason I chose to travel halfway across the world was one name: Ethan Mollick.
If you have been reading this newsletter for any amount of time, you know that Mollick has been a massive inspiration for me and for many others in this space. The reason is simple. He is one of a very, very small number of people who talk about AI from an organizational perspective. What he does so well, and what I have genuinely tried to emulate in my own work, is embrace the fact that very few of us actually know what is going on. He said it himself on stage: “If you’re doing imposter syndrome right now, you’re in the exact right place. We are all imposters.” Things are moving too fast to follow everything. What you can do is try, experiment, and apply it in your own context. And that is exactly what he does, brilliantly.
Amy Edmondson was the other name on my list flying over the pond. Her opening keynote on Day Two delivered what you would expect from someone who has spent decades studying psychological safety. And her framework around “intelligent failure,” the idea that organizations should deliberately create conditions where the right kind of failure can happen, felt like it was written for this exact moment. Every organization trying to figure out AI right now needs to hear that.
The AI Summit
The conference started with the AI Summit, organized by UKG and led by Jason Averbook and Jess Von Bank.
The AI Summit was almost unfairly good. It was practical, concrete, and forward-looking in a way that set the tone for the entire rest of the conference. Did it contain a lot that was completely new for someone like me, who spends essentially all waking hours trying to understand this space? Maybe not. But even for someone deep in the weeds, there is always something new to pick up when you are in the room with other people thinking hard about the same problems. That is the real value of these events. Always has been.
The key message from both Jason and Jess was this: read the signals. Pay attention to where things are actually heading, not where you wish they were heading.
Peter Hinssen stole the show? Yes.
Then came Day Two. And the highlight, by far, was Peter Hinssen.
I have seen Hinssen speak three times now. There is a lot happening when he talks, and there is a lot of dopamine when you listen to him. But this time, he was something else entirely. I would put his keynote in my top three talks I have ever seen on a stage. (That is not something I say lightly!)
Why was it so good? Because he puts things in perspective. He weaves together the technical realities of what is coming with the organizational implications of what it means. He makes the complex feel simple to follow. His concept of the “Never Normal,” that we should stop waiting for things to calm down because they never will, landed with particular force this year. He framed the current moment through four lenses: hardcore geopolitics, extreme capitalism, zero latency, and talent singularity. His conclusion was clear. AI-first does not mean people-last, but the waves of disruption are only going to get higher.
Budget planning is a yearly, sadistic corporate ritual where people put fake news in Excel that is consolidated into something which actually never works. - Peter Hinssen
For me, Hinssen and Mollick together represent the gold standard. They combine technical depth with organizational realism, and they are honest about what they do not know. One line from Hinssen has stayed with me since I left the room: “Digital was just the appetizer. AI is the main course.”
But Mollick?
Ok, so Hinssen was a highlight. But what about Mollick then? He did not disappoint.
And he did not just talk about where AI is heading. He gave the room a practical framework for what to do about it, and his model has three components: Leadership, Lab, and Crowd.
The Crowd is everyone in your organization who is already using AI. And they are using it, whether you know it or not. Mollick was blunt about this (rightfully so!). Almost half of employees are already using AI at work. Many report tripling their output on certain tasks. But none of that productivity is showing up on your dashboards. Why? Because your AI policies have made it scary to admit. Because people know what “efficiency gains” means in corporate language. Because there is no incentive to tell you. As Mollick put it: “The expertise of how to use AI is in your teams already.” The question is whether you are creating the conditions for that expertise to surface.
The Lab is not an IT function. It is a small, dedicated team that works on AI full-time. Their job is three things. First, take what the crowd discovers and ship it to the rest of the organization. Someone builds a prompt that saves six hours a week? That goes out to everyone, that week. Second, build benchmarks. How good is AI in your specific context? You will not know unless you test it yourself. No outside evaluation will tell you. Third, build the impossible things. Experiments that might not work. Provocations. The stuff that prepares you for what is coming next.
Leadership is the third piece, and Mollick was clear that he has not seen a single company succeed with AI without all three in place.
AI is a weird alien mind, one that isn't sentient but can fake it remarkably well. It is trained on the vast archives of human knowledge, and also on the backs of low-paid workers. - Ethan Mollick
One more thing he said that landed hard: “Your R&D team is HR.” Not IT. Not a vendor. Not a consultant. The people who will figure out how to use AI in your organization are the people who already work there. This is not something you can outsource. And the models are universally available. There is no secret AI. Everyone has the same tools. The difference is what you do with them.
And yes, we got to meet him, and yes, I was starstruck.
All good?
There is a palpable uncertainty in the HR community right now. You could feel it in the hallways. Everyone is looking at everyone else. What are you doing? What are you doing? What are you doing? There is a restlessness, almost an anxiety. Where is this going to land?
I want to be careful about how I say this. I know it might sound presumptuous. But I see what is happening with AI, and sometimes I feel like I am losing my mind because the gap between what is already possible and what most organizations are actually doing is enormous. So it was… reassuring to hear people like Hinssen, Mollick, Averbook, and Von Bank confirming the same thing. This is not a phase; there are no signals suggesting that AI disruption is going to slow down or go away.
That confirmation matters more than I expected it to, and it connects directly to something I experienced that same week.
Snap back to reality.
The same week I was at Unleash, I was also working with an American client. I cannot name the organization, but I can tell you what happened. We used Claude Cowork extensively, live, inside a real HR operations environment. The work involved processes that most HR professionals would recognize immediately. Document handling, policy reviews and data consolidation across systems. The kind of recurring, high-volume knowledge work that fills up calendars and drains energy (and motivation).
In roughly 30-45 minutes of focused work with Cowork, I estimate we identified efficiency gains of hundreds of hours per month. Not theoretical gains. Real, measurable, immediate time savings on actual work that actual people were doing.
Cowork is a magical tool. It truly is.
And that’s yet another signal. And by the way, the signals here all point in the same direction. Microsoft is working on their own version of this kind of tool. There are credible reports that OpenAI is building something similar. Anthropic already has Cowork in production. This is coming. Broadly. In ways that are compliant and secure.
And when it does arrive at scale, knowledge work as we know it will change. Not might change. Will change. The person who is not prepared for this will struggle. I am more and more convinced of that with every passing day. The signal is becoming clearer and clearer.
Ok, so now what?
Does the work still require humans? Absolutely. Humans are critical. Validation matters. Judgment matters. Context matters. But the tasks themselves, the actual work of producing, analyzing, summarizing, drafting, processing? Machines can already do most of it. Right now, today. If you let them.
The limitation is not technology. Both Hinssen and Mollick made this point forcefully. The limitation is us. It is organizational inertia. It is perception. It is the comfort of waiting for the dust to settle. Or as Hinssen put it: “Yesterday’s work is the silent killer of organizations.” We have to become yesterday’s work hunters in the age of AI.
The dust is not going to settle.
That, more than anything, is what I took home from Las Vegas.