Welcome to FullStack HR, and an extra welcome to the 79 people who have signed up since the last edition.
If you haven’t yet subscribed, join the 9100+ smart, curious and like-minded HR people by subscribing here:
Happy Thursday,
Easter was, as always, spent on the ski slopes; it has been a tradition for as long as I can remember. Today's article is what I've been thinking about in the lift and in between eating candy.
I would be very happy if you took the time to share today's article, because I think this topic needs discussion. Share it with your network, feel free to tag me as well, and I'd be happy to engage in the conversations!
There are most likely perspectives that I'm missing here.
Let’s get to it.
Last week, o3 was released, and I highlighted it as the top article in Friday's newsletter. At the time, I had only briefly tested it, but over Easter I had the opportunity to spend more time exploring the model (it can dive into genealogy, which was a rabbit hole…), and o3 clearly feels different from other models.
It hallucinates less, takes more time to deliberate, searches the web, and explains its reasoning more transparently.
My go-to test has always been to have AI generate schedules compliant with Sweden's complex working-hours legislation, something no previous model has managed on the first attempt. Yet o3 nailed it immediately, prompting me (pun intended) to reflect on what this means for the future of knowledge work.
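For the curious, this is roughly what my test looks like in practice. Here's a minimal sketch using OpenAI's Python SDK, assuming you have an API key with access to o3; the prompt wording and the constraints listed are my own simplified illustration of the rules, not a full encoding of the legislation:

```python
# Minimal sketch of the scheduling test, assuming `pip install openai`,
# an OPENAI_API_KEY in the environment, and API access to the o3 model.
# The constraints below are an illustrative subset of Sweden's
# working-hours rules (Arbetstidslagen), not the full legislation.
from openai import OpenAI

client = OpenAI()

prompt = """Create a two-week schedule for 6 employees in a 24/7 operation.
It must comply with Swedish working-hours law (Arbetstidslagen), including:
- at most 40 hours of regular working time per week on average
- at least 11 consecutive hours of daily rest per 24-hour period
- at least 36 consecutive hours of weekly rest per 7-day period
Explain how the schedule satisfies each constraint."""

response = client.chat.completions.create(
    model="o3",  # swap in another model here to compare how it handles the task
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The interesting part isn't the code, it's the comparison: run the same prompt against older models and check whether the rest-period constraints actually hold in the output.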
And yes, you've heard me repeat for a while now that we're experiencing a transformation of what knowledge work means. Models like o3 are driving this even further.
The other day I listened to a podcast where they described the current state as AGI "if you squint a bit." That's precisely how I feel. Had someone described today's o3 a few years ago, we would undoubtedly have labeled it AGI.
Perspectives are changing much faster than our organizations can adapt.
What does this mean in practice? It means the very idea of a job is being challenged. Our traditional career frameworks and organizational structures are built on the premise that knowledge is an expensive and scarce resource. But with the rapid advancement of AI and models like o3 and Gemini 2.5 Pro, this assumption is becoming increasingly invalid, and the cost of knowledge is swiftly approaching zero.
How much longer will we continue recruiting and planning based on outdated assumptions: that knowledge is always precious, specialized, and scarce, that skill development is slow, and that human roles are static, when AI is making these premises completely irrelevant?
It's no longer about whether AI can perform specific tasks, but rather about what happens when entire professions, roles, and job titles lose their relevance.
I've long criticized the phrase, "AI won't take your job, but someone using AI will."
It's one of the worst phrases I know.
I'm not alone in this, and Sangeet Paul Choudary articulated this far better than I ever could in his insightful article, "The Many Fallacies of 'AI Won’t Take Your Job'" (yes, you must read it):
The fallacy of static jobs persists because it’s cognitively efficient. It offers a clear anchor in a shifting environment. Job titles serve as focal points. They make organizational complexity manageable. But it’s grossly misleading. It encourages workers to optimize for role continuity when they should be preparing for role redefinition.
Here lies the heart of the challenge: AI radically changes how we perceive jobs and knowledge. Traditionally, navigating work life was feasible through experience and routine, but AI now forces us to reconsider what constitutes valuable competence.
Knowledge is often divided into four categories to identify which aspects are more easily automated by AI and which still require human expertise: declarative (factual), procedural (practical skills), conditional (knowing when and why to apply them), and tacit (implicit). With AI's rapid development, declarative and procedural knowledge, facts and practical skills, are quickly becoming automated.
This primarily leaves tacit knowledge, the intuitive, experience-based competence arising from complex human interactions. Why is this particularly challenging for AI? Tacit knowledge relies heavily on contextual understanding, empathy, intuition, and the ability to navigate ambiguous or entirely new situations swiftly, qualities that are still difficult to encode into algorithms (for now).
So, is everything fine then? Hardly.
Relying on the belief that tacit knowledge will always be shielded from AI would be a fatal mistake. Former OpenAI researcher Daniel Kokotajlo outlined this as early as 2021 in his article "What 2026 Looks Like," describing a scenario in which AI rapidly reshapes our world. Now, he and other researchers have published a new interactive future report titled "AI 2027," using trend analyses, simulations, and expert insights to show that AI will likely start replacing even complex, intuitive tasks at scale by 2026.
The report argues that the impact of this superhuman AI will be enormous, perhaps surpassing that of the Industrial Revolution. Organizations must start now to consider strategically which tasks AI should handle and which remain best suited for humans. What kind of organization do you want to be in an AI-driven world? How do you build organizations capable of clearly determining when and how AI should replace humans, and when human traits are indispensable?
In practical terms, this means organizations must immediately define clear strategies for AI usage while focusing on enhancing human skills such as leadership, empathy, creative problem-solving, and the ability to handle ambiguity.
Organizations prioritizing adaptability, curiosity, and the ability to actively extract and evaluate knowledge from AI models will gain a massive competitive advantage.
As knowledge becomes increasingly available through models like o3, models that hallucinate less and less, the decisive factor will be the ability to effectively understand, filter, and apply this knowledge.
It's less about possessing all knowledge and more about skillfully navigating the intersection between human insight and machine learning. Navigating and creating value at this human-machine boundary represents an enormous potential leverage point right now, more impactful than any other investment. Therefore, create training and organizational structures that actively support and enhance these specific capabilities.
When I recently ran an upskilling session with a municipality in southern Sweden, someone asked, "But what should WE then do?", and that is precisely the question every organization should be asking immediately. The answer isn't just learning how to use AI tools (even though that might help you understand what's possible and what isn't), but questioning the very foundations of your organization. Why do you exist as an organization? What are you genuinely trying to achieve when knowledge is no longer your primary competitive advantage? How long do you have left? According to the forecasts in AI 2027, you might only have a couple of years, or even less, to act before the landscape radically changes.
So you may have less time than you think before the very foundations of your organization shift beneath you. Leadership today isn't about predicting the future perfectly; it's about preparing courageously for rapid change. Organizations that hesitate, clinging to outdated strategies, will eventually find themselves irrelevant.
AI won't pause politely for your five-year plans; it demands decisive action right now.