AI job headlines have you worried? Here's your 5-step action plan.
From strategy mapping to talent pipeline protection - what HR leaders actually need to do now.
Welcome to FullStack HR, and an extra welcome to the 33 people who have signed up since the last edition.
If you haven’t yet subscribed, join the 9500+ smart, curious and like-minded HR people by subscribing here:
Happy Tuesday!
Thank you for all the messages about my career move - your support means everything!
If today's piece resonates with you, please share it with your team. This is exactly the kind of strategic conversation HR leaders need to be having right now, and I'd love to hear how your organization is thinking about these scenarios.
Also, I'm running a Swedish survey on the State of AI in HR and would love your input. It takes 5 minutes and helps us all understand what's actually happening in the field versus what the headlines suggest.
Results will be shared in a webinar on the 27th of June.
Let's dive in.
In the discussion about how generative AI affects work, two strong narratives are currently clashing.
On one side stand lab leaders like Dario Amodei, Sam Altman, Demis Hassabis and Sergey Brin. They build the models and warn that recent graduates risk lower employability when systems like Claude 4, Gemini Pro, and OpenAI o3 can perform qualified tasks at near-zero marginal cost. Amodei, who is CEO of Anthropic, former VP of Research at OpenAI and co-architect behind GPT-2 and GPT-3, claims we could see a "white-collar bloodbath" with up to 20 percent unemployment. Brin and Hassabis went even further and said Google plans to be first to AGI before 2030 - which, if it happens, would effectively kill a lot of entry-level jobs.
On the other side, we have profiles like Josh Bersin. In his recent podcast Why AI Is a Job Creator, Not a Job Destroyer, he dismisses Amodei's scenario as "completely incorrect" and argues that AI will primarily enhance work through the so-called "Superworker effect."
I can see both sides of the argument here (and I always think you should try to see both sides of the argument when it comes to AI.)
But I also think that most of the organizations Bersin interviews have only had time to try tools like Microsoft Copilot. For them, productivity gains are still modest and the threat to jobs is hard to see. When the same companies test frontier models in secure workflows, however, 30-plus percent efficiency gains quickly appear, along with direct pressure on administrative roles - something I'm seeing firsthand in pilot projects.
So here we once again have two conflicting data points, two different views:
Capability: Systems like Deep Research and Operator from OpenAI, Deep Research from Google, or ManusAI can already summarize hundred-page reports, create finished presentation materials, and write production code.
Adoption: The majority of HR teams are not yet fully utilizing these capabilities, making the labor market effect invisible for now.
What do we make of this? I think Jason Averbook and Jess Von Bank put it elegantly and with nuance when they said that "the entrance doesn't close, but the door moves forward." I like that analogy!
Entry-level tasks don't necessarily disappear; they just require more prior knowledge and tool familiarity.
For organizations, this means we must move with the door.
What does this mean in practice?
Ground yourself in strategy first
Map out what kind of strategy you're aiming for and plan accordingly; don't blindly jump on the AI hype train or the agent hype train. What will this mean for you? How does it support the business outcomes you're looking for, and how will you work to achieve them? This is a vastly overlooked step. If you do it well, then whether it's agents taking jobs or humans doing the work matters less - you'll have a defensible plan to work from.
Map low-hanging fruit
Identify tasks where the models already match or exceed junior staff: report summaries, first drafts of policies, data preparation for analysis. Automate those and free up time for more complex problem-solving.
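To make the "report summaries" example concrete: below is a minimal sketch of what automating that first pass could look like, assuming the OpenAI Python SDK, an API key in your environment and a folder of plain-text reports. The model name and the prompt are placeholders I've made up for illustration, not a recommendation.

```python
# Minimal sketch: batch-summarize long reports so a human can review, not retype.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set, and the
# model name below is a placeholder you would swap for whatever your organization
# has approved.
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def summarize_report(path: Path, max_words: int = 300) -> str:
    """Return a short executive summary of a plain-text report."""
    text = path.read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You summarize internal HR reports for senior leaders.",
            },
            {
                "role": "user",
                "content": f"Summarize the report below in at most {max_words} words, "
                f"ending with three bullet points of recommended actions.\n\n{text}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Summarize every plain-text report in a local folder.
    for report in Path("reports").glob("*.txt"):
        print(f"--- {report.name} ---")
        print(summarize_report(report))
```

In practice you would of course run something like this inside whatever secure workflow your IT and legal teams have signed off on, with your own model and guardrails - the point is how little plumbing the "junior first draft" actually requires.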
Protect the talent pipeline
When simple tasks disappear, recent graduates need to step into value-creating roles faster. Design trainee programs where AI-supported tools are an everyday reality from day one and where coaching is provided by more experienced colleagues.
Build AI literacy as core competency
Make prompting and model understanding the same hygiene factor as Office skills. This benefits both career security and innovation capacity.
Plan from two scenarios
Plot what both of these would mean for your organization:
Moderate automation (Bersin): AI boosts productivity but the net effect is more jobs.
High automation (Amodei): major shifts in headcount within five years. Plan recruitment, transition, and competency budgets so you can pivot between both.
If you do scenario planning for both of these, you've started to exercise your thinking about what could happen in the more modest case as well as in the one where we truly do achieve AGI.
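If it helps to put rough numbers on the exercise, here is a deliberately simple, hypothetical sketch of the two scenarios. Every figure in it is invented, and the only point is to show how differently the recruitment maths behaves in the two worlds.

```python
# Toy scenario planning: moderate vs. high automation and what it could mean for
# hiring. All numbers are made up for illustration - replace them with your own.


def projected_hiring(current_headcount: int,
                     annual_attrition: float,
                     automation_per_year: float,
                     years: int = 5) -> list[int]:
    """Rough yearly hiring need if a share of roles is automated away each year
    while normal attrition continues."""
    headcount = current_headcount
    hires_per_year = []
    for _ in range(years):
        target = headcount * (1 - automation_per_year)    # roles still needed next year
        leavers = headcount * annual_attrition            # people who leave anyway
        hires = max(0.0, target - (headcount - leavers))  # backfill only up to target
        hires_per_year.append(round(hires))
        headcount = target
    return hires_per_year


# Moderate automation (Bersin-style): roughly 2% of today's roles automated per year.
print("Moderate:", projected_hiring(400, annual_attrition=0.10, automation_per_year=0.02))
# High automation (Amodei-style): roughly 10% of today's roles automated per year.
print("High:    ", projected_hiring(400, annual_attrition=0.10, automation_per_year=0.10))
```

Swap in your own headcount, attrition and automation assumptions and the exercise turns into a concrete recruitment and transition budget conversation instead of a guess.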
I think the truth probably lies somewhere between the two.
Some jobs disappear, others emerge, and the outcome is determined by how quickly organizations, education systems, and individuals move with the shifted door.
By testing frontier models hands-on, daring to measure the effects, and simultaneously investing in people's ability to work with AI, HR leaders can take control of the outcome instead of guessing at it - and that’s the most important thing here.
The extremists on either side always have some incentive to push their particular views. Indeed, the truth is often found in the nuanced but boring middle. But "it depends" doesn't make for great headlines.