Welcome to FullStack HR, and an extra welcome to the 48 people who have signed up since last week. If you haven’t yet subscribed, join the 10,000+ smart, curious leaders by subscribing here:
Happy Tuesday,
Spending time with the family in Paris while writing this. Missed Unleash last week, but got to see the Eiffel Tower. My plan was to continue the AI adoption series, but this felt more urgent. So AI adoption part three will be back next week; instead, I’m going to ponder three scenarios for AI at work.
Most companies are planning like nothing will change while the ground is already shifting under them. That’s a problem, to say the least.
And yes, we don’t know exactly how this plays out. Anyone telling you they know for certain is lying. But we can map out the likely scenarios and what they mean for how you and organizations should act today.
I’m seeing organizations frozen because they can’t predict the future. But you don’t need perfect prediction. You need to understand the range of possibilities and plan accordingly.
Three scenarios. Different likelihoods. Very different implications for work. Let’s get to it.
Scenario 1: Everything Pauses
What happens:
Development stops. Nothing new gets built. We keep what we have today: Copilot, ChatGPT, Claude and Gemini at current capability levels. No improvements. The bubble narrative wins, it pops, we pick up the pieces and move on.
The models we have now are all we get.
Impact on work:
10-20% efficiency gains over the medium to long term. Organizations slowly catch up, some faster than others. People get time to adapt, like they did with Excel, Word, and email. Gradual adjustment.
Unemployment hits some white-collar work where AI can streamline work. Some new jobs emerge to replace them. Nothing dramatic. Relatively smooth transition.
Why this won’t happen:
Look at the investment. Microsoft, Google, Meta, Anthropic: billions flowing in. Compute scaling continues. New capabilities ship monthly. The infrastructure buildout is massive and accelerating.
Could it slow? Sure. Could it stop completely? Extremely unlikely given current momentum and competitive dynamics.
If you’re planning for this:
You’re planning to lose. This is not going to happen, and even if it did, even if development slows, we already have enough capability to transform significant portions of knowledge work. The question isn’t whether change happens, it’s how fast.
Scenario 2: Continued Scaling
What happens:
LLMs continue improving, maybe somewhat slower than the most optimistic predictions. Context windows expand. The amount of work models can handle increases. In the near term, genAI can work for longer stretches, approaching a full workday on complex tasks.
Agents become real parts of the workforce. Not science fiction: actual deployments taking on sub-tasks and executing them with human-level precision in defined domains.
Impact on work:
Organizations that don’t actively work to understand and implement AI slowly become irrelevant. Not overnight, but gradually. Because competitors are streamlining processes, solving problems faster, and creating more value.
Western companies get hit particularly hard. Why? We moved physical production to China and Asia. We have massive service sectors. Exactly what AI targets first.
We can’t compete on cost without AI because we’re not fast enough at creating value versus those using it. The productivity differential becomes impossible to bridge.
Jobs get eliminated and aren’t replaced at the same rate. Not because of technical limitations, but because we failed to embrace the change early enough to transition smoothly. Still, it’s not so massive that mass unemployment hits. It’s a difficult transition, not a collapse.
What this means for you:
If you’re not actively building competence and changing processes now, you’re banking on being able to catch up later when competitive pressure forces your hand. That’s possible. But it’s playing catch-up while others compound advantages.
The organizations winning here aren’t the ones with the fanciest AI strategy. They’re the ones actually using it in production. Measuring results. Iterating fast.
This is the scenario to plan for. Not because it’s certain, but because it’s likely and actionable.
You can start working towards this today. If you haven’t started, now is the time.
Scenario 3: Rapid AGI-Like Capabilities
What happens:
We get models that can actually do most cognitive work at a fraction of current cost. Not “eventually,” but soon. Within years, not decades.
This isn’t about consciousness or sentience. It’s about economic substitution. Can the system do the work cheaper, faster, and reliably enough to replace expensive human labor? If yes, it will.
Impact on work:
Most white-collar workers face replacement or severe wage pressure. Mass unemployment in service sectors. General chaos in labor markets while policy scrambles to catch up but can’t move fast enough.
Organizations that don’t adopt the technology can’t compete with those that do. The productivity gap is too large. Small teams create companies that challenge firms with 10,000-30,000 employees because they rethink organization and production from the ground up.
Broad downward wage pressure as human cognitive labor competes with near-free digital labor.
Why this matters even if you don’t think it’s likely:
Because if there’s even a 20-30% chance of this scenario, you need to stress test against it. What happens to your organization if this unfolds in 3-5 years? Are you positioned to adapt? Or will you be caught completely flat-footed?
The organizations that survive this scenario are the ones that built competence and flexibility early. The ones that understand how to work with AI, how to reorganize around it, and how to identify where humans still add value.
You can’t build that capability overnight when the pressure hits.
What Most Organizations Are Actually Doing (And Why It’s Wrong)
Here’s what I see happening: most organizations are acting like Scenario 1 while claiming they’re planning for Scenario 2. They’re moving slowly. Running pilots that never go to production. Waiting for “the right moment” to invest seriously.
That’s a losing strategy for both Scenario 2 and Scenario 3.
If Scenario 2 unfolds (most likely), you’re giving competitors years of compounding advantage while you wait for certainty.
If Scenario 3 unfolds (less likely but not impossible), you have zero foundation to adapt quickly when pressure hits.
The only scenario where cautious waiting works is Scenario 1. And that’s the least likely scenario given current evidence. Let me say it again: Scenario 3 is more likely to happen than Scenario 1.
How to Actually Plan
Use Scenario 3 as your stress test. Ask: If rapid AGI-like capabilities arrive in 3-5 years, what breaks in our organization? Where are we completely unprepared? What would we need to have built by then to adapt?
Then work backwards. What needs to be true in 2 years? In 1 year? In 6 months?
Use Scenario 2 as your operating plan. This is your base case. Continued scaling. Agents in production. Gradual but real displacement of cognitive tasks.
Build competence now. Change processes now. Measure results now. Not because you’re certain Scenario 2 happens exactly as described, but because it’s the most likely path and the actions position you well for either acceleration (Scenario 3) or slowdown (closer to Scenario 1).
Stop planning for Scenario 1.
The comfortable fiction where you have years to adjust slowly. That ship has sailed. Even if development slows dramatically tomorrow, we already have enough capability to transform huge portions of knowledge work.
Bottom Line
The organizations that win aren’t the ones that predict the future perfectly. They’re the ones that position themselves to adapt quickly regardless of which scenario unfolds.
That requires:
- Building real competence now, not pilots that go nowhere
- Changing actual processes, not running innovation theater
- Measuring real results, not engagement metrics
- Moving faster than feels comfortable
You don’t need to know exactly which scenario happens. You need to be ready for the range.
Most organizations aren’t. That’s your opportunity if you move now. And your vulnerability if you don’t.
The question isn’t whether AI transforms work. It’s whether you’re part of that transformation or a casualty of it.