Welcome to FullStack HR, and an extra welcome to the 49 people who have signed up since last edition.
If you haven’t subscribed yet, join 9,200+ smart, curious, and like-minded HR people here:
Today’s FullStack HR is brought to you by… Super!
Monday morning: Slack, emails, policy clarifications - sound familiar?
Super instantly answers employee questions by connecting to your tools like Slack, Drive, and Notion.
No more digging or repeating explanations.
Ask Super anything, from policies to analytics, and it delivers precise answers immediately.
Your always-on AI HR assistant.
Happy Tuesday!
Hope your week's off to a great start!
Here's something I've been thinking about lately. It's not exactly about self-driving cars, but the analogy fits to some extent.
In 2024 alone, traffic accidents took the lives of 210 people in Sweden, 39,345 in the US, and 1,607 in the UK. Every one of these deaths is tragic, yet we're still comfortable letting humans drive. Meanwhile, when it comes to new technologies like autonomous vehicles, we set the bar incredibly high, often expecting near-perfection.
At what point do we decide technology is "good enough" to replace humans, flaws and all? And why do we consistently place much higher demands on tech solutions than we do on ourselves?
I'd love your thoughts as we dive deeper into this idea today!
Listen to the article, powered by ElevenLabs. Try it for free here.
Imagine two identical CVs sailing into a recruiter's inbox.
One signed "Emily," the other "Lakisha." Same education, same experience, same spark, yet Emily lands 50% more callbacks (American Economic Association, NPR).
Switch continents, and the pattern repeats. In Canada, an Indian- or Chinese-sounding last name reduces interview chances by about 28% (NPR). In France, resumes with North-African names consistently find their way to the bottom of the pile even when credentials match exactly (ResearchGate).
The pattern persists within organizations. Gallup research reveals we misjudge leadership talent eight out of ten times, handing managerial roles to the wrong individuals. Result: only 21% of global employees feel genuinely energized at work (Gallup.com). The remaining 79%? They're coasting or quietly planning an exit.
Strategic decision-making is equally flawed. McKinsey analyzed 1,600 multibusiness firms and found capital budgets remained virtually unchanged year-over-year (correlation: 0.92), despite shifting market conditions (McKinsey & Company). Cognitive biases such as confirmation bias, the sunk-cost fallacy, and optimism bias push us toward autopilot, often steering us off course.
Put it all together: names influence career opportunities; gut instinct guides promotions; comfort zones dominate strategic decisions.
Still, we champion keeping "a human in the loop."
But is that loop always our safety net, or does it sometimes act as the bottleneck? Given humanity's built-in biases, short attention spans, and random leadership selection odds, is it ethical to demand perfection from AI while accepting frequent "human factor" errors?
When will this shift occur?
At what point will it become unethical to let humans continue performing tasks better suited for AI?
We hear it over and over again: “we need to keep the human in the loop”.
The EU AI Act is centered on it.
I hear it in webinars.
People are talking about it on LinkedIn.
Why?
Self-comfort, fear of losing control, resistance to change, or perhaps uncertainty about trusting technology entirely?
What if the focus should be to keep the human OUT of the loop?
Perhaps true bravery and fairness mean acknowledging that the most accident-prone algorithm in the room still operates on just 1.3 kilograms of grey matter.
Very simple: you cannot take humans out of the loop as long as AIs and algorithms cannot be held accountable for their decisions. It doesn't matter if a car is fully self-driving or a recruiter AI is fully autonomous. What happens when the machine's decision leads to a scandal, damage, or death? Which humans are then accountable? Those humans are *still* in the loop.
Humans are only really out of the loop when nobody is accountable for what the AI does.
I love your bold take! It’s almost like we have anchored to this safe space of “human in the loop” when we know that in the long run it’s going to be a different narrative. I agree with the comment above that AI is not ready to make decisions and has a long way to go! But let’s think about this: Tesla’s goal with self-driving cars is clear, no humans in the loop! That gives me a hint that I have missed my chance of pursuing a driver’s career in this lifetime and that I should think of something else. So it at least prepares me for an alternative career!