6 Comments
Jurgen Appelo

Very simple: You cannot take humans out of the loop as long as AIs and algorithms cannot be made accountable for their decisions. It doesn't matter whether a car is fully self-driving or a recruiter AI is fully autonomous. What happens when the machine's decision leads to scandal, damage, or death? Which humans are then accountable? Those humans are *still* in the loop.

Humans are only really out of the loop when nobody is accountable for what the AI does.

Priya Tahiliani

I love your bold take! It's almost like we've anchored to this safe space of "human in the loop" when we know that in the long run it's going to be a different narrative. I agree with the comment above too that AI is not ready to make decisions and there's a long way to go! But let's think about this. Tesla's goal with self-driving cars is clear: no humans in the loop! That gives me a hint that I've missed my chance at a driving career in this lifetime and should think of something else! So at least it prepares me to pursue an alternative career!

Johannes Sundlo

Love this take! Yes, "human in the loop" can feel like a mental safety net, even when we know the long-term trajectory is different. The Tesla example is perfect. No one's designing for partial control; it's fully human-out-of-the-loop thinking.

And you’re right: even if AI isn’t ready today, we should start preparing for what happens when it is. Better to pivot careers early than be caught steering a car that no longer has a wheel.

Hernan Chiosso

@Johannes, I am one of the "keep people in the loop" advocates. And while I acknowledge that people, even with the best intentions, can be flawed, it's still the case that:

- AI decisions are not (yet) fully auditable or transparent, and AI cannot self-audit (recent Anthropic research has shown that LLMs generate after-the-fact rationalizations to please the user).

- AI cannot take ownership of or accountability for its decisions (you cannot sue an AI for malpractice or discrimination).

- AI, capable as it is at some things, is not yet intelligent, and LLMs may never get there (the whole "a probabilistic tool may never be able to provide deterministic responses" argument).

So for those reasons, I think we're not yet ready to get humans out of the loop, and that's before we even get to ethical or humanistic concerns.

Given the current state of the technology, I'd settle for something like "keep INFORMED, RATIONAL, AND RESPONSIBLE people in the loop".

I do agree that "getting humans out of the loop" is something we can strive toward in some low-risk cases. Still, we need to be realistic and responsible about what the technology can actually do.

Johannes Sundlo

Really appreciate your thoughtful response and I agree with much of it. We’re definitely not there yet when it comes to auditability, accountability, or intelligence. Your points are valid.

That said, I'm being deliberately provocative here. I think we often assume "human in the loop" is inherently better, but humans also make opaque, biased, and irrational decisions. We just happen to have legal and social systems that let us hold them (us) accountable.

What if the real challenge is building similar structures around AI, assigning responsibility to the organization rather than the algorithm? I'm not saying we should remove people recklessly. But maybe the brave thing isn't always to keep humans in the loop; sometimes it's to admit we're the bottleneck?

Appreciate you pushing this forward. This is exactly the kind of debate we need!

Hernan Chiosso

Maybe the question is not so much about whether there is a human in the loop, but rather whether we're dealing with the right loop or if it needs to change. :)
