How Does It Feel to Be Managed by a Machine?
The Other Side of the AI Boss
Welcome to the 87 new FullStack HR readers who joined last week!
The ambition for this newsletter is high: to be the guide for organisations through the AI transformation that is happening!
I’ve spent three editions on bosses.
→ What an AI boss could look like.
→ What’s left when AI handles the managing.
→ Why we expect bosses to care in the first place.
I built the case. AI can individualize at scale, better than most humans can. It doesn’t get tired, it doesn’t have moods, and it remembers your goals. For many people, it could be the best boss they’ve ever had.
And yes, I still believe that, or parts of it at least.
But I’ve been arguing from the manager’s and the organization’s side.
What about the person on the receiving end?
How does it feel, or would it feel, to be managed, coached, followed up on, and given feedback by a machine?
We already know something
This isn’t entirely hypothetical. Millions of people already work under algorithmic management, like Uber drivers, Amazon warehouse workers, and food delivery couriers. Their schedules, performance ratings, and in some cases, terminations are shaped by software.
Not all of that is “AI” in the ChatGPT sense, but it’s still machines making managerial calls, and I do believe that some of the lessons transfer.
And the research is pretty clear. How it’s designed (and governed) matters more than what it is.
A 2024 study of food delivery couriers in Finland (based on 30 interviews), “Algorithmic management, wellbeing and platform work”, found that the work had what the researchers described as “iso-strain” characteristics: high demands, low control, and low support, plus isolation. Workers described stress, frustration, and mistrust, especially when decisions felt opaque or unfair.
One detail matters here: when couriers reached out for help, they often met automated, algorithm-driven responses that didn’t address their concerns. That’s where a lot of the damage happened.
To be fair, the study also notes nuance here; support on issues directly related to deliveries could be fast and adequate. But broader support and real dialogue were limited.
The algorithm wasn’t the problem. The design was.
(And one could argue that these were not the latest and greatest models from the frontier labs, and that the study therefore rests on an outdated premise.)
The loneliness chain
A 2023 paper in the Journal of Applied Psychology, “No Person Is an Island”, went further. The researchers ran four separate studies with a total of 794 participants across Taiwan, Indonesia, Malaysia, and the United States.
The finding was that the more employees interacted with AI systems for work, the lonelier they felt.
Not metaphorically lonely. Lonely as in the kind of loneliness that spilled into life outside work: more insomnia and, in some of the studies, more after-work alcohol use.
It’s important to emphasize that the loneliness was self-reported. But some of the spillover wasn’t. In parts of the research, coworkers and cohabiting family members also reported changes like less helping at work, and more insomnia. Alcohol results were more mixed across the studies.
The researchers linked this to social deprivation. When you spend your day interacting with a system that can’t reciprocate connection, you can end up with a social “deficit”. You still need affiliation and to belong.
A 2025 study in Behavioral Sciences, “Effects of Employee AI Collaboration on CWBs”, tested a similar chain in a vignette experiment. Employee-AI collaboration increased loneliness, which led to emotional fatigue, which led to counterproductive work behavior.
But there was a catch. When leaders provided emotional support, the negative effects were weaker. The leader’s care interrupted the chain.
Which raises a question: if we need humans to offset the loneliness AI can create, have we actually saved anything?
The belonging question
Work isn’t just about productivity. It’s about being part of something.
When I was at Spotify, Kry, or any of the other companies I’ve worked for, what kept people engaged wasn’t just the work. It was the team. The manager who noticed when you were struggling. The colleague who grabbed coffee with you after a hard meeting. The sense that you mattered to someone.
I see the same thing now in the organizations I work with. Last month, I ran a workshop where we discussed this topic, and a manager said something along the lines of: “My team doesn’t need me for the tasks. They need me for the ‘are you okay?’ after the meeting, when I can tell something’s off. I’m not sure AI can replicate it.”
Can it try? Sure. Can it do a version of it? Probably.
But the research so far suggests that as of today, something gets lost in translation.
The surveillance problem
But can AI actually do that? Mimic the “are you okay?” moment? I touched on this in the last article, but it deserves more space here. Because the same AI that can provide personalized coaching can also track your keystrokes, analyze your calendar, measure your response times, and decide you’re “less productive” than last quarter.
Amazon warehouse workers know this reality. Pace and performance can be tracked closely, and those metrics can drive warnings and consequences. (For example, reporting on “Time Off Task” systems and automated warnings: The Verge.)
And we don’t need to pretend this is only about one company. A 2023 review in the International Journal of Environmental Research and Public Health, “Workers’ Health under Algorithmic Management”, summarized how algorithmic management can influence job quality factors linked to health and well-being.
When systems monitor without explaining, evaluate without transparency, and discipline without appeal, workers experience what many call “digital Taylorism.” The efficiency gains go to the organization, and the stress goes to the worker.
I mentioned Frederick Taylor in the last article. In Shop Management (1903), he wrote that “all possible brain work should be removed from the shop” and centered elsewhere.
Digital Taylorism is the same idea with better technology. And it’s already here.
So what should we do?!
I could paint a picture here of how AI management done well might look. The empathetic check-in, the personalized career development and the transparent feedback loop.
But I already did that in the three articles mentioned at the beginning. What I think is more useful is a different question: what’s the minimum standard we should accept?
My point of view is this: if your organization is deploying AI in any management capacity, these four things should be non-negotiable, and we need to be explicit about making them happen.
Transparency. Workers should be able to see why AI is giving them certain feedback or making certain decisions. “The algorithm decided” is not an explanation. If you can’t explain it, you shouldn’t deploy it.
Appeal. There has to be a human you can escalate to. Always. No exceptions. If someone’s performance rating, schedule, or employment status is affected by an algorithm, they need a path to a real person who can override it.
Voice. Workers need input into how these systems work. Not after deployment. During design. The people who will be managed by AI should have a say in how it manages them.
Boundaries. Define what AI can and can’t do, and what data it can and can’t collect. Can it give feedback? Fine. Can it fire someone? That needs a different conversation. Draw the lines before the technology makes the decision for you.
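If you want to make the four principles concrete, here’s a toy sketch in Python. Every name in it is hypothetical (this is not a real library or product); it just shows what “non-negotiable” could mean in practice: an AI-driven management decision simply doesn’t take effect unless all four checks pass.

```python
from dataclasses import dataclass

# Hypothetical sketch: encodes the four guardrails as checks an
# AI-driven management decision must pass before it takes effect.

@dataclass
class AIDecision:
    subject: str           # the employee affected
    action: str            # e.g. "feedback", "schedule_change", "termination"
    explanation: str       # human-readable reason (Transparency)
    human_escalation: str  # named person who can override (Appeal)

# Boundaries: lines drawn in advance; firing is deliberately out of scope.
ALLOWED_ACTIONS = {"feedback", "schedule_change"}

# Voice: set during design, together with the workers affected.
WORKER_APPROVED_DESIGN = True

def guardrails_ok(d: AIDecision) -> bool:
    return (
        bool(d.explanation.strip())      # Transparency: "the algorithm decided" is not enough
        and bool(d.human_escalation)     # Appeal: always a human to escalate to
        and d.action in ALLOWED_ACTIONS  # Boundaries: only pre-agreed actions
        and WORKER_APPROVED_DESIGN       # Voice: workers had a say in the design
    )

print(guardrails_ok(AIDecision("A. Worker", "feedback",
                               "Missed two agreed deadlines this sprint",
                               "Line manager: J. Doe")))          # True
print(guardrails_ok(AIDecision("A. Worker", "termination", "", "")))  # False
```

The point of the sketch isn’t the code itself; it’s that all four checks sit in one place, are readable by a non-engineer, and fail closed.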
These aren’t ambitious goals; they’re the bare minimum as I see it. And many organizations deploying algorithmic management aren’t meeting them, let alone thinking about them. But I strongly believe we need to keep all of them in mind when thinking about AI and leadership.
What I don’t know
All this said, and thinking back across all the articles I’ve written, I still feel uncertain. I don’t know how I’d feel being managed by an AI.
Part of me thinks I’d appreciate the consistency. Less politics, less favoritism. Feedback that doesn’t depend on my manager’s mood.
Part of me thinks I’d miss having someone “who knows me”. Not “knows my data” but knows me. Who I can read, who might bend the rules when I need it. I’m not even sure if that’s rational, but it’s real for me at least.
Part of me suspects it depends entirely on whether the people who design these systems give a damn about the people who use them.
And that’s the question I want to leave you with.
When you deploy AI to manage people, whose experience are you optimizing for?
The answer to that question will shape what work feels like for millions of people. And it’s being answered right now, in every organization rolling this out.
Whether anyone is asking the question or not.


