The 97% Problem: When Employees Trust ChatGPT More Than Their Manager
New data reveals why workers are choosing AI over human leadership – and what it means for the future of management
Welcome to FullStack HR, and an extra welcome to the 87 people who have signed up since last edition.
If you haven’t yet subscribed, join the 9,800+ smart, curious, and like-minded future-of-work people by subscribing here:
Happy Wednesday!
Still riding the HR Tech high, honestly, it's a bit TOO fun meeting all these (you!) brilliant people.
But reality calls! Back in action this week with clients, and there's a clear theme: leadership. Everyone wants to talk about what management means now. Spoke to 100 managers on Monday and tomorrow it's 1,200 (!).
We Ask AI Instead of the Boss
PS. Prefer listening? You can listen to this episode on Spotify or Apple Podcasts.
97% of employees have turned to ChatGPT for advice instead of their manager. 63% do it regularly. 57% say it’s because they fear retaliation. All according to a new study by Resume Now.
At one level, I’m not surprised.
Work is transactional.
I sell my time. You buy it.
That imbalance doesn’t go away just because we add some posters about culture or run a values workshop. It sits there, quietly shaping how people behave, and we sometimes act as if it isn’t there, as if your manager were truly your friend. I’m not saying managers can’t be friends, but ultimately, a manager is a representative of the organization.
On another level, these numbers make me uneasy. We’ve been told for years that the future of leadership is “more human.” And yet here we are, with employees choosing a chatbot over their boss. Not because the chatbot is warmer, but because it feels safer.
The survey (968 U.S. workers, June 2025) shows:
97% have asked ChatGPT for advice instead of seeking it from their boss.
57% say they do it out of fear of retaliation.
70% think ChatGPT understands their work challenges better than their manager.
72% (!) believe it gives better advice.
49% say it’s been more emotionally supportive than their boss.
93% have used ChatGPT to prepare for a conversation with their manager.
61% have sent AI-written messages to their boss.
If you care about leadership, these numbers are hard to read. They suggest that employees are outsourcing both the practical and emotional sides of their work relationships to a tool.
What does this say about leadership?
Maybe we’ve misunderstood what “human” at work actually means and what it means to be a manager.
We often associate it with being friendly. With small talk, smiles, and maybe the occasional 1:1. That’s fine. But it’s not enough when power is unequal. When your boss controls your pay, promotion, and job security, there will always be questions you don’t dare to ask.
In that situation, turning to AI isn’t irrational. It’s rational.
A judgment-free, always-available assistant is simply safer.

So perhaps the real “human side” of leadership isn’t about warmth at all. Maybe it’s about clarity, fairness, and accountability. Things that hold when power is real.
The data also shows how employees are using AI:
drafting messages,
rehearsing difficult conversations,
getting fast answers to tricky policy questions,
even talking through stress and mental health (93% say they’d feel comfortable doing that with AI).
That last point is telling. Nearly half say ChatGPT has been more emotionally supportive than their manager. Not because it’s empathetic, but because it provides a space with no judgment and no risk.
In other words: AI is exposing a gap. Not by being “human,” but by being consistent, safe, and available.
What managers should do
I don’t think managers can or should try to beat AI on speed or convenience. That battle is already lost. Instead, the work is to focus on what only humans can do.
Build real psychological safety. Not slogans. Behavior. Respond to hard questions without punishing people. Show consistency over time.
Provide context. AI can list options. Only a manager can say which matters here, now, for this team.
Standardize the basics. Policies, processes, routines. Let AI handle the operational questions. Free your time for the important stuff.
Take responsibility. When risk is on the line, the decision belongs to the manager. That accountability is the essence of leadership.
Use AI with the team. Don’t treat it as competition. Show how you want it to be used. Share the responsibility openly.
HR’s role
HR has its own part in this shift. If AI is already acting like a first-line manager, then we have to stop polishing culture decks and start building the infrastructure for how this works.
Set the guardrails. Publish a clear, usable AI policy. Not a 20-page PDF no one reads. Something employees can follow in real life.
Equip managers, with AI as support. Don’t just train managers to “lead better.” Give them access to AI copilots that draft feedback, rehearse tough conversations, and summarize team sentiment. Used well, AI can make managers faster and clearer, not weaker. However, we in HR must design this so that managers don’t outsource accountability.
Redesign workflows. Treat AI as the first line for operational questions. Make that explicit. Then free managers to do what only humans can do: carry risk, set direction, take responsibility.
Measure and adapt, with aggregated signals. If employees are asking AI instead of their boss, you lose visibility unless you design for it. This doesn’t mean spying on individuals. It means aggregating the questions employees bring to AI: “What do people ask about policies?” “Where is confusion highest?” That data should flow back to HR and managers so they actually know where the pain points are. Without this loop, AI becomes a black hole.
Where this leaves me
I don’t have a neat conclusion here. I’m still working through it.
I keep circling back to this question: have we romanticized leadership? We’ve talked about being “more human,” but maybe that has blinded us to what actually matters when the stakes are high.
AI doesn’t feel or care. But people still turn to it, because it gives them something managers often don’t: fast, clear, non-judgmental responses.
If that’s true, then the challenge for leaders isn’t to compete with AI on empathy. It’s to deliver where AI falls short: clarity, fairness, accountability, and responsibility.
And maybe that’s what “human leadership” really means.
But what am I missing here?