Continuous AI Audits Are the CHRO Superpower for 2025
I aimed to fly like Superman, but nonstop audits handed me the real cape.
Welcome to FullStack HR, and an extra welcome to the 19 people who have signed up since the last edition.
If you haven’t yet subscribed, join the 9,300+ smart, curious, and like-minded HR people by subscribing here:
Happy Tuesday.
Thank you for all your emails and comments, and not just about last week's article. It was everything I hoped for: I wanted to spark a debate. I'm not saying we should keep the human out of the loop; I'm not sure I'm convinced of that myself. But I want to at least entertain the idea, because if we don't entertain it, someone else will, and then it will simply happen. That's my whole point in writing this newsletter: to entertain different ideas.
For next week, I'm working on a similar piece where I'm not just entertaining an idea but exploring one. But now I'm getting ahead of myself. Let's get back to this week's piece.
This one is a little bit special, because I had to sit down and talk to someone. Usually I just write these from the heart, but this time the piece called for an actual conversation.
(I'll get to who I sat down with in the piece)
Let’s get to it.
Listen to the article - powered by ElevenLabs - try it for free here. (It’s like black magic.)
January, London.
I was standing there, coffee in hand, moments before stepping onstage beside Thomas Otter. Now, Thomas is someone whose work I'd followed and admired from afar for ages, but we'd never met face-to-face. The session went great, and we shook hands on keeping the conversation alive. (If you're not already, go follow his Substack.)
Fast forward a bit to HR Tech in Amsterdam. Who do I run into? Thomas again! And his first words to me were, "You really need to meet Jeff at Warden AI. You guys will click."
If you've been reading my stuff for a while, you know my historical relationship with the word "compliance." Let's just say it wasn't exactly a topic that set my soul on fire initially.
But here's the thing: I trusted Thomas's instinct, followed that nudge, and connected with Jeff. And Thomas was right. (He usually is.)
Jeff wasn't talking rules; he was talking about the absolute core of what we do in HR: building trust.
And to be crystal clear, this isn't a sponsored post.
I'm merely sharing something that clicked for me and that I think is incredibly important right now. Because I've been talking a lot lately about figuring out how to use AI safely and ethically in HR, diving into things like the EU AI Act and trying to make sense of what "compliant AI" looks like day-to-day.
And yes, the word "compliance" has definitely appeared in my writing way more than I ever would have guessed a few years ago. So, when Jeff offered to walk me through what they're building at Warden AI, my curiosity was absolutely piqued.
Trust
My first question to Jeff was pretty direct: "Okay, lay it on me. What exactly does Warden do?"
He didn't launch into a feature list right away (which IMHO is always a great sign).
Instead, he cut right to the pain point, the one I hear from so many other HR leaders as well. We want to use AI. We see the potential. But the worry about fairness, the fear of legal hot water, and the pure headache of trying to explain complex algorithms to the actual human beings who are affected? That friction is real.
Jeff put it simply: "HR leaders want AI, but they worry about fairness, legal risk, and explaining algorithms to people affected. We audit tools continuously, not once a year, and help teams communicate results so trust grows."
That phrase, "so trust grows," landed with me. It’s not only about finding red flags. It's about actively showing your work, demonstrating that you're committed to fairness, and proving you're fixing things when the data shows an issue. If you've ever had to get a new piece of tech past your (understandably) skeptical legal team or exec leadership team, you know how massive a deal that kind of demonstrable proof is.
But, yes, there are lots of "AI audit" tools popping up. Sometimes they feel like generic security checklists that someone tried to force-fit into an HR context. What I like about Warden AI is that it's built for HR workflows and the unique nuances of making decisions about people.
From what Jeff walked me through, here’s what felt particularly valuable from my perspective:
Continuous audits: This is non-negotiable now. AI models aren't static. They learn, they change, they interact with new data constantly. Auditing once a year is like checking your car's brakes only after they've already failed. Compliance needs to be live and ongoing.
Deep fairness analysis: They look across a whole spectrum of protected groups, twelve categories in all, including disability, sexual orientation, and age. This goes beyond the usual suspects and helps uncover more subtle biases. (There's a small sketch of the math behind this kind of check just below.)
Individual-level checks: It's not only about group averages. They perform checks at the individual level too. That's crucial because even if an algorithm looks okay on average, it might be unfairly impacting specific people.
Actionable proof points: Real-time dashboards are helpful for the HR team, but those shareable "assurance badges" are a genius move. They offer a tangible proof point you can show stakeholders, saying, "Look, this system isn't running on hope; it's being independently watched and verified."
Jeff points to the last one here as well: "Those badges prove someone independent is watching and the system isn't running on hope."
We in HR need clarity and confidence, not another layer of confusing data.
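To make that concrete, here's a minimal sketch of the group-level math these audits are built on. To be clear, this isn't Warden AI's actual method; it's just the classic "impact ratio" calculation that NYC Local Law 144 bias audits rely on, with the four-fifths rule as a rough red-flag threshold. The data format and function names are my own illustration.

```python
from collections import defaultdict

def impact_ratios(records):
    """records: (group, was_selected) pairs, e.g. ("40_plus", True)."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        hits[group] += int(was_selected)

    # Selection rate per group: the share of that group that got through.
    rates = {g: hits[g] / totals[g] for g in totals}
    best = max(rates.values())
    if best == 0:
        return {g: 1.0 for g in rates}  # nobody selected: nothing to compare

    # Impact ratio: each group's rate relative to the best-treated group.
    return {g: rate / best for g, rate in rates.items()}

def adverse_impact_flags(records, threshold=0.8):
    """Four-fifths rule: ratios below 0.8 are a common red flag."""
    return {g: round(r, 2) for g, r in impact_ratios(records).items()
            if r < threshold}

# Example: screening outcomes by age group.
outcomes = [("under_40", True), ("under_40", True), ("under_40", False),
            ("40_plus", True), ("40_plus", False), ("40_plus", False)]
print(adverse_impact_flags(outcomes))  # {'40_plus': 0.5}
```

The "continuous" part is simply running a check like this on every scoring cycle instead of dusting it off once a year.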
The EU AI Act isn't a drill (and we need to be ready)
Here’s the tough reality: the EU AI Act is happening. Now.
And while the EU might seem far away for some, this kind of regulation tends to set a global standard. Plus, we already have rules like NYC's Local Law 144, Colorado's AI Act is coming, and so on. This new landscape means most HR tech that uses AI is now officially classified as "high risk."
What does that mean for us? Transparency, detailed documentation, and ongoing monitoring are shifting from "nice ideas" to mandatory legal requirements.
Tools like Warden AI are built specifically to help us meet these demands head-on by:
Doing independent audits that are aligned with specific regulations like NYC Local Law 144 and the EU AI Act.
Creating reports that are ready to be seen by regulators, or even the public, if needed.
Giving us live bias tracking that meets the regulatory expectation for continuous oversight.
Supporting explainability, so when someone asks why the AI did something, you're not left scrambling to decode a black box. (A toy example of one such probe follows below.)
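On that last point, here's a toy version of one well-known individual-level probe: the counterfactual "flip test," where you re-score the same person with a single protected attribute changed and watch whether the outcome moves. The `score_candidate` function here is a hypothetical stand-in for whatever model your vendor runs; this shows the shape of the check, not any product's API.

```python
def counterfactual_flip(candidate, attribute, alternative, score_fn, tolerance=0.05):
    """Re-score the same candidate with exactly one attribute swapped.

    Returns True if the score moves by more than `tolerance`,
    a sign the model may be leaning on that attribute.
    """
    original = score_fn(candidate)

    flipped = dict(candidate)          # copy, so the input isn't mutated
    flipped[attribute] = alternative   # change exactly one thing
    changed = score_fn(flipped)

    return abs(changed - original) > tolerance

# Hypothetical usage, assuming a score_candidate(dict) -> float model:
# if counterfactual_flip(candidate, "gender", "female", score_candidate):
#     print("Score shifts when only gender changes -- investigate.")
```

It won't decode the model's internals, but it gives you a concrete answer when someone asks whether a given attribute mattered.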
As Jeff put it so perfectly, and it's something I keep thinking about: "Compliance is no longer a once-a-year activity. It’s a rhythm." We absolutely have to learn that beat.
Okay, so based on all this, where should we start?
Thinking about my conversation with Jeff, digging into the regulations, and watching how fast things are moving, here’s my take on the absolute essential first steps we in HR need to be tackling right now:
Map every single place AI touches people: Get a crystal-clear picture of where algorithms are involved in any HR process. We're talking everything from that initial screening tool to the performance insights dashboard.
Bring legal and compliance to the table ASAP: Seriously, don't wait. They are your critical partners in navigating this.
Grill your HR tech vendors: Ask them, specifically and repeatedly: How often do you audit your AI for fairness? What are you checking for? Who reviews and signs off on those audits? Demand transparency.
Build out candidate/employee documentation: You need clear, simple explanations for how AI is being used in processes that affect people's jobs and applications. Transparency builds trust.
Set up a regular review cycle: Your AI models aren't static; your oversight shouldn't be either. Put regular check-ins on the calendar to review performance and fairness metrics.
Start small: Pick one process, one tool. One simple tactic I heard about was labeling data green/yellow/red based on its sensitivity and whether it's okay to feed into AI tools. Low effort, surprisingly high clarity. (Sketch below.)
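For what it's worth, that last tactic barely needs tooling. Here's a sketch of what I mean; the field names and tiers are purely illustrative, so agree on the real ones with your legal and compliance partners.

```python
# Green: fine to feed into vetted AI tools.
# Yellow: only with care (anonymise or aggregate first).
# Red: keep out of AI pipelines entirely.
DATA_SENSITIVITY = {
    "job_title": "green",
    "years_experience": "green",
    "salary_band": "yellow",
    "performance_rating": "yellow",
    "health_data": "red",
    "ethnicity": "red",
}

def ok_for_ai(field):
    """Anything unlabeled defaults to red: the safest failure mode."""
    return DATA_SENSITIVITY.get(field, "red") != "red"
```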
Let's talk fairness
Here's something interesting: When AI systems are audited and managed properly, they can often be more consistent and fair than human-only processes, which are prone to unconscious bias. (See last week's post for more of my thoughts here.) The goal isn't perfect AI (it doesn't exist). The goal is building systems that are measurably better and more consistent than what we had yesterday. Tools designed for this, like Warden AI, help by looking across the entire candidate or employee journey: sourcing, screening, interviews. They flag issues and suggest concrete fixes.
Quick trends I'm keeping an eye on:
Candidate power: Job seekers are going to start expecting transparency about AI use, and potentially even ways to opt out of algorithmic screening.
Lawsuits: Those early lawsuits around algorithmic discrimination are going to set important precedents for all of us.
Proof as a dealbreaker: Showing that you have a responsible, compliant AI approach is rapidly becoming non-negotiable when buying any new HR tech.
Look, this isn't about getting scared. It's about getting smart and being prepared.
Where I land after all this
My deep dive into this topic, sparked by that conversation with Jeff, reinforced something for me: Trust isn't given in the age of AI; it has to be earned. Earned by showing your work, being transparent about how algorithms are used, and being willing to adapt and correct when new data or audits tell you to.
That's precisely what these next-generation compliance tools are designed to help us achieve.
Whether you decide to explore a platform like Warden AI or take a different route, the critical questions we all need to be asking ourselves and our vendors remain the same:
Are our AI models doing what we think they're doing?
Are they fair and equitable for every single person?
Are they safe, transparent, and compliant with these evolving rules?
With major regulations like the EU AI Act coming online and more local laws following suit, now is the moment, absolutely the moment, to get our AI houses in order. These systems are shaping real lives and careers, and we have a huge responsibility to get them right.
If you're starting to navigate the complexities of AI in HR, or even if you're already using it, consider this your personal nudge. Start small, stay incredibly curious, and commit to building something fair, transparent, and worth believing in. That's the kind of thoughtful, bold, and ultimately human-centered HR future I'm excited about.
Oh, and hey, if you're feeling a little overwhelmed by all this and just want a simple place to start?
I've put together a quick two-minute bias-health check template based on some key compliance principles I've been digging into.
Reply to this email, and I'll send it over!