Welcome to FullStack HR, and an extra welcome to the 56 people who have signed up since last week.
If you haven’t yet subscribed, join the 7000+ smart, curious HR folks by subscribing here:
Happy Friday,
Today’s topic has been on my mind for quite a while, and I’d like to hear your perspectives on it, so feel free to read this and comment below!
Also, next week the first 30 people to enroll in my Learn AI and HR course will receive a 30% discount. Use the code June2024 to take advantage!
Let’s get to it!
There's a sentiment I encounter almost everywhere I go (or whenever I open LinkedIn).
AI is full of bias and thus dangerous to use, so we shouldn't rely on it.
But is it really worse than humans?
I mean, we are full of biases.
Consider confirmation bias, where we favor information that confirms our preconceptions, or the halo effect, where our overall impression of a person influences our judgment of their traits.
These are just a couple of examples of the myriad cognitive biases that color our decision-making processes.
Yet we often set the bar impossibly high for AI, expecting it to be flawless and entirely free of bias.
When it doesn’t achieve perfection, we become critical.
But should we be critical? Are we any better ourselves?
Despite these well-documented shortcomings, we often give ourselves a pass. It’s common for managers to let poor performance slide, as addressing it can be uncomfortable. A hiring manager might overlook a candidate's red flags during an interview because they share a common background or interest, leading to a poor hiring decision.
In contrast, as noted, we scrutinize AI systems intensely, expecting them to be infallible. When AI fails to deliver consistently perfect results, we criticize it harshly. This double standard is not only unfair but also counterproductive.
We, the humans.
When it comes to recruitment, the objections raised against AI are often the same issues that can be leveled against human recruiters.
AI can be biased, but so can humans.
AI can make mistakes, but so can humans.
What we need to acknowledge is that AI, when used appropriately, can augment human capabilities and help mitigate some of our inherent biases.
For instance, AI can assist in screening resumes, identifying potential candidates who might otherwise be overlooked due to unconscious biases. It can help ensure a more objective assessment by focusing on skills and qualifications rather than subjective criteria.
Moreover, AI can handle large volumes of applications efficiently, freeing up human recruiters to focus on more nuanced aspects of the hiring process.
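To make that concrete, here's a minimal sketch of what a bias-mitigating screening step could look like: redact identity details before scoring, so the assessment is driven by skills rather than names or ages. Everything here is hypothetical illustration; the patterns, the `redact` helper, and the toy keyword scorer stand in for whatever a real system (a structured skills taxonomy, or an LLM-based assessor) would do.

```python
import re

# Hypothetical fields that often trigger unconscious bias.
# In a real pipeline, these would come from a configurable policy.
REDACT_PATTERNS = {
    "name": r"Name:\s*.+",
    "age": r"Age:\s*\d+",
}

def redact(resume_text: str) -> str:
    """Strip identity details so scoring sees only skills and experience."""
    for label, pattern in REDACT_PATTERNS.items():
        resume_text = re.sub(pattern, f"[{label} redacted]", resume_text)
    return resume_text

def score_candidate(resume_text: str, required_skills: list[str]) -> float:
    """Toy scorer: fraction of required skills mentioned in the resume.
    The principle is what matters: score the redacted text, never the raw one."""
    text = redact(resume_text).lower()
    hits = sum(1 for skill in required_skills if skill.lower() in text)
    return hits / len(required_skills)

resume = """Name: Jane Doe
Age: 34
Experience: 8 years in payroll systems, Python automation, people analytics."""

print(score_candidate(resume, ["payroll", "Python", "people analytics"]))
# -> 1.0, and the score is identical no matter whose name is on the resume
```

The point isn't the toy scorer. The point is that the process can be designed so that bias-prone details never reach the decision, which is something we can't guarantee with a human reading the raw document.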
So instead of asking, "Is it really ethical to use Generative AI in HR?" we should flip that question and ask, "Is it really ethical to rely solely on humans in HR?"
If Generative AI is better than we are at predicting who will be a good hire, is it ethical not to use it? People may suffer from poor hiring decisions if we don't.
Likewise, if Generative AI is better at assessing performance, is it ethical not to use it? AI might even be better at giving constructive feedback than some managers. Wouldn't it be more ethical to let AI handle this if it leads to better outcomes for employees?
These questions highlight a critical point: It's time to rethink when it's ethically appropriate to keep humans in the loop.
If AI can enhance fairness, accuracy, and efficiency in HR processes, then don’t we have a moral obligation to consider its use seriously?
The fear of AI biases and errors should not overshadow the fact that humans are equally, if not more, prone to these issues.
By overestimating our own judgment and setting unrealistically high standards for AI, I believe we miss out on the benefits AI can bring to our field, and on its potential to reduce bias in our decisions over the long term.
What do you think?