Brace for Impact: AI Regulations Incoming.
Everything HR Leaders Need to Know About the Impending Changes
Welcome to FullStack HR, and an extra welcome to the 187 people who have signed up since last week.
If you haven’t yet subscribed, join the 5700+ smart, curious HR folks by subscribing here:
Happy Friday people,
Today will be all about AI regulations. I’ll base my discussion on the upcoming EU AI Act (it has a more fancy name, but that is essentially what it is).
But wait - is this article only for HR peeps in the EU? No.
I think similar regulations are coming in many countries and states.
NY has already passed a similar law, for example.
Buckle up your legislation belt - here we go.
TL;DR - The EU's upcoming AI Act will require companies using AI algorithms for high-risk HR functions, such as hiring and employee evaluations, to meet major obligations like transparency, risk assessments, and accountability. While presenting challenges, these rules also allow us in HR to establish formal governance and oversight mechanisms for developing ethical and responsible AI systems.
The European Union stands poised to pass the world's first major regulatory framework for artificial intelligence - the Artificial Intelligence Act (AI Act). This far-reaching new law will place obligations on companies and organizations using AI within the EU market.
At its core, the AI Act aims to create consistent rules and standards for trustworthy artificial intelligence. It takes a risk-based approach - classifying AI systems as high-risk, limited-risk, or minimal-risk. Higher-risk AI involving sensitive areas like law enforcement or recruitment will face stricter requirements.
The EU believes consistent guidelines are crucial to fostering ethical AI and protecting fundamental rights. Without regulation, AI being rapidly deployed across sensitive societal domains risks opaque discrimination and bias.
The AI Act targets high-stakes sectors like healthcare, employment, law enforcement, and consumer products. The EU believes that unethical AI could significantly endanger people's safety and civil liberties in these areas.
That's why the European Commission in 2021 proposed to regulate this powerful technology, initiating a lengthy legislative process. With careful rules of the road, the EU aims to maximize AI's benefits while minimizing its dangers.
The three main EU institutions - the Parliament, Council, and Commission - have now kicked off intense negotiations known as trilogue meetings to reach a consensus on the final language of the AI Act. They aim to hammer out any differences and get the regulation approved and formally adopted by the end of 2023. If all goes according to plan, the landmark AI Act could then take effect across the EU within the next two to three years, following a transition period to allow companies time to prepare.
So, while the AI Act still faces some final hurdles, the finish line is within sight. The EU appears poised to make history by becoming the first leading economy to pass comprehensive legal standards governing this rapidly emerging technology.
But many complex details must still be ironed out first behind closed doors to turn the AI Act from an ambitious proposal into policy.
Now, specifically looking at human resources, which we always do here, we know that AI is rapidly transforming the field. Resume screening bots, personalized learning platforms, productivity trackers, predictive attrition models - the list of ways AI is reshaping our workflows goes on.
Forward-thinking CHROs have embraced this change to create more efficient, data-driven, and even human-centric people processes powered by intelligent algorithms. That’s all good and well.
But things can go sideways very quickly if the power of AI is not handled responsibly. I’ve talked about this in the past as well. Bias, discrimination, blind spots, privacy breaches - these dangers lurk around every corner when unleashing unfettered algorithms on employees. Just look at the cautionary tales of automated hiring tools that ended up discriminating against women.
So the machines are coming for HR - but who will watch the watchers?
That’s where the EU's AI Act comes in. It puts guardrails around adopting artificial intelligence in HR, enforcing transparency, accountability, and ethics standards.
The law creates obligations for companies using AI in high-risk HR applications like:
Candidate screening and recruiting
Employee monitoring and evaluations
Promotion and termination decisions
Requirements will likely include algorithmic transparency, impact assessments, prevention of biased outcomes, and human oversight. The goal is to balance innovation with thoughtfulness as AI permeates HR processes.
IMHO, the AI Act presents opportunities to formalize ethical AI approaches. With foresight and initiative, we can shape the intelligent workplace of the future - rather than letting it shape us.
At first blush, this may seem like typical heavy-handed government regulation rearing its ugly head to stifle innovation and progress. But look a little closer; there’s a more nuanced story here about shaping the future of AI in the workplace.
If the AI Act is approved in its current proposed form, companies using high-risk AI systems will need to comply with new requirements around documentation, assessments, and human oversight. Specifically, they must document technical details of how their AI systems function, perform impact assessments examining risks, and ensure human review of automated decisions.
These are not exactly unreasonable safeguards when potentially life-changing functions are being delegated to algorithms. I don’t think so, at least.
Turning Obligations into Opportunities
Naturally, any new regulation requires investments and transition costs. Integrating oversight, documentation, and governance processes will not be painless.
However, if we act smart, we can turn these obligations into opportunities to build institutional knowledge, assess risks, and formalize our approach to ethical AI.
As always, those who prepare early will race ahead.
Responsible rules of the road accelerate progress by preventing a race to the bottom. Companies aren't tempted to cut corners if certain ethical and accountability standards are mandated. Of course, one could always have hoped that such regulations wouldn't be needed. Still, I consider them fairly reasonable; handled rightly, the law could act as a catalyst for discussing the ethical dilemmas connected to AI, for example.
If I'm honest with myself - even though I've thought about and discussed this since earlier this year - when I look at the broader HR community, it seems the talk about ethical AI has not been matched by meaningful accountability and enforcement.
But the EU AI Act will soon start connecting words with actual requirements. We, the HR leaders, will need to walk the talk and prove AI can be this positive change that most of us believe it can be.
I am not saying this will be easy.
This is uncharted territory filled with hard questions:
How do we balance productivity with dignity and consent when applying AI to manage people?
When does personalization turn into discrimination in AI-guided learning?
Can black-box algorithms ever provide satisfactory explanations for their HR-related decisions?
What guardrails are needed so automation enhances workers rather than displacing them?
There are no perfect answers yet. But the inquiry has begun in earnest.
What to do?
Keep reading FullStack HR; I’ll make sure to keep you updated on the EU AI Act and what it means for HR teams.
But you can already take action now if you are considering any new HR technologies leveraging artificial intelligence:
Ask vendors directly how they prepare for and mitigate risks related to the upcoming AI regulations.
Inquire if their AI systems can be easily explained and understood by non-experts. Avoid inscrutable black-box algorithms.
Verify whether they have conducted independent audits or used outside experts to vet their AI tools for transparency, fairness, and compliance.
Review what employee rights and safeguards they have built into high-risk applications like candidate screening or monitoring.
Scrutinize what data is used to train their models - is it high-quality and free from inappropriate bias?
Confirm they allow human oversight and control rather than fully autonomous AI decisions.
Request documentation on their development processes, testing protocols, and risk minimization methods.
The EU AI Act warrants asking vendors tough questions to ensure their AI offerings align with emerging regulatory expectations. HR teams must procure AI responsibly and ensure it augments workers ethically.
What’s your take on the upcoming legislation?