Welcome to FullStack HR, and an extra welcome to the 98 people who have signed up since last week. If you haven’t yet subscribed, join the 6100+ smart, curious, like-minded HR folks by subscribing here:
Today’s FullStack HR is brought to you by…me.
It’s Black Friday, and here’s the deal - get an 80% discount on upskilling your whole HR team.
You’ll get one customized session and unlimited spots (!) in the course Generative AI for HR for only €499 (ex VAT).
Use code UpskillMyTeam to redeem the offer.
I'll match that price if you find a cheaper way of upskilling your whole HR team regarding AI and HR.
Want an invoice? No problem. Contact me.
Happy Friday, folks,
I won’t add another article about the ins and outs of the debacle at OpenAI.
Ben Thompson made an excellent overview if you are looking for one.
No, I think now is the time to talk ethics and AI again. Because no matter what happens, you’ll be better off if you have your AI Ethics straight.
I did so a while back, but as with all things relating to AI, the speed of change is high, and most of us have yet to talk about ethics - hence, I deemed it a good topic to revisit.
Here we go.
Ethics is an interesting topic. At its core, ethics is the set of moral rules that help us decide right from wrong, guiding us to treat each other fairly and with respect.
The basic idea behind ethics is to treat people in a way that does not cause unjustified harm. Acting ethically takes thoughtfulness, compassion, and seeing a situation from someone else's perspective.
How we humans should treat each other is quite universally accepted.
So why do we need ethical AI rules? Because AI can mimic us humans. It can present itself to us as human-like, and it can - and sometimes does - make decisions that impact us. Thus, I think we need to create an ethical framework for how we interact with these AIs.
And who doesn’t love a good policy?
Joking aside, I think having these ethical guidelines is important, not for the sake of having them but more for sitting down and reflecting around them.
I’m keen that we make conscious decisions around AI, and creating a policy requires that consciousness from us - how do we want this to play out?
How do we want to interact with AI?
Steps to Develop Ethical AI Policies in HR
How does one then create these kinds of policies?
It will differ from organization to organization, of course, but here’s a general overview of how you could create these policies.
Step 1: Basic Understanding
Learn about AI and HR. If you don’t have a basic understanding of AI and what it could mean for HR, creating such a policy is very hard.
Identify core ethical concerns. Use your values if you have them; what does your organization stand for? Does it align with how you potentially could use AI?
Step 2: Risk and Opportunity Analysis
Assess potential benefits and risks (biases, legal implications).
Gather feedback from employees and stakeholders - the best approach is to have cross-functional teams work on these ethical questions. If you include people far and wide, you’ll capture the many perspectives that exist in your organization.
Step 3: Drafting Guidelines
Create clear, practical guidelines focusing on ethical principles.
Include examples and scenarios for clarity - utilize Generative AI to create policies and examples!
Step 4: Training and Communication
Develop and conduct training for HR teams and employees - it’s also a great feedback opportunity for the entire organization.
Communicate the importance of ethical AI usage.
Step 5: Monitoring and Review
Set up a system to track AI performance and adherence to guidelines. You could utilize a third-party vendor for this, such as Holistic AI.
Regularly update the guidelines based on feedback and new insights.
Step 6: Cultivate Ethical AI Culture
Promote open discussions about AI in the workplace - the more we talk about it, the better.
Encourage adherence to ethical practices at all organizational levels.
I know this might not suit all, but I just wanted to give you an idea of how such a process could look.
But okay, I know which question will be hitting my inbox now.
“Sure, that is a great process for setting it up, but do you have an example of what such a policy could look like?” I do, but I can’t share those.
But I can share my general thinking about what I think are key principles when creating this kind of policy.
Key Principles for Ethical AI Policies in HR
Transparency: AI systems in HR, such as those used for performance evaluations, should be designed to clearly explain their evaluations and recommendations. Employees should understand the criteria AI uses, whatever the application might be.
Fairness: AI in HR must actively work against existing biases. For example, AI used in recruitment should be meticulously tested and refined to ensure it does not favor candidates from certain backgrounds or with specific characteristics.
Non-Discrimination: This ties into the principle above. Policies should ensure AI tools in HR do not perpetuate discrimination. For instance, regular audits of AI recruitment tools are necessary to ensure they are not biased against any group.
Privacy Safeguards: AI systems handling employee data must employ advanced methods to protect this sensitive information, ensuring confidentiality and compliance with data protection regulations.
Clear Accountability Frameworks: There must be clear lines of responsibility regarding the development and application of AI in HR. This includes defining who is accountable for the outcomes of AI-driven HR decisions.
This is not a complete set of principles; you need to define those for yourself, but it is just to give you an idea and a starting point.
I still think that having the discussion and daring to raise these topics is the most crucial action you can take. It’s okay to be a bit uncomfortable and not know everything in this instance - it is a fast-moving landscape, and few, if any, have it all figured out.
It’s okay not to know, but don’t just let it happen - dare to raise the question and work actively with it.