Ethics & AI Playbook.
Free copy inside. (I always wanted to write that!)
If you aren't subscribed yet, join the free newsletter by subscribing below.
We made a detour last week, but now we’re back on the AI path again.
Sometimes I want to climb a hill and scream at the top of my lungs that we all need to pay attention to what's happening, right now. Which, of course, isn't news if you've been reading FullStack HR in recent months.
And yes, there are a lot of articles and LinkedIn posts about ChatGPT and AI. I've been subscribed to everything on Google Alerts since early December, and the flood of articles isn't exactly decreasing.
Most only briefly touch on the subject; at best, they nudge people to take an interest in using the tool. It's rare to find something that dives deeper into the potential ethical implications and what the systemic shifts will mean, not only for us as employers but for us as a society.
I have been no better myself; my articles have mostly been about the opportunities, which has served its purpose. But today, with a top-of-my-lungs article, I'm hoping to change that.
I rarely ask this, but today I am. If this sparks any emotion in you, anger, happiness, fear, whatever it might be, please share this article onward. I know this is only my view and that it's incomplete, but that is also why I'm asking.
We need to shape the future, and we shape the future by discussing it. So if you share this and nudge people to start talking about it, we'll hopefully get a broader discussion going.
Let’s get to it.
One of my favorite AI books is Life 3.0.
The first time I listened to the book, I drove from home to the train station. It isn't a long car ride, but it was enough time for Max Tegmark to paint such a grim picture of the future that I had to sit in the car, taking it all in, before running to the train. It's easy to end up in such a spot when discussing AI ethics: you paint the worst possible scenario and use that as the foundation for the rest of the story.
That’s not the aim of this piece.
This is more about addressing concerns and problems I see about AI, ChatGPT, and where the world is. Because I believe most of us are playing catch-up here.
What do I mean by that?
Since its launch in late November, ChatGPT has grown tremendously, and it’s guesstimated that they now have well over 200 million monthly active users.
The likelihood that you have employees using ChatGPT is therefore relatively high. Which is great. But is it ok? And I mean beyond what is obviously not ok, such as sharing real-time financials and sensitive personal information. No, is it ok in terms of "have you thought about the other implications it might bring with it"?
What do I mean by that?
Let's assume you base pay, and pay increases, on performance.
What if you suddenly see some people increase their efficiency by 30-40% thanks to ChatGPT? How will you handle that in the following year's performance review? Will people who aren't using ChatGPT get a smaller salary increase? Perhaps they'll even end up in the bottom ten percent who receive a PIP (or even worse, lose their job), all because their ChatGPT-infused colleagues are outperforming them.
Don't get me wrong. I think it's marvelous to have ChatGPT as an assistant, always ready to help and improve efficiency. But what I don't want is to create an A-team that reaps the benefits of such tools and a B-team that chugs along as before. We have a role to play here in ensuring people get equal opportunities to learn and adopt the technology.
Or what if you start to analyze candidates through an AI tool? Plenty of tools on the market now claim they can help you source, evaluate, and either approve or decline candidates. Is that ok? Do you inform your candidates that the selection is made this way, or don't you? And hello, hello, biases. We know that AI tools in general tend to be biased, so how do we ensure those biases are reduced?
I'm not suggesting you shy away from using the technology; quite the opposite. I believe that, to a large extent, the technology will be, or already is, better than us in many aspects, such as vetting candidates. But we need to be thoughtful when implementing tooling in our organizations.
That means taking a deliberate stance on how we use the technology, supporting employees in adopting whatever tech we implement, and being transparent towards whomever it may concern that this is how we operate, not only when using AI but in general.
I love a good playbook, and I know creating an AI playbook is a natural next step for many companies. We probably won't need it long-term, but having one quickly is reasonable. Conflict and doubt come from ambiguity; since we are in the middle of this transition, a playbook will help diminish that ambiguity.
I’ve created a draft for such a playbook here.
It’s free - make your copy if you want to use it.
(If you have feedback on the playbook, reach out.)
The playbook emphasizes, and almost starts with, one crucial factor: ensuring that we educate and level the playing field for all employees. I alluded to it above, but I want to push this one. When only a limited number of your people have access to these tools, it creates an imbalance in the organization. We need to educate our people on how to use them.
I know this is not 100% complete, and there are probably angles I've missed. Should you find such angles or refine the template, please let me know.
We need to work together on this, and as said many times here, the future is yet to be defined - let’s define it together.