Welcome to FullStack HR, and an extra welcome to the 198 people who have signed up since last week.
If you haven’t yet subscribed, join the 5000+ smart, curious HR folks by subscribing here:
Listen to the episode on Spotify or Apple Podcast.
👋 Happy Friday,
Have I mentioned that I’m dropping an online course? 😅
Well, I have one coming out on Sunday.
Tomorrow is the last day to use the secret FullStack HR code I sent out.
Search your inbox for “Johannes Sundlo,” and you should be able to find it.
(You can also use EarlyBird30 to get 30% off until the end of the day tomorrow.)
And yeah, here’s a free preview!
Today will be a mixed bag.
There’s been a lot of interesting news this week regarding AI, and I’ll try to summarize the updates and put some context and ideas around them.
Let’s get to it.
Workday goes AIday?
Workday had its annual Workday Rising conference this week. And just as they alluded to when they took the stage at Google Cloud Next, it was AI-intense, almost to the point of becoming comical. Everything was about AI, one way or another. I’ve yet to listen to all the sessions, but I’ve watched the keynotes and read the press releases. And there is interesting stuff in there, especially given the scale Workday operates at.
New Generative AI solutions span use cases like creating job descriptions, analyzing contracts, generating knowledge articles, streamlining collections, building custom apps, and more.
Workday's approach is fueled by its unified dataset of over 625 billion transactions (!) and its commitment to delivering transparent, trustworthy, and responsible AI.
They pushed hard to build trust in their products and to underscore the importance of things happening within their “safe space.” (I'm not saying they read last week’s article, but hey, they outlined the same stuff...)
One of the most interesting use cases is what they call AI-generated employee work plans. They describe it as enabling managers to quickly summarize employees’ strengths and growth areas, pulling from stored data including performance reviews, feedback, contribution goals, skills, employee sentiment, and more.
This sounds great from an HR point of view, but TechCrunch raised valid criticism of it:
Studies have shown that text-analyzing AI can exhibit biases against people who use expressions and vernacular that fall outside the “norm” (i.e. the majority).
For example, some AI models trained to detect toxicity see phrases in African-American Vernacular English (AAVE), the informal grammar used by some Black Americans, as disproportionately “toxic.” And Black Americans aren’t the only minority group that suffers. In a recent study, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models.
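To make that concern more tangible, here’s a tiny sketch of what scoring text with an off-the-shelf toxicity model looks like. To be clear, unitary/toxic-bert is just one publicly available example model, not anything Workday uses, and I’m not claiming what the exact scores would be; the point from the studies above is that dialect alone can shift them.

```python
# Illustrative sketch only: score two phrasings of the same sentiment
# with a publicly available toxicity classifier. unitary/toxic-bert is
# an example model, not what any HR vendor necessarily runs.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

phrases = [
    "I am not happy with how this project turned out.",
    "Nah, this project ain't it, we gotta do better.",
]

for phrase in phrases:
    result = toxicity(phrase)[0]
    print(f"{phrase!r} -> {result['label']}: {result['score']:.3f}")

# The studies above found that vernacular and dialect can shift these
# scores even when the underlying sentiment is the same - which is why
# a human should review anything built on top of them.
```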
Workday, of course, has a response to these worries.
In response to these concerns, Workday says that it’s “transparent about how its AI models are designed” (albeit unwilling to reveal the exact data used to train its models) and built the work plan feature to show managers “how the data inputs contribute to a strength or area of growth.”
“As with other Workday generative AI use cases and our human-in-the-loop approach, users are encouraged to review the results as a strong first draft that they should edit, iterate on and finalize,” Shane Luke, head of AI and machine learning at Workday, told TechCrunch via email.
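To make the “strong first draft” idea concrete, here’s a rough sketch of what a human-in-the-loop flow could look like. This is emphatically not Workday’s implementation: the data fields below simply mirror the inputs they name, and the OpenAI call is a stand-in for whatever model actually sits underneath.

```python
# Hypothetical sketch of a human-in-the-loop work-plan draft. Not
# Workday's code; the fields mirror the inputs they name publicly.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

employee_data = {
    "performance_reviews": ["Exceeds expectations on delivery."],
    "feedback": ["Great collaborator; could delegate more."],
    "contribution_goals": ["Lead the Q4 platform migration."],
    "skills": ["Python", "stakeholder management"],
    "employee_sentiment": "engaged, mild workload concerns",
}

prompt = (
    "Draft a summary of this employee's strengths and growth areas, "
    "and note which data input each point is based on:\n"
    f"{employee_data}"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model
    messages=[{"role": "user", "content": prompt}],
)
draft = response.choices[0].message.content

# The crucial part: this is a draft for the manager to edit, iterate
# on, and finalize - never an automatic verdict.
print("DRAFT (review before sharing):\n", draft)
```

The “note which data input each point is based on” line is my stab at the transparency Workday describes, showing managers how the inputs contribute to a strength or area of growth.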
In combination with this, Workday said on stage that they would build this “as secure as you, the users, want.”
I’m not sure that is a good idea, and it’s slightly disheartening that they flinch when they get a moment to really build trust in their products.
I also think that we, the users, are quite poor at using these tools. If a tool says one thing, most of us will deem it the truth. And if there’s little to no understanding of how the models arrive at these results, that will potentially become an issue. AI literacy is still low in organizations, and I think Workday, one of the biggest players in this space, should acknowledge that perhaps a bit more than they did.
If you use Workday, you’ll see all the upcoming generative AI features being rolled out over the next 6-12 months.
AI starts seeing
More from the news. OpenAI rolled out a new feature this week where you can submit a picture to ChatGPT, and ChatGPT can analyze and make sense of it. Imagine this capability being available to all your employees who are out in the field servicing something. A wind turbine, for example. It’s not unrealistic to think that in a not-too-distant future, they could snap a picture of where they think the problem is and immediately tap into the shared knowledge of all employees.
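The picture upload currently lives in the ChatGPT apps, but to give you a feel for the wind-turbine scenario, here’s a rough sketch of what it could look like with image input through OpenAI’s API. The message format follows OpenAI’s vision-style content parts; the model name and the photo are placeholders of mine.

```python
# Rough sketch of the field-service scenario: a technician snaps a
# photo and asks what the problem might be. The image file and model
# name are placeholders; assumes an image-capable model via the API.
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("turbine_gearbox.jpg", "rb") as f:  # hypothetical photo
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "I think this gearbox is leaking. What should I check first?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```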
But that’s not all that has happened in the “AI starts seeing” space. Meta announced the new Meta Smart Glasses.
We’ve integrated Meta AI on Ray-Ban Meta smart glasses and optimized it for a hands-free, on-the-go experience. By saying “Hey Meta,” people can engage with Meta AI to spark creativity, get information, and control your glasses—just by using their voice.
That’s cool and dandy, but what they aren’t saying in the press release, though they did say it on stage, is that next year they’ll ship a software update to make the glasses multi-modal.
Starting next year, we’re going to be issuing a free software update to the glasses that makes them multi-modal. So the glasses are going to be able to understand what you’re looking at when you ask them questions. So if you want to know what the building is that you’re standing in front of, or if you want to translate a sign that’s in front of you to know what it’s saying, or if you need help fixing this sad leaky faucet, you can just talk to Meta AI and look at it, and it will walk you through, step by step, how to do it.
When I first listened to this, I had just gone through the Workday news above, and my head immediately raced to a future where everyone is wearing these glasses and we combine them with the data from Workday as an overlay.
“Bob is unhappy with his boss and feels emotionless.”
Imagine a future where AI-powered glasses can detect emotional cues from employees' facial expressions and voice tone. The data could provide managers with real-time insights into workers' sentiment and engagement levels. However, this also raises concerns about privacy and surveillance. Employees may feel uncomfortable if their bosses have constant access to their emotional states. There is a fine line between leveraging technology to improve understanding, and crossing over into intrusive monitoring.
These emerging technologies create new responsibilities for us HR leaders. We will need to take an active role in developing ethical policies on whether and how these tools are deployed. Clear guidelines must be established on informed consent, data access permissions, and preserving employee autonomy.
With a human-centric approach guided by ethics, these disruptive technologies could allow us in HR to take employee support and development to the next level. But without proper precautions, they run the risk of breeding mistrust and anxiety. We need a meaningful dialogue on designing an employee experience that weighs both human needs and technological potential.
The AI genie may be out of the bottle, but we have a vital role to play in steering the future course responsibly. Worker wellbeing should remain central. If used judiciously, immersive technologies could open up a new frontier for employee growth and satisfaction. But the risks can't be ignored.
Now is the time for HR to step up and develop the necessary guardrails - we have to adapt.