Welcome to FullStack HR, and an extra welcome to the 33 people who have signed up since the last edition.
If you haven’t yet subscribed, join the 9300+ smart, curious and like-minded HR people by subscribing here:
Happy Wednesday!
Let’s start the newsletter with a question!
There will be no weekly update this week, as I'm taking a couple of days off in connection with Ascension Day.
Quick PSA: If you want me to terrify, I mean, inspire, your team about AI at your fall kick-off, now's the time to book.
My calendar is filling up faster than Claude burns through my token limit. I'm happy to join you digitally or in person; both work for me.
I did a 3-hour deep dive with 50 CEOs the other week about how AI will affect their organizations, and the feedback has been outstanding.
(Though I can't tell if they were inspired or just needed a drink afterwards. Maybe both?)
Now on to the update.
I briefly mentioned this in the delayed update I sent out on Saturday, but I think it's worth talking more about the updates that Google and Anthropic launched last week.
For the past year or so, I've been speaking about how the ability to create almost anything is rapidly changing life in general, and work-life in particular, over the long term. It's a democratization of the creation process. It used to be that you had an idea, and then you either needed to be gifted with talent or have the grit to grind and learn a skill. If you wanted to become a painter, you had to practice painting. If you wanted to become a coder, you had to grind to learn how to code. That was true for everything.
I would argue that what we're seeing released from the labs now diminishes the importance of this ability, whether you call it grit, or something you were born with. Most people can now, with the help of these tools, create almost anything just by talking or typing to a computer. And yes, I know these models are far from perfect.
And yes, I also know that even though they can create marvelous art pieces that mimic the style of, let's say, Salvador Dalí, most people won't pay the same amount for an AI-generated Dalí painting as they would for a real painting by him. But that's not my point here.
My point is: what will happen to work when everyone can create decent stuff in almost any area that's somehow digitized? Take the release of Veo 3. (Since I'm in the EU, I've yet to try it, but there are so many people showcasing it right now.)
I know for a fact that creating videos is hard. I've been creating YouTube videos on and off since 2017, and it requires a tremendous amount of effort, even for seemingly basic videos. Not to mention creating learning curricula in the corporate world: scripts need to be written, you need a place to film, and you usually need lighting, a sound engineer, editing afterwards, and so on.
It's not hard to see how Veo 3 and similar models will flip this on its head.
Not only can Veo 3 now generate video of a quality on par with what a human would create, it can also create sound: dialogue, effects, and so on.
Take a PowerPoint, send it to Gemini, let Gemini know the audience and the surroundings you want your video to take place in, and voilà, a couple of minutes later, you have a customized learning video for your employees. (Yes, this still assumes that we'll have human employees who need to watch training videos from time to time.)
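For the technically curious, here's a rough sketch of what the prompt-to-video part of that workflow could look like with Google's google-genai Python SDK. The model id, the file names, and the idea that you've already distilled your PowerPoint into a short script (for example, via an ordinary Gemini text prompt) are all assumptions on my part, not a recipe from Google.

```python
# Rough sketch: prompt-to-training-video via Google's google-genai SDK.
# Assumes a Veo 3 preview model id and that the PowerPoint has already
# been turned into a short script with a regular Gemini text prompt.
import time
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

script = (
    "A friendly instructor in a bright office walks new hires through "
    "our onboarding checklist, calm tone, natural dialogue."
)

# Kick off video generation; this returns a long-running operation.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed model id; check current docs
    prompt=script,
)

# Poll until the video is ready, then download and save it.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("onboarding_clip.mp4")
```

Treat it as an illustration of how little glue is needed between "idea" and "finished training clip," not as production code.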
Are we fully there yet? We are not.
But consider that it's only been two years since Will Smith ate spaghetti like this:
Two years. One can only imagine what will happen in the next two.
It wasn't only Google that released models last week. Anthropic updated Claude from 3.7 to 4. What's been most debated isn't how good the models are (I've only used them for limited testing. They seem great, but I keep running out of tokens, even though I'm paying for it), but what they did when faced with a hypothetical scenario of being shut down by an engineer: Claude tried to blackmail the engineer into keeping it online. The scenario isn't real, it was created as part of a test protocol, but it offers a glimpse into how these models are "thinking." (Hats off to Anthropic for choosing to reveal this and share it openly. Take notes, OpenAI, Google, and Microsoft: this is how you do testing.)
That said, the models are outperforming the competition, especially on coding, which once again raises the question: if skill is becoming a commodity, democratized for everyone, what do we make of this? I think we should revisit what skills are and what role they play in organizations.
For a long time, I've been talking about how AI will impact the career ladder, and I do think it's time to rework how our career ladders are set up. Years of experience won't be relevant for much longer when you have access to the collected knowledge of almost all of humanity through these models. Instead, it's about extracting and validating that information and (for now) pairing it with internal information that isn't publicly available.
Veo 3 might be a video-generating model and Claude Sonnet 4 might be a programming master, but make no mistake, the models are coming for all of our skills.
And that will impact your people and your organization.
Which brings us to the question no one wants to ask: How long before your organization's competitive advantage becomes just another prompt away?
FullStack HR was brought to you by… OneLab - AI-powered insights for better workplace health.
Creating healthy and health-promoting workplaces is not only good for the individual, it also strengthens the company and society. OneLab helps you take the next step in that work – with the power of AI.
OneLab’s platform provides HR and managers with intelligent insights that make a difference. OneLab's AI assistant analyzes data from health examinations, employee surveys, and sick leave, and provides tailored recommendations based on your needs.
It has full knowledge of the ‘Arbetsmiljöverkets författningssamling’ (the Swedish Work Environment Authority's statute book) and is always available to support your work environment and health efforts!
Every health examination in our platform becomes an anonymized, data-driven basis for decision-making that helps you identify health risks in time and prevent ill health before it arises.
Companies like Deloitte, Nasdaq, Electrolux, Länsförsäkringar, and Warner Bros already trust OneLab – you should too.