So the report itself argues there is a need for better data, and it seems fairly level headed, but…
…what’s with people being mad about it?
I say this a lot, but there's a lot of weird anti-hype going around where people want this AI stuff to work better than it does so it can be worse than it is, and it often confuses me. The takeaway here is that most jobs don't seem to be changing that much so far, if you look at the labor market in aggregate. Which is… fine? It's not that unexpected? The AI shills were selling the idea that entire industries would be replaced by AI overnight, and most sensible people either didn't buy it or argued that jobs would shift toward AI-wrangler tasks, because this stuff wasn't going to fully automate most work in ways that weren't already available.
Which seems to be most of what's going on. AI art is 100% not production-ready out of the gate, AI text seems to be a bit of a wash in terms of saving time for programmers, and even in more obvious industries like customer service we already had a bunch of bots and automation in place.
So what’s all the anger? Did people want this to be worse? Do they just want to vibe with the economy being bad in a way they can pin on something they already don’t like and maybe politics is too heavy now? What’s going on there?
It's a boiling-frog thing. AI and LLMs are shoved in our faces everywhere, and it's harder every day to opt out. Job boards are flooded with postings for human-in-the-loop AI training or with AI experience requirements. AI-generated text, images, and video are muddying an already muddled information space. The models also draw an astronomical amount of energy, which is detrimental to the global ecosystem. Meanwhile costs are going up, it's borderline impossible to get a job, and people are scared this automation will push them out of employment without generating new jobs, especially if art and entertainment are taken over by gen AI. People are saying "I'm being boiled alive," but by the time there's enough data to validate that, we'll already be stew.
The way information is presented matters too. When articles circulate, they often get slanted and summarized (or people just read the headline and make assumptions). Key information gets tossed aside in favor of easy talking points that support whichever narrative, and the people affected feel unseen and unheard.
There’s a lot going on and it isn’t just “AI bad”
Yeah, but… this isn’t that.
You’re literally saying “well, anecdotal impressions say this, so I refute this study that says something else”.
We don’t like that. That’s not a thing we like to do.
And for the record, as these things go, the article linked here is pretty good. I’ve seen more than one worse example of a study being reported in the press today.
They provide a neutral headline that conveys the takeaway of the study, they give context about companies citing AI in layoffs, they link to the full study, and they point to a separate study that yields different, seemingly contradictory results.
I mean, this is about as close to a best-case scenario for reporting on a study as you can get in the mainstream press. If nothing else, kudos to The Register. The bar was low, but they went for a personal best anyway.
Man, the problem with giving up all the wonky fashy social media is that when you're in an echo chamber, all the weird misinformation and emotion-driven politics come from inside the house. It's been a particularly rough day for politically-adjacent but epistemologically depressing posts.
Thank you for this counter-weight!
I love this and I’m stealing it.
Yup, not contesting the article itself, just offering some explanation for all the anger you were wondering about.
Anger feels good. Especially anger that is socially validated. Being part of an angry mob means you get to feel righteous anger and not fear negative repercussions because everyone’s supporting you and providing cover for your bad behaviour.
And social media like this, where you can be an anonymous member of an angry mob? Candy for the human psyche.