AI will not take over. OpenAI on the other hand…
I don’t think it will take over. I think idiots will deploy AI everywhere, and that will create systems that are fundamentally inhumane.
I mean more surveillance and more arbitrary “decisions” by opaque systems. Basically oppression through lack of oversight and control.
The thing is, AI doesn’t need to take over the world if the BiG tHiNkErS are so eager to replace humans regardless of its merit.
The thing is, if they’re wrong, their businesses will fail, and anyone who didn’t jump on the hype train and piss revenue away should end up with better financials.
Delicious either way.
People with no clue about AI are exactly why a dumb-as-a-brick LLM could very well end up destroying the world.
No, that scenario comes from AI’s use in eliminating opponents of fascism
It’s pretty funny that while everyone is whining about artist rights and making a huge fucking deal about delusional people who think they’ve “birthed” the first self-aware AI, Palantir is using our internet histories to compile a list of dissidents to target.
Screenshotting for my eventual ban.
Take over? Not llms. Destroy the world by accelerating climate change? Absolutely.
I get the spirit, but it’s capitalism that accelerates climate change. LLMs are just the new blockchain scapegoat.
You’re both right. Capitalism accelerates climate change, but so do the outrageous electricity requirements of LLMs.
Many human activities cause climate change. LLMs are a relatively new one, and a disproportionate energy user, so it’s fair enough to shout about it and try to minimise adoption. Things that are already entrenched like consumer culture or aviation will be harder and slower to undo.
Also, whether it’s right or not that capitalism is largely to blame, if you take that to mean that the only useful action against climate change is fighting against capitalism, or let yourself feel like you’ve ‘done your part’ by having anticapitalist opinions, I think that’s counterproductive.
If it weren’t for capitalism, those LLMs would have been designed with a lower climate impact from the get-go. But since that hurts the shareholders’ bottom line, they weren’t.
Don’t forget the damage to the creative industries
Yeah, all those poor scribes and monks who got put out of work…
It’s all about selling the ideology of AI, not the tools actually being useful.
A cat identifying as a dog is exactly the same amount of dog as all other dogs.
An AI that plays Elden Ring, I see.
They will destroy jobs first, then the rest will simply unravel.
AI does not exist. Large language models are not intelligent, they are language models.
This can’t be true… Businesses wouldn’t reshape their entire portfolios, spending billions of dollars on a technology with limited to no utility. Ridiculous.
Anyways, I got these tulip bulbs to sell, real cheap, just like give me your house or something.
Remember, investment in LLM infrastructure in the US is currently larger than consumer spending.
And they will cut interest rates soon, so expect the number to go up (the investment number, that is, not value).
Can confirm: I’m an electrical engineer working on a power substation supplying power to a future datacenter (not sure if it’s an AI project; there’s more than one). Let’s just say money is no issue; commissioning schedule and functionality are their priorities.
The phrase AI has never actually meant that, though? It’s just a machine that can take in information and make decisions. A thermostat is an AI. And not a fancy modern one either. I’m talking about an old bi-metallic strip in a plastic box. That’s how the phrase has always been used outside of sci-fi, where it’s usually used to talk about superintelligent general intelligences. The problem isn’t that people are calling LLMs AI. The problem is that the billionaires who run everything are too stupid to understand the difference.
I would argue that, prior to ChatGPT’s marketing, AI did mean that.
When talking about specific, non-general, techniques, it was called things like ML, etc.
After OpenAI co-opted “AI” to mean an LLM, people started using “AGI” to mean what “AI” used to mean.
That would be a deeply ahistorical argument.
https://en.wikipedia.org/wiki/AI_effect
AI is a very old field, and has always suffered from things being excluded from popsci as soon as they are achievable and commonplace. Path finding, OCR, chess engines and decision trees are all AI applications, as are machine learning and LLMs.
That Wikipedia article has a great line in it too
The Bulletin of the Atomic Scientists organization views the AI effect as a worldwide strategic military threat.[4] They point out that it obscures the fact that applications of AI had already found their way into both US and Soviet militaries during the Cold War.[4]
The discipline of Artificial Intelligence was founded in the 1950s. Some of the current vibe is probably due to the “Second AI winter” of the 90s, the last time calling things AI was dangerous to your funding.
To common people, perhaps, but never in the field itself; much simpler and dumber systems than LLMs were still called AI.
Does that mean that enemy AIs that choose a random position near them and find the shortest path to it are smarter than ChatGPT? They’ve been called AI for longer than I’ve been playing games with enemies.
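That kind of enemy “AI” really is just a handful of lines; here’s a minimal sketch (grid layout, function name, and random-goal behaviour are all illustrative assumptions, not taken from any actual game engine):

```python
from collections import deque
import random

def enemy_path(grid, start):
    """Classic game-enemy 'AI': pick a random walkable tile as a goal,
    then breadth-first search the shortest path to it.
    grid[r][c] == 0 means walkable; returns the path start -> goal."""
    open_tiles = [(r, c) for r, row in enumerate(grid)
                  for c, v in enumerate(row) if v == 0]
    goal = random.choice(open_tiles)

    frontier = deque([start])
    came_from = {start: None}  # remembers how we reached each tile
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            break
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)

    # walk back from goal to start to recover the shortest path
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return list(reversed(path))
```

Nobody would call BFS “intelligent” today, yet this is exactly the code that gets labelled AI on the box.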
You can also disprove the argument by just using duckduckgo and filtering from before OpenAI existed https://duckduckgo.com/?q="AI"&df=1990-01-01..2015-01-01&t=fpas&ia=web
Doom enemies had AI 30 years ago.
But those weren’t generated using machine learning, were they?
So? I don’t see how that’s relevant to the point that “AI” has been used for very simple decision algorithms for a long time, and it makes no sense not to use it for LLMs too.
People have called NPCs in video games “AI” for like, decades.
A thermostat is an algorithm. Maybe. It can be done mechanically. That’s not much of a decision: “is this number bigger?”
Literally everything we have ever made and called an AI is an algorithm. Just because we’ve made algorithms for making bigger, more complicated algorithms we don’t understand doesn’t mean it’s anything fundamentally different. Look at input, run numbers, give output. That’s all there has ever been. That’s how thermostats work, and it’s also how LLMs work. It’s only gotten more complicated. It has never actually changed.
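To make the thermostat point concrete, here’s a minimal sketch of the entire “decision” a bi-metallic thermostat makes (the setpoint, dead-band width, and function name are made-up illustrative values, not from any real device spec):

```python
def thermostat(temp_c, setpoint_c=20.0, hysteresis=0.5, heating=False):
    """A thermostat as a decision rule: look at input, compare
    numbers, give output. The dead band (hysteresis) just stops
    the heater from chattering on/off around the setpoint."""
    if temp_c < setpoint_c - hysteresis:
        return True   # too cold: heater on
    if temp_c > setpoint_c + hysteresis:
        return False  # too warm: heater off
    return heating    # inside the dead band: keep current state
```

One comparison plus a memory bit; the bi-metallic strip implements the same rule with bending metal instead of code.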
It’s pretty funny you think LLMs are the only implementation of machine learning
That’s not a dog that’s a salad
It’s not the “AI” that I’m worried about destroying the world. It’s the tech bros and CEOs trying to force it on us all that I’m worried about, because I don’t trust them to think things through or think ahead.
I’ve never really liked this meme. I quite dislike AI, but just because your NN sucks doesn’t mean NNs or AI in general is fundamentally poor.
I often write very poorly performing programs due to mistakes, lack of knowledge, or just general incompetence. That doesn’t mean all my programs perform poorly. It certainly doesn’t mean all your programs perform poorly.
“AI” sucks for a lot of reasons, but so does this image.
I’m not worried about AI taking over the world.
I’m worried that corporate interests are using AI as a mechanism to circumvent our already weak control of the world and enslave us, with AI used inappropriately in ways that further the divide between the wealthy and the poor.
I’m worried that these corporate interests are doing irreversible damage to the environment by consuming energy and resources that could be better spent on solving real problems like homelessness, joblessness, healthcare, and hunger.
I’m worried that my kids will grow up in a world where technology has become so toxic to intelligence that they’re living in a mashup of Fahrenheit 451 and 1984.
AI couldn’t find its way out of a wet paper sack. The maliciously negligent rich assholes who are forcing us to use AI when it’s a half-baked pyramid scheme filled with crypto-bros and fascist-supporting scumbags – those are the guys I’m concerned about.