I refuse to sit here and pretend that any of this matters. OpenAI and Anthropic are not innovators, and are antithetical to the spirit of Silicon Valley. They are management consultants dressed as founders, cynical con artists raising money for products that will never exist while peddling software that destroys our planet and diverts attention and capital away from things that might solve real problems.

I’m tired of the delusion. I’m tired of being forced to take these men seriously. I’m tired of being told by the media and investors that these men are building the future when the only things they build are mediocre and expensive. There is no joy here, no mystery, no magic, no problems solved, no lives saved, and very few lives changed other than new people added to Forbes’ Midas list.

None of this is powerful, or impressive, other than in how big a con it’s become. Look at the products and the actual outputs and tell me — does any of this actually feel like the future? Isn’t it kind of weird that the big, scary threats they’ve made about how AI will take our jobs never seem to translate to an actual product? Isn’t it strange that despite all of their money and power they’ve yet to make anything truly useful?

My heart darkens, albeit briefly, when I think of how cynical all of this is. Corporations are building products that don’t really do much, selling them on the idea that one day they might, peddled by reporters who want to believe their narratives — and in some cases actively champion them. The damage will be tens of thousands of people fired, long-term environmental and infrastructural chaos, and a profound depression in Silicon Valley that I believe will dwarf the dot-com bust.

And when this all falls apart — and I believe it will — there will be a very public reckoning for the tech industry.

  • Greg Clarke@lemmy.ca · 14 hours ago

    But I don’t think it’s the best option if you consider everyone involved.

    Can you expand on this? Do you mean from an environmental perspective because of the resource usage, a social perspective because of job losses, and/or other groups being disadvantaged because of limited access to these tools?

    • sem@lemmy.blahaj.zone · 9 hours ago

      Basically, an LLM may make an individual’s job easier (for instance, someone can produce a meeting summary with less effort), but it yields worse results if you consider everyone affected by the work product, such as people whose views are underrepresented in the summary. Or, if you’re using it to categorize text, you can’t find out why it’s producing incorrect results and improve it the way you could with other machine learning techniques. I think Emily Bender does a better job explaining it than I can:

      https://m.youtube.com/watch?v=3Ul_bGiUH4M&t=36m35s

      Check out the part where she talks about the problems with relying on LLMs to generate meeting summaries and with using them to classify customer support calls as “resolved” or “not resolved”. I tried to link close to that second part, since the video is long.
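      The contrast sem draws, between an opaque LLM and classic techniques whose errors can be traced and corrected, can be sketched with a toy example (a hypothetical illustration, not anything from the video): a Naive Bayes text classifier where every word’s contribution to a prediction is a number you can read off and debug.

```python
from collections import Counter
import math

class InspectableNB:
    """Toy Naive Bayes text classifier (hypothetical example).

    Unlike an LLM, every prediction decomposes into per-word weights,
    so a wrong label can be traced to the exact words that caused it.
    """

    def fit(self, texts, labels):
        # Count word occurrences per label.
        self.counts = {label: Counter() for label in set(labels)}
        for text, label in zip(texts, labels):
            self.counts[label].update(text.lower().split())
        # Vocabulary size, used for Laplace smoothing.
        self.vocab = len(set().union(*self.counts.values()))
        return self

    def word_weight(self, word, label):
        # Laplace-smoothed log P(word | label): the inspectable part.
        c = self.counts[label]
        return math.log((c[word] + 1) / (sum(c.values()) + self.vocab))

    def explain(self, text):
        # Per-label, per-word contributions for debugging a prediction.
        return {
            label: {w: self.word_weight(w, label) for w in text.lower().split()}
            for label in self.counts
        }

    def predict(self, text):
        words = text.lower().split()
        scores = {
            label: sum(self.word_weight(w, label) for w in words)
            for label in self.counts
        }
        return max(scores, key=scores.get)
```

      With a model like this, if a support call is mislabeled “resolved”, `explain()` shows which words pushed it there, and retraining with corrected examples shifts those exact weights; with an LLM classifier there is no comparable handle to grab.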

      • Greg Clarke@lemmy.ca · 9 hours ago

        I agree, and I think this comes back to the execution of the technology as opposed to the technology itself. For context, I work as an ML engineer and was concerned about bias in AI long before ChatGPT. I’m interested in other folks’ perspectives on this technology. The hype and spin from tech companies is a frustrating distraction from the real benefits and risks of AI.