• Doorknob@lemmy.world · ↑6 · 6 hours ago

    Who wants to give me a billion dollars to dig a hole? I’ll give you a billion to fill it back in, and we’ll both tell investors we posted a billion dollars in revenue.

  • gergo@lemmy.world · ↑7 ↓1 · 10 hours ago

    Just an exploitative market grab for early dominance. (Or: “grift,” lol.) They will make it back when all of us have no choice but to use ChatGPT for everything.

      • GreenKnight23@lemmy.world · ↑10 · 7 hours ago

        technically according to NSPM-7 any FOSS is terroristic by nature because it’s anticapitalist.

        that means if you have contributed to FOSS at any time, you are a terrorist. technically.

        • Stitch0815@feddit.org · ↑2 · 7 hours ago

          I know this is not a real discussion :D

          But I don’t think FOSS is inherently anticapitalist. It’s just not late stage capitalism. There are plenty of commercial FOSS projects.

          Sure, you could compile them from source or download someone’s executable. But companies in particular often want convenience, customer support, and LTS versions.

  • J52@lemmy.nz · ↑4 · 10 hours ago

    It’s not small change anymore. That’s what happens when you don’t listen to your customers.

    • finitebanjo@lemmy.world · ↑63 · 16 hours ago

      Well, actually, there is a long and rich history of companies operating at a loss on funds raised by selling shares to investors, and the process continues so long as new investors keep buying in, such that anybody selling out is covered by the new money, until enough people try to sell that the price starts to plunge (the collapse can be delayed by the company strategically buying back shares and occasionally splitting or reorganizing), meaning everyone gets their money back unless they sell too late.

      You know.

      A fucking Ponzi scheme.

    • Tiresia@slrpnk.net · ↑20 · 16 hours ago

      Oh honey, that hasn’t been true since 2008.

      The government will bail out companies that get too big to fail. So investors want to loan money to companies so that those companies become too big to fail, so that when those investors “collect on their debt with interest” the government pays them.

      They funded Uber, which lost 33 billion dollars over the course of 7 years before ever turning a profit, but by driving taxi companies out of business and lobbying that public transit is unnecessary, it has become an indispensable part of society, so investors will get their dues.

      They funded Elon Musk, whose companies are the primary means of communication between politicians and the public, are replacing NASA as the US government’s primary space launch provider for both civilian and military missions, and whose prestige got a bunch of governments to defund public transit to feed continued dependence on car companies. So investors will get their dues through military contracts and through being able to threaten politicians with a media blackout.

      And so they fund AI, which they’re trying to have replace so many essential functions that society can’t run without it, and which muddies the waters of anonymous interaction to the point that people have no choice but to rely only on information that has been vetted by institutions - usually corporations, like for-profit news outlets.

      The point of AI is not to make itself so desirable that people want to give AI companies money to have it in their life. The point of AI is to make people more dependent on AI and on other corporations that the AI company’s owners own.

    • kadu@scribe.disroot.org · ↑34 · 23 hours ago

      I wish. Even knowing it’s all a gigantic scam, they’ll first protect themselves before letting it burst and screw everybody else. The rich get a buffer period.

      • Honytawk@feddit.nl · ↑7 ↓30 · 23 hours ago

        Is it really a scam when it creates content?

        It may be slop to you, and it may not be useful for everything they market it as.

        But plenty of people find it useful, even if they use it for the wrong things.

        It is not like cryptocurrency, which is only used by people who want to get rich from it.

        • IronBird@lemmy.world · ↑2 · 10 hours ago

          Enough people use crypto now that it’s unlikely to crater entirely… unless Western governments finally kick out the neolib types and take back their countries from private equity/big business.

        • kadu@scribe.disroot.org · ↑13 ↓1 · 17 hours ago

          But plenty of people find it useful

          Plenty of people don’t properly wash their anuses either. Plenty of people think our planet is flat.

        • Leon@pawb.social · ↑16 ↓1 · 19 hours ago

          Is it really a scam when it creates content?

          No one is claiming that it doesn’t output stuff. The scam lies in the castles in the air these companies are selling us: ideas like how it’ll revolutionise the workplace, cure cancer, and bring about some kind of utopia. Like Tesla’s Full Self-Driving, these ideas will never manifest.

          We’re still at a stage where companies are throwing the slop at the wall to see what sticks, but for every mediocre success there’s a bunch of stories that indicate that it’s just costing money and bringing nothing to the table. At some point, the fascination for this novel-seeming technology will wear out, and that’s when the castle comes crashing down on us. At that point, the fat cats on top will have cashed out with what they can and us normal people will be forced to carry the consequences.

          • Goodeye8@piefed.social · ↑8 · 18 hours ago

            Exactly. Just like in the dotcom bubble, the websites and web services aren’t the scam; the promise of them being some magical solution to everything is the scam.

            • ErmahgherdDavid@lemmy.dbzer0.com · ↑3 · 9 hours ago

              Unlike the dotcom bubble, another big aspect of this one is the unit cost of running the models.

              Traditional web applications scale really well. The incremental cost of adding a new user to your app is basically nothing. Fractions of a cent. With LLMs, scaling is linear. Each machine can only handle a few hundred users and they’re expensive to run:

              Big, beefy GPUs are required for inference as well as training, and they need a lot of VRAM. Your typical home gaming GPU might have 16 GB of VRAM, or 32 GB if you go high end and spend $2,500 on it (just the GPU, not the whole PC). Frontier models need something like 128 GB of VRAM to run, and GPUs manufactured for data centre use cost far more; a state-of-the-art Nvidia H200 costs around $32k. The servers that can host one of these big frontier models cost, at best, $20 an hour to run and can only handle a handful of user requests at a time, so you need to scale linearly as your subscriber count increases. If you’re charging $20 a month for access to your model, you are burning a user’s monthly subscription every hour for each of these monster servers you have turned on. And that’s generous, since it assumes you’re not paying the “on-demand” price of $60/hr.
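
              A rough back-of-envelope sketch of that arithmetic, using the illustrative figures above (not real vendor pricing):

              ```python
              # Back-of-envelope unit economics for hosted LLM inference,
              # using the hypothetical figures from the comment above.

              server_cost_per_hour = 20.0    # assumed reserved price for one frontier-model inference server
              subscription_per_month = 20.0  # typical consumer plan
              hours_per_month = 24 * 30

              monthly_cost_per_server = server_cost_per_hour * hours_per_month             # $14,400
              subs_burned_per_hour = server_cost_per_hour / subscription_per_month         # 1.0 subscriptions/hour
              breakeven_subs_per_server = monthly_cost_per_server / subscription_per_month  # 720 subscribers

              print(f"Monthly cost of one server: ${monthly_cost_per_server:,.0f}")
              print(f"Subscriptions burned per hour: {subs_burned_per_hour:.1f}")
              print(f"Subscribers needed per server to break even: {breakeven_subs_per_server:.0f}")
              ```

              Since one of these servers only handles a handful of concurrent requests, spreading ~720 subscriptions across it only works if usage stays very light, which is the point being made above.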

              Sam Altman famously said OpenAI are losing money on their $200/mo subscriptions.

              If/when there is a market correction, a huge factor in how much interest continues (as with the internet after the dotcom crash) will be whether the quality of output from these models justifies their true, unsubsidized running costs. I do think local models, powered by things like llama.cpp and ollama and able to run on high-end gaming rigs and MacBooks, might be a possible direction. Currently, though, you can’t get the same quality from these small, local LLMs as from the state-of-the-art models.

        • SaveTheTuaHawk@lemmy.ca · ↑35 ↓1 · edited · 19 hours ago

          Is it really a scam when it creates content?

          I create content in a ceramic bowl twice a day. Give me a billion.

          The scam is that the business plan is not feasible. Hundreds of tech companies have died because some cool idea could never make real money.

          And this is the finance model:

        • suicidaleggroll@lemmy.world · ↑30 ↓2 · edited · 23 hours ago

          It’s a scam because the prices they’re charging right now don’t reflect the actual costs. AI companies are trying to get people and companies hooked on it so that once they crank the prices up by 10x to start turning a profit, they’ll be able to maintain some semblance of a customer base. If they were charging the real prices a year ago, the AI bubble would have never reached the levels it has, and these companies wouldn’t be worth what they are now. It’s all propped up on a lie.

        • reksas@sopuli.xyz · ↑17 ↓1 · 21 hours ago

          Creates content? Out of what? I don’t deny that there are some good use cases for AI, but ultimately it’s all built on the backs of people who have actually contributed to this world. If it were completely non-profit it would be more okay, but as it currently stands, AI is a tool of exploitation and proof that the law protects only the rich and binds only us.

        • AnAverageSnoot@lemmy.ca · ↑5 ↓1 · 18 hours ago

          It’s not that it’s not useful for the end customer. It’s more that investors have been overpromised on the value and returns from AI. There have been no returns yet, and consumers are finding it less useful than these companies intended. The scam is on the investors, not the end user.

          • badgermurphy@lemmy.world · ↑4 · 16 hours ago

            I think it is that it’s not useful for the end customer. Every anecdote I’ve heard about LLMs helping someone with their work was heavily qualified with special cases, circumstances, and narrow use cases, describing a process made more complex by adding the LLM, which then helped them eliminate nearly as much complication and effort as it added. And these are the stories from the believers.

            Now add in the fact that almost nobody is on a paid service tier outside of work, and all the paid tiers are currently heavily subsidized. If it has questionable utility at today’s prices, the value will only decline from there as prices rise to cover the real costs to run these things.

        • Danitos@reddthat.com · ↑6 ↓2 · 22 hours ago

          I agree with you. Not as useful as tech bros claim, but not as useless as other people claim, either. Definitely not a trillion-dollar thing, though.

  • oakey66@lemmy.world · ↑35 · 23 hours ago

    Wow. Glad they just converted to a for-profit entity! Can’t wait for them to unleash all this success onto the general financial market.

    • SugarCatDestroyer@lemmy.world · ↑15 ↓1 · 21 hours ago

      I agree, and essentially they took slightly reworked old neural-network technologies and scaled up their power with data centers.

  • AnAverageSnoot@lemmy.ca · ↑230 ↓2 · 1 day ago

    AI is funded solely by the sunk cost fallacy at this point. I wonder how long it will be before investments start getting pulled back because of a lack of ROI. I can already feel consumer sentiment turning negative recently, both towards AI itself and towards it being pushed into everything.

    • Taldan@lemmy.world · ↑13 · 20 hours ago

      I wouldn’t have a problem if they were actually investing the money in something useful like R&D

      Nearly all the investment is in data centers. Their approach for the past 2 years seems to be just throwing more hardware at existing approaches, which is a really great way to burn an absurd amount of money for little to nothing in return

      • brucethemoose@lemmy.world · ↑4 · edited · 16 hours ago

        It’s very corporate, isn’t it? “Just keep scaling what we have.”

        That being said, a lot of innovation is happening but goes unused. It’s incredible how many promising papers come out and get completely passed over by Big Tech AI, as if nothing matters unless it’s developed in-house.

        The Chinese firms are picking up some research in bigger models, at least, but are kinda falling into local maxima too.

    • katy ✨@piefed.blahaj.zone · ↑26 ↓1 · 1 day ago

      AI is funded solely by sunk cost fallacy at this point.

      and the us economy and gdp rely solely on ai. make of that what you will.

        • merc@sh.itjust.works · ↑1 · 9 hours ago

          That’s the only reason I don’t think it will pop in the next 6 months or so. Even Biden or Obama would have stepped in to try to prevent the economy from crashing. But there’s the Trump factor. First of all, some of his biggest backers are from the AI “industry”. His VP is tied to Peter Thiel, and his biggest donors are crypto and AI bros. The vast majority of his own personal money is tied up in the current crypto bubble. In addition, he’s obviously so easily bribed. Even if he wasn’t interested in intervening otherwise, he could easily be bribed to intervene.

          Because of Trump, and the fact that the house, senate and judiciary are all Trump lackeys, I think the bubble will survive until at least the 2026 midterms. If the Democrats take back control of the House and Senate they could take control over spending from Trump, which might mean the bubble is allowed to pop. But, I wouldn’t be surprised to see Trump hand over literal trillions in taxpayer dollars to keep the bubble inflated.

    • SSUPII@sopuli.xyz · ↑39 ↓5 · edited · 1 day ago

      Investment really goes into training models for ever more minuscule gains. I feel like the current options are enough to satisfy whoever is interested in such services; what is really lacking now is more hardware dedicated to single-user sessions, to improve the quality of output with the current models.

      But I really want to see more development of offline services, as right now it is done only by hobbyists and, occasionally, by large companies with a little dripfeed (Facebook Llama, the original DeepSeek model [the latter being pretty much useless, as no one has the hardware to run it]).

      I remember watching the Samsung Galaxy Fold 7 (“the first AI phone”, unironic cit.) presentation and listening to them talk about all the AI features instead of the phone’s real capabilities. I thought, “All of this is offline, right? It’s a powerful smartphone… it makes sense to run local models for these tasks.” But it later became abundantly clear that the entire presentation was just repackaged, always-online Gemini on $2,000 of hardware.

      • mcv@lemmy.zip · ↑37 · 1 day ago

        They’re investing this much because they honestly seem to think they’re on the cusp of super intelligent AGI. They’re not, but they really seem to think they are, and that seems to justify these insane investments.

        But all they’re really doing is the same thing as before but even bigger. It’s not going to work. It’s only going to make things even more expensive.

        I use Copilot and Claude at work, and while it’s really impressive at what it can do, it’s also really stupid and requires a lot of hand holding. It’s not on the brink of AGI super intelligence. Not even close. Maybe we’ll get there some day, but not before all these companies are bankrupt.

          • merc@sh.itjust.works · ↑1 · 10 hours ago

            Comparing the coming crash to the dot com crash is like comparing a rough landing to the various crashes on Sept 11th, 2001.

            The dot com crash was mostly isolated in high tech. Because it was led by the Japanese economy starting to fail, and followed by the Sept 11th attacks, the combined crashes resulted in the S&P 500 falling by about 50% from its peak to the bottom, but it was already back up to its peak value in 2007, and then the global financial crisis hit.

            This bubble is much bigger. Some analysts say the AI bubble is 17x the size of the Dot Com bubble, and 4x the size of the 2007/08 real estate bubble. AI stocks were 40% of all US GDP growth in 2025, and 80% of all growth in US stocks.

            Nvidia’s stock price has gone up 1700% in just 2 years. OpenAI is planning to go public on a valuation of $1 trillion despite losing vast amounts of money. Just 7 US tech companies make up 36% of the entire US stock market, and they’re all heavily betting on AI.

            At least when the dot com bubble popped, it left some useful things behind, like huge amounts of dark fibre. But, the AI processors are so specialized they can’t be used for much of anything else. They also wear out, sometimes within months. The datacenter buildings themselves can maybe be repurposed to being general purpose datacenters, but, a lot of the contents will have to be thrown out.

            • bobo@lemmy.ml · ↑1 ↓1 · 8 hours ago

              Have you seen any comparisons to the previous AI bubbles and winters?

      • artyom@piefed.social · ↑26 · 1 day ago

        I’ve known it was a bubble since Computex January 2024, when Derb8uer showed an “AI PC case”. He asked, “What’s AI about this PC case?” and they replied that you could put an AI PC inside it.

        • SSUPII@sopuli.xyz · ↑2 ↓9 · 1 day ago

          You’re talking more about the term being used everywhere out of context.

          • artyom@piefed.social · ↑24 · 1 day ago

            I am talking about companies slapping “AI” on their products and systems and raising their value, in the same way that companies in the 90s slapped “dotcom” on their branding and raised their value.

      • Taldan@lemmy.world · ↑3 · 20 hours ago

        what really is lacking is now more hardware dedicated to single user sessions to improve quality of output with the current models

        That is the exact opposite of my opinion. They’re already throwing tons of compute at the current models, and it has produced little improvement. The vast majority of investment is in compute hardware rather than R&D. They need more R&D to improve the underlying models; more hardware isn’t going to get the significant gains we need.

      • ferrule@sh.itjust.works · ↑4 · 1 day ago

        The problem is that there’s little continuous cash flow from on-prem personal services. Look at Samsung’s home automation: it’s nearly all online features, and when the internet is out you are SOL.

        To have your own GitHub Copilot in a device with the size and power usage of a Raspberry Pi would be amazing. But then they wouldn’t get subscriptions.

      • humanspiral@lemmy.ca · ↑3 ↓2 · 1 day ago

        more development on offline services

        There is absolutely massive development of open-weight models that can be used offline/privately. Minimax M2, the most recent one, has benchmark scores comparable to the private US megatech models at 1/12th the cost, and at higher token throughput. Qwen, GLM, and DeepSeek have models comparable to M2, plus smaller models that run easily on very modest hardware (a minimal sketch of this follows below).

        The closed megatech datacenter AI strategy is a partnership with the US government/military for oppressive control of humanity. Spending 12x more per token while empowering big tech and the US empire to steal from and oppress you is not worth a small fractional improvement in benchmarks/quality.
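
        For illustration only, a minimal sketch of what running an open-weight model offline can look like, assuming the ollama runtime and its Python client are installed and a small open-weight model has already been pulled (the model name below is just an example, not a recommendation):

        ```python
        # Minimal sketch: querying a locally hosted open-weight model through the
        # ollama Python client. Assumes the ollama server is running locally and
        # the model named below has already been pulled; the name is illustrative.
        import ollama

        response = ollama.chat(
            model="qwen2.5:7b",  # example open-weight model small enough for modest hardware
            messages=[{"role": "user", "content": "Summarize the trade-offs of running LLMs locally."}],
        )
        print(response["message"]["content"])
        ```

        Nothing in that snippet talks to a remote API; the request and response stay on the local machine, which is the privacy/cost point being made above.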

    • jordanlund@lemmy.world · ↑21 ↓1 · 1 day ago

      One of our biggest bookstores contracted with a local artist for some merch. That artist used AI with predictable results. Now everyone involved is getting raked over the coals for it.

      No surprise, they just announced a 4th round of layoffs too. 😟

      https://lithub.com/everything-you-need-to-know-about-the-powells-ai-slop-snafu-and-what-we-can-all-learn-from-it/

      https://www.koin.com/news/portland/powells-layoffs-employees-10292025/

    • Strider@lemmy.world · ↑8 · edited · 1 day ago

      Why do you think AI is pushed so hard?

      Everyone involved is aware it has to turn out useful; there’s too much money at stake.

      Still, the powers that be will do everything to avoid a hard crash, which would be so richly deserved.

    • gian @lemmy.grys.it · ↑11 · 1 day ago

      I wonder how long it will be before investments start getting pulled back because of a lack of ROI.

      Just wait for the next hot thing to come out

      • merc@sh.itjust.works · ↑1 · 9 hours ago

        Uber used accounting tricks to hide their true losses for years. They’ve only recently managed to become profitable by squeezing both drivers and passengers at the same time. Is that sustainable? Almost certainly not, but, for the moment, they’re getting away with it.

      • Jhex@lemmy.world · ↑7 · 24 hours ago

        this is not a bad analogy, but you are off by orders of magnitude

        more importantly, both Uber and Amazon always had a path to profitability (Amazon specifically was already making tons of money on AWS long before the storefront made money). AI has already been shown not to have a path to profitability; whatever little value companies around the world have been able to extract cannot pay for the cost of producing it.

        think of it this way:

        You produce a little car that can drive 2 people and some bags around; it costs you $1,000 to make and you sell it for $3,000, which a ton of people can afford… you have a path to profitability.

        I enter the market with a car that can carry 20 people, plus full-on luggage for all, and it moves twice as fast… but in practice I can only really move 3 people and often take them the wrong way; also, the luggage was a complete lie and I can only allow passengers with their purses… also, my car costs $50,000 to make, so I would have to sell it for $70,000, and nobody would pay that when they could get 20 of your cars for less… also also, I promised the people making some parts of my car that I would invest 7 kajillion in their companies somehow.

        Which company would succeed? Yours or mine?