• evo@sh.itjust.works
      1 year ago

      I can’t find a single production app that uses MLC LLM, because of the reasons I listed earlier (like multi-GB models that aren’t garbage).

      Qualcomm’s announcement is a tech demo, and they promised to actually ship it next year…

      • sciencesebi@feddit.ro
        1 year ago

        Who said anything about production or non-garbage? We’re not talking about quality of responses or adoption. You can use DistilRoBERTa for all I give a fuck. We’re talking about whether they’re the first. They’re not.

        Are they the first to embed an LLM in an OS? Yes. A model with over x Bn params? Maybe, probably.

        But they ARE NOT the first to deploy gen AI on mobile.

        • evo@sh.itjust.works
          1 year ago

          You’re just moving the goalposts. I ran an LLM on-device in an Android app I built a month ago. Does that make me the first to do it? No. They are the first to production with an actual product.