• sugar_in_your_tea@sh.itjust.works · 2 days ago

    It doesn’t have to, you can run LLMs locally. We do at my org, and we only have a few dozen people using it, and it’s running on relatively modest hardware (Mac Mini for smaller models, Mac Studio for larger models).
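
    For anyone curious what “running locally” can look like, here’s a minimal sketch. It assumes Ollama as the serving layer, which isn’t something the comment specifies; llama.cpp’s built-in server or LM Studio expose similar HTTP APIs.

    ```python
    # Minimal sketch of querying a self-hosted model over Ollama's HTTP API.
    # Assumptions: Ollama on its default port and a model tag like "llama3.1:8b"
    # already pulled -- neither detail comes from the comment above.
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask_local_model(prompt: str, model: str = "llama3.1:8b") -> str:
        """Send a prompt to the local server and return the full completion."""
        resp = requests.post(
            OLLAMA_URL,
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        print(ask_local_model("Summarize the tradeoffs of running LLMs on-prem."))
    ```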

    • squaresinger@lemmy.world · 1 day ago

      Yeah, shitty toy ones. This is about productivity, not a hobby. And even the genuinely state-of-the-art models haven’t shown an actual productivity advantage.

      • sugar_in_your_tea@sh.itjust.works · 24 hours ago

        Our self-hosted ones are quite good and get the job done. We use them a lot for research, and they seem to do a better job than most search engines. We also link them to our internal docs, and they work pretty well for that too.
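
        As a hedged sketch of the “link them to internal docs” part: embed each doc, then prepend the closest match to the prompt (basic retrieval). The endpoint and embedding model below are assumptions, not details from the comment.

        ```python
        # Basic retrieval sketch: pick the internal doc closest to a question by
        # cosine similarity of embeddings. Endpoint and model name are assumed;
        # a real setup would index doc embeddings once rather than per query.
        import requests

        EMBED_URL = "http://localhost:11434/api/embeddings"  # Ollama's embeddings API

        def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
            resp = requests.post(
                EMBED_URL, json={"model": model, "prompt": text}, timeout=60
            )
            resp.raise_for_status()
            return resp.json()["embedding"]

        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = sum(x * x for x in a) ** 0.5
            nb = sum(y * y for y in b) ** 0.5
            return dot / (na * nb)

        def best_doc(question: str, docs: dict[str, str]) -> str:
            """Return the name of the doc most similar to the question."""
            q = embed(question)
            return max(docs, key=lambda name: cosine(q, embed(docs[name])))
        ```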

        If you run a smaller model at home because you have limited RAM, then yes, you’ll get less capable results. We can’t run the top models on our hardware, but we can run much larger models than most hobbyists. We’ve compared ours against the larger commercial models, and they hold up well, if a little slowly.