• tekato@lemmy.world · 2 days ago

    I don’t see why a government would need a chatbot. Anyway, chances are that half the staff was already using some form of LLM before this trial.

      • Treczoks@lemmy.world · 2 days ago

        The point is that this is all happening in the cloud, probably one located in the US. Not a good place for a non-US government to send potentially confidential or even secret data.

        • sugar_in_your_tea@sh.itjust.works · 2 days ago

          It doesn’t have to; you can run LLMs locally. We do at my org, and we only have a few dozen people using it, and it’s running on relatively modest hardware (Mac Mini for smaller models, Mac Studio for larger models).
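A local setup like the one described can be as simple as an HTTP call to a model server on the same machine, so no data ever leaves the building. A minimal sketch, assuming an Ollama-style server on localhost:11434; the model name and prompt are purely illustrative:

```python
import json
import urllib.request

def build_request(model: str, prompt: str) -> dict:
    # Payload shape follows the Ollama /api/generate endpoint;
    # stream=False asks for a single complete response.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(prompt: str, model: str = "llama3.2",
                  url: str = "http://localhost:11434/api/generate") -> str:
    # Send the prompt to a locally hosted model over the loopback interface.
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(ask_local_llm("Summarize this memo in one sentence."))
    except OSError:
        # No local model server running; the request payload still builds.
        print(build_request("llama3.2", "demo"))
```

Pointing staff tooling at an endpoint like this instead of a commercial API is what keeps confidential material off third-party clouds.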

          • squaresinger@lemmy.world · 1 day ago

            Yeah, shitty toy ones. This here is about productivity, not about a hobby. And not even real state-of-the-art models were able to actually give a productivity advantage.

            • sugar_in_your_tea@sh.itjust.works · 24 hours ago

              Our self-hosted ones are quite good and get the job done. We use them a lot for research, and they seem to do a better job than most search engines. We also link them to internal docs, and they work pretty well for that too.

              If you run a smaller model at home because you have limited RAM, yeah, you’ll have less capable models. We can’t run the top models on our hardware, but we can run much larger models than most hobbyists. We’ve compared against the larger commercial models, and ours work well, if a little slowly.
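The "link it to internal docs" part is usually some form of retrieval: find the document most relevant to the question and feed it to the model as context. The commenter doesn't say how theirs works; this toy sketch uses bag-of-words cosine similarity purely for illustration (real setups would use a local embedding model):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding": token counts, lowercased.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_doc(query: str, docs: list) -> str:
    # Return the doc most similar to the query; its text would be
    # prepended to the LLM prompt as context before answering.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = ["vacation policy: 25 days per year",
        "expense policy: submit receipts within 30 days"]
print(top_doc("how many vacation days do I get", docs))
# → vacation policy: 25 days per year
```

Because both retrieval and generation run on local hardware, the internal docs never leave the organization either.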