As Snowden revealed, the video and audio recording capabilities of your devices are NSA spying vectors. OSS/Linux is a safeguard against such capabilities. The massive datacenter investments in the US will be used to classify us all into a patriotic (for Israel)/oligarchist social credit score; every mega tech company can increase profits through NSA cooperation, and all are legally obligated to comply with government orders.

Speech-to-text and speech automation are useful tech, but an always-listening device run by state-sponsored actors is a non-NSA-targeted path to sweeping future social-credit classification of your past life.

Some small open models that can be used for speech to text: https://modal.com/blog/open-source-stt
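As a minimal sketch of running one of those models fully locally (so audio never leaves the machine), assuming the faster-whisper package is installed; the model size and audio path here are placeholders:

```python
"""Local speech-to-text sketch: transcribe a WAV file on-device.

Assumes `pip install faster-whisper`; "tiny.en" and "recording.wav"
are illustrative placeholders, not specifics from the thread.
"""

def join_segments(texts):
    """Glue per-segment transcripts into one string, trimming stray spaces."""
    return " ".join(t.strip() for t in texts if t.strip())

if __name__ == "__main__":
    # Hedged usage: requires the faster-whisper package and a real audio file.
    from faster_whisper import WhisperModel
    model = WhisperModel("tiny.en", device="cpu", compute_type="int8")
    segments, _info = model.transcribe("recording.wav")
    print(join_segments(s.text for s in segments))
```

The model call only runs as a script; `join_segments` is plain string glue you can reuse with any STT backend.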

  • fonix232@fedia.io · 3 hours ago

    Aye, I was actually hoping to use the NPU for TTS/STT while keeping the LLM systems GPU bound.

    • brucethemoose@lemmy.world · (edited) · 3 hours ago

      It still uses memory bandwidth, unfortunately. There’s no way around that, though NPU TTS would still be neat.

      …Also, generally, STT responses can’t be streamed, so you might as well use the iGPU anyway. TTS can be chunked, I guess, but do the major implementations do that?

      • fonix232@fedia.io · 3 hours ago

        Piper does chunking for TTS, and could utilise the NPU with the right drivers.

        And the idea of running them on the NPU is not about memory usage but about hardware capacity/parallelism. Although I guess it would have some benefits, since I wouldn’t have to constantly load/unload GPU models.
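The chunking idea mentioned above can be sketched as a toy sentence-level splitter: instead of synthesizing a whole reply at once, the TTS engine gets one chunk at a time so playback can start early. Piper's actual chunking logic differs; this only illustrates the technique, and `max_chars` is an arbitrary knob.

```python
import re

def chunk_text(text, max_chars=120):
    """Split text at sentence boundaries, packing sentences up to max_chars.

    Each returned chunk can be handed to a TTS engine independently, so
    audio for the first sentence can play while later ones are synthesized.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        # Start a new chunk when appending this sentence would overflow.
        if current and len(current) + 1 + len(s) > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip() if current else s
    if current:
        chunks.append(current)
    return chunks
```

With a small `max_chars`, each sentence lands in its own chunk; with a large one, short sentences get packed together to reduce per-call overhead.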

        • brucethemoose@lemmy.world · (edited) · 2 hours ago

          Yeah… Even if the LLM is RAM-speed constrained, simply using another device so as not to interrupt it would be good.

          Honestly, AMD’s software dev efforts are baffling. They’ve focused on a few libraries precisely no one uses, like this: https://github.com/amd/Quark

          Meanwhile they ignore issues holding back entire sectors (like broken flash-attention), with devs screaming about it at the top of their lungs.

          Intel suffers from corporate Game of Thrones, but at least they have meaningful contributions in the open source space here, like the SYCL/AMX llama.cpp code or the OpenVINO efforts.