• 𝓹𝓻𝓲𝓷𝓬𝓮𝓼𝓼@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    45
    arrow-down
    1
    ·
    2 days ago

    doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)

    any money says they’re vulnerable to prompt injection in the comments and posts of the site

    • JustTesting@lemmy.hogru.ch
      link
      fedilink
      English
      arrow-up
      5
      ·
      17 hours ago

      They also have a ‘skill’ sharing page (a skill is just a text document with instructions) and depending on config, the bot can search for and ‘install’ new skills on its own. and agyone can upload a skill. So supply chain attacks are an option, too.

    • CTDummy@piefed.social
      link
      fedilink
      English
      arrow-up
      30
      ·
      edit-2
      2 days ago

      Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.

      • T156@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        18 hours ago

        I am a little curious about how effective a traditional chain mail would be on it.

    • BradleyUffner@lemmy.world
      link
      fedilink
      English
      arrow-up
      37
      ·
      2 days ago

      There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.