• bobbyguy@lemmy.world
    1 day ago

    They look at your speech patterns and the specific words you use to make the way they talk seem more familiar. Remember when Microsoft launched its Tay bot on Twitter, which would post tweets and learn from other posts? They had to take it down after about 15 hours because it became super racist and homophobic.

    • ExLisper@lemmy.curiana.net
      1 day ago

      Training LLMs on tweets is one thing; training them on chats with users is something completely different. I don’t think this actually happens. The model would degrade extremely fast.