• Mitch Effendi (ميتش أفندي)@piefed.mitch.science (+63/-1) · 2 days ago

    FWIW, this is why AI researchers have been screeching for decades not to create an AI that is anthropomorphized. It is already an issue we have with animals, now we are going to add a confabulation engine to the ass-end?

    • Jankatarch@lemmy.world (+21) · 2 days ago

      Yeah, apparently even ELIZA messed with people back in the day, and that’s not even an LLM.

      • Feathercrown@lemmy.world (+24) · 2 days ago

        I’m starting to realize how easily fooled people are by this stuff. The average person cannot be this stupid, and yet, they are.

        • Xerxos@lemmy.ml (+6/-4) · 1 day ago

          The average IQ is 100. That is not a lot, and half of the population is below it. I’m more surprised at how bad our education system is at filtering out the dumb people. Someone who is ‘not smart’ but has a good memory and is diligent can make it frighteningly far in our society. Not to mention nepo babies, who are a different kind of problem.

          • Randomgal@lemmy.ca (+3) · 1 hour ago

            The average IQ is 100 because the test is normed so that the average always comes out to 100. If magically everyone became exactly 20% better at IQ tests tomorrow, the scoring would be adjusted and the average would still be 100.

            Smart argument.
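The renorming described above can be sketched in a few lines of Python (a toy illustration, not any real IQ test’s scoring procedure; the raw scores are made up):

```python
# Toy sketch of IQ-style norming (hypothetical numbers, not a real test):
# raw scores are rescaled so the population mean maps to 100 and the
# standard deviation maps to 15.
import statistics

def norm_to_iq(raw_scores):
    """Convert raw test scores to IQ-style scores (mean 100, sd 15)."""
    mean = statistics.mean(raw_scores)
    sd = statistics.stdev(raw_scores)
    return [100 + 15 * (score - mean) / sd for score in raw_scores]

raw = [40, 50, 55, 60, 70]        # made-up raw scores
boosted = [s * 1.2 for s in raw]  # everyone gets 20% better overnight

# After renorming, the mean IQ comes out to 100 either way.
print(statistics.mean(norm_to_iq(raw)))
print(statistics.mean(norm_to_iq(boosted)))
```

The point of the sketch: a uniform improvement in raw scores vanishes after renorming, which is why the average can never drift away from 100.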

    • uuldika@lemmy.ml (+11) · edited · 2 days ago

      LLMs are trained on human writing, so they’ll always be fundamentally anthropomorphic. you could fine-tune them to sound more clinical, but it’s likely to make them worse at reasoning and planning.

      for example, I notice GPT5 uses “I” a lot, especially saying things like “I need to make a choice” or “my suspicion is.” I think that’s actually a side effect of the RL training they’ve done to make it more agentic. having some concept of self is necessary when navigating an environment.

      philosophical zombies are no longer a thought experiment.

    • Cethin@lemmy.zip (+6) · 2 days ago

      People have this issue with video game characters who don’t even pretend to have intelligence. This could only go wrong.

    • Buddahriffic@lemmy.world (+8/-2) · 2 days ago

      Personally, I hate the idea of not doing something because there are idiots out there who will fuck themselves up on it. The current gen of AI might be a waste of resources, and the whole goal of AI might be incompatible with society’s existence; those are good reasons to at least be cautious about AI.

      I don’t think people wanting to have relationships with an AI is a good reason to stop it, especially considering that it might even be a good option for some people who would otherwise just have no one or maybe too many cats for them to care for. Consider the creepy stalker type that thinks liking someone or something gives them ownership over that person or thing. Better for them to be obsessed with an LLM they can’t hurt than a real person they might (or will make uncomfortable even of they end up being harmless overall).