Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • leftzero@lemmy.dbzer0.com · 13 hours ago

    LLMs don’t have the mind of a five-year-old, though.

    They don’t have a mind at all.

    They simply string words together according to statistical likelihood, without having any notion of what the words mean, or of what words or meaning even are; they have no mechanism with which to form such a notion.

    They aren’t any more intelligent than old Markov chains (or than your average rock), they’re simply better at producing random text that looks like it could have been written by a human.
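
    A minimal sketch of the kind of word-level Markov chain the comment alludes to (the corpus, function names, and parameters here are illustrative, not from the thread or any cited study): the model counts which words follow which, then generates text by sampling a statistically likely next word, with no representation of meaning anywhere.

    ```python
    import random
    from collections import defaultdict

    def train(text):
        """Count, for every word, which words follow it in the text."""
        chain = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=15):
        """Emit text by repeatedly sampling a likely next word.
        The model never knows what any word means; it only replays
        observed word-to-word frequencies."""
        word, output = start, [start]
        for _ in range(length - 1):
            followers = chain.get(word)
            if not followers:
                break
            # duplicates in the list make frequent followers more likely
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(train(corpus), "the"))
    ```

    An LLM performs the same next-word-sampling job with a vastly larger context and a learned probability model in place of raw counts, which is why its output looks so much more fluent.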

    • plyth@feddit.org · 5 hours ago

      “They simply string words together according to statistical likelihood, without having any notion of what the words mean”

      What gives you the confidence that you don’t do the same?

    • IratePirate@feddit.org · 10 hours ago

      I am aware of that, hence the quotation marks. But you’re correct, that’s where the analogy breaks down. Personally, I prefer to liken them to parrots, mindlessly reciting patterns they’ve found in somebody else’s speech.