Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • cøre@leminal.space · 3 hours ago

    They have to be for a specialized type of treatment or procedure, such as looking at patient x-rays or other scans. Just slopping PHI into an LLM and expecting it to diagnose random patient issues is what gives the false diagnoses.

    • rumba@lemmy.zip · 15 minutes ago

      I don’t expect it to diagnose random patient issues.

      I expect it to take labels of medication, vitals, and patient testimony from 50,000 post-cardiac-event patients, and bucket a random post-cardiac patient into the same place as most patients with similar metadata.
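      A minimal sketch of that bucketing idea, assuming simple tabular features; the feature names and data are invented, and scikit-learn's KMeans is just one way to do the grouping:

      ```python
      # Hypothetical sketch: bucket post-cardiac-event patients by tabular
      # features, then drop a new patient into the nearest existing bucket.
      # Feature names and data are invented; KMeans is only an illustration.
      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      # Invented features: [num_medications, resting_hr, systolic_bp, ejection_fraction]
      rng = np.random.default_rng(0)
      historical = rng.normal(loc=[5, 72, 125, 50], scale=[2, 10, 15, 8], size=(50_000, 4))

      scaler = StandardScaler()
      X = scaler.fit_transform(historical)

      # Bucket the historical cohort; the number of buckets is arbitrary here.
      kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)

      # A new post-cardiac patient lands in the closest bucket, so clinicians can
      # compare them against similar prior cases rather than get a "diagnosis".
      new_patient = scaler.transform([[7, 80, 140, 42]])
      bucket = kmeans.predict(new_patient)[0]
      peers = np.flatnonzero(kmeans.labels_ == bucket)
      print(f"bucket {bucket}: {peers.size} similar historical patients")
      ```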

      And then a non-LLM model for cancer patients and x-rays.

      And then MRIs and CTs.

      And I expect all of this to supplement the doctors' and techs' decisions. I want an x-ray tech to look at it and get markers that something is off, which has already been happening since the '80s with Computer-Aided Detection/Diagnosis (CAD/CADe/CADx).
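      In spirit it's the same "flag it, a human reviews it" pattern; a toy sketch under invented assumptions (a random array standing in for a scan, an arbitrary threshold), nothing like a real CAD pipeline:

      ```python
      # Toy CAD-style pass: mark suspicious regions for a human to review.
      # The "scan", threshold, and region logic are all placeholders.
      import numpy as np
      from scipy import ndimage

      scan = np.random.default_rng(1).random((512, 512))  # stand-in for an x-ray

      candidates = scan > 0.995                       # arbitrary brightness cutoff
      labeled, n_regions = ndimage.label(candidates)  # group flagged pixels into regions
      for region_id in range(1, n_regions + 1):
          ys, xs = np.nonzero(labeled == region_id)
          print(f"marker {region_id}: around row {ys.mean():.0f}, col {xs.mean():.0f}")
      ```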

      This shit has been happening the hard way in software for decades. The new tech can do better.