Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • rumba@lemmy.zip · 2 hours ago
    1. “Can cut through bias” != unbiased. All it has to go on is its training material; if you don’t put Reddit in, you don’t get Reddit’s bias.
    2. See #1.
    3. The study is endoscopy only. The results say nothing about other modalities or kinds of assistance, like X-rays, where AI is markedly better. A 4% difference across 19 doctors is error-bar material; let’s see more studies. Also, if the doctors really were worse, fuck them for relying on AI. It should be there to have their back, not do their job. None of the uses for AI should be anything but assisting someone already doing the work.
    4. That’s one hell of a jump to conclusions, from something looking at endoscope pictures a doctor is taking while removing polyps to somehow doing the doctor’s job.
    • XLE@piefed.social (OP) · edited · 1 hour ago

      1/2: You still haven’t accounted for bias.

      First and foremost: if you think you’ve solved the bias problem, please demonstrate it. This is your golden opportunity to shine where multi-billion-dollar tech companies have failed.

      And no, “don’t use Reddit” isn’t sufficient.

      3. You seem to be very selectively knowledgeable about AI, for example:

      “If [doctors] were really worse, fuck them for relying on AI”

      We know AI tricks people into thinking they’re more efficient when they’re less efficient. It erodes critical thinking skills.

      And that’s without touching on AI psychosis.

      You can’t just dismiss results because you don’t like them.

      4. We both know the medical field is for-profit. It’s a wild leap to assume AI will magically not be, even granting everything else you’ve assumed up to this point, and it ignores every issue I’ve raised.

      • rumba@lemmy.zip · 1 hour ago

        “1/2: You still haven’t accounted for bias.”

        Apparently, reading comprehension isn’t your strong point. I’ll just block you now; no need to thank me.