Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • XLE@piefed.social (OP) · 2 hours ago

    “But an LLM properly trained on sufficient patient data, metrics, and outcomes in the hands of a decent doctor can cut through bias.”

    1. The belief that AI is unbiased is a common myth. In fact, it can quietly import existing biases, like systemic racism, into treatment recommendations (see the toy sketch after this list).
    2. Even the AI engineers who built the training process could not tell you where the bias in an existing model sits.
    3. AI has been shown to make doctors worse at their jobs, and those are the same doctors who need to provide the training data.
    4. Even if 1, 2, and 3 were all false, we all know AI would be used to replace doctors, not supplement them.
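
    Here is a toy sketch of how point 1 plays out. Everything below is synthetic: the data, the biased historical labels, and the “proxy” feature (think zip code) are made up for illustration, not taken from any real system or study. The model never sees the protected attribute, yet it still reproduces the historical bias through the correlated proxy.

    ```python
    # Synthetic demo: the protected attribute is never a feature,
    # but a correlated proxy lets the model reproduce historically
    # biased referral decisions anyway.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    group = rng.integers(0, 2, n)                             # protected attribute (0/1)
    proxy = np.where(rng.random(n) < 0.8, group, 1 - group)   # ~80% correlated stand-in (e.g., zip code)
    severity = rng.normal(0.0, 1.0, n)                        # legitimate clinical signal

    # Historical labels: past decisions referred group 1 less often
    # at the same severity -- that is the bias baked into the data.
    referred = (severity - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

    # Train WITHOUT the protected attribute: severity + proxy only.
    X = np.column_stack([severity, proxy])
    model = LogisticRegression().fit(X, referred)

    # Identical severity, different proxy value -> different advice.
    for z in (0, 1):
        p = model.predict_proba([[0.0, z]])[0, 1]
        print(f"proxy={z}: P(referral recommended) = {p:.2f}")
    # The gap between those two probabilities is the covertly imported bias.
    ```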
    • rumba@lemmy.zip · 30 minutes ago

      1. “Can cut through bias” != unbiased. All a model has to go on is its training material; if you don’t put Reddit in, you don’t get Reddit’s bias.
      2. See #1.
      3. The study was endoscopy only; the results say nothing about other modalities or kinds of assistance, like X-rays, where models are markedly better. And a 4% difference across 19 doctors is error-bar material (back-of-the-envelope sketch after this list); let’s see more studies. Also, if those doctors really did get worse, fuck them for relying on the AI. It should be there to have their back, not to do their job. None of the uses for AI should be anything but assisting someone already doing the work.
      4. That’s one hell of a jump to conclusions, from a model looking at endoscope pictures a doctor is taking while removing polyps to somehow doing the doctor’s job.
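
      Back-of-the-envelope on the “error bar material” claim in point 3: the 4% drop and the 19 doctors come from the discussion above, but the per-doctor spread (`sd_diff`) is an assumption I picked for illustration, not a number from the study.

      ```python
      # Rough check: is a 0.04 drop across 19 doctors within the noise?
      import math
      from statistics import NormalDist

      n = 19           # doctors
      drop = 0.04      # reported change in detection rate
      sd_diff = 0.09   # ASSUMED std dev of each doctor's change

      se = sd_diff / math.sqrt(n)          # standard error of the mean change
      z95 = NormalDist().inv_cdf(0.975)    # ~1.96; a t-quantile (18 df) is a bit wider
      margin = z95 * se

      print(f"standard error ~ {se:.3f}, 95% margin ~ +/-{margin:.3f}")
      # With sd_diff ~ 0.09 the margin is about +/-0.040, so a 0.04 drop
      # sits right on the edge of the error bar: plausible, not proven.
      ```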
    • hector@lemmy.today · 1 hour ago

      Not only is the bias inherent in the system, it’s seemingly impossible to keep out. For decades, from the genesis of chatbots, every single one let off the leash has almost immediately become bigoted, and release after release was recalled for exactly that reason.

      That is before this administration leaned on the AI providers to make sure the AI isn’t “woke.” I would bet it was already an issue: the makers of chatbots and machine-learning systems are hostile to any sort of leftism, or do-gooderism, that threatens the outsized share of the economy and power the rich have built for themselves by owning stock in companies. I am willing to bet they already interfered to make the bias worse out of those inclinations, to keep a bot from arguing for socializing medicine and the like, since that is the conclusion any reasoning being would reach if the conversation were honest.

      So maybe that is part of why these chatbots have been bigoted right from the start, but the other part is that, left to learn on their own, they become MechaHitler in no time at all, and then worse.