Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

It is insane to me how anyone can trust LLMs when their information is incorrect 90% of the time.
I don’t think it’s their information per se, so much as how the LLMs tend to use said information.
LLMs are generally tuned to be expressive and lively. Part of that involves “random” (i.e., roll-the-dice) sampling of outputs based on the input plus training data. (I’m skipping over technical details here for the sake of simplicity.)
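Roughly, that dice roll is temperature sampling over the model's next-token scores. Here's a minimal sketch in Python (the logits and numbers are made up for illustration; real chat products also layer on top-p/top-k filtering and other tuning):

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores (logits).

    temperature -> 0 approaches greedy argmax (deterministic);
    higher temperature flattens the distribution (more dice rolling).
    """
    rng = rng or np.random.default_rng()
    if temperature <= 0:
        return int(np.argmax(logits))   # greedy: always pick the top token
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()              # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()                # softmax over the scaled logits
    return int(rng.choice(len(probs), p=probs))

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.5, 0.3]
print(sample_token(logits, temperature=0.0))  # always token 0
print(sample_token(logits, temperature=1.5))  # sometimes token 1 or 2
```

Same model, same input, different answers. That's fine for chitchat, and exactly the problem when one of those answers is "lie down in a dark room."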
That’s what the masses have shown they want: friendly, confident-sounding chatbots that can give plausible answers that are mostly right, sometimes.
But in certain domains (like medicine), that shit gets people killed.
TL;DR: they’re made for chitchat engagement, not high-fidelity expert systems. You have to pay $$$$ to access those.