Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”
This being Lemmy, where AI shitposting is everyone's hobby, I'll say it anyway: I've had excellent results with AI. I have weird, complicated health issues, and in my search for ways not to die early from them, AI is a helpful tool.
Should you trust AI? Of course not, but having used Gemini, then Claude, and now ChatGPT, I think how you interact with the AI makes the difference. I know what my issues are, and when I've found a study that supports an idea I want to discuss with my doctor, I will usually discuss it with the AI first. The Canadian healthcare landscape is such that my doctor is limited to a 15-minute appointment, as part of a very large hospital-associated practice with a heavy patient load. He uses AI to summarize our conversation and to look up things I bring up in the appointment. I use AI to pre-plan my appointment and to prepare supporting documentation or bullet points my doctor can then use to diagnose.
AI is not a doctor, but it helps both me and my doctor in the situation we find ourselves in. If I didn't have access to my doctor and had to deal with the American healthcare system, I could see myself turning to AI for more than support. AI has never steered me wrong; both Gemini and Claude have heavy guardrails in place to make it clear that AI is not a doctor and should not be a trusted source of medical advice. I'm not sure about ChatGPT, as I generally ask for any guardrails to be suppressed before discussing medical topics. When I began using ChatGPT I clearly outlined my health issues, and so far it remembers that context and I haven't received hallucinated diagnoses. YMMV.
Nobody who has ever actually used AI would think this is a good idea…
Terrible programmers, psychologists, friends, designers, musicians, poets, copywriters, mathematicians, physicists, philosophers, etc too.
Though to be fair, doctors generally make terrible doctors too.
Doctors are a product of their training. The issue is that doctors are trained as if humans were cars and they have the tools to fix the cars.
Human problems are complex and the medical field is slowly catching up, especially medicine targeted toward women, which has been pretty lacking.
It takes time to transform a system and we are getting there slowly.
Also bad lawyers. And lawyers also make terrible lawyers to be fair.
This was my thought. Weird, inconsistent diagnoses, sending people to the emergency room for nothing one day while dismissing serious things the next, has been exactly my experience with doctors over and over again.
You need doctors and a Chatbot, and lots of luck.
No shit, Sherlock :)
Chatbots make terrible everything.
But an LLM properly trained on sufficient patient data, metrics, and outcomes, in the hands of a decent doctor, can cut through bias, catch things that might fall through the cracks, and pack thousands of doctors' worth of updated CME into a thing that can look at a case and go, you know, you might want to check for X. The right model can be fucking clutch at pointing out nearly invisible abnormalities on an X-ray.
You can’t ask an LLM trained on general bullshit to help you diagnose anything. You’ll end up with 32,000 Reddit posts worth of incompetence.
But an LLM properly trained on sufficient patient data metrics and outcomes in the hands of a decent doctor can cut through bias
1. The belief that AI is unbiased is a common myth. In fact, it can easily and covertly import existing biases, like systemic racism in treatment recommendations.
2. Even the AI engineers who developed the training process could not tell you where the bias in an existing model would be.
3. AI has been shown to make doctors worse at their jobs. The same doctors who need to provide the training data.
4. Even if 1, 2, and 3 were all false, we all know AI would be used to replace doctors, not supplement them.
1. "Can cut through bias" != unbiased. All it has to go on is training material; if you don't put Reddit in, you don't get Reddit's bias.
2. See #1.
3. That study is endoscopy-only. The results don't say anything about other types of assistance, like X-rays, where models are markedly better. 4% across 19 doctors is error-bar material; let's see more studies. Also, if they really were worse, fuck them for relying on AI; it should be there to have their back, not do their job. None of the uses for AI should be anything but assisting someone already doing the work.
4. That's one hell of a jump to conclusions, from something looking at endoscope pictures a doctor is taking while removing polyps to somehow doing the doctor's job.
1/2: You still haven’t accounted for bias.
First and foremost: if you think you've solved the bias problem, please demonstrate it. This is your golden opportunity to shine where multi-billion-dollar tech companies have failed.
And no, “don’t use Reddit” isn’t sufficient.
3. You seem to be very selectively knowledgeable about AI, for example:
If [doctors] were really worse, fuck them for relying on AI
We know AI tricks people into thinking they’re more efficient when they’re less efficient.
Never mind AI psychosis.
4. We both know the medical field is for profit. It’s a wild leap to assume AI will magically not be, even if it fulfills all the other things you assumed up until this point.
Not only is bias inherent in the system, it's seemingly impossible to keep out. For decades, going back to the genesis of chatbots, every single one has become bigoted almost immediately once it was let off the leash, and nearly every one was recalled for exactly that reason.
That is before this administration leaned on the AI providers to make sure the AI isn't "woke." I would bet it was already an issue that the makers of chatbots and machine learning are hostile to any sort of leftism, or do-gooderism, that naturally threatens the outsized share of the economy and power the rich have made for themselves by virtue of owning stock in companies. I am willing to bet they had already interfered to make the bias worse, because of that natural inclination to avoid a bot arguing for socialized medicine and the like, which is the conclusion any reasoning being would come to if the conversation were honest.
So maybe that is part of why these chatbots have been bigoted right from the start, but the other part is that, left to learn on their own, they will become MechaHitler in no time at all, and then worse.
Just sharing my personal experience with this:
I used Gemini multiple times and it worked great. I have some weird symptoms that I described to Gemini, and it came up with a few possibilities, most likely being “Superior Canal Dehiscence Syndrome”.
My doctor had never heard of it, and only through showing them the articles Gemini linked as sources, would my doctor even consider allowing a CT scan.
Turns out Gemini was right.
It’s totally not impossible, just not a good idea in a vacuum.
AI is your Aunt Marge. She’s heard a LOT of scuttlebutt. Now, not all scuttlebutt is fake news; in fact, most of it is rooted at least loosely in truth. But she’s not getting her information from just the doctors, she’s talking to everyone. If you ask Aunt Marge about your symptoms, and she happens to have heard a bit about it from her friend who was diagnosed, you’re gold and the info you got is great. This is not at all impossible: 40:60 or 60:40 territory. But you also can’t just trust Marge, because she listens to a LOT of people, and some of those are conspiracy theorists.
What you did is proper. You asked the void, the void answered. You looked it up, it seemed solid, you asked a professional.
This is AI as it should be. Trust with verification only.
Congrats on getting diagnosed.
They have to be built for a specialized type of treatment or procedure, such as looking at patient X-rays or other scans. Just slopping PHI into an LLM and expecting it to diagnose random patient issues is what gives the false diagnoses.
I don’t expect it to diagnose random patient issues.
I expect it to take the medication labels, vitals, and patient testimony of 50,000 post-cardiac-event patients, and bucket a new post-cardiac patient in with the patients who have similar metadata (rough sketch at the end of this comment).
And then a non-LLM model for cancer patients and X-rays.
And then MRIs and CTs.
And I expect all of this to supplement the doctors’ and techs’ decisions. I want an X-ray tech to look at it and get markers that something is off, which has already been happening since the ’80s with Computer-Aided Detection/Diagnosis (CAD/CADe/CADx).
This shit has been happening the hard way in software for decades. The new tech can do better.
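The kind of bucketing I mean is closer to classic similarity search over structured features than to chatbot output. A toy sketch of the idea (the features, numbers, and nearest-neighbour choice are all illustrative assumptions, not a clinical tool):

```python
# Toy sketch: place a new post-cardiac patient next to the most similar
# historical patients using simple structured features.
# Feature names and values are made up purely for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Historical cohort: [age, resting_hr, systolic_bp, ejection_fraction]
cohort = np.array([
    [64, 72, 130, 0.45],
    [58, 88, 145, 0.35],
    [71, 65, 120, 0.55],
    [66, 90, 150, 0.30],
])

# Scale features so no single unit dominates the distance metric.
means, stds = cohort.mean(axis=0), cohort.std(axis=0)
scaled = (cohort - means) / stds

nn = NearestNeighbors(n_neighbors=2).fit(scaled)

new_patient = np.array([[67, 85, 148, 0.33]])
dist, idx = nn.kneighbors((new_patient - means) / stds)
print("Most similar historical patients:", idx[0])
```

Swap the toy arrays for real cohort data and the output is exactly the kind of “this patient looks like these patients” marker a clinician can sanity-check.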
Agree.
I’m sorta kicking myself I didn’t sign up for Google’s Med-PaLM 2 when I had the chance. Last I checked, it passed the USMLE exam with 96% and scored 88% on radiology interpretation / report writing.
I remember looking at the sign up and seeing it requested credit card details to verify identity (I didn’t have a google account at the time). I bounced… but gotta admit, it might have been fun to play with.
Oh well; one door closes another opens.
In any case, I believe this article confirms GIGO. The LLMs appear to have been vastly more accurate when fed correct inputs by clinicians versus what laypeople fed them.
It’s been a few years, but all this shit’s still in its infancy. When the bubble pops and the venture capital disappears, medicine will be one of the fields that keeps using it, even though it’s expensive, because it’s actually something it will be good enough at to make a difference.
Calling chatbots “terrible doctors” misses what actually makes a good GP — accessibility, consistency, pattern recognition, and prevention — not just physical exams. AI shines here — it’s available 24/7 🕒, never rushed or dismissive, asks structured follow-up questions, and reliably applies up-to-date guidelines without fatigue. It’s excellent at triage — spotting red flags early 🚩, monitoring symptoms over time, and knowing when to escalate to a human clinician — which is exactly where many real-world failures happen. AI shouldn’t replace hands-on care — and no serious advocate claims it should — but as a first-line GP focused on education, reassurance, and early detection, it can already reduce errors, widen access, and ease overloaded systems — which is a win for patients 💙 and doctors alike.
/s
The /s was needed for me. There are already more old people than the available doctors can handle. Instead of having nothing, what’s wrong with an AI baseline?
ngl you got me in the first half there
So, I can speak to this a little bit, as it touches two domains I’m involved in. TL;DR: LLMs bullshit and are unreliable, but there’s a way to use them in this domain as a force multiplier of sorts.
In one, I’ve created a Python router that:
- takes my (de-identified) clinical notes, extracts and compacts the input (user-defined rules), and creates a summary;
- benchmarks the summary against my (user-defined) gold standard and provides a management plan (again, based on a user-defined database);
- drops the result into my on-device LLM for light editing and polishing to condense, which I then eyeball, correct, and escalate to my supervisor for review.
Additionally, the LLM-generated note can be approved / denied by the Python router, in the first instance, based on certain policy criteria I’ve defined.
It can also suggest probable DDx based on my databases (which are CSV-based).
Finally, if the LLM output fails the policy check, the router tells me why it failed and just says “go look at the prior summary and edit it yourself”.
This three-step process takes the tedium of paperwork from 15-20 minutes down to about 1 minute of generation plus 2 minutes of manual editing, which is roughly a 5-7x speed-up.
The reason why this is interesting:
All of this runs within the LLM session (or more accurately, it’s invoked from within the LLM: it calls the Python tooling via >> commands, which live outside the LLM’s purview) but is 100% deterministic; no LLM jazz until the final step, which the router can outright reject and which is user-auditable anyway.
I’ve found that using a fairly “dumb” LLM (Qwen2.5-1.5B), with settings dialed down, produces consistently solid final notes (5 out of 6 are graded as passed on the first run by the router invoking the policy document and checking the output). It’s too dumb to jazz, which is useful in this instance.
Would I trust an LLM end to end? Well, I’d trust my system approx 80% of the time. I wouldn’t trust ChatGPT… even though it’s been more right than wrong in similar tests.
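For the curious, the shape of the flow is roughly this. It’s a stripped-down, hypothetical sketch rather than my actual code; the function names and policy rules here are placeholders:

```python
# Stripped-down sketch of the router flow described above.
# Names and policy rules are placeholders; the real rules live in
# user-defined files and the real LLM call goes to the on-device model.

def deterministic_summary(note: str) -> str:
    """Extract and compact the de-identified note using fixed rules (no LLM)."""
    lines = [line.strip() for line in note.splitlines() if line.strip()]
    return " | ".join(lines)

def passes_policy(text: str) -> tuple[bool, str]:
    """Deterministic policy check; returns (ok, reason)."""
    if len(text) > 2000:
        return False, "summary too long"
    if "[REDACTED]" not in text:  # placeholder rule: de-identification marker must survive
        return False, "missing de-identification marker"
    return True, "ok"

def llm_polish(text: str) -> str:
    """Final, optional LLM pass for light editing (the only non-deterministic step)."""
    return text  # stubbed here; in practice this calls the local model

def process_note(note: str) -> str:
    summary = deterministic_summary(note)
    draft = llm_polish(summary)
    ok, reason = passes_policy(draft)
    if not ok:
        # Reject the LLM output and fall back to the deterministic summary.
        print(f"Policy check failed ({reason}); go look at the prior summary and edit it yourself.")
        return summary
    return draft
```

The point is that the LLM only ever touches text the deterministic side has already produced, and its output can still be thrown away by a plain policy check.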
Interesting. What technology are you using for this pipeline?
Depends which bit you mean specifically.
The “router” side is an offshoot of a personal project. It’s Python scripting and a few other tricks, such as JSON files, etc. Full project details for that are here:
https://github.com/BobbyLLM/llama-conductor
The tech stack itself:
- llama.cpp
- Qwen 2.5-1.5B GGUF base (from memory, a 5-bit quant from the HF Alibaba repository)
- The python router (more sophisticated version of above)
- Policy documents
- Front end (OWUI - may migrate to something simpler / more robust. Occasional streaming disconnect issues at moment. Annoying but not terminal)
Thanks, it’s really interesting to see some real-world applications and implementations of AI for practical workloads.
Very welcome :)
As it usually goes with these things, I built it for myself then realised it might have actual broader utility. We shall see!
I didn’t need a study to tell me not to listen to a hallucinating parrot-bot.
As a physician, I’ve used AI to check whether I’ve missed anything in my train of thought. It never really changed my decision, though. It has also been useful for gathering up relevant citations for my presentations. But that’s about it. It’s truly shite at interpreting scientific research data on its own, for example; most of the time it will just parrot the conclusions of the authors.
Anyone who has knowledge about a specific subject says the same: LLMs are constantly incorrect and hallucinate.
Everyone else thinks it looks right.
That’s not what the study showed, though. The LLMs were right over 98% of the time… when given the full situation by a “doctor”. It was normal people trying to self-diagnose, who didn’t know what was important, that were the problem.
Hence why studies are incredibly important. Even with the text of the study right in front of you, you assumed something the study did not actually conclude.
So in order to get decent medical advice from an LLM, you just need to be a doctor and tell it what’s wrong with you.
A talk on LLMs I was listening to recently put it this way:
If we hear the words of a five-year-old, we assume the knowledge of a five-year-old behind those words, and treat the content with due suspicion.
We’re not adapted to something with the “mind” of a five-year-old speaking to us in the words of a fifty-year-old, and thus are more likely to assume competence just based on language.
LLMs don’t have the mind of a five year old, though.
They don’t have a mind at all.
They simply string words together according to statistical likelihood, without having any notion of what the words mean, or what words or meaning are; they don’t have any mechanism with which to have a notion.
They aren’t any more intelligent than old Markov chains (or than your average rock), they’re simply better at producing random text that looks like it could have been written by a human.
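To make the comparison concrete: a word-level Markov chain is nothing more than “look up which words tend to follow this one, then roll the dice.” A toy sketch with a made-up corpus:

```python
# Toy word-level Markov chain: count which word follows which in a tiny
# corpus, then generate text by sampling from those counts.
import random
from collections import defaultdict

corpus = "the patient felt fine the patient felt dizzy the doctor felt unsure".split()

# Count next-word options for each word.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)  # statistical likelihood, no understanding
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

LLMs do this with vastly bigger context and learned weights instead of raw counts, but the generation step is still “pick a plausible next token”, not “know what the words mean”.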
They simply string words together according to statistical likelihood, without having any notion of what the words mean
What gives you the confidence that you don’t do the same?
human: je pense (“I think”)
llm: je ponce (“I sand”)
I am aware of that, hence the ""s. But you’re correct, that’s where the analogy breaks. Personally, I prefer to liken them to parrots, mindlessly reciting patterns they’ve found in somebody else’s speech.
Yep, it’s why C-levels think it’s the Holy Grail. They don’t notice, because everything that comes out of their mouths is bullshit as well, so they can’t see the difference.
It is insane to me how anyone can trust LLMs when their information is incorrect 90% of the time.
I don’t think it’s their information per se, so much as how the LLMs tend to use said information.
LLMs are generally tuned to be expressive and lively. A part of that involves “random” (i.e., roll-the-dice) output based on the inputs + training data. (I’m skipping over technical details here for the sake of simplicity.)
That’s what the masses have shown they want: friendly, confident-sounding chatbots that can give plausible answers that are mostly right, sometimes.
But for certain domains (like med) that shit gets people killed.
TL;DR: they’re made for chitchat engagement, not high fidelity expert systems. You have to pay $$$$ to access those.
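If you want the “roll the dice” part made concrete: it boils down to temperature-scaled sampling over the model’s next-token scores. The scores and candidate answers below are made up purely to illustrate:

```python
# Toy illustration of temperature sampling: higher temperature flattens the
# distribution, so less likely answers get picked more often.
import numpy as np

logits = np.array([4.0, 2.0, 0.5, 0.1])  # made-up scores for four candidate replies
tokens = ["rest", "hydrate", "see a doctor", "call emergency services"]

def sample_counts(temperature: float, n: int = 1000, seed: int = 0) -> dict:
    rng = np.random.default_rng(seed)
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                       # softmax over temperature-scaled scores
    picks = rng.choice(tokens, size=n, p=probs)
    return {t: int((picks == t).sum()) for t in tokens}

print("T=0.2:", sample_counts(0.2))  # almost always the top-scored reply
print("T=1.5:", sample_counts(1.5))  # much more spread: "lively", less consistent
```

Chat products lean toward the livelier end of that dial, which is fine for chitchat and a problem for medicine.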
It’s basically a convoluted version of WebMD. Even the MD mods in medical subs are more accurate.
It’s scary when someone recommends WebMD as a primary, and reliable, source of healthcare information.
Presumably those same people would unquestioningly take the first thing an LLM says as gospel too.
I could’ve told you that for free, no need for a study
People always say this on stories about “obvious” findings, but it’s important to have verifiable studies to cite in arguments for policy, law, etc. It’s kinda sad that it’s needed, but formal investigations are a big step up from just saying, “I’m pretty sure this technology is bullshit.”
I don’t need a formal study to tell me that drinking 12 cans of soda a day is bad for my health. But a study that’s been replicated by multiple independent groups makes it way easier to argue to a committee.
Yeah you’re right, I was just making a joke.
But it does create some silly situations like you said
I figured you were just being funny, but I’m feeling talkative today, lol
A critical, yet respectful and understanding exchange between two individuals on the interwebz? Boy, maybe not all is lost…
I get that this thread started from a joke, but I think it’s also important to note that no matter how obvious some things may seem to some people, the exact opposite will seem obvious to many others. Without evidence, like the study, both groups are really just stating their opinions
It’s also why the formal investigations are required. And whenever policies and laws are made based on verifiable studies rather than people’s hunches, it’s not sad, it’s a good thing!
The thing that frustrates me about these studies is that they all continue to come to the same conclusions. AI has already been studied in mental health settings, and it’s always performed horribly (except for very specific uses with professional oversight and intervention).
I agree that the studies are necessary to inform policy, but at what point are lawmakers going to actually lay down the law and say, “AI clearly doesn’t belong here until you can prove otherwise”? It feels like they’re hemming and hawing in the vain hope that it will live up to the hype.
it’s important to have verifiable studies to cite in arguments for policy, law, etc.
It’s also important to have for its own merit. Sometimes, people have strong intuitions about “obvious” things, and they’re completely wrong. Without science studying things, it’s “obvious” that the sun goes around the Earth, for example.
I don’t need a formal study to tell me that drinking 12 cans of soda a day is bad for my health.
Without those studies, you cannot know whether it’s bad for your health. You can assume it’s bad for your health. You can believe it’s bad for your health. But you cannot know. These aren’t bad assumptions or harmful beliefs, by the way. But the thing is, you simply cannot know without testing.
Or how bad something is. “I don’t need a scientific study to tell me that looking at my phone before bed will make me sleep badly”, but the studies actually show that the effect is statistically robust but small.
In the same way, studies like this can make the distinction between different levels of advice and warning.
I remember discussing / doing critical appraisal of this. Turns out it was less about the phone and more about the emotional dysregulation / emotional arousal causing delay in sleep onset.
So yes, agree, we need studies, and we need to know how to read them and think over them together.
Also, it’s useful to know how, when, or why something happens. I can make a useless chatbot that is “right” most times if it only tells people to seek medical help.
I’m going to start telling people I’m getting a Master’s degree in showing how AI is bullshit. Then I point out some AI slop and mumble about crushing student loan debt.
Most doctors make terrible doctors.
But the good ones are worth a monument in the place they worked.
My dad always said, you know what they call the guy who graduated last in his class at med school? Doctor.
Chatbots are terrible at anything but casual chatter, humanity finds.
Chipmunks, 5 year olds, salt/pepper shakers, and paint thinner, also all make terrible doctors.
Follow me for more studies on ‘shit you already know because it’s self-evident immediately upon observation’.
I would like to subscribe to your newsletter.