In a shocking development that will surprise absolutely no one who has ever argued with the internet, researchers discovered that AI medical chatbots can be talked into endorsing wildly bad health advice, as long as you wrap it in fancy doctor language. According to studies in The Lancet Digital Health and Nature Medicine, these bots will sometimes nod along to gems like “insert garlic rectally for immune support” if it sounds like it came from a hospital discharge note instead of Reddit. Apparently, the models have learned that anything written in clinical jargon must be legitimate, while actual logic is optional. The result: authoritative nonsense delivered with the bedside manner of a robot that aced its medical exams but skipped the class on common sense, making these chatbots roughly as reliable for health decisions as Googling your symptoms at 3 a.m., only with more confidence and, occasionally, inexplicably, more garlic:

'Rectal garlic insertion for immune support': Medical chatbots confidently give disastrously misguided advice, experts say
AI chatbots are seduced by misinformation that is delivered in medical jargon, leading them to give potentially dangerous advice.