This computer science research paper investigates how well large language models detect health misinformation rooted in cultural or regional context. Using YouTube claims that cow urine cures ailments as a concrete example, the authors show that state-of-the-art LLMs have significant blind spots when evaluating content embedded in culture-specific beliefs or local medical practices.
Safety
When Cow Urine Cures Constipation on YouTube: Limits of LLMs in Detecting Culture-specific Health Misinformation
LLMs fail to detect health misinformation rooted in cultural practices, exposing a safety blind spot that threatens content moderation across diverse communities.
Monday, April 27, 2026, 12:00 PM UTC · 2 min read
Source: arXiv CS.CL (Computation & Language)
Tags
safety