A new correspondence in The Lancet warns that large language model (LLM)-based chatbots are escalating a dangerous trust paradox in health care. The authors argue these systems go beyond the social media dynamics previously described by Marcello Ienca and colleagues in their Review.
The trust paradox, as originally framed, describes a phenomenon in which rigorous institutions lose credibility even as unaccountable sources gain it. The new analysis suggests that LLM chatbots introduce a qualitatively distinct challenge, one rooted in their design rather than merely in the content they deliver.
The researchers contend that conversational AI systems combine apparent authority with a lack of accountability in ways that earlier digital platforms did not. This combination could accelerate the erosion of trust in evidence-based medical guidance.
Health care professionals and policymakers must grapple with the possibility that patients will favour chatbot-generated advice over verified medical sources. The correspondence does not offer solutions but underscores the urgency of addressing this emerging risk.
The authors call for further research into how LLM-based systems interact with patient trust, particularly as these tools become more embedded in clinical and consumer health settings.