“Patients arriving with researched information is not new. They have long brought newspaper clippings, internet search results, or notes from conversations with family. Potential solutions passed along in WhatsApp threads have at times been an integral part of my clinical conversations. Information seeking outside the health care setting has always been part of the landscape of care.
But something about this moment feels different. Generative artificial intelligence (AI) tools like ChatGPT offer information in ways that feel uniquely conversational and tailored. Their tone invites dialogue. Their confidence implies competence. Increasingly, patients are bringing AI-generated insights into my clinic and are sometimes confident enough to challenge my assessment and plan.
[...] comparing LLMs [large language models] to clinicians feels inherently unfair. Clinicians often work under pressure with rushed visits and overflowing inboxes as health care systems demand productivity and performance. My concern is that research pits clinicians, who are not given the luxury of performing at their best within strained systems, against the inexhaustible resources of generative AI. That is not a fair fight, but it is reality. I find patients may seek clear answers, but even more, they want to feel recognized, reassured, and truly heard. Unfortunately, under the weight of competing demands, that is what often slips for me—not just because of systemic constraints but also because I am merely human.
Despite generative AI’s capabilities, patients still come to see me. The tools they use usually offer confident suggestions but eventually defer with some version of a familiar disclaimer: “Consult a medical professional.” Accountability for care still rests with the clinician, both in terms of liability (who will take the blame if something goes wrong or someone gets hurt) and in terms of responsibility (who will place the orders for diagnostic plans and prescriptions for care). Clinicians remain the gatekeepers. In practice, this means navigating patient requests like a tilt-table test for intermittent dizziness—tests that are not unusual but may not be appropriate at a specific stage of care. I find myself explaining concepts like overdiagnosis, false positives, or other risks of unnecessary testing. At best, the patient understands these ideas, though they may not resonate when one is the person experiencing symptoms. At worst, I sound dismissive. There is no function that tells ChatGPT that clinicians lack routine access to tilt-table testing or that echocardiogram appointments are delayed due to staffing shortages. I have to carry those constraints into the examination room while still trying to preserve trust.
When I speak with medical students, I notice a different kind of paternalism creeping in. And I have caught it in my inner monologue even if I do not say it aloud. The old line, “They probably WebMD’d it and think they have cancer,” has morphed into the newer, just-as-dismissive line, “They probably ChatGPT’d it and are going to tell us what to order.” It often reflects defensiveness from clinicians rather than genuine engagement and carries an implicit message: We still know best. It is an attitude that risks eroding the sacred and fragile trust between clinicians and patients. It reinforces the feeling that we are not “in it” with our patients and are truly gatekeeping rather than partnering. Ironically, that is often the very reason patients tell me they turn to LLMs in the first place.
One patient said plainly, “This is how I can advocate for myself better.” The word advocate struck me. It is what one does when trying to convince someone with more power. I know clinicians still hold the keys to ordering tests, referrals, and treatment plans, but there is a sense of preparing for a fight that the word advocate conveys. When patients feel unheard, arming themselves with knowledge becomes a strategy to be taken seriously. I vividly recall another patient who, following a pregnancy loss, arrived with a specific set of evaluations she wanted. Her request was layered with grief, prior dismissal, and the quiet hope that being prepared might finally lead to being heard. She did not need the usual strategy of launching into an explanation of false positives, overdiagnosis, and test characteristics. From a patient’s perspective, I recognized that all that cognitive explaining can sound like: “I still know more than you, no matter what tool you used, and I’m going to overwhelm you with things you don’t understand.” What worked in that moment was much different. I said, “We’ll talk through the testing options. But first, I’m so sorry for your loss. I can’t imagine how you’re feeling. I want to figure this out with you and make a plan together.” That moment of acknowledgment was what really opened the door.
Our roles as clinicians have been constantly shifting since my training. Patient-centered care and collaboration were part of the lexicon during medical school. Physicians have long been evolving from authoritative experts to collaborative guides, but the pace is accelerating now. The democratization of medical knowledge demands relational humility from clinicians more than ever.
Seeing AI-informed visits as opportunities for deeper dialogue rather than threats to clinical authority may sound aspirational, but it reflects a necessary shift. The practical challenges are real. Patients may come with unrealistic expectations or cite recommendations that do not align with evidence-based guidelines or are impractical for a given resource setting. These moments are not new. We have long had to explain why a magnetic resonance imaging scan is not always needed for back pain or why antibiotics will not help a viral infection. We know the solution is not to shut these conversations down but to meet them with patience and curiosity. Medicine has always depended on relationships. What is changing is how those relationships begin and what patients bring to the table. If patients are arming themselves with information to be heard, our task as clinicians is to meet them with recognition, not resistance. In doing so, we preserve what has always made medicine human: the willingness to share meaning, uncertainty, and hope, together.”
Full editorial: KR Sundar, JAMA, July 24, 2025.