Navigating AI in Health Care: What Patients Should Know
In a short time, artificial intelligence (AI) has gone from a novelty to the default for countless internet users. Google, which accounts for over 90% of the search engine market according to Statista, now regularly provides an “AI Overview” alongside results. And the most popular AI platform, ChatGPT, is quickly being adopted for online search, with 77% of its users relying on it as a search engine, according to a survey from Adobe.
So if you’re asking questions about a health condition online, there’s a good chance your answers will be filtered through some form of AI. But how much should you trust this information, and how should you approach AI in health care as a patient?
Where AI Falls Short in Health Decisions
According to National Jewish Health Chief Medical Information Officer and pulmonologist David Beuther, MD, the biggest risk in using AI for health information is mistaking accessibility for accuracy. AI tools can organize information quickly and provide digestible answers, but they aren’t thinking from a clinical perspective.
“They don’t know you,” said Dr. Beuther. “They don’t have your full medical history, they can’t examine you, and they can’t interpret subtle details the way a clinician can.”
That difference matters. In medicine, context is everything. Symptoms that seem minor in isolation can signal something more serious when paired with other details. A trained clinician asks follow-up questions, evaluates risk and adjusts their thinking in real time. AI, by contrast, responds only to what it’s given and how it’s phrased. Without a medical background, most people aren’t going to know what to look for, what to ask and how to ask it.
This limitation becomes especially important when people turn to AI for diagnosis. AI tools are designed to provide answers that sound confident and complete. As Liz Kellermeyer, director of Library & Knowledge Services at National Jewish Health, noted, an AI’s answers can seem reassuring in the moment. However, that same tone of authority can often lead patients in the wrong direction.
“AI very confidently tells you things that are sometimes right and sometimes totally incorrect,” Kellermeyer said. “And if you’re feeling vulnerable or scared, that authoritative voice sounds great.”
On the other hand, an AI’s confident tone can also cause patients unnecessary distress if it misdiagnoses them with a serious illness. Most people have gone down an internet rabbit hole asking questions about a health issue, only to arrive on the other side with a crippling bout of hypochondria. An AI’s seemingly personalized, authoritative diagnosis could make you even more certain that you’re dealing with something worse than you actually are.
“People are not asking those follow-up questions that a doctor would ask to understand if this is something that requires immediate care,” said Kellermeyer. She pointed to a study recently published in Nature Medicine showing that people who used AI for health questions chose the correct course of action less than half the time.
“A lot of it came down to how these people were communicating their health issues,” Kellermeyer said. “If you tell an AI ‘I have the worst headache I’ve ever had in my life,’ it may just take that at face value, without digging further.”
Dr. Beuther echoed this concern from a clinical standpoint. In his view, AI can be a useful starting point for general education, but it falls short when it comes to decision-making.
“It can help you understand concepts or learn more about a condition,” he said. “But it shouldn’t be the thing that tells you what to do next for your specific condition.”
How to Use AI as a Helpful Health Tool
Even though AI may not be able to give you a proper diagnosis the way a trained clinician can, Dr. Beuther and Kellermeyer agree it can offer resources that help you communicate better with your doctor.
“One of the most useful things about AI is that it helps people figure out what questions to ask,” said Dr. Beuther. “If you go into a visit with a clearer sense of what you’re worried about, that can make the conversation much more productive.”
According to Dr. Beuther, thinking of AI in health care as a support tool rather than a decision-maker is central to using it safely. Instead of asking, “What’s wrong with me?” patients may benefit more from asking, “What questions should I ask my doctor?” or “What does this term mean?”
“AI is good at turning word soup into clear questions,” said Kellermeyer. “If you can come to your doctor with two clear, concise questions, that can help you feel more in control.”
At the same time, Kellermeyer stressed that patients should be careful about sharing their private health information with an AI. “Patients need to understand that it’s a tool. It’s not a doctor protecting your privacy. Whatever you’re putting in there is being gathered as data for that tool. Even if they say they’re not gathering it, there’s not really a way for us to know how they’re using this information or whether it’s going to be used to train models. The protections are not in place the way they are with HIPAA and your doctor.”
“If you start uploading medical records that contain identifiable data like your address and your date of birth into these systems, that’s a big problem for both privacy and security,” Dr. Beuther added.
Again, the best way to apply the technology seems to be using AI to educate yourself about general conditions and to prepare the concerns you want to raise with your doctor. Even then, both Dr. Beuther and Kellermeyer emphasized the importance of verifying responses.
“Always ask the AI where it’s getting its information,” said Kellermeyer. “Ask for sources. Ask if it’s recent information. Sometimes, this can also help patients do useful research before their appointments.”
The Takeaway
Even though AI represents an exciting new tool for understanding health issues, the fundamentals haven’t changed. The most reliable health information still comes from trusted sources, and clinical decisions still require professional judgment.
Used thoughtfully, AI can help patients feel more informed, more prepared and more engaged in their care. Used without caution, it can create confusion, false reassurance or unnecessary fear.
“There’s no substitute for the interaction between doctor and patient,” said Dr. Beuther.
You can learn more about using AI in health care from this downloadable handout produced by our hospital’s library team.
This information has been approved by David Beuther, MD, and Liz Kellermeyer, MSLS (March 2026).