Artificial Intelligence has become one of the most talked-about technologies in recent years, with new tools emerging in everything from personal productivity to healthcare. As search engines and hospitals embrace AI to improve efficiency and access to information, patients are beginning to encounter this technology more frequently, sometimes without realizing it. But what does this shift mean for the way you understand your care or make healthcare decisions?
Not long ago, turning to WebMD at the first sign of illness was so common it became a running joke, with searchers quickly overwhelmed by worst-case-scenario results. While the medical content on WebMD and similar sites is reviewed by professionals, the rise of AI-generated summaries has changed how search results appear. Google’s “AI Overviews,” for example, now synthesize answers from multiple sources. However, reports show that these overviews can include incorrect or misleading statements, sometimes distorting the very sources they cite. Even when an AI Overview links back to the original websites, those sources are not always reliable, and the rewriting process itself can introduce factual errors.
This concern is echoed by experts at MD Anderson Cancer Center, a leader in oncology research and care. Dr. Caroline Chung, the center’s chief data and analytics officer, explained that AI tools often provide polished, confident answers that can psychologically encourage users to trust them. However, that sense of confidence can be misleading. Dr. Shawn Stapleton, director of Data Impact and Governance at MD Anderson, added that tools like ChatGPT rely on large language models, or LLMs, which generate responses by predicting likely word sequences based on patterns in the data they were trained on. These models are not pulling facts from a database; they are assembling words based on probabilities. Their training data, gathered from across the internet, including websites, open-source materials, and textbooks, mixes accurate and inaccurate information, often without clarity about the original source or when it was last updated.
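For readers who want a concrete picture of what “predicting likely word sequences” means, the short Python sketch below is an illustration only, not any real medical AI, and uses made-up probabilities. It shows the basic idea: the model weighs possible next words by how likely they seem and picks one, which is why an answer can sound confident without being verified.

```python
import random

# Toy illustration only: a language model scores possible next words by
# probability and samples one. It is matching patterns from its training
# text, not looking up verified facts. The probabilities below are made up.
next_word_probabilities = {
    "benign": 0.45,
    "malignant": 0.35,
    "inconclusive": 0.20,
}

words = list(next_word_probabilities)
weights = list(next_word_probabilities.values())

# The "answer" is simply a likely-sounding continuation, which is why a
# polished, confident sentence can still be wrong.
chosen = random.choices(words, weights=weights, k=1)[0]
print("The lump is most likely", chosen)
```

Run a few times, the sketch gives different answers to the same question, a reminder that fluent wording is not the same thing as verified medical fact.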
Dr. Stapleton emphasized the importance of verifying AI-generated responses through traditional web searches, particularly by checking reputable sources like peer-reviewed medical journals, educational institutions, or government health organizations. Dr. Chung recommended viewing AI like a stranger giving advice. While the information might be worth considering, it should never replace consultation with healthcare professionals or thorough personal research.
Outside of search engines, many people are also using AI in the form of conversational chatbots. One widely known example is ChatGPT, a chatbot created by OpenAI. Recognizing the growing presence of these tools, the National Cancer Institute (NCI) has been studying how patients use chatbots for both medical questions and emotional support. Dr. Danielle Bitterman, from the Artificial Intelligence in Medicine Program at Mass General Brigham in Boston, led two studies exploring this issue. In one study, her team asked ChatGPT version 3.5 to explain standard treatments for various cancers, such as stage I breast cancer. While most of the responses included at least one recommendation that aligned with expert guidelines, nearly one third also contained advice that did not match clinical standards. In another study, researchers asked four different chatbots to define common cancers. The responses were generally accurate, but many were written in highly technical language that could be difficult for patients to understand. Both studies were published in the Journal of the American Medical Association.
Dr. Wen-Ying Sylvia Chou, a health communication researcher at NCI who was not involved in the studies, noted that these findings reinforce why AI chatbots should not yet be relied upon for accurate or comprehensive cancer information. At the same time, she acknowledged that AI is here to stay and encouraged a broader conversation about its role in healthcare and communication. As new systems continue to be developed, questions remain about how to ensure they use the most current and inclusive medical data. Dr. Chou pointed out that if the datasets used to train AI tools leave out specific patient populations, the resulting responses could reflect bias and inaccuracies that impact care.
In addition to its role in communication, AI is also beginning to influence how cancer is detected and treated. A study available through the National Library of Medicine described an AI system trained on more than 288,000 breast ultrasound exams. The tool was designed to match the accuracy of radiologists in identifying breast cancer. On a test set of over 44,000 exams, the AI achieved an area under the receiver operating characteristic curve of 0.976, outperforming the average performance of ten board-certified breast radiologists. When radiologists worked alongside the AI system, their false-positive rates dropped by more than 37 percent and they requested nearly 28 percent fewer biopsies, all without reducing the sensitivity of their diagnoses. These results suggest that AI could make breast ultrasound diagnosis more accurate, consistent, and efficient.
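For readers curious about the metric in that study, the area under the receiver operating characteristic curve (AUC) summarizes how well a model ranks exams that truly show cancer above exams that do not, where 1.0 is perfect and 0.5 is no better than chance. The short Python sketch below uses made-up numbers rather than the study’s data, simply to show how such a score is computed.

```python
from sklearn.metrics import roc_auc_score

# Illustration only, with made-up numbers (not the study's data):
# 1 means the exam truly showed cancer, 0 means it did not, and each score
# is the model's estimated probability of cancer for that exam.
true_labels  = [0, 0, 0, 1, 0, 1, 1, 0, 1, 0]
model_scores = [0.05, 0.20, 0.10, 0.90, 0.30, 0.80, 0.65, 0.15, 0.95, 0.40]

# An AUC of 1.0 means cancer exams are always ranked above benign ones;
# 0.5 is no better than chance. The study reported 0.976 on real exams.
print("AUC:", roc_auc_score(true_labels, model_scores))
```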
The Cancer Research Institute supports the growing role of AI in prevention and early detection, calling these two areas the most powerful tools in the fight against cancer. One example involved a woman with a persistent thyroid lump. She initially received a benign diagnosis following a traditional ultrasound and biopsy. A second opinion confirmed the same result, but the second radiologist used an AI-supported thyroid ultrasound tool to reach their conclusion. If she had begun with the AI-enhanced imaging, she might have avoided the biopsy altogether and been spared the anxiety and waiting that come with such procedures.
At Penn Medicine, researchers are also working to develop tools that use AI to identify cancer cells that can be difficult or impossible to detect with the human eye. These tools process large quantities of imaging data in a short time, helping physicians flag areas of concern for further investigation. In particular, AI is being trained to analyze scans such as MRIs to locate and highlight tumor-like structures quickly and efficiently, which can then be reviewed more thoroughly by radiologists and oncologists. These technologies are not yet standard in most clinical settings, but they represent a growing movement toward precision medicine, where treatment plans are tailored to individual patients using advanced data analysis.
AI also plays a role in predicting how a tumor may respond to different treatments and in personalizing care based on a patient’s genetic makeup. By analyzing genomic data in depth, AI systems can help identify patterns that inform treatment decisions and improve outcomes. The goal is not to replace healthcare professionals, but to equip them with tools that enhance their capabilities and allow them to focus on the human side of care.
While AI tools can offer quick answers and impressive data processing, they are not equipped to offer the kind of context, compassion, or emotional understanding that people often seek when facing a difficult diagnosis. If you have questions about your care, or if you feel overwhelmed and are looking for guidance, Pillar Patient Advocates is here for you. Our board-certified patient advocates can help you interpret medical information, ask the right questions, and provide the emotional support you may be seeking from an AI conversation. We are real people who understand the challenges of navigating the healthcare system, and we are ready to support you every step of the way.