The Medical Minute: Is Dr. ChatGPT doing more harm than good?

It starts innocently enough. Just a quick question: “How do you know a mole is cancer?”

And despite knowing that asking the internet medical questions will only convince you that you have six weeks to live, you type the words anyway and brace for news of your untimely demise.

Search engines have always been a portal to too much – and not necessarily the right – information. Now, with the explosion of generative artificial intelligence (AI), patients have even more ways to seek medical advice without stepping into a doctor’s office. Some AI engines, like ChatGPT Health, even allow patients to connect their medical records and health-tracking apps for more personalized advice.

Replacing your primary care physician with Dr. AI could be dangerous, but physicians at Penn State Health are finding innovative and safe ways to use this evolving technology to improve patient care.

The User Is Always Right, Even When They’re Wrong

Dr. Daniel Schlegel, a family medicine physician at Penn State Health Medical Group – Middletown and medical director of Penn State Health Virtual Primary Care, says patients have always come to appointments with preconceived notions based on outside advice.

“Patients often bring other sources of information into visits, whether they consulted their best friend first or an aunt who’s a nurse or home remedies their grandmother taught them,” Schlegel says.

However, AI comes with risks that friendly advice lacks. These platforms are designed to validate the user. Schlegel says if a patient asks questions that lead toward a certain conclusion, the AI chatbot will often answer in a way that confirms that suspicion, even when it’s wrong.

“AI could lead you down a road you might want to be led down anyway,” he says. “The degree to which you trust the source telling you this, whether it’s a friend or a bot, could make it difficult for you to align with what your provider is recommending. There’s a real risk of this advice being competitive rather than complementary.”

Find a Penn State Health family medicine provider near you.

A Tool in the Right Hands

While AI might not provide patients the best “research,” physicians are using it to access more data to better care for their patients. Schlegel says Open Evidence, an AI-powered physician co-pilot, sources information only from reputable research journals and medical associations, ensuring the data is accurate and appropriate. The platform helps him quickly access trusted information, allowing him to answer patients’ questions with clinical data from inside the medical community.

Dr. Christopher DeFlitch, vice president and chief medical information officer at Penn State Health, points to another AI tool assisting doctors in the exam room.

Penn State Health is testing Abridge, an AI tool focused on clinical documentation. Abridge records and transcribes patient visits in real time, then turns them into usable notes for the electronic health record. Physicians review and edit the notes as needed to ensure accuracy. The tool helps physicians capture details that might otherwise be missed.

“Sometimes, when a patient comes in for one complaint but then asks about a secondary issue on the fly, Abridge helps us capture both issues, document in detail and follow up, rather than just focus on the primary reason for the visit,” DeFlitch says.

Many physicians are also using AI-powered remote patient monitoring, Schlegel says. His office partners with a program that uses a Bluetooth-enabled blood pressure cuff connected to a phone app. The app prompts the patient to check their blood pressure, automatically uploads the readings to their chart and notifies providers if readings trend in the wrong direction. Clinicians in the program can then give tailored advice on diet and exercise – support that primary care doctors seldom have time to provide. Schlegel says of the 1,000 patients enrolled in the program, more than half have gone from uncontrolled to controlled blood pressure.

AI Guardrails for Safe Use

Although AI shouldn’t replace your relationship with your doctor, Schlegel and DeFlitch know many patients will still consult Dr. AI occasionally. They recommend keeping a few things in mind.

Privacy – Medical information shared in a doctor’s office is protected under the Health Insurance Portability and Accountability Act (HIPAA). You may not have the same protections when using an AI engine. Only disclose what you feel comfortable sharing online.

Knowledge source – Just as you wouldn’t visit an eye doctor for a skin issue, asking a general-purpose AI engine for medical advice may get you close to an answer, but it probably won’t give you useful information about your particular concern, DeFlitch says.

“It’s important to think about what you’re asking, who you’re asking and what knowledge base that answer is coming from,” DeFlitch says. “We joke about Dr. Google, but at least you select what you’re going to look at. General AI engines go to the same sources, make decisions about which information applies to you and summarize it for you. Good search engines show their sources, but many don’t. You have to be careful.”

So, is it safe to use AI in health care? The answer is: It depends.

“Asking AI is kind of the Wild West of technology,” DeFlitch cautions. “Don’t take it on blind trust that you’re getting the right answer.”

Related content:

Learn more about Penn State Health Virtual Primary Care.

The Medical Minute is a weekly health news feature produced by Penn State Health. Articles feature the expertise of faculty, physicians and staff, and are designed to offer timely, relevant health information of interest to a broad audience.

If you're having trouble accessing this content, or would like it in another format, please email Penn State Health Marketing & Communications.