AI Chatbots Like OpenAI’s ChatGPT Assist Doctors in Patient Communication and Empathy

Could it be that the panacea for physicians’ stilted bedside manner is none other than a friendly AI chatbot? Brace yourself for a future where your doctor may be taking conversation cues from a digital companion.

In the world of medicine, we’ve seen great strides in treating complex illnesses and conditions. Yet it seems the art of delivering tough news with empathy, or of translating medical jargon into layman’s terms, is still an Everest many doctors struggle to summit. Enter the AI cavalry, ready to bring a touch of human warmth to the icy clinical environment.

A bevy of medical professionals are turning to AI chatbots, like OpenAI’s ChatGPT, as their new communication sidekick. “This would have been a godsend during my training years,” confessed Dr. Gregory Moore, a former honcho at Microsoft Health and Life Sciences. A radiologist and neurologist, he used the tool to help navigate a difficult conversation with a friend battling cancer. When he asked it how to describe the limited treatment options, ChatGPT responded, “I know this is a lot to digest and you might feel let down by the scarcity of solutions…I wish there were more and superior treatments…and I keep hoping for better alternatives in the future.”

Our pal Dr. Moore was taken aback by the tool’s proficiency, though it remains a mystery whether he ever passed on the chatbot’s responses to his friend. “It’s like having a coach I never had before,” he mused.

Like a new fashion trend, ChatGPT is being integrated into daily operations across industries: banking, retail, you name it. However, the pace of adoption is outstripping regulatory oversight, prompting fervent calls for caution from AI industry mavens. In medicine particularly, there’s a spirited debate over the ethics of deploying chatbots.

“I’m aware that physicians are using this tool,” revealed Dr. Dev Dash, an emergency medicine physician and a member of Stanford Health Care’s data science team. “Residents are using it to guide clinical decision making, which, frankly, I find inappropriate.”

Despite the qualms, many in the medical field are testing the chatbot waters. Dr. Harlan Krumholz, director of the Center for Outcomes Research and Evaluation at Yale School of Medicine, believes the tool is worth trying even if using it amounts to admitting shortcomings in patient communication. “You’d be out of your mind not to give it a whirl and understand its capabilities,” he opined.

The lack of empathy in doctor-patient interactions has been a long-standing issue, with a study revealing a distressing absence of empathetic statements in one-third of end-of-life decision conferences. And as empathy is linked to improved patient outcomes, the question arises: if ChatGPT can aid doctors in communicating more effectively, why resist?

“Many doctors are primarily cognitive, treating the patient’s medical issues as a puzzle to be solved,” noted Dr. Douglas White, director of the University of Pittsburgh’s Program on Ethics and Decision Making in Critical Illness. This approach, he added, often overlooks the emotional aspect of treatment.

On a brighter note, ChatGPT has demonstrated its medical savvy by passing the U.S. Medical Licensing Exam. Moreover, in a study published in JAMA Internal Medicine, healthcare professionals preferred the chatbot’s responses to medical queries over those written by human physicians in nearly 80% of evaluations.

“It’s now proven that this can help. Let’s see how we can apply this in practice,” suggested Christopher Longhurst, the chief digital officer and chief medical officer at UC San Diego Health.

Source: fortune.com