There's a phenomenon in medicine called the “doorknob phenomenon,” in which a patient finally voices what has long been bothering them just as the visit is ending, often with a hand already on the doorknob. A version of this happened to me recently.
In the midst of a crowded outpatient clinic, I was handed a chart with a referral attached that read “NYD Lymphadenopathy”: swollen lymph nodes, “not yet diagnosed.” My patient was a quiet, middle-aged man, and in talking to him, I learned he lived an ordinary life. His neck and armpits were dotted with pea-sized lumps, which I rolled between my fingers as I flipped through a mental checklist of possible diagnoses: medication effects, infectious exposures, the spread of a cancer. I told him we still didn't know the cause, trying to be reassuring.
But as the visit went on, I noticed a change in his expression. As I began to explain the next steps, his feet shuffled and he tugged nervously at his sleeve. “I think something else is bothering you,” I said. His eyes welled up, and he began to cry.
This is where I stumble. As a physician, my focus is firmly tied to healing in a very narrow sense: following a blueprint that takes in signs and symptoms, makes a diagnosis, and suggests a treatment. In reality, this script tends to fall apart, because beyond it lie larger, more intractable forces that shape a patient's health.
I spend a lot of time learning about my patients' plights: a lack of money or food, struggles to find work or housing, drugs whose grip won't loosen. These situations are rarely simple, and their solutions rarely straightforward. Which leads me to wonder, more and more: How do such messy human problems fit into the future of health care, one that aims to be neatly packaged and guided by the artificial intelligence (AI) algorithms so many are eager to adopt?
Physicians are generally reluctant to unlearn the practices handed down to them. But many now believe that AI can meaningfully enhance clinical competency and efficiency, making the technology an attractive paradigm shift for busy doctors.
But how patients receive these algorithmic insights is much less clear. If an AI were to do the emotional heavy lifting for me and craft email replies “with empathy,” as a recent study published in JAMA Network Open suggests it can, I could imagine growing more out of touch with, and less empathetic toward, my patients' genuine concerns. We also know that algorithms encode factors such as race and ethnicity and can perpetuate the biases bound up with them; they can, for example, indefinitely flag patients who need opioids as potential addicts. AI may significantly ease our workload in some cases, but we must guard against the risk of overdiagnosing patients.
Adopting a more formulaic approach to health care can seem like a natural evolution. Clinical scoring systems, which I and many other physicians use regularly, estimate aspects of a patient's health that we could not otherwise reasonably predict. By inputting parameters such as heart rate, age, and measures of liver function, we get a range of probabilities: the likelihood of a blood clot in the lungs, of a heart attack within the next 10 years, or of steroids helping someone whose liver is inflamed by alcohol.
But the more we become accustomed to these methods, the less certain their truth becomes: A person's health follows a multifaceted path set as much by the mysteries of biology as by the realities of where they live, grow, and work.
An algorithm's outputs do not necessarily reflect the individual behind its inputs. And when we focus our understanding of health on the minutiae of genes and molecular structures, the bigger picture of how fundamental social systems underpin our existence falls out of focus. According to Statista, a global data platform, the value of AI in health care exceeded $11 billion in 2021; ten years from now, depending on which projection you read, that figure is expected to increase tenfold. Meanwhile, state and local health departments are neglected, and efforts to close gaps in critical areas like housing, education, and mental health remain chronically underfunded and subject to punishing budget cuts.
I often think about what gives meaning to the care we provide to patients, and whether that care will be shaped more by the precision and calculation of the coming technological revolution or by further investment in the social safety nets beneath it. Ultimately, I come to the same conclusion: neither is of much use without the singular personal connections that sustain us.
I don't see my patients as a collection of data points, and I'd like to think they don't see me as just a flesh-and-blood central processor. Understanding my patients more deeply means sitting with these very human moments and looking beyond the neat, tidy paths that algorithms chart. It means staying curious about all of life's wrinkles, and trying to see illness in a larger context, one that recognizes my patients as what they are and always will be: distinctly imperfect human beings, just like me.
One day, an AI might parse a patient's answers to a series of on-screen questions against a database and diagnose the cause of swollen lymph nodes like my patient's, possibly before I've even met him. (For us, it took a second visit, several weeks later, to arrive at a diagnosis.) But I think that speaking about my own uncertainties and human limitations helped my patient confront his own. He told me that he had moved from the city to the countryside to help his aging parents, but they soon died. The loss left him hopeless, unemployed, depressed, and addicted to drugs.
“We'll help you,” I said, “slowly, one step at a time.”
Arjun V. K. Sharma is a physician whose writing has appeared in The Washington Post, the Los Angeles Times, and The Boston Globe, among other newspapers.
This article was originally published by Undark on May 30, 2024.