A.I. Will Change Medicine but Not What It Means to Be a Doctor

When faced with a particularly tough question on rounds during my intern year, I would run straight to the bathroom. There, I would flip through the medical reference book I carried in my pocket, find the answer and return to the group, ready to respond.

At the time, I believed that my job was to memorize, to know the most arcane of medical eponyms by heart. Surely an excellent clinician would not need to consult a book or a computer to diagnose a patient. Or so I thought then.

Not even two decades later, we find ourselves at the dawn of what many believe to be a new era in medicine, one in which artificial intelligence promises to write our notes, to communicate with patients, to offer diagnoses. The potential is dazzling. But as these systems improve and are integrated into our practice in the coming years, we will face complicated questions: Where does specialized expertise live? If the thought process to arrive at a diagnosis can be done by a computer “co-pilot,” how does that change the practice of medicine, for doctors and for patients?

Though medicine is a field where breakthrough innovation saves lives, doctors are — ironically — relatively slow to adopt new technology. We still use the fax machine to send and receive information from other hospitals. When the electronic medical record warns me that my patient’s combination of vital signs and lab abnormalities could point to an infection, I find the input to be intrusive rather than helpful. A part of this hesitation is the need for any technology to be tested before it can be trusted. But there is also the romanticized notion of the diagnostician whose mind contains more than any textbook.

Still, the idea of a computer diagnostician has long been compelling. For decades, doctors have tried to build machines that can “think” like a doctor and diagnose patients: a Dr. House-style program that can take in a set of disparate symptoms and suggest a unifying diagnosis. But early models were time-consuming to use and ultimately not particularly useful in practice. Their utility remained limited until advances in natural language processing made generative A.I., in which a computer can actually create new content in the style of a human, a reality. This is not the same as looking up a set of symptoms on Google; instead, these programs have the ability to synthesize data and “think” much like an expert.

To date, we have not integrated generative A.I. into our work in the intensive care unit. But it seems clear that we inevitably will. One of the easiest ways to imagine using A.I. is when it comes to work that requires pattern recognition, such as reading X-rays. Even the best doctor may be less adept than a machine when it comes to recognizing complex patterns without bias. There is also a good deal of excitement about the possibility for A.I. programs to write our daily patient notes for us as a sort of electronic scribe, saving considerable time. As Dr. Eric Topol, a cardiologist who has written about the promise of A.I. in medicine, says, this technology could foster the relationship between patients and doctors. “We’ve got a path to restore the humanity in medicine,” he told me.

Beyond saving us time, the intelligence in A.I. — if used well — could make us better at our jobs. Dr. Francisco Lopez-Jimenez, the co-director of A.I. in cardiology at the Mayo Clinic, has been studying the use of A.I. to read electrocardiograms, or ECGs, which are a simple recording of the heart’s electrical activity. An expert cardiologist can glean all sorts of information from an ECG, but a computer can glean more, including an assessment of how well the heart is functioning — which could help determine who would benefit from further testing.

Even more remarkably, Dr. Lopez-Jimenez and his team found that when asked to predict age based on an ECG, the A.I. program would from time to time give an entirely incorrect response. At first, the researchers thought the machine simply wasn’t great at age prediction based on the ECG — until they realized that the machine was offering the “biological” rather than chronological age, explained Dr. Lopez-Jimenez. Based on the patterns of the ECG alone, the A.I. program knew more about a patient’s aging than a clinician ever could.

And this is just the start. Some studies are using A.I. to try to diagnose a patient’s condition based on voice alone. Researchers point to the potential of A.I. to speed drug discovery. But as an intensive care unit doctor, what I find most compelling is the ability of generative A.I. programs to diagnose a patient. Imagine it: a pocket expert on rounds with the ability to plumb the depths of existing knowledge in seconds.

What proof do we need before we use any of this? The bar is higher for diagnostic programs than it is for programs that write our notes. But the way we typically test advances in medicine — a rigorously designed randomized clinical trial that takes years — won’t work here. After all, by the time the trial was complete, the technology would have changed. Besides, the reality is that these technologies are going to find their way into our daily practice whether they are tested or not.

Dr. Adam Rodman, an internist at Beth Israel Deaconess Medical Center in Boston and a historian, found that the majority of his medical students are already using ChatGPT, to help them on rounds or even to help predict test questions. Curious about how A.I. would perform on tough medical cases, Dr. Rodman gave the program the notoriously challenging New England Journal of Medicine weekly case and found that it offered the correct diagnosis in a list of possible diagnoses just over 60 percent of the time. This performance is most likely better than any individual could accomplish.

How those abilities translate to the real world remains to be seen. But even as he prepares to embrace new technology, Dr. Rodman wonders if something will be lost. After all, the training of doctors has long followed a clear process — we see patients, we struggle with their care in a supervised environment and we do it over again until we finish our training. But with A.I., there is the real possibility that doctors in training could lean on these programs to do the hard work of generating a diagnosis, rather than learn to do it themselves. If you have never sorted through the mess of seemingly unrelated symptoms to arrive at a potential diagnosis, but instead relied on a computer, how do you learn the thought processes required for excellence as a doctor?

“In the very near future, we’re looking at a time where the new generation coming up are not going to be developing these skills in the same way we did,” Dr. Rodman said. Even when it comes to A.I. writing our notes for us, Dr. Rodman sees a trade-off. After all, notes are not simply drudgery; they also represent a time to take stock, to review the data and reflect on what comes next for our patients. If we offload that work, we surely gain time, but maybe we lose something too.

But there is a balance here. Maybe the diagnoses offered by A.I. will become an adjunct to our own thought processes, not replacing us but allowing us all the tools to become better. Particularly for those working in settings with limited specialists for consultation, A.I. could bring everyone up to the same standard. At the same time, patients will be using these technologies, asking questions and coming to us with potential answers. This democratizing of information is already happening and will only increase.

Perhaps being an expert doesn’t mean being a fount of information; it means synthesizing and communicating information and using judgment to make hard decisions. A.I. can be part of that process, just one more tool that we use, but it will never replace a hand at the bedside, eye contact, understanding — what it is to be a doctor.

A few weeks ago, I downloaded the ChatGPT app. I’ve asked it all sorts of questions, from the medical to the personal. And when I am next working in the intensive care unit, when faced with a question on rounds, I just might open the app and see what A.I. has to say.

Daniela J. Lamas (@danielalamasmd), a contributing Opinion writer, is a pulmonary and critical-care physician at Brigham and Women’s Hospital in Boston.
