AI will transform the doctor-patient relationship

Some time ago, I discovered an online calculator meant to help heart surgeons determine patients’ chances of complications or death. Surprisingly, the calculator, based on published studies, is not password-protected. A patient willing to wade through a thicket of technical terms could use the information in their electronic health record to manually fill in the needed numbers and discover their surgical risk.

In contrast, consider the Cosmos platform assembled by the giant EHR company Epic. It comprises 300 million de-identified patient records from a mind-boggling 15.7 billion patient encounters with 440,000 physicians at thousands of hospitals and clinics. With the press of a button you can get a personalized analysis of “real-world evidence,” based on the patient’s medical record, showing what happened to similar patients receiving a particular treatment for almost any disease.

But there’s a catch: The “you” here isn’t the patient. Cosmos and other platforms parsing real-world evidence or the evidence in the medical literature are largely available only to professionals.

At least for now. The sociologist Eliot Freidson famously defined medical professionals as “possessing a monopoly over a body of knowledge that is relatively inaccessible to lay people.” Now, as AI makes it increasingly possible for patients to find, create, control and act upon an unprecedented breadth and depth of personalized and reliable health information, the monopolistic medical model is collapsing.

Yet tearing down a hierarchy can as easily lead to confusion as constructive change. To prevent data democratization from devolving into disarray, there’s an urgent need for a new structure that adapts the doctor-patient relationship to the AI era. I propose a framework I call “collaborative health,” made up of three pillars: shared information, shared engagement, and shared accountability.

The Trump administration’s commitment to accelerating AI adoption has recently gained a high profile, both in a wide-ranging executive order, America’s AI Action Plan, issued by the president on July 23, and in a health care AI initiative unveiled just a week later. The Centers for Medicare & Medicaid Services announced creation of a “digital health ecosystem,” whereby more than 60 health tech companies promised to make health data more accessible and to develop apps that help individuals more easily and effectively use their data to improve their health and health care.

Unfortunately, the magnitude of disruption accompanying data democratization is something most doctors still can’t see. Physicians have focused on, “Can AI help me provide the best possible care to my patients?” I have yet to see any doctor ask, “Can AI help my patients find the best possible care, even without me?” An ecosystem, after all, is made up of autonomous, albeit connected, elements.

Or as Bob Dylan bluntly advised in a different era of disruption: “Your sons and your daughters/ Are beyond your command/ Your old road is rapidly agin’/ Please get out of the new one/ If you can’t lend your hand/ For the times they are a-changin’.”

A recent National Academy of Medicine report titled “An Artificial Intelligence Code of Conduct for Health and Medicine” unfortunately demonstrates just how oblivious the profession remains in some ways to the speed and magnitude of the changing times. In its recommendations for informed consent, the academy still sees physicians as, essentially, custodians. It says doctors should protect “the health, safety and well-being of patients” by providing “patient-centered, culturally appropriate language” about AI tools and then letting patients opt out of care that uses them. (Disclosure: I’m a member of a National Academy workgroup involved in a separate report.) A recent JAMA Perspective focused on AI risk, titled, “Ethical Obligations to Inform Patients About Use of AI Tools,” took a similarly narrow, custodial approach.

Conspicuously absent is any recognition that “informed” consent involving all pertinent information should clearly include disclosing what the doctor knows (or should know) about personalized treatment data provided by reliable AI analytics. An AI code of conduct could even include a requirement to make accessible versions of this type of information available to patients. CMS should strongly consider nudging providers to “voluntarily” promise to do that, once the tech companies live up to their own voluntary commitment.

To be clear, patients still need doctors to “lend a hand,” in Dylan’s words. Even the AI Jedis of the #PatientsUseAI Substack emphasize that their data-driven discoveries can’t fully substitute for a discussion with an expert physician. Real-world evidence and randomized controlled trials, like advice from human doctors, are prey to hidden flaws, biases and other limitations.

But the context of the doctor-patient relationship is key. Forty years ago, the medical ethicist Jay Katz wrote about the “oddity of physicians’ insistence that patients follow doctors’ orders.” There was, he said, a better way.

“I believe patients can be trusted,” Katz wrote. “If anyone were to contest that belief, I would ask: ‘Can physicians be trusted to make decisions for patients?’” Both must be trusted, Katz concluded, but only “if they first learn to trust each other.”

The collaborative health framework is designed to help create and sustain that level of mutual trust at a time of rapid technological change. Bearing in mind that privacy and security concerns are always paramount, here is a very brief description of its three components.

Shared information describes a two-way street. While physicians should commit to fully sharing AI-enabled information — particularly as personalized, predictive analytics proliferate — patients, too, must commit to candor.

In a recent conversation about information sharing, for example, Epic chief medical officer Jackie Gerhart told me that as a family doctor she wants to know if a patient is receiving medical advice or medication from outside sources — but she also believes transparency should go both ways. “I definitely want to be able to share the Cosmos screen with the patient to help their care decisions,” she said.

Shared engagement is trickier, but vital. Health and Human Services Secretary Robert F. Kennedy Jr. has said that within four years he wants all Americans to use wearables to monitor and improve their health. However, that requires clinicians and patients to interact effectively without being overwhelmed by too much information, particularly when dealing with chronic conditions. To make that happen, we need rules, incentives, and appropriate technology. HHS should study the new European Union guidelines requiring app stores and those who develop algorithms for apps to comply with the same regulations as medical devices, and should also learn the lessons of enthusiastic overuse of glucose monitors by some non-diabetics.

Shared accountability is the most sensitive element. As individuals gain control over their health information, that control needs to be accompanied by greater responsibility for use of that information. If we’re building an ecosystem to replace a hierarchy, “it’s not a system if only one person has responsibility and accountability,” Philip R.O. Payne, chief health AI officer for BJC Health and the Washington University School of Medicine, told me.

The digital health ecosystem theme sounded by CMS was reinforced in a personal way in a YouTube video posted by Amy Gleason, a special adviser to CMS Administrator Mehmet Oz. As someone with both a tech entrepreneur background and a daughter with a rare disease, Gleason related how her family had used an AI analysis of daughter Morgan’s medical record to unearth information enabling Morgan to enroll in a clinical trial for which she’d previously been rejected as ineligible. That acceptance, said Gleason, represented “the first real hope we’ve had in over 15 years.”

As for CMS’s tech ecosystem initiative, Gleason added, “This isn’t just a showcase — it’s a national sign of acceleration. It’s about action.” Since CMS decisions affect a staggering $1.5 trillion spent on care each year, those messages carry considerable weight.

Albert Schweitzer, whose humanitarian work garnered the Nobel Peace Prize, once advised, “We are at our best when we give the doctor who resides within each patient a chance to go to work.”

While the destruction being wrought by AI is an understandable cause of anxiety, it also represents a rare opportunity to reimagine a new dynamic for the doctor-patient relationship. AI can help bring together physicians and the “doctor who resides within each patient” in a relationship of shared learning where “making America healthy again” begins with mutual collaboration and trust.

Michael L. Millenson is president of Health Quality Advisors LLC and author of the book “Demanding Medical Excellence: Doctors and Accountability in the Information Age.”