AI in Healthcare: Designed for Progress or Profit?

By Crystal Lindell

As a pain patient, I take a controlled substance medication, which means every single time I need a refill I have to contact my doctor. 

It doesn’t matter that this refill comes every 28 days and that I have been getting it refilled every 28 days for years. It doesn’t matter that my condition has no cure, and that I will most likely need this medication refilled every 28 days for the foreseeable future.

No. I have to make sure to contact my doctor and specifically ask for it, every single time.  

There are ways to automate this process. They could give me a set number of automatic refills and have them sent to the pharmacy every 28 days. If we were even more practical, they could just give me 60 to 90 days' worth of pills at a time, and save me multiple trips to the pharmacy. 

But because of insurance rules, hospital policies and opioid-phobia legislation, all of those options are impossible. In fact, they actively turn a process that could be automated into one that has to be done manually. 

Which is why I’m so skeptical of Artificial Intelligence (AI) in healthcare. 

The promise of AI is that it can automate away the mundane tasks so many of us hate doing. Many health-related tasks could easily be automated. They just purposefully are not. 

The hospital I go to for my medical care, University of Wisconsin-Madison, recently released a report filled with recommendations for how AI should be integrated into healthcare. It was based on a recent roundtable discussion that included healthcare professionals from across the country. 

But while the participant list included doctors, IT staff, policy experts, and academics, there was one glaring absence: it included exactly zero patients. 

UW Health was one of the organizers for the panel, along with Epic, a healthcare software developer. Their report includes some seemingly good recommendations. 

They ask that AI be used to supplement the work that doctors, nurses and other healthcare staff perform, as opposed to replacing the staff altogether. They say AI could be a great tool to help reduce staff burnout. 

They also recommend that the technology be set up in a way that helps those living in rural areas as well as those in more metropolitan ones. The report also emphasizes that healthcare systems should prioritize “weaving the technology into existing systems rather than using it as a standalone tool.”

Additionally, the report stressed the need for federal regulations to “balance space for innovation with safeguarding patient data and ensuring robust cybersecurity measures.”

I don’t disagree with any of that. But it’s a little frustrating to see those recommendations, when some of those problems could already be solved if we wanted them to be. 

And while the panel’s report is new, UW Health’s use of AI is not. 

In April, UW Health announced that they were participating in a new partnership program with Microsoft and Epic to develop and integrate AI into healthcare. 

At the time they said the innovation would be focused on “delivering a comprehensive array of generative AI-powered solutions… to increase productivity, enhance patient care and improve financial integrity of health systems globally.”

That’s the real motivation to bring AI into healthcare: make more money by improving “financial integrity.” Something tells me that AI won’t be used to lower patients’ bills, though. 

UW Health also recently shared that its nurses were using AI to generate responses to patients. More than 75 nurses used generative AI to help draft more than 3,000 messages across more than 30 departments.

“This has been a fascinating process, and one I’ve been glad to be part of,” said Amanda Weber, registered nurse clinic supervisor, UW Health. “I have found having a draft to start from helpful, and I’m glad I could provide feedback on improvements and features to ensure this can be a good tool for nurses and have a positive impact on our patients.”

Before I even knew about this program, I had a feeling that AI was involved. 

Recently, when I messaged my doctor about my upcoming refill, I received an overly formal, odd response that felt very much like generative AI writing to me. Which is fine. I honestly don’t mind if my doctor saves time by using AI to respond to patient emails. Heck, I myself have used AI to write first drafts of some emails. 

But my doctor and his staff wouldn’t even need to reply to my emails if he were allowed to set up automatic refills of my long-time medication instead. 

There are many ways to improve health care, and tools like generative AI are likely among them. But AI can’t solve problems that exist on purpose. 

Unless patients are at the forefront of the conversations about these tools, I fear they’ll only be used to solve the sole problem hospital administrators actually care about: how to make more money.