A research team airs the messy truth about AI in medicine — and gives hospitals a guide to fix it

Excerpt – The challenges uncovered by the project [reviews of AI compiled by researchers at Duke] point to a dawning realization about AI’s use in health care: building the algorithm is the easiest part of the work. The real difficulty lies in figuring out how to incorporate the technology into the daily routines of doctors and nurses, and the complicated care-delivery and technical systems that surround them. AI must be finely tuned to those environments and evaluated within them, so that its benefits and costs can be clearly understood and compared.

As it stands, health systems are not set up to do that work — at least not across the board. Many are hiring more data scientists and engineers. But those specialists often work in self-contained units that help build or buy AI models and then struggle behind the scenes to keep them working properly.

“Each health system is kind of inventing this on their own,” said Michael Draugelis, a data scientist at Hackensack Meridian Health System in New Jersey. He noted that the problems are not just technical, but also legal and ethical, requiring a broad group of experts to help address them. [..]

Many AI products, even if approved by the Food and Drug Administration, don’t come with detailed documentation that would help health systems assess whether they will work on their patients or within their IT systems, where data must flow easily between record-keeping software and AI models. Many vendors of commercially available AI systems do not disclose how their products were trained or the gender, age, and racial make-up of the testing data. In many cases, it is also unclear whether the data they employ will map to the data routinely collected by health care providers. [..]

Hospitals are ideal environments for building AI models to solve problems that arise in patient care. But of the more than 500 AI products that have been cleared by the FDA, none of those clearances went to health systems, which are more focused on patient care than on pushing AI tools through regulatory pipelines. Instead of submitting to that process, providers find ways to work around it, tweaking how AI models are used, and the guardrails around them, to avoid regulation. [..]

The reason for the regulatory line in the first place was that the FDA did not want to meddle with doctors’ decision making. “But that was assuming in most cases that it’s one physician in a room with a patient,” said Keo Shaw, another lawyer at the firm who participated in the research. “But obviously with AI, you can make a lot of decisions. That can happen very quickly.” [..]

One problem with putting so much emphasis on statistical performance, [Melissa] McCradden[, a bioethicist and AI specialist at The Hospital for Sick Children in Toronto,] said, is that it disregards so many other factors that may bear on an AI’s impact, such as a clinician’s judgment or the individual values and preferences of patients. “All of those things together can change the outcome,” she said.

That doesn’t necessarily mean that every AI intervention should be subjected to a randomized controlled trial. But it does underscore the need for a deeper exploration of whether an illness was detected accurately and in a timely way — and whether that detection spurred the right kind of follow-up care and ultimately changed the patient’s trajectory. Hospitals could also evaluate such tools by asking patients themselves whether the AI influenced their decisions and behaviors. [..]

Establishing that kind of surveillance is particularly difficult in health systems where IT specialists, data scientists, and clinicians work in separate departments that don’t always communicate about how their decisions might affect the performance of an AI model.

Draugelis, the data scientist at Hackensack Meridian, said those barriers point to the need for an engineering culture around AI systems, to ensure that data is always entered correctly and that controls are in place to flag errors and fix them quickly.
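The kind of controls described here can be pictured as a validation gate that checks records before they ever reach a model. The sketch below is purely illustrative — the field names, expected types, and clinical ranges are hypothetical, not anything reported from Hackensack Meridian — but it shows the basic idea: bad inputs are flagged with a specific error message rather than silently scored.

```python
# Illustrative sketch of an input-validation gate for an AI model.
# The fields and ranges below are hypothetical examples, not a real
# clinical schema.
from dataclasses import dataclass, field

@dataclass
class ValidationReport:
    errors: list = field(default_factory=list)

    @property
    def ok(self) -> bool:
        return not self.errors

# Expected fields: (type check is numeric, plus a plausible value range).
EXPECTED_FIELDS = {
    "age_years": (0, 120),
    "heart_rate_bpm": (20, 300),
}

def validate_record(record: dict) -> ValidationReport:
    """Flag missing, mistyped, or out-of-range inputs instead of
    passing them to the model unchecked."""
    report = ValidationReport()
    for name, (lo, hi) in EXPECTED_FIELDS.items():
        if name not in record:
            report.errors.append(f"missing field: {name}")
            continue
        value = record[name]
        if not isinstance(value, (int, float)):
            report.errors.append(
                f"wrong type for {name}: {type(value).__name__}")
            continue
        if not lo <= value <= hi:
            report.errors.append(f"{name}={value} outside [{lo}, {hi}]")
    return report

# A record with an implausible value (heart rate of 0) is flagged,
# not scored.
report = validate_record({"age_years": 54, "heart_rate_bpm": 0})
print(report.ok, report.errors)
```

In a real deployment this kind of check would sit between the record-keeping software and the model, with flagged records routed to someone who can correct them — the "flag errors and quickly fix them" loop the article describes.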

“If that really takes hold, then you have a team that represents all the collective skills needed to deliver these things,” he said. “That is what’s going to have to occur as we talk about AI and all these new ways of delivering care.”

Full article: C. Ross, STAT News, April 27, 2023