JAMA interviewed Sarah C Hull, associate director of the biomedical ethics program at Yale.
“[JAMA] [..] What are the biggest moral dilemmas that you think we’re facing with AI [artificial intelligence], especially within this area?
[Hull] I think that we need to be very careful when deploying tools that act human vs tools that decidedly do not act human. I think that ethically, it can become a lot more complex when you have really advanced generative AI that can seem very humanoid but, very importantly, does not possess moral agency and does not have fiduciary responsibility for the well-being of the patient.
Therefore, as I’ve argued in the paper, it can be very helpful to use AI tools as a second-pass mechanism, or as a supplement to clinical judgment to help with interpretation, or sometimes even decisional support when there isn’t a clear right or wrong answer and we want to really incorporate a lot of different data sources in helping us to make our decisions with patients. I think there needs to be a hard boundary where we say we can’t deploy this for tasks that require moral agency. [..]
That I think is categorically an area that we should stay away from. Any task that requires moral agency, we should not be delegating to an AI as the primary deliverer of that task. Then even with other tasks, because the machines don’t have fiduciary responsibility for patients—physicians and other clinicians do—we have to view these tools as a supplement rather than a replacement for what we do, at least for clinical tasks. Now, administrative tasks that do not require moral agency—that’s a different story and an area of real potential, where these tools may actually serve a great purpose and hopefully help to relieve us of some administrative burdens.
Although I am a bit cautious about that because I think we heard that about EMRs [electronic medical records] as well, that everything would be more streamlined. In fact, the opposite has been true. A lot of technologies that have promised to free up time actually have just served to really chain us to them. Email’s a lot more efficient than regular postal mail, and yet I think most of us feel quite chained to our emails. So I don’t think that really has served a liberating role at all. So I think we need to be very cautious about unintended consequences and overestimating the efficiency benefit of these tools.
[JAMA] [..] Do you think that there will be a point where we do have AI making those final decisions and replacing human care?
[Hull] Let’s say it’s a surgical procedure where the robot clearly has better outcomes than a human does. We practice evidence-based medicine, and if the evidence shows that outcomes are better with a machine than with a human, I can understand why it might be tempting to want to put the robot in the front seat. But then the question is, what if there is a complication? Who’s responsible for that? Who is taking that ethical responsibility, that fiduciary responsibility? The robot’s not taking it because the robot doesn’t have moral agency. So is it the person who designed the robot? I don’t think so. You can argue they should, but I don’t think that is realistically where things are going.
Someone needs to own the ethical repercussions of that outcome. So I think that question needs to be answered before we would ever move to that scenario. I think ultimately, the person who is going to take responsibility for that is still going to be the clinician. I don’t think clinicians are going to feel comfortable taking the responsibility for that if they’re not somehow involved, even if it’s simply overseeing the robot to make sure that the robot is functioning properly. You might quibble with my use of the phrase—a supplement rather than a replacement—because maybe it is serving in more of a replacement-type capacity then, but you still have some human oversight.
[..] There are a lot of cases where there isn’t going to be objectively one favorable outcome over another where many patients will say, “Well, I would much rather have outcome A in this scenario,” and then other patients will say, “Well, I would rather have outcome B because my goals and values and priorities are different.” So who’s going to do the moral work of interrogating those goals, values, and priorities? Again, are we asking the robots to do that? [..]
[JAMA] I wholeheartedly agree with you about the relationships. [..] I think it’s also why we’re going online and we have more connections, but we’re seeing that people are more lonely, right?
[Hull] Those connections don’t fix loneliness because they’re not real. That’s exactly right.
[JAMA] I think a lot of hospitals and institutions are already integrating generative AI into health care and potentially not being transparent about it. Patients are talking to chatbots without knowing it. What are your thoughts on that, and how do you prevent this overautomation and put guardrails in place to ensure that we are giving patients the relationship they need to heal?
[Hull] [..] The first step really has to be transparency, and that’s not to say that transparency in every possible context is going to be either feasible or even ethically necessary. For example, using interpretive AI as a second pass to verify one’s impression when reading a diagnostic imaging study—that’s just another technical tool. In general, there’s a pretty reasonable consensus that we don’t need to disclose to patients we’re using this brand of imaging software vs that brand of imaging software. When we’re just adding an additional enhancement for interpretive purposes, I think it’s really hard to argue that there’s an ethical need to disclose that, and also it’s going to create some feasibility issues.
But with generative AI that’s directly interfacing with patients, there are a couple of issues at play. Number one, of course, is just the transparency issue. If a patient believes that they’re interacting with a human when they’re actually interacting with a machine, then that’s deceptive. So that’s clearly ethically problematic.
Then there are a lot of concerns that generative AI may be collecting sensitive data if it’s interfacing with electronic medical records. There needs to be a lot of algorithmic transparency and assurances that these data are not going to be shared outside of a health system. So I think there are transparency issues, and privacy and security issues, and all of that needs to be really discussed explicitly prior to rollout.
If patients are using some communication portal that’s going to be relying heavily on chatbots, then I think it needs to be disclosed that that’s going to happen. Because otherwise, it’s hard to argue that we’re not bamboozling them, whether intentionally or unintentionally. Whether or not our intent is to be deceptive, if our impact is deception, then I think that’s ethically problematic.
[JAMA] If you could have AI in the room with you as a provider, what would you have it do? What do you think AI is most useful for when you’re in that scenario?
[Hull] One of my areas of focus in cardiology and particularly in cardio-oncology is lifestyle intervention. I think that this is actually another big area of ethical importance that is really underappreciated in cardiology and also in medicine in general. Poor diet has surpassed smoking as the number one risk factor for death in the United States. Eating an unhealthy diet is so normalized in the United States. The standard American diet is a very health-undermining diet.
We’re good at prescribing medications, and I want to be clear, we need to prescribe medications. I’m certainly not trying to espouse a false dichotomy of meds vs lifestyle; many of our patients do need both. But I think it’s a lot harder for us to prescribe lifestyle changes because those aren’t things that can fit neatly into a pill and be taken once or twice a day and then be forgotten about. So I do wonder whether there could be AI tools that provide support to patients once they’re outside of the clinic. I’m not aware of any such tools yet, but I’m imagining tools that empower patients to make healthier decisions.
I’d like to see less friction in the pursuit of healthier choices because right now, the pursuit of unhealthy lifestyles is so frictionless and there are so many barriers—financial barriers, educational barriers, spatial barriers for people who live in food deserts—to eating healthier food and just having a healthier lifestyle in general. If there’s a way that AI could be harnessed to help counter some of those social determinants of health, that would be at the top of my wish list. I am not really interested in having AI in the room with me and my patient. But when I can’t be there, maybe there’s a role that they could play.
Although I would want to test it rigorously to make sure it’s not giving bad recommendations that go against the medical evidence, and to make sure it’s culturally sensitive and sensitive to a patient’s socioeconomic and other circumstances, because I think sometimes we in the medical profession can be woefully unaware of those. Although we’re getting a lot better, we still have a lot of work to do.
[JAMA] What are you hoping that AI is going to bring to us in the next 5 to 10 years?
[Hull] Number one, that it would improve the quality of the care that we provide by giving us an extra level of precision and an extra level of interpretive or decisional support in difficult cases.
And then also improving access. Is there a way that AI can be leveraged to, again, lower some of the barriers that some of our most vulnerable patients face in terms of accessing health care? Although I think we need to be careful that we’re not using AI as a patch for the fact that vulnerable patients have trouble accessing care. So we’ll allow them to access a certain level of care, but they don’t get as much face time with clinicians as patients who already are more privileged. We have to be really careful that we’re not creating doubly disadvantaged patients by creating a 2-tiered system in which AI is being disproportionately leveraged for people who already are underserved by the medical profession.
Another thing that I’ve heard used as an argument for implementing or incorporating AI, but one that makes me very cautious, is the question of efficiency. Of course, we all want to be efficient, but I think we do need to recognize that efficiency and speed are not the same thing. If tools are helping us do things faster but are compromising quality, compromising the experience that the patient is having, and not meeting their moral needs, then just because they enable us to do something faster doesn’t mean that’s an unmitigated good. Efficiency is a great thing to have, but true efficiency is doing the best job you can in the least amount of time.
Patients already feel that they’re shuffled in and out too quickly at times, that the system is overburdened. I think it is in many ways, but I don’t think the answer to unburdening the system is to try to squeeze more and more throughput out of the same number of clinicians by saying, “Well, use AI so you can see people faster,” but not see them in as comprehensive, as complete, as high-quality, and as compassionate a way. I think that’s really going to further undermine our mission to give the best quality care to as many patients as possible. There’s also great potential to increase burnout, which we’ve already seen of course with EHRs [electronic health records].
[JAMA] I think my worst nightmare is that health care will become calling a customer service line and getting a robot or getting a chatbot and just never having an interaction with a human being. Yes, they can answer basic questions, but they also don’t understand a lot.
[Hull] Right. Even if I told you, “Well, if we have an AI do the first pass, we can get you into clinic tomorrow, but if you want to wait for a person, it’s going to be 2 weeks from now,” I think, unless it’s a really, really simple question, most people will wait 2 weeks for the appointment. And if you really need medical attention that quickly, you probably need to go to the emergency department.
But to that end, I do think having good empirical grounding to back this up and using models such as community-based participatory research to make sure that we are engaging diverse stakeholders in different communities, including communities of more historically vulnerable populations, to get a sense of, “Well, what are their priorities? What are their concerns? What would they like to see AI do for them?” Not just, “What would we as the profession like to see AI do for us?” I think that’s going to be an absolutely critical piece in terms of ensuring the most ethical deployment of AI going forward.”
Full article: Hswen Y, Abbasi J. JAMA. November 15, 2024.