Promise and Potential Pitfalls of AI in Health Care Come to the Fore at One-U RAI Symposium

Sophia Friesen

The potential of artificial intelligence (AI) sometimes seems limitless. But with that potential come serious new concerns, such as bias, misinformation, and privacy violations. Developing and implementing AI in a way that ethically addresses those concerns is the goal of the One-U Responsible AI Initiative (RAI), which launched in fall 2023 and held its inaugural symposium this September before a packed audience.

Manish Parashar, PhD, gives opening remarks at the symposium.

Health research is one of three main focus areas of the initiative. Manish Parashar, PhD, director of One-U RAI and professor in the Kahlert School of Computing, describes the University of Utah as particularly well-positioned to advance the field, due to its strong health research community and extensive health records. “We have the strengths,” he says. “We have the data. We have the expertise. And we can make a meaningful difference across the state.”

Nina de Lacy, MD. Image credit: RAI.

Nina de Lacy, MD, who leads RAI’s Healthcare and Wellness working group, thinks those statewide solutions will have global impact. De Lacy, assistant professor in psychiatry in the Spencer Fox Eccles School of Medicine (SFESOM), sees Utah as a microcosm for some of the most urgent issues in health research, such as how climate change affects health and how to improve health care access for remote communities. 

“Here in the Mountain West, we have a great opportunity to advance solutions to issues that are affecting people all over the planet,” de Lacy says.

Predicting stillbirth risk

Nathan Blue, MD, assistant professor of obstetrics and gynecology in SFESOM, approaches AI from a clinician’s perspective. As an obstetrician specializing in high-risk pregnancies, Blue is deeply familiar with a sobering statistic: 1 in 160 pregnancies ends in stillbirth. Delivering a baby early can prevent stillbirth, but it can also lead to infant death due to prematurity, which means that “the holy grail for obstetric decision-making is knowing when to recommend delivery,” Blue says.
 
It's an extraordinarily complex decision to make. Even accounting for all of stillbirth’s known risk factors, standard-of-care delivery times fall in a range of about two weeks. And the final recommended date is up to physician judgment.

That’s why Blue is building an AI tool to assess the risk of stillbirth in complex pregnancies. He’s specifically using a type of “explainable AI” that shows the user which factors contributed to the model’s prediction. The transparency offered by this kind of model can help doctors reduce and adjust for bias in decision-making, as well as integrate data at a scale that human brains simply can’t.
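To give a sense of what “explainable AI” means in practice, here is a minimal, purely illustrative sketch of the general idea of additive feature attribution, in which a prediction can be decomposed into per-factor contributions. The risk factors, weights, and scoring scheme below are invented for demonstration; they are not Blue’s actual model or data.

```python
# Illustrative sketch only: a toy additive-attribution risk score,
# NOT the actual model described in the article. All feature names
# and weights here are invented for demonstration.

# In a simple linear model, each factor's contribution to the score
# is weight * value, so the prediction decomposes transparently.
WEIGHTS = {
    "maternal_age_over_35": 0.8,
    "hypertension": 1.2,
    "fetal_growth_restriction": 2.1,
}

def predict_with_attributions(features):
    """Return a risk score plus each factor's contribution to it."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    return sum(contributions.values()), contributions

score, parts = predict_with_attributions(
    {"maternal_age_over_35": 1,
     "hypertension": 0,
     "fetal_growth_restriction": 1}
)
print(score)                       # 2.9
print(max(parts, key=parts.get))  # the factor driving the prediction
```

Real explainable-AI methods (such as Shapley-value-based attribution) extend this decomposition idea to complex nonlinear models, but the output a clinician sees is the same kind of thing: which factors pushed the prediction up or down, and by how much.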
 
If it works, Blue says, accurate, individualized stillbirth risk prediction would be a breakthrough for the field of obstetrics and, by extension, for health care in general. “We were all born,” Blue says. “This has transformative potential impact for human health.”

Nathan Blue, MD. Image credit: RAI.

Enhancing medical image analysis

While Blue is integrating data from pregnancies, Shireen Elhabian, PhD, associate professor in the Kahlert School of Computing, has her sights set on how AI image analysis can change medicine. She’s taking advantage of AI’s potential to integrate large quantities of data to help doctors get more valuable information from medical images such as X-rays and MRIs, which constitute about 90% of all health data.
 
“We’re developing automated tools to allow doctors to leverage the full potential of medical images to diagnose and study diseases more quickly and precisely with minimal expert input,” Elhabian says. “My goal is to create systems that are not only smart but also practical, accessible, efficient, and reproducible.”

Doctors can use measurements taken from X-rays and other medical scans to mark how well a patient is responding to a course of treatment, or to estimate someone’s risk of future injuries. But doctors usually take two-dimensional measurements that may not capture the full spectrum of anatomical variability. The image analysis tools Elhabian is building can model and analyze anatomical structures in 3D, allowing a much more comprehensive understanding of a patient’s health.
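To illustrate the general idea behind 3D anatomical shape analysis (this is not Elhabian’s actual software, and the data are invented): given corresponding surface points from several patients’ scans, one can compute a mean shape and then measure how far any individual patient deviates from it, across the whole structure rather than along a single 2D measurement.

```python
# Illustrative sketch of the basic idea of 3D shape analysis,
# NOT the tools described in the article. Each "shape" is a list
# of corresponding 3D surface points (x, y, z) from one patient.

def mean_shape(shapes):
    """Average corresponding points across patients into a mean shape."""
    n = len(shapes)
    return [
        tuple(sum(shape[i][d] for shape in shapes) / n for d in range(3))
        for i in range(len(shapes[0]))
    ]

def deviation(shape, mean):
    """Total Euclidean distance of one patient's points from the mean."""
    return sum(
        sum((p[d] - m[d]) ** 2 for d in range(3)) ** 0.5
        for p, m in zip(shape, mean)
    )

# Two toy "patients", each represented by two corresponding points.
patients = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.0, 0.2, 0.0), (1.0, 0.2, 0.0)],
]
mean = mean_shape(patients)
print(mean)  # [(0.0, 0.1, 0.0), (1.0, 0.1, 0.0)]
print(round(deviation(patients[0], mean), 6))  # 0.2
```

Production shape-modeling tools go much further, establishing the point correspondences automatically and summarizing population-wide variation statistically, but the payoff is the one described above: a full 3D picture of anatomy instead of a handful of 2D measurements.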
 
Elhabian emphasizes that her AI tools are designed to help physicians interpret images, not take the human out of the picture. “AI is not here to replace medical experts,” she explains. “It’s about revealing what’s hidden and enhancing workflow.”

Shireen Elhabian, PhD. Image credit: RAI.

Artificial intelligence, human responsibility

While the day’s talks were overall hopeful about the ability of AI-based tools to improve health care, a common thread throughout the symposium was the importance of designing, maintaining, and using those tools responsibly. Researchers emphasized the need for designs that keep humans in the loop, reduce bias, and respect privacy.
 
“AI is not a hurricane bearing down on us. It is us,” says panelist David Danks, PhD, professor of data science and philosophy at UC San Diego. “And it’s events like this that give me a lot of optimism. We can do better.”