As researchers across the University of Utah build, study, and use generative artificial intelligence (AI), they’re uncovering high-stakes ethical questions that can’t easily be solved by technologists or humanists alone.
Physician Ryan A. Metcalf is exploring how AI might help doctors decide who truly needs a blood transfusion—a common, lifesaving treatment that is also costly and often overused—without sidelining clinical judgment at the bedside.
Economist Ellis Scharfenaker is asking who will control AI’s growing economic power as it reshapes work, with the potential to reduce drudgery and improve safety but also to intensify surveillance, deskilling, and inequality.
Political scientist Yuree Noh (pictured above) is using AI to analyze a massive global dataset on censorship and surveillance and wonders how to ensure a large language model’s judgments hold up across countries—including authoritarian ones—without reinforcing biases that could shape policy. “I’m thinking about aid allocation, for example,” Noh said. “What if these systematic biases are affecting those who have the least power to push back?”
Researchers grappled with these questions and others at a first-of-its-kind AI and Ethics Workshop held Friday, April 3, at the University Guest House. About 75 people attended the daylong interdisciplinary event, led by One-U Responsible AI Initiative faculty fellow and philosophy professor C. Thi Nguyen and his collaborator Jeff Phillips, a computer science professor and member of the initiative’s Faculty Engagement Committee.
“AI is invading or driving everything, depending on your perspective,” Phillips said. “We need to pause and think, ‘Is it OK to do it that way?’”
As part of Nguyen’s fellowship, he and Phillips are building the U’s first AI and ethics course cross-listed in philosophy and computing. They used the event, in part, to begin building an interdisciplinary cohort around the subject.
Researchers across the U are examining problems at the intersection of AI and ethics, but many remain siloed in their own departments. “The workshop was centered around facilitating these conversations with people who would normally not get a chance to talk,” Phillips said.
The event set aside time for in-person brainstorming between people who build or study AI and humanities scholars, whose expertise is essential for understanding AI’s influence on society.
“Either side going it alone tends to miss vast swathes of what’s really important,” said Nguyen, who researches data ethics and has published two acclaimed books, including The Score, released earlier this year. “The best work I’ve seen in research and in teaching has come from people working together.”
The event featured four longer talks, but its hallmark was an open problem session: a dozen researchers pitched big questions to the room, then invited interested colleagues into breakout groups to work toward solutions. The event organizers borrowed the format from computer science conferences Phillips has attended.
“The warning: this is highly experimental,” Nguyen said at the event. “We have not seen anyone try to do an open problem session in an interdisciplinary setting before, so we have no idea if this will work.”
After the workshop, Scharfenaker said the sessions did a better job of fostering genuine interdisciplinary conversations than other campus events. He left with several concrete ideas for collaborations that wouldn’t have emerged from his own department.
“The most valuable aspect, by far, was seeing what questions other departments are actually working on and where our concerns overlap,” Scharfenaker said. “That kind of visibility is rare on a campus this size. It revealed not just shared interests but shared blind spots, which is arguably more useful.”
Noh agreed. “I got direct, substantive feedback on a problem I’m actively stuck on,” she said. One new idea was to test whether telling an AI model what kind of political system a country has would sharpen its analysis—or skew it. Another involved using donated chatbot data or secure platforms like Signal to hear from people who might stay silent in a standard survey.
More important, Noh said, was the chance to gather with people who care about similar issues. She pointed to conversations with peers whose research, like hers, spans countries and languages. “The open problem session really created collaboration opportunities.”
Noh and Scharfenaker emphasized the importance of events like the workshop. AI, Scharfenaker said, has the potential to undermine public university values such as access, critical inquiry, and democratic knowledge production. The U’s job is “not to ride the wave of AI enthusiasm but to subject that enthusiasm to the kind of critical scrutiny that only genuine academic inquiry can provide,” he said.
Noh also said the event reinforced a broader point: the hardest problems in AI aren’t necessarily technical—they’re often conceptual and political. “Who decides what ‘repression’ means, for example? What counts as ground truth when even human coders disagree?” she asked. “I hope future iterations of this event allow us to explore things like this—the messy, human side of the work.”
Moving forward, Phillips and Nguyen hope to build a lasting cohort around AI and ethics and expect to hold one or two half-day workshops a semester. Sign up for initiative emails to stay informed on future events.
