Algorithms are increasingly being used to make big decisions in healthcare, and there’s a common misconception that they’re unbiased. The truth is that they’re as biased as the humans who create them, and a group of researchers—including several at Chaminade University—is trying to spotlight what that means.
In partnership with a federal consortium looking at the issue—dubbed AIM-AHEAD—Chaminade hosted a special virtual symposium in August aimed at better understanding the uses of artificial intelligence (AI) and machine learning in health and what changes are needed to improve health equity in the Pacific.
Dr. Claire Wright, an associate professor of Biology at Chaminade, said the symposium was about beginning a conversation—and ensuring Pacific voices are part of it. “We wanted to engage the community and understand some of the things that are really important to them,” she said.
“We think it’s kind of science fiction, but there are many areas of health sciences where machine learning is already used, like diagnostics, surgery, prognosis, and driving health plans. But the data used to run those algorithms is not representative of our population in Hawaii.”
So is that a problem? And if so, how much of a problem?
That was the question tackled during the event—and the topics covered could help guide the development of best practices and ethical guidelines nationally. AIM-AHEAD, which organized the symposium with Chaminade, is funded by the National Institutes of Health and is focused on increasing diversity among AI and machine learning researchers to ultimately improve these technologies in health applications, starting with electronic health records. The initiative’s acronym stands for the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity.
Chaminade Data Science Director Dr. Rylan Chong delivered the opening keynote at the symposium, urging attendees to consider how bias makes its way into algorithms and machine learning programs at various points in the process—from the bias that researchers bring to biases baked into “norm studies.”
Dr. Melissa McCradden, a bioethicist at the Hospital for Sick Children (affiliated with the University of Toronto), built on those themes in her keynote. She said the conversation happening around algorithmic bias, including in healthcare, is ultimately about “doing better science.”
“One of the major misperceptions is because AI uses so much data … that it’s ultimately leading us toward a place where we’re modeling objective truths about the world,” she said. “But we need to be really, really cautious about assuming that more data can get us closer to objectivity.”
Instead, she said, communities need to work together to make “values-based choices.”
And then, McCradden added, those choices need to be evaluated for their impacts.
Importantly, the symposium also included listening sessions so participants could weigh in on where inequities exist now—and how technology might help to address them, rather than make them worse.
“These tools are already being used, but we don’t know the power of them for our communities,” Wright said. “Before we go too far down the line, we want to make sure that some of the folks who are underrepresented have the opportunity to be involved in the conversation.”
She added, “Let’s direct the quality of our own healthcare. Let’s tailor it to fit our needs.”