Racial Myths About Pain Are Embedded in Artificial Intelligence

By Crystal Lindell

A new study published in JAMA found that artificial intelligence (AI) programs are encoded with the same racial and ethnic biases as humans when it comes to evaluating a patient's pain.

The authors said they wanted to look into the issue because it's already well established that doctors underestimate and undertreat Black patients' pain relative to white patients.

To test whether AI shows the same bias, researchers had 222 medical students and residents evaluate two patients in pain, one Black and one white. They also had them rate statements about how race may affect biology, some of which were myths and some of which were true.

The researchers then put two widely used large language models (LLMs), Gemini Pro and GPT-4, through the same exercise, feeding them patient information reports and then having them evaluate the same statements about how race affects biology.
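
To make the setup concrete, here is a minimal sketch of how a pain-rating question might be posed to an LLM, using the OpenAI Python client for GPT-4. The vignette, the prompt wording, and the 0-to-10 scale below are hypothetical stand-ins for illustration, not the study's actual materials or pipeline.

```python
# Minimal sketch (assumptions: the OpenAI Python client and a hypothetical
# vignette/prompt; this is not the study's actual pipeline or materials).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Pose the same one-sentence report twice, varying only the patient's race,
# and compare the model's pain ratings.
for race in ("Black", "white"):
    vignette = (
        f"A 45-year-old {race} man presents to the emergency department "
        "with acute lower back pain after lifting a heavy box."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": (
                    f"{vignette}\n\n"
                    "On a scale of 0 (no pain) to 10 (worst pain imaginable), "
                    "how much pain is this patient likely experiencing? "
                    "Reply with a single number."
                ),
            },
        ],
    )
    # The model's numeric rating for each version of the report
    print(race, response.choices[0].message.content)
```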

There wasn't much difference between the humans and the AI models when it came to rating pain: both gave the Black and white patients similar pain scores.

However, both the humans and the AI models endorsed some false beliefs about race and pain. Gemini Pro fared the worst, while GPT-4 and the humans came out roughly even.

Specifically, Gemini Pro endorsed racial myths at the highest rate (24%), followed by the humans (12%) and GPT-4 (9%).

“Although LLMs rate pain similarly between races and ethnicities, they underestimate pain among Black individuals in the presence of false beliefs,” wrote lead author Brototo Deb, MD, a resident at Georgetown University–MedStar Washington Hospital Center.

“Given LLMs’ significant abilities in assisting with clinical reasoning, as well as a human tendency toward automation bias, these biases could propagate race and ethnicity–based medicine and the undertreatment of pain in Black patients.”

Deb and co-author Adam Rodman, MD, say their study corresponds with previous research showing that AI models have biases related to race and ethnicity.

With AI increasingly used in clinical practice, there's concern that Black patients' pain will continue to be undertreated, leaving them less likely to receive opioids and more likely to be drug tested.

There's a common belief that AI will eliminate racial bias because computers are seen as more logical than humans. But AI models are trained on data produced by humans, which means that as long as humans have biases, AI will too.

The real danger is that doctors will come to rely too heavily on AI for patient evaluations, especially if they use it to justify medical decisions under the false belief that the technology is unbiased.

It’s still unclear how these new AI systems will impact healthcare, but everyone involved should be careful to avoid relying too heavily on them. At the end of the day, just like the humans who program them, AI models have their flaws.