Research: Artificial intelligence can fuel racial bias in healthcare, but can mitigate it, too (2022)

By Julia Sklar

July 24, 2022


Artificial intelligence is here to stay in the healthcare industry. The term refers to a constellation of computational tools that can comb through vast troves of data at rates far surpassing human ability, in ways that can streamline providers’ jobs. Some types of AI already common in health care include:

  • Machine learning AI, where a computer trains on datasets and ‘learns’ to, for example, identify patients who would do well with a certain treatment
  • Natural language processing AI, which can interpret human speech and might, for example, transcribe a doctor’s clinical notes
  • Rules-based AI, where computers are programmed to act in a specific way when a particular data point shows up. These kinds of AI are commonly used in electronic medical records, for example to flag a patient who has missed their last two appointments.
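That last, rules-based pattern is simple enough to sketch in a few lines. Below is a minimal, hypothetical version of the missed-appointments flag described above; the function name and data are illustrative, not taken from any real electronic medical records system:

```python
# Hypothetical rules-based check: flag any patient whose two most
# recent appointments were both missed. Illustrative only.

def flag_missed_appointments(history):
    """history: list of booleans, oldest first; True = attended."""
    if len(history) < 2:
        return False
    return not history[-1] and not history[-2]

patients = {
    "patient_a": [True, True, False, False],  # missed last two
    "patient_b": [True, False, True, True],   # attended recently
}

flagged = [pid for pid, h in patients.items()
           if flag_missed_appointments(h)]
print(flagged)  # ['patient_a']
```

The rule itself is trivial; its value comes from running automatically over every record in the system.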

Regardless of the specific type, these tools are generally capable of making a massive, complex industry run more efficiently. But several studies show they can also propagate racial biases, leading to misdiagnosis of medical conditions among people of colour, insufficient treatment of pain, under-prescription of life-saving medications, and more. Many patients don’t even know they’ve been enrolled in healthcare algorithms that are influencing their care and outcomes.

A growing body of research shows a paradox, however. While some algorithms do indeed exacerbate inequitable medical care, other algorithms can actually close such gaps.

The popular press tends to cover AI in medicine only when something goes wrong. While such reports are critical for holding institutions to account, they can also give the impression that when AI enters health care, trouble is always around the corner. Done correctly, AI can actually make health care fairer for more people.

Historically, much of the research in the medical sciences and in the biological sciences has relied on subject pools of white — often male — people of European ancestry. These foundational studies on everything from normal internal body temperature to heart disease become the stuff of textbooks and training that doctors, nurses, and other health care professionals engage with as they move up the professional ladder.

However, those studies offer a limited, one-size-fits-all view of human health that opens the door to racial bias in which patients get treated and how. The clearest example of this kind of knowledge gone wrong is consulting images of white skin to diagnose dermatological diseases across all skin types, when such diseases may manifest differently depending on the pigmentation of someone’s skin.

When AI is trained on data that lack diversity, it is more likely to mimic the same racial bias that healthcare professionals can themselves exhibit. A poorly structured AI training dataset is no better (and sometimes worse) than a human whose medical training was predicated on lessons about the health of primarily white patients.

On the flipside, when AI is trained on datasets that include information from a diverse population of patients, it can help move the health care field away from deep-seated biases.

Below are summaries of some of the research on the intersection of AI and race.

Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations

Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Science, October 2019.

What the researchers focused on: This study dove into how a nationally circulated health care algorithm perpetuated the under-serving of Black patients as compared with white patients. Such algorithms have the potential to do immense harm, by replicating the same racial biases in play by humans, but at an even more massive scale, the authors write.

What they found: Commercially applied risk-prediction algorithms are among the most common types of AI the health care industry currently uses. They’re applied to the care of some 200 million Americans every year. In this study, researchers show one unnamed algorithm assigned Black patients the same level of health risk as white patients, when in reality the Black patients were sicker.

The researchers learned that the machine-learning algorithm had trained itself to treat health care costs as a proxy for a patient’s level of health, when in reality cost reflects the healthcare industry’s inequitable investment in some patient populations over others.

In other words, the algorithm assumed that because it cost hospitals less to care for Black patients, Black patients were healthier and required less care. However, hospital costs are lower for Black patients even when they are sicker than white patients, because hospitals funnel fewer resources toward the care of sick Black patients. The researchers suggest that training the algorithm not to equate cost with health would eliminate this tripwire.
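The cost-as-proxy problem can be illustrated with a toy example. The numbers below are synthetic, not taken from the study; they simply show how ranking patients by cost, rather than by a direct measure of health such as the number of chronic conditions, demotes sicker patients whose care is underfunded:

```python
# Toy illustration (synthetic numbers) of how the choice of prediction
# target can encode bias. If hospitals spend less on equally sick Black
# patients, ranking by cost under-prioritizes them; ranking by a direct
# health measure does not.

patients = [
    # (group, chronic_conditions, annual_cost_dollars)
    ("white", 3, 9000),
    ("Black", 5, 7000),   # sicker, but less is spent on their care
    ("white", 1, 4000),
    ("Black", 2, 2500),
]

# Rank by the cost proxy: the sickest patient (5 conditions) drops
# to second place behind a healthier, more expensive patient.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)

# Rank by health directly: the sickest patient comes first.
by_health = sorted(patients, key=lambda p: p[1], reverse=True)

print([p[1] for p in by_cost])    # [3, 5, 1, 2]
print([p[1] for p in by_health])  # [5, 3, 2, 1]
```

The fix the researchers describe amounts to swapping the second ranking for the first: same model machinery, different training label.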

What researchers did with their findings: “After completing the analyses described above, we contacted the algorithm manufacturer for an initial discussion of our results,” the authors write. “In response, the manufacturer independently replicated our analyses on its national dataset of 3,695,943 commercially insured patients. This effort confirmed our results—by one measure of predictive bias calculated in their dataset, Black patients had 48,772 more active chronic conditions than White patients, conditional on risk score—illustrating how biases can indeed arise inadvertently.”

The researchers then began experimenting with solutions alongside the algorithm manufacturer and have already made improvements to the product.

“Of course, our experience may not be typical of all algorithm developers in this sector,” they write. “But because the manufacturer of the algorithm we study is widely viewed as an industry leader in data and analytics, we are hopeful that this endeavor will prompt other manufacturers to implement similar fixes.”

AI Recognition of Patient Race in Medical Imaging: A Modelling Study

Judy Wawira Gichoya; et al. The Lancet: Digital Health, May 2022.

What the researchers focused on: Previous research has shown that AI can be trained to detect a person’s race from medical images, even though human experts looking at the same images cannot tell the patient’s race. The authors wanted to find out more about AI’s ability to recognize a patient’s race from medical images. They analyzed:

  • 680,201 chest X-rays across three datasets, where Black patients comprised 4.8% to 46.8% of the subjects, white patients 42.1% to 64.1%, and Asian patients 3.6% to 10.8%
  • 458,317 chest CTs across three datasets, where Black patients comprised 9.1% to 72% of the subjects, white patients 28% to 90.9%, and Asian patients were unrepresented
  • 691 digital radiography X-rays, where Black patients comprised 48.2% of the subjects, white patients 51.8%, and Asian patients were unrepresented
  • 86,669 breast mammograms, where Black patients comprised 50.4% of the subjects, white patients 49.6%, and Asian patients were unrepresented
  • 10,358 lateral c-spine X-rays, where Black patients comprised 24.8% of the subjects, white patients 75.2%, and Asian patients were unrepresented

The images themselves contained no racial information and represented different degrees of image clarity, full and cropped views, and other variations.

What they found: The deep learning model was able to identify a patient’s race accurately from medical images that contained no identifiable racial information. Researchers thought perhaps the model was learning to do this by matching known health outcomes with racial information.

There is “evidence that Black patients have a higher adjusted bone mineral density and a slower age-adjusted annual rate of decline in bone mineral density than White patients,” the researchers write, so they thought perhaps they could trick the model by cropping out parts of medical images that showed such characteristic bone density information. Even still, the model was able to identify the patient’s race from the images. “This finding is striking as this task is generally not understood to be possible for human experts,” the authors write.

How they explain it: “The results from our study emphasize that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging,” the researchers write. “The regulatory environment in particular, while evolving, has not yet produced strong processes to guard against unexpected racial recognition by AI models; either to identify these capabilities in models or to mitigate the harms that might be caused.”

An Algorithmic Approach to Reducing Unexplained Pain Disparities in Underserved Populations

Emma Pierson; et al. Nature Medicine, January 2021.

What the researchers focused on: Previous research has shown Black patients are more likely than white patients to have their pain dismissed and untreated. One example is knee pain due to osteoarthritis. Researchers wanted to find out if an AI could undo biases in how knee pain is diagnosed and treated.

What they found: The researchers used a deep learning model trained on X-rays of osteoarthritis in the knee of 2,877 patients —18% of whom were Black, 38% were low-income, and 39% were non-college graduates — to predict the level of pain a patient would be expected to have based on the progression of their osteoarthritis. The model was better at assigning pain levels to underserved patients than human radiologists. The researchers conclude that the model was able to predict pain even when the imaging did not necessarily show the expected level of disease severity. That’s because patients of colour are more likely than white patients to have “factors external to the knee” that influence their levels of pain, such as work conditions and higher stress, the researchers write. In other words, the same level of osteoarthritis severity can result in very different levels of pain depending on the patient population, and evaluating a patient without that context can lead to underdiagnosis for underserved patients. In this case, an AI could solve an issue that persists because of human racial bias.

How they explain it: “In addition to raising important questions regarding how we understand potential sources of pain, our results have implications for the determination of who receives arthroplasty for knee pain … Consequently, we hypothesize that underserved patients with disabling pain but without severe radiographic disease could be less likely to receive surgical treatments and more likely to be offered non-specific therapies for pain. This approach could lead to overuse of pharmacological remedies, including opioids, for underserved patients and contribute to the well-documented disparities in access to knee arthroplasty.”

Other academic studies, reports and commentaries to consider:

The Algorithm Bias Playbook

Ziad Obermeyer, Rebecca Nissan, Michael Stern, Stephanie Eaneff, Emily Joy Bembeneck, and Sendhil Mullainathan. Center for Applied Artificial Intelligence, The University of Chicago Booth School of Business, June 2021.

Evaluation and Mitigation of Racial Bias in Clinical Machine Learning Models: Scoping Review

Jonathan Huang, Galal Galal, Mozziyar Etemadi and Mahesh Vaidyanathan. JMIR Medical Informatics, May 2022.

Systemic Kidney Transplant Inequities for Black Individuals: Examining the Contribution of Racialized Kidney Function Estimating Equations

L. Ebony Boulware, Tanjala S. Purnell and Dinushika Mohottige. JAMA Network Open, January 2021.

Hidden in Plain Sight – Reconsidering the Use of Race Correction in Clinical Algorithms

Darshali A. Vyas, Leo G. Eisenstein and David S. Jones. New England Journal of Medicine, August 2020.

Challenging the Use of Race in the Vaginal Birth after Cesarean Section Calculator

Darshali A. Vyas, David S. Jones, Audra R. Meadows, Khady Diouf, Nawal M. Nour and Julianna Schantz-Dunn. Women’s Health Issues, April 2019.

This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.

About the author

Julia Sklar is an award-winning science journalist and reporter for Journalist’s Resource.

