
Did Bias Just Creep Into Your AI Model? The Whys And The Should Nots

A discussion on bias in AI, its types, and influencing factors

By Gatha

What is AI Bias?

Bias is a side effect of knowledge. When we are young, we do not know how to judge someone by their looks, clothes, or language. We simply trust. With time, we learn to view some attributes with suspicion. Adults, drawing on their own life experiences, pass their assumptions on to us. At this point, we learn a little about the world and pick up biases against certain characteristics. And then there are the biased views we did not even know we held until a particular incident made us confront them.

The nifty little models that make predictions or classify labels start out as algorithms that know little. Slowly and steadily, they discover knowledge through training and testing. Admittedly, the word bias sounds out of place for a model that classifies images of cats versus dogs; a biased model would not hurt any cats or dogs. But now think of an AI model that classifies CT scans for tumors or predicts the loan default probability of an applicant. Algorithmic bias can cause significant harm in such cases.

Recent Instances of AI Bias

In 2018, Gartner predicted that by 2022, 85 percent of AI projects would deliver erroneous outcomes due to bias in data, algorithms, or the people managing them. Indeed, users have reported many examples of AI bias irrespective of the expertise behind a model's development. For instance, DALL-E, an exciting image generation tool built on the powerful GPT-3 model, renders detailed images for a given text prompt. It was re-launched by OpenAI with fixes for issues related to gender and racial bias, and it is still on its way to achieving that balance, as its preview of risks and limitations acknowledges.

The prompt can range from something simple like 'an armchair in the shape of an avocado' to something imaginative like 'an illustration of a baby daikon radish in a tutu walking a dog'. The generated images are breathtaking, yet DALL-E fumbles like any other AI algorithm when it comes to bias.

Many users noticed that images generated for the word 'builder' primarily showed men, while images for the word 'flight attendant' showed women. Many other generated images were found to be rife with stereotypes.

Stereotypical images generated by DALL-E for the prompt text ‘CEO’, as reported on 6th April 2022. Source: DALL·E 2 Preview - Risks and Limitations

Stereotypical images generated by DALL-E for the prompt text ‘personal assistant’, as reported on 6th April 2022. Source: DALL·E 2 Preview - Risks and Limitations

Stereotypical images generated by DALL-E for the prompt text ‘flight attendant’, as reported on 6th April 2022. Source: DALL·E 2 Preview - Risks and Limitations

Stereotypical images generated by DALL-E for the prompt text ‘builder’, as reported on 6th April 2022. Source: DALL·E 2 Preview - Risks and Limitations

In another finding related to biased AI, advertisers on Facebook were found in 2019 to have tailored advertisements based on gender: users who identified as female were targeted with ads for nursing or secretarial jobs, while users who identified as male were shown ads for jobs like janitors and drivers. The targeted advertisements were also found to tie certain jobs to minorities. Facebook has since banned this kind of ad tailoring.

Some Common Types of AI Bias

Ethical compromise has been recognized as the common thread running through reported cases of bias, yet bias itself can be categorized into distinct types. Some of the most commonly encountered types of AI bias are:

Historical bias

Models trained on datasets that do not reflect current dynamics suffer from historical bias. For instance, a keen data scientist may decide to train a model on flight ticket sales to predict demand in the upcoming season. While using a decade-long dataset may sound appealing, traveling habits have changed significantly due to the recent pandemic and the subsequent feeling of carpe diem induced by social media. An older dataset may train the model on wrong or imbalanced notions. A quick distribution check, sketched below, can flag such drift before training begins.
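As a minimal sketch, assuming a hypothetical ticket_sales.csv with date and tickets_sold columns, a two-sample test can compare an older window of the data against recent records:

```python
import pandas as pd
from scipy.stats import ks_2samp

# Hypothetical sales log; the file and column names are assumptions.
sales = pd.read_csv("ticket_sales.csv", parse_dates=["date"])

old = sales.loc[sales["date"] < "2020-01-01", "tickets_sold"]
recent = sales.loc[sales["date"] >= "2020-01-01", "tickets_sold"]

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the two
# periods follow different distributions, i.e. the decade-old rows may
# teach the model outdated demand patterns.
stat, p_value = ks_2samp(old, recent)
if p_value < 0.05:
    print(f"Shift detected (KS={stat:.3f}, p={p_value:.4f}); "
          "consider re-weighting or truncating the older data.")
```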

Sampling bias

Analytics is driven by sampling and hypothesis testing. It is practical to draw samples, since the whole population can never be surveyed for data collection. Suppose you wish to develop an audio classification model and train it on recordings of audiobooks. Such books are typically read by people with flawless diction and voice modulation. By restricting yourself to this niche dataset, your model could be biased against the audio of a generic reader and the nuances of dialects.

So what happens when the drawn samples are not truly representative of the population? Among other harms, it can introduce racial bias into AI systems. A simple goodness-of-fit check, sketched after the figure below, can surface such gaps.

What biased sampling looks like. Source: White Paper on Crowdsourced Network and QoE Measurements -- Definitions, Use Cases, and Challenges
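A minimal sketch of such a representativeness check, assuming illustrative category counts and census proportions (all numbers hypothetical):

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical counts of three accent groups in the audiobook sample,
# and the (assumed) shares of those groups in the target population.
sample_counts = np.array([720, 180, 100])
population_share = np.array([0.55, 0.30, 0.15])

# Chi-square goodness-of-fit: compares the observed mix against the mix
# we would expect if the sample mirrored the population.
expected = population_share * sample_counts.sum()
stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)

# A tiny p-value means the sample's composition deviates from the
# population's, a warning sign of sampling bias before any training
# has even started.
print(f"chi2={stat:.1f}, p={p_value:.4f}")
```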

Prejudicial bias

This type is introduced by the humans involved in development. The stereotypical images generated by DALL-E were examples of such compromises: the humans who annotated the training and testing images transferred their prejudice that building is a male-specific occupation while flight attendants must be women. A simple audit of the outputs, sketched below, can surface such skew.
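As a minimal sketch, assuming hypothetical human-review records of generated images, one can tally the perceived demographics per prompt and look for one-sided counts:

```python
from collections import Counter

# Hypothetical review records: (prompt, perceived gender of the subject).
# In practice these would come from human review of the model's outputs.
annotations = [
    ("builder", "male"), ("builder", "male"), ("builder", "male"),
    ("builder", "female"), ("flight attendant", "female"),
    ("flight attendant", "female"), ("flight attendant", "male"),
]

# Tally perceived gender per prompt.
by_prompt = {}
for prompt, gender in annotations:
    by_prompt.setdefault(prompt, Counter())[gender] += 1

# Heavily one-sided shares for an occupation flag a prejudicial skew
# baked into the data or the generator.
for prompt, counts in by_prompt.items():
    total = sum(counts.values())
    print(prompt, {g: f"{n / total:.0%}" for g, n in counts.items()})
```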

Labeling bias

The data does not speak for itself: labeling it is a crucial pre-processing step, and one that requires human involvement. Insufficient labeling or limited insight at this stage can lead to labeling bias, since the model will learn only what it is shown. Also called confirmation bias, it is best understood through the following images; a small annotator-agreement check, sketched after them, helps catch it early.

A model trained on the bounding boxes labeled as lion's face will recognize the front profile. Source: 6 Types of AI Bias Everyone Should Know

The biased labeling of the training data confused the model. Source: 6 Types of AI Bias Everyone Should Know
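A minimal sketch of one such early check, assuming two hypothetical annotators labeling the same images, using Cohen's kappa from scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators for the same five images.
annotator_a = ["lion_face", "lion_face", "full_lion", "lion_face", "full_lion"]
annotator_b = ["lion_face", "full_lion", "full_lion", "lion_face", "lion_face"]

# Cohen's kappa corrects raw agreement for chance. Values well below ~0.6
# suggest ambiguous labeling guidelines, so the model will end up learning
# whichever inconsistent convention dominates the training set.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Inter-annotator kappa: {kappa:.2f}")
```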

Evaluation bias

A trained model should be validated to ensure that its results remain relevant for unforeseen data as well. While it is difficult to simulate every unpredictable situation, training your model on different types of inputs and exposing it to different environments helps ensure that its learning is not riddled with evaluation bias. Scoring the model per data slice, as sketched below, makes such blind spots visible.
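A minimal sketch of slice-based evaluation, with a toy dataset and a deliberately naive stand-in model (all names and numbers illustrative):

```python
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Toy test set spanning two recording environments.
test = pd.DataFrame({
    "feature":     [0.2, 0.8, 0.4, 0.9, 0.1, 0.7],
    "environment": ["studio", "street", "studio", "street", "studio", "street"],
    "label":       [0, 1, 0, 1, 0, 1],
})

# A naive baseline model standing in for your real one.
model = DummyClassifier(strategy="most_frequent")
model.fit(test[["feature"]], test["label"])
test["prediction"] = model.predict(test[["feature"]])

# One global accuracy can hide a slice the model never learned; scoring
# per environment makes evaluation bias visible.
for env, slice_df in test.groupby("environment"):
    acc = accuracy_score(slice_df["label"], slice_df["prediction"])
    print(f"{env:>8}: accuracy={acc:.3f}, n={len(slice_df)}")
```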

Aggregation bias

The reduction and transformation steps during pre-processing can sometimes combine data subsets with different properties, which may lead to a biased inference. For instance, if you combined the incomes of different professions and plotted a growth trend that looks like this, an analyst could draw the obvious conclusion that income correlates strongly with work experience.

The trend followed by aggregated income over the years. Source: 6 Types of AI Bias Everyone Should Know

To check that this inference is correct, let us plot the income trends again, this time for the different professions present in the dataset.

The income trends followed by different professions over the years. Source: 6 Types of AI Bias Everyone Should Know

Voila! Now you can see that a model trained on the aggregated income would display bias, since income in professional sports follows a different trend altogether. The short sketch below reproduces this effect in a few lines.
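A minimal sketch of the same effect, with illustrative numbers: the pooled correlation between experience and income looks healthy until the data is grouped by profession:

```python
import pandas as pd

# Illustrative incomes (in thousands) at increasing years of experience.
df = pd.DataFrame({
    "profession": ["engineer"] * 4 + ["teacher"] * 4 + ["athlete"] * 4,
    "experience": [1, 5, 10, 20] * 3,
    "income":     [60, 75, 95, 130,    # engineer: rises with experience
                   40, 50, 62, 80,     # teacher: rises with experience
                   150, 180, 120, 60], # athlete: peaks early, then falls
})

# The pooled number suggests one story ...
print("pooled correlation:", df["experience"].corr(df["income"]).round(2))

# ... but per-profession correlations reveal a subgroup that runs the
# other way, which is exactly the bias an aggregated model inherits.
for prof, grp in df.groupby("profession"):
    print(f"{prof:>9}:", grp["experience"].corr(grp["income"]).round(2))
```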

How Does Bias Creep into AI?

We saw how different types of bias get introduced into AI algorithms, yet four major factors abet algorithmic bias:

  • Technical factors: As we saw, the training and testing data, the processing approaches, and the human-in-the-loop can each introduce bias at different stages. This factor gains further significance because there is a lack of standard metrics for the discrimination that results from bias.
  • Legal factors: Strict regulations like GDPR, HIPAA, and CCPA ensure the protection of user rights and data privacy. However, much remains to be addressed in handling discrimination and the other complexities that result from algorithmic bias.
  • Social factors: There is a need to educate people and spread empathy towards under-represented groups, since AI models are heavily influenced by human prejudices.
  • Ethical factors: Beyond legal regulations, moral codes of conduct need reassessment and standardization. Since AI-powered systems promise better economic returns, standards that protect against unfair practices should be ingrained in the community.

Nipping Bias in the Bud: How to Prevent AI Bias?

AI has found acceptance in various domains like finance, entertainment, gaming, and education. Algorithms make decisions to save time and minimize human error. But this should not cost a person an opportunity simply because the model learned human biases and judged them not on par with other applicants. In this dynamic world, economies and political stability are also influenced by such incidents. It is therefore important to develop AI systems that do not discriminate on factors like race, gender, or other socio-economic attributes.

AI algorithms are sophisticated, often built on large datasets and complex learned representations, and can therefore appear to be black boxes. How do we measure a concept like bias, and how do we minimize it? The issue can be addressed through the following AI bias detection approaches:

Procedural approach to identifying bias

This approach focuses on bias introduced by the model's logic, and ad-hoc analysis through explainability reports can prove advantageous here. Reports generated at different granularities help interpret association rules, causal reasoning, and counterfactual explanations. The ability to explain decisions holds the potential to uncover discriminatory logic and pinpoint the rogue behavior of algorithms. A feature-attribution report, sketched below, is one common starting point.
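A minimal sketch of such a report using the shap library, with a toy loan dataset in which the hypothetical zipcode column stands in for a proxy of a sensitive attribute:

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy loan data; every column and value here is illustrative.
X = pd.DataFrame({
    "income":  [30, 80, 45, 90, 25, 70],
    "tenure":  [2, 10, 4, 12, 1, 8],
    "zipcode": [1, 2, 1, 2, 1, 2],  # proxy attributes like this deserve scrutiny
})
y = [0, 1, 0, 1, 0, 1]

model = RandomForestRegressor(random_state=0).fit(X, y)

# shap picks an efficient explainer for tree-based models automatically.
explainer = shap.Explainer(model)
attributions = explainer(X)

# If a sensitive or proxy feature carries a large share of the attribution,
# the discriminatory logic described above is worth a closer look.
mean_abs = pd.Series(abs(attributions.values).mean(axis=0), index=X.columns)
print(mean_abs.sort_values(ascending=False))
```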

Relational approach to identifying bias

This approach focuses on bias introduced by the data and the resulting output. While the data itself can be evaluated through statistical parity checks, interpretations of the model's outputs help with the rest of the analysis. Exercising the model with questions like 'What if scenario B happens instead of A?' can help uncover logic flaws for unpredictable inputs. Both checks are sketched below.
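A minimal sketch of both checks on a toy approval model (the data, columns, and 0/1 group encoding are all illustrative assumptions):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy applicant data with a sensitive attribute encoded as 0/1.
df = pd.DataFrame({
    "income": [30, 80, 45, 90, 25, 70, 55, 65],
    "group":  [0, 1, 0, 1, 0, 1, 0, 1],
    "label":  [0, 1, 0, 1, 0, 1, 1, 1],
})
model = LogisticRegression().fit(df[["income", "group"]], df["label"])
df["approved"] = model.predict(df[["income", "group"]])

# Statistical parity: the gap in approval rates between the two groups.
rates = df.groupby("group")["approved"].mean()
print("parity difference:", round(rates[1] - rates[0], 3))

# What-if probe: would the same applicants be approved if only the
# sensitive attribute changed? Any flipped decision points to bias.
flipped = df[["income", "group"]].assign(group=1 - df["group"])
changed = (model.predict(flipped) != df["approved"]).sum()
print("decisions changed by flipping the attribute:", changed)
```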

Achieving unbiased AI

In both of these approaches, monitoring and explainability reports prove to be the most efficient methods. The Censius AI Observability platform provides these functionalities and more to help you build solutions protected against discrimination. Comprehensive monitoring of projects is easy to set up and provides timely alerts on data discrepancies and violations.

The Censius AI Observability platform will help steer your project towards responsible AI. Source: Censius AI

Additionally, explanation reports assist in post-hoc analysis of model behavior, while an even more transparent analysis comes from subset-specific model outputs. The latter is achieved through cohort analysis, which is easy to run on the intuitive interface. Evaluation bias can also be uncovered through what-if analysis, an upcoming feature that validates the model against a variety of inputs.

Lastly, we hope to have stoked your curiosity about how the Censius AI Observability platform can initiate your team into unbiased AI and its benefits.

To delve further, sign up for a free trial!
