AI systems work with a child-like curiosity. They learn from the given data, test their experiences, and change their behavior according to user choices. At each of these steps, human expectations guide their learning. But what if human guidance is faulty, discriminatory, or simply driven by incomplete information?
The AI model will exhibit human bias because it inherits the shortcomings of its developers' and users' knowledge.
Now let us see what algorithmic bias is in AI. Learning algorithms, whether still in training or deployed to the target environment, detect context and adapt their outputs. Therefore, the decisions made by such algorithms must be neutral across different demographics.
Algorithmic bias occurs when the model’s behavior is unfavorable toward specific aspects of its users.
It is concerning because human intervention to stem the bias is absent or limited. There are many kinds of bias in AI algorithms. In this blog, we will talk about the bias typical of human behavior and the bias inherent to learning algorithms. But first, let us understand how human and algorithmic bias in AI could be a major concern for your business.
How Do Human & Algorithmic Bias Affect Your Business?
The State of AI Bias report by DataRobot, published in 2019, revealed the top concerns of businesses using AI. Among the 350 surveyed U.S.- and U.K.-based organizations, more than half of the respondents considered AI bias a serious risk. In fact, 81% expected the government to form stricter regulations to address it.
The negative outcomes of AI bias included loss of trust from customers and employees, revenue losses, brand reputation loss, and increased scrutiny.
The numbers shown in the graph may seem speculative, yet 36% of participants reported suffering due to AI bias in at least one of their algorithms. Of these, 62% had experienced revenue loss and 61% had lost customers.
While it is difficult to devise business decision-making methods that are agnostic to human and algorithmic bias, understanding their causes definitely helps.
What are the causes of Human Bias in AI?
Let us assume a visual analytics tool assisting a potential home buyer, which learns their preferences from browsing activity. It then builds a list of recommended options based on what the underlying model learned. Now let us see how the user’s behavior and analysis could translate to human bias instances.
Human Bias Due to Cognitive Error
Cognition is a combination of reason and intuition. While intuition helps a person make quick decisions, reason aids in reflected, deliberate actions. A user’s cognition may seek a shortcut and decide quickly based on the initial information.
Bias could arise if further supportive or contrary information is discarded. For instance, if the user had an existing knowledge of a high price for preferred housing options, then they may filter their choices based on this initial knowledge.
Apart from the anchoring effect, where users rely heavily on initial knowledge, people are also dismissive of contrary beliefs. Such acceptance of pre-existing knowledge can lead to confirmation bias.
Lastly, the preference for quick decisions is also driven by availability bias, where users are influenced by the most recently or easily acquired knowledge.
Human Bias Due to Information Processing
A human mind is limited in processing the presented information. If swamped by an information overload, users would allocate mental resources to higher priority tasks. Such overload tactics are increasingly being used to influence shopping behavior as well.
A user’s decision-making might get overwhelmed and become biased towards the information assimilated at that particular instance.
Human Bias Due to Preconception
We grow up with certain preconceptions influenced by the culture and environment around us. The stereotypes learned over time and the associated comfort in the beliefs shape the unconscious bias.
If the user had a preconceived bias against safety in certain neighborhoods, then they would avoid the listings irrespective of the current law and order situation. The model would thereby acquire the bias through user browsing behavior.
Human Bias Due to Model Mechanism
Some models have explicit bias parameters to process information and form interpretations. The bias-variance tradeoff is necessary so that a model is able to adapt to the dynamics of real-life scenarios.
The process incrementally samples parameters and accumulates information to reach a decision. In the case of a model that builds on features for real estate sales, the significance the user attaches to each feature can play an important role.
Users who attach more importance to the number of rooms or the distance from schools or pubs could lower the significance of other important features, such as the availability of loans against the property. Based on such browsing behavior, the recommender could learn to bias against these abstract yet significant features.
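To make this concrete, here is a minimal, hypothetical sketch of a weighted-scoring recommender. The listings, feature names, and weights are all illustrative assumptions, not part of any real system; the point is only that a user who implicitly sets one feature's weight to zero can push an otherwise unsuitable listing to the top.

```python
# Hypothetical sketch: a weighted-scoring recommender where user-supplied
# feature weights can drown out other important attributes.
listings = [
    {"id": "A", "rooms": 4, "school_dist_km": 0.5, "loan_eligible": 1},
    {"id": "B", "rooms": 3, "school_dist_km": 2.0, "loan_eligible": 1},
    {"id": "C", "rooms": 5, "school_dist_km": 0.3, "loan_eligible": 0},
]

def score(listing, weights):
    # More rooms and a shorter school distance raise the score;
    # loan eligibility matters only as much as its weight.
    return (weights["rooms"] * listing["rooms"]
            - weights["school_dist_km"] * listing["school_dist_km"]
            + weights["loan_eligible"] * listing["loan_eligible"])

# A user who only cares about rooms and school distance implicitly
# sets the loan-eligibility weight to zero.
biased_weights = {"rooms": 1.0, "school_dist_km": 1.0, "loan_eligible": 0.0}
ranked = sorted(listings, key=lambda l: score(l, biased_weights), reverse=True)
print([l["id"] for l in ranked])  # listing C ranks first despite no loan eligibility
```

Under these weights, listing C wins even though it fails the loan-eligibility criterion, which is exactly the kind of learned de-emphasis described above.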
What are the causes of Algorithmic Bias in AI?
Biased algorithms may arise from different factors, but understanding their taxonomy helps detect bias sources and prevent them. Let us look at the common causes of algorithm bias.
Algorithm Bias Due to Training Data
The lore of data science, 'garbage in, garbage out,' can be modified to 'bias in, bias out.' The data used to train the model defines its behavior. It is impossible to achieve a neutral model if the training data is not representative of users and situations. This source of bias is more worrisome since the training data is generally not shared publicly.
If unintentional, such bias can be subtle and take a long time to discover. A housing analytics tool trained on data skewed against single persons may not serve its purpose for that user group.
Additionally, it may take the analysts a longer time to realize why the tool is showing listings not suitable for the particular demography.
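A simple first check for this kind of skew is to compare the demographic mix of the training data against the population the tool is meant to serve. The household categories, shares, and threshold below are illustrative assumptions for this sketch, not a recommended audit procedure.

```python
from collections import Counter

# Hypothetical sketch: flag groups that are under-represented in the
# training data relative to the expected user population.
training_households = ["family", "family", "couple", "family", "couple",
                       "family", "couple", "family", "single"]
expected_share = {"family": 0.4, "couple": 0.3, "single": 0.3}

counts = Counter(training_households)
total = len(training_households)
for group, target in expected_share.items():
    actual = counts[group] / total
    if actual < 0.5 * target:  # arbitrary under-representation threshold
        print(f"Under-represented group: {group} "
              f"({actual:.0%} in training vs {target:.0%} expected)")
```

With this toy data, 'single' households make up about 11% of the training set against an expected 30%, so the check flags exactly the skew described above before the model is trained on it.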
Algorithm Bias Due to Focus
The output of learning algorithms must be morally relevant for a trustworthy system. But the definition of moral relevance is debatable, especially if the model's decisions have legal or critical consequences. Algorithm bias that stems from the focus on, or avoidance of, specific features might raise doubts about its neutrality.
Algorithmic bias examples, in this case, can be AI models in self-driving cars responsible for human safety in case of a collision. The bias due to focus could stem from the design decision that passenger safety must be given more weightage than pedestrian safety or vice versa. Such design decisions may lead to moral bias in the algorithm.
Algorithm Bias Due to Processing
The developers often tweak the algorithm’s processing to ensure robust performance. For instance, using a biased estimator would reduce the variance on smaller samples in accordance with the bias-variance tradeoff.
Or if you wish to contain the bias due to training data, then regularization or smoothing can avoid overfitting noisy input data. While this choice results in a more reliable algorithm, it is not neutral per se. Moreover, this form is probably the most common form of algorithm bias and is often benign.
Algorithm Bias Due to Context Transfer
AI models are often re-used for different purposes. The change in context might result in deviation from statistical or moral standards. For instance, the real-estate recommender system which was trained and deployed in the US might give outputs deemed biased in other geographies.
Think of how swimming pools are common in some countries while considered a luxury in others. Some may argue that the context transfer bias seems more like a human bias. However, such cases qualify as algorithm bias due to the deviations in the algorithm functioning.
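One way to catch such deviations before they surface as biased outputs is to compare feature distributions between the original market and the new one. The listings, the 'pool' feature, and the threshold here are illustrative assumptions for this sketch.

```python
# Hypothetical sketch: flag context-transfer risk by comparing how often
# a feature appears in the original training market versus the new one.
def prevalence(listings, feature):
    # Fraction of listings where the feature is present/true.
    return sum(1 for l in listings if l.get(feature)) / len(listings)

us_training = [{"pool": True}, {"pool": True}, {"pool": False}, {"pool": True}]
new_market = [{"pool": False}, {"pool": False}, {"pool": True}, {"pool": False}]

gap = abs(prevalence(us_training, "pool") - prevalence(new_market, "pool"))
if gap > 0.3:  # arbitrary threshold for this sketch
    print("Feature 'pool' differs sharply across markets; "
          "retraining or reweighting advised")
```

A large prevalence gap signals that a feature the model learned to treat as ordinary is a luxury signal in the new geography, so its learned weights no longer mean what they did at training time.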
Algorithm Bias Due to Interpretation
Interpretation bias is another type of algorithm bias, often mistaken for user error. It occurs when there is a mismatch between the output of the model and the information required by the model user. The model user could be a human or another system component.
Again a commonly reported bias, it often stems from the fact that system semantics could be incomplete during development or could change context over time. Take the example of a surveillance system using human activity recognition.
The recognition method could use another algorithm that determines the threat posed by the detected human form. The detection and risk prediction could be misinterpreted by the surveillance system and cause bias against some individuals.
Human Bias versus Algorithm Bias in AI
To wrap up the above discussion, human bias in AI occurs due to the influence of developers and users. Preconceived notions, different cognitive biases, limited human analytical capability, and real-time user behavior can introduce bias into algorithms. The umbrella term for such issues is human bias in AI.
In contrast, certain types of bias are inherent to algorithms. These include unfair learning from skewed data, weights assigned to attributes, processing logic, context transfer, and misinterpretation due to dynamic semantics.
Bringing Your Business Above the Human and Algorithm Bias
Having introduced you to the causes and effects of human and algorithm bias in AI, let us wrap up this blog with methods to prevent them.
Addressing Human Bias in AI
User interactions can pass human bias on to your model. At the same time, domain expertise is crucial in dealing with human bias in AI systems. Therefore, human bias in AI can be curbed by a mixed approach where user interaction sequences are monitored and inconsistencies are flagged. Domain experts can then evaluate the reported inconsistencies to decide whether the perceived bias is a detriment. Additionally, it is also your responsibility to ensure that the user interface does not introduce human bias through design flaws.
Addressing Algorithmic Bias in AI
Algorithmic bias in AI is sometimes required to offset potential overfitting due to high variance in the learning algorithm. However, it must be tackled if it causes unfair outputs. A three-pronged approach of monitoring, model health checks, and explainability can foster algorithmic bias detection and mitigation.
Monitoring AI Models
The first weapon in the arsenal, monitoring, means that you watch the project for data-introduced bias and other drifts typical of AI models. However, tracking is effective only if the issues are remedied in time. Automated monitoring can therefore save critical resources like your team's time and enable faster solutions to algorithmic bias. The Censius AI observability platform is an interactive, easy-to-use solution that adds zing to a pre-emptive strike. Not only can you tackle issues in time, but you also receive reports for future documentation.
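To illustrate the kind of drift signal a monitoring setup watches for, here is a minimal sketch of the Population Stability Index (PSI) between training-time and live distributions of a binned feature. The bin shares are made-up numbers; a PSI above roughly 0.2 is a common rule of thumb for significant drift, though the threshold you choose is a judgment call.

```python
import math

# Hypothetical sketch: Population Stability Index (PSI) between the
# training-time and production distributions of a binned feature.
def psi(expected, actual):
    eps = 1e-6  # guard against empty bins
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

train_bins = [0.25, 0.25, 0.25, 0.25]  # price-band shares at training time
live_bins = [0.10, 0.20, 0.30, 0.40]   # shares observed in production

drift = psi(train_bins, live_bins)
print(f"PSI = {drift:.3f}")  # alert when PSI exceeds the chosen threshold
```

In an automated setup, this computation would run on a schedule against fresh production data, with alerts fired when the score crosses the threshold.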
Model Health Checks for fine-tuning
The second weapon, model health checks, keeps your AI model fine-tuned. Apart from ensuring smooth, predictable runs, automated checks are essential if you are re-using an AI model. The bias that can sneak in due to context transfer is difficult to detect, or becomes apparent only after the model performance degrades noticeably. The Censius AI observability platform can answer such needs as well as help your teams compare the historical performance of the model.
AI Explainability for Model Transparency
The third and final weapon, explainability, is the buzz of AI town. Explainability reports help you appreciate the decisions taken by the model and the significance of data attributes. While it converts the black-box AI model into a more transparent system, understanding model behavior for different cohorts can help catch bias against overlooked demographics. This fine-grained understanding of model performance not only aids bias detection in machine learning but also supports compliance with ethical standards.
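The cohort-level view can be sketched in a few lines: group predictions by cohort and compare accuracy across groups. The cohort names and records below are illustrative assumptions; a real explainability workflow would use richer metrics and attribution methods, but the core idea of slicing performance by demographic is the same.

```python
# Hypothetical sketch: compare model accuracy across user cohorts to
# surface demographics the model may be under-serving.
predictions = [
    {"cohort": "single", "correct": True},  {"cohort": "single", "correct": False},
    {"cohort": "single", "correct": False}, {"cohort": "family", "correct": True},
    {"cohort": "family", "correct": True},  {"cohort": "family", "correct": True},
]

def accuracy_by_cohort(rows):
    stats = {}
    for r in rows:
        hits, n = stats.get(r["cohort"], (0, 0))
        stats[r["cohort"]] = (hits + r["correct"], n + 1)
    return {cohort: hits / n for cohort, (hits, n) in stats.items()}

scores = accuracy_by_cohort(predictions)
print(scores)  # a large gap between cohorts is a candidate bias signal
```

Here the model is right every time for 'family' users but only a third of the time for 'single' users; a gap like that is the cue to investigate the training data and features for that cohort.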
If we have stoked your curiosity about how the Censius AI observability platform can help your team detect bias, you can get a customized demo or sign up for the 14-day free trial.