AI is now infused into most of the technologies we use day to day, and across industries it is used for far more than that. For example, it automates decision-making in finance and recruitment, predicts outcomes in healthcare, and spots anomalies in security surveillance.
What Is AI Bias?
Bias is an injustice against a person or group on grounds such as race, gender, sexuality, disability or class. But AI doesn’t understand context or have intuition, so how can it be biased? Because AI is built by an inherently biased source: humans. As a result, discrimination gets scaled out and automated.
For example, Amazon was building an AI recruitment system that, had it been rolled out, would have systematically disadvantaged female applicants for technical roles. The system was trained on data that was heavily male-dominated, so it learned to penalise CVs that included words such as ‘women’. Once the problem was discovered, Amazon pulled the project to prevent the bias reaching its systems.
Types of Bias in AI
Here are some of the more common bias types:
Historical bias: When AI is trained on past data that includes historical prejudice or discrimination, it can learn and reinforce those patterns, perpetuating the bias.
Algorithmic bias: If the AI relies on flawed or incomplete data, it can reinforce existing stereotypes or inequalities instead of providing fair, objective outcomes.
Cognitive bias: When humans introduce unconscious or personal bias that shapes the dataset or the AI’s behaviour, leading to unfair or inaccurate outcomes.
Confirmation bias: When the AI is trained to favour information that aligns with its assumptions while ignoring contradictory evidence.
Automation bias: When humans interacting with AI implicitly trust the results without questioning them, even when the AI makes errors.
Measurement bias: When the data collected does not reflect real-life conditions, it can lead to unfair outcomes.
Selection bias: When the data used doesn’t fully represent the population, AI can make unfair or incorrect decisions.
What Can We Do About It?
Firstly, we need to ensure the data we’re feeding in is fit for purpose. Where is it from? Is it diverse? Is the sample large enough? Are some groups under-represented or ignored entirely?
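A simple first step is to measure how well each group is actually represented in the training data. Below is a minimal sketch of such a check, assuming a pandas DataFrame with a hypothetical ‘gender’ column; the column name, the example data and the 10% threshold are illustrative assumptions, not fixed rules.

```python
# Minimal sketch: flag groups that are under-represented in a dataset.
# The 'gender' column and the 10% threshold are illustrative assumptions.
import pandas as pd

def check_representation(df: pd.DataFrame, column: str, min_share: float = 0.10) -> pd.Series:
    """Report each group's share of the data and warn about any below min_share."""
    shares = df[column].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_share:
            print(f"Warning: '{group}' makes up only {share:.1%} of the data")
    return shares

# Example with made-up applicant data: a heavily skewed sample triggers the warning.
applicants = pd.DataFrame({"gender": ["male"] * 90 + ["female"] * 10})
print(check_representation(applicants, "gender"))
```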
You also need to make sure your model can generalise to a wide set of scenarios. To do this, test the model in different environments and contexts.
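One way to do this is slice-based evaluation: measure performance separately for each environment or group and compare the results. The sketch below assumes a trained scikit-learn-style classifier and a hypothetical ‘region’ column used purely to split the test data; both are illustrative assumptions rather than a prescribed setup.

```python
# Minimal sketch: evaluate a trained model separately on each slice of the test set.
# Assumes a scikit-learn-style classifier and a hypothetical 'region' slice column.
import pandas as pd
from sklearn.metrics import accuracy_score

def evaluate_by_slice(model, X: pd.DataFrame, y: pd.Series, slice_column: str) -> pd.Series:
    """Compute accuracy for each context defined by slice_column."""
    results = {}
    for value, rows in X.groupby(slice_column):
        preds = model.predict(rows.drop(columns=[slice_column]))
        results[value] = accuracy_score(y.loc[rows.index], preds)
    return pd.Series(results, name="accuracy")

# e.g. evaluate_by_slice(model, test_features, test_labels, "region")
# A large gap between slices suggests the model does not generalise well.
```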
You also need to ensure that both the model and the infrastructure design are free from bias from the get-go. Diverse, interdisciplinary teams trained to recognise bias will go a long way towards ensuring the tool is bias-free, as will continuous audits and feedback loops.
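Continuous audits can include simple quantitative checks. The sketch below shows one such check, the demographic parity gap (the difference in positive-outcome rates between groups); the example data, column meanings and the 0.1 review threshold are assumptions for illustration, not a definitive standard.

```python
# Minimal sketch: one audit metric, the demographic parity gap between groups.
# The example data and the 0.1 threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_gap(predictions: pd.Series, groups: pd.Series) -> float:
    """Return the largest gap in positive-prediction rate across groups."""
    rates = predictions.groupby(groups).mean()
    return float(rates.max() - rates.min())

# Example: 'hire' (1) / 'reject' (0) predictions grouped by gender.
preds = pd.Series([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
gender = pd.Series(["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"])
gap = demographic_parity_gap(preds, gender)
if gap > 0.1:  # flag for human review as part of a regular audit
    print(f"Audit flag: selection-rate gap of {gap:.0%} between groups")
```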
To get more tips on how to recognise bias and ensure the responsible use of AI, join Hannah Dahl’s virtual classroom, Ethical Use of AI.