AI and bias

An AI model is a program that has been trained on a set of data, called the training set, to recognize certain types of patterns. AI models combine different types of algorithms with large amounts of data to learn from, with the goal of solving various problems.

But the data an AI is trained on are often biased. A model learned from biased data can itself be biased, and so can the decisions it makes.
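To make this concrete, here is a minimal sketch in Python using synthetic data and scikit-learn; the group, score and approval variables are invented for illustration and are not taken from any of the studies cited below. A simple model is trained on labels that encode a historical bias against one group, and the learned model then reproduces that gap for two otherwise identical applicants.

# Minimal sketch (synthetic data, not from any cited study) of how bias in
# training labels propagates into a learned model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)      # hypothetical protected attribute (0 or 1)
score = rng.normal(0.0, 1.0, size=n)    # hypothetical standardized creditworthiness score

# Historically biased labels: equally creditworthy applicants from group 1
# were approved less often in this synthetic "past" data.
p_approve = 1 / (1 + np.exp(-2 * score)) - 0.2 * group
labels = rng.random(n) < np.clip(p_approve, 0.0, 1.0)

model = LogisticRegression().fit(np.column_stack([group, score]), labels)

# Two applicants with the same score but different group membership:
print(model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1])
# The model reproduces the historical approval gap between the groups.

Nothing in the learning algorithm itself is "unfair" here; the skew comes entirely from the labels the model was given.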

This is a serious problem for minorities, since biased data can lead to discrimination. AI tools have already perpetuated several types of discrimination.

In 2019, a study by researchers at the University of California, Berkeley found that lenders using algorithms to price loans had discriminated against borrowers on the basis of race. The researchers analyzed 7 million 30-year mortgages and found that 1.3 million creditworthy black applicants were rejected between 2008 and 2015 because of racial discrimination.

Similar discrimination has happened in hiring and financial lending.

Recently, the US Department of Justice and the Equal Employment Opportunity Commission (EEOC) warned that AI algorithms can result in unlawful discrimination against people with disabilities, in violation of the Americans with Disabilities Act (ADA).

At the beginning of this year, UK regulators also warned banks about the use of AI in loan applications. UK banks that use AI to approve loan applications must be able to prove that the technology will not worsen discrimination against minorities; they must also recognize the inherent flaws in machine learning models and improve the transparency of their algorithms.

Discriminatory AI has also appeared in other areas. In 2016, the US-based non-profit organization ProPublica found that an AI tool used in US courtrooms to predict future crimes (more precisely, the risk of recidivism and the risk of violent recidivism) was biased against black defendants. The tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was much more prone to mistakenly label black defendants as likely to reoffend than white ones. The algorithm often predicted black defendants to be at a higher risk of recidivism than they actually were, while white defendants were often predicted to be less risky than they actually were. Black defendants were also twice as likely as white defendants to be misclassified as being at a higher risk of violent recidivism.
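One way to quantify this kind of disparity is to compare error rates between groups, for example the false positive rate: the share of people who did not reoffend but were labeled high risk. The sketch below uses made-up predictions and outcomes for two hypothetical groups, purely to show how such a per-group comparison can be computed.

# Sketch with made-up numbers: compare false positive rates between two groups.
import numpy as np

def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did not reoffend but were labeled high risk."""
    did_not_reoffend = ~reoffended
    return (predicted_high_risk & did_not_reoffend).sum() / did_not_reoffend.sum()

# Hypothetical predictions (labeled high risk?) and outcomes (reoffended?).
pred_a = np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=bool)   # group A
out_a  = np.array([1, 0, 0, 1, 0, 0, 0, 0], dtype=bool)
pred_b = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=bool)   # group B
out_b  = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=bool)

print("FPR group A:", false_positive_rate(pred_a, out_a))
print("FPR group B:", false_positive_rate(pred_b, out_b))
# A large gap between these two rates is the kind of disparity ProPublica
# reported between black and white defendants.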

Since AI can make decisions that affect people’s lives, bias in AI can harm people. That is why it is so important that the developers and users of AI are aware of its flaws and limitations.

Author: Matej Kovačič

Links: 

Consumer-Lending Discrimination in the FinTech Era, https://faculty.haas.berkeley.edu/morse/research/papers/discrim.pdf

EEOC, DOJ Warn Artificial Intelligence in Employment Decisions Might Violate ADA, https://www.jdsupra.com/legalnews/eeoc-doj-warn-artificial-intelligence-5070319/

UK watchdogs to clamp down on banks using discriminatory AI in loan applications, https://www.businessinsider.com/uk-clamps-down-on-discriminatory-ai-in-loan-applications-2022-2

Machine Bias, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing