With the rise of artificial intelligence (AI) and machine learning (ML) technologies, financial services have begun to incorporate these methods into their decision-making processes. However, there are concerns that AI and ML can introduce or amplify bias and discrimination in financial services. In this article, we discuss the risks of bias and discrimination that AI and ML can bring, how data labelling and structuring can help prevent them, and the role of humans in decision-making processes informed by AI.
Risk of Bias and Discrimination
Depending on how they are used, AI and ML methods have the potential either to help avoid the discrimination that arises from human interactions, or to intensify biases, unfair treatment and discrimination in financial services. By delegating the human-driven part of decision-making to an algorithm, the user of an AI-powered model avoids the biases attached to human judgement.
At the same time, the use of AI applications risks bias or discrimination by compounding existing biases found in the data, by training models on such biased data, or by identifying spurious correlations. The use of flawed or inadequate data may result in wrong or biased decision-making in AI systems. Poor quality data can produce biased or discriminatory decisions through two avenues: ML models trained on inadequate data risk producing inaccurate results even when fed good quality data at inference time, and, equally, ML models trained on high-quality data can still produce questionable output if they are then fed unsuitable data, despite the well-trained underlying algorithm.
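As a minimal, hedged illustration of the first avenue, the sketch below trains a model on synthetic loan-approval data in which one group was historically penalised by past human decisions. The dataset, variable names and thresholds are all invented for illustration; the same pattern would appear with any training set that bakes in historical bias.

```python
# Minimal sketch: a model trained on historically biased labels
# reproduces that bias at inference time. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature (income) and one protected attribute (group).
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)  # two demographic groups, 0 and 1

# Historical labels: approvals depend on income, but group 1 was also
# penalised by past human decisions -- the bias baked into the data.
approved = (income - 10 * group + rng.normal(0, 5, n)) > 45

# Training on the protected attribute compounds the historical bias.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical income now receive different approval
# probabilities purely because of group membership.
same_income = np.array([[50, 0], [50, 1]])
print(model.predict_proba(same_income)[:, 1])
```

Dropping the protected attribute from the feature matrix would not, by itself, solve the problem: other features correlated with group membership can act as proxies, which is why the data-quality checks discussed below matter.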
Labelling and Structuring of Data Used in ML Models
Labelling and structuring data is an important, albeit tedious, task that is necessary for ML models to perform. AI can only distinguish the signal from the noise if it can successfully identify and recognise what a signal is, and models need well-labelled data to be able to recognise patterns in it. To that end, supervised learning models (the most common form of AI) require feeding the software stacks of pre-tagged examples, classified in a consistent manner, until the model can learn to identify the data category by itself.
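A minimal sketch of this workflow, assuming scikit-learn and an invented set of pre-tagged text examples (the "spam"/"ham" labels and the sample texts are placeholders, not from the original):

```python
# Minimal sketch of supervised learning: the model sees consistently
# pre-tagged examples until it can assign the category to unseen
# inputs by itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Consistently labelled training examples -- the tedious part.
texts = [
    "limited time offer, claim your prize now",
    "meeting moved to 3pm, see agenda attached",
    "you have won a free cruise, click here",
    "quarterly report draft for your review",
]
labels = ["spam", "ham", "spam", "ham"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

# The trained model now categorises new, untagged data on its own.
print(model.predict(["claim your free prize today"]))  # -> ['spam']
```

Inconsistent tagging (the same kind of example labelled differently by different annotators) degrades this process directly, which is why labelling quality is so consequential for the output.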
Analysis and labelling of data by humans present opportunities to identify errors and biases in the data used, although according to some it may inadvertently introduce other biases, as it involves subjective decision-making. Because the process of data cleansing and labelling is prone to human error, a number of solutions that themselves involve AI have started to develop. Considerations around the quality of the data and its level of representativeness can help avoid unintended biases at the output level.
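One simple representativeness check is sketched below, assuming pandas; the column names, reference population shares and the 5% tolerance are illustrative assumptions, not a standard.

```python
# Minimal sketch of a representativeness check before training: compare
# how each group is represented in the labelled data against a reference
# population, and flag material deviations.
import pandas as pd

def check_representativeness(df, group_col, reference_shares, tolerance=0.05):
    """Flag groups whose share in the training data deviates from the
    reference population by more than `tolerance`."""
    observed = df[group_col].value_counts(normalize=True)
    flags = {}
    for group, expected in reference_shares.items():
        actual = float(observed.get(group, 0.0))
        if abs(actual - expected) > tolerance:
            flags[group] = {"expected": expected, "observed": round(actual, 3)}
    return flags

# Labelled applications, skewed towards group "A".
data = pd.DataFrame({"group": ["A"] * 800 + ["B"] * 200})
print(check_representativeness(data, "group", {"A": 0.6, "B": 0.4}))
# Both groups are flagged: the sample over-represents "A" and
# under-represents "B" relative to the reference population.
```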
The Role of Humans in Decision-Making Processes Informed by AI
The role of humans in decision-making processes informed by AI is critical in identifying and correcting for biases built into the data or the model design, and in explaining the output of the model, although the extent to which all this is feasible remains an open question. The human parameter is critical both at the data input stage and at the query input stage, and a degree of scepticism in evaluating model results can be critical in minimising the risk of biased model output and decision-making.
The design of an ML model and its audit can further strengthen the degree of assurance about the robustness of the model when it comes to avoiding potential biases. Inadequately designed and controlled AI/ML models carry a risk of exacerbating or reinforcing existing biases while at the same time making discrimination even harder to observe. Audit mechanisms that sense-check the results of the model and its algorithm against baseline datasets can help ensure that there is no unfair treatment or discrimination by the technology.
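One such sense check is sketched below, assuming a scikit-learn style model with a predict method. The placeholder model, the synthetic baseline data and the 80% threshold are all illustrative assumptions (the threshold loosely echoes the well-known "four-fifths" rule of thumb, not a regulatory requirement).

```python
# Minimal sketch of one audit check: compare the model's approval rate
# per group on a baseline dataset, a demographic-parity style comparison.
import numpy as np
from sklearn.dummy import DummyClassifier

def audit_approval_rates(model, X_baseline, groups, min_ratio=0.8):
    """Approval rate per group, plus a pass/fail flag: fail if any
    group's rate falls below `min_ratio` of the best-treated group's."""
    preds = model.predict(X_baseline)
    rates = {g: float(preds[groups == g].mean()) for g in np.unique(groups)}
    best = max(rates.values())
    passed = best == 0 or min(rates.values()) / best >= min_ratio
    return rates, passed

# Usage with a placeholder model and synthetic baseline data.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
groups = rng.integers(0, 2, 1000)
model = DummyClassifier(strategy="uniform", random_state=0)
model.fit(X, rng.integers(0, 2, 1000))

rates, passed = audit_approval_rates(model, X, groups)
print(rates, "PASS" if passed else "REVIEW")
```

A failed check does not prove discrimination on its own, but it flags results for the human review discussed above.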
The risk of bias and discrimination in AI-powered financial services is a real concern. Poor quality data and inadequate data labelling can lead to biased decision-making. The role of humans in decision-making processes informed by AI is critical in avoiding potential biases, and auditing mechanisms can help ensure that there is no unfair treatment or discrimination. With the right measures and safeguards in place, AI and ML can be used in financial services in a safe and secure manner.