As AI systems become more prevalent, it is essential to have clear governance arrangements and accountability mechanisms in place. This is especially true for high-value decision-making, such as determining who gets access to credit or how investment portfolios are allocated. Organisations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning.
To ensure that intended outcomes for consumers are met, those outcomes should be built into any governance framework, along with an assessment of how they are achieved using AI technologies. Human oversight throughout the AI product and system lifecycle is equally important, acting as a safeguard against unintended behaviour in AI systems.
Existing Governance Frameworks and Model Committees
Financial market participants rely on existing governance and oversight arrangements for the use of AI-based algorithms, as these are not considered fundamentally different from conventional ones. Existing model governance frameworks can serve as the basis for developing or adapting governance for AI activity, taking into account the particular considerations and risks that AI introduces.
Internal model committees have been set up within financial service providers to design, approve, and oversee the implementation of model governance processes. These committees are responsible for model building, documentation, and validation. Model validation is typically performed on holdout datasets, while other standard processes include the monitoring of model inputs, outputs, and parameters.
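The idea behind holdout validation is that a model is fitted on one portion of the data and judged only on the portion it never saw. The sketch below is purely illustrative (the synthetic data, the toy threshold "model", and the 80/20 split ratio are all assumptions, not anything prescribed by a particular firm's framework):

```python
import random

random.seed(0)

# Synthetic applicant data: (feature, label), where the feature loosely
# drives the label, so a learned threshold should beat chance on unseen data.
data = []
for _ in range(1000):
    x = random.gauss(0, 1)
    data.append((x, int(x + random.gauss(0, 0.5) > 0)))

# 80/20 split: fit on `train`, validate on `holdout`.
split = int(0.8 * len(data))
train, holdout = data[:split], data[split:]

def accuracy(threshold, rows):
    """Share of rows where 'feature above threshold' matches the label."""
    return sum((x > threshold) == bool(y) for x, y in rows) / len(rows)

# "Train": pick the decision threshold that maximises training accuracy.
candidates = [t / 10 for t in range(-20, 21)]
best = max(candidates, key=lambda t: accuracy(t, train))

# Validate on data the model never saw; a large gap between training and
# holdout accuracy would indicate overfitting.
holdout_acc = accuracy(best, holdout)
print(f"threshold={best:.1f}, holdout accuracy={holdout_acc:.2f}")
```

In practice, validation teams would use richer models and metrics, but the separation of fitting data from evaluation data is the same.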
AI is being deployed for RegTech purposes, and as part of their model governance, financial services companies have automated processes to monitor and control the data that is consumed by the models in production, as well as to monitor model outputs.
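One common way such automated monitoring is implemented is to compare the distribution of an input (or score) in production against the distribution seen when the model was approved, using a drift statistic such as the Population Stability Index. A minimal sketch, assuming a simple equal-width binning and the commonly quoted 0.25 alert threshold (both conventions vary between firms):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (approval-time)
    distribution and a production distribution of a model input or score.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant shift warranting review."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # Floor counts at 0.5 so the log term below is always defined.
        return [max(c, 0.5) / len(values) for c in counts]

    e_shares, a_shares = bucket_shares(expected), bucket_shares(actual)
    return sum((a - e) * math.log(a / e) for a, e in zip(a_shares, e_shares))

# Reference data (e.g. scores at model approval) vs. shifted live data.
reference = [0.1 * i for i in range(100)]
shifted = [0.1 * i + 3.0 for i in range(100)]

print(round(psi(reference, reference), 4))  # identical data: PSI is 0.0
alert = psi(reference, shifted) > 0.25
print(alert)  # the shifted distribution trips the alert threshold
```

The same comparison can be run on model outputs, so that both what the model consumes and what it produces are tracked against the conditions under which it was validated.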
Ultimate responsibility and accountability for AI-based systems lies with executive and board-level management, who must ensure that model risk remains within the firm's tolerance and that models do not produce disparate results across groups of customers. It is equally important to ensure that it is possible to determine why a model produced a given output.
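A basic check for disparate results is to compare approval rates across groups. The sketch below uses the "four-fifths" ratio as the flag threshold; that threshold originates in US employment guidance and is used here only as an illustrative cut-off, and the group names and decision data are hypothetical:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 approval decisions."""
    return {group: sum(v) / len(v) for group, v in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group approval rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical credit-approval decisions by applicant group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

ratio = disparate_impact_ratio(decisions)
print(round(ratio, 2))
print(ratio >= 0.8)  # below the illustrative 4/5 threshold: flag for review
```

Such a ratio is only a first-line screen; a flagged model would still need human review to establish whether the disparity is justified by legitimate risk factors.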
Outsourcing and Third-Party Providers
When AI techniques are outsourced to third parties, risks arise around competitive dynamics and model convergence: if third-party models in the market lack heterogeneity, herding behaviour and bouts of illiquidity can emerge in times of stress.
Firms outsourcing AI techniques should therefore consider the associated concentration and competition risks, maintain the skills needed to audit and perform due diligence over the services provided by third parties, and have contingency and security plans in place in case of a disruption of service with potential systemic impact.
In sum, governance and accountability are essential to ensuring that AI models are deployed in a safe and responsible manner. Existing model governance frameworks, overseen by internal model committees, provide a workable foundation once adapted for AI activity; outsourcing arrangements demand attention to concentration risk and competitive dynamics; and ultimate responsibility and accountability for AI-based systems rests with executive and board-level management.