AI and Financial Markets: Understanding Regulatory Considerations, Fragmentation and Potential Incompatibilities

As Artificial Intelligence (AI) becomes more prevalent in financial markets, it is important to understand the regulatory considerations and potential incompatibilities that accompany AI-driven algorithms and models. This article explores the current regulatory environment, the fragmentation of that environment across jurisdictions, and potential incompatibilities with existing regulatory requirements.

Current Regulatory Environment

Despite the increasing use of AI-based algorithms and models in financial markets, only a small number of jurisdictions have introduced requirements specific to AI-driven applications. In most cases, regulation and supervision of these applications rest on overarching requirements for systems and controls (IOSCO, 2020). These consist primarily of rigorous testing of algorithms before they are deployed in the market, and continuous monitoring of their performance throughout their lifecycle.

Fragmentation of Regulatory Landscape

Due to the increasing complexity of some of these AI-based applications, the existing financial sector regulatory regimes may not be able to adequately address the systemic risks posed by the growing adoption of these techniques. Furthermore, the lack of transparency and explainability of some ML models, and the dynamic nature of deep learning models, may be incompatible with existing legal or regulatory requirements. This could lead to fragmentation of the regulatory landscape across national, international, and sectoral levels.

Potential Incompatibilities

In addition to existing regulation that applies to AI models and systems, a multitude of AI principles, guidance, and best practices have been published in recent years. However, the opaqueness of AI systems may make these principles difficult to translate into effective practical guidance, and can make it difficult to identify and prove possible breaches of laws, including those that protect fundamental rights and attribute liability.

Furthermore, the ease of use of standardised, off-the-shelf AI tools may encourage non-regulated entities to provide investment advisory or other services without proper certification or licensing. This could lead to regulatory arbitrage, particularly by BigTech entities that have access to large datasets.

The increasing use of AI-based algorithms and models in financial markets brings with it a need to understand the regulatory considerations and potential incompatibilities that accompany their use. While some regulation and guidance are already in place, the regulatory landscape risks fragmenting across national, international, and sectoral levels, and the limited transparency of some ML models, together with the dynamic nature of deep learning models, may conflict with existing legal or regulatory requirements. To ensure that these techniques can function safely and effectively across borders, greater consistency in the regulatory landscape is needed.