The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers

September 7, 2021 / IOSCO

EXECUTIVE SUMMARY

Background

Artificial Intelligence (AI) and Machine Learning (ML) are increasingly used in financial services, driven by a combination of greater data availability and computing power. The use of AI and ML by market intermediaries and asset managers may be altering firms' business models and risk profiles. For example, firms may use AI and ML to support advisory and support services, risk management, client identification and monitoring, selection of trading algorithms, and portfolio management.

The use of this technology by market intermediaries and asset managers may create significant efficiencies and benefits for firms and investors, including faster execution and lower costs for investment services. However, it may also create or amplify certain risks that could affect the efficiency of financial markets and result in consumer harm. The use of, and the controls surrounding, AI and ML within financial markets are therefore a current focus for regulators across the globe.

IOSCO identified its work on the use of AI and ML by market intermediaries and asset managers as a key priority. In April 2019, the IOSCO Board approved a mandate for Committee 3 on Regulation of Market Intermediaries (C3) and Committee 5 on Investment Management (C5) to examine best practices arising from the supervision of AI and ML. The committees were asked to propose guidance that member jurisdictions may consider adopting to address the conduct risks associated with the development, testing and deployment of AI and ML.

Potential risks identified in the Consultation Report

IOSCO surveyed and held roundtable discussions with market intermediaries and conducted outreach to asset managers to identify how AI and ML are being used and the associated risks. The Consultation Report, released in June 2020, highlighted the following areas where potential risks and harms may arise in relation to the development, testing and deployment of AI and ML:

Governance and oversight;

Algorithm development, testing and ongoing monitoring;

Data quality and bias;

Transparency and explainability;

Outsourcing; and

Ethical concerns.
