US Regulators Warn of AI Risk to Financial Systems


Financial Stability Oversight Council Expects AI Use to Increase

The U.S. Financial Stability Oversight Council classified artificial intelligence as an “emerging vulnerability.” (Image: Shutterstock)

U.S. regulators detailed the risks artificial intelligence poses to the financial system and classified the technology as an “emerging vulnerability.”

In its annual report, the Financial Stability Oversight Council – a team made up mostly of financial regulators and chaired by the secretary of the Department of the Treasury – highlighted AI’s potential to spur innovation but flagged its ability to introduce “certain risks.”

Financial services use of AI is expected to accelerate as organizations deploy the technology to reduce costs and improve employee and operational efficiency, performance and accuracy.

But this rapid adoption can also introduce hazards, such as cyber and model risks, and it could hamper financial stability, the council said.

“Supporting responsible innovation in this area can allow the financial system to reap benefits like increased efficiency, but there are also existing principles and rules for risk management that should be applied,” Treasury Secretary Janet Yellen said on Thursday.

Generative AI models use large datasets to identify patterns that allow them to generate new content, including text, software code, images and other media. That capability introduces operational risks related to data controls, privacy and cybersecurity.

Many AI approaches pose an explainability challenge: humans have a tough time reverse-engineering how the AI reached a given conclusion. This "black box" quality can make it difficult for organizations to understand the source of the information a model uses, and therefore to assess where and how to use the model, how much to rely on it, and how to gauge the accuracy and potential bias of the output it generates.

The council also warned of "complicating factors" associated with generative AI, such as hallucinations, which are flawed outputs presented in a convincing narrative, and added that assessing the quality of such output may require specific expertise. Some generative AI outputs may be inconsistent over time, even when the model is posed the same prompts. Users may not know the sources used to produce the output, and the financial institutions using these tools may not have transparency into, or control over, the dataset the underlying model uses, the council said.

It said that financial services companies should apply the same general risk management requirements to AI as they would apply to any technology. They must also keep a close watch on regulatory developments that aim to make the technology's use safer, such as President Joe Biden's October executive order, which established new safety-testing requirements for foundation models (see: Why Biden's Robust AI Executive Order May Fall Short in 2024).

In January, the National Institute of Standards and Technology published the AI Risk Management Framework, developed with input from the private and public sectors, to offer guidelines that help organizations better manage the risks associated with the design, development, use and evaluation of AI products, services and systems.

The FSOC also recommended monitoring the “rapid developments” in AI to ensure that oversight structures account for emerging risks to the financial system, while also facilitating efficiency and innovation. It advised financial institutions, market participants and regulatory and supervisory authorities to deepen their expertise and capacity to monitor AI innovation and usage to identify emerging risks.

“Errors and biases can become even more difficult to identify and correct as AI approaches increase in complexity, underscoring the need for vigilance by developers of the technology, the financial sector firms using it, and the regulators overseeing such firms,” the council said.

It also supported the G7 Cyber Expert Group's initiative to coordinate cybersecurity policy and strategy across its member jurisdictions and to address how AI and other new technologies could affect global financial systems.
