The potential of AI in financial services is immense. With years of accumulated large data sets and access to real-time transactional data, financial institutions (FIs) have a huge opportunity to develop unrivalled client experiences and personalization.

However, along with its myriad tangible benefits, AI brings a host of new challenges that require new governance processes and validation tools to ensure its safe and effective deployment within the enterprise.

Borealis AI and RBC, with our combined expertise in AI safety, regulation, and model governance, are uniquely well-placed to navigate the complexities of this space and develop a robust, comprehensive AI validation process.

Model validation has been in place for many years. It helps to ensure that models are performing as expected, identifies potential limitations and assumptions, and assesses possible negative impacts. Guidance from the US Federal Reserve dictates that “all model components—inputs, processing, outputs, and reports—should be subject to validation…”[1] FIs in Canada must adhere to similar regulations[2] and have already developed extensive validation processes to meet these requirements and ensure that model risk is appropriately managed. However, the advent of AI poses a number of challenges for traditional validation techniques.

First, the volume and variety of data used by AI models make them more costly to validate. AI models can make use of significantly more variables—referred to as “features” in AI parlance—than conventional quantitative models, and ensuring the integrity and suitability of these large datasets requires more computational power and more attention from validators. This challenge is particularly acute for AI models that use unstructured natural-language data such as news feeds and legal or regulatory filings, which require new validation tools as well as more resources. Moreover, AI modelers often use “feature engineering” to transform raw data prior to training, which further increases the dimensionality of the data that must be validated.
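To make the dimensionality point concrete, here is a minimal sketch of feature engineering, with invented column names and transforms chosen purely for illustration: a handful of raw transaction fields is expanded into a larger set of derived features, each of which a validator must then account for.

```python
import numpy as np
import pandas as pd

# Hypothetical raw transaction data (columns invented for illustration).
raw = pd.DataFrame({
    "amount": [120.0, 35.5, 980.0, 12.25],
    "merchant_type": ["grocery", "travel", "travel", "grocery"],
    "hour": [9, 14, 23, 8],
})

def engineer_features(df: pd.DataFrame) -> pd.DataFrame:
    """Expand raw columns into engineered features (illustrative only)."""
    out = pd.DataFrame(index=df.index)
    out["log_amount"] = np.log1p(df["amount"])                 # compress heavy-tailed amounts
    out["is_night"] = (df["hour"] >= 22) | (df["hour"] <= 5)   # behavioural flag
    # One-hot encoding multiplies dimensionality by the number of categories.
    out = out.join(pd.get_dummies(df["merchant_type"], prefix="merchant"))
    return out

features = engineer_features(raw)
print(raw.shape[1], "raw columns ->", features.shape[1], "engineered features")
```

Even this toy example grows the feature space; production pipelines with rolling aggregates, interactions, and text embeddings grow it far more, and every derived column is something the validation process must cover.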

Second, the complexity of AI methodologies makes it more difficult for validators to assess the models’ performance. Compared to conventional models with relatively few features, it is harder to determine how AI models will behave—and why they behave that way—across the full range of inputs the models might face once deployed. The complexity and limited explainability of AI models can also make it difficult to identify biased or unfair predictions. Ensuring that models treat all groups of customers fairly, and that they abide by fair lending standards, is an important part of the validation process. Note that bias can arise from input data as well as modeling techniques, so the volume of data used by AI models can also make it more difficult to root out bias.
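One common fairness check that a validator might run is demographic parity: comparing positive-prediction rates across customer groups. The sketch below is a generic illustration of that metric, not the specific validation method described in this article, and the toy data and any review threshold are invented.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups.

    A large gap suggests one group is approved far more often than
    another and warrants closer review by validators.
    """
    y_pred = np.asarray(y_pred, dtype=bool)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy loan-approval predictions for two hypothetical customer groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"approval-rate gap: {gap:.2f}")
```

In practice validators would examine several complementary metrics (equalized odds, calibration by group, and so on), since no single statistic captures every notion of fairness.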

Finally, the dynamic nature of many AI models also creates unique validation challenges. Conventional models are typically calibrated once using a fixed training dataset before being deployed. AI models, on the other hand, often continue to learn after deployment as more data become available, and model performance may degrade over time if these new data are distributed differently from, or are of lower quality than, the data used during development. These models must be repeatedly validated and continuously monitored to ensure that they remain robust and reliable.
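A standard way to monitor for this kind of degradation is a drift statistic such as the population stability index (PSI), which compares a feature's distribution at development time against live data. The sketch below is a generic illustration; the thresholds often quoted for PSI (below 0.1 stable, 0.1–0.25 moderate shift, above 0.25 significant shift) are conventional rules of thumb, not any institution's policy.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a development-time distribution and live data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the training range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # distribution seen during development
live = rng.normal(0.5, 1.2, 10_000)   # shifted distribution after deployment
psi = population_stability_index(train, live)
print(f"PSI: {psi:.3f}")
```

Continuous monitoring would compute such statistics on a schedule for every material feature and model output, triggering revalidation when drift exceeds tolerance.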

To meet these challenges, FIs must develop new validation methods that are better equipped to deal with the scale, complexity, and dynamism of AI. Borealis AI and RBC’s model governance team have joined forces to develop a new toolkit that automates key parts of the validation process. Our approach focuses on providing stronger safety guarantees by conducting an exhaustive search of the input space. This initiative will help to support faster AI deployment and more agile model development, and it will provide validators with more comprehensive and systematic assessments of model performance. AI safety is central to everything we do at Borealis AI, much like strong governance and risk management practices are central to RBC. Together we are developing leading-edge applications of AI in finance while pushing forward the frontier of AI model governance.
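To give a flavour of what searching an input space for safety violations can look like, here is a toy sketch of checking a property at every point of a discretized input grid. The scoring function, input ranges, and property are all invented for illustration; the actual toolkit described above is far more sophisticated and is not represented by this code.

```python
import itertools
import numpy as np

def score(amount, utilization):
    """Stand-in scoring model (illustrative); real models are far more complex."""
    return 1.0 / (1.0 + np.exp(-(0.002 * amount - 3.0 * utilization)))

def exhaustive_input_check(grid_axes, prop):
    """Evaluate a property at every point of a discretized input grid.

    Returns the list of grid points where the property fails.
    """
    failures = []
    for point in itertools.product(*grid_axes):
        if not prop(*point):
            failures.append(point)
    return failures

# Property to verify: the predicted score must always lie in [0, 1].
amounts = np.linspace(0, 50_000, 101)
utilizations = np.linspace(0, 1, 51)
violations = exhaustive_input_check(
    [amounts, utilizations],
    lambda a, u: 0.0 <= score(a, u) <= 1.0,
)
print(len(violations), "violations found")
```

Brute-force grids only scale to a handful of dimensions; systematic approaches rely on smarter search and formal techniques to cover high-dimensional input spaces.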