For regulators, the outcome is what matters
Fortunately, regulators are very supportive of the use of AI in credit processes. But that doesn't mean they are blind to the downside. Machine learning models are known to discriminate if the training data set is not carefully curated. So, regulators are understandably cautious.
As a result, regulators require that the same validation processes that apply to traditional regression models also apply to AI. Specifically, auto lenders should be able to easily explain and interpret AI models. And AI models should yield consistent and reproducible results. It is not good enough to implement 'black box' models that consume scores of data points and deliver a decision that cannot be unpacked.
With current, traditional models, the provider can explain to the customer which parameters fed the model (payment history, for example), what happened during the decision process, and why they were given a particular answer. And this will remain the standard. Generally speaking, regulators will focus on outcomes. If those outcomes appear to be discriminatory, the regulatory risk, financial penalties, and reputational damage for lenders could be devastating.
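To make that concrete, here is a minimal sketch of what "unpacking" a decision can look like for a simple linear scorecard. The feature names, weights, and threshold are all illustrative assumptions, not a real credit model; the point is only that each feature's contribution to the final answer can be reported back to the customer.

```python
import math

# Hypothetical scorecard weights -- illustrative values, not a real credit model.
WEIGHTS = {"payment_history": 2.0, "utilization": -1.5, "account_age_years": 0.3}
BIAS = -1.0

def explain_decision(applicant: dict) -> dict:
    """Score an applicant and return the per-feature contributions
    that drove the decision, so the answer can be unpacked."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    return {
        "approved": probability >= 0.5,
        "probability": round(probability, 3),
        "contributions": contributions,  # why this particular answer was given
    }

applicant = {"payment_history": 0.9, "utilization": 0.4, "account_age_years": 5.0}
result = explain_decision(applicant)
```

With a model structured this way, the lender can point to exactly which inputs helped or hurt the application; the regulatory expectation is that AI models offer a comparable level of transparency, whatever technique is used to produce it.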
So yes, engaging with regulators may mean moving a little more slowly. But it also means that the industry will ultimately adopt standards that are best in class. Regulators don't demand uniformity in the way auto lenders use AI, as long as its use is fair. What they want is a vibrant industry that isn't brought low by bias.