Will regulators’ warnings chill lenders’ use of AI?

The Consumer Financial Protection Bureau has issued a fresh warning to lenders that use artificial intelligence in lending decisions, saying they must be able to explain how their models decide which borrowers to approve and provide clear reasons to consumers who are declined.

None of this is new: Fair-lending and credit reporting laws have been on the books for 50 years, and there was no reason to think they wouldn’t apply to newer lending software. But by repeatedly issuing such warnings, the CFPB and the national bank regulators seem to be signaling closer scrutiny of the way banks and fintechs use such software.

This could be taken two ways. Banks and fintechs could decide the regulatory scrutiny isn’t worth the risk of using more advanced decision-making models. Or they could see the warnings as evidence that regulators understand the use of AI in lending is inevitable and are developing clear rules around what is okay and what is not. Early indications point to the latter, industry watchers said.

Regulators’ concerns

“Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions,” CFPB Director Rohit Chopra said in a news release May 26. “The law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand.”

The CFPB emphasized that regardless of the type of technology used, lenders must abide by all federal consumer financial protection laws, including the Equal Credit Opportunity Act, and that they “cannot justify noncompliance with ECOA based on the mere fact that the technology they use to evaluate credit applications is too complicated, too opaque in its decision-making, or too new.” ECOA requires creditors to provide a notice when they take an adverse action against an applicant, and that notice must contain specific and accurate reasons for the action.

The Equal Credit Opportunity Act, its implementing Regulation B and the requirement for adverse action notices that explain why applicants are declined have been around for decades, noted Chi Chi Wu, staff attorney at the National Consumer Law Center.

“What is new is that there's this technology that makes it a lot harder to provide the reasons why an adverse action was taken,” Wu said. “That's artificial intelligence and machine learning. The legacy systems for credit are built so that they can produce these reason codes that could be translated into reasons given why a credit score is the way that it is, and then that can be used as part of the adverse action notice.”

Explainability is harder in AI software, Wu argued. 

“It’s a lot harder when you let the machine go and make these decisions and use thousands or tens of thousands of variables,” she said. 
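Wu’s point can be made concrete. One common way to pull decline reasons out of a complex model is feature attribution, for example with SHAP values, which assign each input a share of an individual applicant’s score. The sketch below is illustrative only; the model, features and data are hypothetical and do not represent any particular lender’s system.

```python
# Minimal sketch: deriving adverse-action reasons from an ML model
# via SHAP feature attribution. All feature names and data are
# hypothetical, for illustration only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical training data: 1 = repaid, 0 = defaulted.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "utilization": rng.uniform(0, 1, 500),
    "months_since_delinquency": rng.integers(0, 120, 500),
    "inquiries_last_6mo": rng.integers(0, 10, 500),
})
y = (X["utilization"] < 0.6).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes one applicant's score to individual features.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
contributions = explainer.shap_values(applicant)[0]

# The most negative contributions become candidate adverse-action reasons.
reasons = sorted(zip(X.columns, contributions), key=lambda kv: kv[1])[:2]
for feature, impact in reasons:
    print(f"Key factor lowering score: {feature} (impact {impact:.3f})")
```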

Online lenders say it may be harder, but it’s doable.

At the Chicago-based online lender Avant, which has been using machine learning in lending since 2014, Debtosh Banerjee said his company has been complying with ECOA and other consumer protection laws all along.

In the early days, “One of the biggest problems we had was, how do we explain why we declined someone, because we still had to comply with all the rules,” said Banerjee, who is senior vice president and head of card and banking at Avant and formerly worked at U.S. Bank and HSBC. 

The company came up with algorithms that explain why applicants are denied credit. It has had to defend those models to regulators. 

“The fundamental rules are the same as they were 20 years back; nothing has changed,” Banerjee said. “We are highly regulated. Customers come to us and we have to give them reasons why they are declined. That's business as usual.” 

Other lenders that use AI, and AI-based lending software vendors like Upstart and Zest, say the models they use are not black boxes, but have had explainability built in from the beginning. These programs, they say, generate reports that explain lending decisions in more detail than traditional models do. They also say their software has built-in guardrails and tests for fair lending and disparate impact.
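The vendors don’t detail those tests publicly, but one conventional screen for disparate impact is the “four-fifths” or adverse impact ratio test, sketched below with made-up decision data. A ratio of group approval rates below 0.8 is the traditional red flag; real fair-lending analysis goes well beyond this single check.

```python
# Minimal sketch of the "four-fifths" (adverse impact ratio) test.
# The groups and decisions here are hypothetical toy data.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1],
})

# Approval rate per group, then the ratio of lowest to highest.
approval_rates = decisions.groupby("group")["approved"].mean()
ratio = approval_rates.min() / approval_rates.max()

# A ratio below 0.8 is the conventional red flag for disparate impact.
print(approval_rates)
print(f"Adverse impact ratio: {ratio:.2f} ->",
      "flag for review" if ratio < 0.8 else "within guideline")
```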

Alexey Surkov, the partner who leads Deloitte’s model risk management team, looks skeptically at such claims.

“Some of the larger institutions have teams of developers building and testing and deploying models of this kind all day long,” he said. They do use model risk management controls, as third-party vendors do, to handle documentation, explainability, monitoring and other safeguards, he said, but the controls don’t always filter out every problem.

“I would stop short of saying that they are all perfect on those scores,” Surkov said. “Sometimes there is a little bit of a gap between the marketing and the reality that we see when we go and actually test models and open up the hood and see just how transparent the model is and how well monitored it is.” He declined to give specific examples. 

A model may initially check off a lot of the boxes from a documentation and control perspective, but may require some additional work around things like explainability, he said. 

“This is not a new concept. It's not a new requirement. But it is certainly not a fully solved issue, either,” Surkov said. 

Deloitte has been fielding more calls from banks about AI governance, including explainability. The firm has what it calls a trustworthy AI framework designed to help companies with this.

Are regulators getting comfortable with banks’ use of AI?

Surkov sees the regulators’ warnings about AI in lending as an acknowledgement that regulated banks are already using these models.

“Historically the regulators have not been very supportive of the use of AI or machine learning or any sort of a black-box technology for anything,” Surkov said. “The positive thing here is that they are getting more and more comfortable and are basically saying to the banks, listen, we know that you're going to be using these more advanced models. So let's make sure that as we enter this new era, that we're doing it thoughtfully from a governance perspective and a risk perspective and that we are thinking about all of the risks.”

The calls for explainability, fairness, privacy and responsibility are not intended to reduce the use of technology, Surkov said. 

“They're meant to enable the use of this technology, like the seat belts and airbags and antilock brakes that will enable us to go much faster on this new highway,” he said. “Without those technologies, we'd be going 15 miles an hour, like we did a hundred years ago. So having regulations that clearly delineate what is okay, what is not okay and what institutions should have if they are to use these new technologies will enable the use of these technologies as opposed to reducing and shutting them down.” 

Banks will continue to be more conservative about using AI in lending than fintechs, Wu said.

“Banks have prudential regulators as well as the CFPB as their regulator,” Wu said. “Also, that's just their culture. They don't move quickly when it comes down to these issues. We hear complaints that banks that are credit card lenders haven't even moved to FICO 9, they’re still on FICO 8, so having them go from that to alternative data to AI algorithms, those are big leaps.”

Fintechs are more willing to move forward and say they have all the explainability they need. But all lenders need to be careful, Wu cautioned.

“The promise of AI is that it will be able to better judge whether people are good borrowers or not and be much more accurate than credit scores that are really blunt,” Wu said. “That's the promise, but we're not going to get there without intentionality and strong focus on ensuring fairness and not what you'd call woke washing.” 
