Is AI a threat to fair lending?


There are all sorts of legal and technical questions about how lending rules apply to the new breed of online lenders, but here's a more fundamental one: How sure are these lenders that their automated technology is colorblind?

Even if a company has the best intentions of following fair-lending principles, it's debatable whether the artificial intelligence engines that online lenders typically use, and that banks are just starting to deploy, can make credit decisions without inadvertently favoring affluent areas over minority neighborhoods.

AI-based lending platforms analyze thousands of data points, including traditional and alternative credit bureau data, bank account records, social media streams and public records, and find patterns that indicate creditworthiness, propensity to default and likelihood of fraud. The machines could make credit decisions that end up redlining an area even if they never receive an address.

For instance, a system that considers college data could learn that graduates of a particular school are good credit risks, even though those graduates may come mostly from privileged socioeconomic backgrounds.
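
One rough way to surface that kind of proxy effect is to check how strongly each candidate input correlates with a protected attribute in historical data before it is allowed into a model. The sketch below is illustrative only; the column names and threshold are hypothetical, and this is not any lender's or vendor's actual procedure.

```python
# Illustrative sketch: flag inputs that correlate strongly with a protected
# attribute and could act as proxies for it. Column names are hypothetical.
import pandas as pd

def find_proxy_features(df: pd.DataFrame, protected_col: str, threshold: float = 0.3) -> dict:
    """Return numeric features whose correlation with any protected-group
    indicator exceeds the threshold."""
    indicators = pd.get_dummies(df[protected_col], prefix=protected_col, dtype=float)
    features = df.drop(columns=[protected_col]).select_dtypes("number")
    flagged = {}
    for col in features.columns:
        # Maximum absolute correlation against any group indicator.
        corr = indicators.apply(lambda ind: features[col].corr(ind)).abs().max()
        if corr >= threshold:
            flagged[col] = round(float(corr), 2)
    return flagged

# e.g., a hypothetical 'college_tier' column might be flagged because it tracks
# 'race' in the historical data, even though race itself is never scored.
```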

“These are issues every lender has,” said Jim Moynes, vice president of risk management at Ford Motor Credit Co., which recently began testing ZestFinance’s software in its underwriting process but has not yet put it into production. “We have compliance processes today, and we’ll have to see how we adjust, if need be, those processes in the future of machine learning to make sure we stay where we are today — compliant.”


João Menano, co-founder and CEO of James, a provider of AI-based online lending software to banks (until recently it was called CrowdProcess), pointed out that a lender might not consider age, gender or race in its underwriting now, but machine learning could learn that a data point that correlates with one of those factors is relevant to credit decisions.

“So how do you ensure fair lending? That's the big question,” Menano said.

First of all, a company has to figure out how it defines discrimination, he said.

“If my bank is in a region where there are more black people than white, what is not discriminating?” Menano said. “Giving 50-50? Or giving according to whatever the population distribution is? These are the questions that are going to be all over the newspapers and pondered by regulators for the next five years. It's very complex.”

Vendors’ response

James' software helps bank clients avoid discrimination by applying a test based on methods published by the Consumer Financial Protection Bureau to loan decisions.

“The CFPB has helpfully made some of their methods for evaluating discrimination available online, which has greatly helped with building prototypes with our U.S. clients in this field,” Menano said. “From a user point of view, the most important ability is to be able to self-diagnose for illegal discrimination, as the bias is often contained in the data, appearing with no fault of the risk officer.”

The software also can adjust the acceptance threshold to ensure equal opportunity for different populations, and it can monitor and flag inconsistencies.
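
As a rough illustration of what such self-diagnosis can look like, the snippet below computes approval rates by group and the widely used "four-fifths" adverse impact ratio on back-tested decisions. It is a generic screen, not the CFPB's published methodology or James' actual code, and the column names are assumptions.

```python
# Generic disparate impact screen on back-tested decisions (hypothetical columns).
import pandas as pd

def adverse_impact_ratios(decisions: pd.DataFrame, group_col: str = "group",
                          approved_col: str = "approved") -> pd.Series:
    """Compare each group's approval rate to the highest-approving group's rate."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    return (rates / rates.max()).sort_values()

# Ratios below roughly 0.8 are conventionally treated as a signal to investigate,
# whether the cause is the model, the data or the acceptance threshold.
```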

Douglas Merrill, founder and CEO of ZestFinance, pointed out that humans make arbitrary assumptions about creditworthiness just as much as, if not more than, AI software.

“Any classifier you like is subject to inducing categories that you don’t want it to,” he said. “People do face-to-face categorization and machines induce categories.”

Banks tend to run annual tests on their loan portfolios to ensure their policies, practices and decisions are not having a disparate impact.

“It’s very painful,” Merrill said. “Some do it once a year, some do it only every time they’re examined, which is every couple of years. And it’s a horrible process. It starts with defining categories and having compliance lawyers analyze your current book and then go through every new loan, to make sure you haven’t triggered a problem for yourself.”

ZestFinance, he said, has a set of tools that can execute the same kinds of tests in real time, and determine if the software has learned a classification that could negatively affect a protected category.

Credibly’s AI-based small-business lending platform sifts through thousands of data points and finds the 200 or 300 that are predictive, said Ryan Rosett, co-founder and CEO. Data that is not predictive is kicked out of the system.
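
Credibly has not published how it winnows its inputs, but a common approach, sketched below under that assumption, is to score each candidate feature against observed repayment outcomes and keep only the strongest few hundred.

```python
# Illustrative feature winnowing: keep the k inputs with the most predictive
# signal for the default flag. Not Credibly's actual pipeline.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def keep_predictive_features(X: pd.DataFrame, defaulted: pd.Series, k: int = 300) -> pd.DataFrame:
    """Retain the k features with the highest mutual information with defaults."""
    selector = SelectKBest(mutual_info_classif, k=k).fit(X, defaulted)
    return X.loc[:, selector.get_support()]
```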

In one example of a predictive data source, Credibly has an application programming interface with the New York City Health Department through which it receives New York restaurant ratings.

“If a restaurant is downgraded, upgraded or closed, it goes into our data feed,” Rosett said. “So we know if there was a B rating because there was spoiled chicken or whatever, so that would be an example of an alternative data source that’s somewhat predictive. We don’t want to lend them money if they’re in the business of selling food and were recently cited by the health department.”

Credibly also takes in Yelp data. “We’re not looking at the ratings, we’re looking to see if there’s a management change, or they’re closed — we’re searching for key words,” Rosett said. “That’s another example of where we try to scrape a certain amount of information that would then raise a flag, which would then hit our scoring model or trigger a human underwriter to evaluate it.”
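
The keyword scan Rosett describes can be as simple as the sketch below, which routes a file to a human underwriter when scraped review text contains certain phrases; the phrase list here is purely hypothetical.

```python
# Hypothetical keyword screen over scraped review text; a hit routes the file
# to a human underwriter rather than declining it automatically.
FLAG_PHRASES = ("permanently closed", "under new management", "health violation")

def needs_human_review(review_texts: list[str]) -> bool:
    combined = " ".join(review_texts).lower()
    return any(phrase in combined for phrase in FLAG_PHRASES)
```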

Every credit decision is reviewed by an underwriter. “Some are faster than others, but there is verification, and there are checkpoints,” he said. The system produces reports on the segmentation of the loan decisions.

Bank partners in its senior credit facility are shown the underwriting models for their approval. “They have the right to review and approve any modifications we make,” Rosett said. Credibly also measures disparate impact in its algorithm, using a method that mirrors the analysis used by banks such as Citigroup, American Express and JPMorgan Chase, he said.

Regulators require that banks provide a clear reason for declining a loan. AI tools that discover patterns that indicate creditworthiness or lack thereof might take a circuitous path that does not necessarily lend itself to a crisp reason code. Vendors say they have tools to provide such reason codes, but regulators have yet to grant an official blessing to any of them.

“Regulators love to know deterministic outcomes — they like to know that when you underwrote somebody, these were the three or four factors you used to determine it,” said Michael Abbott, digital lead for Accenture Financial Services. “When you apply AI and machine learning, you can’t describe exactly the factors, and the factors may change over time, so one of the greatest potential uses will be underwriting but it will require partnership with regulators. That’s the longest pole in this tent.”
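
For a simple scorecard, reason codes can be read off the factors that pulled an applicant's score down the most, as in the sketch below. More complex machine-learning models require more elaborate attribution methods, and nothing here reflects a regulator-approved or vendor-specific approach.

```python
# Generic reason-code sketch for a linear scorecard: rank the factors that
# lowered this applicant's score the most relative to the average applicant.
def top_decline_reasons(weights: dict[str, float],
                        applicant: dict[str, float],
                        population_mean: dict[str, float],
                        n: int = 4) -> list[str]:
    contributions = {
        factor: weights[factor] * (applicant[factor] - population_mean[factor])
        for factor in weights
    }
    # Most negative contributions first.
    return sorted(contributions, key=contributions.get)[:n]
```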

Editor at Large Penny Crosman welcomes feedback at penny.crosman@sourcemedia.com.
