How mortgage fintechs are using AI for equitable decision-making

With fair lending expected to be a priority for the Biden administration, lenders may turn to artificial intelligence to ensure both regulatory compliance and equitable decision-making.

This technology has been touted as a method of removing human bias from the process. It can bring to light some of the day-to-day payment patterns of unbanked and underbanked people, information that is not available in credit repository data.

"It's the idea that not everything can be properly captured through a single document but potentially through a varied composition of data fields and documents that could ultimately lead you to a very qualified borrower," said Sipho Simela, head of mortgage strategy at mortgage fintech Ocrolus. "Not all of that is immediately transparent in a credit report for example."


Multiple sources of data, beyond a traditional credit score, could help boost approvals for Black, Indigenous and other people of color, groups that are more likely than white applicants to be denied a mortgage loan. In 2019, denial rates for conventional home loans were highest for Black applicants, at 16%, compared with 10.8% for Hispanic, 8.6% for Asian and 6.1% for white applicants, according to a report the Consumer Financial Protection Bureau released in June.

However, AI is not a panacea, especially if the underlying data contains some of the unconscious bias of previous decisions. Left unchecked, AI could end up perpetuating those disparities instead.

So in creating models that eliminate bias, firms must "make sure that when it does come to comparing Borrower A to Borrower B, is there actually a like-for-like comparison? Are they actually playing on the same playing field and if not, there needs to [be] somewhat of a modularization that needs to be built into the AI," Simela said.

Other fintechs like Capacity, which provides AI-driven help desk platforms for employees and customers, are trying to get ahead of the problem of racial bias by looking to "teach" their AI products how to make fair choices.

Capacity's primary use of machine learning is natural language understanding. Its chatbot interfaces can understand differences in phrasing, regional dialects, and other types of variation.

Other use cases are more operational. For example, the technology can flag potential anomalies or risks in an automatically processed paper form or in a user's profile. In these cases, it may not make the final decision on granting or denying a loan, but it can ensure there is a human in the loop who reviews cases that fall below a given confidence threshold. The technology's actions can also be analyzed to help users understand which specific factors drive a given decision by the model.
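As a rough illustration, that routing pattern can be a few lines of code: a prediction the model is confident about proceeds automatically, and anything below the threshold goes to a person. The threshold value and names below are hypothetical, not drawn from Capacity's product:

```python
# Hypothetical confidence-threshold routing: the value and field names
# are illustrative, not Capacity's actual parameters.
AUTO_DECISION_THRESHOLD = 0.95

def route_case(case_id: str, model_confidence: float) -> str:
    """Decide whether a processed case proceeds automatically or is
    queued for a human reviewer."""
    if model_confidence >= AUTO_DECISION_THRESHOLD:
        return f"case {case_id}: processed automatically"
    # Below the threshold, a person makes the final call.
    return f"case {case_id}: queued for human review"

print(route_case("A-1001", 0.99))  # processed automatically
print(route_case("A-1002", 0.72))  # queued for human review
```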

"My layman's definition of AI is that AI is software that learns," said CEO David Karandish. "And it's software that learns by essentially picking up patterns in data, and testing those patterns over and over and over again."


Mortgage lenders need to make certain that the initial training data fed into the AI software is representative of the customers the lender is seeking to serve, with characteristics that lead to biased results eliminated from those training steps, he said.

"If you gave all of the training data low credit scores or data points from one region of the country or data points from one gender, etc., it will start to pick up on those variables," Karandish said. "The system might be correct in the pattern you have given it [for its decisions], but the dataset doesn't represent the overall whole of your customer base."

Fintech firm Finastra is developing its own technology to combat the bias issue in originations underwriting.

The software, FinEqual, looks to spot potential bias in credit decisions. It's a component of Finastra's analytics strategy "that identifies the potential anomalies or areas of inequality and uses the data to suggest to our customers where that may exist and potentially suggest ways to combat that," said Chris Zingo, executive vice president, Americas.

FinEqual is still in the exploratory phase, as Finastra ensures the app can work as intended, Zingo added. Once that is determined, it can be brought to market quickly. The company was not able to disclose which companies are evaluating FinEqual at this time.

If, despite proper fairness training, things go sideways, errors in AI can often be identified more quickly than human error, said David Snitkof, vice president of analytics at Ocrolus. He's bullish on AI in part because the decision trail is trackable and examinable in a consistent and explainable fashion. Ocrolus uses optical character recognition technology to pull information off forms that then can be used in decision making by originators and servicers. For the latter, it helps with loss mitigation determinations.

"You can simulate the use of the model on different datasets and you measure disparate impact," Snitkof said.

"And that's where you get into this notion of responsibility of ethics when you're building an AI system of simulating the different outcomes and actually measuring the impact," he continued. "Because given how correlated different pieces of data can be in the world, you can't [just put] data into the model, you need to understand what comes out the other end and what the effects of it are going to be."

The trackable nature of these technologies will be important to lenders, considering that regulatory hawks in a Biden administration could be more aggressive in enforcing fair housing rules.

"Creating a transparency level of 'the why' that particular decisions are being made, that's good politics regardless of who is in the White House," said Karandish. "Anytime you have a human decision you have the possibility of bias. The only way to correct that bias is to examine it, to replay it and to try to understand what could or should we do differently next time."

Digitalization of the workflow is the first step, so users can see, from both a regulatory and an ethical standpoint, why they made their decision under each scenario, Karandish continued.
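In practice, that digitized trail amounts to recording the inputs, the outcome and the stated reasons alongside every decision so it can be replayed later. A minimal, hypothetical sketch in Python, with example field names invented for illustration:

```python
import json
from datetime import datetime, timezone

def log_decision(case_id, inputs, decision, reasons, audit_file="decisions.jsonl"):
    """Append one auditable record: what was known, what was decided and
    why, so the decision can be replayed and reexamined later."""
    record = {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "decision": decision,
        "reasons": reasons,
    }
    with open(audit_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical example fields, for illustration only.
log_decision(
    "A-1001",
    inputs={"monthly_income": 6200, "months_of_bank_data": 12},
    decision="approve",
    reasons=["income verified", "12 months of consistent deposits"],
)
```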

There is likely to be more government scrutiny of tech companies like Facebook, but Simela said that might not necessarily be a bad thing.

"I do think there an underbelly to technology, [but] good technology helps people. If we're able to partner with the incoming administration — we as technologists as a whole — I think the end product will be much better. With the renewed strength of the CFPB … they're going to better leverage technology," said Simela, who added the end result will be a better consumer-facing product.
