Do lending algorithms discriminate? Congress seeks answers

WASHINGTON — After years of largely standing on the sidelines, lawmakers are taking a closer look at whether the algorithms banks and fintechs use to make lending decisions could worsen discrimination rather than reduce it.

The issue was at the forefront of a hearing Wednesday by the House Financial Services Committee's newly chartered artificial intelligence task force.

“How can we be sure that AI credit underwriting models are not biased?” asked Rep. Bill Foster, D-Ill., who chairs the panel. “Who is accountable if AI algorithms are just a black box that nobody can explain when it makes a decision?”

Sens. Elizabeth Warren, D-Mass., and Doug Jones, D-Ala., also pressed the heads of the Federal Reserve, Federal Deposit Insurance Corp., Office of the Comptroller of the Currency, and Consumer Financial Protection Bureau earlier this month to ensure that the algorithms used by financial firms do not result in discriminatory lending.

They cited a University of California, Berkeley study showing that algorithmic lending created more competition and reduced the likelihood that minority borrowers would be rejected for loans. But it also found that African-American and Hispanic borrowers were charged higher interest rates than white and Asian borrowers. The senators asked whether the agencies have the resources to evaluate algorithmic lending.

Though banks and fintechs have debated the issue for several years, pressure from lawmakers now appears to be building.

“This is the next big kind of civil rights and financial services frontier,” said Brandon Barford, a policy analyst at Beacon Policy Advisors.

Ed Mills, a policy analyst at Raymond James, said that the debate around algorithmic lending and artificial intelligence mirrors the discussion around the fairness of the methods used by credit bureaus in determining consumers’ scores.

“We’ve been fighting this battle over credit bureaus and credit scores over a generation,” Mills said. This “is just the next front in that war.”

To be sure, many in the policy world appear to be conflating two different issues. Warren and Jones were primarily focused on automated lending, widely practiced by financial institutions, which relies on algorithmic models to determine whether a borrower qualifies for a loan and what he or she should pay. The results can vary widely depending on which model is used and the data fed into it.
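As a toy illustration of that variability (all weights, cutoffs and numbers here are invented), the same applicant can clear one model's cutoff and miss another's simply because the two models weight the same inputs differently:

```python
# Toy illustration (invented weights): the same applicant, scored by two
# different models, gets opposite decisions at a 600-point cutoff.
applicant = {"income": 62_000, "debt_ratio": 0.38, "score": 660}

def model_a(a):
    # Penalizes debt ratio heavily.
    return a["score"] - 400 * a["debt_ratio"] + a["income"] / 1_000

def model_b(a):
    # Rewards income more, penalizes debt ratio less.
    return a["score"] - 150 * a["debt_ratio"] + a["income"] / 500

print("model A approves:", model_a(applicant) > 600)  # 570.0 -> False
print("model B approves:", model_b(applicant) > 600)  # 727.0 -> True
```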

But some observers incorrectly equate that with true artificial intelligence-based lending, which few institutions use and which appears to be some time off. In that scenario, an AI engine is free to find whatever patterns it can that correlate with creditworthiness. The concern is that such an engine could determine that people who belong to a certain golf club, or who graduated from a certain school, are better risks than others; in such a case, those people would be predominantly white males.
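To make that proxy concern concrete, here is a minimal sketch using entirely synthetic data (the "group," "club" and income variables are invented): the model is never shown the protected attribute, yet approval rates diverge along group lines because a facially neutral feature stands in for it.

```python
# Synthetic illustration: the protected attribute ("group") is never a
# model input, but a correlated, facially neutral feature ("club") lets
# approval rates diverge along group lines anyway. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                       # hidden protected class
club = (rng.random(n) < np.where(group == 1, 0.7, 0.1)).astype(int)
income = rng.normal(50 + 10 * club, 15, n)          # club correlates with income
repaid = (income + rng.normal(0, 10, n) > 55).astype(int)

X = np.column_stack([income, club])                 # note: no "group" column
model = LogisticRegression().fit(X, repaid)
approved = model.predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.1%}")
```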

“We know that financial institutions have started to use algorithms, so one of the big questions is ... what are the consequences of using these algorithms?” said William Magnuson, a professor at Texas A&M University School of Law. “The big concern is that if we are using data to create rules for lending for investments or any other financial decision … what if that data that is used is flawed?”

Witnesses at Wednesday's hearing suggested that companies using algorithms should be required to audit them.

Auditing is what's “needed to ensure that we are not seeing these unintended consequences of racial bias,” said Nicol Turner Lee, a fellow at the Brookings Institution's Center for Technology Innovation. “I would also recommend much like I said earlier that we see developers look at how the algorithm is in compliance with some of the nondiscrimination laws prior to the development of the algorithm.”
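One simple form such an audit could take is a disparate-impact check: compare each group's approval rate to the most-favored group's, and flag ratios below the "four-fifths" threshold long used as a screening rule of thumb in U.S. employment-discrimination analysis. A minimal sketch, assuming the auditor can join loan decisions to separately held demographic data; all records here are invented.

```python
# Minimal disparate-impact audit (invented data): compare each group's
# approval rate to the most-favored group's; ratios under 0.8 echo the
# "four-fifths rule" used as a screening threshold in other contexts.
from collections import defaultdict

decisions = [                       # (group, approved) pairs
    ("white", True), ("white", True), ("white", False),
    ("black", True), ("black", False), ("black", False),
    ("hispanic", True), ("hispanic", False),
]

counts = defaultdict(lambda: [0, 0])        # group -> [approvals, total]
for grp, ok in decisions:
    counts[grp][0] += ok
    counts[grp][1] += 1

rates = {g: a / t for g, (a, t) in counts.items()}
best = max(rates.values())
for g, r in sorted(rates.items()):
    ratio = r / best
    flag = "  <-- flagged" if ratio < 0.8 else ""
    print(f"{g:9s} approval {r:.0%}   ratio {ratio:.2f}{flag}")
```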

Many companies that develop AI lending software, along with the lenders that use it, already bake visibility and auditability into the software. They can also build in controls that prevent the software from using prohibited characteristics in loan decisions. Fintechs that use AI in their lending decisions argue their outcomes are far less discriminatory than human-based lending decisions at traditional banks.

Tom Vartanian, founder and executive director of the Financial Regulation & Technology Institute of George Mason University’s Scalia Law School, said there are two different ways policymakers could approach legislation on the issue.

At one end of the spectrum, legislators could force regulators to require financial institutions to create monitoring systems that would ensure their programs "have been tested against certain standards to prevent data that might create a bias." That is similar to what regulators have already sought to do by mandating that credit decisions have "explainability" and are not a black box.
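In practice, "explainability" often means the lender can cite the main factors behind a denial, much as adverse-action notices under the Equal Credit Opportunity Act already require. Below is a minimal sketch of one common approach for a linear scoring model; the feature names and weights are invented for illustration.

```python
# Hypothetical reason-code sketch for a linear credit model: rank the
# features that pulled this applicant's score down the most, relative
# to an average applicant. Names and weights are invented.
weights = {
    "income": 0.8,            # positive weight raises the score
    "utilization": -1.2,      # revolving credit utilization
    "delinquencies": -2.0,
    "account_age": 0.5,
}
# Applicant's standardized feature values (0.0 = average applicant).
applicant = {"income": -0.5, "utilization": 1.4,
             "delinquencies": 1.0, "account_age": -0.3}

contrib = {f: weights[f] * applicant[f] for f in weights}
reasons = sorted(contrib.items(), key=lambda kv: kv[1])[:3]  # most negative

print("Top adverse-action reasons:")
for feature, c in reasons:
    print(f"  {feature}: contribution {c:+.2f}")
```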

On the other end of the spectrum, he said, some members may try to write legislation that punishes institutions whose algorithms produce discriminatory results, regardless of intent. That would likely open the door to a contentious debate over disparate impact, a legal theory under which lenders can be liable for discrimination even if it is unintended.

Lawyers may say that "if the application produces disparate results, we are going to assume that that is illegal discrimination,” Vartanian said.

Others say Congress could encourage regulators to spend more resources to fully understand how algorithmic or AI lending could lead to discriminatory outcomes.

“Do they have an AI researcher on staff?” said Chris Calabrese, vice president for policy at the Center for Democracy & Technology. “They really need to have technologists. They really need to have computer scientists.”

But as automated and AI-based lending gains ground, regulators will find themselves under pressure to do more to ensure the new technologies aren't making discrimination worse.

“The issues of algorithmic discrimination are real and would have to be grappled with,” Calabrese said. “AI tools are really complicated. The bias issues are some of the thorniest of those complications. Even with the best of intentions, you are likely going to see some sort of algorithmic discrimination and agencies are going to have to figure out how they are going to do that.”

Penny Crosman contributed to this article.
