Opinion

Regulators should support adoption of fair-lending algorithms

In 1869, the English judge Baron Bramwell rejected the idea that “because the world gets wiser as it gets older, therefore it was foolish before.” Financial regulators should adopt this same reasoning when reviewing financial institutions’ efforts to make their lending practices fairer using advanced technology like artificial intelligence and machine learning.

If regulators don’t, they risk holding back progress by incentivizing financial institutions to stick with the status quo rather than actively look for ways to make lending more inclusive.

The simple but powerful concept articulated by Bramwell underpins a central public policy pillar: Evidence that someone improved something cannot be used against them to prove prior wrongdoing. In law this is called the doctrine of "subsequent remedial measures." It incentivizes people to continually improve products, experiences and outcomes without fear that their efforts will be used against them. While lawyers typically apply the doctrine to things like sidewalk repairs, there's no reason it can't apply to efforts to make lending algorithms fairer.

The Equal Credit Opportunity Act and Regulation B require lenders to ensure that their algorithms and credit policies don't deny credit to protected groups in unfair ways. For example, a credit underwriting algorithm would be considered unfair if it recommended denying loans to protected groups at higher rates than to other groups, and those differences in approval rates did not reflect differences in credit risk. Even if they did, the algorithm might still be considered unfair if a different algorithm could achieve a similar business outcome with fewer disparities: that is, if a Less Discriminatory Alternative, or LDA, algorithm existed.

Advancements in modeling techniques, particularly advancements enabled by artificial intelligence and machine learning, have made it possible to de-bias algorithms and search for LDAs in unprecedented ways. Using AI/ML, algorithms that would recommend denying Black, Hispanic and female loan applicants at far higher rates than white men can be made to approve those groups at much more similar rates without becoming materially less accurate at predicting their likelihood of defaulting on a loan. Herein lies the rub: If a lender uses an algorithm and later finds an LDA, it might worry that acknowledging the LDA's existence will invite lawsuits from plaintiffs or enforcement actions from its regulators.

This is not a theoretical problem. I have personally seen bankers and fair-lending lawyers grapple with this issue. Lenders and lawyers who want to improve algorithmic fairness have been held back by fear that the results of advanced LDA searches will be used to show that their earlier practices did not comply with the ECOA. Similarly, lenders worry that upgrading to a new, fairer credit model amounts to an admission that the prior model violated the law. As a result, lenders have an incentive to stick with plain-vanilla fair-lending tests and LDA searches that substantiate the validity of the status quo.

It is precisely this scenario that Bramwell's reasoning was intended to prevent. Economic actors should not be incentivized to avoid progress for fear of implicating the past. To the contrary, as modern tools and technologies, including AI/ML, allow us to assess the fairness and accuracy of credit decisioning more precisely, we should be encouraging adoption of such tools. Of course, we should do so without condoning past discrimination. If a prior model was unlawfully biased, regulators should address it appropriately. But they should not use a lender's proactive adoption of a less discriminatory model to condemn the old one.

Fortunately, the solution here is simple. Financial regulators should provide guidance that they will not use the fact that a lender identified an LDA — or that an existing model was replaced with an LDA — against the lender in any fair-lending-related supervisory or enforcement actions. This acknowledgement by regulators of the 19th-century common law doctrine encouraging remediation and innovation would go a long way in encouraging lenders to constantly strive for more fairness in their lending activities. Such a position would not excuse past wrongdoing, but rather encourage improvement and advance consumer interests.

This article originally appeared in American Banker.