Editor’s Note: This post originally appeared, in slightly different form, here.
A recent article in American Banker asked, “Is it OK for lending algorithms to favor Ivy League schools?” The article begins by saying that much of the energy behind the fintech movement comes from its promise of financial inclusion. Unfortunately, financial inclusion is not really the goal of the fintech (aka alternative lender) movement; making money is.
And what is the fintech plan for lending? It’s pretty simple, actually. Here is the recipe: take the inclusive element (credit risk data) out of the mix. Instead, go find potential borrowers who can borrow big money (and pay it back), and tell your algorithms that where borrowers went to school (ah, yes, especially the Ivy League schools) is key to propensity to pay and makes these borrowers good credit risks. Is this discrimination? You bet.
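The “recipe” above can be sketched as a toy scoring function in which a school-pedigree indicator is weighted so heavily that it swamps traditional credit risk inputs. Every weight, feature name, and threshold here is a hypothetical illustration, not any actual lender’s model:

```python
# Toy illustration of the "recipe": a propensity-to-pay score in which
# an Ivy League indicator dominates the traditional credit risk inputs.
# All weights and feature names are hypothetical.

IVY_LEAGUE = {"Harvard", "Yale", "Princeton", "Columbia",
              "Brown", "Cornell", "Dartmouth", "Penn"}

def fintech_score(credit_score: int, income: float, school: str) -> float:
    """Return a propensity-to-pay score; higher means approve."""
    # Traditional signals carry at most 0.5 of the score...
    base = 0.4 * (credit_score / 850) + 0.1 * min(income / 200_000, 1.0)
    # ...while the pedigree feature alone is worth 0.5.
    pedigree = 0.5 if school in IVY_LEAGUE else 0.0
    return base + pedigree

# A borrower with a modest bureau score but an Ivy degree outranks a
# stronger traditional credit risk who attended a state school.
print(fintech_score(640, 90_000, "Princeton")
      > fintech_score(780, 90_000, "Ohio State"))  # True
```

The point of the sketch is that once a single demographic proxy carries that much weight, the model’s output is driven by who the borrower is rather than how they have handled credit.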
It is discrimination against the financially excluded, big time. The traditional lenders that serve the underbanked, and that need these wealthier consumers to widen their risk pool, instead lose these good-performing loans to alternative lenders. That outcome makes it unlikely that traditional lenders will extend credit to marginal or unscorable borrowers. Not that regulated lenders have done a great job at financial inclusion (they haven’t), but the fault isn’t entirely theirs.
At least a good part of the blame has to go to the regulators who, post-Dodd-Frank, have very often overridden bank inclusion initiatives with safety and soundness concerns. So what many traditional lenders have been doing is using their own alternative data and algorithms to approve loans that traditional credit scoring has declined; in other words, they search for ways to include unscorables. Indeed, Aite Group research among traditional lenders shows that using trended and alternative payments data to shore up traditional credit data has improved both inclusion rates and portfolio performance. This is a win for lenders and for consumers who have been unable to secure credit at reasonable rates despite pristine records of recurring bill payments (e.g., telco bills). Unlike the data fintech companies use, the alternative data that incumbents crunch into their underwriting models is transparent: if a consumer is denied credit, he will know where the information came from and where to go to correct inaccurate data.
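The incumbents’ approach described above, falling back on recurring bill-payment history when a borrower has no usable bureau file, can be sketched roughly as follows. The field names and thresholds are hypothetical illustrations:

```python
# Sketch of the incumbents' inclusion approach: when a traditional
# bureau score is unavailable (an "unscorable" borrower), fall back on
# alternative data such as recurring telco/utility bill payments.
# Field names and cutoffs are hypothetical.

from typing import Optional

def underwrite(bureau_score: Optional[int],
               on_time_bill_payments: int,
               total_bill_payments: int) -> str:
    # Scorable borrower: use the traditional bureau score as before.
    if bureau_score is not None:
        return "approve" if bureau_score >= 680 else "decline"
    # Unscorable borrower: use a transparent alternative-data signal
    # that the applicant can inspect and correct if it is inaccurate.
    if total_bill_payments == 0:
        return "decline"  # no history of any kind to underwrite on
    on_time_rate = on_time_bill_payments / total_bill_payments
    return "approve" if on_time_rate >= 0.95 else "decline"

# A thin-file borrower with a pristine bill-payment record gets in.
print(underwrite(None, 35, 36))  # approve
```

The design point is that the alternative data supplements, rather than replaces, the traditional credit file, which is why the Aite Group finding of better inclusion alongside better portfolio performance is plausible.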
It is long past time for a regulatory agency to look into these fintech credit algorithms; in fact, the Office of the Comptroller of the Currency should probably have done so before deciding to award bank charters to fintech companies.
The other key justification for fintech credit practices is the claim that traditional lenders do not have “machine learning” like the startups do, so the incumbents’ credit decisions are made by individual loan officers and are more likely to be discriminatory. This, too, is a myth. Years ago, rule- and case-based reasoning decision engines automated consumer lenders’ decision-making processes.
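A rule-based decision engine of the kind that has automated consumer lending for years can be sketched in a few lines. All of the rules, field names, and thresholds below are hypothetical illustrations of the pattern, not any lender’s actual policy:

```python
# Minimal sketch of a rule-based credit decision engine: ordered rules
# are evaluated top-down and the first matching rule wins. Anything
# the rules cannot settle is referred to a human underwriter, which is
# the "exception" path many cautious lenders still over-use.
# All rules and cutoffs are hypothetical.

from dataclasses import dataclass

@dataclass
class Application:
    credit_score: int    # traditional bureau score
    dti_ratio: float     # debt-to-income ratio
    months_on_file: int  # length of credit history

def decide(app: Application) -> str:
    """Apply ordered underwriting rules; first match wins."""
    if app.credit_score >= 720 and app.dti_ratio <= 0.36:
        return "approve"
    if app.credit_score < 580:
        return "decline"
    if app.months_on_file < 6:
        return "refer"   # thin file: route to a human underwriter
    if app.dti_ratio > 0.50:
        return "decline"
    return "refer"       # everything else goes to manual review

print(decide(Application(credit_score=740, dti_ratio=0.30,
                         months_on_file=48)))  # approve
```

Note that nothing here requires a loan officer’s judgment for the clear approve and decline cases; the human-review “refer” bucket is a policy choice, not a technical limitation, which is the point the myth gets wrong.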
For the many overly cautious traditional lenders that have long reverted to humans to manage exceptions, this fintech regulatory review could serve as the call to action needed to justify updating their policies and procedures, at a time when many are anticipating a lending turnaround and an uptick in loan application volume. If done right, traditional lenders could finally automate more than 35% of credit approvals.