Ally Bank's foray into generative AI: 'We don't want to stand still'

"[Generative AI] has the potential to unleash productivity for us," said Sathish Muthukrishnan, the chief information, data and digital officer at Ally Bank (pictured at Ally's Technology Partner Awards).

Ally Financial dove headfirst into generative artificial intelligence after ChatGPT made its splash at the end of 2022.

The Detroit bank formed a working group around generative AI in early 2023. It met with both Microsoft and Amazon in Seattle in February and hashed out a contract with Microsoft to use its enterprise-grade generative AI software in April. The team started building Ally.ai, a proprietary cloud-based platform that developers will use for AI-related projects, in June, and launched a pilot for its first use case at the end of that month. The pilot moved to production on July 31. 

"We do not want to stand still," said Sathish Muthukrishnan, the chief information, data and digital officer at Ally Bank.

Ally.ai is a bridge between external large language models (OpenAI's GPT-3.5, delivered through Microsoft, right now; perhaps other models in the future) and Ally's internal applications and data, layered with the bank's data security protections and, for now, human review. Ally's early work demonstrates how a $197 billion-asset bank is handling risks such as hallucinations while protecting customer information. It's also showing promise, with high approval ratings from the contact center agents who are part of Ally's first use case.
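Ally hasn't published Ally.ai's internals, but the "bridge" it describes resembles a common gateway pattern: internal applications never call the external model directly, and every request passes through the bank's security controls before and after the model sees it. A minimal sketch of that pattern, with every name and the redaction rule invented for illustration:

```python
# Hypothetical sketch of the gateway pattern the article describes;
# Ally.ai's actual design is not public, and all names here are invented.
import re

def scrub_pii(text: str) -> str:
    """Stand-in for the security layer: strip customer identifiers
    (here, just SSN-shaped strings) before text leaves the bank."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)

def call_external_llm(prompt: str) -> str:
    """Placeholder for the dedicated external model (GPT-3.5, per the
    article), reached over an authenticated, secure pipeline."""
    return f"<model output for: {prompt!r}>"

def handle_request(raw_text: str) -> dict:
    safe_text = scrub_pii(raw_text)       # security controls first
    draft = call_external_llm(safe_text)  # then the external model
    return {"draft": draft, "needs_human_review": True}  # human in the loop

print(handle_request("Customer 123-45-6789 asked about CD rates."))
```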


Ian Watson, head of risk at Celent, finds that banks are generally doing three things with generative AI right now.

One is cleaning up their data foundations and pulling bank data out of its silos. Another is choosing which large language models they want to use, from big names such as Microsoft and Google to smaller open-source models that Watson says offer much of the functionality of the big ones but can be trained on less data. The third is experimenting with use cases.

"That is the part that most captures the imagination and paints a picture of what is possible," he said. "It's where a lot of people are focused in terms of getting funding for this or to show they are on the cutting edge. But the bulk of the investment is going into data foundations."

Ally's first use case focuses on the contact center. Normally, contact center agents take notes while speaking with a customer and summarize the contents of the call when it's over, a record required for regulatory reasons as well as for good customer service. In June and July, Ally piloted a system in which AI transcribes the conversation in real time on the Ally.ai platform and generates a summary of the call. One goal is to relieve agents of multitasking and let them be more present in the conversation.
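Ally's prompts and plumbing are proprietary, but the transcribe-then-summarize step maps onto a standard chat-completion call. A sketch using OpenAI's public Python client, where the model name and the summarization prompt are assumptions rather than Ally's actual configuration:

```python
# Illustrative only: Ally.ai's prompts and integration are not public.
# Assumes the `openai` package (v1+) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

def summarize_call(transcript: str) -> str:
    """Turn a call transcript into a draft summary that a human
    agent reviews and approves before it is saved."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # article names GPT-3.5; exact model is assumed
        messages=[
            {"role": "system",
             "content": "Summarize this customer-service call in 3-4 "
                        "sentences. Note the reason for the call, any "
                        "account actions taken and follow-ups promised."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```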

"This has the potential to unleash productivity for us," said Muthukrishnan.

For now, agents manually review each summary to ensure it is accurate. The system is showing promise so far: when the pilot began in late June, the rate at which agents approved their summaries with no changes was in the low teens. By the time the pilot wrapped up at the end of July, the approval rate had reached 78%. The system is now fully deployed to more than 700 agents.

Human intervention is still important as Ally refines its models. It's also one of three principles that Ally adopted before using generative AI. The others are to learn and test on internal customers (employees) before deploying to external customers, and to keep personally identifiable information strictly within Ally firewalls.

These precautions are vital.

Celent groups the risks surrounding generative AI into two buckets. One is adverse outcomes, such as bias, hallucination and false output. The other is external threats, such as regulatory violations and cyberattacks.

"There is a real danger of the models developing complete falsehoods," said Watson.

The team at Ally observed hallucinations when calls ran less than a minute or the line was fuzzy, and had to refine its prompts to keep that from recurring. Other security measures include a secure pipeline between the bank and Microsoft and a dedicated GPT-3.5 model. Ally does not let PII leave its firewalls or let the foundational models learn from Ally data, and its model will "forget" personal data after a session with a customer associate is over. The team runs ongoing tests and evaluations to guard against model "drift" and creeping bias.
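The article doesn't say how the "forgetting" works; one plausible mechanism, sketched purely as an assumption, is holding each call's context in per-session memory and discarding it the moment the interaction ends:

```python
# Toy sketch of per-session memory discarded when a call ends.
# How Ally actually implements "forgetting" is not public.

class CallSession:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self._transcript: list[str] = []  # held in memory only, never persisted

    def add_utterance(self, text: str) -> None:
        self._transcript.append(text)

    def close(self) -> str:
        """Produce the draft summary, then drop everything the model saw."""
        summary = " ".join(self._transcript)[:200]  # stand-in for the LLM call
        self._transcript.clear()                    # personal data is "forgotten"
        return summary
```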

Despite the risks of generative AI, "the most obvious risk is actually not using it," said Watson. "We think it's going to change business models and the competitive playing field," from reducing drudge work and employee turnover to customizing marketing materials.

Ally is evaluating other potential use cases, such as writing user stories for software features and answering basic questions about human resources benefits.

Further down the road, it could develop use cases for customers.

"This will give us the ability to truly understand customer needs and wants, and to personalize experiences that fit their financial needs at the right time," said Muthukrishnan.

The debut of Ally.ai dovetails with the bank's transition to the cloud. More than two-thirds of Ally's applications are now cloud-enabled.

"AI by itself requires a massive amount of compute," or processing power, memory and storage, said Muthukrishnan. "If you want horizontal scaling and infrastructure on demand, you want your applications running on the cloud."
