Track 1 – Deep Fakes & Fraud: How AI Is Both Problem and Solution

TRACK SPONSORED BY:  Black Knight

AI deep fakes in pop culture go viral, but how do we ensure this fraud doesn't infect mortgage underwriting? We're only just starting to contend with the many ways AI can fool underwriters. What are the top ways AI is enabling mortgage fraud? What are the solutions? What's the role of GSEs and regulators? And how do lenders balance productivity lifts with fraud risks? Top lender ops, legal, and tech experts answer all these critical questions.

Transcription:

Laura Hunter (00:07):

Hi everyone. Welcome to our final session of the Real World AI Use Cases track. For those who are just joining us, I am not Sandra Madigan. I'm Laura Hunter. I'm the SVP of our product strategy team at Black Knight, which is now ICE. We're proud to sponsor this session, and it's my pleasure to introduce our panelists for Deepfakes and Fraud: How AI Is Both the Problem and the Solution. I think my panelists are in the back if you guys want to walk up. We have Chris Bixby, Managing Director of Venture Capital Strategies, Rice Park Capital Management; Sergey Dyakin, Chief Information Officer at Celink; Kevin Peranio, Chief Lending Officer, Paramount Residential Mortgage Group; and last but not least, Andrew Martinez, a reporter for National Mortgage News.

Andrew Martinez (01:06):

Thank you, everyone, for joining us today. Today's panel, as the title mentioned, is deepfakes and fraud: AI is both a problem and a solution. We'll be talking about what they are, we'll be diving into the threats, maybe even some solutions AI can provide. We'll see about that. But to dive in, maybe I'll just start with a definition of deepfake. So Chris, I'll ask you if you can provide... or let's go with Kevin, actually.

Kevin Peranio (01:40):

Let's do it. Yeah, so deepfake, not to be confused with putting you into a deep trance here. This is AI-generated fraud, and it comes in a couple of different forms. It's getting better; generative AI is getting better out there. It could be voice mimicking, which we see a lot with phone spoofing, and as the technology gets stronger, even video rendering, just faking a video of you. Sad for me, I've got a lot of content out there, so I don't know if anyone wants to fake me for any reason, but I'm actually a little worried about this with our lending operation at Paramount Residential Mortgage Group.

Andrew Martinez (02:25):

Yeah. Sergey and then Chris, would you guys agree with that description of what it is?

Sergey Dyakin (02:29):

It's a good description. I want to highlight that this is not only video, which is what we would usually tend to think about; it's also audio, and obviously pictures. And to your point, this probably started to pick up more substantially around 2014, when better technology appeared to create this kind of video, and it became more and more popular, to the extent that it's now a focus of law enforcement agencies to look into the issues that deepfakes create.

Chris Bixby (03:03):

Yeah. So my name's Chris Bixby. I lead the venture capital strategy at Rice Park, so we look at a lot of different tech companies within the ecosystem. In preparation for this panel, I'll be very honest, I think we were like, where do deepfakes really fit into mortgage and mortgage fraud? And through a lot of conversations, starting with Kevin and Sergey, I think we all scratched our heads. Does it fit in? That was the first piece of it: where does this actually fit in? What are the actual trigger points for something like a deepfake, which we all talk about, we hear about, we see on social media? There's legislation that will likely come out, in terms of the political environment, that will be regulating deepfakes. And I think we're still scratching our heads.

(03:41)

Where is that applicable in the mortgage process? The one thing, and Kevin just alluded to it, is that in all these conversations I've had with originators and top 10 IMBs and top 10 banks, at the end of the conversation, all of them were concerned about it. At the beginning they were like, I don't really see it, it doesn't really happen. Fraud is fraud. We have wire fraud, we have different types of fraud. But by the end of the conversation, we started asking where the different points are that this could come into play. There's this scratch-your-head moment of, do we actually have the right things in place to address this issue? And I think that's where this panel came together, talking to Andrew and the rest of Digital Mortgage: there are not a lot of solutions right now today. And there are probably very, very few data points to say that deepfakes are happening within the mortgage origination process, the secondary process, the servicing process. But I think we're here because it could be an issue, and it's probably worth having more of a dialogue about.

Andrew Martinez (04:36):

No, good stuff. A lot to get into in that topic. I want to keep establishing what a deepfake is, because as Chris mentioned, on social media you might've seen Harry Potter in the style of Wes Anderson and so forth, a bunch of funny examples. There are also some not-so-funny examples. But Kevin, you mentioned earlier an actual fraud example, and I was wondering if you could talk about that.

Kevin Peranio (04:59):

Sure. It came up in our discussion, and I guess I get to steal everyone's thunder and use the example of a three-and-a-half-million-dollar wire fraud. We're a lender, we're privately owned, there are four of us that own it. And our CEO is your typical hard-charging, ROI-driven CEO. He's a great operator. And if he called our CFO and said, hey man, I need you to send a wire out... well, that was a piece of deepfake fraud that happened. It was an actual instance. So a CEO's voice got faked. They called the CFO and said, I need you to wire three and a half million dollars. And the CFO was like, okay, yeah man, sure. Old school: CEO, I've been working with you for umpteen years, I'm going to do what you tell me to do. No multi-factor authentication, no back channels, no process, no person calling back on the other end. They just wired three and a half million dollars, boom, gone just like that.

(05:54)

That'll sting a little bit. And so that's a real-life instance. So having these conversations... I mean, when this whole topic first came up, I was kind of like, what the hell do I know about deepfakes? And so we started digging in and researching. We have a lot of fraud measures in place, a lot of quality control. We have, like Chris mentioned, a lot of wire protection: fake emails being spoofed, some Somali pirate trying to pretend they're the borrower, saying email my stuff over here, and then next thing you know, the wire for the closing costs for $40,000 is gone. That's happening today a lot less because of some great practices that we put into place. So what's the next evolution of fraud? Fraudsters are always thinking of how to game the system, so they can use AI out there and fake someone's voice, fake a cell phone number. So I just thought it was fascinating that we're having this conversation, because it raises awareness of something that's coming around the corner.

Andrew Martinez (06:53):

Definitely. And I want to go around the panel here, but Kevin, I'm curious, how convincing was that deepfake example? I'm not sure if you heard the audio, but I'm wondering how convincing these things can get, if it fools somebody like that.

Kevin Peranio (07:05):

Three and a half million dollars is convincing for sure.

Andrew Martinez (07:09):

Sergey and Chris, just curious your thoughts on how convincing these things can get?

Sergey Dyakin (07:12):

Yeah, if you look at some of the research that has been done to try to detect deepfakes, and at the examples there, they're very impressive, right? There are libraries out there that put out both the deepfake and the actual videos. You can look at them and say, yes, this is really good; this has been produced with a high degree of believability.

Chris Bixby (07:39):

We've all been fooled, right? Anyone on social media, you've been fooled at some point. There's actually also technology to catch these deepfakes. As individuals, we're not always noticing it; we're seeing it on social media. Again, there's the political protection piece, and I think we're going to see this more and more, whether it's mortgage or other forms of fraud within the process. As individuals, maybe we could say we spotted it, maybe we could say we didn't. The technology is going to move fast enough, and there are going to be enough data points for the technology to get good enough, that we're not going to be able to identify it with the naked eye or ear. And so there are technologies and tools out there doing different types of screening, like biometric screening, artifact testing, looking for pauses in voice, comparing it with actual data.

(08:26)

I think, again, the recognition of this panel and this conversation was: I don't know any lender that I've talked to that's using it, and I actually don't know any technology vendors. If there's a technology vendor in the room that is doing this, please raise your hand. But I haven't heard of technology vendors that are actually going to this level of detail, in terms of that type of screening, to determine if there is a deepfake or if there isn't. There's certainly, again, wire fraud process; there's behavioral analysis; there's a whole bunch of different data that you can work through on the machine learning, automation, and AI side. But in terms of actually identifying true deepfakes in the process, I think the application is very limited. And that's probably a little bit because we're still relying on underwriters to do their process; we're relying on double verifications at various steps. Our job here is to build a thesis, and in the industry people don't think it's a problem yet. And so until we start seeing it, and data starts coming in and being collected, maybe we won't have that technology roll out. But I'm sure when it happens, everyone's going to be in the room saying, holy cow, that was real. And the last thing I'll say about this: it's not going to be one loan. If a fraudster figures out how to do something, either through the origination process, a repayment schedule, rerouting wires, it's not going to be one. This technology is so good and so fast, it's going to be hundreds. And so it's going to be a meaningful impact from a lender perspective, and for others in the ecosystem.
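
To make the voice screening Chris mentions concrete, here is a minimal sketch in Python of one artifact test: measuring how uniform the silent gaps between speech bursts are, since synthetic speech often paces itself unnaturally evenly. The energy gate and every threshold here are illustrative assumptions, not any vendor's actual method.

```python
# One flavor of audio "artifact testing": synthetic voices often pace
# themselves unnaturally evenly, so measure how variable the silent gaps
# between speech bursts are. The energy gate and thresholds are illustrative.
import numpy as np

def pause_variability(samples: np.ndarray, rate: int, frame_ms: int = 20):
    frame = int(rate * frame_ms / 1000)
    n = len(samples) // frame
    energy = (samples[: n * frame].reshape(n, frame) ** 2).mean(axis=1)
    speech = energy > 0.1 * energy.max()   # crude voice-activity gate
    pauses, run = [], 0                    # silent-run lengths, in frames
    for is_speech in speech:
        if not is_speech:
            run += 1
        elif run:
            pauses.append(run)
            run = 0
    if len(pauses) < 2:
        return None
    return float(np.std(pauses) / np.mean(pauses))  # low => robotic pacing

# A very low coefficient of variation is one weak signal to combine with
# others (video artifacts, metadata, callbacks), never a verdict on its own.
```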

Kevin Peranio (10:02):

I think the heightened sense of get-your-guard-up is probably the best takeaway from this conversation. I came up through sales in the industry, since '01, and now into ownership and executive management. But when you're in sales, there's a little bit of conversational acceptance of, all right, that's not exactly totally true. Like catfishing: we've all seen the business cards, that realtor or originator, and then you see them in person and you're like, you don't look like your business card. And we just accept it. I mean, shoot, I probably have some bio photo up here from back when I had red hair. But it's accepted; people are trying to put on airs, trying to look a little bit different or maybe a little bit better. And so people want to believe, especially in sales. So where's your blind spot? Where's your weak point? For us, when volume is down, fraud picks up. So we've seen an uptick in fraud, and that's just flat out how it is; people get desperate out there. And if your source of origination is the sales front, I mean, I don't have oversight on every text message, every email. I want to, I try to. We're not a bank, but when there's borrower information, we do. But there's still stuff that we don't have complete oversight on, and that's where all the loose ends are. We do penetration tests for all of our cybersecurity policies every year, and we're constantly looking at where our biggest gaps and holes are. And it's BYOD, bring your own device. If we hire a team of originators that come in with their own laptops on day one, we're not like, okay, hang on, you have to buy our laptops, here's a 10K bill for all your new stuff. No. Okay, maybe they'll let us in there and do some encryption, and maybe if something

(11:58)

Goes wrong, we can turn your thing into a brick. But those are still areas of weakness that we're always trying to shore up. And so now I've got to worry about a phone call, a voice that's being faked. I don't know how you fake biometrics, but I'm sure it's happening out there.

Andrew Martinez (12:17):

Actually, that's a good point. I did want to ask about how these deepfakes are made. I know none of us are making deepfakes, but I'm wondering: is it just the stereotypical hacker in the basement with the hoodie? I think we've all kind of gone past that example. But I'm wondering how complex these things are to make. Sergey, you did mention something that started coming up when AI started getting more advanced. I'm curious if you could talk about how these things are actually made?

Sergey Dyakin (12:44):

Absolutely, right. At one point the technology was the domain of maybe researchers or, like you said, the very advanced hackers, because it required a substantial amount of understanding of machine learning and the technology that lies behind it. But things continued to improve, up to the point where apps started to appear that actually allow you to change your voice, to change your appearance. The improvement I mentioned was the arrival of GANs, or generative adversarial networks, which allow you to create two kinds of AI mechanisms: one that generates the deepfakes, and another that looks at them and says, are they deepfake or are they real? And they compete with each other until finally the discerning mechanism says, yep, I agree, that's real. That's how they became very, very good. And as images became more prevalent and the technology got cheaper and cheaper, you could suddenly train more and more, to produce better and better fakes for videos, and audio for that matter. So that's how we got to this point: the prevalence of more available images, and the ability of the technology to catch up and develop further. That's what caused deepfakes to become so advanced.
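
Sergey's generator-versus-discriminator description maps directly to a short training loop. Below is a minimal sketch, assuming PyTorch, with toy 64-dimensional vectors standing in for images; the tiny networks and all hyperparameters are illustrative, not how production deepfake models are built.

```python
# Minimal GAN training loop: the generator/discriminator competition
# Sergey describes. Toy 64-dimensional vectors stand in for images.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: noise -> fake sample. Discriminator: sample -> real/fake logit.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch():
    # Stand-in for a batch of real images/audio features.
    return torch.randn(batch, data_dim) + 2.0

for step in range(1000):
    # 1) Discriminator step: label real samples 1, generated samples 0.
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_batch()), torch.ones(batch, 1)) + \
             bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: try to make the discriminator say "real" (1).
    fake = G(torch.randn(batch, latent_dim))
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The "discerning mechanism" Sergey mentions is the discriminator; as both networks improve, the generator's output becomes hard to distinguish from real data, which is exactly why detection keeps getting harder.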

Andrew Martinez (14:15):

And Chris, is that kind of what you've seen in terms of the timeline of when these things started popping up, and how prevalent they are now?

Chris Bixby (14:21):

Yeah, a little bit. What I'll say is there's this confluence of factors. One thing is obviously ChatGPT and everything they've done in terms of building out natural language models and image generation and identification, et cetera. But there's also the fact that we've consolidated a lot of data, whether we're individuals or banks. Data security systems have gotten better, but it's all consolidated now; social media and everything is available. And then there's the fact that the mortgage process, given we're talking about mortgage, has continued this evolution of digitalization. Now, yes, there are still hurdles there. There's a lot of manual process; that's why we're here talking about this. But overall, as an industry, it's moved toward digitalization. So you have these three elements happening, and then across all of it you have this acceleration of deepfake technology, ChatGPT, and all the other layers underneath. What I'll say is deepfakes are one element of it, and it's easy: there are websites you go sign up on and you can create them yourself. You can create your own persona, add an image, one or two photos, and it can be created. So it's actually a very low bar to create these yourself. But then, deepfakes are just one element of this process. You have spoofing and scamming and phishing and SMS attacks, and all these other elements are also now happening because of generative AI and the ability to very quickly use natural language to create content, and then even try to get into security systems, et cetera. So all these things are happening. Deepfakes are one; they're going to continue to get better very, very quickly. And so I think we should just assume they're going to be there, they're going to be really, really good, and in a lot of cases we have to assume this could be happening.

Andrew Martinez (16:16):

Yeah, definitely. We touched on security, a big portion of this discussion. I'm curious where deepfakes really can impact the loan origination process, at what steps exactly. And Kevin, I was interested if you could maybe talk about where you see it being a threat in terms of the borrower. Just curious your thoughts.

Kevin Peranio (16:38):

Sure. Well, you try and think, where is there video in the process? And there's not a lot of it. And from what I've seen with Dream Studio, it's like everyone still has six or seven fingers. So the videos aren't that good yet on the AI deepfakes, but

(16:51)

There are remote online notarizations and closings. And some of the tech I've seen out there to provide good security: you have your borrower doing the closing live in the Zoom or Teams or whatever the video is, and while they're on there showing their ID, they have to answer questions. What was the first car you owned? What's your mother's maiden name? You have to answer 10 questions while you're also live. So there are some good checks and balances out there in that part of the process. But I still see wire fraud as a problem. With wire fraud, some of these title companies use Gmail. Anytime I get a Gmail from anyone, especially with an attachment, I'm like, I'm out, delete. Go be a little more professional and try and sell your wares in a different way.

(17:44)

But that's where borrowers answer it. They don't care, they're going to answer it anyway; they don't know whose corporate email is what. So there's a lot of care needed there in the communication chain, especially if you're dealing with PII, personally identifiable information. You've got to make sure that's on a secure email, some kind of secure system, and that you've got encryption on the devices that are collecting those kinds of documents. What's your source of truth? Go straight to the IRS for IRS data, straight to the employer for employer data, straight to the bank. So you feel like there's some level of integrity with the data, and then you pull it into your data warehouse, a single source of truth to talk to all your systems. So where can a deepfake get into that? It's typically the communication chain. And where will the money be raided from? Well, that wire example was one. Where does money get exchanged throughout the process? You collect a credit card from a consumer for the appraisal. You collect a credit card in some instances for credit, whether you're doing a rapid rescore or anything like that. So there's a credit card collection in that process, and you've got to make sure you secure that, because that card could get taken. Okay, maybe the credit card company or the bank will eat the loss if it gets stolen and something gets charged; it's always like a thousand bucks at Walmart and then something online, just like that, in a heartbeat. It's always the same places. But that is an area where the communication chain could get spoofed, could get deepfaked, could get hit with AI fraud, and there's a loss there. I will just add one more thing on wire losses: if you don't catch it within the first 48 hours, it's pretty much gone. Our banking system is so archaic, don't get me started on securitization and all that, but with the wiring system, because it is so slow and archaic, you do have about 48 hours to catch wire fraud if one goes out. Speaking from experience: we caught them all, no losses on our end, and then beefed up our systems after getting hit about two or three times, about four-ish years ago. That's where I still see a lot of the weak spots and blind spots for deepfakes or AI fraud.

Andrew Martinez (20:09):

Yeah. Sergey and Chris, would you guys agree with some of those examples, or where are you seeing it?

Sergey Dyakin (20:15):

I would add to this, since I'm on the servicing side, and we actually are the largest servicer of reverse mortgages. For those that don't know much about reverse, it's a product for seniors 62 years and older that allows them to tap into the equity in their homes. It's almost like a HELOC, even though obviously the terms are different, et cetera, but like a HELOC in the sense that they can draw on that line of credit, or in this case on their mortgage. So to the earlier point about the fraud happening where the money changes hands: in servicing, you can imagine somebody calling the call center and attempting to redirect the funds to a different bank account. We have actually experienced attempted fraud where somebody interjects themselves between, say, the title company and the servicer. In many cases it's not that sophisticated, and so we're able to catch it. But I can imagine situations where somebody calls the call center and attempts to change the ACH and move the funds elsewhere, and that's where the voice side comes into play: voice fraud, pretending that you are an 82-year-old person when actually it's a much younger person calling in.

Chris Bixby (21:50):

And I'll just say two things. One is, as we looked at this, those are the two examples. And then there are two other examples we maybe haven't fully touched on. One is fake deeds: going in and re-registering deeds, representing someone else, and then doing something like a cash-out refi. And the second one was around HELOCs and second liens, in the sense that there are typically somewhat lower underwriting standards and not necessarily the same full verification process; and then the other piece of that is the payoff, whether it goes to student loans and debt consolidation, et cetera. So those are probably the other two areas that came up in the conversations we were having with tech companies as well as some large originators and second-lien originators. The only other thing I'll say is we're kind of addressing deepfakes in the title of this panel, but if you just swap out deepfakes and put in generative AI, that's where a lot of this is coming from.

(22:49)

The ability to rapidly create content, and to create the elements that allow for things like deepfakes, allows for fraud. The fraud's not new; the same ideas have been happening for decades. But the evolution in the scale and the speed at which it can happen, because of these new technology tools that have scaled out there, has been just remarkable, I think. And that's where I look at this and have the conversation, and, this is my sense, I feel people should be a little bit more worried than they are, and should spend some time thinking about it.

Andrew Martinez (23:27):

For sure. And I'm going to go back to you, Kevin, with the example that you described earlier, but I'm curious if there's any estimate as to how often deepfakes are being used in fraud attempts in the mortgage space. I figure there's probably not an official count, but I'm curious if there are any guesstimates.

Kevin Peranio (23:46):

It's on the rise, and especially now, just fraud in general. Lenders don't typically like to talk about it because it's embarrassing, but fraud is on the rise and we all deal with it all the time. It happens more than you think, so let's just say that. As far as real losses, the big losses always make the news, so everyone finds out about them eventually. Ransomware, I mean, how many times have we read stories about ransomware and viral attacks on your software, shutting down your website or whatever, unless you pay us in crypto or whatever the case may be? So it does happen more than you think. I don't want to be an alarmist, but we should bring some alarm. We should be thinking about this and talking about it, talking about fraud. And again, the Fed says we're going to keep rates higher for longer. I think they're just going to keep them high for longer; I don't know if they're going to go much higher, if at all. But the longer this goes, the closer we get to maybe a slower economy or even a recession, and more fraud occurs. And AI is all the buzz, and there are so many free tools out there. Just for my own education, back in February when ChatGPT became all the rage: it's a large language model. It's different than, say, image rendering or voice spoofing. So there are a bunch of different forms out there, and there are a lot of AI companies and software out there gearing up to be practical and help us with task automation, help us do things, be connected to voice, have a voice prompt on your customer service line or in your help bot. So can that system get hacked into, to take some task and feed some information to a fraudster? Potentially, yeah. So every time you add RPA or AI or some new tech vendor, we have this whole vendor management process. I swear to God, I think we're a bank.

(25:47)

It's like, do you have your SOC 2 certification and do you have all this stuff? We put them through it, and then we've got cybersecurity policies. We do pen tests with RSM, and we're looking at CrowdStrike and Pantera and Arctic Wolf, and we've got these guys. I mean, we have all this stuff. We spend like $11 million a year on tech, even if we fund zero loans. So we're taking it seriously, and it's still not enough, because the tech just keeps getting better. So welcome to our challenge out there. Any other fellow lenders passionate about it?

Chris Bixby (26:19):

Well, I was just going to say one thing: I come from the world of payments, back when I was in an operating company. Now I'm not in an operating company, so I can be more theoretical about ideas. But in payments, fraud happens; it's like 2% of payments result in fraud. It's high frequency and there's a lot of data. The unique part, as you think about loans and the variety of IMBs, banks, and credit unions: with the exception of the really large lenders, there's not enough data to run the models, to start to look at all the different behaviors and patterns, to say whether this is a fraudulent transaction or not. Again, payments is easy; there are dozens of fraud vendors out there. Typically the retailer is the one that has to take responsibility for it, and because of that, they put in all these different fraud mechanisms. But again, they all work through machine learning: training the model, having enough inputs to say what is actually fraud and what is not. You put in enough inputs, you get enough outputs to be able to identify it. The problem that I have seen in these conversations is there are not necessarily enough data points within an individual lender to say what's going to be fraudulent

(27:29)

What's not going to be fraudulent? So one of the prompt questions was always: what is the role of the GSEs? What is the role of the agencies and other parties? I don't exactly know if they're going to play that role, or if a fraud provider, an anti-fraud provider hopefully, will play that role. But there's going to have to be something that is aggregating the data to start to look at patterns and behaviors through the process, and we will see where that comes from, and who wants to identify this both as an issue and then develop some sort of solution to share out more broadly across banks and IMBs.

Andrew Martinez (28:07):

Yeah, and Sergey, I was curious too about the servicing aspect of this. Is there any estimate as to the portion of fraud occurring in that sector that's coming from deepfakes? Is there anything close to an estimate?

Sergey Dyakin (28:20):

I haven't seen anything on deepfakes specifically. The number I've seen for mortgage-related fraud overall was in the FBI report from last year; I think it was estimated around four hundred million. That's overall. But for deepfakes, it's actually a very good point: because the data is so scarce, it's very difficult to estimate what the true number is. And as we mentioned earlier, the deepfake, the synthetic identities, the synthetic content creation, may be only part of the process. So can you actually pinpoint that this was deepfake fraud? Instead, you might say, well, this is wire fraud, and maybe there's an element of it that was actually done using a deepfake or some other synthetic creation. So specifically pinpointing and attributing it to this as the main factor might be challenging.

Kevin Peranio (29:12):

I think it's going to go largely under-reported, right? I mean, if you weren't in the room with me, you wouldn't know that we had two wire fraud attacks in the range of three, four hundred thousand dollars, and we recovered both. So unless it rises to the level of a claim against my cybersecurity policy or my insurance policy or E&O or whatever, it's not going to get reported anywhere. The analogy is, no one's going to talk about their side hustle or their second job. The first Friday of every month, the Bureau of Labor Statistics puts out their data, and part of that is the household survey: hi, this is so-and-so from the BLS, do you have a side job? Nope, nope, nope, not at all.

(29:53)

I don't have any side job. I mean, why would you tell someone from the government you have a side job? They're going to tell the IRS and you get taxed, right? So there are just things out there that are not going to get reported, and this is one of them. It's going to be largely under-reported unless it rises to the level of something like that three-and-a-half-million-dollar scam.

Chris Bixby (30:09):

What I'll say is, again, looking at the analogy with payments: this is all that was talked about. Interchange was one, and fraud was number two. You'd go into conferences with retailers, and fraud was again one of the top two conversations, constantly. There was a lot of sharing, a lot of information, a lot of vendors doing it, a lot of investment in it, because it was identified as a problem. Maybe we're too soon to say it is or it isn't, but I think that information sharing, that honesty, that transparency, whoever that body is going to be that captures it, will probably shine some light on this and help prevent it.

Kevin Peranio:

Our number one fraud as a lender, a three-channel lender, wholesale, correspondent, and retail, that we still face is straw buyer fraud, the straw buyer scam. And that is pretty elaborate. Usually some of the red flags are a realtor that you've never done business with from out of state, and then a brand-new originator, typically in the wholesale channel, unfortunately for us. And you've got the buyer, the realtor.

(31:17)

Sometimes even the title company and escrow are involved, so you've got multiple parties involved. So imagine if that little strength-in-numbers approach to committing fraud against a lender used the power of AI. That's very scary.

Andrew Martinez (31:27):

And I know we're coming up on time a little bit, but I wanted to ask a few different questions. I think all of you mentioned in passing the regulatory aspect of this. Sergey, I want to start with you: have you seen any guidance whatsoever from housing regulators regarding deepfakes, and if not, do you anticipate it?

Sergey Dyakin (31:47):

Nothing that I've seen, and I'm not sure at what point we should be anticipating it. I think there probably will be, as this starts to rise to the level of people noticing it, and as fraudsters evolve in the tools they use, et cetera. But for now, I haven't seen anything of that nature.

Andrew Martinez (32:08):

Chris, have you seen anything?

Chris Bixby (32:10):

I think we were talking about this last night at cocktail hour. I went through all of it, I think it was Freddie, Fannie, FHFA, and then a couple of lenders, and most of the fraud that was talked about was first-party fraud: the idea of making up bank statements or income or dah, dah, dah. There's a little bit on straw buyers, there's a little bit on third-party fraud, but very, very limited, I'll be honest, in what I saw out there.

Kevin Peranio (32:39):

I echo that. But actually, it was either the morning of or the day before our prep call, if you guys recall: up on the Hill in DC, they had all these AI leaders from all over the world fly in, and they're talking to our government leaders. How are we going to rein in AI? Are we going to put brakes on it? Are we going to let it just run wild? I mean, the worst-case scenario is Skynet, we're all going to die in an atomic holocaust; that's what we all see, your average person sees the Terminator. So it's being talked about, even if not in our business by our regulators at the moment; it is being talked about by Congress, and I think Elon Musk was there, and all these thought leaders from all over the world. So it's being talked about on the Hill. As for whether we're putting actual governors on what we do with AI and how far AI can go, not yet, but the conversations have started. And my guess, I'm only 47, but from what I've seen of our government, is that they will react to something that goes bad, and then the regulation will pop in.

Andrew Martinez (33:46):

And I wanted to ask one more question, then maybe open it up for audience questions in a moment, but I'm just curious about the cybersecurity cost of all this. I'm wondering if there's any.

Kevin Peranio (33:57):

Goes up.

Andrew Martinez (33:58):

Yeah, like, for instance, does a deepfake response cost extra? I'm curious. Let's open it up.

Sergey Dyakin (34:05):

Well, we probably should also mention how to detect them to begin with. The technology for creating deepfakes has evolved, but the technology to catch them has also started to improve, and there is a continuous competition there. For example, in the early days of deepfakes, researchers noticed that deepfakes don't blink, because all the images used to create them were of people looking at the camera with their eyes open. So deepfake creators started to use images with eyes closed to create more natural results. And other things have started to evolve in this area, down to, if you look at the video, you can infer the heartbeat of a person and look for irregular patterns in the heartbeat. For all of these, if we can use AI to do it, it will be at least somewhat efficient in many cases. If you have to go to forensic methods and do it with sort of a magnifying glass, that's where the expense comes from. But even on the AI side of this, you have to throw money and money and money at it to buy tools to be able to catch that. So like Kevin said, the cost goes higher and higher. And I can attest to your comment about the security budget.
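
The "deepfakes don't blink" heuristic Sergey describes can be made concrete with an eye-aspect-ratio check. A minimal sketch in Python; the landmark input, thresholds, and blink-rate rule of thumb are illustrative assumptions, and any real detector would combine many such signals.

```python
# "Deepfakes don't blink": count blinks via the eye aspect ratio (EAR).
# A landmark detector (dlib, MediaPipe, etc.) is assumed to supply six
# (x, y) points around one eye for every video frame.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    # eye: 6x2 landmark array; the ratio drops sharply when the eye closes.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.2, min_frames=2):
    # A blink = EAR below threshold for at least min_frames consecutive frames.
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# Simulated 30 fps clip: open eyes, one 3-frame blink, open eyes again.
ear_series = [0.32] * 30 + [0.12] * 3 + [0.32] * 30
print(count_blinks(ear_series))  # 1; a long clip with ~0 blinks is a red flag
```

As the panel notes, this particular tell has already been trained away by newer generators, which is exactly the cat-and-mouse dynamic driving detection costs up.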

Kevin Peranio (35:20):

Yeah, I mean, the cybersecurity policy comes up for renewal, and then you've got to figure out, all right, if you take a higher, whatever the down payment is, the minimum or whatever.

Andrew Martinez (35:32):

It's a deductible.

Kevin Peranio (35:33):

There are ways to carve it up, but at the end of the day it's getting more expensive and there are fewer insurers out there. I mean, look at the liquidity crunch that we're in, what the Fed is doing, drawing liquidity out of our system. There's not enough dollar-denominated liquidity around the world, there's a shortage of it, and the dollar's strong. Well, that means insurers are puckering up too. It's happening in homeowners insurance: you see a State Farm pull out of California, along with Progressive, and the same in Florida. Liquidity is liquidity, and that includes cybersecurity policies. So they have gotten more expensive as the threat gets more real.

Andrew Martinez (36:07):

Yeah. Chris, quickly, any thoughts on the cost of cybersecurity with deepfakes?

Kevin Peranio (36:14):

DeepFakes don't blink.

Chris Bixby (36:15):

Exactly.

Andrew Martinez (36:17):

Definitely. I want to ask one question at the end, but I wanted to give the audience a chance to ask any questions for our panelists today. Does anybody have any questions?

Audience Member 1 (36:29):

I've got a quick comment and then a quick question. So, to the title, with AI as the problem and solution: GPT-4 is fantastic at identifying when it has written something. If you give it something that any of us wrote versus something that was the output of GPT-4, it gets that right a hundred percent of the time. So I think the policing of future deepfake attacks will be done effectively by the machines that create these things to begin with. But if we accept that there aren't really deepfake attacks yet, are there solutions that you guys have seen where AI is providing solutions for old-fashioned fraud?

Kevin Peranio (37:23):

It's got to be a combination, multifactor, right? What's the IP address where something's coming from, plus maybe answering some security questions, plus someone old-fashioned doing a callback. That's what we did with wire fraud.
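
Kevin's layered approach, network signal plus knowledge check plus out-of-band callback, is easy to express as policy code. A minimal sketch in Python; every field name, IP, and threshold here is an illustrative assumption, not PRMG's actual process.

```python
# Layered wire-release check in the spirit of Kevin's answer: a familiar
# voice or email alone never releases funds; every factor must pass.
from dataclasses import dataclass

@dataclass
class WireRequest:
    amount: float
    source_ip: str
    security_answers_ok: bool   # knowledge factor, asked out of band
    callback_verified: bool     # call back on a number already on file

KNOWN_EGRESS_IPS = {"203.0.113.10", "203.0.113.11"}  # office networks on record

def release_wire(req: WireRequest) -> bool:
    checks = [
        req.source_ip in KNOWN_EGRESS_IPS,  # request came from a known network
        req.security_answers_ok,
        req.callback_verified,
    ]
    # A large wire could additionally require a second human approver.
    return all(checks)

# The $3.5M deepfake call fails: right voice, but no callback verification.
print(release_wire(WireRequest(3_500_000, "198.51.100.7", True, False)))  # False
```

The design point is that a cloned voice defeats exactly one factor; the callback on a number already on file is what the CFO in Kevin's story never had.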

Audience Member 1 (37:36):

But I mean something like a falsified document, AI picking those documents off, is that happening inside of your shops?

Kevin Peranio (37:46):

Yes. So an example would be a signature. We have an OCR process that will take every document that has a signature, pull all the signatures, and put them on a page so we can see all the signatures and check whether they look similar. That's an example of using some kind of technology, and OCR is getting better too. The tech out there, I don't know if it's AI-driven necessarily, but I'm sure it will be pretty soon. So we are seeing OCR help in that respect, to look for fraudulent pieces of information.
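
The review page Kevin describes amounts to cropping each detected signature and tiling the crops for a side-by-side look. A minimal sketch assuming Pillow; where the signature bounding boxes come from (an OCR or document-AI vendor) is left as an assumption.

```python
# Build the signature "contact sheet" Kevin describes: crop every detected
# signature and tile the crops on one page for side-by-side review. The
# bounding boxes are assumed to come from an OCR/document-AI vendor.
from PIL import Image

def signature_sheet(pages, boxes, thumb=(300, 100), cols=3):
    """pages: list of PIL Images; boxes: list of (page_index, (l, t, r, b))."""
    crops = [pages[i].crop(box).resize(thumb) for i, box in boxes]
    rows = (len(crops) + cols - 1) // cols
    sheet = Image.new("RGB", (cols * thumb[0], rows * thumb[1]), "white")
    for n, crop in enumerate(crops):
        sheet.paste(crop, ((n % cols) * thumb[0], (n // cols) * thumb[1]))
    return sheet

# Usage: signature_sheet(loan_pages, vendor_boxes).save("signatures.png"),
# then a reviewer (or a downstream model) compares the signatures at a glance.
```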

Chris Bixby (38:19):

Yeah, I'd say, I mean, you probably have a better answer, but there are two areas I've seen in this. There's the OCR document verification piece upfront, depending on the vendor, et cetera, and obviously the credit reports and the credit pulls from the different credit agencies; and then on the back end there's the wire piece. So the two areas that have been focused on, at least that we've seen, are that upfront piece, information going into the system, POS, LOS, et cetera, and then verifying it, and there are, yes, a lot of vendors, some vendors here, et cetera, that do it; and then the actual wire piece, where there are definitely some vendors out there doing that in-between work. I think your point is valid and right: there are tools out there that can identify if something is a deepfake. The integration of those tools is where my question is, when and how those are going to be integrated. They're probably not going to be vertical-specific tools you can just drop into a lender's process. So it's either going to have to be the lender doing it, or it's going to have to be another tech company doing it as part of theirs, and they'll just integrate it into their process.

Andrew Martinez (39:29):

Yeah, thank you for that question. And we've got to wrap up in a minute, but I wanted to... oh, we've got another question. Sorry about that. No, no, no, we have some time. Yeah, go ahead.

Audience Member 2 (39:38):

A real quick comment for Kevin first, on stopgap measures in the fraud world: we recently terminated an LO, and he got caught by tech. He forged a company logo, and you can tell your LOs there are millions of logos and stationery being collected; it came from one of the large aggregators, and they said, this ain't real, and it wasn't. So just as an internal thought, you can pass that out in a meeting and you might slow it down from within. Second one: I'm a real big proponent of RON. I think it's the greatest thing since sliced bread, but I've run into two problems. One of the things is trying to figure out all the ways it could be faked out, and I came up with a unique solution I thought would work.

(40:27)

I would have heart monitors on the people at the other end and briefly throw a disturbing photo up, and make sure their heart rate increased. The problem was, I then realized that if it was an inside job, they could have those monitors on somebody else and it would still go out. My question is, and I know it sounds a little insane, but you're also starting to think about it, you're going, wait a minute, that isn't so silly after all: what do you see as the way to maintain control when the bad actor is within? You've got all these vendors and all these people, including your own people, who stand to profit millions of dollars rapidly, and it's just too great a temptation. I keep running workflows and waterfalls, and I can get most of the outside solved. I like what the gentleman said about GPT-4; I agree with him wholeheartedly. But it's the internal side, and I just keep running into it. Nope, there's a spot where an internal person could get it. I'd be curious to see what you come up with.

Kevin Peranio (41:25):

Yeah, I mean, it obviously depends on the position and who the person is, and I don't know if this fully answers it. But on our retail channel, when we hire someone as a retail branch manager, our two founders, Paul and Robert, put their eyeballs on that person in a full-blown interview. Every single one. There aren't all these layers. We're not even at our peak of 2,900 employees; we're at about 2,000 still. They take the time to talk to everyone and have a relationship with them, so they don't feel like it's someone that's going to do them wrong. Now, that branch manager has a P&L, so there is a financial buffer there that buffers corporate. That entity, that branch, runs its own P&L, and if something goes wrong there, there's a lawsuit or some kind of fraud, they eat it, or maybe we'll split it with them depending on what the mistake was. But yeah, I mean, when times get tough, this fraud happens a lot more.

(42:21)

And you've just got to know who's on your team. You talk about the documents you were referring to; I mean, look, I don't know how these underwriters do it. They have this sixth sense: even when something's in an imaging system, they can still sniff out that a logo looks like crap, just from years of experience. So it starts with having experienced people, then with a quality control team that is doing secret shopping behind the scenes and running random tests on things that have red-flag-type activity, from decades and decades of us just being in the business. And still, something will get through, and when it gets through, all the lenders, behind closed doors, we all tell each other, oh man, did you hear about this thing? And then we all beef up our systems to fight whatever it was that happened, and blackball that LO and make sure no one ever hires them. At least in our little group.

Sergey Dyakin (43:07):

If you're looking for a technology solution, obviously there are always the process solutions you need to put in place, some kind of verification, audit, et cetera. But you can also think about anomaly detection as something useful, and again, it will come down to vendors training their models on those specific cases. Things like this already exist, if you're willing to buy them, for example on the cybersecurity side, where you're looking for out-of-pattern behavior from your employees: somebody who had never logged in at this hour suddenly logging in at this hour, or moving files that person has never touched in the past. Those things can be trained on, and it becomes your anomaly detection engine. I would imagine similar things can be used for the situation you described. But that's a really good question.
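
A minimal sketch of the kind of behavioral anomaly detection described here, using scikit-learn's IsolationForest; the features (login hour, files touched, data moved) and all the numbers are illustrative assumptions, not any specific product.

```python
# Behavioral anomaly detection on employee activity, per the discussion:
# learn each user's normal pattern, flag departures from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 500 days of history: [login_hour, files_accessed, MB_transferred]
normal = np.column_stack([
    rng.normal(10, 1.5, 500),   # logs in mid-morning
    rng.poisson(20, 500),       # touches ~20 files a day
    rng.normal(50, 10, 500),    # moves ~50 MB a day
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

today = np.array([[3.0, 400, 2000.0]])   # 3 a.m. login, mass file access
print(model.predict(today))              # [-1] => anomaly, route to security
```

The insider case the questioner raises is exactly where this helps: the model doesn't care whether credentials are legitimate, only whether the behavior fits the person's own history.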

Chris Bixby (44:03):

Yeah, I think probably the one thing we didn't cover a lot, and I'm glad you asked, is the internal front. I mean, that's where the example Kevin gave up front was happening: an internal wire transfer. And I think the internal piece is valid to think about. You're right, it's the data, it's the tracking, it's the training of the market, the training on it, that has to be done. And again, I don't know if there are vendors out there doing that, or just CTOs and other people internally.

Kevin Peranio (44:27):

You kind of have to have your own known-traveler system. These are people we work with. There are certain people in the wire room; we wouldn't just hire someone off the street to be in the wire room, you know what I mean? So someone's been at our company 10, 15 years, maybe they want to come over from a different department; we know them. I mean, could that person possibly be a terrorist? Sure. But it's less likely. And then, look, you get people with mental health issues sometimes. We had some guy freak out and threaten to come in and start shooting people, and we had to fob up the whole place and put restraining orders on him, because he and his girlfriend split, and whatever. I mean, this stuff happens, and this guy was in shipping. The human aspect of this business is unpredictable, and so you can only do so much. You can't be perfect, although that's always the goal.

Andrew Martinez (45:22):

Yeah, no, that was a great comment to close on. Unfortunately, we're out of time, but thank you very much to our speakers today. Thank you for attending, and be sure to catch us next door; we've got a cocktail reception and another AI panel. So thanks again, everyone.