Track 1 – The Role of AI in Underwriting Circa 2024

TRACK SPONSORED BY:  Black Knight

Of all the relevant mortgage AI use cases, underwriting might be the most impactful because it represents speed for customers and lenders and safety and uniformity for housing finance overall. But we've had AI in underwriting since the dawn of DU and LP, so this panel explores the next steps in progressing automated underwriting with both these GSE-owned systems as well as proprietary fintech systems that serve our ecosystem.

Transcription:

Sandra Madigan (00:06):

Good afternoon and welcome. Man, those lights are blinding. Watch out, Frank. They are. Okay. As many of you have heard, Black Knight is now part of ICE, and we are a proud sponsor of the Real World AI Use Cases track. My name, for those of you that know me, is Sandra Madigan, and I'm the Chief Digital Officer at Black Knight. In this session, The Role of AI in Underwriting Circa 2024, you're going to hear from several industry experts. Joining me, you have Frank Poiesz, who is the business strategy director for Dark Matter. We also have Leah Price, an independent advisor, and Maria Volkova, editor for National Mortgage News. A round of applause for them, please. Go ahead. Nick was unable to join us, he had a personal matter to take care of, but he would've loved to be on this panel. Enjoy.

Leah Price (01:10):

I got a clicker.

Maria Volkova (01:16):

Okay. For this panel, we have opted to stand. So, hi, my name is Maria Volkova and I'm the technology reporter for National Mortgage News. Thank you for joining us this afternoon for a conversation about artificial intelligence and how this technology is used in loan underwriting. Buzz around the use of artificial intelligence, both the positives and the perils, has intensified in the past year, and of all the relevant AI use cases, underwriting might be the most impactful because it makes or breaks a consumer's opportunity to qualify for a mortgage and become a homeowner. During this discussion, we will talk about how stakeholders in the industry are thinking through using AI in underwriting, how bias may impact such systems, and how to keep them transparent. So to start it off, Leah, if you could maybe introduce yourself one more time.

Leah Price (02:18):

Sure. Leah Price. I'm an independent advisor right now. Previously I was at Figure Technologies working on blockchain solutions for the mortgage industry. Before that, I was at Fannie Mae, where I worked on the Day 1 Certainty team and the desktop appraisal pilot, and also worked in sales. Before that, I was at American Express, and I bring this up because, though I never tell people, I actually wrote their first machine learning model governance policy, which was very unsexy. I never bring it up, except it's very relevant in these times. I come to AI now because, in the midst of pitching, I realized at some point nobody wanted to hear any more about blockchain. So the only way I could get invited to speak at conferences was to become an AI expert and figure out how I could sell blockchain as a way to solve some of the problems that AI creates. So I'm here to talk about that a little bit today.

Frank Poiesz (03:18):

Hi everybody. Frank Poiesz. I've been with Black Knight, now Dark Matter. You may or may not have heard the whole news story, but Black Knight has now merged with ICE, and in the course of that transaction we created Dark Matter out of the Black Knight Origination Technologies division. I was with Black Knight for about five years, if you include the first year when I was with Compass Analytics, an acquisition that was done by Black Knight back in 2019. So over this period of time, I've been excited to work with a number of great technologies that were built by the Black Knight team, including our AVA artificial-intelligence-based platform, which was first deployed against the problem of documents. We'll talk about that a little bit more later. I'm not going to go into my background because it's way too long a story and you don't want to hear it. But anyway, this is going to be a great session, and thank you for joining.

Maria Volkova (04:14):

So to start the conversation off. In general, where are we with AI and what is up with all the hype? 

Leah Price (04:22):

Awesome. So I'm going to start with the hype cycle. Gartner created this hype cycle; basically, it's a way to think about emerging technologies, and it's interesting to apply it to blockchain and AI because they've both been through recent hype cycles. In a minute I'm going to ask you to raise your hands and tell me where you think we are with both of those technologies. The idea is that we chart expectations, really public perception, across time. So red, orange, yellow, green, blue. Red is the innovation trigger, down at the bottom: something happens, there's some big release, and that is followed by the peak of inflated expectations. Everybody's enthralled with this technology, all these conferences are popping up, everybody's throwing money at it. That is then followed by the trough of disillusionment.

(05:19)

That's the crash. Everybody wants out, nobody wants anything to do with it, everyone's staying away, it's the bad guy, we should have never touched this technology. Companies have failed, and what remains are the valuable use cases. So we get to the green star, which is the slope of enlightenment, and then anyone who survives through all of that drama reaches the plateau of productivity, the blue star there. So I want to ask you to raise your hands. Let's talk about generative AI and ChatGPT. Do you feel like we are at the peak of inflated expectations? Anyone think we're at the peak? Does anyone think we have further up to go in terms of our expectations? Okay, alright, that's awesome. I kind of feel like we're about at the peak with generative AI. Gartner, back in August, thought we were at the peak. Just curious: blockchain, the technology, crypto. Do you feel like we're at the bottom yet?

Frank Poiesz (06:27):

Almost.

Leah Price (06:27):

Almost. There's further to go down.

Frank Poiesz (06:30):

It's hard to tell.

Leah Price (06:30):

Okay. Alright, so I'm bringing this up because AI and blockchain are really different; AI has a much longer history. I want to point out that if we're at the peak, we've got to be ready for the crash, because there is going to be something that happens that's really bad, and that's going to make everybody want to get out of this technology. It's going to be evil, people will go to jail. But you buy at the bottom. So those of us who are in the space want to keep thinking about which companies are going to survive and have the legitimate use cases.

Frank Poiesz (07:06):

Leah, what do you think is going to kind of trigger the bottom?

Leah Price (07:10):

Okay, so it's interesting, because blockchain has had its crash: think about FTX, people going to jail, consumers losing a lot of money. I'm curious to hear what you think the bottom could be for AI, but I'm starting to read about some really scary things happening with generative AI, just kind of gross, nasty stuff, and at some point I think enough people will get upset that the regulators will start to take action, and it might end up being pretty hefty regulation that comes down. That would be the bottom.

Frank Poiesz (07:48):

Yeah, I agree with you. I think we're starting to hear from the CFPB increasingly frequently. Just this past couple of weeks, we've had two announcements from the CFPB about concerns. The first concern was that AI chatbots be used appropriately with consumers, and about the possible effects of using that particular technology, generative AI, inappropriately and misinforming customers. The second was the announcement that they are very concerned and are going to track closely how credit decisions are made, to make sure that the real reason behind any adverse credit decision, or actually any credit decision, is discoverable by the consumer and of course by the regulators. So they have announced that they're on the offensive to be sure that the industry does it right. The problem we have is that we don't know what right is yet, and that's why I feel we're at a point where we've got to watch how we use AI as an industry, to make sure we deploy great solutions that are also transparent.

Leah Price (08:58):

Yeah, and I think the other thing that could be the bottom is that there are a bunch of lawsuits happening right now against the generative AI companies over intellectual property. Whoever survives is going to have to have pretty deep pockets to be able to debate this in the courtroom. So I made the point that we will have some kind of a crash, and we will want to stick with things. And I want to point out that AI is different from blockchain in that it has a much longer history, so it's had lots of different ups and downs. People think of AI as going back to 1950 with Alan Turing; you've all probably seen The Imitation Game. Things didn't really pick up with AI until the government stepped in and started investing for the purposes of defense. So the first AI summer happened in 1963, with DARPA stepping in and funding AI.

(10:03)

And one really interesting thing: in the popular imagination now, people talk about the dangers of human-level general intelligence. OpenAI, for example, is trying to build general intelligence, which would be where a machine is able to do every single thing that a human being is able to do. No one has achieved that yet. But I want to point out that back in the 1970s, one of the godfathers of AI, Marvin Minsky, thought that in three to eight years we would have machines with the general intelligence of a human being, and obviously that has not come to be. So all of these fears that humans can be replaced are really fun to think about, but there isn't really a good reason to think that'll come into play.

(10:58)

The first AI winter then came about when the government pulled spending. It's really interesting: now we think about government regulation as playing the big role, and no one really talks about government spending in the AI space. You don't read about that, but I'm sure there will be defense spending in this space going forward. In the nineties there was again a summer, so I think this current iteration of AI has really been building since the nineties, since the advent of deep learning neural networks, up to last year's ChatGPT launch, which everybody here is familiar with, leading even up to now. Just this week, ChatGPT rolled out new functionality: it's now multimodal, which means it can work with text, voice, and images on your phone with the app. It's really cool. There are some demos; I haven't been able to play with the new voice response yet, but basically I could pick which voice I want, guy voices, girl voices, and kind of turn it into a little friend that I talk to on my phone. So this multimodal generative AI is probably going to be all the rage this winter. I don't know if you want to talk about any of this, Frank.

Frank Poiesz (12:19):

I think a lot of us in the industry are looking at how to use AI for the conversation that has to happen between a lender and the consumer, and also at how to use AI within our own companies. If we're software developers, the ability to use ChatGPT or other generative AI to assist in writing software is very tempting. We're studying a number of those angles, and there is a ton of potential. But again, we're going to spend the first half of this being a little bit conservative, and then at the end I think there's a very hopeful story that we'll get to.

Leah Price (13:01):

Awesome. No more for this slide. Cool. Then, since this is a discussion about AI in underwriting, I think it would be relevant to talk about some of the specific use cases of AI in that space. Frank, if you want to go to your slide.

Frank Poiesz (13:20):

Yes. So I happened to see one of my favorite people, a prior customer who was on this journey with the Dark Matter team over the past three and a half years of looking at underwriting and trying to arrive at a solution that takes most of the work and automates the decisions that underwriters have to make. As I mentioned earlier, we've been using AI for about four years, building models to handle the problem of reading documents. This turns out to be a fairly complex thing, even though humans are very good at it, because it's not simple to find all the pieces of information that you need. That journey was a big part of the last four years of my life, but the question really is: do we need AI in order to support and dramatically improve the underwriting experience? I think Dino would probably agree that the most valuable part is not necessarily the AI; it's the rule sets. You can build complex rule sets that follow the guidelines that the agencies promulgate.

(14:30)

So the rules to underwrite are all deterministic, with very, very few exceptions where there's subjectivity. And since lenders have to sign up for all of the reps and warrants they have to follow in order to get a loan funded and not have a repurchase, everyone's worried about whether the answers are going to be correct. So in some quarters, and I think this is the message we hear a lot from the regulators, the question is: do we want AI in the decision or not? Because if we have AI in the decision and it's not transparent and we don't know exactly what it's going to do next, then it's not a very predictable model, and we've got to rethink that. On the first score, taking AI and applying it against deterministic rule sets is not very smart; you need deterministic rules to do that. So you come up with a lot of this if-then-else logic, provide the data to those rule sets, and you're going to improve underwriting.
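The if-then-else logic Frank describes can be sketched in a few lines. This is an illustrative toy only: the field names and thresholds below are made up for the example and are not actual agency guidelines.

```python
# Minimal sketch of a deterministic underwriting rule set.
# Thresholds and field names are hypothetical, not real seller-guide rules.

def evaluate_loan(loan: dict) -> list:
    """Run if-then-else eligibility rules; return the findings that fired."""
    findings = []
    if loan["credit_score"] < 620:
        findings.append("Credit score below minimum")
    if loan["ltv"] > 0.97:
        findings.append("LTV exceeds maximum")
    if loan["dti"] > 0.45:
        findings.append("DTI exceeds maximum")
    return findings  # empty list means no rule fired

# The same input always produces the same, explainable outcome.
clean = evaluate_loan({"credit_score": 740, "ltv": 0.80, "dti": 0.36})
flagged = evaluate_loan({"credit_score": 590, "ltv": 0.98, "dti": 0.50})
```

The point of the sketch is predictability: every finding traces back to a named rule, which is exactly the transparency property regulators ask about.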

(15:28)

We've proven that that's true. However, the process of gathering the information an underwriter needs is where applying AI matters. Again, there's that problem of solving how to read all the transactions on a bank statement so that you can answer the question: are there any large deposits here that we have to worry about? Or being able to read pay stubs. I don't know about you, but I've looked at the pay stub of a ship captain, and it's incredible how complicated it is. So getting the ability to reliably read documents is where applying AI is useful. Classifying and extracting data is also important, to make sure that you extract the data from all the inbound work streams. You've got to be able to look at and correlate across data sets, and there are some applications for AI, or at least for some machine learning, in looking at some of the different kinds of data that are coming through.
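Once AI has extracted the transactions from a bank statement, the "large deposits" question itself is back to deterministic logic. A rough sketch, with an illustrative threshold (the 50%-of-income cutoff here is a stand-in, not an actual guideline):

```python
# Hypothetical sketch: flag bank-statement deposits that are large
# relative to stated monthly income and may need sourcing.
# The 0.5 threshold is illustrative, not a real underwriting rule.

def flag_large_deposits(transactions, monthly_income, threshold=0.5):
    """Return deposits exceeding threshold * monthly_income."""
    cutoff = monthly_income * threshold
    return [
        t for t in transactions
        if t["type"] == "deposit" and t["amount"] > cutoff
    ]

txns = [
    {"type": "deposit", "amount": 250.00},
    {"type": "deposit", "amount": 4800.00},    # likely needs sourcing
    {"type": "withdrawal", "amount": 6000.00}, # not a deposit; ignored
]
large = flag_large_deposits(txns, monthly_income=6000)
```

The hard part, reading the statement reliably in the first place, is the AI piece; the check downstream stays simple and auditable.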

(16:33)

Most important, though, those inbound data streams and all those tasks are very, very ripe for process improvement through deep learning: learning about how loans are made, learning about how the process works so that you can accelerate workflows, learning about the behaviors of the people making decisions. These are all fertile ground for the class of AI called deep learning. In addition, there are a ton of applications we're working on that include helping the people who have to understand the seller's guide. Every one of us in the room has at least at one point seen the Fannie Mae and Freddie Mac seller guides, and all the guides that are required for FHA and VA loans. This mountain of data is not only difficult to understand, you also have to get good at researching it. So breaking down that knowledge and handing it to users is certainly a good application of generative AI: to take and synthesize all of that and come up with good answers for lenders, for underwriters, and perhaps even to reduce the time it takes to educate underwriters.
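A seller-guide assistant of the kind Frank describes typically starts with a retrieval step: find the guide sections relevant to a question, then hand those to a generative model as context. A toy sketch of that retrieval step, using invented section titles and naive word-overlap scoring in place of a real search index:

```python
# Hypothetical sketch of the retrieval step behind a seller-guide
# assistant. Section titles/content are invented; a production system
# would use a proper search or embedding index, not word overlap.

def retrieve(question: str, sections: dict, top_k: int = 2) -> list:
    """Rank guide sections by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        sections.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [title for title, _ in scored[:top_k]]

guide = {
    "B3-4.2 Large Deposits": "large deposits on a bank statement must be sourced",
    "B3-3.1 Pay Stubs": "income documented with a recent pay stub",
    "B2-1.1 LTV Ratios": "maximum ltv ratios by transaction type",
}
hits = retrieve("how do I source large deposits on a bank statement", guide)
```

Grounding the generative model in retrieved guide text, rather than letting it answer from memory, is one common way to keep its answers checkable.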

(17:44)

And further, there's of course talking to consumers: oh, we need your pay stub. One of the things we joke about at Dark Matter is that when we get consumer documents, the consumer's document needs to be evaluated and classified. So we get a document, the consumer says it's a W-2, and it ends up being a picture of their cat. How do we make sure that doesn't go into the file and sit there for a week before somebody discovers they have a picture of a cat? That's one of the ways we can improve the consumer experience. So again, we're working on the ability to support consumers, the ability to understand and improve workflows, and the ability to continue to enhance how we handle the inbound data needed for underwriting.

Maria Volkova (18:31):

Leah, do you want to take a stab?

Leah Price (18:31):

Well, actually, I wanted to ask Frank a little bit more, because we were nerding out earlier about how cool model validation is. Increasingly, regulators will go to, let's say, bank lenders, and they will say: okay, tell me about your model inventory. Tell me every single place that you're using a model, whether it's machine learning or not, and tell me about it. Since you're a vendor and you're selling products that use models, what's that conversation like between you and your customers? What do you give them when they tell you they're being audited?

Frank Poiesz (19:15):

One of the things that DARPA did over the years, and you mentioned DARPA as one of the roots of the current generation of AI, is promulgate the concept of XAI, or explainable AI. Explainable AI, transparency: these are all terms we're hearing in the industry, used to describe the process of understanding how a given model works so that you can perform tasks like model validation, and take model validation to the next level. The next level meaning: okay, we can validate certain business rules and certain features of a given AI, but we need to go a step further and try to understand where the risks are and where the areas of possible drift are. AI models can change as their interaction with data occurs, and understanding how that expresses itself in the business environment the model is working in also requires you to track how models perform, to have policies and procedures for how you manage the data used for training, and to explain to users how given features inside the model work. That way you can provide the people who do the validation with frameworks they can use to test how a given AI solution is, number one, managed in terms of the data.

(20:43)

Number two, retrained when necessary, with ways to discover whether it has gone sideways. And number three, that you have ways to test how its outcomes match reality, determine when its outcomes do not match, and therefore trigger a retraining event.
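The "outcomes match reality" check Frank describes is, at its simplest, a monitoring loop: compare what the model predicted against what actually happened, and trigger retraining when agreement drops. A minimal sketch, where the 0.95 agreement threshold is an illustrative assumption, not a standard:

```python
# Sketch of a drift monitor: trigger retraining when the model's
# predictions stop matching observed outcomes. The 0.95 threshold
# is an illustrative assumption; real policies set this per model.

def needs_retraining(predictions, actuals, min_agreement=0.95):
    """True when outcome agreement over the window falls below threshold."""
    matches = sum(p == a for p, a in zip(predictions, actuals))
    agreement = matches / len(predictions)
    return agreement < min_agreement

# A healthy window: predictions track reality exactly.
healthy = needs_retraining([1, 0, 1, 1] * 25, [1, 0, 1, 1] * 25)
# A drifted window: the model has started over-predicting class 1.
drifted = needs_retraining([1, 1, 1, 1] * 25, [1, 0, 1, 1] * 25)
```

Real monitoring also watches the input distribution, not just outcomes, but the core idea is the same: a recorded threshold and a defined trigger, so the retraining decision is itself auditable.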

Leah Price (21:04):

Yeah, I mean, it sounds like Dark Matter is really well prepared. The lesson there for other vendors is that they're going to have to be ready when their lender customers ask them for all kinds of documentation. I would say the newer AI companies probably aren't ready for the kind of aggressive digging that some of these regulatory organizations are used to.

Maria Volkova (21:33):

And building off of that, when mortgage lenders come to you guys and ask you about your products and services, what are their concerns? What questions are they asking you?

Frank Poiesz (21:46):

Not enough actually, but they're asking,

Maria Volkova (21:49):

What do you wish they asked you? Maybe?

Frank Poiesz (21:51):

We really do want them to explore this whole idea of the transparency of the models that we use. It is true that the larger organizations have extensive history in doing model validation; large lenders are good at that, it's part of the craft they apply to their business. But if you're a smaller lender, you may not have the staff or the capability set to do that kind of deep analysis, so you need to get good at asking questions of vendors. And it's up to us as a vendor community, and I know we're biased heavily toward vendors here, to not only say that we're being responsible but to be able to prove that we're being responsible in how we deliver AI: to have policies, procedures, and statements of fact about how we do business that our community of customers can rely on and learn from.

Maria Volkova (22:47):

Since the discussion is about how AI and underwriting may change going into next year, that's my question for you guys: how might the use of such technologies, machine learning included, change going forward?

Leah Price (23:04):

Yeah, if I can speak to this a little bit. I feel like the AI-in-mortgage-lending conversation is really focused on the underwriting decision, because that's very sensitive and there's a history of bias in that decision, but there's so much more in our world that can and will be impacted by artificial intelligence. What I think is really interesting applies to all of our companies; it's this last column here, corporate functions and code generation. Think about how there are now code whisperers that your developers can use that will make them much more efficient at writing software. Personally, as a former product person, I'm excited for the day when I can just use ChatGPT to mock up a user interface and say, hey, can you go build this UI for me? Then I've got the code, I work with a developer for two days to debug it, we put it into production, and within a week I could have a piece of working software, just as a business person with an idea. So whether you're a lender or a vendor, or even a consumer, you'll get used to this: things are just going to move a whole lot faster. We'll be able to test things and let things fail a lot faster, and that really is going to change the way consumers expect any kind of software product to work. They're going to demand more out of their homeownership experience. So that's what I'm most eager to see in this space.

Frank Poiesz (24:51):

Well, I'm excited by the whole story. We've got so many opportunities, and there are many opportunities yet to arrive. We're seeing some technologies here today that are new and innovative in the appraisal space as well as in the underwriting space, and in the real estate world there's a whole world of innovation. One of the reasons I'm so excited to be with Constellation Software, the owner of Dark Matter, is that we have the ability to collaborate with a large community of like-minded companies, and the ability to work with the data that lenders own. And that's an important point for those of you who are vendors: never forget that your customer owns their data, and that you need good ways to protect that data while still realizing that AI depends on data. If you don't have good, current data, there's no way you're going to make AI models work. So getting to the point where customers understand the value of their data, and vendors respect the value of that data, means we can start to explore all the boundaries of this great opportunity. I'm really excited about the exploration, because we've only just begun to look at what the potential is for this technology.

Maria Volkova (26:07):

So you mentioned quality of data, and I feel like AI systems for underwriting are faster, maybe more efficient, in making lending decisions. But there is a speed bump around the conversation surrounding bias. And I guess my question is: what roadblocks exist in the implementation of AI in the underwriting space?

Frank Poiesz (26:36):

So I mentioned the regulators earlier, and that's one piece of the puzzle, but bias is a real thing. If we think about all the rules that are in the seller guide, they are biased: they're biased toward people with high credit scores and low LTVs. So it's not a reach to expect that software we develop using a technology that is probabilistic, that makes guesses about things in order to do its job, could end up introducing not only human bias, in terms of violating the rights of other people, but also biases that produce poor results. A given model, if it's biased the wrong way, can create problems. I often tell the story of our early days with our AVA system. We had, and I think you may remember this, a period of time when we were introducing new documents; we were doing appraisals and all the other documents.

(27:42)

We started to build a model to handle driver's licenses, and I got a call from our operations people, who said: AVA thinks everything is a driver's license, and it's driving us nuts. What it was doing was keying on anything that had a photo in it; the model was just not astute enough to know that a picture in an appraisal is different from a picture on a driver's license. That's the kind of tuning data scientists have to do. And you mentioned some of the interesting things about ChatGPT earlier. One of the other things that's important is that you have to know how to prompt a model. If you ask ChatGPT a question one way and then ask it a different way, you'll get different answers. One answer may be better than the other, but who knows? So I think there's a lot to learn about how models behave, so that you can avoid both the kind of bias that affects people, which is going to be prohibited by regulators, and the bias that will dramatically affect your business if it does things wrong.
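One common guard against the "everything is a driver's license" failure mode is to act on a classifier's prediction only when its confidence clears a threshold, and route everything else to a human. A minimal sketch; the labels, scores, and the 0.90 cutoff are all invented for illustration:

```python
# Sketch of confidence-based routing for document classification.
# Labels, scores, and the 0.90 cutoff are illustrative assumptions.

def route_document(scores: dict, min_confidence: float = 0.90) -> tuple:
    """Accept the top label only when the model is confident enough."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < min_confidence:
        return ("human_review", label, confidence)
    return ("auto_classified", label, confidence)

# A clear-cut case is auto-classified; an ambiguous one goes to a person.
confident = route_document({"drivers_license": 0.97, "appraisal": 0.02})
uncertain = route_document({"drivers_license": 0.55, "appraisal": 0.44})
```

Thresholds like this are also something a model-governance policy can record and audit, which connects back to the validation discussion earlier in the session.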

Maria Volkova (28:54):

And Leah, I wanted to quickly address this question to you: what might be solutions that will keep these systems more transparent? I know I wrote a story recently about the way blockchain can be such a solution, so if you could talk a little bit about that.

Leah Price (29:14):

I actually wanted to tack on to Frank's point first: it is impossible to get the mortgage industry to use any new technology, full stop. I was on the Day 1 Certainty team, and we launched in 2016, and I think there's only 20% adoption or something really disheartening, despite probably all of you in the room trying to push toward a digital mortgage. And an actual digital mortgage, an eMortgage? Less than 10% of the industry is using eMortgage, and you just shook your head. But now we're talking about blockchain and all this generative AI.

(29:59)

Oh yeah, it's coming, we're all going to be taking these huge risks. But honestly, if we do a reality check, I bet we wouldn't find any lenders who would feel comfortable coming up here and talking about how they're using machine learning, because they're really scared of the CFPB coming after them. So in terms of a barrier to adoption, this industry itself is a huge barrier to any kind of technology adoption. The other piece of that is fear: without any regulatory clarity right now, lenders are just scared to say anything publicly. And if people aren't going to talk about it, then they're not going to admit to using anything, and we're just going to have to tiptoe around things. That's also a huge barrier to adoption.

Frank Poiesz (30:49):

That's actually how we get out of the bottom of the hype cycle. The way we get out is by going through a period where some people go just a bit too far, there are regulatory penalties, there are companies that have big problems, and there are vendors who get in trouble with their customers. But by doing that, we together will learn how to apply this technology more effectively. We will learn what the regulators don't know now, which is how they feel, and how to prove that our methods for managing the regulatory risk of these products will work. We'll know what the CFPB expects in more granular detail than just "make sure you have a good reason for giving a decision." There will be rigorous things we have to do, and there will be some costs to us as vendors and as lenders, but once we get through that and understand what's going to be expected, I think that's our opportunity to come out of that trough strong.

Leah Price (31:53):

Totally agree. So there will be some kind of enforcement on some of these, there are going to be very expensive lawsuits for sure, and it's going to look really scary. But that's the point where we get the clarity the industry needs, and then people can be more straightforward about what we're doing. So to Maria's question about all the wild, cool stuff that blockchain can offer artificial intelligence: I'm going to say blockchain is at the bottom, we're still in the trough of disillusionment, but what's interesting about it is that the tenets of blockchain are truth and transparency. When you think about what people challenge about AI, it's the opacity of some of these models, the lack of transparency. You really cannot blindly trust an AI model or its output. But a blockchain-based tool, cryptography for example, helps to validate the authenticity of an image.

(32:58)

Tools like that, I think, will start to proliferate as we see more challenges with AI. Even in image generation, with all the tools that exist today, people talk about watermarking being a good solution, but there isn't a really good AI watermarking solution yet. Cryptography is a good solution there. But there are other examples, Maria, that we've talked about, so let me highlight her cutting-edge article in National Mortgage News about blockchain and AI. I also published a white paper on AI plus blockchain with two industry colleagues, one from Rocket and one who some of you may know.
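The cryptographic idea behind validating an image's authenticity can be shown in a few lines: record a document's hash when it enters the file, and any later alteration is detectable because the recorded digest no longer matches. In this sketch a plain dictionary stands in for the ledger; the blockchain part of the proposal is what would make that recorded digest itself tamper-evident.

```python
# Sketch of hash-based document authenticity. A dict stands in for
# the append-only ledger a blockchain would provide.
import hashlib

ledger = {}

def record(doc_id: str, content: bytes) -> str:
    """Store the SHA-256 digest of a document at intake."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[doc_id] = digest
    return digest

def verify(doc_id: str, content: bytes) -> bool:
    """True only if the content still matches the recorded digest."""
    return ledger.get(doc_id) == hashlib.sha256(content).hexdigest()

record("appraisal-123", b"original appraisal image bytes")
ok = verify("appraisal-123", b"original appraisal image bytes")
tampered = verify("appraisal-123", b"edited appraisal image bytes")
```

Even a one-byte change to the document flips the digest, so verification fails; that is the property that lets a ledger attest to what a file looked like at underwriting time.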

Maria Volkova (33:43):

Frank, would you like to add your two cents?

Frank Poiesz (33:46):

Yeah. My two cents is that we have a number of great technologies, and on the idea that we need transparency: blockchain offers transparency, the ability to ledger the information associated with a given lending transaction so that we can understand what actually happened. I mean, let's face it: who's looked at a loan file lately? Can you understand exactly how the underwriter came to that decision with any real clarity? No. So the ability to ledger information that comes out of a given AI event, and having all of those events stacked up for examination when necessary, offers a great opportunity to improve confidence in the results of these technologies.

Maria Volkova (34:38):

Great. So we've reserved a few remaining minutes for questions. So if anybody has questions, please?

Audience 1 (34:49):

I'm going to start with Frank, but it's going to go to Leah as well. So Frank, you were talking about the size, for example, of the GSE guides, and I don't know if you have tackled this thought or not, but you kind of came across it as the conversation of intelligent query, and I'm going to go one level deeper on that. You said, hey, if you don't get good data from your source, let's say a mortgage company, it's almost like it's their responsibility to have good data. So my twofold question is, and maybe Leah, you jump in on this as well: what do you see that's going to help us develop intelligent query as a natural artifact of the human race? And what are you going to do so that, let's say, a mortgage company has good data without it becoming their burden to meet your algorithms or your cycles of input?

Frank Poiesz (35:49):

It's a big challenge, and I think that part of the solution is that we have to start with things we currently understand. That's why our partnership with Dino over the past few years, to understand how to apply AI and machine learning to the complex problem of really, really chaotic documents, has been so instructive for the Dark Matter team. We understand some things that we just didn't really understand before. We understand now how to take apart the problem and to start to assemble models that really solve it. The same thing is going to be true with the examples that you gave. How do we get a bot that's really like an underwriter, that can answer questions like an underwriter, which would then give us the ability to have a bot that could talk to a consumer reliably? The only way to do that is to start working the data, to have example data that you can trust, and to start to train and retrain and develop models.

(36:50)

That's why the data is so important. So if you have information about workflow in your company, and if you are a vendor and your system doesn't really husband that data, keep it clear and easy to understand, then you won't be able to train with that data very confidently, because the idea is that no matter what data you have, you can apply a model to it and get better understanding. So if you're going to ask, hey, is this loan going off track in underwriting because I need a document that I don't have, then having data about all the loans that you've done over the past three or four or five years, and having workflow information about all of the steps that happened to make those loans, gives you an incredible edge over your competitors who might not have that data. That's one of the avenues we're following. The other thing is understanding it, and Leah will reinforce this: if you're doing model validation, it all starts with the data. I mean, you really can't validate a model if you don't know where your data comes from.
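A toy version of the "loan going off track because a document is missing" check Frank mentions might look like the following. The required-document set and record fields are hypothetical, chosen only to illustrate why clean, well-kept workflow data is what makes such checks possible:

```python
# Hypothetical set of documents every loan file must contain.
REQUIRED_DOCS = {"paystub", "bank_statement", "w2"}

def off_track(loan):
    """Return the set of required documents still missing for this loan."""
    return REQUIRED_DOCS - set(loan["docs_received"])

# Two illustrative loan records from a (fictional) workflow system.
loans = [
    {"id": "A1", "docs_received": ["paystub", "w2", "bank_statement"]},
    {"id": "B2", "docs_received": ["paystub"]},
]

# Map each off-track loan to whatever it is still waiting on.
flagged = {l["id"]: off_track(l) for l in loans if off_track(l)}
# flagged == {"B2": {"bank_statement", "w2"}}
```

The check itself is trivial; the hard part, as Frank says, is having years of loan and workflow records clean enough that fields like `docs_received` are trustworthy inputs.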

Leah Price (37:59):

So I want to answer your question, because you mentioned the human race. So one really interesting thing about generative AI and large language models is that there's actually a shortage of data. If you go out and you Google "large language models shortage of data," you'll see that these models have pretty much consumed everything that's out there: all of the internet, all of the books, whether

Frank Poiesz (38:25):

Permitted or not,

Leah Price (38:26):

Whether permitted, yes, absolutely. And so now the question is, okay, now that our models have eaten all of the data that's out there, some of it just scrappy data and some of it good data, what are we going to do to keep our models smart? So there's a whole emerging field, I don't even know how emerging it is, but there are these companies that all they do is create synthetic data: data that's labeled in a way that's really helpful for these models. So I think it's cool to think about catastrophic data shortages in the world. You can Google that.

Audience 2 (39:06):

Would you clarify the word synthetic as you use it? Because that word gets used a lot of different ways.

Leah Price (39:12):

Yeah, so I would say, well, I don't know the opposite of it, but I would say organic data would be anything on Wikipedia pages, the stuff that has come to exist in the way we think that language exists. Synthetic data would be something that was manufactured for the purpose of feeding a model. So one use case of synthetic data is to train autonomous vehicles. Right now we haven't been in every single crevice on every single cul-de-sac and seen every single tree, so there's synthetic data that's used to train autonomous vehicles right now: fake data that's well labeled, that's clean, that really works well to feed and train a model. That's what I meant.
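Synthetic data in Leah's sense, records manufactured purely to feed a model, with labels built in by design, can be sketched in a few lines. The loan fields, ranges, and labeling rule below are invented for illustration and are not a real underwriting rule:

```python
import random

def synthetic_loan_record(rng):
    """Manufacture one clean, pre-labeled loan record for model training."""
    income = rng.randint(30_000, 200_000)   # annual income, dollars
    loan = rng.randint(50_000, 600_000)     # requested loan amount, dollars
    ratio = loan / (income * 5)             # crude affordability ratio (illustrative)
    return {"income": income,
            "loan_amount": loan,
            # The label is part of the design, which is the whole point of
            # synthetic data: no messy human annotation step is needed.
            "label": "approve" if ratio < 1.0 else "refer"}

rng = random.Random(42)  # seeded so the dataset is reproducible
dataset = [synthetic_loan_record(rng) for _ in range(1000)]
```

Unlike scraped "organic" data, every record here is complete, consistently labeled, and generated on demand, which is why synthetic data is attractive when the organic supply runs short.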

Maria Volkova (40:05):

Does anybody else have any questions?

Sandra Madigan (40:10):

We have time for one or two more. Come on. Be brave. Speak up.

Audience 2 (40:14):

I thought that Leah, you would have brought up the concept, because you're sold on blockchain and you're trying to find a place for it, right? So I figured the core ingredient there is what Frank talked about: first, take everything that's analog and convert it to digital. Once you have it in digital on the same plane, you feed it into the rule engine, like Frank said. But the key there, with regulators looking in over your shoulder and machines sometimes being myopic and such, is blockchain, right? I would have thought that would have been more your angle. I was just wondering where you stand on blockchain.

Leah Price (40:53):

Because I'm cranky about it. Yeah, yeah, I'm pretty cranky about blockchain. You talked about some of the regulators. I want to highlight that most of the regulatory agencies in financial services, so OCC, FDIC, CFPB, FHFA, all of them, they all have these FinTech groups now and are looking at innovation. So I know blockchain is a big priority for them, as is artificial intelligence. It's really interesting that DARPA started the first summer for artificial intelligence. It would be so fascinating if it's actually the regulators that kind of stimulate a different type of summer, for responsible innovation. And you're laughing to yourself, I know. Come on.

Frank Poiesz (41:43):

Well, who can afford it as well? Who can invest the time and energy in understanding the enormous data sets involved? Think about what our partners at Fannie Mae and Freddie Mac have done with DU, LPA, and all the associated technologies. That was only possible because of the mountain of data about all the loans and all the properties that they accumulated in the normal course of their business. So I think we need that kind of mass in order to get to a point where there's general acceptance of these tools.

Maria Volkova (42:18):

I think we have time for maybe one more question. One last question, guys. Someone be brave.

Leah Price (42:28):

A question for you all. Do you feel like robots are going to kill us all? Raise your hand.

Frank Poiesz (42:35):

I keep hearing Skynet from people.

Leah Price (42:38):

It was six months ago that Elon Musk and other leaders issued their letter saying AI must be stopped, and do you know what happened? Absolutely nothing, including at those leaders' own AI shops. They continued development because no one else was stopping. There's plenty of money to be made in the space. So I don't know.

Maria Volkova (43:03):

And on that note, I guess if there are no other questions, thank you so much for coming and listening to our panel.

Frank Poiesz (43:10):

Thank you.