How AI Happens

FICO Chief Analytics Officer Dr. Scott Zoldi

Episode Summary

Today, we're joined by Dr. Scott Zoldi, Chief Analytics Officer at FICO, a global analytics software company that empowers businesses to improve customer experiences, lower risks, and operate more efficiently through its FICO Platform. Dr. Zoldi is also a named inventor on over 130 active and pending patents, including a recent breakthrough in using blockchain for auditable machine learning decision-making. His research and work have earned him a spot as one of American Banker's 2024 Innovators of the Year.

Episode Notes

In this episode, Dr. Zoldi offers insight into the transformative potential of blockchain for ensuring transparency in AI development, the critical need for explainability over mere predictive power, and how FICO maintains trust in its AI systems through rigorous model development standards. We also delve into the essential integration of data science and software engineering teams, emphasizing that collaboration from the outset is key to operationalizing AI effectively. 


Key Points From This Episode:

Quotes:

“I have to stay ahead of where the industry is moving and plot out the directions for FICO in terms of where AI and machine learning is going – [Being an inventor is critical for] being effective as a chief analytics officer.” — @ScottZoldi [0:01:53]

“[AI and machine learning] is software like any other type of software. It's just software that learns by itself and, therefore, we need [stricter] levels of control.” — @ScottZoldi [0:23:59]

“Data scientists and AI scientists need to have partners in software engineering. That's probably the number one reason why [companies fail during the operationalization process].” — @ScottZoldi [0:29:02]

Links Mentioned in Today’s Episode:

FICO

Dr. Scott Zoldi

Dr. Scott Zoldi on LinkedIn

Dr. Scott Zoldi on X

FICO Falcon Fraud Manager

How AI Happens

Sama

Episode Transcription

Scott Zoldi  0:00  

That blockchain then basically says, okay, we have followed the model development standard, we have shown the assets that demonstrate that we followed all those steps. And as part of it, right, we will have a complete audit chain of every single step, including explainability, including ethics testing, including robustness and stability testing, including how you monitor this model when you're done with it, right? So it's great to build a responsible AI model from the get-go, but you have to operationalize it.

 

Rob Stevenson  0:28  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens is a fantastic guest. I know I say that all the time, but I swear this time I really mean it, because my guest today is a named inventor on over 130 active and pending patents, among them, recently, using blockchain to ensure auditable ML decision-making, for which he accepted the Global Finance Innovator Award in 2023. He holds a PhD in theoretical and computational physics from Duke University. I'm telling you, not just to hat rack this guy, and he currently serves as the Chief Analytics Officer at FICO. Dr. Scott Zoldi, what a pleasure to have you on today. Welcome.

 

Scott Zoldi  1:31  

Thank you, Rob. It's great to be here.

 

Rob Stevenson  1:32  

You know, I was really struck as I was preparing for this conversation, not just by the sheer volume of the patents, but also the recency, you know, that you are still out there doing your invention thing, which is exciting because, you know, you have a "chief" in your title. I assume you're a very busy guy, you know, lots of meetings to prep for. There's investor meetings and podcasts, for one. So I'm just tickled that you kind of still find the time. How does getting to roll up the inventor sleeves fit into your regularly scheduled CAO programming?

 

Scott Zoldi  2:06  

Yeah, that's a great question. You know, I view it as one of my number one responsibilities. You know, I have a group of scientists that report to me, and I think, every day, I have to earn their respect and admiration, and I have to stay ahead of where the industry is moving. And, you know, plot out the directions for FICO in terms of where AI and machine learning is going to have to move. And so for me, I think it's just critical to being effective as a chief analytics officer. All the rest of those meetings do happen, right? I am a pretty busy guy, but I always want to make sure that I have two or three different sort of research avenues that are very, very interesting to me or to the team, that we're constantly exploring, because that's going to allow us to stay ahead of the curve, but also address industry issues from an innovation perspective. So it's job number one in my perspective, and I never want to lose track of it.

 

Rob Stevenson  2:54  

I love that outlook, and I am also glad to hear you say that it's important for you to continue earning the respect of those reports. I'm not sure many bosses think like that. Is that something about them being scientists, or is that just a general leadership, having really smart people reporting to you, kind of approach?

 

Scott Zoldi  3:13  

Well, one, it's certainly for the scientists, right? I mean, I think it's important because each of them is very, very smart and brilliant in their own way. But I think from a leadership perspective also, right, to be effective is really to get that mind share and to be on the same page. And so, you know, it's critically important for me, right, to be approachable, right, and relatable and really in sync. And so, you know, I think it's probably something that more leaders should think about, right, is really understanding that they have a privilege of leading some very, very talented people, and as part of that privilege, they really need to make sure that they're constantly understanding where they are with respect to, you know, the employees' mind share and the direction there. So yeah, I think it is a leadership trait and quality that more leaders should probably embrace.

 

Rob Stevenson  3:13  

I have to agree with you. Yeah, when these folks bring their research or their areas of interest to you. You know, scientists are a curious bunch, right? I don't think it's possible to be a scientist without having curiosity as, like, this key sort of value. But they could probably go off on any tangent they wanted. What is the true north? How do you sort of keep them, you know, all rowing in the same direction, so to speak, and make sure that the research they're doing is meaningful to FICO?

 

Scott Zoldi  4:21  

So, you know, we have a few major sort of themes, right? So one of the themes is that we build industry-specific algorithms, machine learning, AI. So, like, if we have an idea that would improve, let's say, scam detection, which is a heinous crime that occurs today, globally, we'd say, okay, great, we have this idea, we think that we could improve our models very considerably by exploring this technology direction. We'll generally look at it and say, yes, we think there's something there. We estimate sort of a business value, and then we go after it, because for us at FICO, we want to make sure that we're not just building commodity; we want to build things that are very specific to the domain problems that our customers face, from a detection perspective of, let's say, scams or frauds or better risk decisions, but also in the light of how to operationalize, and some of the regulatory sort of aspects of that. So that's one direction. The other direction is FICO, and our team, is very focused on responsible AI, which is a broader topic, and that topic is really around: what are those algorithms that allow us to use these powerful tools in more safe and ethical and explainable and auditable ways? And, you know, so we have another track, which is a little bit more forward-looking, but, you know, as we look at the regulatory landscape with respect to AI and trust issues with AI, that's another really big one, because we believe there's tremendous power. And in fact, a big part of our company is focused on AI. We're an analytic software company that, you know, operationalizes this AI and machine learning. We need to make sure that we're constantly focused on how to make sure that there is trust in the algorithms that are being developed and that they can be monitored and audited. And so those are two major sort of directions. We certainly don't go off on, you know, things that are completely orthogonal to the business, but there's so much there, frankly, and it's such a developing field, that there's no lack of things to go and invent or to explore.

 

Rob Stevenson  6:10  

Being in a financial institution, of course, responsible AI and accountability are of the utmost import. It ought to be, no matter what industry one is in, but it strikes me that for a company like FICO, it's probably more top of mind. Now, do you think that because you have spent a lot of time thinking and writing and working on responsible AI and accountable AI and explainable AI, was that in parallel with working at FICO, was that before FICO? Like, at what point did this become something that you were really focused on? Because it strikes me that it's not merely because it's a regulatory thing based on the industry you're in.

 

Scott Zoldi  6:42  

So it's a great question, Rob. So part of it is part of who I am. So, you know, you mentioned earlier on that I'm a trained physicist, and being a physicist, right, I have these belief systems that there are fundamental sort of rules for how the world works, right, and, you know, explanations for phenomena. And so, you know, I've always looked at machine learning and AI from a view that we should understand deeply what it's learning, right? And, you know, when I came to the field, originally, in roughly '99, it was a culture shock for me, right? Because, you know, in physics, we have these sort of fundamental understandings, but in machine learning, people use these algorithms and get these great detections. But back in those days, we'd have models that are very difficult to explain. And back in, you know, '99, with FICO being this analytics software company that, you know, provides these analytics to different financial institutions across the globe, right, we would have to justify why you should trust this model, or work with them and their governance teams to make sure that they were comfortable with the use of that, and it was really hard back in the late '90s to get people to trust AI and machine learning. Roughly about 10 years ago, with all the deep learning sort of being focused on, I noticed that people no longer asked those questions, like no longer asked, can you explain it? They were fascinated with prediction, right? And so models got more and more big, they got less and less explainable. They got darker and deeper and more black box. And at that point, right, I got very concerned, right? I got concerned about our industry. I got concerned about how AI and machine learning could harm people, right? For FICO, but beyond FICO, because there are so many models that are used that are making decisions about you and I each and every day, that I wanted to drive that sort of narrative, right? And it's only gotten more and more important to me with things like generative AI, which are some of the largest models, right? And so, you know, I've been pushing from that point forward just a conversation, which is, you know, we need to recognize that, yes, some of these machines are really phenomenal, but we should be able to understand and monitor and audit, right, machine learning models and AI that will make decisions that will impact human life. And many of these models do, right? They make a decision, and if you don't understand how they work, or you can't audit why it made that decision, and your life takes an entirely different path because of that, I mean, that is a really scary outcome. And we don't have to talk about AI use in policing and other things that are tremendously scary, but just decisions made about individuals with these sorts of models require us to no longer be fascinated with how big they are and how much compute we use, and how many GPUs we have to purchase, and how many millions of dollars it takes to train a model. We should be really, you know, looking at this from a different perspective and saying, listen, these should be tools, tools that we know how to use, and tools that we know how to monitor and audit, so we can, you know, make great decisions, but do so in ways that are safe.
And I think that's where it started for me, and that's why, you know, I've had more than 35 patents now just in responsible AI, because of that sort of recognition that things are going the wrong direction, right, in my perspective. And, you know, yes, FICO, you know, has a very particular view, because some of the decisions we make with these models are really important and impact individuals, but it's broader than just FICO, and it's broader than just financial services, and I think everyone's starting to understand that. I think the industry is getting more and more critical of AI, and that's probably a good thing, because it doesn't mean the models get dumber, right? It means that we just choose algorithms that we can, you know, have prediction, but also have the safety of responsible AI in addition to that.

 

Rob Stevenson  10:08  

Yes, it's not about them being dumber or slower; it's about them being safer, surely.

 

Scott Zoldi  10:13  

Absolutely, it is.

 

Rob Stevenson  10:14  

I remember this time 10 years ago, when saying you had any sort of AI, and, oh, I remember the parlance: it's a black box algorithm, which was used to say, like, it's so advanced, not even we know how it works. And that was a very, like, sexy thing to say on Sand Hill Road, you know, if you were trying to raise a couple million bucks back, you know, 10, 12 years ago. Which is curious, because the people building this tech, surely so many of them were scientists. And to be a scientist, if you don't have explainability, if you don't have a methodology, you don't have a research paper, you don't have science. What happened?

 

Scott Zoldi  10:49  

Well, I think what happened was, essentially, you know, the demographic of who builds these models has shifted considerably, right? So, you know, my staff today is still, you know, a huge number of mathematicians and physicists, right? But if you look at other organizations, a huge number of computer scientists, right? And maybe in some instances, the people that are building these models don't actually understand the algorithms, and so it's not that they're, you know, using it and they know they're irresponsible; they may just simply not know the harm associated with it. So I think, you know, the demographics of who builds these models today have changed. I also think, with the rise of cloud computing, you know, we're putting AI and the ability to build AI in more people's hands, many of them not experts, not PhDs. Didn't invent the algorithm. May not even understand the math behind it. And then, you know, we've supersized it all, right, with the GPUs and everything else. And so, you know, I think there's this sort of computer engineering sort of focus now to get these models larger and faster and bigger. And, you know, for a demographic that's more of, you know, mathematicians or ethicists, you know, that understanding is important. In fact, I used to have this adage that I used to talk about, and I still do, which is explainability first and prediction second. You know, that's not a sort of order of priority that you generally see in many different AI applications, particularly those that are, to your point, you know, struggling to raise sort of capital for, you know, another round, right? And then that's, you know, the, you know, fail fast, break things sort of mentality. But if we're doing that and we're breaking things around decisions on people, you know, it's just unacceptable. But I think that's, you know, how we've evolved over time, and that's why we're seeing now more and more sort of conversations about AI regulation, and we see more and more politicians having conversations about AI, and we see that trust in AI is at a pretty low point right now. And so there'll be a natural sort of feedback loop, which says, yeah, we're fascinated with what you can do with these large machines and GPUs and huge amounts of data, but we also, you know, we are concerned when it's going to be applied to us. And it's not just, you know, creating a video or drawing me a picture or singing me a song, right? When it's making a decision about my life, then, you know, the stakes are a lot higher, right? And we should probably err on the side of explainability as the paramount, number one sort of feature, and then, you know, prediction, right, secondarily, because that's more around financial sort of loss, maybe for those extending a loan, or maybe a little bit more inconvenience on whether your card was blocked for fraud. Those things we can live with, right? But if we don't have that transparency, it's a problem.

 

Rob Stevenson  10:50  

Certainly. Yeah, it makes all the sense in the world why someone like you, given your math background, would say explainability first. When you said that, you kind of gave me flashbacks to algebra, of my teacher being like, show your work, zero credit, you know, when I would just, like, write what I had surmised X equaled, without showing how I got there, right? But that's the same idea, right? It's like, you need to, like, map out exactly how you got there, because we can't be sure how you got there otherwise, and if you did so ethically. So, yeah, that makes sense to me. And you know, I was really interested when I was reading your blockchain patent, because it just immediately was so obvious what a great use case it was. You know, when we think about, okay, the blockchain is a ledger, and it is meant to map every step in a sequential way, that's what we aspire to do when we're building AI, right? And so if you could put your AI development into a blockchain, then, bang, you have explainable AI, right? Is it that simple?

 

Scott Zoldi  14:07  

You have transparency on how it's built, right? And so, you know, I think one of the things that, when I looked at the blockchain application, was: this is complicated, right? There are thousands of different decisions that get made when you build an AI or a machine learning model. And so one of the first things that I focused on was, okay, if we want to have explainability and want to make sure that we're following a responsible AI methodology, we need to have a model development standard. So that's number one, right? So, like, if I have, you know, 100 different data scientists on my team, I can't have them building models in 100 different ways, right? It just can't work that way. And we would want the best of class, right? The best of class in terms of which algorithms are responsible and which ones are not. How will we explain things? Not 100 different ways to explain it, but we will be experts in these one or two different ways of explaining. Or how will we ensure that we're testing for bias in models, and what are those two or three algorithms that we'll use? And that's where the blockchain really was super powerful, because we couple it with a mature model development standard, which says that, you know, everyone that works here, right, will follow these sort of directions in how we deal with data, how we clean data, how we represent this data, which algorithms we use, how we do explainability and transparency, right? For example, one of the classes of algorithms that I'm really keen on is interpretable neural networks. So models that are interpretable machine learning are super important, because that means that you and I can go look at it, and it's not a black box anymore. It's transparent. We can go look at that. But then, you know, so on and so forth. And so that blockchain then basically says, okay, we have followed the model development standard, we have shown the assets that demonstrate that we followed all those steps, and as part of it, right, we will have a complete audit chain of every single step, including explainability, including ethics testing, including robustness and stability testing, including how you monitor this model when you're done with it, right? So it's great to build a responsible AI model from the get-go, but you have to operationalize it and use it in production, and know when to stop trusting that model. All those things are described in a model development standard. That blockchain records all those steps, and it also enforces that it's followed, right? So in some sense, I view it as an enforcement tool, which, you know, sounds a little overbearing, right? Because we think of scientists as these tremendously creative people, and they are, and there are avenues for research and creativity, but in a production, operationalized model, we have to have a set of standards that we follow, and we have to have, you know, people that do the work, people that test that work, people that verify that work, and that all gets on that blockchain. And when you're done with it, right, then we know, right, that all those steps are followed. We have all the audit trail of the information we're going to need to have a conversation on how to use this model, how to talk to governance teams and regulators around the model. And, you know, we just come out of it with a product that, if you follow the sort of recipe in that standard, you know, it's good to go, right? And, you know, you'll have the complete operating manual.
It's almost like, you know, you buy something and then you open it up and there's no operating manual, right? Or it's two pages, and it's not in very good detail. Here, it's completely laid out, right, for full transparency. And that's really what the power of that blockchain is for me, and we use it for all of our AI model development at FICO today, because it's so critically important that we follow those standards and we have that proof of work in everything we do.
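For readers who want to picture the mechanism Dr. Zoldi is describing, here is a minimal sketch in Python of an append-only, hash-chained audit trail for model development steps. The class name, step names, and record fields (project, step, author, assets) are illustrative assumptions for the sake of the example, not FICO's actual schema or implementation.

```python
import hashlib
import json
import time

class ModelAuditChain:
    """Toy append-only ledger: each development step is hashed together
    with the previous entry, so the history cannot be silently rewritten."""

    def __init__(self, project_name):
        self.project_name = project_name
        self.entries = []

    def record_step(self, step, author, assets):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "project": self.project_name,
            "step": step,            # e.g. "ethics_testing", "stability_testing"
            "author": author,        # who performed or signed off on the step
            "assets": assets,        # pointers to the evidence produced
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self):
        """Recompute every hash and check the links; any tampering breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            check = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(check, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

# Hypothetical usage: each stage of the model development standard is persisted.
chain = ModelAuditChain("scam-detection-v2")
chain.record_step("explainability_report", "Sally", ["latent_feature_explanations.pdf"])
chain.record_step("ethics_testing", "Sam", ["bias_test_results.csv"])
assert chain.verify()
```

The point of the sketch is only the shape of the idea: every step carries its evidence and its author, and the hash chaining makes the record tamper-evident, which is the property a governance team or regulator would rely on.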

 

Rob Stevenson  17:19  

I have kind of an in-the-weeds follow-up question to that: what is the actual experience for the employees? Is this like a product you have built, where they can add something to the FICO blockchain at a crucial part of the development? How is it actually, I guess, to use your word, operationalized?

 

Scott Zoldi  17:34  

Yeah. So it starts with an old concept, which we used to have before blockchain, called analytic tracking documents. So this was essentially a contract, a paper contract, between me and the data scientists. They would basically say, we're trying to build this type of model, you're using this sort of data, these are the success criteria, and there's many different ones of those, right? And in the old days, it'd be like a Word document, and then we'd agree on it, right? And then we'd have sprint reviews and agile development to ensure that we're tracking that all those steps are followed. Today, the experience is they work in a UI that we've custom developed for them. They enter things onto the blockchain. So they'll basically describe the modeling project, they'll describe each of those requirements, they'll establish success criteria that I'll either agree with or not agree with, right? We'll iterate all those sort of steps to show the assets aligned with our model development standard, these are the success criteria, and everybody that's working on that project, and it's typically more than one data scientist working on some of these models, has to sign off on all those requirements and success criteria. So that establishes the project, and then as they move forward, they have to persist assets to that blockchain that show, you know, the progress that they're making. And so they literally are signing onto that blockchain, right, where, you know, it's going to be, you know, Sally versus Sam, and what were those assets. And what's interesting about it, Rob, is that it's not just like a checklist, right? It's basically like, if Rob made a mistake, Sally's going to say, no, actually, there's a mistake there, and that's on the blockchain too, and then go back and fix it, right? And that's on the blockchain. So it's sort of a live stream of that experience, but they are interacting with that blockchain such that for all years to come we can go back. It's immutable, right? You know, we all make mistakes; the purpose is to recognize that and to correct that and show that that's part of the development. But that's how they interact with that blockchain.
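As a rough illustration of the "analytic tracking document" contract described above, here is a small sketch of how requirements, success criteria, and sign-offs might be captured as structured data: the project cannot start until every named scientist has signed, and cannot complete until every criterion agreed up front is met. All field names, thresholds, and the dataclass layout are hypothetical, not the actual FICO tooling.

```python
from dataclasses import dataclass, field

@dataclass
class SuccessCriterion:
    name: str          # e.g. "explainability_score", "detection_rate"
    threshold: float   # agreed target, fixed before any modeling work starts
    met: bool = False

@dataclass
class AnalyticTrackingDocument:
    project: str
    data_sources: list
    criteria: list                  # list of SuccessCriterion
    required_signoffs: list         # everyone working on the model
    signoffs: set = field(default_factory=set)

    def sign(self, person):
        if person in self.required_signoffs:
            self.signoffs.add(person)

    def approved(self):
        # Work only starts once every named scientist has agreed to the
        # requirements and success criteria.
        return set(self.required_signoffs) <= self.signoffs

    def complete(self):
        # If any criterion agreed up front is not met, the project stops;
        # the bar does not get lowered after the fact.
        return self.approved() and all(c.met for c in self.criteria)

# Hypothetical example
atd = AnalyticTrackingDocument(
    project="scam-detection-v2",
    data_sources=["card_transactions"],
    criteria=[SuccessCriterion("explainability_score", 0.8),
              SuccessCriterion("detection_rate_at_1pct_fpr", 0.6)],
    required_signoffs=["Sally", "Sam"],
)
atd.sign("Sally")
atd.sign("Sam")
print(atd.approved())   # True once both have signed
print(atd.complete())   # False until the criteria are actually met
```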

 

Rob Stevenson  19:20  

So would there be like a blockchain explorer? Like, the way there is for crypto transactions, there's like a FICO blockchain explorer that, if you're an employee of FICO now, you can just kind of go in and you basically can see all of the plumbing of how anything, at least in your team, gets done. Is that accurate?  

 

Scott Zoldi  19:35  

So, yes and no. So there is, but this blockchain is still permissioned, right? So, for example, it's still need-to-know; some of the data in there is sensitive IP, and so we give access to the blockchain to those that need access, right? It's a private blockchain, so the public can't reach it, right? But even within FICO itself, right, there'll be permissioned roles, and so, like, a governance team, our lawyers, right, myself, management, will have access to that, and they can go review the development in blockchain. And, you know, critical third parties that need access to that, maybe, you know, a governance team at a client, or a regulator that wants to actually see the proof of work, not just take someone's word for it in an email, could get access to it. So it's still permissioned. But yes, once you get that permission to go look at a project, you can go through every step and see how many mistakes Sam made, and how many times, you know, Sally told him to correct it, and that's fine, right? We're not trying to hide any of that. That's part of the honesty and transparency and how we establish trust, right, and that we're actually working to meet those standards. And if we don't meet those standards, Rob, then the blockchain isn't complete and the project stops at that point, right? So, you know, I think one of the things that I love about it is this, right? Like, if you and I agree, okay, we need this level of explainability, and we need this level of performance, and we do that before anyone's worked on the project, let's say the scientists spend four months trying to get there, and you and I were just unreasonable in our expectations. We don't then say, oh, well, the team spent a lot of time, maybe you and I should just lighten up, right? There is no notion of lightening up, right? There were the requirements up front. They're met, or they're not met. And that's really a lot of sort of integrity, right? There's a lot of integrity in that, because, you know, you don't get to change your mind when you're under pressure from a delivery perspective, right, or someone leans on you, right? It's on the blockchain. This is what Rob and I decided, this is what we wanted, we couldn't get there. Let's stop the project, and let's go back to the stakeholders, Rob and Scott, around what their requirements are and what the success criteria is. And I think that right there is also one of the more important things, because we see, unfortunately, you know, data scientists under incredible pressure, right, to deliver things, and we see a lot of them have moral, you know, and ethical sort of concerns about what they develop. And this kind of establishes it up front: before, you know, Sam and Sally start this project with us, they'll know our expectations, and if they have a concern that you and I are really not viewing the problem correctly, they'll say it up front, because they've agreed to it on blockchain. So, you know, it's just kind of this sort of level of, you know, honesty and integrity as part of the process too, which is really important these days also.

 

Rob Stevenson  22:05  

Definitely, yeah, it makes all the sense in the world, just from an organizational development standpoint: like, okay, everyone's on the same page, we have this process for getting products out the door, and we can all be held accountable to our work and to our mistakes and to the awesome things that we get done. And it's right there; we can all go check it out. There's, like, a bunch of reasons why that makes sense. But then also, as you say, as AI becomes a more important topic for, you know, governments to care about, when the regulators knock on your door, right? Knock, knock. Hey, you're building AI. What are you doing over there? You can, like, say: here you go, here's our blockchain, here's literally everything; if you can find fault in it, I'd love to hear it. And you know, if you can hand that over, like, you will probably be able to be compliant, assuming you have built it with the values such as you kind of explained. Correct, right?

 

Scott Zoldi  22:52  

I mean, I think, you know, one of the things that, you know, I track is how many organizations actually have a model development standard. So number one, we have one; it's defined. Number two is, it's enforced, right? And so, you know, many regulatory bodies are just going to be thrilled that, actually, it's not just something someone puts on the website, right? Be responsible with AI, right? Anyone can put that on the website, but developing a standard and then enforcing that standard, right, is super important. And then you're absolutely correct, right? If somebody has a problem with it, or they have suggestions around how to improve it, right, we don't have to argue about it over, like, 100 or 400 or 500 models that are developed a year. That's not where you want to have the conversation; argue about one standard, right? And if we can align on what that standard is, right, now I've got scale, right, and that's really important, right? We have to be efficient. And so, you know, that is the right sort of way to look at it, which is we have a conversation about the model development standard. We're always happy to relook at that and improve it. We spend a lot of time on it, and we think it's really great, but there may be ways that we adjust it in the future, based on how things develop in the world. But then that's the asset that we develop on. And then the conversation is not about model 343 that we developed; it's about the standard. And then if you're enforcing it, right, then, you know, you follow the standard. And so, you know, growing up in AI, I was always, you know, fascinated with the fact that there's so much structure around software engineering and process and so little around data science. And I think it's that sort of, you know, magic, sort of crystal-ball-y type, you know, let the scientists go in with a cauldron sort of view. But, like, this is software, like any other type of software. It's just, you know, software that learns by itself, and therefore, you know, we need those levels of control. And so this is what it allows us to do: to have that around that standard, and then enforce a standard, the same way that you'd have, you know, standards in software engineering from a check-in perspective, regression perspective, you know, code validation perspective, right? We need the same for AI and machine learning. You shouldn't have, you know, almost no standards in AI and tons of standards in software, because, at the end of the day, right, AI is software. It's embedded in software, and they all have to meet similar sorts of standards before it's applied to customers or operationalized.

 

Rob Stevenson  24:59  

Yeah, I mean, apply it to the software development team, roll it out to sales. I feel like this framework could be used for any department, because for every arm of the business that is shipping something, there are processes, there are compliances you need to make, there's, like, a prescribed way of doing something effectively. So it doesn't merely need to be the science, the software development, or the data science, or what have you; I could see this easily mapping to a marketing team or a sales team too. So, you know, this blockchain approach feels like it maybe is just, it feels like, where you are with your goal to operationalize AI in your team, right? And most companies I speak to, or probably all companies I speak to, are not taking this approach. I guess, duh, because you invented it last year. But when you think about operationalizing AI, if you weren't to do it with a blockchain approach, what do you think most companies get wrong? I guess, first, what is operationalizing AI? And then let's beat up some companies that you think are doing it badly.

 

Scott Zoldi  25:52  

So operationalization of AI comes down to recognizing that we need to ensure that when we're developing AI or machine learning, it will operate in the production system the way that we prescribe. So that could be, hey, this software needs to produce an answer 10,000 times a second, and each answer has to be received in less than 10 milliseconds, right? So that's a set of constraints that will constrain the scientists in terms of, okay, how do we do that, right? Like, you know, we can't have a humongous, large model, right? We have to have an efficient model. We have to choose the sort of technology for how we will create our features and how we will persist information about this customer from their past transactions. And, you know, we need that sort of tight interlock. So operationalization is basically understanding that, you know, there is no data science team and software engineering team. It's one team, right, that is very deeply integrated, where, you know, the software team will be working to provide analytics with, you know, what are those constraints for your model, and then analytics will be doing their very best to optimize, you know, how much value can I pack within this small sort of piece of compute and memory so that I can meet, along with my partners in software, those 10,000 transactions a second and those low latencies, while still providing a product that provides business value, right, where the model is predictive and what have you. And that's really tough, right? That doesn't happen very often. It's something that we at FICO fortunately figured out very early in our lives. I mean, one of the things that we take a lot of pride in, Rob, is that, you know, in the year 1992, we launched something called Falcon, and Falcon is the world's, you know, most successful application of machine learning in the financial and payment space, which is a fraud detection model, right? So, you know, based on the credit cards in your wallet, those credit cards could be monitored by this machine learning model, which, you know, did just precisely that. You know, we process, you know, huge amounts of transactions with very low latency and provide high-value scores so that you make a decision about whether it's fraud or not. And it had to be done in a very, very short time frame. And so back in 1992, we were already thinking about how to operationalize. So we've gotten very good at it over the decades, right? And it comes second nature to us, but not to many other companies, sometimes.
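To make the throughput and latency constraint concrete, here is a back-of-the-envelope sketch: given the 10,000-transactions-per-second, sub-10-millisecond envelope mentioned above, this toy calculation (the per-score timings are made up for illustration) estimates how many parallel scoring workers a model would need and whether a given model even fits the latency budget.

```python
import math

# Hypothetical operational constraints, as described in the conversation.
TARGET_TPS = 10_000          # transactions per second the system must score
LATENCY_BUDGET_S = 0.010     # each answer must come back within 10 ms

def workers_needed(per_score_seconds: float) -> int:
    """How many parallel scoring workers are needed to sustain TARGET_TPS,
    assuming each worker scores one transaction at a time."""
    per_worker_tps = 1.0 / per_score_seconds
    return math.ceil(TARGET_TPS / per_worker_tps)

def fits_latency(per_score_seconds: float) -> bool:
    # The model itself must also leave headroom for feature lookups,
    # network hops, and persistence inside the 10 ms budget.
    return per_score_seconds < LATENCY_BUDGET_S

for ms in (0.5, 2.0, 8.0, 15.0):
    per_score = ms / 1000.0
    print(f"{ms:>5.1f} ms/score -> workers: {workers_needed(per_score):>4}, "
          f"fits 10 ms budget: {fits_latency(per_score)}")
```

The takeaway is the design pressure the constraint creates: a slower model does not just miss the latency budget, it also multiplies the hardware needed to hold the throughput, which is why the data science and software engineering decisions have to be made together.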

 

Rob Stevenson  28:04  

What do you think is the reason for that? Why do companies sort of fail?

 

Scott Zoldi  28:08  

I think the main reason they probably fail is that data science is sometimes treated as an academic sort of function within an organization, right? There's not a good mixing of talent from a software perspective and a data science perspective. And so, you know, most will have separate organizations. Many of those organizations will go build models in isolation; it could be a great model, and then they try to hand it off to a software team that has no idea how to operationalize that model, right, because the scientists made decisions and chose algorithms, or the way they handled data, in ways that just violate physics. There's no way you can do it, right? And so they fail because they don't, from the very get-go, understand that for every data scientist, you probably need to have two software engineers and a QA person associated with him or her, right? And if you had this sort of mentality that, you know what, we're not separate functions, right, we're one team, then you start with those sort of critical conversations right up front, like, how are we constrained, and what do we have to do when we produce this system? And that's the worst, right? Because, you know, it's still one of the largest problems in the industry. It used to be that, like, 90% of all AI and machine learning models don't get deployed because of that disconnect. It's gotten better. I think it's, you know, somewhere in the 50% range now, but it's still a challenge, because we don't have this sort of notion that actually data scientists and AI scientists need to actually have partners in software engineering. And that's probably the number one reason why it fails: because, you know, I hand off my asset to you, and you say, wow, sorry, you violated physics, it can't happen. And then game over at that point, right? They don't usually learn from the mistakes, right, because you have this sort of siloed sort of mentality, right? Whereas, if you work as one in terms of how you would go and deliver that asset right from the very get-go, you know, it'd be a joint sort of objective to get that analytic operationalized.

 

Rob Stevenson  29:59  

Well, now we are creeping up on optimal podcast length here, and normally at this point in the episode, I ask the guest to kind of share some parting advice to the folks that are listening. In this case, though, I feel like the way you explained operationalization, the possibility of using the blockchain for explainable AI, you've given plenty of advice and marching orders to the AI practitioners within the reach of our voices. So rather than do that, because you're such a curious, interesting guy, when you think about this space, about AI outside of FICO, I want to know what really gets your attention and what excites you. If you think back to yourself as, you know, the starry-eyed academic publishing his first patent back in 2000, if you're just, like, on vacation and you happen to be reading an AI publication or something, what is it that you find uniquely exciting?

 

Scott Zoldi  30:47  

So right now, what I'm most excited about is, you know, the fact that we're getting to more interpretable machine learning, right? Like, you know, I think back to your point around, you know, the black box and how that was all mystique, and now it's become less mystique, right? As a fundamental physicist, right, my advisor, when I was a starry-eyed sort of student going into industry, said, hey, Scott, you know, when you're at Los Alamos, go hiking in the deserts and think about physics, right? And I still do that, right? I get out in the deserts, out here in Southern California, and I think about AI. And some of the things that I really think about is, you know, like, what do we have to do to fundamentally change the way algorithms work today so they get closer to the way that we work? I think we underappreciate just how darn smart we are and how well our brains work, right? We're so enamored with the headlines of AI and machine learning, but fundamentally, the algorithms are not the right algorithms, and it'll continue to evolve. And, you know, one of the things that I've had to tell people is, like, with all this excitement around generative AI, that generative AI is not new; it's just that the computing has allowed us to build these humongous, you know, models. It will evolve, right? And I think that's the thing I'm most excited about. In fact, one of the things that I'm focused on, and maybe we can talk about in the future, is the concept of responsible generative AI, right? Which is, again, new types of algorithms, new types of constraints, so that we can use all that power, but the algorithms will look different, right? And I think just that constant evolution of algorithms is what excites me the most, and how they'll get more approachable. We'll develop more trust, and then at the end of the day, we'll have this sort of AI-human, sort of, you know, collaboration, where neither party, you know, whether human or AI, overtakes the outcome, and we'll get to a better place. So that's what excites me: you know, walking in the desert and thinking about AI and where it's going to evolve. And I think it's a wonderful place to be. It's really, really exciting, and, you know, I'm excited about, you know, the decades to come.

 

Rob Stevenson  32:39  

It's a very encouraging, fear-free approach that I'm really pleased you shared. And, you know, I wanted to speak about responsible AI and generative, but we simply have to carry on here. We have lives to live and, you know, pods to cast and these sorts of things. So I'll have to have you back on, Scott, for a part two. We can maybe get into some of that stuff that was in my notes that we didn't even touch, because we were having too much fun talking about other stuff. But hey, at this point, I would just say thank you so much for being here. This has really been a blast. I really appreciate the time you spent with me today.

 

Scott Zoldi  33:05  

Well, thank you for having me, Rob. It's been great. Thank you.

 

Rob Stevenson  33:09  

How AI Happens is brought to you by Sama. Sama's agile data labeling and model evaluation solutions help enterprise companies maximize the return on investment for generative AI, LLM, and computer vision models across retail, finance, automotive, and many other industries. For more information, head to sama.com.