How AI Happens

Training Biometric Tech with Head of AI George Williams

Episode Summary

George Williams, a Silicon Valley tech veteran who most recently served as Head of AI at SmileIdentity, shares his views on the growth of AI in Africa, what biometrics is and how it works, and the mathematical vulnerabilities in machine learning. Biometrics is substantially more complex than password authentication, and George explains why he believes this is the way of the future.

Episode Notes

Goodbye Passwords, Hello Biometrics with George Williams

Episode 61: Show Notes.

Is it really safer to have a system know your biometrics rather than your password? If so, who do you trust with this data? George Williams, a Silicon Valley tech veteran who most recently served as Head of AI at SmileIdentity, is passionate about machine learning, mathematics, and data science. In this episode, George shares his opinions on the dawn of AI, how long he believes AI has been around, and references the ancient Greeks to show the relationship between the current fifth big wave of AI and the genesis of it all. Focusing on the work done by SmileIdentity, you will come to understand the growth of AI in Africa, what biometrics is and how it works, and the mathematical vulnerabilities in machine learning. Biometrics is substantially more complex than password authentication, and George explains why he believes this is the way of the future.

Key Points From This Episode:

Tweetables:

“Robotics and artificial intelligence are very much intertwined.” — @georgewilliams [0:02:14]

“In my daily routine, I leverage biometrics as much as possible and I prefer this over passwords when I can do so.” — @georgewilliams [0:08:13]

“All of your data is already out there in one form or another.” — @georgewilliams [0:10:38]

“We don’t all need to be software developers or ML engineers, but we all have to understand the technology that is powering [the world] and we have to ask the right questions.” — @georgewilliams [0:11:53]

“[Some of the biometric] technology is imperfect in ways that make me uncomfortable and this technology is being deployed at massive scale in parts of the world and that should be a concern for all of us.” — @georgewilliams [0:20:33]

“In machine learning, once you train a model and deploy it you are not done. That is the start of the life cycle of activity that you have to maintain and sustain in order to have really good AI biometrics.” — @georgewilliams [0:22:06]

Links Mentioned in Today’s Episode:

George Williams on Twitter

George Williams on LinkedIn

SmileIdentity

NYU Movement Lab

ChatGPT

How AI Happens

Sama

Episode Transcription

Rob Stevenson  0:04  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field, and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens is the Head of AI over at SmileIdentity, George Williams. George, welcome to the show. How the heck are you today?

 

George Williams  0:41  

I'm good. This is great. Thanks for having me. Really excited about doing this interview.

 

Rob Stevenson  0:45  

Yeah, really pleased to have you for a multitude of reasons, not the least of which is that you have some interesting ideas about how long AI has been with us. And it's funny, every time I speak with someone, the dawn of AI gets a little earlier. You know, I'll speak with people and they'll say, I've been in this space since the 80s. Rob, I bet you didn't even know AI was a thing, then. But you believe the genesis of AI is even earlier than that. Is that fair to say?

 

George Williams  1:10  

Yeah, you can go back in history. I think in modern times, depending on which AI researcher you talk to, we're in our fifth big wave of artificial intelligence since computing started with Alan Turing in the 50s. In fact, some of the seminal words and concepts that we use even still today, like the Turing test, came from Alan Turing in the 50s. But if you actually look through a lot of the literature around anthropomorphism, around detecting humanistic qualities in other things, in animals, projecting humanistic qualities onto other things like robots and automatons, you can actually go all the way back to the Greeks, where Aristotle wrote a lot about being able to separate the spirit from the material world, and being able to impart human qualities on objects so they can act in a humanistic way. And so robotics, you can trace all the way back to the Greeks. And in many ways, you know, robotics and artificial intelligence are very much intertwined. So yeah, I like to go all the way back to the Greeks, for sure.

 

Rob Stevenson  2:21  

That is a fascinating perspective, because the simplest definition I've kind of come across for AI is that we are replicating human cognition, human consciousness. And that's what Aristotle was on about when he's talking about personification of objects. It's like, can we take what it is to be human, what is this knowledge and this human spirit, and can we, like, project it onto other things? Isn't that what we're doing? We're just, instead of projecting it onto an animal or a table, as Aristotle would have said, projecting it into a server farm or into devices?

 

George Williams  2:52  

Sure, sure. I imagine, you know, if the Greeks had the level of technology that we have today, they might have been able to run with it. They might have created robots in ways that we couldn't even imagine today. They were advanced in a lot of their own ways, but they had a lot of stacks of technology on which they had to build, and unfortunately they didn't. It took another couple of thousand years, but here we are today, talking about imbuing machines with humanistic kinds of qualities like cognition. And, you know, it really seems like it's closer than it's ever been. It's a really exciting field to be in right now. I've been in technology for over 20 years now, and I'm more excited to be in technology now than ever.

 

Rob Stevenson  3:37  

If only Archimedes had had a big enough lever, am I right? Yes, exactly. Yeah, they're good. That's an ancient Greek joke for none of you out there. I think that was just for you and George. But in any case, SmileIdentity certainly has a big enough lever. Let's talk about what you're up to now, George. I'd love to hear a little bit about you, and then what you're up to now in your current project.

 

George Williams  3:58  

Sure. So a little bit about me. As I alluded to earlier, and as all the gray hairs are sort of proof of, I'm a Silicon Valley tech veteran; I've been in this industry over 20 years. And you know, I think it's interesting: during that span, I've kind of split my career up into two parts. The first half, I was a pure software engineer and manager working mostly in startups in R&D in the valley. The second half, I returned to my first passion in mathematics, and that's when I really started to embark on my career path in data science and machine learning. And these days, the incarnation of all that, the umbrella over data science and machine learning, is artificial intelligence. Currently I lead an AI team, and we work on computer vision based biometrics at a company called SmileIdentity, where we apply these techniques to biometric authentication. So our customers and our partners use our service so that instead of passwords, right, they use their face, they use their ID cards, in order to authenticate themselves online to various kinds of services. There are a lot of companies in this area; it's a very exciting area. Our focus right now is mainly on the continent of Africa, which is undergoing a major digital transformation, much like the dot-com days, if any of you remember those times in the early 2000s in the US. That's happening right now in Africa. And they're sort of leapfrogging: they've learned from our mistakes over the last 20 years, so they're skipping passwords and going right into advanced biometrics. And so we've launched our service. We've been in Africa for about five years, and we're in 52 out of 54 countries in Africa. And we're slowly migrating north, and we're going to be in the Middle East as well. So that's SmileIdentity, and that's what I do at the company.

 

Rob Stevenson  5:53  

Given your salt-and-pepper-bearded veteran tech status, what made you decide to work within the biometrics vertical?

 

George Williams  6:01  

You know, it's interesting. Thinking about this earlier, before the interview, I was going back in time to what I did. You know, I talked about my career path; I had sort of two different sides, the systems side and the stochastic side, I like to tell people. And so midway through my career, I decided to take a break from the Silicon Valley life, working at companies, working in industry. I joined a research lab at New York University, at the Courant Institute of Mathematical Sciences, and the research group I joined was called the Human Movement Lab. And we explored computing and mathematical techniques at the intersection of video analytics and human motion capture. And so a natural evolution of this work included finding the best algorithms that can be leveraged for things like face detection, face verification, voice based speaker identification, and gait recognition. We worked on all sorts of crazy ideas, well, they were crazy at the time. And our lab was famous for training machine learning models that could automatically identify celebrities and politicians in low res YouTube videos, some 10 or 11 years ago. And so looking back on that, a lot of the work we did was seminal research in computer vision and deep learning based biometrics that is more commonplace and commoditized in the products and services that I help build today. So I really love applying mathematics and computing to all sorts of problems, and I think biometrics is a fun and challenging vertical to leverage these techniques. And I think it's super rewarding to see the fruits of those early research days and now to build useful, important biometric services, such as the ones my team is building at SmileIdentity. So that's sort of a brief sketch of my path into biometrics. It started out with some core research, and I've moved that into a mature stage of building these products from that research today.

 

Rob Stevenson  7:56  

So is the main use case now user authentication? Like, are we going to replace passwords? Or how would you say it's going to impact the average user?

 

George Williams  8:05  

I'm a bit biased here, since my company builds passwordless, biometric based authentication technology, but I'll just say that in my daily routine, I leverage biometrics as much as possible, and I prefer this over passwords when I can do so. So I unlock my phone with my fingerprint, I unlock my laptop with my face, I bypass airport security with my eyes, I use the Clear biometric service. So personally speaking, I'm all in with biometrics. But I'm not an outlier, I'm not alone; a lot of tech people and non tech savvy people have embraced biometrics in their daily life and their routine, and this continues to trend upward. So you know, I think the writing is on the wall here. I'm pretty bullish on this prediction, but I believe that in the next five years biometrics will surpass passwords as the most commonplace authentication mechanism for devices and for online services. So this could either be gradual over the next five years, or there could be an abrupt event which causes a mass shift to biometrics. For example, we've seen a lot of advances with quantum computing over the last few years, and one of the interesting results of this so-called quantum supremacy is that a functioning quantum computer will be able to break the standard encryption protocols that we use today and reverse engineer almost any password within a matter of seconds. So I think a stunning development like this could prompt a dramatic shift in the adoption of alternate authentication strategies like biometrics. So I think it's going to happen, and I think it'll happen fairly soon.
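The difference George is pointing at can be made concrete with a toy comparison (illustrative only: the function names, vectors, and the 0.8 threshold are invented for this sketch, and real systems compare high-dimensional face embeddings). Password authentication is an exact match on a hash, while biometric authentication is a fuzzy similarity test, which is part of why it is a fundamentally harder problem.

```python
import hashlib
import math

def check_password(stored_hash: str, attempt: str) -> bool:
    # Password auth is an exact match on a hash: one character off fails it.
    return hashlib.sha256(attempt.encode()).hexdigest() == stored_hash

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def check_biometric(enrolled, probe, threshold=0.8):
    # Biometric auth is a fuzzy match: two captures of the same face are
    # never identical, so we accept anything above a similarity threshold.
    return cosine_similarity(enrolled, probe) >= threshold

stored = hashlib.sha256(b"hunter2").hexdigest()
print(check_password(stored, "hunter2"))   # exact match required
print(check_password(stored, "hunter3"))
enrolled = [0.9, 0.1, 0.4]
probe = [0.88, 0.12, 0.41]   # slightly different capture, same person
print(check_biometric(enrolled, probe))
```

The threshold is the designer's knob: raising it lowers false accepts but raises false rejects, a trade-off that simply does not exist in exact-match password systems.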

 

Rob Stevenson  9:48  

When it comes to using biometrics, you gave the example of unlocking your phone with your fingerprint, or using your retina at Clear at the airport. I'd love for you to help me remove my tinfoil hat a little bit, because I am a huge TSA PreCheck fan. I try not to speak about it, because I don't want everyone to sign up and then ruin it. But I have not done the Clear thing, because I'm like, I don't want to give them my eyes. I don't trust the government. You know, I don't want to be too conspiracy-theory, but I'm, like, holding back a little bit. I feel the same with my fingerprints: Apple already has so much information about me. Do I also want to put my fingerprint on top of that? What do you think about that point of view, where it's like, I'm just unwilling to concede parts of my personal biometrics to big companies or agencies?

 

George Williams  10:33  

I tend to take kind of a cynical view on this, and I like to tell people, you know, all of your data is already out there, in some form or another. You know, I remember in the early days in tech, I used to joke with my colleagues about all this technology we're building, right? And those were the early days; this was even before cybersecurity was a thing. And we would joke with each other, like, yeah, you know, I hope one day everyone doesn't give up all of their data to these databases and services. And fast forward to today, and we readily give up our data. Sometimes we know we're giving it up, and in other cases we don't realize that it's being exchanged. So I think we've reached a point of no return with our data, and really the question is: who do you trust with your data, and what kind of policies and implementations are in place to protect people and their data and their privacy? So I think there's really no looking back. We live in the world we live in; it's not possible to really live off the grid anymore. And so as consumers, as people that use these services, we just all need to be very aware of the technology that underpins all of this. We have to be technologically savvy. We don't all need to be software developers or ML engineers, but we all have to understand the technology that's powering all this, and we all have to ask the right questions. And these days, we have to be involved in a lot of the laws and regulations that surround a lot of the technologies that we are building. So that's kind of my cynical answer to your question.
But data privacy, I think, is just one of the challenges around all the technologies we're building around authentication, whether it's passwords or biometrics. Privacy, ethics, transparency: these are important challenges that society in general, not just the technological community, is coming to grips with. I'd love to talk more about those challenges, if you'd like, as well.

 

Rob Stevenson  12:44  

Yeah, definitely. One would just be ongoing security, and you sort of mentioned a little bit about that when you gave the example of a quantum computer being able to crack even a sufficiently advanced text based password. Wouldn't it just be a matter of time until that quantum computer could also point itself at biometrics? Or is biometric authentication just an order of magnitude more complex?

 

George Williams  13:07  

Yeah, quantum computers, at least for the near term algorithms. If we look at some of the algorithms that we can enable once we can stabilize these qubits, you know, breaking standard encryption is already an algorithm; once we get a quantum computer, we know it will start to break encryption. Now, some of the other things, like completely secure zero trust communication and quantum machine learning, I think there's a lot of theory and research about how we approach these things, but no robust algorithm yet. But breaking standard RSA based asymmetric encryption, right, we know that we can do that with quantum computers. A lot of these other things are still up in the air: whether a quantum computer can solve them at all, or whether it can do it more efficiently and quicker than a cluster of supercomputers. So I think the differentiation is important when we're talking about authentication and breaking authentication. But biometrics is super interesting, because in addition to, I think, some of the systems problems, it does have some challenges with respect to bias, because we are leveraging machine learning, and transparency, because these days we are leveraging deep learning, and deep learning is well known for being black box in the sense that we aren't 100% sure what these super powerful algorithms are doing. And then there are a lot of the ethics questions around the use of biometrics and mass surveillance. So it's interesting: as we move towards biometric systems, we do get into another set of, I think, interesting challenges compared to existing password based and systems based kinds of authentication techniques.

 

Rob Stevenson  14:53  

I'd love it if you could outline the specific challenge with regard to bias, because it seems that if you were just trying to identify one individual, bias wouldn't necessarily be that much of a problem, as long as the system could say it was this person. It would only become problematic if this technology were being used to identify large swaths of people at the same time. Is that a naive way of looking at it? Or what is the scope of the problem there?

 

George Williams  15:18  

No, no, I think you hit the nail on the head. So let's talk about the bias thing first. For many, many years, the main challenge with biometrics was accuracy. So even before I started doing research in these biometric algorithms 10 years ago, face recognition was a thing for many years; I actually talk and blog about the history of face recognition. And the early systems were laughable. Face recognition, fingerprint identification, iris recognition: a lot of this research was done even 20 or 25 years ago, and the accuracy was super low. Fast forward to today: now we have commodity neural network based algorithms that have improved the accuracy dramatically, and in some cases exceed human level performance. I've seen this firsthand, by the way, at my company. So in general, accuracy of these machine learning algorithms is no longer the challenge, and I think from a mathematics standpoint, from an algorithmic standpoint, we're pretty good. But you've probably heard the saying garbage in, garbage out with respect to machine learning, and especially deep learning. And that's very true with biometrics, because for most biometrics algorithms, the underpinning is a neural network. So garbage in, garbage out. What I mean by that: for example, if your face recognition training data is only white faces from Western countries, your machine learning model, your biometric algorithm, will not be accurate if you deploy it in Africa or the Middle East. We tried. And that's a big problem, because a lot of the seminal work, like when I was doing it 10 years ago, right, the datasets we had were white men, white celebrities. And so a lot of the datasets that we worked with very early on, and in fact the ones that you will find publicly available if you wanted to train one of these models yourself, they still have this ethnic and racial bias.
And so this training bias, right, is really all about the bias that's in the dataset, in the data that was collected, and in who collected the data. And it's something that the biometrics ecosystem continues to deal with. I know firsthand; as I mentioned before, my company deployed a face verification solution in Africa, and we're pushing it to the Middle East, and we spent a large amount of our effort and resources curating the right datasets to train our algorithms, which I think are the most ethnically unbiased algorithms in the industry. So this is still a problem. So that's kind of a mathematical problem, but it now gets into a societal and ethical problem, because a lot of companies and a lot of government entities, right, they feel like this stuff is good enough to deploy, but you know, they don't necessarily ask the right questions about this ethnic bias. And the research community, I think, is closing the gap quickly, but we still encounter it. We talk to some of our partners, and they say, yeah, you know, we tried your algorithm on our data, and it's really good, thank you. But you know, we tried out a bunch of these other solutions, right, that purport to have high accuracy and unbiased data, and we find that the truth is different. So we're still dealing with a lot of these issues, especially internationally. In the US and in Europe, we're finding that we tend to be a bit more advanced in how we're thinking about this and the questions we're asking. And in fact, you know, finally, the US and European governments are starting to incorporate some of these issues and bake them into various laws. So there are various laws now in the US and in Europe that prevent capturing your biometric data unless you opt in, right? You have to explicitly opt in before someone collects your data. And that's a good thing.
But that does not exist everywhere, and the world is quickly moving to deploy biometrics for mass surveillance. So for example, on the one end, right, we have Western countries that are kind of dealing with this and coming up with legal frameworks. On the other hand, you know, it's the wild wild west with respect to deploying biometrics, and it's an ethical concern. For example, I'll bring up the mass surveillance projects in China, where they are actually profiling certain parts of their population, certain parts of their society, and automatically restricting services depending on ethnicity in the best case. In the worst case, they are detaining people based on their inferred race, and this is happening now in certain parts of China, where they leverage mass camera surveillance to locate Muslims and detain them for quote-unquote re-education. And so the technology is very exciting, right? And I think it's fun and challenging to apply this math. But at the other end, putting my ethics hat on, I'm very concerned that we don't have the right frameworks in place to be able to guarantee certain rights to citizens across the world with respect to this technology. It's getting better in many parts of the world, and I think it will get better over time. But the technology is imperfect in ways that make me uncomfortable, and this technology is being deployed at massive scale in many parts of the world, and that should be a concern for, I think, all of us.
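One way to make the "garbage in, garbage out" point concrete is a per-group accuracy audit. This is a minimal, hypothetical sketch (the group names and numbers are invented, not SmileIdentity's benchmarks): score the model separately on each demographic slice of an evaluation set, and treat a large gap between slices as a red flag for training data bias, even when the overall accuracy looks fine.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, prediction_correct)
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def accuracy_by_group(records):
    # Aggregate correctness per demographic slice rather than overall.
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        correct[group] += ok
    return {g: correct[g] / totals[g] for g in totals}

per_group = accuracy_by_group(results)
print(per_group)  # group_a: 0.75, group_b: 0.25
gap = max(per_group.values()) - min(per_group.values())
print(f"accuracy gap: {gap:.2f}")  # a large gap flags dataset bias
```

Here the overall accuracy is 50%, but the per-slice view reveals that almost all of the errors fall on one group, which is exactly the failure mode George describes for models trained only on Western faces.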

 

Rob Stevenson  20:45  

When you say that your algorithm is more unbiased, or is sufficiently unbiased, how would one go about measuring that?

 

George Williams  20:54  

Like I mentioned, we curate specifically for the countries that we go into. Even different parts of Africa are very different. And so before we go into a new country, we have a team of account managers on the ground that help us collect the right kinds of data, and we work with our partners, because in many cases we aren't the first biometric company that a partner has used, and they have a lot of critical feedback based on their previous experience with biometrics. And so we gather all that feedback, and for the cases where it's not a systems problem but a machine learning problem, we know that we need to curate specific kinds of data that will address that particular partner's concern in that country. And so we've done this many times; we have a way to measure our accuracy in very specific ways. We actually measure not only our pure face recognition, or biometric, accuracy, but we have different kinds of benchmarks that we leverage for different ethnic groups in different kinds of situations as well. A large area for us is different benchmarks around ethnicity. So that's one thing. But in machine learning, you know, I think most people know these days that once you train a model and deploy it, you're not done. In fact, that's really the start of a lifecycle of activity that you have to maintain and sustain in order to have really good AI for biometrics. So we're constantly monitoring our models, we're looking for where our models may be weak, and we have a pipeline in place to curate more data. We have human labelers, human reviewers, that also surface issues in our pipeline, right? If we need to update a model, we can do that very quickly and deploy updates. And so these days, with any kind of AI, you need to constantly monitor and update; you need to have an operations pipeline that is able to deal with rapidly changing conditions of how your model needs to operate accurately once you deploy such a model.
In fact, we were doing MLOps before MLOps was a word. But I think for biometrics, that's very important, that's very key, not just for ethnic diversity and accuracy, but also for fraud, because the fraud landscape changes very quickly. So that, in general, is our technique. And of course, right, we're heavily involved in research. So there's a lot of state of the art stuff happening; every few weeks there are computer vision and AI research conferences, and we engage with researchers. We're up to date on the latest papers, and we review those and capture a lot of the latest research that's important for us in order to meet our stringent benchmarks for biometrics. So at a high level, that's kind of our strategy: constantly monitor and update.
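The monitor-then-retrain loop George outlines can be sketched in a few lines. This is a hypothetical illustration (the slice names, metrics, and the 0.95 floor are invented); a production pipeline would trigger data curation and a training job rather than just returning slice names.

```python
# Hypothetical drift monitor: if rolling accuracy on any benchmark slice
# drops below a floor, flag that slice for data curation and retraining.
BENCHMARK_FLOOR = 0.95

def slices_to_retrain(live_metrics: dict) -> list:
    """live_metrics maps a benchmark slice name to its rolling accuracy."""
    return sorted(name for name, acc in live_metrics.items()
                  if acc < BENCHMARK_FLOOR)

metrics = {
    "region_west_africa": 0.97,
    "region_east_africa": 0.93,   # below floor: curate data, retrain
    "low_light_capture": 0.91,    # below floor: curate data, retrain
}
flagged = slices_to_retrain(metrics)
print(flagged)
```

The point of keeping the floor per slice, rather than one global accuracy target, is that a model can drift badly on one region or capture condition while the aggregate number still looks healthy.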

 

Rob Stevenson  23:48  

How is MLOps deployed within, like, fraud detection?

 

George Williams  23:52  

So you know, I think fraud detection is interesting. We talked about passwords versus biometrics earlier, so the first thing to emphasize is that biometric authentication fraud is really different than password based fraud. Password based systems are vulnerable because either systems allow weak or reused passwords, or they require such a complex password that people end up writing them down on post-its (they still do that) or constantly resetting them. So these are sort of system and policy vulnerabilities, and they're still commonplace even today; hackers and bad actors know how to take advantage of them. Multifactor authentication can resolve some of these issues, but it adds more friction to the login process, and we're actually starting to see a lot of successful hacks in the MFA space involving things like text message hacking. Fraud in biometrics is very different; it has a very different shape. For example, it's fairly easy to find images of people's faces online, such as via social media posts or dating apps, and we've seen cases where fraudsters locate their victims physically in the real world, take a picture of them, either without their knowledge or through some kind of social engineering, and then present that image to our face matcher. So what we've done is implement several countermeasures to combat this kind of fraud. In addition to the face, we require users to demonstrate so-called liveness at the point of capture. What does that mean? Our algorithm makes sure that the image was taken with the user's smartphone at the time and place that the authentication attempt was made; we have a lot of proprietary technology that we've implemented for this. We also detect certain kinds of video injections or, you know, hacking of the software in the SDK. And more importantly, we capture a short video instead of a single image.
So this extra dimension enables us to detect many kinds of fraud, including single stolen images and many kinds of synthetic and static forgeries. And we also train our models against sophisticated kinds of forgeries, like deepfakes, by training on such images. It's a form of adversarial machine learning, if you've heard of that. We train specifically on images created by the most recent kinds of generative AI and deepfake generating algorithms. All of these algorithms so far today leave telltale signs and artifacts, either in the pixel space or in the scene reconstruction, and they can be detected if you specifically train for them. Another countermeasure is that we have a team of human reviewers and human experts that are also looking at the stream of data, so we can spot new kinds of forgeries as they start to appear. And like I mentioned before, right, we have a pipeline where, all right, we have new labeled data, we need to retrain our models, so we incrementally retrain them and update them as soon as we need to. And having this ML pipeline already in place, where we detect that we might have a new kind of fraud, and human reviewers actually verify it and label it as a new kind of fraud or a repeat of an existing kind of fraud which we've already classified in the past, that's extremely valuable new labeled data. And then from that, we have another process that automatically kicks off a new training job, and once that model meets the benchmarks we talked about earlier, it gets updated to our servers, which are serving those models. And so, from a 360 degree lifecycle perspective, I think it's fairly standard what we're doing. Not all companies that claim to do AI and machine learning are capturing a lifecycle of MLOps, but I think a lot of companies will need to get there. I think for biometrics, we had to create this very early, because we saw the need.
And we saw the biometrics landscape, the fraud landscape, and our need to update based on new ethnicities and new geographic regions that we're entering, right? We had to deal with that early; we had to create our own pipeline. But these days, right, if you start an AI company, you can already get a platform that has a lot of these concepts, this pipeline, this lifecycle, baked into it. So yeah, I think it's pretty exciting what's happening in the MLOps world. But that's kind of what we do to ensure that, you know, we're serving the most robust models that we can to our customers.
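Why a short video helps against a single stolen image can be illustrated with one toy liveness signal (this is an assumption for illustration, not SmileIdentity's proprietary method, and the frame values and threshold are invented): a replayed still photo produces near-identical frames, while a live capture shows natural frame-to-frame variation.

```python
# Toy liveness check: treat each frame as a flat list of pixel intensities
# and measure how much consecutive frames differ on average.
def mean_frame_delta(frames):
    deltas = []
    for prev, cur in zip(frames, frames[1:]):
        deltas.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return sum(deltas) / len(deltas)

def looks_live(frames, min_delta=1.0):
    # A replayed still image yields (near-)zero frame-to-frame change;
    # a live subject blinks, breathes, and moves slightly.
    return mean_frame_delta(frames) >= min_delta

static_attack = [[120, 118, 119, 121]] * 5          # same frame replayed
live_capture = [[120, 118, 119, 121],
                [124, 115, 122, 119],
                [118, 121, 117, 124],
                [122, 116, 120, 118]]
print(looks_live(static_attack))
print(looks_live(live_capture))
```

A real system would combine many such signals (device attestation, challenge-response, texture analysis) because any single heuristic, including this one, is easy to defeat with a replayed video.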

 

Rob Stevenson  28:39  

Fraud prevention sounds like an endless game of whack-a-mole, right? Because of an old truism in tech: whenever some guardrails are put up, or some sort of restrictive monitoring tech is built, there's a workaround being formulated at the same time. Is that how you look at fraud detection specifically?

 

George Williams  28:58  

Yeah, and you know, I think the malware analogy is very apt, because as we've learned with malware, and I've been in cybersecurity as well as in biometrics, as soon as you come up with a countermeasure, it's pretty straightforward for the fraudster to make a few small tweaks in their code, which shifts the binary fingerprint in such a way that the countermeasure you've spent all this time building is now fooled. And so, fortunately, with biometrics right now, we do leverage a lot of what we've learned from that experience. But you're right, it's a game of whack-a-mole, and the whack-a-mole game in this case with biometrics is slightly different, because it is so machine learning based and not necessarily systems based. Now the focus is not on code paths; now the focus is on the weights in the neural network, on protecting the neural network, and on understanding the mathematical vulnerabilities that can exist in machine learning. And so this is a very new area of security. In fact, a lot of cybersecurity specialists don't know much about mathematical vulnerabilities in machine learning. I speak at a lot of cybersecurity conferences, and I've been telling people, this is coming, you better know your math, because the vulnerabilities are mathematical, not systems based. Those were early days, when only a handful of companies, even large companies, were deploying models at scale. But today, you know, with ChatGPT, which I think you brought up earlier, now CEOs and your average non tech person know about chatbots, and they're experimenting with OpenAI's latest models. And because of that, just in the past couple of weeks, it's crazy: there's this massive rush to get some of these models right into our phones.
Microsoft has just recently announced that they're going to take all of these OpenAI models and put them in, you know, Microsoft Excel and Microsoft Office. And so we are reaching a whole new level of mass adoption with these black box models, and we don't know exactly all the vulnerabilities. In fact, I think we know less about the vulnerabilities in machine learning and deep learning than we do about your average piece of malware. Malware has been around for 15 or 20 years, so we know a lot about the shape of that data, but much less is known about the vulnerabilities of models. So you're going to be hearing a lot more about this, I think, in the news: cyber attacks that leverage mathematical attacks, model attacks, training data attacks. And this is new for a lot of people in the cybersecurity industry. For us in biometrics, we've actually worked with a lot of the researchers in this area; through techniques like adversarial machine learning, right, we can mitigate a lot of these things. But I think for the security industry in general, this is very new, and everyone's going to have to get up to speed very quickly.
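A "mathematical vulnerability" of the kind George warns about can be shown on a toy model. This sketch mimics the idea behind the fast gradient sign method (FGSM) on a tiny two-weight linear classifier (the weights, input, and epsilon are invented for illustration; real attacks differentiate through a deep network): a small, structured nudge to the input sharply lowers the model's confidence, even though the input barely changes.

```python
import math

# Toy linear "genuine vs. fraud" scorer with fixed, known weights.
weights = [2.0, -1.0]

def predict(x):
    # Sigmoid of the weighted sum: probability of class "genuine".
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-z))

def fgsm(x, epsilon=0.5):
    # For a linear model and the "genuine" label, the sign of the loss
    # gradient w.r.t. each input feature is -sign(w), so stepping each
    # feature by epsilon in that direction pushes the score down.
    return [xi - epsilon * math.copysign(1.0, w)
            for xi, w in zip(x, weights)]

x = [1.0, 0.2]
adv = fgsm(x)
print(predict(x))    # confidently "genuine"
print(predict(adv))  # similar-looking input, much lower confidence
```

The unsettling property, and the reason this is a security issue rather than an accuracy issue, is that the perturbation is tiny and targeted: it exploits the model's weights directly, which is why George argues these defenses require mathematics rather than traditional systems hardening.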

 

Rob Stevenson  32:25  

It's exciting. It's terrifying. We certainly live in interesting times, George, and this has been a really interesting conversation. And so here as we creep up on optimal podcast length, I would just say thank you so much for taking the time. I've really loved chatting with you today.

 

George Williams  32:38  

This has been fantastic. And thanks to you. And thanks to your audience for being excited and interested in artificial intelligence. It's going to be a wild ride.

 

Rob Stevenson  32:49  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.