CyberLink Senior Vice President of Global Marketing and US General Manager Richard Carriere and Head of Sales Engineering Craig Campbell join to discuss the endless use cases for facial recognition technology, how CyberLink is improving the tech's accuracy & security, and the ethical considerations of deploying FRT at scale.
CyberLink's facial recognition technology routinely registers best-in-class accuracy. But how do developers deal with masks, glasses, headphones, or changes in faces over time? How can they prevent spoofing in order to protect identities? And where does computer vision & object detection stop and FRT truly begin?
CyberLink's Ultimate Guide to Facial Recognition
FaceMe Security SDK Demo
Get in touch with CyberLink: FaceMe_US@cyberlink.com
EPISODE 32
[INTRODUCTION]
[00:00:05] RS: Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens.
Today on How AI Happens, we're going to dive into facial recognition technology. Like many elements of AI, the use cases of facial recognition are truly endless. Today's guests rattle off a handful of examples, and I had a feeling they could have even kept going. But rather than just explain the market opportunity of FRT, I wanted to get into the nitty gritty about the technology. How do developers deal with masks, glasses, headphones or changes in faces over time? How can they prevent spoofing in order to protect identities? And where does computer vision and object detection stop and facial recognition technology truly begin? To learn more, I'm joined by two experts from CyberLink, a company whose facial recognition technology has routinely had best-in-class performance. They sat down with me to explain the technical challenges they face and the ethical considerations of deploying this technology at scale.
[INTERVIEW]
[00:01:30] RS: Richard Carriere, number one, welcome to the podcast. How are you today?
[00:01:34] RC: I'm very good. Thank you. Nice to be with you, Rob.
[00:01:34] RS: Yeah. So pleased to have you. Also joining us is their Head of Sales Engineering, Craig Campbell. Craig, welcome to you as well.
[00:01:42] CC: Thank you for having me today.
[00:01:43] RS: I don't think we've ever spoken about facial recognition technology on this podcast before. So it's uncharted territory for me, anyway. Well-charted in your cases. I guess before we get too deep in the weeds, maybe, Richard, would you mind explaining a little bit about CyberLink, the company, the use case of the technology, and the mission?
[00:02:01] RC: Yeah, absolutely. CyberLink is not new to the table. We're a publicly-traded Taiwanese company that was founded 26 years ago to do multimedia software at the time. So if you've used a Windows PC in your lifetime, you've used PowerDVD at some point to play videos on your PC, whether you remember it or not. Our founder, Professor Jau Huang, is a professor of computer science who was CEO at the beginning and has remained at the head of the technology while he focused on the technology side. In the last three years, and I'll explain why in a moment, he came back as CEO.
So it's one rare company where we are very, very much technology-focused, product-focused, and with a very strong leader for all of this time. The result is that we keep building on the technology that we developed. So our mission is really about bringing amazing experiences, making the lives of people a little bit better through the use of digital media and multimedia. So that's the start. I can explain to you a little bit of the progression. How we went into AI is actually an interesting one. But I'll pause a moment.
[00:03:06] RS: Yeah. I would love for you to go there. I was about to ask you to follow up. At what point do you think it made sense for CyberLink to begin exploring AI, and FRT specifically?
[00:03:16] RC: Absolutely. So I'd say about 10 years ago, we started using AI to develop features for editing software. It was basic at the time. But about five or six years ago, by then we were developing mobile phone apps, and mobile phones were becoming much more powerful than they used to be. In fact, in some cases, more powerful than PCs. So we were able to port these power-demanding applications. And we started playing with augmented reality, which was the thing then.
So one of our apps at the time was doing virtual makeup, called YouCam Makeup. Very quickly, we saw an opportunity there to create something unique by putting makeup on the face of someone. We're three guys here, and we're not necessarily familiar with all the intricacies. But depending on the skin complexion and the way you apply it, it's extremely complicated and precise. So our engineers developed a very precise 3D rendering of a face, a mask. And that sent that company into the stratosphere. In fact, it was spun off four years ago, I believe. It did an IPO on NASDAQ, a billion-dollar IPO, three weeks ago. So this thing is in the stratosphere. But meanwhile, we kept the core technology for facial recognition, and we ported it to applications that are more B2B. So we provide an SDK, a little bit like Dolby provides the lines of code for video and audio. We provide that to bring very precise facial recognition based on deep learning neural networks. So we create a database on our servers. Then we boil it down into different versions so we can port it at the edge on mobile devices, whether they're connected to the web or not, on PCs, on servers, wherever you want.
And because we keep pushing the envelope there in terms of improving the quality of our database and our models, we're constantly listed by NIST as one of the top engines to recognize people in terms of accuracy. And when you look at the different categories, some people will be higher than us in one category, but overall, our score is probably unbeatable. Add to that the fact that, from the get-go, going back to the Makeup app, we needed to run on mobile phones on both platforms, Android and iOS. We needed to run on Windows servers and Linux servers for web things. So we run on all these platforms, anywhere, on any platform. We have also worked very closely with the hardware manufacturers since day one, like all the chipset, camera, or computer manufacturers. So literally, every single one of our features is optimized to be the best in terms of performance, energy requirements, and precision for use cases. So we have a very unique product now that we sell under the name FaceMe, and it's available for a variety of use cases that we'll be happy to talk about today.
[00:06:11] RS: Thank you for that context, Richard. I'd love to get into the SDK a little bit. Maybe Craig, you can chime in here. Where is the line, I suppose, between developing core technology that you know works versus enabling a developer to access the SDK and do their own work?
[00:06:27] CC: Well, you have companies out there that provide a service where they provide facial recognition technology that you can link to, but you can't really develop with it, because it's running from the cloud. Whereas we provide an edge solution, which gives you, pardon the pun, an edge. By working from the edge solution, you have less security risk and less data flow going out. Because instead of sending that video or that photo, you're just sending a template, which is a lot smaller, a lot less risk. And then of course, you're a lot more flexible in what you can accomplish with this being from the edge, right? Meaning that if you have a solution that requires being disconnected from the network, you would have a problem with most technologies.
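The edge advantage Craig describes, sending a compact template instead of the raw image, can be sketched roughly. This is an illustrative toy, not CyberLink's actual SDK: the template dimension and the `extract_template` function are assumptions, but the size arithmetic shows why the approach cuts data flow.

```python
import random
import struct

# Hypothetical sketch: on an edge device, the SDK would convert a captured
# frame into a compact numeric "template" (an embedding), and only that
# template ever leaves the device -- never the raw image.

TEMPLATE_DIM = 128  # assumed embedding size; real SDKs vary


def extract_template(frame: bytes) -> bytes:
    """Stand-in for an on-device face-embedding model.

    A real model maps face pixels to a feature vector; here we just derive
    a deterministic pseudo-embedding so the size comparison holds.
    """
    rng = random.Random(frame)  # deterministic for a given frame
    vector = [rng.uniform(-1.0, 1.0) for _ in range(TEMPLATE_DIM)]
    return struct.pack(f"{TEMPLATE_DIM}f", *vector)


# A single 720p RGB frame is ~2.7 MB; the template is a few hundred bytes.
frame = bytes(1280 * 720 * 3)
template = extract_template(frame)

print(len(frame), len(template))  # the frame dwarfs the template
```

Shipping only a few hundred bytes per face, instead of megabytes of video, is what makes the disconnected-from-the-network scenario Craig mentions workable.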
[00:07:06] RS: I guess, from there, where are all of the use cases? Because once you have this SDK, I feel like it's kind of limitless, right? It's kind of up to the imagination of the user. So where do you all think about how this technology can, should, will be applied?
[00:07:19] CC: Well, there's a couple of ways that I can look at it. Basically, as a basic rule of thumb, you say, "If you can visually see something and identify a person from a photo or a video, our SDK can do that as well." So anywhere where you need second authentication, you need to be able to identify someone in a frictionless manner, you want to ease traffic, ease burden. So that can be anywhere where you see a large group of people coming into any location, whether it's entering your door, getting access to a plane, getting on a ship, trying to pick up your parcels, making sure that that package was delivered to the right person, making sure the person who picks up the package is the person who should receive it. Maybe someone's logging in on your laptop. Is that the right person? Someone's doing a test. Is the person doing the test remotely the same person who should be doing the test, versus someone sitting in his place? There are just so many possible usages that it's really up to you, your use case, and how you think you can implement it.
[00:08:19] RC: And if I may add, from a business perspective, you take all these use cases and you wonder what the benefits are for the customer, the end user, the business. And there are plenty. Like anything that has to do with security, surveillance, access control, you can replace very expensive solutions. I mean, the RGB camera that you add, connecting software into an existing IP camera, is pretty cheap. You can reduce the number of security people you need. Because every time there's somebody who's on a blacklist or unidentified, you send an instant message to the security people with exactly where they are, and you can track them and connect into the video management systems and so on. So that's the obvious one. For the employees and visitors of that business, if we continue that example, they don't have to carry access cards, or they can pre-register as a visitor and just walk into the place like they own it.
I'll give you an example. We have a customer in Asia, a factory that has 20,000 employees. It used to take about half an hour in the morning for employees at the beginning of a shift to get in, waiting in long queues, and another half hour after a long day to get out. They replaced the turnstiles and everything they had with lanes like the fast track on the freeway. And now people walk through. They don't even slow down. Once in a few hundred times, if they wear a hat too low or something, an exception message will pop up on the screen, so somebody will come and verify their identity. But that's it. They can also verify, like, currently it looks like there's a new variant popping up with COVID, so mask wearing, temperature, things like that. We do that. We recognize people with a very high level of accuracy even if they wear a mask. If you don't wear a mask, it can be up to 99.73% accuracy. It's precise. And if people wear a mask, it can be up to 98.9% accuracy, which means that it's roughly one time in a hundred that you would have to try again.
But there are also other benefits. If you think of retail, which needs to reinvent itself, there's a shortage of labor in parts of the retail experience. You don't have enough staff. So what do you do? Can you automate some parts there? Do you want the customers to interact and have a great experience? Say you're a loyalty member at your favorite clothing store or a coffee shop. Instead of struggling with your phone and signing in with a password, you just walk in, they recognize you. They talk to you like you're a good friend, and they offer you what you had before. It could have a lot of value. Or a luxury goods store. You're a customer of the store. We're in LA here, on Rodeo Drive. You show up in the store in Paris, there's a long queue, but the camera recognizes you and says, "Oh, Rob is here." So a private shopper calls your shopper in Beverly Hills and says, "Rob is here. Give me a few tips." And then he or she comes, greets you, walks you in, and you have the experience of your life.
So there's a variety of things. But from both an end user and a company perspective, there's a win-win there. And never mind the security aspect. Nothing protects data or access, whether it's virtual or physical, better than facial recognition. It's by far the best, with a one-in-a-million chance of a false recognition. It's better than anything else.
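The one-in-a-million figure Richard cites is a false-accept rate. A quick back-of-the-envelope sketch shows how that per-attempt probability compounds over many impostor attempts; the 1e-6 rate here is taken from his claim, not a measured value.

```python
# Illustrative only: treat "one in a million" as a per-attempt
# false-accept rate (FAR) and ask how it compounds over many
# independent impostor attempts.

FAR = 1e-6  # per-attempt false-accept rate, from the quoted claim


def p_any_false_accept(attempts: int, far: float = FAR) -> float:
    """Probability of at least one false accept across `attempts` tries."""
    return 1.0 - (1.0 - far) ** attempts


# A single attempt is vanishingly unlikely to slip through...
print(p_any_false_accept(1))
# ...but over a million attempts, the odds approach 1 - 1/e (~63%).
print(p_any_false_accept(1_000_000))
```

This is why deployments typically pair a low FAR with rate limiting or liveness checks: the per-attempt number alone doesn't bound an attacker who can try indefinitely.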
[00:11:35] CC: And all those are very precise uses. But you can be more vague. What about demographics? Who's in the building today? How many males? How many females? I don't care who they are. I just want to know general information. It can get very vague, too.
[00:11:48] RS: Yeah. When you rattled off all of the use cases, I suddenly became aware of all the times in my non-technologically-enabled walking around life where I rely on my human ability to recognize a face. And it's become an automatic thing. I don't think about it, right? But yeah, of course, the use cases are completely limitless.
And also, with the retail examples, the dystopian version of this that I've heard played out too many times is like, "Oh, companies are going to push to put a microchip inside of your body. And that's going to have your credit cards, and it's going to have your ID." And then you can just wave, the way you wave an Apple Watch to pay. You'll just wave your bare wrist and it'll use a microchip. Well, that would be made obsolete by sufficiently advanced facial recognition technology, right? Because it wouldn't need to be inside your body. It's just a camera. So that feels like a much more tenable version of public identification, I suppose, than a surgically implanted microchip.
You mentioned some really impressive numbers in terms of accuracy. I'm curious, I guess how you get to that level of accuracy? And related, what are some of the challenges that have been bestowed upon the development team? What are the things they worry about when they're trying to improve accuracy and make this tech more reliable?
[00:12:59] CC: From a technical side, the biggest problem is content: how you get your content and where you can get your content. So we get content by purchasing our photos and images online, from anyone that's 18 and above. We can get generic content on African-American, Caucasian, Asian, all these different ethnicities to train our engine on. The problem is that in some East Asian countries, it's a little harder to get some of those images, so it's harder to match those. Whereas America, Africa, China, Japan, those are pretty simple to get content on, so we're very accurate on those kinds. With some of the other East Asian ones, we might have a little bit of an issue just because of getting content.
It's not for identifying people; maybe being accurate on male versus female is more of an issue than saying, "Oh, that is Craig. Or that is not Craig," right? And of course, when we're training our database, we're training on vector points on the face, over 100 vector points. And each vector point we're going to come at from different angles, over 100 different angles. So that's a lot of data that we're crunching through, a bunch of numbers. So the more images we have of a person, the more data we have to identify them accurately. So it's really, for the most part, about what image we're using to match with for accuracy.
[00:14:21] RC: What we keep is a template. It's 600 bytes to six kilobytes of highly-encrypted code. There's not even a need to have an actual photo of somebody in the database. So for people who are scared, who say, "They will steal my face": from a template like that, there's no way that you can recreate a face. Again, your audience will understand that. So it's super safe as far as what facial recognition brings to the table. Then the rest is really data protection, database protection, which companies or providers need to do carefully. But that's where the challenge could be.
And the only other challenge, more from a user perspective, is the risk of spoofing. Somebody flashes a picture, or a phone with a picture, or a video of me. If there are no anti-spoofing measures in the solution, it will just unlock. In our case, again, that's not NIST, but iBeta, which is an independent third party, and there were also contests with IEEE. In both cases, we scored at the top. iBeta has two levels, and we had a perfect score in both of them. The second one, level two, was a lot of fun. It's basically masks, like in the movie Face/Off, and we identified 100% of the spoofing attempts. We do that also with the pictures and everything, and we have different ways to do it. So once you have that in place, somebody cannot pretend they are you. Your data is safe. There's no better way to protect access to anything that's behind it.
[00:15:52] RS: Can you share how that works? How the anti-spoofing measures work?
[00:15:55] CC: Yeah. So basically, with anti-spoofing, we have a few different options. There's 2D anti-spoofing, using a standard 2D camera. Basically, what we're doing is looking at your vector points and looking for changes, because a person has trouble just standing still. Even when I'm talking to you, my face is moving a little. So as it's doing the anti-spoofing, it's doing matches and seeing if there's a change in your wrinkles, vector points, and all that. If there's not enough data, then it can go to a second stage where we'll ask you maybe to turn your head to the right, turn your head to the left. And if it still has trouble, then we'll ask you to open your mouth, smile, these extra things to make sure that you're a live person.
There are easier ways to do this so you don't have to do those extra steps, and that is by using a 3D camera. With a 3D camera, you can just tell your nose is further out than your cheeks. You're obviously not a photo or a video. And then you have infrared. We can do that as well, matching infrared up with an RGB camera so we can confirm that this is a live person and not just a photo. And last, we have time of flight, which measures the time it takes light, a laser, to reach the object and come back, so we can tell all the different depth points and match them up with an RGB photo or video. So those are the main ways that we do anti-spoofing.
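The 2D anti-spoofing stage Craig describes, watching vector points for the small natural motion a printed photo can't produce, can be sketched in a simplified form. This is a toy illustration, not the FaceMe implementation: the landmark counts, noise levels, and motion threshold are all made-up assumptions.

```python
import random
import statistics

# Simplified sketch of the 2D liveness idea described above: a live face
# exhibits small natural movement between frames, while a printed photo
# held up to the camera produces nearly identical landmark positions.
# Real systems use many landmarks and learned models; this is a toy check.


def landmark_motion(frames):
    """Mean per-landmark displacement between consecutive frames.

    `frames` is a list of landmark lists, each landmark an (x, y) tuple.
    """
    displacements = []
    for prev, curr in zip(frames, frames[1:]):
        for (x0, y0), (x1, y1) in zip(prev, curr):
            displacements.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    return statistics.mean(displacements)


def looks_live(frames, min_motion=0.05):
    # Threshold is illustrative; too little motion suggests a static photo.
    return landmark_motion(frames) > min_motion


rng = random.Random(1)
base = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(10)]

# Live subject: landmarks jitter slightly from frame to frame.
live_frames = [
    [(x + rng.gauss(0, 0.5), y + rng.gauss(0, 0.5)) for x, y in base]
    for _ in range(5)
]
# Spoof: a photo gives essentially frozen landmarks.
photo_frames = [list(base) for _ in range(5)]

print(looks_live(live_frames))   # True
print(looks_live(photo_frames))  # False
```

The second-stage prompts Craig mentions (turn your head, smile) are the fallback when a passive check like this can't gather enough motion evidence on its own.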
[00:17:10] RS: Those latter examples would require more advanced cameras, right? So it feels like, in the short term, you would rely on just some more interaction from the user.
[00:17:20] CC: Correct. 2D is the cheapest and easiest way to go. But you have Apple, which has 3D cameras built into its devices. So a lot of devices have 3D cameras built in these days. And then with time of flight and all that, those are extra sensors on top. So yes, that would be some extra integration work for the developer.
[00:17:37] RC: Yeah. But it's not that much. And it's a very neat technology. Qualcomm developed it. We showcased it at their booth at NRF, the retail show, in January. The system was a POS system, a sales terminal, where both the customer and the cashier could log in using their face. And this thing works very well. And it's a few dollars. So if you build a few-hundred-dollar terminal or kiosk or something, it's really irrelevant compared to the benefits it brings.
[00:18:06] CC: Yeah. And it all depends on the use case, right? If you have low lighting or anything like that, you don't want to rely on a regular camera. Time of flight would be more of a fit there. Whereas if you have great lighting, then you don't need time of flight. So each use case will require different solutions.
[00:18:19] RS: The prevalence of more advanced cameras in smartphones, iPhones and Android devices, is only going to increase, it seems like. Every few models, improving the camera seems to be a priority, which makes sense, because capturing content is such a huge use case for devices.
Speaking of capturing content, I wanted to go back a little bit to how you were explaining, Craig, how you kind of find this data. I spoke with an individual who's developing autonomous vehicle software. And he explained to me one process by which you can take fog, for example, from a rain forest in Central America and use that data to train your vehicle and how it would drive on a road with fog in London. Is there that same sort of transposition of data with facial recognition technology? Can you extrapolate from one set of data into others even if it's not contextually 100% accurate with the faces upon which you're trying to detect?
[00:19:18] CC: Well, we do that with objects, right? Because right now, not only do we detect a face and match a face, but we're also detecting a mask, if you have a mask on. So we can tell there's a mask object, not just that your face is covered or your nose is covered. We're detecting the actual object. So we can tell if it's a mask, or your hand, or a book, or a piece of paper that you're putting in front of your face. We can also tell if there are glasses on you, so if maybe we have trouble reading your eyes because your glasses are dark, we can tell you to remove your glasses. And last, we can tell if your head is covered, meaning with a hoodie. And we have customers who come to us and ask, "Can you detect a hardhat?" Yes. If you've got a specific model and color of hardhat, it's easy. If you have just a baseball cap, it gets a bit more difficult, because there are different colors, different shapes, so then we have to have multiple different ones. So it is very easy to detect specific objects. If you want generic objects, like just any cap or any hat being worn, it takes a lot more data, a lot more training. So the more specific you are, the easier the training is.
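One common way to combine the two detectors Craig describes, attaching a detected object like a mask to the face it covers, is to measure bounding-box overlap. This is a generic computer-vision sketch, not the FaceMe implementation; the boxes and the 0.3 overlap threshold are made up for illustration.

```python
# Sketch of combining face detection with object detection: decide whether
# a detected "mask" box actually belongs to a detected face by checking how
# much the two boxes overlap (intersection over union, IoU).


def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def assign_attributes(face_box, object_boxes, min_iou=0.3):
    """Attach detected objects (mask, glasses, hat) to a face by overlap."""
    return [label for label, box in object_boxes if iou(face_box, box) >= min_iou]


face = (100, 100, 200, 220)                      # hypothetical face box
detections = [("mask", (110, 160, 195, 225)),    # overlaps lower half of face
              ("hardhat", (400, 50, 480, 120))]  # belongs to someone else
print(assign_attributes(face, detections))  # ['mask']
```

This kind of assignment step is also why the crowded-scene case Craig raises is hard: with several faces in frame, an object box may overlap more than one face, and the system has to pick the best match rather than the first.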
[00:20:23] RS: I'm glad you brought up the objects part of it, with masks or hard hats, for example, because it strikes me that at a certain point, it's just computer vision writ large, right? Like, even if you were to deploy your technology on the conversation we're having right now, there are three different kinds of headphones, right? Craig, yours has a microphone on it. Richard, yours doesn't. I have a microphone and a microphone mounting bracket in front of me. You have glasses on, Craig. These are all non-face things, right? But there they are.
So what is – I guess, like, are you just doing computer vision? Is it worth it to distinguish what is the difference between computer vision and then being specific enough to say, “This is facial recognition technology?” At what point does one become the other?
[00:21:07] CC: So yeah, there is some separation in there, right? Because first you have to detect the face and identify the face, and be able to separate it from the object. So the two work together in sync so that you can then separate them and identify them better. If I can't separate your face from the glasses or the mask, then I'm going to have trouble detecting just the mask and the glasses. Same thing with a hard hat. Because you might have other people behind you, and I see other faces, there are going to be complications there, right? So you do have to be able to map them each separately and identify them each separately, which means for glasses, a basic frame set is what we're looking for. But they can be different thicknesses, different styles. Hats are the same thing. A hard hat is easy, but then you've got baseball caps, cowboy hats, all these other different hats. They don't look the same, so the training will be different. So as long as you know the use cases and the specific models, it's easier to train those items. But yes, objects are different than facial recognition, and the technology is slightly different in detecting them.
[00:22:09] RS: Is that a controversial point of view in your field to say that a face is an object?
[00:22:14] CC: Yes and no. It depends on the use case. In some cases, it would be incorrect to say that. In other cases, it is exactly what people want to hear, because all they want is demographics. It's an object. I want information about that object. I don't care who that object might be or what that object is. I just want some information. Is that object smiling? Is it sad? Am I able to see a full face or only a partial face? Any of that information, I want to get, right?
[00:22:42] RS: Right. Right. I was being a little cheeky there. But I think it’s interesting to differentiate the two. Especially in the retail case, you would be using both at once, right? Your facial recognition technology for the purposes of security, for the purposes of identifying someone so that they can pay more seamlessly. That's going on at the same time as identifying what thing they pick up so that you can charge the person appropriately. So is that an output? Is that a product that CyberLink puts out? Or is that not your focus right now?
[00:23:16] RC: In the case of retail, it's interesting, because we have yet to find a retailer who would recognize individuals without having them opt in to be part of it, because they would freak out their customers. If Craig has never signed up and he comes back and they say, "Oh, Craig, how are you? Last week, you were looking at that shirt. I left it here on the side for you just in case," and you don't know who this person is, me, I would just run out of the store. Although if it's my favorite sneaker shop or whatever, and I've opted in to anything and everything possible, then it's a different story.
So in that case, if anything, the company or the retailer who implements this to improve the experience overall is, in many cases, probably interested in letting their customers know. So yes, there are cameras here and they are using facial recognition, but it's purely anonymous. We don't keep any pictures of you. We just compile statistics, whatever. Then it's much less scary.
There's a good example in retail. Not to specifically narrow down on some brands, but Walmart, for years, has been using some facial recognition technology to keep blacklisted people, shoplifters, known criminals, out of their stores. They've been very vocal about that. Because if you're a customer at Walmart, you are very happy to know that you don't have a mass murderer in aisle number three when you are in aisle number two.
Other chains that I won't name kind of sneaked in these things, and it was not well-communicated, and they got caught by the press. Not-so-good press about what they did. So there is a very important and valuable business aspect to deploying this, in any circumstances, so that your customers, your employees, your visitors, people, understand why you're doing this and why it's beneficial to them. If you don't do that, then you might have some messy surprises.
[00:25:14] CC: We do have one customer who's using it for targeted marketing, right? Where, as soon as someone walks by a display sign, it detects their gender and age and then puts up marketing material for that gender and age.
[00:25:27] RS: Yeah, I could see how that would be spooky to someone unfamiliar with the technology. But it's been going on in other capacities, right? Like Instagram, for example, didn't need to see my face, although it has, to bucket me as a particular kind of buyer persona and give me recommendations based on my activity and the assumptions it makes about me. That's not new.
So I did want to get into some of the ethical considerations a little bit. You've shared, both of you, a little bit about the anonymization of data that's taking place. How you're not getting content on children. I guess, can you maybe just explain some of the considerations? Because this, obviously, is top of mind. People can get spooked out by this. You want people to opt-in. What are some of the ways you think about making sure that this technology is developed and deployed in an ethical, meaningful way?
[00:26:17] RC: I'm glad you're bringing this up. This is one of the most important things to us. I'm not saying it distinguishes us from everybody else, but certainly from other players. It starts with just the willingness, from our mission, to be there to create things for the end user. So great experiences, value added for the end user. We're not going to do things that go against that.
So when we started with FaceMe, our CEO and I discussed how we would do that. So we wrote on our website a statement of human rights. Anyone who goes to cyberlink.com, on every single page at the bottom, there's a link. They can read it, where basically, we state that we're never going to do anything that discriminates, that could be negative to any human being. And if we do, we will correct it as quickly as possible. We have it in our hearts.
Craig was saying, and I brought it up a little bit, that when our models learn, we use content. We don't scrub the web for all the social media pictures that, technically, in their own context, are totally okay for an unknown person or a company to look at. But when you bring them together, there are ethical dilemmas that some are dealing with. We don't have to deal with that. We prefer to invest a little bit more there and be super clean.
And then the other thing that I would say is there are many use cases for facial recognition. We gave you, in two, three minutes, a bunch of cases. They're all very positive for everybody. We could continue another 15 minutes like that. I can tell you how elderly people with dementia are brought back to their homes in Taiwan when they don't have their ID, because the police can identify them with this. I can go on and on. It's different from nailing a potential suspect or things like that.
So basically, our CEO, Jau, if he was on the call, he would say, "Rob, I'm not in the business of killing people. This is something we stay away from." So I think that sums it up in the end. And again, facial recognition, people say, "Oh, they will steal my data, my identity." And it's like, "No, no, no. Think of it the other way around." It's the best possible protection for anything you have, like a one-in-a-million chance of cracking through it. Iris and thumbprint, on their best days, it's one in 10,000. Based on my own little lock that I have at home, it's probably one in five fingerprints. That thing is terrible. Passwords, or the birthday of your firstborn, or things like that, we know what people do with passwords or PINs. And for authentication, you can get rid of these "identify how many buses there are in the picture" things, or letters that are impossible to read, all these things that we've been living with more and more. So there are benefits on top of benefits. And that would be my view, the most compelling reason to not be afraid. Which, by the way, consumers are not that afraid. When they find a convenient use case, they do use it more. I think companies are shy. They don't want to be in the press, and the press is definitely seeking stories. I mean, there are companies that do bad things. But generally speaking, we need to talk about the good use cases. I think there are still some doomsday sayers out there; they've existed since we discovered fire way, way back, saying it would destroy the world. But facial recognition, by and large, is an amazing technology. It's mature. All the problems of biases and things that were true three, four years ago are being solved by affordable technology now. So whether it's us or any of the players you find in the rankings in the top 10, you'll find that what they do are very, very high-quality products.
And we encourage companies to do testing and proofs of concept and experience it for themselves. There's a need, and we serve quite an important need in that context.
[00:30:20] RS: That's really helpful, Richard. Thank you for explaining that to me. And I want to end here with two really selfish questions that perhaps I'm more interested in than the listenership. One is, would sufficiently ubiquitous facial recognition technology remove the need for me to interact with the captchas that say, "Click all the boats"? You mentioned that at the end, Richard. I have a tinfoil-hat theory that that's less about figuring out that I'm a human being and more about training computer vision, training autonomous vehicles, right? So which is it? They're not trying to figure out it's me. They're trying to collect data. Yeah?
[00:30:55] CC: I would say facial recognition will definitely alleviate that problem. We can't even guess what they're trying to do with that information.
[00:31:02] RS: Right, right. Yeah. I mean, whatever it's being used for, if their contention is that it's there to determine you're human, well, that would be removed, right? That excuse goes away with some advanced FRT. And then lastly, my other selfish question. I've heard this a few times and have been unable to verify it: the claim that there are more people in the world than there are possible variations of the human face. Is that true?
[00:31:28] CC: I have no idea. That's actually a good one. I've heard that, but I've never heard anyone be able to prove it.
[00:31:34] RS: Right? If anyone could, it would be you two. So maybe for the next episode, because the follow-up question is, "Could I then use CyberLink to find my twin who's in Denmark or something?" And like, "Hey, we found someone that looks just like you." But anyway, we'll circle back to that.
[00:31:50] CC: If you can collect all the images in the world, I think our SDK can find your twin.
[00:31:54] RS: I love it. We'll have to have that as the follow-up episode, as Rob meets his clone. He'd be like, "Who are you? Why are you talking to me on my podcast?" But anyway, Richard and Craig, this has been fascinating. Thank you both for your time. Thank you for sharing with me. I've loved learning from you both today.
[00:32:07] CC: Thank you. It’s a pleasure.
[00:32:07] RC: Thank you for having us.
[OUTRO]
[00:32:13] RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.
[END]