How AI Happens

IBM Master Inventor & AI Advisor to the UN Neil Sahota

Episode Summary

Today’s guest is not afraid of a challenge. In fact, throughout his life, he has always sought to solve big issues through disruptive and innovative technologies. Neil Sahota is an AI Advisor to the UN, co-founder of the UN’s AI for Good initiative, IBM Master Inventor, and author of Own the AI Revolution.

Episode Notes

Neil Sahota is an AI Advisor to the UN, co-founder of the UN’s AI for Good initiative, IBM Master Inventor, and author of Own the AI Revolution. In today’s episode, Neil shares some of the valuable lessons he learned during his first experience working in the AI world, which involved training the Watson computer system. We then dive into a number of different topics, ranging from Neil’s thoughts on synthetic data and the language-learning capacity of AI versus a human child, to an overview of the AI for Good initiative and what Neil believes a “cyborg future” could entail!

Tweetables:

“We, as human beings, have to make really rapid judgment calls, especially in sports, but there’s still thousands of data points in play and the best of us can only see seven to 12 in real time.” — @neil_sahota [0:01:21]

“Synthetic data can be a good bridge if we’re in a very closed ecosystem.” — @neil_sahota [0:11:47]

“For an AI system, if it gets exposed to about 100 million words, it becomes proficient and fluent in a language. If you think about a human child, it only needs about 30 million words. So, it’s not the volume that matters; there are certain words or phrases that trigger the cognitive learning for language. The problem is that we just don’t understand what that is.” — @neil_sahota [0:14:22]

“Things that are more hard science, or things that have the least amount of variability, are the best things for AI systems.” — @neil_sahota [0:16:26]

“Local problems have global solutions.” — @neil_sahota [0:20:06]

Links Mentioned in Today’s Episode:

Neil Sahota

Neil Sahota on LinkedIn

Own the A.I. Revolution

AI for Good

Episode Transcription

EPISODE 41

[INTRODUCTION]

"NS: There's still a very small amount of variation, which is ironically why at the United Nations, when we talk about autonomous vehicles, we don't talk about when we legalize them, so much as we talk about when do we ban human drivers?" 

[00:00:15] RS: Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I’m your host, Rob Stevenson, and we are about to learn How AI Happens. 

[INTERVIEW]

[00:00:44] RS: Here with me today on How AI Happens is an AI advisor to the United Nations and Co-Founder of the UN's AI for Good initiative, an IBM master inventor, author of the bestseller Own the AI Revolution, Neil Sahota. Neil, welcome to the show. How are you today? 

[00:01:00] NS: Yeah, I’m doing awesome. How are you doing, Rob?

[00:01:02] RS: Doing really well. Thanks for asking. No one ever asks. I’m just podcasting my heart out and feeling lucky to be chatting with you, honestly. You have such an interesting, rich background. And my audience may know you from a few places, not the least of which is Mike Tyson's podcast, Hotboxin' with Mike Tyson, which I listened to in preparation for this. 

And I got to say, Mike Tyson asked a really insightful question. It was something to the effect of how would an AI know if a punch's intent was good or bad? And I was like, "Does Mike Tyson know how good of a question that is?" Can you just kind of repeat your answer to that question? Because it was so interesting to me that it was like, "You know what? There's like implications for sentiment analysis, and just like the potential for technology to be misused," all of that wrapped up in that question, right? 

[00:01:51] NS: I mean, there is, right? We as human beings have to make really rapid judgment calls, especially in sports. But there's still thousands of data points in play, and the best of us can only see seven to 12 in real time. I mean, you can look at the body language, the glint in the eye, even the motion of the punch, the little subtle facial expressions. There's over 2,000 points in our face that kind of reveal intent. All these things snap together for AI easily in real time to say, "Huh, was that well-meaning, or an accident, or malicious?" That's really the power.
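To make that concrete, here is a minimal sketch of how facial data points could feed an intent classifier. It assumes an upstream landmark detector has already turned each video frame into a flat feature vector; the 468-landmark count, the features, and the labels are synthetic stand-ins, not anything from Watson or a specific product.

```python
# A minimal sketch: classifying intent from facial-landmark features.
# Assumes an upstream detector (e.g., a face-mesh model) has already
# flattened each frame's landmark coordinates into a feature vector;
# the data and labels here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_LANDMARKS = 468                              # typical face-mesh output
X = rng.normal(size=(200, N_LANDMARKS * 2))    # (x, y) per landmark, 200 frames
y = rng.integers(0, 3, size=200)               # 0=well-meaning, 1=accident, 2=malicious

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)

# Class probabilities for a new frame's landmark vector
probs = clf.predict_proba(X[:1])[0]
print(dict(zip(["well-meaning", "accident", "malicious"], probs.round(3))))
```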

[00:02:35] RS: Yeah, it really just is. It's another example of, and it sounds very basic to say, how advanced human cognition is. The example you often hear is how easy it is for you or I to say, "Oh, that's a dog and that's a cat." But then when you start trying to teach an AI to do it, it's like, "Okay, well, a dog has a tail. Yeah, so does a cat. A dog has pointy ears. Yeah, so does a cat. A dog has a snout. So does a lion." A lion is a cat, right? It just starts to fall apart the closer you look at it. That's the challenge. That's the podcast we have here today. 

[00:03:07] NS: Puppies or muffins, right? Was that the website? 

[00:03:10] RS: Oh, yeah. Exactly. It was like pugs or blueberry muffins. Yeah, yeah. That's a classic. 

Neil, I guess, can we start a little bit just with your background and kind of how you wound up in your current role, or I guess slate of roles more accurately? 

[00:03:24] NS: Yeah. I mean, to put it simply, Rob, I’ve just always been the guy that took the path of most resistance. Very big on learning, problem solving and trying to do something different, or, as we call it, innovative and disruptive, I guess. 

And so, I was always the guy that took the hard classes, was trying to solve the hard problems. And I came up in the consulting world working with several global Fortune 500 companies. And I just remember – Man, I know I’m dating myself here. But 16, 17 years ago, when business intelligence started taking off, and I’m working with all these big guys like Warren Buffett, Michael Eisner. They're like, "Man, it's amazing what computers are telling us." I’m like, "They're not really telling us anything. We've got these sweet tools that we can now collect and store tons of data with. People can slice and dice it, make nice looking reports. But machines really aren't telling us anything. But could they?" That took me down this path of – Well, I was calling it enterprise intelligence at the time. But now we actually know it as artificial intelligence. And that got me a call from IBM R&D one day about some of my work. And the next thing I know, they're asking if I wanted to join a secret project back then called Watson. That was my real foray into AI. 

[00:04:46] RS: What do you think was the tipping point when people were saying, "Oh, it's so fascinating what computers can tell us." And you're like, "Is it, though? Not yet." Right? Was it the Internet? Was it an increase in processing power? Where do you think computers sort of advanced past glorified calculators? 

[00:05:02] NS: It was when we started collecting data, right? That's the fuel for AI. People have been trying to do AI probably really since the 50s. And everyone's like, "Why are we suddenly in this wave now?" And I think it's that we've never really had the data before. We now literally have the power to collect anything and everything, and some people actually do that. 

[00:05:23] RS: Right. Where did the data come from when you were building Watson?

[00:05:26] NS: Because we were focused on the Jeopardy challenge, and Jeopardy actually makes a database of questions and things like that available to practice with, we actually had a starting point there. We had the old shows. We could actually take transcripts and things like that. We were actually able to build a really robust database to train Watson. 

Ironically, the hard part was not so much all the possible topics, all these types of things. It was more around the strategy of the game, right? Because you're not just playing to answer the questions. You're playing against the other opponents. Where were the Daily Doubles? And then there's the natural language processing that's involved. Because you think about the exception questions, like, "Which of the following is not?" Or they're phrasing things awkwardly, because we as humans don't talk proper English. Or the whole concept that here's a game show where they give you the answer, and you have to figure out the question. It was really the strategy and the NLP that was the big challenge.
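As one small illustration of the NLP wrinkle Neil mentions, a clue parser has to notice negation before it scores candidate answers. This is not how Watson actually did it; it is just a sketch using spaCy's dependency labels (and it assumes the en_core_web_sm model has been downloaded).

```python
# Illustrative sketch: flagging "exception" clues (e.g., "which of the
# following is NOT ...") before answer scoring. Uses spaCy's dependency
# labels; requires `python -m spacy download en_core_web_sm` first.
import spacy

nlp = spacy.load("en_core_web_sm")

def is_exception_clue(clue: str) -> bool:
    # A clue is an "exception" question if it contains a negation token.
    doc = nlp(clue)
    return any(tok.dep_ == "neg" or tok.lower_ == "not" for tok in doc)

print(is_exception_clue("Which of these is not a Great Lake?"))    # True
print(is_exception_clue("This lake is the largest Great Lake."))   # False
```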

[00:06:34] RS: And some of the categories are also word play, right? The clue is wordplay, and the answer is a pun, right? That happens sometimes. That's rather intuitive to someone, to a human being. 

I like that you brought up how it's less about getting the answer right. Because you can assume that people on Jeopardy, particularly the two players Watson was playing against, two of the best of all time, know the answer to every question, right? And so, it's more about how you bet. Do you find the Daily Doubles? And do you buzz in first? Was that the advantage Watson had? Was it that Watson would always be able to buzz in faster than a human player? 

[00:07:06] NS: A lot of people wonder if that was the case. And there was a bit of a time delay. When Alex Trebek was reading the question, Watson wasn't set up with audio, by design there. So when Alex finished the question, the text got sent to Watson, to try and balance out the buzzing timing. 

[00:07:28] RS: Oh, interesting. There was no – Right. Because it's actually a visual cue, I gather. There's a light that goes on or off at the end of the question, and that's when you can buzz in, I think. And at the time, you didn't have computer vision on top of the podium. 

[00:07:44] NS: No, no. Watson wasn't in that little box you saw on TV either. 

[00:07:47] RS: Yeah, that was a prop. Right. That was a prop. It's like, "Here's HAL 9000 playing Jeopardy." Yeah. Also, an interesting thing about that Jeopardy game was that Watson got it right a lot. But what was funny was when Watson got it wrong, it was like way off, right? When a human player gets it wrong, it's like, "Okay, I could kind of see how you might say that." What do you think was responsible for that kind of deviation?

[00:08:10] NS: Honestly, part of it was we didn't anticipate some of these things. And we realized that we probably could have done the training a little bit more robustly. There was one question about a US airport that was named after a famous, I think, World War II veteran. And Watson buzzed in with Toronto Pearson Airport. And we're like, "Oh my God." It didn't factor the geographic information in. 

And there was another time when I think it was Brad buzzed in, answered incorrectly. And then Watson buzzed in and gave the same exact answer. And we're like, "Oh, yeah. We probably should have factored that in as well." 

[00:08:55] RS: That is funny, because that's like a very standard software shipping problem. It's like, "Oh, once people started using this, we realized that's obvious. We should just not repeat the same answer." Or, "Oh, yeah, Toronto is not going to be the answer for the American World War II vet question." But, yeah, I don't know. Some of these things are impossible. Or not impossible, just unlikely to predict. You have to see something in the real world, see something deployed in production, before you understand the problem. That's not unique to Watson, I don't think.

[00:09:23] NS: No. But I think that's the challenge we have with training AI: we're looking to achieve a specific outcome, and we're trying to think of some exception paths. But there's just such a weird set of paths and funky things that happen in the real world that we just didn't account for, because we think, "Well, we didn't think of it. Or it just happened so infrequently, it's not worth bothering about." 

But that whole thing with Watson repeating the wrong answer, it's like, "Oh my God. That was such a head-slap move on our part. I can't believe we didn't think of that."

[00:09:55] RS: Yeah, it's a little bit of a face palm. I’m imagining you watching it like, "Oh, come on, Neil." 

It's related to an interesting, I think, evergreen question about AI deployment, which is: when do you know you're ready to move from training into production? And I think one good answer is it depends on the stakes of your technology, right? Like, the stakes of being wrong in an autonomous vehicle are a lot higher than the stakes of being wrong in a content recommendation engine on Netflix or something. That's one example. Where do you kind of stand on that? How do you know when you're close enough, good enough, to move into shipping something? 

[00:10:33] NS: It's exactly like you talked about, Rob. It has to do with confidence level. We go through the training. And with Watson and other AI stuff I’ve worked on, we're always kind of gauging not just the level of accuracy, but the confidence the AI system has in its answers or its recommendations. 

Netflix, if it's 80% good, 80% confident, that's probably all right. The worst thing that's going to happen is it's going to make a poor recommendation. But it can learn from that. But with the stuff we were doing in healthcare, we were trying to get to a 95%, 98% confidence level, just because we're talking about a human life. A mistake could mean someone dies. 

And a lot of people always said like, "Why don't you try to get to 100%?" That's not possible. They're like, "Well why not 99.9%?" It's like you start reaching a point of diminishing returns, or just reach a point where there's just not enough data available to do that level of training. 

Some of this is just going to be the machine having to learn through experience. And it doesn't mean that Watson or some other AI system has the end-all, be-all say. That's why we still have the human doctors. They're really the decision makers. We're just trying to give as good of a tool set as possible for them, and for nurses and other practitioners, to use.
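To make that trade-off concrete, here is a minimal sketch of stakes-dependent confidence gating: the system acts on its own only when its confidence clears a per-domain threshold, and otherwise escalates to a human decision maker. The domain names and threshold values are illustrative assumptions, apart from the 80% and 95-98% figures Neil mentions.

```python
# A minimal sketch of stakes-dependent confidence gating: the model acts
# autonomously only when its confidence clears a per-domain threshold;
# otherwise the case is routed to a human. Thresholds are illustrative.
THRESHOLDS = {
    "content_recommendation": 0.80,   # low stakes: a bad movie suggestion
    "healthcare_diagnosis": 0.95,     # high stakes: a human life
}

def route(domain: str, prediction: str, confidence: float) -> str:
    threshold = THRESHOLDS[domain]
    if confidence >= threshold:
        return f"auto: {prediction} ({confidence:.0%} >= {threshold:.0%})"
    return f"escalate to human: {prediction} ({confidence:.0%} < {threshold:.0%})"

print(route("content_recommendation", "recommend: sci-fi", 0.82))
print(route("healthcare_diagnosis", "flag: possible sepsis", 0.91))
```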

[00:11:49] RS: It's interesting that to get to 99%, you mentioned, there's not enough data available. But then the idea is that, "Okay, it needs to get real-world experience and then learn from that." But that's the data, too. That's the missing data, right? 

How much can synthetic data bridge that gap between the data available versus what you might need to go from 95 to 99?

[00:12:13] NS: It depends on what we're doing. Synthetic data can be a good bridge if we're in a very closed ecosystem, or looking at some things that we know or understand fairly precisely. 

So, like, synthetic data has been a great boon in training AI to detect financial fraud or money laundering, because the amount of variability and other factors that go into those transactions is very limited. And to be honest, we all hope the banks don't have a whole lot of real data on that front, because that would mean a lot of bad things. 

But you look at other areas of life where things don't mesh so cleanly, where there's a lot of subtle or hidden factors that we may not even be aware of that are actually coming into play. For example, bees. There's a lot of concern about the bee population dying off. And they make more than honey. They do a lot of interaction in the environment. And some people are saying, "Well, hey, we can create drone bees. We can train them to help spread the pollen and a couple other things. And we can manufacture honey artificially. That's not an issue." 

But we're all sitting here going, "Okay, yeah, we could do that. But bees probably do like a hundred things, and we're only aware of like 12 of them." Even if we want to try and create synthetic data about bee behavior, there's so much we actually don't understand about bees and their influence. That's going to be tough. And I think that's the challenge that we have. Synthetic data is going to work really well in some cases. But in other cases, where we just don't have a more complete picture, we're going to miss things.
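For the closed-ecosystem case Neil contrasts with the bees, here is a minimal sketch of what synthetic transaction data can look like: simulate well-understood normal traffic, then inject a known fraud pattern to augment scarce real examples. All of the distributions and rates below are illustrative assumptions.

```python
# A minimal sketch of synthetic data in a "closed ecosystem": transaction
# amounts and times are well understood, so we can simulate normal traffic
# and inject a known fraud pattern to augment scarce real fraud examples.
# All distributions here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def synth_transactions(n: int, fraud_rate: float = 0.02) -> np.ndarray:
    amounts = rng.lognormal(mean=3.5, sigma=1.0, size=n)   # normal purchases
    hours = rng.integers(0, 24, size=n)                    # hour of day
    fraud = rng.random(n) < fraud_rate
    # Injected pattern: fraudulent transactions skew large and late-night.
    amounts[fraud] *= rng.uniform(5, 20, size=fraud.sum())
    hours[fraud] = rng.integers(0, 5, size=fraud.sum())
    return np.column_stack([amounts, hours, fraud])

data = synth_transactions(10_000)
print(f"{int(data[:, 2].sum())} synthetic fraud cases out of {len(data)}")
```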

[00:13:57] RS: Yeah, it's a case of you don't know what you don't know. And I think, particularly with natural phenomena, it's good to be humble, and remember your place in an extremely delicate, advanced ecosystem that, on the time scale of humanity, we're just beginning to learn about.

[00:14:13] NS: It's true. And I hate to say this. At the end of the day, we as people still train the AI systems. And so, they're dependent on us, thankfully, to do that. But we can only teach them things that we're able to commoditize, things that we hopefully fully understand. 

And I think we all know about the challenges with implicit bias filtering through. But there are other things that we don't get. There's this whole movement now towards medium data, that we don't really need big data. We need medium data. But medium data is dependent on understanding what the right data is. 

And you think about learning language. We know for an AI system, if it gets exposed to about a hundred million words, it becomes proficient and fluent in the language. You think about a human child, it only needs about 30 million words. It's not the volume that matters; there are certain words or phrases that trigger the cognitive learning for language. The problem is we just don't understand what that is. 

And when we were all first learning our languages, we didn't have the vocabulary to even explain how we were learning. And so, we're in this kind of weird place: what is really the right data now? What are the right triggers for that machine learning to take effect? 

[00:15:33] RS: Yeah, you can even go one step further with language. If you learn the most common 2,500 words in a language, that's enough, typically. You won't be fluent. You can't go read that country's poet laureate, probably. But you can get around. You can be respectful in a language. Yeah, that's interesting. To shift from big to medium data, the tendency would probably be like, "Oh, it's more efficient. It's less processing power." But you still have to downsize. You still have to know enough to be able to know what's not relevant, right? 

[00:16:01] NS: 100%. And that's our challenge: we think we understand how some things work, or how we learn, and there are just cases where we really don't. It works because it works. That's kind of how it is. I hate to put it that way, but, like, language. That's the challenge that we face. 

It's like Elon Musk and Neuralink. Can we, you know, put a chip in our brains? Can AI decode our brain waves and download them? We don't really understand how the brain works that well, right? We're so far off from being able to do something like that. I’m not saying that these are not challenges that we can overcome. But these are things that we have to understand when we try and do AI. We can build systems that work really well with things that we understand very well. That's the big constraint that we have. 

[00:16:51] RS: What are the domains we know well and what are the ones we don't? Where are we with all this? 

[00:16:54] NS: I think things that are more hard science, or things that have the least amount of variability (that's probably the best way to put it), are the best things for AI systems. You think about autonomous vehicles: traffic laws are pretty set in stone. We as humans don't quite follow all the rules. But there's guidelines, traffic lights, the signs, all these things. There's obviously the variability with bicyclists or crazy human drivers. But it's more of a known quantity, right? There's still a very small amount of variation, which is ironically why at the United Nations, when we talk about autonomous vehicles, we don't talk about when we legalize them so much as we talk about when do we ban human drivers. Because that would actually strip a lot of variability out of the system. 

Those are great places for AI, where there's very limited variability. The flip side, obviously, is where there's a lot of variability, or things with what I'll call kind of the human factor, things that consciously or subconsciously we're not aware of. I’ll pick on the UN again, because they're very big on AI robot justices, or judges. We actually have enough data to do that. The problem is, as we were researching this and thinking about all the bias and how we balance that out or strip it out, we learned something incredible: the biggest influence on how a judge makes decisions is how hungry they are. Think about that, Rob. The hungrier they get, the harsher they get. How do you account for that when you're developing an AI system? 

[00:18:43] RS: You give your AI a peanut butter sandwich before it makes a decision. It's easy. Next.

[00:18:50] NS: We thought, could we timestamp some of this? Can we figure out a hunger factor? But it's like there's no way to know if, on a given day, the judge had breakfast. Or maybe they skipped lunch and they were even worse in the afternoon. It's too much variability.
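The probe Neil describes would look something like the sketch below: use ruling timestamps as a crude proxy for time since a meal and check how outcomes vary by hour, as the well-known parole-board study did. The DataFrame here is a synthetic stand-in for a real rulings dataset.

```python
# Sketch of the probe Neil describes: using ruling timestamps as a crude
# proxy for "time since a meal" and checking how decisions vary by hour.
# The DataFrame here is a synthetic stand-in for a real rulings dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
rulings = pd.DataFrame({
    "hour": rng.integers(8, 17, size=500),   # hearing hour, 8am to 4pm
    "favorable": rng.random(500) < 0.5,      # placeholder outcome labels
})

# Rate of favorable rulings by hour; the parole study famously saw this
# rate sag just before meal breaks.
print(rulings.groupby("hour")["favorable"].mean().round(2))
```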

[00:19:05] RS: Yeah, variability is an interesting way to anchor it. I’m curious to hear you speak a little bit more about the UN's AI for Good initiative. We've spoken a decent amount about some of the ethical considerations in AI here on this show. The greatest hits will be familiar to folks, right? The potential for bias. Whether we should automate as many jobs as possible, etc. Are these the things that the AI for Good initiative is focused on? Or what is the thrust of their work? 

[00:19:33] NS: Some of those are. The AI for Good initiative revolves around the 17 Sustainable Development Goals, the SDGs. For people who may not be familiar, I won't rattle all 17 off. But it's things like zero hunger, access to justice, good health, smart cities, all these things. And these are goals the member nations have agreed to and said, "We really want to try to accomplish these by 2030." 

It's just that while the good intentions are there, the commitment to resources is a bit lacking. And what we've seen is that technology, particularly AI, is a good way to bridge a lot of that gap. And so, we've literally completed hundreds of projects in the last – Was it four and a half years? And we have 117 active projects going on right now. But it's all across these 17 development goals, and it's on a global level. 

And one of the things I’ve been a big believer of, and we've seen this materialize, is that local problems have global solutions. While we're worried about job automation in the United States, they're worried about something very similar in parts of Asia, Africa, Europe, and about what skills you should be teaching for the future of work. And food production, food security is an issue in a lot of places. People that really feel the pain tend to be the best innovators. 

And so, if we can arm them with the right tools, equipment, money, help, these kinds of things, we can actually solve some of the big problems in the world. And that's really the goal here, is can we tap AI to be part of this solution driving? 

[00:21:11] RS: When you think about arming these populations, does that mean mobile SDKs, right? Like edge computing? How do you bring really advanced technology to remote areas of the world? 

[00:21:23] NS: It's all about partnerships to a degree. A lot of the big companies have joined as partners in the AI for Good ecosystem. And they're making their APIs, their computing power, all these cloud services, all these things available for people to use. You have lots of people volunteering to be mentors, doing educational programs. We actually have an initiative in the United Nations called Global Connectivity, which is number two for the Secretary-General after climate change, where we're working with the member nations to actually build out the infrastructure to even support some of these things, like access to high-speed Internet, access to mobile devices. 

It's a big, big challenge, obviously, to get all the remote places, and even some of the not-so-remote places, kind of on par. But it's something that we absolutely have to do. There's just no way around it. Because we've already seen that the child that gets a tablet at two years old, or has access to a tablet at two years old, their cognitive skills and the way they think about how they can apply technology to solve problems far outstrip those of the kid that gets the tablet when they're eight.

[00:22:32] RS: Interesting. Can you give an example? What does that look like? I’d love to know some of the disruption coming from the two-year-olds these days.

[00:22:38] NS: It's funny, because we were visiting a very good friend of mine, and they have a two-year-old son. And he says, "Hey, Uncle Neil. I hurt my finger." "Really? Which finger?" And he holds it up, and he's like, "My iPad finger." Yeah. It's not your index finger anymore. It's your iPad finger. I’m like, "Wow! That's a new perspective on life." 

[00:23:04] RS: Yeah. Yeah, it's easy to go dystopian with that. But the truth is, like, that two-year-old is beholden to the technological development process that resulted in that iPad, as another two-year-old is to a process that made a book, right? That's all out of reach. Even to myself, right? I wouldn't know how to make and print and bind a book, let alone an iPad.

[00:23:28] NS: No. But this is the challenge that we're facing, in that the future of work is around hybrid intelligence. It's basically the ability to augment human capabilities with machine capabilities, and understanding how to do that. Well, that means understanding what the tool set and the capabilities of emerging technology are. And we've just seen that children getting exposed to that at a younger age, as young as possible essentially, unleashes that potential, right? They're more adept at realizing how they can augment things that they do with technology. 

Well, that's why the kids that don't have access to this stuff earlier are going to suffer later in life when it comes to where the future of work is taking us. I mean, it's going to be more about creativity, problem solving, totally new applications of technology. And they're just going to be behind the curve. And that's the great fear that a lot of us have, is that we may be leaving whole swaths of people behind.

[00:24:32] RS: Just by not arming them with this technology at a young age.

[00:24:34] NS: Yeah. I mean, think about how fundamental these things are. I liken it to starting to teach a child how to read when they're four versus when they're 10. 

[00:24:43] RS: Yeah. And when you bring up hybrid intelligence, how integrated are we talking here, right? I think of the human-in-the-loop viewpoint, the idea that AI is going to help you be better at your job, not take your job. We already are surrounded by tools. How much more integrated are you talking here? 

[00:25:01] NS: Honestly, I don't believe in the Terminator future, Rob. I believe in the cyborg future. That we, as humans, are going to try and evolve ourselves by adapting technology into ourselves. It's going to be human-machine integration. I don't know. Some people might be freaked out by that. But we're already kind of headed in that direction, where we're already using AI. The brain can still send signals to a stump, like if you're missing a limb. We're not trying to decode your brain waves. But that triggers a process in your body. And so, using AI and IoT, we can actually capture intended muscle motion, with the AI able to decode what that stuff means and, like, move a robotic hand. 
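A minimal sketch of the pipeline Neil describes: windowed muscle-sensor (EMG) readings are featurized and classified into intended motions that could drive a prosthetic hand. The signals, window sizes, and motion labels below are synthetic stand-ins, not a real device's protocol.

```python
# A minimal sketch of the pipeline Neil describes: windowed muscle-sensor
# (EMG) readings classified into intended motions that drive a prosthetic.
# Signals and labels are synthetic stand-ins; a real system would stream
# readings from IoT sensors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

def featurize(window: np.ndarray) -> np.ndarray:
    # Classic EMG features: mean absolute value, RMS, zero crossings.
    mav = np.mean(np.abs(window))
    rms = np.sqrt(np.mean(window ** 2))
    zc = np.sum(np.diff(np.sign(window)) != 0)
    return np.array([mav, rms, zc])

windows = rng.normal(size=(300, 200))     # 300 windows of 200 samples each
X = np.array([featurize(w) for w in windows])
y = rng.integers(0, 3, size=300)          # 0=rest, 1=grip, 2=release

clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print(clf.predict(X[:5]))                 # intended motions for first windows
```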

They've already got the experimental surgery, which has been done, I think, a dozen times now, where they can actually implant digital cameras into a blind person's eyes and transmit the signal to the brain. It's still black and white and a bit grainy, but we know that technology will improve. 

I can envision a future where human beings will be like, "Well, we all should have that, because maybe we want to see infrared. Maybe we want to see images in the X-ray world, or whatever." I know that's more far-flung and more for the future. But in the short term, we want AI to take some of the repetitious, tedious, admin type of work off our plates so we can actually focus on the more value-add, more complex work out there. That's the way we really advance ourselves and advance our different fields. This is really where AI can complement us. 

And let's be honest, Rob. I know I’m going out there with a Black Mirror reference. But if you ever saw the episode "White Christmas," they take your brain engrams and create that cookie, which is your AI assistant. It knows you as well as you know yourself, because it kind of is you, and it can anticipate your needs and stuff. That's like the holy grail. I think most of us would be like, "I totally want that. I’ll never forget my anniversary again." 

[00:27:09] RS: Yeah, it's cloning yourself as your personal assistant is what it is, right? Yeah. And it's well-pointed out that this hybrid nature is already here in a lot of ways. It's just not as like sci-fi as people think when they hear that, right? If you have a hearing aid, if you have a hip replacement, the animatronic limb, and that's probably not the right term for it, are all examples of this. We expect it to improve. We expect it to become more integrated. We expect it to be more advanced. That's just technology. And in another way, it's evolution, right? Evolution as a life form that grew through us. It feels inevitable at this point.

[00:27:44] NS: I agree. And I think that's the thing, that we all wish we could crunch numbers better, and process all these data points better, and anticipate better. And AI is a way to actually help us be able to do that. That's the key thing. This isn't about human versus machine. This is about humans leveraging machines. And we have to just always keep that in mind. 

[00:28:07] RS: Yep, yep. Got it. Well, Neil, we are creeping up on optimal podcast length here. I don't want to let you go just yet, though. I would love to hear from you what you're really excited about in this space. Because you have a bunch of different irons in the fire. I feel like you have your finger on the pulse in a lot of ways, in various different use cases and applications of this tech. What is something that really inspires and excites you when you think of a potential application for this tech or a deployment of it? Maybe a company in the space doing really interesting work? What inspires you and lights you up when you think of where this tech is and where it's going? 

[00:28:41] NS: It's a great question, Rob, because there's so much stuff going on that, if I had to pick one, it's really about how we're using AI now to augment human creativity. You've got organizations like ACSI Labs, where they're combining neuroscience, AI and the metaverse to actually tap into this whole digital twin thing, but give us a safe space to unleash our creativity, to try the riskier ideas and see how they pan out. And with the AI able to generate random events and ever-increasing levels of complexity and challenge, it really forces us to up our game in problem solving. 

And what I’ve seen is that it really is making us not just better thinkers, but it's unleashing more innovative solutions. This started with digital farms, like agriculture, and it's morphed into business problem solving. But we're actually working on something right now where we think sustainable mining can be a reality. And it's all thanks to that combination, especially AI, helping us up our game when it comes to creativity. That's what's got me jazzed right now.
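The mechanic Neil credits here, AI generating random events at ever-increasing complexity, can be sketched as a simple curriculum loop: sample disruptions, evaluate the participant's plan, and raise the difficulty after success. The event pool and difficulty rule below are illustrative assumptions, not ACSI Labs' actual system.

```python
# A minimal sketch of the idea Neil describes: an environment that injects
# random events and ratchets up complexity as the participant improves.
# The event pool and difficulty rule are illustrative assumptions.
import random

EVENTS = ["supply delay", "equipment failure", "price spike", "staff shortage"]

def next_scenario(difficulty: int) -> list[str]:
    # More difficulty means more simultaneous disruptions to reason about.
    return random.sample(EVENTS, k=min(difficulty, len(EVENTS)))

difficulty = 1
for round_num in range(1, 4):
    scenario = next_scenario(difficulty)
    print(f"Round {round_num}: handle {scenario}")
    solved = True          # stand-in for evaluating the participant's plan
    if solved:
        difficulty += 1    # curriculum: raise complexity after success
```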

[00:29:58] RS: I love it. The future is awesome. And it's coming faster than anyone anticipated. Neil, this has been a blast chatting with you. Thank you so much for being here with me and sharing all of your experience and all of your hot takes on this space. I really love learning from you today.

[00:30:10] NS: Thanks, Rob. Thanks for having me on, man. I had a blast.

[OUTRO]

[00:30:36] RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, ecommerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.

[END]