How AI Happens

Watsonx.ai with IBM VP Data & AI Tarun Chopra

Episode Summary

Tarun is the VP of Product Management, Data and AI at IBM, and he is here today to tell us how IBM is keeping up with (and driving) technological trends. We learn of the common struggles in AI and ML, why you need to understand the problem you’re trying to solve before using any new technology, how IBM and Watsonx are helping businesses to tell their own unique AI stories, and why a consultative method is the most effective way to introduce new users to AI.

Episode Notes

Tarun dives into the game-changing components of Watsonx before delivering some noteworthy advice for those who are eager to forge a career in AI and machine learning.

Key Points From This Episode:

Tweetables:

“One of the first things I tell clients is, ‘If you don’t know what problems we are solving, then we’re on the wrong path.’” — @tc20640n [05:14]

“A lot of our customers have adopted AI — but if the workflow is, let’s say 10 steps, they have applied AI to only one or two steps. They don’t get to realize the full value of that innovation.” — @tc20640n [05:24]

“Every client that I talk to, they’re all looking to build their own unique story; their own unique point of view with their own unique data and their own unique customer pain points. So, I look at Watsonx as a vehicle to help customers build their own unique AI story.” — @tc20640n [14:16]

“The most important thing you need is curiosity. [And] be strong-hearted, because this [industry] is not for the weak-hearted.” — @tc20640n [27:41]

Links Mentioned in Today’s Episode:

Tarun Chopra

Tarun Chopra on LinkedIn

Tarun Chopra on Twitter

Tarun Chopra on IBM

IBM

IBM Watson

How AI Happens

Sama

Episode Transcription

Tarun Chopra  0:00  

I look at watsonx now as a vehicle to help customers build their own unique AI story. They can use our framework and tools to build and play with generative AI models and machine learning models, and they can use our data repository to store, you know, the ample amounts of data that is needed to tell their own story.

 

Rob Stevenson  0:19  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. My guest today cut his teeth in software engineering and has since forged a career leading teams of firmware engineers and performance engineers, as well as holding leadership roles in product management for IBM Z and IBM Systems. Currently, he is the VP of Product Management, Data and AI at IBM: Tarun Chopra. Tarun, welcome to the podcast. How are you today?

 

Tarun Chopra  1:09  

Thank you, Rob, for having me. You know, I've been looking forward to this one for a while. I'm glad to be here.

 

Rob Stevenson  1:14  

Yeah, me as well. Did I do justice to your curriculum vitae? Is there anything you'd add in about your background that maybe I skipped over?

 

Tarun Chopra  1:20  

Oh, you did. I think the only thing I will add is 20 years of IBM experience, as you said, a lot of software engineering and line management jobs. Over the last 12 months, I've been leading data and AI products for IBM. And if you haven't heard, generative AI over the last three, four months has taken over my life, and it's been an exciting time shaping IBM product strategy to bring these capabilities to the marketplace.

 

Rob Stevenson  1:44  

Yeah, of course. So in your role, you are, of course, tasked with understanding the ways lots of different companies might need AI services and would use IBM to help bring their AI products to market. And so I'm curious, in this really fast-moving space, what is your intellectual diet like? What are you consuming to make sure that you stay up to pace and stay abreast of the space?

 

Tarun Chopra  2:09  

The first thing, where I'm lucky in my role, Rob, is I get to talk to a lot of customers around the globe. IBM being an international company, I get the chance and privilege to talk to a lot of customers, a lot of partners, a lot of technology companies, and to visit a lot of our customers day in, day out. To me, that becomes my first source of information, because that's where I really get to know what customers are experiencing, what problems they're trying to solve, how they're trying to work with the technology, how they're trying to adopt these technologies. Technology is one angle, but to put these technologies into production mode in large enterprises is much more than technology. People, processes, compliance, audit, regulations, everything gets in there, jumbled together, and then you productize this thing. So that's my first source of information. The second thing I will say is looking at the innovation happening in the marketplace and reading a lot of literature. Generative AI is a good example: if I were being honest with you and said I knew six months ago this was going to be such a big thing, or that I knew IBM was working on it... we were paying some attention, and Research was doing a lot of work, but I had to relearn this space, I had to get myself educated. So my advice to the audience will also be: there's so much information out there, so many valuable insights out there, keep up to date. That's my second information source, where I just go to the marketplace and search what's happening, what the thought leaders are doing, and how we can bring some of these capabilities into our solutions. The third thing, Rob, for me is also listening very closely and intently to my sales team out there on the ground. They are the ones pushing a lot of these technologies, and I get a lot of my useful information from what they are hearing as well. So I will say: customers first, and I'm lucky there because I do get access to a rich set of customers globally. Second, the education never stops; especially in the fields we are working in, you have to keep looking at all the avenues out there. And third, just keep a pulse with your sales and marketing and client engineering teams to see what's really taking place in the marketplace.

 

Rob Stevenson  4:25  

Because of all the various customers you get to interface with, you kind of have access to the front lines in a way. So I suppose the answer to this question ranges widely, but what are the common challenges people are facing? When you're speaking with folks, what do you tend to hear?

 

Tarun Chopra  4:40  

As you say, it ranges widely, right? I see a spectrum, Rob, out in the marketplace. Some customers are ahead in this AI adoption journey, some customers are somewhere in the middle, and some customers are just starting. The key things that I see, Rob: one is skills as an issue, right? Finding the right skills to do the AI journey, that's an issue across the globe in the marketplace. The second thing for customers is, quite frankly, knowing where to start. A lot of people get enamored by technology, right? But the biggest question in the AI space, the data space, data science and machine learning, whichever words you want to use, is: what problems are you trying to solve? So finding a good problem to solve, and then applying the technology, is another important aspect that I see in customers' and clients' journeys. One of the first things I tell clients is, hey, if you don't know what problems we are solving, then we're on the wrong path; we've got to figure that out first, debate it first: what is going on? The third thing I will say is, a lot of our customers have adopted AI, but if the workflow is, let's say, 10 steps, they have applied AI to only one or two steps. They don't get to realize the full value of that innovation, because there are eight more steps to be done in that workflow automation. So understanding the problem and bringing the full workflow under the microscope of the innovation becomes very important for the C-level audience to see the returns on their investment. Having that debate and discussion and education becomes very important. To give you an example, at IBM we really looked at our HR flow and automated the entire flow through AI and generative AI capabilities, to simplify how we engage our employees on HR matters. That was an end-to-end flow automation, and we saw quite a few benefits. Some of the traditional mechanisms would just take two pieces of the HR flow, try to do something, and then go look at the next thing, but the problem with that is you don't get to see the whole return. So: skills, where to start, and looking at end-to-end automation and workflow. Those, I will say, are the two or three key important things I usually engage clients on when we first start our conversations about how to go about adopting AI in the enterprise. And I'm not even mentioning yet doing it at scale, the governance, the compliance regulations that come in at some point in time, but that is also a big, big topic in the marketplace.

 

Rob Stevenson  7:07  

Yeah, it sounds like first and foremost you are pushing people to really hone in on the specifics of their problem, right? Which is just good partnership at the top of things. But that kind of leads, I think, into a little bit of IBM's larger AI strategy for helping companies. And this is a nebulous, perhaps big question, but IBM is a big company, so let me start broad and we can get specific as we go on and maybe earn the insight. With as fast as the AI space is changing, what do you view as IBM's role in the AI ecosystem, and how is IBM working to help accelerate change?

 

Tarun Chopra  7:44  

Look, we've been on this journey, Rob, with the Watson platform for years, right? Some will say it was a successful thing, some will say it was not too successful. I look at it as a great learning experience for us. Over the last 10 years, we've been working a lot with our clients, implementing these capabilities at enterprises at scale. So if you ask me what we really learned in that process: as I said, it's not just about technology, right? There's a lot that goes into big enterprise environments to adopt AI at scale. There are unique skills customers need help with, customers need to know how to implement these capabilities, and there are the compliance regulations, the auditing, all the rules and the laws that are coming into AI. Because of that learning, we formed our own AI Ethics Board, to make sure that we are adopting these practices keeping fairness, biases, everything in mind. We were the ones who said we're not going to do any more face recognition with AI, because we didn't think that was the right thing to go do. All these lessons came from the close working together we've been doing with our clients over the last 10-plus years. So when this next wave comes in, with generative AI and foundation models, we are very uniquely positioned to help our clients get the benefit, because we learned those lessons. That's where the recent news comes in, our watsonx platform, where the x really stands for scale. Customers can do one or two prototypes, but when they're really looking at hundreds or thousands of these experiments in production, how to bring them into one fold is a big challenge, and that's where we introduced the watsonx platform, which has three sorts of components. One is watsonx.ai, the tool for the builders, both machine learning and generative. The second is watsonx.data, the open lakehouse repository to store all this data that is coming in. And finally, the more important piece is the governance piece: who has created the model, where is the information coming from, and when the model is running, what does that mean for bias, robustness, accuracy, all that kind of stuff? And then finally, taking all these hundreds of models and feeding them into your compliance and audit process automatically, not one by one. So if you are an AI audit person coming into a shop and saying, show me what you're doing, and show me that what you were supposed to be doing is what you're doing, it has to be an automated process; it can't be a visual inspection process, because AI becomes a bigger black box. We took all these learnings and capabilities and put them into this one platform. And the role of IBM, on top of it: we have 1,000-plus consultants and our client engineering people working with clients implementing these things, because, as I said, skills is an issue for our clients. So that's our role in the broader ecosystem, and then, more importantly, working with the broader ecosystem, working with AWS, working with Microsoft, working with other partners of ours like SAP, implementing these capabilities. That is what we think is our role to play in the marketplace.
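
To make that "automated, not one by one" audit idea concrete, here is a minimal sketch of the kind of per-model record an automated compliance feed might emit. The field names and thresholds are hypothetical illustrations, not the watsonx.governance API:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelAuditRecord:
    """One entry in an automated compliance feed (illustrative fields only)."""
    model_id: str
    owner: str                  # who created the model
    training_data_source: str   # where the information came from
    bias_score: float           # e.g., a disparate-impact-style fairness metric
    accuracy: float             # latest evaluation accuracy in production

def audit_feed(models: list[ModelAuditRecord],
               min_accuracy: float = 0.85,
               max_bias: float = 0.2) -> str:
    """Flag every model that breaches a policy threshold automatically,
    instead of relying on one-by-one visual inspection."""
    findings = []
    for m in models:
        issues = []
        if m.accuracy < min_accuracy:
            issues.append("accuracy below threshold")
        if m.bias_score > max_bias:
            issues.append("bias score above threshold")
        findings.append({**asdict(m), "issues": issues})
    return json.dumps(findings, indent=2)

# Example: two registered models; the second would be flagged for bias.
print(audit_feed([
    ModelAuditRecord("churn-v3", "data-science", "crm_warehouse", 0.05, 0.91),
    ModelAuditRecord("loan-v1", "risk-team", "loans_2021", 0.31, 0.88),
]))
```

An auditor asking "show me that what you were supposed to be doing is what you're doing" can then consume one machine-readable feed covering hundreds of models at once.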

 

Rob Stevenson  10:48  

I really want to spend more time speaking about where Watson is now, but first, you mentioned facial recognition technology quickly. That is news to me, that IBM is not going to play in that space. Is that specific to facial recognition, or is that IBM taking a stand on biometrics writ large?

 

Tarun Chopra  11:05  

We did that, I think, a few years ago. We just recognized that there are a lot of ifs and elses with facial recognition capabilities and technologies, and we decided that this is not the right forum for us to play in. The broader biometrics is a different story, but we made a very strong pass at the facial recognition piece of it, and we didn't think the technology was there, or was accounting for all the ethics that we think are important. And we said we're not going to go do it. As I said, IBM has a very strong discipline with AI. We have our own AI Ethics Board, we have our own Chief Privacy Officer, and any technology that we bring to the marketplace, Rob, goes through the Ethics Board. We go through our framework to ask: is this technology the right thing for the broader consumer and the enterprise business? Does it meet IBM standards and the federal standards that we are also helping shape in the marketplace? And even when we are embedding these technologies within IBM technologies, they go through the same vetting process, especially in the AI domain.

 

Rob Stevenson  12:08  

It's interesting that you kind of separate facial recognition from other biometrics, but as I'm thinking about it, it does make sense. For example, another biometric would be using my fingerprint to sign into my MacBook. That was opt-in, right? That was fully my choice; I consented to that. But with the facial recognition stuff, the average American is caught on camera like 200 times a day, and I didn't consent to that. And so the idea there is, oh, facial recognition technology could be fed that kind of data, CCTV footage, without my knowing it. That feels like a very different thing to a fingerprint, for example. Is that kind of how IBM breaks it out?

 

Tarun Chopra  12:44  

Yeah, yeah, similar to that. I will agree with that. But there are also just some of the technology's nuances, right? Again, are we making sure that the technology is not purposefully biasing against any gender, or any ethnicity, or any other constraints? We just have to make sure that the technology is firmed up enough for us to be very sure about the end results. And if we are not, then it doesn't make sense for us to put those capabilities out in the marketplace.

 

Rob Stevenson  13:11  

Yep, makes sense. So Watson, I'm sure like most people, came on my radar when it went on Jeopardy. And that was a really fun Rubicon moment, I thought, very exciting. And that was 10 years ago, would you say? 12 years ago? So I'm sure Watson has come a long way, undoubtedly. You mentioned a little bit about watsonx, but I would love to know what's happened in the last 12 years, and what is the current application you're seeing for watsonx?

 

Tarun Chopra  13:39  

Oh, so first of all, we have implemented Watson in a lot of our customer environments and in a lot of our own technologies. That's point one. As an example, our voice recognition and chatbot capabilities: Bradesco is handling 90% of their call volumes through some of these capabilities, right? People remember the Jeopardy thing, but we got very prescriptive with Watson: we took the core technology pieces out and infused them into IBM products, and also helped our clients get value from the Watson capabilities. And now what we're doing with watsonx is taking the next step, where it's not only us building capabilities, but we are providing our clients a platform to build their own. It's interesting, Rob: every client that I talk to, they're all looking to build their own unique story, their own unique point of view, with their own unique data and their own unique customer pain points. So I look at watsonx now as a vehicle to help customers build their own unique AI story. Of course, they can get help from us as well to implement that, but they can use, as I said, our framework and tools to build and play with generative AI models and machine learning models, they can use our data repository to store the ample amounts of data that is needed to tell their own story, and then, to validate their story and provide authenticity to their story, they can use our governance framework, which is very important to scale. That's what we learned in the Watson journey, right? How do you scale this from one to 1,000 in big enterprise environments? Customers need help, and governance is a good mechanism to bring all of this together. So I really look at watsonx as that canvas to help customers tell their own story, in their own way, with their own data.

 

Rob Stevenson  15:35  

Could you maybe give an example of a way that customers are using watsonx?

 

Tarun Chopra  15:38  

For example, customers are using some of our watsonx capabilities, playing with large foundation models, to simply help build a Q&A system or a chatbot system, or to help with end-to-end workflow automation in HR or in marketing or in sales. So, from automation on, some of the use cases I'm seeing across the board, Rob, are content generation, content summarization, chatbots, and Q&As. And in some cases something very specific: for example, we have a case study out there where NASA brought geospatial data to build their own unique foundation model. Not every foundation model has to be a large language model like GPT, right? We have customers coming in who want to bring their own specific data, in a consortium, to leverage the base capability of the foundation model concept, but geared very much toward that data. And for code generation, for productivity, we announced something called watsonx Code Assistant, which takes just a simple use case, Ansible scripting for automation, and completely automates that process. So those are the examples we are starting to see all across. My take, Rob, is that right now we're in kind of an experimentation mode with generative AI. Over the next 18 to 24 months, clients will take that experimentation into prioritization mode, and then from there, when they're looking to scale, the capabilities around governance and all of this will start to come in. But right now, I'll say, Rob, we are in rapid experimentation mode across the board. I haven't been in any meeting where a client didn't have an ask; they are all trying to frame their AI strategy. That's the number one topic I see across the globe, across the set of clients, in different industries and verticals, so forth and so on.

 

Rob Stevenson  17:22  

It does feel like every company is an AI company, right, or needs to at least leverage it in some way.

 

Tarun Chopra  17:27  

Or at least they need to have a point of view on what they're doing, right? But that's where I come back to my original point: what problem are you trying to solve? Because it's very easy to get carried away in the notion and spend millions of dollars, in some cases billions of dollars, trying to do something without knowing what the end game is. That's why, when you talk to clients, the first discussion is: what is the use case? What is the problem statement? What are we trying to solve? Because then you can get a little more methodical and prescriptive in charting out that course. Right now there's so much fear of missing out that everybody just wants to jump in, which is a good thing. But where we are bringing our knowledge set and our experience is in helping clients say, let's step back for a second, and let's see where we can help you solve the right pain point.

 

Rob Stevenson  18:16  

Yeah, of course. And that consultative approach is so important, because obviously AI has all this hype and sexiness attached to it. Everyone wants to use it, or tell their board or their VCs that they're using it. But I had a guest on from Bell Dahlia, and she was basically saying there's a lot of overkill going on. I can't remember exactly the example she gave, I think she was talking about regression analyses, but the point I took away was: why use a supercollider when a pivot table will do? And so that feels like the first step of your process.

 

Tarun Chopra  18:45  

I'll give you a different lens on that example, right? Let's say, oh, I can do a lot of content generation or content summarization through generative models. But let's say the bill for doing all this stuff is millions of dollars. If you are the CEO or CIO, is the return on that investment worth it? You have to put that in context: what are you really trying to solve? Am I willing to spend $10 million because I can summarize my emails, or get some insights from my emails? In some cases, it might be okay. But in a lot of cases, once you start to put dollar figures around that problem... because in the end, for every client, what do you think is going on? Cost cutting, return on value, justification. So what happens is, when you take this AI and it starts to morph and suddenly the price tag starts to come in, a lot of our C-suite audiences are saying, okay, as you said, do I need a pivot table, or do I need a supercollider? What do I need to go solve this thing? And that's where that consultative approach becomes very important, because in the end, especially with the clients we work with, it comes down to return on investment: what am I spending, and what am I getting?

 

Rob Stevenson  20:01  

Could I get your thoughts quickly on compute power and costs? Because, as you say, it can be fantastically expensive. And I guess my question is: do you anticipate compute power and the costs associated with it will drop, as we've seen with other cloud computing, which has come down in cost in a lot of cases? Even looking at just the chips inside the technology we're using, all of that has become smaller and more affordable. Do you expect it'll be similar with other areas of AI?

 

Tarun Chopra  20:27  

It might, but we are a little bit far away, because the number one thing right now is the GPUs, right? There's hot demand for GPUs, and there are only a few players. For us, for example, there's a months-long wait to get GPUs in the marketplace. At some point in time, as more and more innovation comes in, these things will get better and better, but right now there's a huge price tag. I'm not too concerned about the compute power on the CPU side, but when I'm inferencing these models, I'm hitting the GPU level, right? Then you have to ask: what is my workload? How many times am I going to inference this thing? What is the computational power I need to inference it? Now, the cloud world is interesting, because you don't have to worry about the setup, so it's easy to go spend your money. But it's also very easy to overspend your money, because there's nothing stopping you, right? Those are some of the debates that customers will have soon. I'll give you an example: I was with a client, and they had this OpenAI thing going on, and they were experimenting with it. And the client was like, if I give unfiltered access to everybody in my shop and let them go hit OpenAI, the bills are astronomical. Even though everybody's charging 20 cents per token or five cents per token, when you add up all those queries, they add up, right? And if you as an end user have no notion of the cost, you don't care, you just write your query: what is my dog's name? Even though you know your dog's name, for the fun of it you'll ask and see if the right answer comes out, right? So these cost calculations start to matter as customers go from experimentation to saying, okay, let me go now into production. Those are important topics and discussions and debates, and you have to factor all of this in.
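
To put rough numbers behind "they add up," here is a minimal back-of-the-envelope sketch; the prices and volumes are hypothetical placeholders, not any vendor's actual rates:

```python
def monthly_inference_cost(users: int,
                           queries_per_user_per_day: int,
                           tokens_per_query: int,
                           price_per_1k_tokens: float,
                           days: int = 30) -> float:
    """Rough monthly bill for unmetered LLM access across an organization."""
    total_tokens = users * queries_per_user_per_day * tokens_per_query * days
    return total_tokens / 1000 * price_per_1k_tokens

# Hypothetical shop: 5,000 employees, 20 queries a day, ~1,500 tokens per
# query (prompt plus response), at $0.02 per 1,000 tokens.
print(f"${monthly_inference_cost(5000, 20, 1500, 0.02):,.0f} per month")
# -> $90,000 per month
```

Per-query costs that look like pocket change become a six-figure monthly bill at enterprise scale, which is exactly the surprise the client above ran into.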

 

Rob Stevenson  22:12  

Yeah, of course. Well, hopefully we see that drop in compute costs; that will certainly make it more accessible to many people, which is what we want. Now, I do want to make sure we spend a little more time with watsonx. Specifically, it's been broken down into three subcategories: we have watsonx.ai, watsonx.data, and then governance. And so I'd love to know, why was it necessary to break it into these three components? And then, if you can, run us through all three of them. I'd love that too.

 

Tarun Chopra  22:38  

Thank you, Rob. The reason is that, as we learned through our journey, clients need more than just a tool to play with models. They need a strong repository to store a lot of data for these models, and then they need a very strong governance framework to apply across these models to scale. And it's not about individual componentry; it all has to come together in one solution, and that's why the word platform, right? So for me, watsonx, if I explain it, is a new AI and data platform to empower big enterprises, small enterprises, and medium enterprises to train, tune, and deploy AI across their business, leveraging, and I think this is the key thing, Rob, their critical data, which is their unique differentiation. So you can think about watsonx in three different components, all coming together again. The first, as you mentioned, is watsonx.ai. Think of it simply as a tool for AI builders to train, validate, tune, and deploy both traditional ML models and foundation models in one solution. These models combine best-of-breed architecture with a rigorous focus on data acquisition, governance, and quality, to serve enterprise needs. And what do I mean by that? In watsonx.ai, clients can bring their own models, clients can use open-source models, and, more importantly, our clients are asking, hey, IBM, can you provide IBM models where you are attesting to the data quality of these models, so they can pass our legal and compliance and regulation requirements? That is a big differentiation; that is a big topic for our clients. They can experiment with open-source models, but when they want to deploy them, when they want to put them into production, they need somebody to stand behind them and say: for the data that went into these foundation models and large language models, I am standing behind the fact that it has all the commercial licenses and all the rest. You can see there's a lot of suing and everything happening in the marketplace on some of that, right? So that's one piece. watsonx.data is what makes it possible for enterprises to scale AI workloads using all of their data, with a fit-for-purpose data lakehouse optimized for data and AI workloads, which has querying and governance built in, in an open lakehouse format. This is all based on open technologies, Presto and Iceberg. And finally, watsonx.governance is, to my mind, what is critical to put AI into production, by providing an end-to-end solution that combines both data governance and AI governance to help you explain responsible, transparent, explainable AI workflows to your constituents. If you thought machine learning was a black box, a foundation model is an even bigger black box; tons of data have gone in to provide that value, to provide that unique differentiation. And when a lot of customers put these models into production, somebody needs to be able to explain what went in and what it is doing. It can't be a black box when you are giving answers to your end customers. If somebody asks you, well, how did you come to that conclusion, and the answer is, I don't know, my AI said it, that won't be palatable from a brand equity perspective. So they are looking for ways to explain all the stuff they are going to be putting out in front of their customers. A chatbot is a great example: if you put an answer out in a chatbot for your brand, and somebody says, well, why did you decline my loan, and the answer is, well, the AI did it, that's not a good answer, right? So that's why governance is super critical. And I want the audience to leave with the fact that, as I said, these are three different components, but they all come together as a platform. The other thing I will say is, one of the things we have learned is that a lot of our clients want to deploy these capabilities on-prem as well, so it's not just cloud work. That's why we take a hybrid approach: you can run it in multiple clouds, and you can run it on-prem as well, all built on our OpenShift technology. That's a big plus for our clients, because they're looking for that flexibility. And just to wrap it up, Rob, I will say the use cases we're seeing across the spectrum are around digital labor, automation, security, sustainability, and code modernization; those five or six use cases come to the top of the heap when we are engaging with clients.
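
One illustration of the "open technologies" point: a lakehouse that speaks Presto SQL over open table formats like Iceberg can, in principle, be queried with any stock Presto client. Below is a minimal sketch using the open-source presto-python-client; the host, catalog, schema, and table names are hypothetical:

```python
import prestodb  # pip install presto-python-client

# Connect to a Presto coordinator fronting an Iceberg catalog.
conn = prestodb.dbapi.connect(
    host="lakehouse.example.com",  # hypothetical endpoint
    port=8080,
    user="analyst",
    catalog="iceberg",
    schema="claims",
)

cur = conn.cursor()
# Plain SQL against an open-format table; nothing vendor-specific.
cur.execute("""
    SELECT region, COUNT(*) AS n_claims
    FROM claims_2023
    GROUP BY region
    ORDER BY n_claims DESC
""")
for region, n_claims in cur.fetchall():
    print(region, n_claims)
```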

 

Rob Stevenson  26:48  

Thank you for walking me through that, super helpful. And I'm really pleased to hear that the governance subcategory is taken so seriously. Explainability is obviously super top of mind and important. I remember there was a time when it was cool to say you have a black-box algorithm, like, oh, we don't even know how it works, that's how advanced it is. And now? Not cool at all, right?

 

Tarun Chopra  27:07  

Oh, I don't know if you saw the recent news that ChatGPT's results are declining in accuracy, because it's drifting a little bit, right? So when you put these models into production, it's not just about that point in time. In a production environment, three months from now, you want to go back and look at the framework you defined for that model to be successful. Is it still following that framework? Is it drifting? Is it going down because bad data is coming in and the accuracy has gone down? Those become very important points in your enterprise environments.
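
A minimal sketch of the kind of recurring production check being described, with a hypothetical baseline and threshold:

```python
import statistics

def check_model_health(recent_scores: list[float],
                       baseline_accuracy: float,
                       max_drop: float = 0.05) -> str:
    """Compare recent production accuracy against the baseline the model
    was approved under, and flag drift past an agreed threshold.

    recent_scores: accuracy on recent labeled production samples,
    e.g., one value per week since deployment.
    """
    current = statistics.mean(recent_scores)
    drop = baseline_accuracy - current
    if drop > max_drop:
        return f"ALERT: accuracy fell {drop:.1%} below baseline; investigate drift"
    return f"OK: accuracy within {max_drop:.0%} of baseline"

# Model approved at 92% accuracy; twelve weekly spot checks since launch.
weekly_accuracy = [0.91, 0.90, 0.89, 0.88, 0.88, 0.87,
                   0.86, 0.85, 0.85, 0.84, 0.83, 0.82]
print(check_model_health(weekly_accuracy, baseline_accuracy=0.92))
# -> ALERT: accuracy fell 5.5% below baseline; investigate drift
```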

 

Rob Stevenson  27:37  

Yeah, it's a constant struggle, constant monitoring going on. I'm glad to hear that watsonx includes that, and at such a high level. Tarun, this has been fantastic chatting with you today. Before I let you go, I just want to ask you to give some advice to the folks out there in podcast land, our listeners who are forging a career in the AI and ML space. What advice would you give them?

 

Tarun Chopra  27:56  

The most important thing you need is curiosity. And be strong-hearted, because this is not for the weak-hearted, this space. I tell my customers, what I told you today might be false or untrue tomorrow, because the space is changing so fast, so rapidly. Be a curious mind, learn a lot, engage. The best way is to get your hands in: pick your field, be an engineer or product manager or sales marketer, pick your domain, but be adventurous, because this space requires those kinds of characteristics and that demeanor to be successful.

 

Rob Stevenson  28:30  

Well, if that's the case, then I'll have to have you back on tomorrow, Tarun, if the space is going to change that fast, and you can update us on everything new that IBM

 

Tarun Chopra  28:37  

I might have something new to tell you tomorrow, Rob.

 

Rob Stevenson  28:39  

is doing. We'll definitely have you back. I'll let you go a little bit between appearances; I don't want to dilute the brand too much. For now, I'll just say this has been really, really great, learning from you and chatting with you today. Tarun, thank you so much for being on the podcast with me today.

 

Tarun Chopra  28:52  

Thank you so much, Rob. Really appreciate the opportunity.

 

Rob Stevenson  28:57  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.