How AI Happens

AI Industry Leader Srujana Kaddevarmuth

Episode Summary

How can we ensure that AI is used ethically in a rapidly evolving world? Today, we sit down with Srujana Kaddevarmuth to delve into AI ethics, workforce upskilling, and the need for global regulations to ensure ethical AI adoption and deployment.

Episode Notes

 Srujana is Vice President and Group Director at Walmart’s Machine Learning Center of Excellence and is an experienced and respected AI, machine learning, and data science professional. She has a strong background in developing AI and machine learning models, with expertise in natural language processing, deep learning, and data-driven decision-making. Srujana has worked in various capacities in the tech industry, contributing to advancing AI technologies and their applications in solving complex problems. In our conversation, we unpack the trends shaping AI governance, the importance of consumer data protection, and the role of human-centered AI. Explore why upskilling the workforce is vital, the potential impact AI could have on white-collar jobs, and which roles AI cannot replace. We discuss the interplay between bias and transparency, the role of governments in creating AI development guardrails, and how the regulatory framework has evolved. Join us to learn about the essential considerations of deploying algorithms at scale, striking a balance between latency and accuracy, the pros and cons of generative AI, and more. 

Key Points From This Episode:

Quotes:

“By deploying [biased] algorithms we may be going ahead and causing some unintended consequences.” — @Srujanadev [0:03:11]

“I think it is extremely important to have the right regulations and guardrails in place.” — @Srujanadev [0:11:32]

“Just using generative AI for the sake of it is not necessarily a great idea.” — @Srujanadev [0:25:27]

“I think there are a lot of applications in terms of how generative AI can be used but not everybody is seeing the return on investment.” — @Srujanadev [0:27:12]

Links Mentioned in Today’s Episode:

Srujana Kaddevarmuth

Srujana Kaddevarmuth on X

Srujana Kaddevarmuth on LinkedIn

United Nations Association (UNA) San Francisco 

The World in 2050

American INSIGHT

How AI Happens

Sama

Episode Transcription

Srujana Kaddevarmuth  0:00  

There are different areas that we are earmarking wherein we think that humans can excel, and we could build AI systems that could complement humans, but it should be driven by humans.

 

Rob Stevenson  0:12  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Hello and welcome back all of you wonderful machine learning data scientist AI practitioner munchkins out there in podcast land, it's me Rob here with another classic installment of How AI Happens. I have an amazing guest for you. Boy, how do I even begin to explain and sum up her experience and do her curriculum vitae justice? She is an AI leader at Walmart. She's a seasoned machine learning leader, sought-after advisor, expert, and evangelist who has held numerous positions in data science and AI in the technology and consumer industries. She's an avid advocate of using data science and AI for social good. She serves on the board of the UN Association for San Francisco, she is a senior fellow at The World in 2050 think tank, she has been recognized by the American INSIGHT organization, and she's right here with me on the podcast. Srujana Kaddevarmuth, welcome to the show. How are you today?

 

Srujana Kaddevarmuth  1:33  

Thanks, Rob for having me here. I'm super excited to be talking to the AI enthusiasts out there, and looking forward to this conversation.

 

Rob Stevenson  1:40  

Yeah, me as well. You have such a rich background. Did I do it justice there, when I kind of stumbled through all of the amazing things you've been working on over the last few years?  

 

Srujana Kaddevarmuth  1:49  

You did amazing, and I am happy to delve into anything deeper as we have this conversation and take it forward.

 

Rob Stevenson  1:55  

There are so many things I want to speak to you about. In addition to your research and your day-to-day work, you have this role where you're serving with the UN and with other think tanks, and so it strikes me that you probably have access to a lot of other folks in the space, and insight into what's going on in a larger macro sense. So I'm curious, when you are meeting with those kinds of groups, what are the topics that are coming up? What are people concerned or excited about?

 

Srujana Kaddevarmuth  2:23  

Yeah, we are in this age of data deluge, with around 1.1 trillion megabytes of data being generated every single day. We are in an age where 70% of the world's GDP is undergoing digitization. So there is this humongous amount of heterogeneous, non-intuitive, messy data being generated every single day, and it is transforming the corporate sector and the development sector alike. The common themes I see across both sectors fall into three areas. The first is ethical AI and AI governance. This is an important topic for corporates, primarily because most of the time the algorithms are a statistical representation of the world we live in. They come up with outcomes based on the data they have been trained on. However, that data may have some sort of bias, and by deploying such algorithms we may be causing unintended consequences, which may lead to regulation challenges as well as compliance challenges for the corporates. Corporates are really focusing on bringing in ethical AI and on AI governance, and this is happening across different industries. When you talk about the development sector, even the UN has created its Global Pulse initiative, focusing on ethical AI governance and on digital cooperation, wherein they want to establish effective policies, procedures, and protocols for protecting consumers' right to privacy and ensuring ethical AI deployments across different industries. So I see that there is a lot of synergy there. The second area where there is a lot of synergy is using AI to drive innovation and economic growth.
So I see that corporates are using AI, and it is being used with multidisciplinary topics and areas like blockchain, the Internet of Things, AR/VR, as well as robotics, infusing AI with all of these to drive economic growth and monetization opportunities. In the development sector, there is also increasing awareness that AI can drive economic growth, and that is being looked at in terms of bringing in more parity between developing economies and developed economies. And the third area where I see a lot of synergy between the development sector and the corporate sector, one of the themes that comes up often, is human-centered AI and workforce transformation. We are seeing that corporates are becoming more conscious of how AI could impact white-collar jobs and lead to some sort of labor shift, and they are trying to build capabilities that could augment human capabilities, as well as looking at leveraging AI with a human-in-the-loop concept. For example, if you're generating content, making sure that there is a human in the loop, so that we are able to use the human abilities of perception and discretion to do the right thing by our customers and our associates. Similarly, in the development sector, there is an increasing awareness that AI could lead to a lot of unemployment, especially among white-collar workers. So that's why there is a focus on how we retrain and upskill them and ensure there are no dramatic shifts or negative consequences from the unemployment that could be caused by AI deployments. So there is a common theme across these different sectors and industries, and these are some of the themes that come up as we have these conversations with different think tanks and in the development sector.

 

Rob Stevenson  6:13  

When you consider the possibility of AI displacing white-collar jobs, you mentioned that one antidote is to upskill the workers themselves, right? To make sure that they are skilled in areas that machines can't currently handle. Would that responsibility not fall to private companies? And do they have any incentive to do that?

 

Srujana Kaddevarmuth  6:40  

Yes, they do have incentive to do that, and there is definitely some thinking that is happening. We don't see that there's going to be a huge displacement because of AI this year or the next year, right? It's going to be a process. There is a learning and development department at most corporates, and they have been thinking of strategically coming up with trainings to ensure that, as we start driving more efficiency and automation using AI, we upskill or transition people to the roles that require more of a human-centric approach, and we don't necessarily displace these workers from the industry.

 

Rob Stevenson  7:18  

Do you have a sense of what those kinds of skill sets are, the things that machines can't do, or maybe shouldn't do?

 

Srujana Kaddevarmuth  7:24  

Yeah, so there's going to be a lot of focus around cybersecurity, right? We would not necessarily have machines drive a lot of cybersecurity-related work, because there is a requirement for human discretion there. There are going to be a lot of legal-related roles. There are going to be ethical AI or governance-related roles that require the algorithms to be trained in a certain way, right? So there are multiple roles that could be earmarked for humans, and there are some computer vision techniques that require a greater amount of perception; there's a neurological aspect to how we look at certain things and perceive certain things. So there are different areas that we are earmarking wherein we think that humans can excel, and we could build AI systems that could complement humans, but it should be driven by humans, right? Companies are now thinking of strategically creating those opportunities, making sure that there is the right kind of skill set, and especially when it comes to discretion about being good or bad, or having some sort of bias in techniques, that's where human abilities come into play.

 

Rob Stevenson  8:31  

Is human discretion just sort of next on the menu? Like, surely AI is coming for that too.

 

Srujana Kaddevarmuth  8:40  

It is coming for that too, but there are a lot of these aspects around empathy and a sense of self-consciousness. So I think it's gonna take a lot of time for us to reach that place, right? There is development happening in that space, but it's not here now. It may be multiple decades out, in terms of having AI achieve the kind of consciousness, as well as discretion, that humans have at this point in time.

 

Rob Stevenson  9:08  

Now, the first thing you mentioned when sharing the key concerns of these communities you engage with was this notion of the biased outputs of algorithms. And it was interesting to hear you say it, because I guess we shouldn't be surprised that this is the case: the tech is built by biased humans, so in a way the bias is not a bug, it's a feature, right? It is the direct result of the people who have made this tech; it reflects them. To mitigate that, wouldn't that require some sort of higher, more ethical human to make it? My question is: do we have to be better people before we can make better tech?

 

Srujana Kaddevarmuth  9:52  

I agree, and that's true in all areas, right? To be a better lawyer, you need to be a better human, because you need to understand both sides of the argument and make a case empathetically. So to do any job better, you need to be a better human. That's the underlying premise. But when you talk about the algorithms, the algorithms themselves are not biased. They just operate on the premise of the data that they have been trained on, and that data may contain bias; this data is mostly a statistical representation of the world that we live in, right? We have seen so many racist comments unleashed on social media because these algorithms were trained on data which had such comments on display, and they provided output accordingly, right? So it's really important for us to think about what kind of algorithms we use. Are they transparent algorithms? Are they interpretable algorithms? Many a times we can decipher how certain algorithms came to a conclusion, but in certain scenarios they act as a black box. Even the experts in the industry cannot necessarily decipher it and put their finger on it and say this particular factor that was fed in led to this kind of outcome, right? So this lack of transparency creates some sort of concern, especially when we deploy these algorithms at scale, because that way we are institutionalizing the bias that exists within the data that is learned by the algorithms.

 

Rob Stevenson  11:18  

When we speak about the possibility of mitigating bias, and of making sure we don't displace or remove jobs from people, do we need legal guardrails? Like, can we just rely on the free market to do the right thing here?

 

Srujana Kaddevarmuth  11:35  

No, I think it is extremely important to have the right regulations and guardrails in place. However, the regulatory framework itself is evolving, and we are at a nascent stage in that evolution. So it's really important to get the collective brainpower together in putting these guardrails in place and coming up with these norms. However, it's really important to have some sort of legal mandate; otherwise it may be difficult, because the underrepresented and underserved populations may be on the receiving end, right? That's why it's important to have diversity amongst the policymakers, who can bring in different perspectives and come up with the right kind of policy. We should have an effective legal mandate to ensure that we are protecting the rights of consumers, such as their right to privacy of their geolocation data. We have multiple regulations like GDPR and CCPA in place, and the FTC is coming up with new regulations every other day. That focuses primarily on explainability of the algorithms. Focusing on explainability, as well as the interpretability and interoperability of these algorithms, is very important for us to explain how certain factors are contributing to certain outcomes. If there is no legal mandate, then I don't think a lot of players would be incentivized to do the right thing. So it's important for us to bring in a legal dimension here.

 

Rob Stevenson  13:02  

"The wheels of justice grind slowly," said someone far more poetical than I, and this space we're in moves lightning quick. Regulation has always lagged behind the industry, but the industry has never been this fast, and legality still moves at the same pace. Does that concern you, that difference in speed?

 

Srujana Kaddevarmuth  13:27  

No, I think this is not a very straightforward area, right? First of all, it's multidisciplinary in nature. There's an aspect of neuroscience involved. There's an aspect of programming and computer science involved. There's a human aspect; there's an employment aspect. There are multiple things involved, so it's a very complex ecosystem that we are operating in. Data and PII are considered very differently across different geographies as well: sometimes geolocation data is considered sensitive in one geography and not in another. So overall, the landscape itself is complex, and especially if we expect the regulators, who are not necessarily from the domain, to write the regulations for this, it becomes difficult, and there's a lot of red tape there, right? That's why I think it may be evolving at the pace that it is evolving, but I'm really optimistic that a lot of good regulations are coming in, and will continue to come, and we'll be able to steer AI development down the right path.

 

Rob Stevenson  14:35  

Okay, thank you for sharing all this. This is helpful, because you do sit in these rooms, and you get to share these opinions with world leaders, so just understanding how people are thinking about this at a very high level is helpful. And we can do both here: we can be at a macro level, and we can zoom in, because you do both, which is probably what makes you so good at your job. I'd love to get in the weeds a little bit with you, because in a lot of your roles you have experienced this need to deploy algorithms at huge scale, and I'm curious to hear from you some of the challenges that come along with that. I've gotten to speak with people who are at smaller startups or working in more siloed operations, and yet the companies you've worked at experience massive scale, hundreds of thousands or millions of customers. So when you are deploying algorithms at scale, what are some of the key considerations you remind your teams of, pitfalls to avoid?

 

Srujana Kaddevarmuth  15:29  

Absolutely, Rob. Especially the larger organizations are focusing on deploying algorithms at scale with the advent of generative AI. Now it's all about scale, especially because we are sitting on a humongous amount of data with a lot of variety. Scale becomes important to drive efficiency and monetization within the organization, and also to drive better customer experience. Deploying algorithms at scale is a journey of translating the insights generated from exploratory analysis into scalable models that can power products. It involves deploying the algorithms in the production systems and doing that at scale in an efficient manner. There are a lot of benefits associated with deploying algorithms at scale. Firstly, it helps the organization work the AI value chain. It helps the organization utilize its scarce data science and AI resources in a meaningful manner, and it helps foster innovation and drive standardization of technology capabilities across different channels and geographies. One of the key considerations I would focus on when deploying algorithms at scale is creating a proof of concept before deploying the algorithms in the production environment. This is really important to protect the resource investment in these initiatives, primarily because not all projects succeed, right? When a project fails, the best-case scenario is that we have built only a proof of concept, and the worst-case scenario is that we have built the entire product end to end and the results are not relevant, not good enough. So doing a prototype or creating an MVP, understanding the feasibility, and then deploying is going to be super important. The second aspect is the technical aspect of concept drift.
Right? Machine learning algorithms get smarter over a period of time when they are deployed in the production environment. However, if they are not connected to a constant and diverse data feed, they degrade in quality, and that happens pretty quickly. This is called concept drift. So this is one of the key considerations: it is important to connect the algorithms to a varied and constant data feed, and to do that in an effective manner, right? And the third aspect is the legal consideration that we were just talking about. Legal considerations are really important, especially when deploying algorithms at scale, because these algorithms can sometimes hallucinate and lead to unintended consequences, and if we are deploying these algorithms at scale, we are institutionalizing this bias, and there's just no looking back, right? This may lead to some sort of legal challenge and reputational implications for the companies deploying these algorithms. So taking care of these considerations is important, especially when we focus on deploying algorithms at scale.
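[Editor's note: the concept-drift monitoring Srujana describes can be sketched in code. This is a minimal illustration, not anything from the episode: the Population Stability Index metric, the 0.2 alert threshold, and the synthetic score distributions are all assumptions chosen for the example.]

```python
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between a baseline score distribution
    and a current one; higher values indicate more drift."""
    # Bin edges come from the baseline distribution's quantiles
    edges = np.percentile(baseline, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    curr_frac = np.histogram(current, edges)[0] / len(current)
    # Clip to avoid division by zero and log(0)
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

# A common rule of thumb: PSI > 0.2 suggests significant drift
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.50, 0.1, 10_000)  # scores at deployment time
drifted_scores = rng.normal(0.65, 0.1, 10_000)   # scores some weeks later
if psi(baseline_scores, drifted_scores) > 0.2:
    print("drift detected: refresh the data feed or retrain")
```

In practice the "current" window would come from live production scores, and an alert like this would trigger retraining or a data-feed investigation rather than a print statement.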

 

Rob Stevenson  18:26  

Yeah, certainly. And when you're building and deploying these at scale, you have to pick a model. And it feels like these models are in a sort of arms race right now, where one is better than another, a new one comes out, an old one that you thought was obsolete gets an update, and now it's back in the fight. How do you make sure that you are building something with flexible architecture, so that when a new model comes out and you think maybe this one is better, fit for purpose, you can pivot to it and not be stuck with the old one?

 

Srujana Kaddevarmuth  18:58  

Forever, exactly. This is very important. It's a fundamental phenomenon in the corporate sector to create flexible models, right? The reason we need to do that is because the technology itself is evolving at a lightning pace. Whatever was relevant when I started my career is no longer relevant today, and with the best information and knowledge that we have, whatever is relevant today may not necessarily be relevant tomorrow. So creating capabilities that are flexible enough, that are adaptable, where there is scope for these algorithms to evolve, is going to be super important. Now, if we are working for larger organizations, there's no luxury of building one model to solve one specific use case. We need to build generic models, and these generic models need to be flexible enough to accommodate certain real-life scenarios and certain business use cases without having to undergo architectural redesign. So we need to focus on certain aspects. One of them is compute efficiency: we need to build models that are compute-efficient, and this efficiency keeps evolving, so the models have to be flexible. Many a times people want to build the most complex algorithms, but the complex algorithms don't scale; the flexible ones, the simple ones, the adaptable ones scale effectively. And even though we want to bring in novelty, we can keep that by bringing in some simplistic concepts. Many people focus on accuracy of the models. Now, accuracy of the model is important, especially at the proof-of-concept stage, but if you are going ahead with production deployment of these algorithms, there the functional usage, the computational efficiency, as well as the runtime take precedence over accuracy, right? Having this kind of distinction is going to be super important.
And the third aspect is around model wrappers. We acknowledge that these machine learning algorithms get smarter over a period of time, but if they're not connected to a constant and diverse data feed, they degrade in quality. To be able to connect with this diverse data feed, we need to build strong and effective data wrappers, or model wrappers, and this is a software engineering application. We need to do that because any kind of model failure that happens because of a broken data feed is very difficult to detect, compared to an outright application failure. So focusing on these aspects will help us build flexible models that can keep adapting themselves, solve multiple use cases, drive efficiency for the enterprise, and drive better customer experience.
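[Editor's note: the model-wrapper idea can be sketched as a thin validation layer around inference. This is an illustrative sketch only; the class, field names, and staleness threshold are hypothetical, not from any production system discussed in the episode. The point it demonstrates is that a broken data feed surfaces as an explicit signal instead of silently degraded predictions.]

```python
import time
from dataclasses import dataclass

@dataclass
class Prediction:
    value: float
    stale: bool  # True when the incoming data feed looks broken

class ModelWrapper:
    """Wraps a trained model and validates the feature feed before scoring."""

    def __init__(self, model, required_fields, max_feed_age_s=3600):
        self.model = model
        self.required_fields = required_fields
        self.max_feed_age_s = max_feed_age_s

    def predict(self, features, feed_timestamp):
        missing = [f for f in self.required_fields if features.get(f) is None]
        age = time.time() - feed_timestamp
        # Flag the prediction rather than fail outright; monitoring can
        # then alert on the rate of stale predictions.
        is_stale = bool(missing) or age > self.max_feed_age_s
        return Prediction(value=self.model(features), stale=is_stale)

# Hypothetical usage with a dummy model
model = lambda feats: 0.93
wrapper = ModelWrapper(model, required_fields=["price", "views"])
fresh = wrapper.predict({"price": 9.99, "views": 120}, feed_timestamp=time.time())
broken = wrapper.predict({"price": 9.99, "views": None}, feed_timestamp=time.time())
print(fresh.stale, broken.stale)
```

The design choice here mirrors her observation: an application crash is loud, but a feed that quietly stops updating is not, so the wrapper's job is to make the quiet failure loud.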

 

Rob Stevenson  21:44  

Could you speak a little bit more about striking that balance between latency and accuracy?

 

Srujana Kaddevarmuth  21:50  

There are a lot of things we could do to bring the latency down and drive the right accuracy, right? One of the things we keep doing, especially when deploying algorithms in the production environment, is managing the compute efficiency. Compute efficiency can be managed by having an efficient architectural design. We can focus on scaling resources up and down and doing some parallelization of these resources, as well as focusing on the right kind of resource allocation at the right phase of the project; we don't necessarily need all the compute resources allocated from the beginning of the project, and this resource allocation can vary. There could also be a combination of GPU processors with edge computing, having a combination of those working together. So there are multiple considerations we could look at, especially in terms of striking the balance between latency and accuracy.

 

Rob Stevenson  22:54  

Just from a user perspective, I can understand this balance, because I don't care how fast something is if the results are bad or not relevant for what I'm looking for. And on the flip side, even if it's the most accurate, if it's too slow, I won't wait for it, you know? So, kind of damned if you do, damned if you don't. But it sounds like you're saying it's a little bit of a false dichotomy, that you don't necessarily need to strike a balance, because you can bring compute time down and accuracy up in the same moment.

 

Srujana Kaddevarmuth  23:24  

Yeah, we can do that, right. But if there's a scenario where, let's say, we want to improve the model from 93% accuracy to 98%, and that would mean the computational efficiency goes down significantly and the wait time increases, then that's the trade-off one needs to make. In a medical scenario, or in some airline-safety scenario, that 98% accuracy takes precedence over computational efficiency. But if you're serving recommendations to a customer, then 93% accuracy is fine, but you need to do it in a time-bound manner, because go-to-market is super important. So there are scenarios where one takes precedence over the other, but most of the time it's all about striking a balance, and that's what AI leaders focus on.
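[Editor's note: the trade-off Srujana describes can be sketched as a simple routing rule that picks a model per use case. The model names, accuracy figures, and latency numbers below are hypothetical, loosely echoing the 93%/98% example from the conversation.]

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    accuracy: float    # offline evaluation accuracy
    latency_ms: float  # p99 serving latency

# Hypothetical candidate models, for illustration only
CANDIDATES = [
    ModelProfile("large_ensemble", accuracy=0.98, latency_ms=450.0),
    ModelProfile("distilled", accuracy=0.93, latency_ms=40.0),
]

def select_model(latency_budget_ms, min_accuracy):
    """Pick the most accurate model that fits the latency budget;
    if none fits, fall back to the fastest model meeting the accuracy floor."""
    feasible = [m for m in CANDIDATES if m.latency_ms <= latency_budget_ms]
    if feasible:
        return max(feasible, key=lambda m: m.accuracy)
    floor = [m for m in CANDIDATES if m.accuracy >= min_accuracy]
    return min(floor or CANDIDATES, key=lambda m: m.latency_ms)

# Recommendations use case: tight budget, the faster 93% model wins
print(select_model(latency_budget_ms=100, min_accuracy=0.90).name)   # distilled
# Safety-critical use case: generous budget, the 98% model fits and wins
print(select_model(latency_budget_ms=1000, min_accuracy=0.95).name)  # large_ensemble
```

The useful property of making the rule explicit is that the trade-off becomes a reviewable configuration decision per use case, rather than an implicit default.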

 

Rob Stevenson  24:21  

Yeah, that makes sense. Thanks for sharing that, Srujana. It's 2024 and I host an AI podcast, and so I am legally required to use the word generative at least five times, or the podcast police kick my door down and arrest me. No, I do want to actually speak about generative AI here before I let you go, and rather than just tee you up to speak about it, which I know you could do beautifully, I want to ask you a more specific question, particularly because you have so much experience in retail. It feels like the most common user-facing use case of generative AI right now is basically the chatbot, and I wanted to ask you why you think that is. Is it just the most obvious low-hanging fruit, the relative ease of setting it up? Why are chatbots where we're getting a lot of generative products right now?

 

Srujana Kaddevarmuth  25:11  

Okay, so I have a perspective here on generative AI deployments. There is a lot of hype that we need to use generative AI across industries; as you rightly mentioned, even on a podcast it becomes mandatory to talk about it. But it's very important for us to understand what use cases we are using generative AI deployments for, right? Just using generative AI for the sake of it is not necessarily a great idea. We need to look at which business use cases can use conventional AI models versus generative AI algorithms. Generative AI algorithms have a lot of challenges. They require a lot of infrastructure investment; they are expensive, and they are bulky. Transformer models and all of that are very bulky, right? It's not necessarily very easy on the compute or the infrastructure. The second aspect is customization: if you are solving for a niche area, generative AI algorithms may not be that great, because we need small language models to solve and customize. Generative AI algorithms are good for generalization, whereas small language models need to be used for customization. The third aspect is energy consumption. They consume a significant amount of energy, which means they have a negative environmental footprint in terms of electricity and water usage, right? For all of these reasons, one needs to be really careful about what sort of use cases we deploy generative AI algorithms for. And I don't believe that chatbots are the only generative AI deployments. The chatbot is one of the deployments, primarily because first-level support is getting revolutionized with generative AI, but we are also using generative AI for recommender systems, right? There's a huge application there. There is a huge application in the space of search.
There is a huge application in the space of content generation, as well as multimodal communication, not necessarily through a chatbot but through voice and other devices as well. So I think there are a lot of applications in terms of how generative AI can be used, but not everybody is seeing the return on investment, and that's why people are cautious in how they move and navigate in using generative AI algorithms. For the right use case, one should definitely use generative AI, but it should not necessarily be limited only to chatbots; there are multiple other applications that we could look at. However, just using generative AI for the sake of it is not going to help. There are multiple smaller models, like conventional AI algorithms, that can solve optimization use cases in a more realistic and effective manner as well.

 

Rob Stevenson  27:55  

So it sounds like you're saying, when I ask why chatbots, that's more my perception, because the chatbot is user-facing, and a lot of the more fun, exciting generative use cases are sort of under the hood?

 

Srujana Kaddevarmuth  28:04  

Under the hood, exactly. Yeah, there are multiple use cases for generative AI.

 

Rob Stevenson  28:08  

Also, you said the magic word, or I guess the magic letters, which is ROI. If you are trying to make a business case to your CFO, a chatbot replacing your customer service team, whoever they are, that's a very easy math problem, right? That's a very easy number to calculate.

 

Srujana Kaddevarmuth  28:25  

That's right. Anything that you productize and create a user experience around is easy to quantify the ROI on, as well as to monetize, right? So that's why you may look at the wrapper and think that it is a chatbot, but there are multiple other hidden use cases that generative AI is trying to solve for.

 

Rob Stevenson  28:45  

Certainly. Now, one last question before I let you go. You see this whole wide swath of the industry: you are conducting research, you are involved in these think tanks, on top of your full-time job, your nine-to-five. So when you consider the various aspects of AI that you monitor, what is the most exciting element to you? What area gets you truly excited and curious?

 

Srujana Kaddevarmuth  29:07  

Yeah, so when I started, right, there was no specific domain called data science. Fifteen years ago it was more in the research domain, but I got to work on some interesting problems, and I was open to a nonlinear career. That is something that took me places. What really excites me about AI is the pace at which it's evolving, right? That gives me that kind of spark, that kind of curiosity and interest, to keep myself updated with every single development that's happening in a particular area within the field. The field, of course, is very vast, and one cannot necessarily keep up at that pace. But a particular area, like supervised machine learning algorithms, is something that I'm really passionate about, so I keep working and trying to keep myself updated in that space. Very importantly, I feel that the domain is evolving, and that in itself brings a lot of thrill to most data science enthusiasts, in terms of being a part of this entire revolution. It is transforming the overall technology space like no other technology, right? So I think it's really interesting to see how things are evolving and to be part of this evolving space.

 

Rob Stevenson  30:21  

Definitely. This has definitely been an exciting episode, chatting with you today, and so thank you for being here and for sharing your perspectives and all of your experience. I've loved chatting with you, and I've learned a ton from you today. So this has been a delight.

 

Srujana Kaddevarmuth  30:32  

Thank you so much for this conversation. Looking forward to it. Take care

 

Rob Stevenson  30:38  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to Sama.com.