Generative AI is still part of the conversation, but perhaps more relevant today is the question of how to develop and leverage more advanced agents. Srini Iragavarapu is the Director of Generative AI Applications and Developer Experiences at AWS, and he joins us today to discuss this key topic. Srini explores the evolution from basic generative AI models to sophisticated agents capable of autonomous decision-making and task execution.
Srini highlights the importance of integrating these agents into real-world applications, enhancing productivity and user experiences across industries. Srini also delves into the challenges of building reliable, ethical, and secure AI systems while fostering developer innovation. His insights offer a roadmap for harnessing advanced agents to drive meaningful technological progress. Don’t miss this informative conversation.
Quotes:
“Think of it as an iterative way of solving a problem rather than just calling a single API and coming back: that’s in a nutshell how generative AI and the foundation models are working with reasoning capabilities.” — Srini Iragavarapu [0:03:04]
“The models are becoming more powerful and more available, faster, a lot more dependable.” — Srini Iragavarapu [0:29:57]
Srini Iragavarapu: We migrated 30,000 production applications. This actually saved us 4,500 years of software engineering time.
Rob Stevenson: Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn How AI Happens. Here with me today on How AI Happens is the Director of Generative AI Applications and Developer Experiences over at AWS, Srini Iragavarapu. Srini, welcome to the show. How the heck are you today?
Srini Iragavarapu: Thank you for having me, Rob. Really excited to be chatting with you about AWS, the agents that we're working on, and the applications that we're building on top of agents and Bedrock as well. Really looking forward to the conversation.
Rob Stevenson: Yeah, me as well. And particularly because agentic feels like the current hype in our space, but we're at that rare moment where I feel like the hype maybe matches the ability. You know, we're still obviously speaking a lot about generative AI, but everyone I speak to on this show, we're trying to figure out how to develop more advanced agents, how to leverage them in all these ways. And AWS is doing some really fascinating work in this space. One guest I had on said to me, Rob, the most obvious thing you could do with an agent is make a chatbot: train it with an LLM, feed it some company data, and then it can field basic customer support requests, and maybe your employees can query it for data. We're a little bit past that now, or at least you are at AWS. So maybe at the beginning here, can you speak a little bit about, I don't know, the state of the agentic AI union? Like, where are we right now in terms of capability?
Srini Iragavarapu: This is a very transformational time that we are in. In fact, I feel amazingly lucky and fortunate to be part of this journey. The transition we have gone through over the last couple of years, and how Amazon and AWS are actually helping both enterprises and application developers build similar solutions, is transformational in a way. So you're spot on about the journey that we have been on, especially at AWS. Now, I'll break it down into two pieces. One, the state of the union of agents. So let's first talk just briefly about the ecosystem in general, of how agents are in the world, and then I'll try to expand on where we are in the journey and how we've been innovating and evolving and helping customers solve some of the complex problems as well. With the advent and the popularity of generative AI, these models actually are very smart. They're able to not just answer simple questions (in fact, that's how they started), but along with that, these models have reasoning capabilities. We train the models in a way that they can reason within themselves and say, oh wait, I need to do a better job, let me go back and do this too. So think of it as an iterative way of solving a problem rather than just calling a single API and coming back. That's in a nutshell how generative AI and the foundational models are working with reasoning capabilities. Now take that into thinking of it as agents. Traditional AI works by providing one instruction to these models to do one piece of work. What we are essentially able to do now is provide a set of instructions to these models, and the models then use a plethora of inputs: like you called out, there is a company index that can be fed in, and additional tool hooks that can be provided for the model itself to execute and test and go through that journey. That's where agents are right now. Within AWS and Amazon, we've been on this journey for a long time, and I'll break it down into two parts. One is something that we have traditionally been doing for a while: the democratization of software and cloud operations. That is something that Amazon and AWS have done for decades. We've taken the same approach for AI as well; with systems like SageMaker and the options and tools that we provided, enterprises could actually build applications for AI. Switching gears, in 2023 we announced Bedrock as a platform for generative AI, for democratization. We provide not just one model, not just two models, not just three models, but every third-party provider model, all the way from our own models like Amazon Nova to Meta's models, and in fact recently DeepSeek-R1 is also available on Bedrock. What we have done is bring the best of breed to enterprises. That's one. Once these models are available, on top of that we are able to build agents, and build the platform for anybody to build agents too. So that is another democratization: we removed the heavy lifting of having to understand and plug all of these together yourself. With Bedrock Agents, you as an enterprise or a customer can start leveraging the great foundational models and some of the tooling, like, hey, I have a knowledge base and I want to be able to ingest that data. And like you called out, you can bring in your knowledge base. There are guardrails.
You want to be able to use those guardrails; as an example, if you're using DeepSeek, I would also suggest that you leverage some of the Amazon Bedrock guardrails that are there for you. These agents have memory also, and that is very important. Think of it almost like a separate person that you have assigned a task. These are autonomous, they're goal-seeking. You essentially say, hey agent, can you do ABC for me? And the agent will actually figure it out based on the foundational models, the memory and the inputs that you have given it, and the configuration that you have done. All of this is very possible right now with Bedrock Agents. We announced Bedrock Agents in July 2023, and that evolved into a generally available service in November. And at last re:Invent, we announced multi-agent capability, a multi-agent platform and infrastructure. So this is not just one agent but multiple agents talking to each other. You can configure it in such a way that they can talk amongst themselves and then come back with an output. That is the platform layer. This is something that we've been doing now for a year, a year and a half, so any company, anybody, can do this. I'll give you a couple of examples of companies doing this, all the way from, like you called out, the chatbot version of it, to helping with research and invoice processing as well. For example, Genentech is a company that leverages the Bedrock agentic framework and agent systems to help with their drug research. If you really think about it, there are millions of combinations of genes and DNA research papers across the globe that need to be mined. Traditionally the researchers would run some experiments, run some simulations, read the papers, and try to put one and one together. This is millions and billions of combinations, and it takes a lot of time. What Genentech was able to do was leverage Amazon Bedrock and Amazon Bedrock Agents as a platform to build their own drug research assistant, which is what is helping their researchers as well. It's actually saving them, I think, 43,000 hours of time overall, savings of sorts. So there are companies from drug research to financial banks leveraging it too. I'll switch gears a little bit to how we at Amazon are internally leveraging these agents to build for specific tasks for customers. In this case, I'll briefly touch on Q Developer. Q Developer is the most capable generative AI software development tool out there, and it uses the power of everything that Bedrock brings you. We have multiple agents in there. As an example, one of the things that we can do is code transformation. This is where I'll expand on something that you brought up. This is not just connecting data and building a chatbot. It is actually a lot smarter, and it goes back to how agents are autonomous, goal-seeking, a multi-reasoning, multi-step process and not just one thing that it is doing. The code transformation agent that we have can upgrade your Java 8 and 11 applications to later versions like Java 17. And this is not just "migrate my code." What this agent is actually doing is converting the code and running tests. If the tests are not correct, it recreates the code, generates more tests, and iterates multiple times so that it can upgrade the software to the latest version and have the test coverage too.
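For readers who want to try the Bedrock Agents flow Srini describes, here is a minimal sketch using boto3. The agent name, IAM role ARN, and knowledge base ID below are hypothetical placeholders, and the model ID is just one example of a Bedrock-hosted model:

```python
import boto3

agent_client = boto3.client("bedrock-agent", region_name="us-east-1")

# Create a draft agent backed by a foundation model and natural-language instructions.
agent = agent_client.create_agent(
    agentName="research-assistant",  # hypothetical name
    foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
    instruction="Answer questions from our internal research corpus and cite sources.",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder
)
agent_id = agent["agent"]["agentId"]

# Attach an existing knowledge base so the agent can ground answers in company data.
agent_client.associate_agent_knowledge_base(
    agentId=agent_id,
    agentVersion="DRAFT",
    knowledgeBaseId="KB12345678",  # placeholder knowledge base ID
    description="Company research papers and internal docs.",
)

# Prepare the draft so an alias can be created and the agent invoked.
agent_client.prepare_agent(agentId=agent_id)
```

After preparing the draft, an alias points callers at a specific agent version; the runtime invocation itself is sketched later in this transcript.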
In fact, we use this internally, by the way. Code transformation is something that we have used ourselves. We are a Java shop; Amazon uses all the programming languages internally, but we have a lot of Java applications. Using the code transformation agent, we migrated 30,000 production applications from one version to another. Just think about it: 30,000 of them, which traditionally would probably take years, and we'd assign engineers to do that work over time. We were able to do this, and when we estimated how much it saved, this actually saved us 4,500 years of software engineering time. Putting it in perspective, that is close to $260 million of savings, both from a software upgrade standpoint and also the time it saved as well. All of this time is now being dedicated to more creative ways of solving customer problems. So the TL;DR, in a nutshell, is we are improving the platform and we are building agents for specific use cases as well.
Rob Stevenson: That is helpful. I did ask you a very broad question, which is the state of the agentic union. That is definitely helpful when we think about what agents are capable of broadly and then how companies are using them, enterprise and smaller companies as well. The migration of 30,000 apps, that's funny to me, just because when you said it would take years, probably you just wouldn't do it. You know, probably no CTO would decide that's worth the opportunity cost of having your team work on that versus something else. So this is something that maybe would not be done otherwise, and that is exciting on the face of it. When you start speaking about the reasoning capacities, and also the multiple agents kind of interfacing with one another, what you have there is this potential for massive productivity, of course, but it strikes me there's also potential for catastrophic failure in the event that it was unsupervised. So when you think about agents interacting with one another, where is the human in the loop? What is the role of oversight in this case?
Srini Iragavarapu: That is something that we have been paying close attention to across the board as we automate all the services as well. There are multiple layers to this. One is the ultimate decision maker of what happens. As an example, the other software agent that we have is the agent for code development. You provide a natural language command, and this is an agent that we built specifically for that use case. Now in this case, the developer asks Q Developer to build and write code. That is the instruction to the agent. What the agent then does is look at your code, understand what the ask is, and iterate. And this goes back to the multi-reasoning capabilities: based on the knowledge base it has, based on the memory from the past and all of that, it generates code, and then the ultimate output comes back to the developer. Now I, as the developer that gave the instruction, am the one who's going to review and decide if I want to use the code as it is. Or you can further instruct the agent: why don't you make additional changes along with this code, could you write tests as well, can you look for exceptions? You can have a conversation with the agent. So there is ultimately the human in the loop to make the final call. That's at one end of the spectrum. The other one is in Bedrock: if you build agents, all of these come with observability alarms, which means you configure for failures as well. This is classic, like how we do it with every other service. When you try to automate something, you're setting up boundaries and paradigms so that you get alarmed and triggered when there is a failure, an anomaly, or an exception, and you as an operator are involved in the decision making. On top of that, we are providing some additional tools to ensure that these models don't go do other things as well. Another one is guardrails. Bedrock has a set of guardrails, and there are a lot of them. You as a customer, or you as a developer building agents, can choose which of the guardrails you want to use and how you want to configure them. So we are doing both: you, the user, are the ultimate decider, and we're providing you with tools and observability and maintenance aspects of all of this too, so that you are fully in control like with any traditional software piece.
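As a rough illustration of the guardrail configuration Srini mentions, a Bedrock guardrail can be defined with content filters and denied topics via boto3. This is a sketch; the guardrail name, messages, and policy choices are hypothetical:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Define a guardrail with content filters and one denied topic.
guardrail = bedrock.create_guardrail(
    name="agent-safety-guardrail",  # hypothetical name
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't return that response.",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            # Prompt-attack filtering applies to inputs only.
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "financial-advice",
                "definition": "Personalized investment recommendations.",
                "type": "DENY",
            }
        ]
    },
)
print(guardrail["guardrailId"], guardrail["version"])
```

The resulting guardrail ID and version can then be referenced from an agent's configuration, so the same policy applies however the agent is invoked.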
Rob Stevenson: Okay, that makes sense. So yeah, when you say any traditional software development, like when you explained the way that these interfacing agents will present their output, it's the same way that a team of software engineers might present output to a director or a head of, or a team lead, right? Like, this is what we've been working on. Thumbs up, thumbs down, back to the drawing board, stamp, print, what have you. So that is not totally dissimilar from the way work happens absent AI.
Srini Iragavarapu: Absolutely. In fact, one of the recent changes we announced, and this is the transition that we're also going through and the improvements that are happening: the software agent that I talked about earlier, where you provide a natural language command and it provides you with code and you can go back and forth and iterate, that's where I as a developer get to choose. Just two weeks ago we announced additional functionality and feature support for this. With that, you can also tell the agent to run tests as the code is getting generated. So earlier, there was a chance that the code that was generated didn't run, and then you had to provide further instructions. As of last week, what you can instruct the software agent before generating the code is: along with doing what I'm asking you to do, could you also ensure that you run the tests? So what it is actually doing is iterating multiple times and actually doing the reasoning, where it's running the tests. If the tests fail, that is the human-in-the-loop directive that you provided, which is kind of what you alluded to, right? Like, as the code is generated, somebody peer reviews it and then says, yes, I'll accept it, or no, I won't accept it. In this case, we are empowering the agent itself to help make that call: run the tests, if you see failures, go fix the code, and then come back to me with the right code. After that, I'm still in the loop. So we're kind of doing both: the observability aspects of it, guiding the agent to do a better job and stay within bounds with all the guardrails, and the human deciding ultimately how this fits.
Rob Stevenson: I see. So that would be an example of, like, a custom alarm that a customer might build?
Srini Iragavarapu: Yes. Again, if you want to start thinking about this in a certain way: if the code that is getting generated is failing tests, alarm and tell me that I need to intervene. Now, in this case you don't have to, because here we're telling this specific agent to continue executing till the tests pass. So what it is doing is iterating to make sure the tests pass. But you can see how, with Bedrock and the platform that we have, we can configure systems in a way where you as a user or an operator are alarmed, or ensure that the job actually fails if it goes beyond a certain threshold. All that configurability is what the managed services that we have for Bedrock can provide as well.
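A threshold-based alarm like the one described can be sketched with CloudWatch. The alarm name and SNS topic ARN below are placeholders, and the exact Bedrock metric name is an assumption to verify against the AWS/Bedrock namespace in your account:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Page an operator when server-side model invocation errors cross a threshold,
# instead of letting an agent loop silently.
cloudwatch.put_metric_alarm(
    AlarmName="bedrock-invocation-errors",  # hypothetical alarm name
    Namespace="AWS/Bedrock",
    MetricName="InvocationServerErrors",  # assumed metric name; verify in your account
    Statistic="Sum",
    Period=300,  # 5-minute evaluation windows
    EvaluationPeriods=1,
    Threshold=5.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```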
Rob Stevenson: Okay, that makes sense. Can you speak a little bit more about the multi-reasoning capability? Because we are so far removed from even a really technical individual's experience of interacting with machines, and what they expect when writing a search or querying a database or what have you. What do we mean when we speak about multi-reasoning? I'd love it if you could just kind of expand on that.
Srini Iragavarapu: Yeah, this again goes back to the reasoning capabilities of these large foundational models, which understand not just a question, where they go look up a database and answer it, but the relationships in the question that is being asked. So that is one. And the skill that they have in the platform that we provide is the memory, remembering what happened in the past 30 days, as an example. You can configure the Bedrock agent to say, keep the memory for 30 days. What that means is it knows what has been asked and what has been answered for you as a user over the last 30 days, and what was right and wrong. So think of it almost like a human being: you're asking a question, and rather than just blurting out the information, it is understanding the context and the relevance. Then it is performing a task based on the guidance you as a user provided it. And if the task is not up to the mark, you can set guidelines and parameters too, where you say, execute it, but I want you to give me X code coverage, or I want to be able to do this; you can provide all sorts of instructions. Then it iterates through this and says, oh wait, I think I should do a better job, and I'll run through this, use the hooks that you're providing, go run some tests or use the additional memory, and come back. So that complex system, the job being done not as sequential API calls but in an iterative way, that whole system is the multi-reasoning aspect of the model, and the foundational models are the ones empowering this, with all the systems that we have on top of them.
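The 30-day memory Srini describes maps to a configuration parameter on the agent itself. Extending the earlier create_agent sketch (the name, role ARN, and model ID are again hypothetical):

```python
import boto3

agent_client = boto3.client("bedrock-agent", region_name="us-east-1")

# Session memory retention is set on the agent itself; 30 days matches the example.
agent_client.create_agent(
    agentName="assistant-with-memory",  # hypothetical name
    foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
    instruction="Remember prior sessions and build on earlier answers when asked.",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder
    memoryConfiguration={
        "enabledMemoryTypes": ["SESSION_SUMMARY"],  # summarized cross-session memory
        "storageDays": 30,  # keep memory for 30 days
    },
)
```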
Rob Stevenson: In the event an agent is going to conduct that contextual understanding or attempt to define more context, does that have an effect on latency?
Srini Iragavarapu: Yes. Some of these tasks actually are not synchronous, and there are right reasons for that. For example, we have a test agent as well. As the name suggests, you can ask the agent to generate unit tests. Now, you as a user could say generate unit tests for my file, or you could say generate unit tests for a particular function, depending on the context that you provide. This agent actually does multiple things: it looks for focal methods in your repo and says, here is the function, or the set of functions, that you're asking me to generate tests for. It looks for the different focal methods, the different functions that are in the repo, and does almost a program analysis of sorts to be able to do this. So it's not a single API call, and you're right in calling out that there is a latency aspect to it too. This is not a sequential job where you just call an API and come back; it can do this iteratively, multiple times, in a loop. With both Bedrock agents and the agents that we built for software development, for transformation, or for testing, we are providing that information to the user. In fact, if you run the test agent and the dev agent in the IDE, we're showing step-by-step progress as well, almost like a progress bar of sorts. It says, wait, I read five of your files. Now I'm trying to understand what your files are doing. Here is the change I just made, but it looks like the tests didn't pass; I need to go back and redo this. So it's not happening in a black box, it's happening with your participation, as much as you want, while it is autonomous. We're giving you all the insights into how this is happening.
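At the API level, this kind of step-by-step progress is surfaced as a streamed response. A minimal sketch of invoking a Bedrock agent with tracing enabled, assuming placeholder agent and alias IDs and a hypothetical prompt:

```python
import uuid
import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Invoke the agent and stream back both answer chunks and trace events; the trace
# stream is one way step-by-step progress can be surfaced to the user.
response = runtime.invoke_agent(
    agentId="AGENT12345",  # placeholder agent ID
    agentAliasId="ALIAS12345",  # placeholder alias ID
    sessionId=str(uuid.uuid4()),
    inputText="Generate unit tests for the parse_invoice function.",  # hypothetical ask
    enableTrace=True,  # emit intermediate reasoning/progress events
)

for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"))  # partial answer text
    elif "trace" in event:
        print("trace step:", list(event["trace"]["trace"].keys()))  # progress signal
```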
Rob Stevenson: Okay, so are you saying that when a user gives instructions that would not yield a synchronous response, the agent is going to say, hey, I can do this, but it's going to take me this long? Is that kind of what you're saying?
Srini Iragavarapu: In a way, yes, because it says, I'm going to read your files. If you run the test job right now, it says, I have X number of files and 15 functions to look at; I'm going to start iterating through all of them. And it's sequentially providing that information, so the job is happening. Now, in this case, this is a human-triggered job. You could also configure jobs to say, run this on my behalf and keep doing it in the background. That's generally how agents do it, like people who help you at home, or peers. In the same code review example, you don't always have to do pair programming. The way I'm explaining it is actually like a pair programming exercise, where you say, hey buddy, could you write my tests, and then let's talk about it and iterate on top of it. There's also another way of saying, can you go keep updating my tests and then come back to me once you're done? In the meantime, I'll do a couple of other tasks. So yes, from a user-perceived standpoint, while this is latency, it is not latency in the traditional sense; it is a discussion, in a way. You and I are having this conversation right now, trying to discuss how agents are working and what AWS is doing. It didn't happen in one second; it is happening over five, ten minutes. And that is the conversation you're having with agents too.
Rob Stevenson: I see. So the way around users expecting immediate results is to just give the feedback of, look, this is a longer process, we discuss it in consultation with one another. You're not going to get immediate results the way you would interacting with a different kind of machine, Google Search for example, or even the LLMs out there that would rather be fast than right. I've had this experience where these LLMs will never say "I don't know," right? They will never ask a clarifying question. You will just get an answer, and it may or may not be correct. So that feels like a fundamental shift in the way we interact with machines; it feels more like the way we interact with a coworker. We typically have such little patience for machines, and yet you extend sympathy and patience and time to a human software engineer who might say, hold on, what are you asking me to do? Okay, well, I could do that, but it's going to take me until next Wednesday. That's a normal work conversation that happens, and it's kind of happening again with the agents.
Srini Iragavarapu: Absolutely. I think this also ties back to the task at hand. With Q Developer, if you are coding within the IDE, you can have a conversation; there is a chat pane that you can actually talk to. The responses are instantaneous. In this case, you could ask questions like you would ask anybody else, whether it's your peer or an online search, and Q Developer will instantaneously answer questions like, tell me a little bit about how EC2 instances work, or how can I bring up an S3 bucket? For all of those, there are instant answers, as expected; these are simple calls. But then there are tasks like migration. The other transformation agent we have is for mainframe upgrades. In fact, one of the things that we announced at re:Invent last year was developers and enterprises being able to upgrade their mainframe applications. That's a huge deal. Some of the mainframe code bases out there, legacy code bases, are thousands, millions of lines of code. In fact, in the last 20 years of my software engineering career, I have never had to work on a mainframe system. But as we start talking to customers about it, they have so much code base out there. Now, for that migration to happen, Matt Garman, the CEO, talked about it at re:Invent too: it would take you five years, or some number like that, for a large mainframe application to be migrated. What this agent is doing is shortening that time quite a bit. But to your point, it's not something that can happen in a second. It will iterate, it will have to understand what the business use is, understand what the code bases are, and then go through the journey too. So it depends on the task, how complex it is, how many dependencies you have. As an example, with the software agent that I talked to you about earlier, if you say run my tests as well to make sure that the tests are passing, then it certainly takes longer, and as a developer I know that I'm expecting this code to be generated and the tests to be run also. But if you say, I actually don't need you to run the tests because I'll do them myself, then it's going to be a shorter turnaround time as well. So it all depends on the task at hand.
Rob Stevenson: Certainly. Yeah. Now, Srini, I want to understand a little bit more about how Amazon and AWS are looking at this organizationally, the larger strategy here, because the productivity gains are well stated and well understood. And when you put it in terms of work hours and the possibility to do these huge batch tasks you might not otherwise bother to do, that makes a lot of sense. And frankly, it puts ROI onto some AI technologies in a way that your average Chief Revenue Officer can understand and sign off on. Okay, so the productivity and efficiency gain part is well understood. But I'm curious if there's more to it here for AWS. What is the long-term strategy with developing these kinds of tools?
Srini Iragavarapu: This goes back to something that we have been traditionally doing: bringing options to the customers so that the customers can define and derive benefit out of it. Even before the AI world, that is something that AWS has traditionally done with various compute options and various database options as well. Think of it along the same lines. We are innovating at all three layers. The first layer that we have is the hardware layer: there are chips like Trainium and Inferentia where you can train your own models and host your own models on compute that we provide, and that is constantly innovating. So we do want to provide that functionality to customers, to be able to bring your own models to the hardware. The second is to abstract that out and provide you with a platform where we are giving you all the models and all the platform systems, like agentic systems or guardrails or knowledge bases, where you as an enterprise or a developer can start building applications leveraging the goodness. And then the third is building our own generative AI applications and agents too. Q Developer is one of them; Q Business is another one as well. The idea is that there are customer needs in all three places, and every customer has a different set of requirements. We want to be able to cater to them, bringing the same innovation that we are bringing to ourselves to the rest of the world as well.
Rob Stevenson: Okay, Srini, it's a very exciting time to be working in agentic AI, and even for me to just be sort of speaking about it and covering it. I want to understand why now, because we've had all of this hype over the last six months or so around agentic. Is there a particular technical unlock in the last year or so that you think has led to all of this investment and advancement in agentic, specifically at Amazon?
Srini Iragavarapu: We have been at this now for a while. In fact, AI, artificial intelligence, has been in use for us across the board, all the way from Amazon.com to how we place our orders and procure hardware, and software too. At the same time, we have this working within, too. So that's one piece of the puzzle: the journey of us bringing generative AI. Again, a lot of these conversations started with agents, and all of this started with the foundational models and the accessibility of these foundational models, with compute being available. There's the innovation that we're bringing through hardware, Trainium and Inferentia, for you to train your models, and similarly being able to bring best-in-class models, like Anthropic's Claude models, Sonnet, and similarly Meta's models too. All of those are certainly adding a lot of power to what is going on, armed with how AWS enables customers, something that we have done traditionally. It's the same thing that we are doing now. The fact that we launched Bedrock Agents in 2023 is a testament to how we've been thinking about this for almost two years now. I know customers are just starting to leverage it, and there are different use cases, but the idea of us using generative AI for a coding assistant, or being able to build a platform, started back in 2023 and even earlier, and that's something that we have been doing for a while now.
Rob Stevenson: Gotcha. So yeah, this is not just hype at AWS. This has been part of the plan.
Srini Iragavarapu: Again, the example that I gave you earlier is about how we used it internally, and we've seen the benefit of this. Another use case, not just code transform: Prime Video, which is an internal team, uses Q Developer itself, and they have been seeing a 50% acceptance rate on the code for a while. So internally we have been using all these tools, and we've seen value quite a bit, and we've actually seen customers do this too. I'll go back to the test agent that we've built. Boomi is a company that has been leveraging some of this. Rather than doing manual testing, they're using the Amazon Q Developer software agent and the test agents that we have, and it's saving them like 15% in development costs overall. So customers are seeing real benefit, we internally are seeing real benefit, and the innovation that is happening too, it's very exciting.
Rob Stevenson: Srini, now we're here at the beginning of the year, and that's typically a good time to ask folks to sort of project what we can expect over the next calendar year. And you know, the fun thing with you is that I don't have to ask you to rub your crystal ball; I can kind of ask you to rub your crystal roadmap a little bit. So I'm curious, when we think about agentic as a technology writ large, whether that's within AWS or without, what do you think we can expect to see over the next year or so?
Srini Iragavarapu: All three layers. I think there is the classic "here is what, from a pure roadmap standpoint, I could build," which anybody observing is probably able to project. And then there is the "wait, this actually opens up so many different opportunities" for customers and for everybody else as well. So the imagination and the creativity is what is actually going to happen. This goes back to: the models are becoming more powerful and more available, faster, a lot more dependable. There's the tooling on top of these models that Bedrock provides, all the way from guardrails to knowledge base indexing and the multi-agent systems that we have. The applications themselves are becoming a lot more powerful. If you and I were chatting about this four months ago, before re:Invent in November and December 2024, I wouldn't have talked about the test agent or the code review agent or the documentation generation agent; at that time we didn't have any of those. In fact, at re:Invent we announced that you can now ask Q Developer to do a code review for you, and it will review the code, provide recommendations on where your code isn't performant or is buggy, and then you can go through that. So the number of tasks that are getting solved through these agents is going to continue growing as well. And to that end, the imagination of how we can solve some of these problems, and some that we're seeing from customers, goes all the way from the medical research that I was talking to you about to, in fact, a company, Streamletit, that does invoice processing, which is a very, very manual task that anybody has to do. So to your earlier point: as customers see the benefit, and the platform makes it easier and easier for everybody to leverage this, and we democratize it like we've been doing, it's the creativity that will start coming in to solve different sets of customer problems in all walks of life.
Rob Stevenson: It's exciting time to be in the space Srini and it sounds like you're doing awesome work over there. Hey, this episode has really flown by. I mean it. We've just covered so much ground here. So while we creep up on optimal podcast lengthier, I would just say here at the end. Thank you so much for being here, Seni, and sharing with me. Everything you're working on's been really fascinating learning from you today.
Srini Iragavarapu: Thank you very much, Rob. Yes, spot on. This certainly is an exciting time just to be part of the journey from the inside, and to see how customers are actually starting to take the benefit out of this. Ultimately the end user, the end customer, is the one who's taking the benefit, all the way from cost savings to quality, to being able to see creative applications and productivity go up as well. This is a beautiful time for us to be around.
Rob Stevenson: I love it. I don't think we're going to find a better way to end than that. So I'll just say thanks for being here today, Srini.
Srini Iragavarapu: Thank you.
Rob Stevenson: How AI Happens is brought to you by Sama. Sama's agile data labeling and model evaluation solutions help enterprise companies maximize the return on investment for generative AI, LLM, and computer vision models across retail, finance, automotive, and many other industries. For more information, head to sama.com.