How AI Happens

GM for Amazon CodeWhisperer, Doug Seven

Episode Summary

Doug explains how Amazon CodeWhisperer was designed to save engineering time, and reflects on what it means for something to be truly "generative."

Episode Notes

Generative AI is becoming more common in our lives as the technology grows and evolves. There are now AI companions that help developers execute their tasks more efficiently, and Amazon CodeWhisperer (ACW) is among the best in the game. We are joined today by the General Manager of Amazon CodeWhisperer and Director of Software Development at Amazon Web Services (AWS), Doug Seven. We discuss how Doug and his team are able to remain agile in a huge organization like Amazon before getting a crash course on the two-pizza-team philosophy and everything you need to know about ACW and how it works. Then, we dive into the characteristics that make up a generative AI model, why Amazon felt it necessary to create its own AI companion, why AI is not here to take our jobs, how Doug and his team ensure that ACW is safe and responsible, and how generative AI will become common in most households much sooner than we may think.

 

Key Points From This Episode:

Episode Transcription

Doug Seven  00:00

If I can take that maybe three-to-five-minute job and make it a one-to-two-minute job, and they do that a bunch of times — by getting rid of, like, all these for loops in the class files and some of the basic stuff — now you have more time to do the things that are more heavy cognitive load.

 

Rob Stevenson  00:23

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens is a man with a ton of experience in our space. Currently, he is the Director of Software Development and the General Manager of Amazon CodeWhisperer, Doug Seven. Welcome to the podcast! How the heck are you today?

 

Doug Seven  01:04

I'm feeling pretty good today. I'm glad to be here. I'm always excited to talk about CodeWhisperer, so I'm looking forward to this.

 

Rob Stevenson  01:10

Yeah, me as well. I feel like my listenership will probably know a thing or two about the product, or at least be familiar with it. We can get into all that in just a minute, but I want to get to know you first, Doug. So first of all, we're here, at time of recording, at the end of a Monday. What is your day-to-day like at the director level at a company like Amazon? Is it just meetings? Is it lots of strategy? Do you get to read any code yourself? How do you kind of fill your time?

 

Doug Seven  01:34

Wow, that's a good one. Mondays are a particularly interesting day because, you know, Mondays are sort of the start of the week, and so it's actually a lot of meetings, because we are using the day to kind of see where we ended the week before and see what we're doing in the week coming. So in some ways it's almost like a developer doing a daily standup, but at a much bigger scale. And so it takes, you know, half the day to go through a lot of that stuff and to go through all the different products we work with and things like that. And then my afternoons are usually reserved for more product-specific activities.

 

Rob Stevenson  02:05

Gotcha. So what is kind of the makeup of your team right now?

 

Doug Seven  02:08

The makeup of the team — we have a bit of everything in this space. So CodeWhisperer sort of straddles the world of being a developer productivity tool and a generative-AI-based product. And so we're really made up of, you know, you could almost imagine what this would be: product managers who are thinking about what our product strategy is, where we're going, understanding our customers, understanding the competitive space that we're in, and identifying that direction for us; our engineering team, who's responsible for building that product, building the thing that you touch and interact with; and then a science team, who is figuring out all of this AI stuff and all the large language models — how do we make these things work, and how do we continue to innovate and create new capabilities in this space?

 

Rob Stevenson  02:52

Amazon, obviously, is a huge company — loads of employees, north of 80,000 at this point, I should think. And I'm curious, because software engineers are an agile breed: they like to build, they like to get stuff done, they like to break things. At a big company, are you able to kind of remain agile and ship things, even though there are so many employees and there's such a huge scope of operation going on there?

 

Doug Seven  03:11

Yeah, I think the crux of, you know, how Amazon and AWS work has been pretty well documented in our concept of the two-pizza team, and really the idea that big effort can come from small groups of people with really intentional motivations around what they want to go build. And so it's actually surprisingly easy to remain agile in a large organization, a large company, because you sort of continue to operate in the small pockets — even teams that build big products can be made up of multiple smaller teams that are operating sort of in concert with one another. I've worked in a lot of big tech companies, and so I think, you know, the idea of agility inside big tech is still very, very much the norm.

 

Rob Stevenson  03:50

So I'm not familiar with the two-pizza team idea, can you give us a crash course on that?

 

Doug Seven  03:55

Oh, well, I've got some websites to direct you to. So the idea started with Amazon quite a long time ago, when Jeff Bezos was still very heavily involved in Amazon's day-to-day operations, and the idea is that your team should be no larger than what you can feed with two pizzas — that's a good group of people. And if you really think about it — I've often, throughout my career, had a similar sort of metaphor of seven plus or minus two as a good team size, partially because my name is Doug Seven, so I like to tie it back to me — the idea is that it's big enough that you have the capacity to get a lot of stuff done, but small enough that you have the ability for everyone to be heard and to participate and have some voice in what you're doing. And so this idea of, oh, well, your team should be able to be fed by two pizzas is a great sort of analogy for how you should think about team size.

 

Rob Stevenson  04:42

Gotcha. Okay, thanks for explaining that to me. I apologize if that's well-tilled content soil at this point and the explanation is already out there, but that's what you get when you don't have a technical person hosting — I'm not one, but humble podcasters are here, Doug, so thank you anyway for looping me in; that is helpful. But yeah, I want to make sure we talk about CodeWhisperer — we're getting there, I promise. First, I would just love to know, Doug, how did you come to be in this role where you are overseeing the product?

 

Doug Seven  05:07

Yeah, I kind of have a long tenure in advocating for developers and developer productivity and building developer productivity tools in various incarnations of my career — from running a startup where I was more of, like, an evangelist, to helping publish content and share information with developers about how to do software development and providing educational materials, to actually building some of the tools that developers are using to build their software. And then, as you mentioned, I kind of transitioned from that into building large-scale cloud services and things that are leveraging AI in different capacities. And so that's kind of the culmination of all the right ingredients that bring you to, hey, let's build a developer productivity tool based on large-scale AI services and provide it as a cloud service — it's all those ingredients coming together. So I joined AWS just a little over a year ago; the CodeWhisperer product was already envisioned and that project was already started, and I joined to help bring it to market and continue to evolve it.

 

Rob Stevenson  06:05

Gotcha. So like I said, I think that people tuning in will have some passing familiarity, at the least, with CodeWhisperer, but I would love to hear it from the horse's mouth. So what is CodeWhisperer? Let's just start there.

 

Doug Seven  06:16

So in its simplest form, we would describe CodeWhisperer as an AI coding companion. So for developers who think about things like pair programming, where two people are going to sit down and kind of attack a problem together and bring both of their knowledge to that problem — CodeWhisperer is the AI version of that coding companion, of that pair programmer you're working with, where as you're writing code, it's able to suggest the code that you might want next. So it's kind of thinking ahead of where you're going, looking at what you're doing and saying, oh, I see what you're doing, well, here's the code you need — with the idea being that if we can suggest the right code, you know, maybe we make a two-minute job a one-minute job, or a five-minute job a three-minute job, and if we do that enough times in the day, you're going to be that much more productive. And that ultimately means you're going to be able to get your ideas from the whiteboard to product faster, and get those products shipped faster.

 

Rob Stevenson  07:08

So is it similar to predictive text, or is the UI more like, on the other side of the screen, you have all these examples of directions you might want to go?

 

Doug Seven  07:17

The predictive text analogy is probably a pretty good one, in the sense that if I'm working in my development environment and I'm writing code, there are a couple of different ways that CodeWhisperer will work. In one case, very much like predictive text, as I'm writing a line of code you'll see this sort of faint gray text appear as you're typing — it's the suggestion of how you might finish that line of code or that block of code you're working on. So it's sort of CodeWhisperer looking at what you're doing and trying to predict the code you need and make that suggestion, so that instead of having to type all of that, you can just press tab and that code goes into your IDE. The other scenario — and these work interchangeably, you don't do anything different — is, you know, hopefully a common practice for most developers, which is to write comments in your code for what the code is supposed to do. So the idea is you start by writing a comment — here's my intent, I'm going to write a function that does this, and this, and this, and this — and CodeWhisperer can read that text, that natural language text, and then suggest the code that fulfills that intent. And so the idea is that I could write my intent as a comment, and then CodeWhisperer will generate that code for me, saving me the trouble of having to write that code, and in many cases actually saving me the trouble of having to go look something up — because I know what I want to accomplish, but I'm not sure exactly how to accomplish it, so I'd have to go search the internet, or look up some documentation, or go phone a friend to figure out how to solve the problem. CodeWhisperer is that friend, doing it there for me, right as I'm typing.
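To make the comment-driven workflow concrete, here is a minimal, hypothetical Python sketch: the comment is the kind of intent a developer might type, and the function below it stands in for the kind of suggestion an assistant like CodeWhisperer could offer. The function name, fields, and logic are invented for illustration, not captured product output.

    # Write a function that returns the average order value for a list of
    # orders, ignoring any orders that were refunded.
    def average_order_value(orders):
        # Keep only the orders that were not refunded.
        valid = [order for order in orders if not order.get("refunded", False)]
        if not valid:
            return 0.0
        # Average the totals of the remaining orders.
        return sum(order["total"] for order in valid) / len(valid)

A developer would typically review a suggestion like this and accept it with a keystroke (tab, in the flow Doug describes) rather than typing it out by hand.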

 

Rob Stevenson  08:42

So in an example where you would be writing your own annotation — the following code is meant to accomplish X — is it then going to be searching, like, language-matching to other annotations? Or how is it coming up with the fulfillment of your intent?

 

Doug Seven  08:54

So behind the scenes for CodeWhisperer, there's a large language model that's capable of understanding the natural language and then deriving from that natural language what code you want. So it's unlike a library, where you might say, oh, this matches, I'm going to go search for annotations and pull some code snippets and give them to you. The large language models are really more capable of interpreting what they understand and then producing the code that you need. So in effect, they're writing new code based on what they know about code. So if you train a large language model on a programming language and give it enough examples of how to write different kinds of applications, it can derive from your intent what code you need.

 

Rob Stevenson  09:36

Okay, that leads me to an interesting point, which is: how "generative," in quotes, is this? Meaning, is it only going to suggest code that has previously existed in production?

 

Doug Seven  09:50

That's a great question, actually. And the answer is no, it's not going to only generate code that it's seen before, although that can sometimes be the case. What it's doing is much like if you were to learn something: eventually you learn enough about that something that you can generate new things, right? So if I learn how to code in a particular programming language, once I know how that programming language works, I can stop following the examples and start creating my own code. The language models work the same way. They've seen enough examples — I mean, we're talking about billions of lines of code that we've trained these models on — that they can generate completely new code that has never existed before in the world, based on what they know how to do. Now, one of the fascinating things about how large language models work is that sometimes they will produce something they've seen before, much like maybe you and I would do if we were working together on a problem: oh, I know how to solve this, I've seen it before, and maybe I'm going to go to a website, copy something I've seen before, and put it in there. The language models can sometimes do the same thing. And so for CodeWhisperer, we've introduced a capability called the reference tracker, which is designed to help developers understand when the model is producing code that it's seen before, particularly if that code is open-source-licensed code. It will say, hey, we've seen this code in our training dataset before; here's where it comes from, here's the open source project or repo it comes from, here's the license it's under — we just want to make sure you know that before you decide if you want to use this code or not.

 

Rob Stevenson  11:19

Is that the main reason why you would need to know if something was brand new or had been used before — the case that it might be under license?

 

Doug Seven  11:29

Yeah, the primary reason that you would want to know if it was replicated from the codebase it learned from is if it's copyrighted code, right? So copyrighted code would be anything under a license in an open source repo — maybe an MIT license or an Apache license or something like that. And that doesn't mean you can't use it; you're totally okay to use it — those licenses will tell you what the requirements are for using it. So we want to be able to tell you, hey, there's a particular line of code — maybe you get a code suggestion that's 20 or 25 lines of code, and we say, hey, in this, there's one line of code that looks just like this code from this open source repo. So we want to make sure that you know, A, which part of the code that is, B, where that code comes from, and then, C, what license it's under, so you can make a decision about what you want to do with it.

 

Rob Stevenson  12:13

Okay, gotcha. Are there other reasons why one might want to know whether this is pre-existing code or brand new code?

 

Doug Seven  12:19

Maybe, but that's really the primary reason — knowing if you're using something that's copyrighted.

 

Rob Stevenson  12:23

Got it. I'm curious to hear you kind of wax philosophic on "generative" — and this is my favorite part of the episodes, when we wax philosophic, because I can really put my liberal arts degree to use. But does something — like in the example of CodeWhisperer — need to generate brand new, never-before-seen code to be considered generative?

 

Doug Seven  12:44

I think when you look at how the language models work, being able to call them generative isn't restricted to just the fact that they've created something never before seen. If all they did was repeat things they've seen before, maybe then we'd have some way of saying, hey, is this really generative or not? Versus, you know, the reality is that it's a fraction of the time — a small fraction of the time — that the language models are generating something that is a near-identical match to something they've seen before; the majority of the time, what they're generating is something that's completely new. So in that regard, I would say yes, these language models really are generative, in the sense that they're taking what they've learned in the world and applying that to create something that meets a need or requirement that has never existed before. You know, and again, maybe in a small fraction of cases some portion of that has existed before. And code is an interesting thing — if we're going to wax philosophical about it — like, there's only so many ways to write a for loop or to, you know, handle some of these things. And so some things will often look very familiar, very repetitive. But the totality of what you're building is, in fact, new and different and novel compared to something that was there before. You're not just recreating some sample; you're taking what you've learned and creating something new.

 

Rob Stevenson  14:04

Yeah, the whole is more than the sum of its parts, that old chestnut, right. But that is well taken — that there are lots of solved problems, mostly solved problems, right, and that you don't need to reinvent the wheel every time you go to write some code, and that if this thing has operated before, this line of code has worked, then why would you need to rewrite it?

 

Doug Seven  14:23

Yeah, and I think there are lots of solved problems, there are lots of unsolved problems, and a lot of the problems that are yet to be solved are going to be solved with constructs that already exist. You know, for loops are kind of my go-to example, because of course for loops have existed forever — when I was a kid, we'd go to the computer store and we'd write, you know, FOR-NEXT loops and have our names up on the screen 1,000 times. So it's a construct that has existed for a long time, but you use that construct to create new things, to solve new problems. And so lots of problems are yet to be solved, and we'll use generative AI to help us solve them. In this case, it's about just helping to solve them faster.

 

Rob Stevenson  14:59

This feels like an important use case in the entire industry right now — this idea of a copilot, an AI copilot, that can be trained on a company's own data set, that can be running next to you while you write code, or while you write copy, or while you put together your Salesforce plumbing, whatever it is. The AI copilot feels like it's really important. I don't want to call it a trend, but this is happening a lot right now; it's a real use case. What made it clear, do you think, that this was an important product for Amazon to put together?

 

Doug Seven  15:33

So the idea for CodeWhisperer came around some number of years ago, and part of this comes from two things — there are kind of two points converging, or maybe three points converging. One is that we employ a lot of software engineers, and so we're acutely aware of the challenges that software engineers face on a daily basis: the challenge of keeping up with an ever-evolving landscape of programming languages and frameworks that they have to know, the sensitivities around managing open source code, dealing with the security issues related to code — you know, all these things are problems that every developer faces daily. So we knew that from our own experience, but we were also hearing it from our customers, who were coming to us and saying, we want to build applications and run them on AWS, how can we be better and faster at doing this? And while all that was happening, the third part of that convergence was that the capability of large language models was rapidly increasing, and then coming down in price. The ability to create a large language model, from maybe 2018 or 2019 to now, has just exploded — the size and capability of the models has really changed everything. And so when you start to put all these things together, we recognized the opportunity to say, well, what if we could apply these large language models to this problem that we are experiencing ourselves and that our customers are telling us about? We could make the process of building applications easier, because we can help write code. And in particular, software developers often crave new technology — I think it's the nature of being a software developer, you're always craving new technology — so it's a great place to use this technology in a really impactful and meaningful way, really early.

 

Rob Stevenson  17:13

Yeah, that makes sense. Do you think it is useful for CodeWhisperer to be trained on a company's own code base? Or is it sufficiently effective having been trained on whatever it is that you're feeding it over there?

 

Doug Seven  17:27

Yeah, so the way I think about this is — I'll spoil my own answer by saying I absolutely think it's important for CodeWhisperer to be able to understand your private code, learn from that private code, and be able to make suggestions based on that private code. And I get there by way of saying that's the number one thing our customers have been asking us for since the day we put this out there. The analogy is a little bit like this: if I start using CodeWhisperer — CodeWhisperer knows 15 different programming languages, it knows all kinds of different frameworks and things, it's really capable of doing a lot of different things. And it would be like if you had a large company, and you had large products, and you built with a lot of internal libraries and internal APIs and things like that, and you hired a new developer: on day one, they come in with knowledge of languages, they come in with knowledge of frameworks, they're very capable, but they don't know anything about your codebase. And so they're productive, but only to a certain degree; they can help, but only to a certain degree. And so the idea of customizing CodeWhisperer — which is a new capability we recently announced — is the ability to point CodeWhisperer at your code base, and have CodeWhisperer kind of reason over that code base and get smarter about what your code can do, so that it can make the right kinds of suggestions. That's like having that same software engineer three years later: they know everything about your codebase, they know how to do things, they're really efficient, they're really effective, and they can build all these kinds of things.

 

Rob Stevenson  18:52

How have you seen recommendations change or improve once given access to a code base?

 

Doug Seven  18:59

It's amazing, and one of my favorite examples sums it all up. We have an internal channel where the developers using CodeWhisperer across the company can kind of ask questions or share ideas or provide feedback, ask for features, whatever it might be. And somebody recently posted — we have a couple of different customizations we use internally, that's how we kind of tested this capability out — and one of the engineers who was using one of the customizations came on; it was his first time using it, and he posted how excited he was. He went to write some unit tests for some internal functionality — we have a very specific way of doing it, using some internal libraries — and CodeWhisperer generated the unit tests perfectly in line with our internal guidance and our internal direction. He was just blown away that it knew how to do that. That's cool. And frankly, there's a little bit of maybe fatherly pride that comes from, like, this is working, people like this, this is really making a difference — and not only are people enjoying it, but they're also getting that productivity gain. Well, that's the whole reason we built this; that's what we want to get out of it.

 

Rob Stevenson  20:06

Yeah, that's got to feel good. Knowing it works isn't just an abstract thing — you're like, yes, it works, fist bump at the desk, right? You know it works. But then you hear those sorts of qualitative experiences and it makes it all worth it, no?

 

Doug Seven  20:17

Yes, absolutely. This sounds kind of weird, but what's fun is when I hear people who are surprised that it does the things it does — partially because this idea of generative AI is still relatively new for a lot of people. It's not something that's commonplace. And even for developers, most developers aren't using generative AI in their development yet today. And so there's the sort of surprise factor of how capable it is, where people are like, oh my gosh, you know, yeah, I read the marketing material, but wow, this is amazing.

 

Rob Stevenson  20:46

So this is a very foundational response to the "is AI taking your job" question, so I apologize for restating it, but it is worthwhile: AI is not coming for your job, it's coming for the parts of your job you hate, right? It's coming for the things that should be automated, and the things that only you can do will remain your domain — and you'll be able to do more of them because of tools like CodeWhisperer, for example. So one example you gave was, okay, if this code exists out there, no need to rewrite it, here it is, go ahead — that saves you, you know, a few minutes. Could you share maybe some of the other examples of those parts of the software engineering role that maybe engineers wish they didn't have to do, that CodeWhisperer is coming for?

 

Doug Seven  21:25

Yeah, I think you're exactly right. The idea here is that AI will replace the mundane parts of your job, the parts that are repetitive — as almost all innovation in the world has done, right? It's replaced the things that are repetitive and mundane and then allowed you more capacity to do the things that are novel and interesting. Another one of my favorite examples of exactly this kind of scenario: one of the common activities I would do as a developer is write a class file to represent a data object. It's not complicated, it's not rocket science, it's pretty mundane, and it kind of follows a pretty set pattern. And if I just say, hey, I want a class to represent this object, CodeWhisperer can generate that code for me. And so I go back to the same concept I was talking about before: if I can take that maybe three-to-five-minute job and make it a one-to-two-minute job, and I do that a bunch of times — by getting rid of, like, all these for loops in the class files and some of the basic stuff — now you have more time to do the things that are more heavy cognitive load: solve the novel problem, figure out the new thing, solve the problem that's an unsolved problem today, versus spending all your time on this sort of mundane, repetitive stuff. Unit tests are another great example. Nobody really likes to write unit tests; they're just sort of a necessary thing. So if I can say, hey, CodeWhisperer, write a unit test for this, it gets that done for me really quickly, and I can do something that's more interesting.
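As a rough illustration of the kind of boilerplate Doug is talking about — a small class file representing a data object, plus a unit test for it — here is a hypothetical Python sketch. The class name, fields, and test are invented for this example; they stand in for the sort of code an AI companion might draft from a one-line prompt or comment.

    from dataclasses import dataclass
    import unittest

    # A class file representing a simple data object -- repetitive,
    # pattern-following code that an assistant can draft quickly.
    @dataclass
    class Customer:
        customer_id: str
        name: str
        email: str

    # Write a unit test that checks Customer fields are stored correctly.
    class TestCustomer(unittest.TestCase):
        def test_fields_are_stored(self):
            customer = Customer(customer_id="c-1", name="Ada", email="ada@example.com")
            self.assertEqual(customer.customer_id, "c-1")
            self.assertEqual(customer.name, "Ada")
            self.assertEqual(customer.email, "ada@example.com")

    if __name__ == "__main__":
        unittest.main()

The point is not that this code is hard to write, but that it is time a developer does not have to spend on it.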

 

Rob Stevenson  22:49

Yeah, absolutely. Now, anytime we get into automation, generative AI, et cetera, et cetera, it's worthwhile to speak about guardrails and what kind of security is being put in place. So what's going on over there with CodeWhisperer in terms of, like, the guardrails you're putting up? How are you making sure people stay on the straight and narrow?

 

Doug Seven  23:05

So this was a really important thing for us kind of from the very beginning. When we approached the idea of building AI that can participate in the software development process — essentially, building AI to collaborate with you on software development — taking a very responsible approach was really important to us. And so from the very beginning, we had a tenet around the responsible use of AI, and we wanted to do a few things. One, which I mentioned earlier, was this idea of the reference tracker: if we're going to suggest code to you that exists out in the world, and particularly if it's under some kind of license, we should make you aware of that, and we shouldn't suggest that code to you unless we're telling you where you can find that code and what you can do with it, to make sure that you're coding in a responsible way. Similarly, if we're going to participate in software development with you, sort of in this collaborative capacity, we want to give you the tools to make sure that the sum of our collaboration is safe and secure. CodeWhisperer will generate great code, and we do a lot of things on the back end to detect security issues or bias issues or toxicity and things like that, to make sure that the code coming out of CodeWhisperer is really good-quality code and really appropriate code. But as soon as it becomes a collaboration with someone else, you kind of lose control of what could happen with that code, right? Now a human and an AI are working together, and mistakes can be made between the two. And so we want to put tools in the developer's hands to be able to find issues. One of the other features of CodeWhisperer is a code-scanning capability, where you can go in and run the code scan and we'll look at all the code — not just the code CodeWhisperer generated, but all the code, even code from two years ago — and identify if there are any security vulnerabilities, or maybe some cryptography errors, or places where you're not following best practices, things like that. And we'll identify those for you and say, hey, we found an issue here, and you can go fix that. This is the same kind of thing you might use in the CI/CD pipeline; we're just sort of shifting left and putting it in the developer's hands to make sure we can be as responsible as possible. And then, as I mentioned, on the back end, as we're training the model, we do a lot of work to filter the training data to make sure — you know, we don't want a garbage-in, garbage-out problem — we want to make sure that data is of high quality. And then we want to put the right kinds of capabilities in place as we're generating code to make sure that nothing unwanted is coming out: nothing that has toxic language, or maybe some bias implications, or things like that. So we want to put all those protections in place. That was hugely important to us from the day we started.
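To make the code-scanning idea concrete, here is a hypothetical Python example of the sort of cryptography issue a security scan might flag, alongside a safer alternative. This is an illustrative sketch under assumed conditions, not actual CodeWhisperer scan output or a statement of which rules the product checks.

    import hashlib
    import os

    # A scanner would typically flag something like this: MD5 is a fast,
    # unsalted hash and is not appropriate for storing passwords.
    def store_password_insecure(password: str) -> str:
        return hashlib.md5(password.encode()).hexdigest()

    # A safer pattern: a salted, deliberately slow key-derivation function
    # from the standard library.
    def store_password_safer(password: str) -> str:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt.hex() + ":" + digest.hex()

Catching this kind of issue while the developer is still in the editor is the "shifting left" Doug describes, rather than waiting for a CI/CD stage to fail later.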

 

Rob Stevenson  25:26

Yeah, yeah, of course. When you're looking to prevent the garbage-in, garbage-out problem, what is the "garbage in" you're trying to prevent?

 

Doug Seven  25:33

Yeah, when we're preparing the training dataset for the model training, we really look for a lot of things. Some of it's just deduplication, to make sure we have a good clean data set; some of it's, you know, searching for known toxic things — toxic language or biased code examples, things like that. And to some degree, there's a bit of program-analysis work that we would do to just make sure that the code that's going in is actually functional, working code. And so there's a lot of work that goes in, in all these different capacities, on that training data set to make sure, before we start the model training, that it's a really good set of training data.

 

Rob Stevenson  26:07

Sure, sure. So this is a product by software engineers, for software engineers, and as such it will remain, I imagine. But do you think that there is a possibility that you wouldn't need to be a software engineer to use a kind of code-writing copilot? It may not be CodeWhisperer, but is that just, like, the natural progression of this kind of tech?

 

Doug Seven  26:27

I do think this is what generative AI is going to do for us, right — the ability to use this intelligence to help us in our daily tasks in all kinds of different ways. We started with the software developer, and particularly the professional software developer — someone for whom, you know, that's what they get paid to do, and they want to use tools like this to make them more effective and faster. But the same technologies can be used in different tools that meet the needs of different people. So if you're someone who maybe is a subject matter expert — you're not really a software developer, but you know enough to kind of build some frontline applications or things like that — this kind of technology can just make that that much easier. Because, if you think about one of the ways I was describing using CodeWhisperer — I'm going to write a comment in natural language, and it's going to generate the code — the idea that I can describe my intent and then the AI is going to generate the code I need falls right in line with the idea that I don't necessarily have to be super technical to do this. And some of that will come down to whether we limit the scope of what you're doing — so if someone were to build a tool around a specific scenario, maybe you limit the scope of the kind of code you create to only a certain set of code. But there are a lot of different possibilities with this. I think we're going to see generative AI — we're already seeing it today — continue to get infused into almost every workload, regardless of what kind of professional you are. Or even if you're not a professional, there are various other tools you might use; I think we'll see generative AI play a role in all of that.

 

Rob Stevenson  28:00

I think so as well, Doug — I don't think anyone would bet against that possibility, having seen some of the awesome stuff coming out, especially within CodeWhisperer. So, hey, we are creeping up on optimal podcast length here, Doug. This has flown by, but it's because I love chatting with you. So we have to wind down here, but at this point I'll just say thank you so much for being here, Doug, and sharing with us about the product. I've loved learning about it today.

 

Doug Seven  28:19

Hey, Rob, this was great. I love the philosophical questions. Sometimes, you know, you've just got to kind of think about how these tools apply and what you do with them. But it's great to have the conversation. I really appreciate it.

 

Rob Stevenson  28:31

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.