How AI Happens

Carrier Head of AI Seth Walker

Episode Summary

Using Data Science to Solve Business Problems with AI with Seth Walker Episode 89: Show Notes. Today on How AI Happens, we are joined by Seth Walker from Carrier, the company behind modern air conditioning, to discuss how he and his team use an agile, almost chaotic approach to solving business problems with AI. We delve into how to measure success when it comes to building AI models before our guest stresses the importance of prompt engineering skills. Finally, Seth tells us about all of the new AI inventions he is excited about.

Episode Notes

Quotes:

“In many ways, Carrier is going to be a necessary condition in order for AI to exist.” — Seth Walker [0:04:08]

“What’s hard about generating value with AI is doing it in a way that is actually actionable toward a specific business problem.” — Seth Walker [0:09:49]

“One of the things that we’ve found through experimentation with generative AI models is that they’re very sensitive to your prompting. I mean, there’s a reason that prompt engineering has become such an important skill to have.” — Seth Walker [0:25:56]

Links Mentioned in Today’s Episode:

Seth Walker on LinkedIn

Carrier

How AI Happens

Sama

Episode Transcription

Seth Walker  0:00  

A good data scientist can build strong models, they can do really good coding, they can really articulate problems from a technical perspective and be efficient and optimize processes from a compute perspective. A great data scientist, and one that is much rarer to find, is one who can go talk to a business partner and explain it to them in such a simple way that the business partner walks away understanding what the solution is.

 

Rob Stevenson  0:24  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Okay, hello again, all of you wonderful artificial intelligence professionals out there in podcast land. It's me, Rob, here with another classic installment of what I hope is your favorite technical AI podcast, and I have a great guest for you. He has had roles in data science and research at loads of different organizations. Currently, he is the Head of AI over at Carrier: Seth Walker. Seth, welcome to the podcast. How are you today?

 

Seth Walker  1:16  

I'm doing all right. Thanks for having me. Really excited to be here.

 

Rob Stevenson  1:20  

Yeah, me as well. And you shared with me a moment ago that Seth is the Egyptian god of chaos, which I did not know. Maybe not the best approach for AI, right? Like, a little bit of chaos, and maybe you build in some attention mechanisms to make sure things don't get too off the rails. How do you weave chaos into your role?

 

Seth Walker  1:38  

Yeah, absolutely. You know, I always joke about that, because throughout the course of my career, a lot of the AI teams that I've worked on have been very small affairs. I mean, AI has historically been something that's a little more experimental and speculative for companies looking to invest in it, and sometimes that level of investment can be a little bit tepid. And as we know, over the last couple of decades, maybe slightly less than that, since AI has really taken off, there have been some significant ups and downs. Companies have invested significant amounts of funds into the creation of AI teams and AI solutions that frequently did not pan out. So I always joke about being kind of chaotic in my approach only because, as a small team, you kind of have to be scrappy, especially when it comes to data science solutions. Data science solutions often require you to be very creative in your approach, and they require you to wear a lot of different hats, because a lot of times the best solutions are ones that integrate into systems and products where you may not have the skill sets on your team to enable those types of integrations. And so you really have to be all over the place in some ways, able to pivot from one thing to the next, and you never know when your models are going to fail, so you always have to pivot. I joke that when I say we are agile, we mean it in the truest sense of the word. A lot of teams approach agile in a way that makes them the opposite of agile: they use the agile tools to such an extent that it almost becomes more waterfall in nature, right? And so we try to be truly agile on our team.

 

Rob Stevenson  2:30  

They're agile, as long as the agility requires them to use a very specific set of tools and approaches, right. So, you were on stage at AWS re:Invent, and you spoke a little bit about how the CEO over there kind of gave a mandate to establish a center of excellence on the AI side of the house. I want to get to that, but can we first maybe hear a little bit about the company, for those who maybe aren't familiar with Carrier, and the specific kinds of work you're doing there? I would love to know a little bit more about it, and then how you wound up there too.

 

Seth Walker  3:47  

Yeah, absolutely. So Carrier is largely, I think, thought of as an HVAC company. The founder, Willis Carrier, over 100 years ago invented the modern air conditioning system, the technology behind modern air conditioning, which in many ways is probably one of the most disruptive technologies that has ever been created. And in fact, we would not be speaking here today if it weren't for air conditioning, because the amount of heat that data centers and GPUs produce is not practical; the systems have to be cooled, and they have to be cooled efficiently in order for the actual hardware to properly function. So in many ways, Carrier is kind of the necessary condition in order for AI to exist from the get-go. But in recent years, Carrier has gone through a lot of transformations: being part of a larger company and then being spun off into a more narrow company that was focused on not just HVAC but also refrigeration and fire and security. And you may have heard recently that we are engaging in a divestiture of our fire and security business and part of our refrigeration business, and at the same time acquiring a European HVAC business called Viessmann, which has a really large footprint in the residential European market focused on heat pump technology. That's more eco-friendly; it's electric-powered and uses a new type of technology for the movement of heat and the efficiency of heating homes. I always joke that what we do is just move air, right? We move heat. It's just the laws of thermodynamics. But AI can be applied in so many different ways here at Carrier, and I can talk about those real quick. Do you want me to get into that, or would you rather go to the background first?

 

Rob Stevenson  5:24  

Yeah, let's get to know Seth. What's the Seth of it all?

 

Seth Walker  5:26  

Okay. So my background actually is, I think, very interesting and not typical for a lot of folks that go into AI and data science. When you hear a lot of AI professionals, you hear about their engineering background, their computer science background, or physics; I've seen a lot of physicists get into the AI space. And those are certainly really great backgrounds to have in order to be successful in creating AI models. But I come from a more social science background. Now, for those of you that don't know, they call things like physics and biology and chemistry the physical sciences, the hard sciences. But I jokingly say that at least physics has largely defined laws, right? If I throw a ball, we can calculate the trajectory of that ball. Now, show me a model that easily defines the behavior of people, and then we can talk about complexity. So in many ways, the social sciences are the hard sciences in terms of the level of complexity of the kinds of phenomena that you're trying to model. But another thing that people don't know is that when you get into the graduate level of the social sciences, it becomes very practical, statistical, machine learning kinds of methods. Because at this point in the stage that we are at as a civilization, in terms of the scientific method, we want numbers to back up any of the claims that we're making; we're in the world of falsification and statistical analysis. And so even in the social sciences, whether it's through surveys or other types of experiments, or whether it's using observational data, a ton of machine learning goes on in the background. You cannot be successful in a graduate program in the social sciences without it. And so I think, in many ways, it has actually made me a better data scientist, because a lot of the things that I've historically looked at in my career, like productivity enhancements, are fundamentally human behaviors, right? You're trying to understand the way that humans behave, how they react to things, and how we can predict what they're going to do. And so it lends itself very nicely to a lot of the applications that we have from a productivity and operations standpoint. But after I left grad school, I went to the healthcare segment, where I worked for many years, a very complicated area and industry. And then after about five or six years in the healthcare space, I moved to Carrier, very much a believer in the ESG goals that Carrier has and in trying to do better for our planet. And yeah, it's been quite the run ever since.

 

Rob Stevenson  7:47  

It is such a unique background. It's so common to have engineering at minimum, and yeah, lots of physicists, I've noticed that as well. Definitely, like you say, STEM, hard sciences. But it's good to know that there are other paths into the space, and it's an exciting time to be in this space. So I do love hearing about that, because there's maybe not the well-trod path through this function like there is in other departments in the business, in sales, for example, where there's sort of a pathway before you. But yeah, it's exciting; I do love to hear that there are all these different ways into the function. Now, you have primarily served in kind of data science roles prior to this. And I feel like for a while now, every company has hired a couple of data scientists just because it's like a sexy thing to add: you've got to have a data scientist. Now, having people with the relevant titles on board is very different than establishing a focused center of excellence to "unlock the business value of AI," to quote your CEO. And so I fear that people out there listening may be in a situation where they are a very highly skilled ML engineer or data scientist, and maybe their company isn't really positioned to take full advantage of what they can accomplish with the tech. So I would love to hear a little bit about how you went about that mandate: how do we unlock the power of AI? How do we get the right people in place? And how do we build an organization that can actually put this stuff to work?

 

Seth Walker  9:13  

Yeah, absolutely. So, you know, I joked earlier about the process part, about being chaotic and about how you have to be super scrappy as an organization. But at the same time, I've learned a lot of lessons along the way about the importance of process. And that's really what it comes down to, because, frankly, you've seen a lot of companies make these investments in AI, and there have been some pretty big failures, or at least initiatives that didn't realize the value, that saw no return on investment. And so what I found is that really building the AI models, that's not the hard part. I mean, it's actually relatively easy to get some data, engineer some features, look at the leaderboards or use any sort of AutoML solution, and you'll find that a lot of these modeling architectures can produce some pretty good results. What's hard about generating value with AI is doing it in a way that is actually actionable toward a specific business problem. You always hear the kind of talking point that 80% of models fail and never make it into production. During my time at Humana, we managed to flip that: it was about 80% of models that did make it into a production setting. But we did it because of good process and through focusing on the right kinds of problems. Less on what kind of model should we use and that kind of thing, and more so on: Is this a real business problem that can be solved by AI? Do we have the kind of business stakeholder investment, the buy-in, that we need in order to successfully push it from end to end? Meaning that they're also fully aware of the risks, that we could potentially go in and try to build a model and have it not work. Are they aware of the lift that is required from an operationalization perspective, meaning that we need to actually integrate whatever solution we're building into some sort of workflow? That means integrating into an end tool, or integrating into a process. When people ask me what data science is, you get lots of different answers, but one that I really like, which was actually coined, and I'm not going to steal credit for it, by one of my favorite data scientists I've ever worked with back at Humana, is that data science is just the act of using machine learning, or using data, to solve problems or to optimize decision making. And that's really what we're doing. And so when you think about how to solve business problems with AI, you're thinking about: where in the business are we making decisions that influence the outcomes we want to influence? And how can we connect those specific decisions to specific AI capabilities that we can use to help optimize them? Whether that's optimization through making better and more accurate decisions, or whether it's optimization through the lens of, we have more decisions than we can possibly make and we need the AI model to help us prioritize what we're acting on. And many things in between, right? That's a very simplified version of it. But then on top of that, there's your data: are you getting access to the right data? Do you have the right kind of data? Does your training data accurately represent your production data?
All these types of things. And so when I'm thinking about how we create success around AI, it's actually less about whether we can build the models; we have lots of people that can build models, and we have really strong talent. It's about setting up those processes so that, when we're thinking about the end-to-end model lifecycle, we know from the get-go that we're pretty sure we're going to have success: that we have a very clear vision and a very clear articulation of what our end goal is and how it's going to fit into the business process or into a product, wherever it's going to sit, and a very clear articulation and vision for how we're going to actually go through the process of solutioning on the use case, working with our business partners all the way through the build and production of the model. So that's kind of one piece. But then, how do you take that and scale it to an entire organization? That works great for a single team. I've been in the world where I was on an embedded team, in a specific business segment, and we were very successful; then I look at the centralized teams, who would sometimes very much struggle with being able to create the value that they need. And it's hard, right? Even if you have a ton of expertise, it's hard to take those learnings and apply them. And so what we've created here at Carrier is a five-pillar strategy. The number one pillar is always going to be your data. You always have to make sure that you're providing people with easy access to the data that they need. But more importantly, as we start to socialize and democratize AI, we need that data to be in a format that reduces the risk of having problems with your model, meaning avoiding things like leakage, the common kinds of problems that result in model failures. The next pillars are having a centralized AI platform, giving people the toolset and the technology stack that they need in order to actually build and operationalize the model and perform the deployments and monitoring that need to be done, and having strong governance around it that's tied into that platform to ensure that we're minimizing the risks to the business, because sometimes it can be hard to really think about the ways in which your model can introduce risk to the organization. And then the last two pillars are really about people. First, how do we enable individuals throughout the company that are outside of our team, and educate them about AI, about the tooling, and about the risks? And then finally, our core delivery team, the core folks building the initial use cases: how do we foster their talent and their capabilities and give them the kind of pathway they need to be successful?
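One concrete way to check the training-versus-production question Seth raises is a per-feature drift statistic. Below is a minimal sketch using the population stability index; the feature values are simulated for illustration, and this is not a description of Carrier's actual pipeline.

```python
import numpy as np

def psi(train_col, prod_col, n_bins=10):
    """Population Stability Index between a training and a production feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 investigate."""
    # Bin edges come from the training distribution's quantiles.
    edges = np.quantile(train_col, np.linspace(0, 1, n_bins + 1))
    expected = np.histogram(train_col, bins=edges)[0] / len(train_col)
    # Clip production values into the training range so nothing falls outside the bins.
    actual = np.histogram(np.clip(prod_col, edges[0], edges[-1]), bins=edges)[0] / len(prod_col)
    expected = np.clip(expected, 1e-6, None)  # avoid log(0) for empty bins
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Simulated example: the production feature has drifted from training.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)
prod = rng.normal(0.5, 1.2, 10_000)
print(f"PSI = {psi(train, prod):.3f}")  # a large value flags the drift
```

Run per feature on a schedule, a check like this catches the train/production mismatch before it shows up as a silent model failure.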

 

Rob Stevenson  14:47  

This framework is, I fear, a little different than the standard skill set an ML engineer or a data scientist might have, and yet it's so important for them to be able to do their jobs and to feel like they're contributing at a high level to their company, right, that the insights they're generating are being acted upon. How much responsibility lies with the individual contributor here to do some of this work, to make sure that they're being heard, that there's buy-in, and that they have all this connection to the right business stakeholders and all the stuff you just mentioned?

 

Seth Walker  15:23  

Yeah, I think a big part of this comes down to a tendency, especially among some super technical folks, to focus a little bit too much on the technical component of the solution. But we're in a business. And even if we weren't in a business, whether you're in a nonprofit or whatever it is you're doing, you still are trying to solve problems. And those problems ultimately, in many cases, involve people that need to act on whatever solution you're building. And in my experience, as long as you're creating good solutions and you're able to properly articulate those solutions to the stakeholders, the rest will follow in many ways. But this is why I always say, when I say I'm scrappy, a lot of it is just the ability to really dive in and understand things from the business perspective. You need that business perspective: understanding what their processes are, what their problems are, how they use the information, and how they actually work on a day-to-day basis. We almost always do a day-in-the-life exercise whenever we're trying to solve a new problem, because being able to understand what that is is going to make you more successful. And so, especially if you're on a smaller team, the job of an individual contributor, if you want to be truly successful, is to go out there and really think like a person that's in the business. You have to really think about what their needs are, and then you have to be able to accurately communicate that. People will sometimes ask what separates a really amazing, great data scientist from a good data scientist. A good data scientist can build strong models, do really good coding, really articulate problems from a technical perspective, and be efficient and optimize processes from a compute perspective. A great data scientist, and one that is much rarer to find, is one who can go talk to a business partner and explain it to them in such a simple way that the business partner walks away understanding what the solution is. And they think about the things that they're building not as just some sort of abstract technological solution, but as an actual process that will be deployed within the business in order to effectuate an actual solution, or effectuate value. So that's the role the individual contributor plays. Now, from a leadership position, and thinking about a COE, I want to be able to abstract that away. Not everyone is going to have those talents, and I don't necessarily want everyone to have those talents.
So we do want to think about how we can organize and structure our COE such that we can take the people that have those skill sets, maybe what you might traditionally think of as a product owner, or, a term I hear now, an "AI translator," who can make the connection to the business problem and understands the AI solutions just enough to bridge that gap between the business and the tech teams, and then be able to accurately and effectively communicate with the technical folks so that we can gather the appropriate requirements. So it really depends on where you're at as an organization in terms of scale and the kinds of resources you're able to take on. But again, the key, no matter what skills or roles you have, is that you have to be able to make that connection. You have to understand exactly how your solution fits in with the business, and in order to do that, you have to understand the business.

 

Rob Stevenson  18:33  

Okay, yeah, that is helpful. And once you have that in place, once you understand the needs of the business, let's assume you're successful with the how here. How do you then measure success? How do you know it's working?

 

Seth Walker  18:46  

Yeah, absolutely. And I actually think there are a couple of different levels, in terms of how you mature along your journey as an organization. The first level of success that you're always going to see is just: how many models did we put in production? That's always the first one, because putting models in production is hard. Not even thinking about whether it's being integrated into some sort of solution or process, but literally just getting the model to run on a regular basis, output scores, and be effectively monitored is a very difficult thing to achieve. Shockingly difficult sometimes, depending on the tooling you have available, especially if you're working on-prem. I've been there; I'm very sorry if you are having to operationalize models on-prem. But the second level is when we start to understand, of all the models that we tried to build, how many not only ended up in production, but how many actually ended up integrated into a business process and are actually being used? So you're getting past the level of, okay, we're building models and they're working, to knowing that the business is using them. But the top level of maturity is when you're actually able to start to measure the impact that your models are having. And this is really hard. It's a hard question to answer, and you get it a lot: what value did this produce? Well, there are really a few ways you can solve it, and it's going to depend on the specific solution that you're putting into place. But we did this, and I was actually fortunate at Humana, because they had a ton of experience with randomized controlled trials and things like that. For every model that went into production, where it was possible to do so, we would deploy a randomized controlled trial. We would take some random sample of whatever it was we were trying to change, whether that's people or processes or whatever, randomly assign some of them to the model, and randomly assign some of them to the original process that we were hoping to replace or augment. Then we would compare the results, and you multiply that out across some point in time, and that tells you the actual value that you were able to achieve. And doing that, we were able to prove, about as definitively as you can possibly prove a thing from a scientific perspective, that our models were able to generate millions of dollars. So that's the most mature way to do it. But you have to be really careful, because you have to be thinking about this stuff from the very beginning. That means, before you ever, I say put pen to paper, but before you ever write a line of code, you should understand exactly where in the business process your solution is going to fit and exactly what kinds of outputs from your model you need to produce in order to augment or help optimize those decisions. And then you also need to be thinking about: how am I going to get that feedback loop? How do I get that return and understand what's actually happening, like who actually responded?
Or did they actually use the tool, things like that? And then understand beforehand what your measurement strategy is going to be at the end. How are you going to prove that you created value? Because that question is going to come. Maybe some of you are really early in your AI journey; you have executive buy-in and they're excited, right? But the day will come when they start to ask: we've invested X millions of dollars into your group, where's the value? How much value did you produce? What did you do to increase the value of our company? And you need to be ready to answer that question.
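For concreteness, here is a minimal sketch of the randomized-control-trial pattern Seth describes. The outcome numbers are simulated stand-ins, not Humana or Carrier figures; in a real deployment the per-unit outcomes would come back through the feedback loop he mentions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical units to be worked this quarter (e.g., accounts or cases).
n_units = 10_000
in_model_arm = rng.random(n_units) < 0.5  # coin flip: model-driven vs. original process

# Stand-in outcomes in dollars per unit; a real trial measures these
# after each arm has run for the evaluation window.
control = rng.normal(100, 30, size=(~in_model_arm).sum())
treated = rng.normal(112, 30, size=in_model_arm.sum())

lift = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test

print(f"lift per unit: ${lift:.2f}, p = {p_value:.4f}")
# Multiply the per-unit lift across everything the model touches in a year:
print(f"implied annual value: ${lift * 120_000:,.0f}")
```

The random assignment is what lets you attribute the difference to the model rather than to seasonality or selection effects, which is why it holds up when the "where's the value?" question arrives.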

 

Rob Stevenson  22:08  

Absolutely. Every function in the business has to answer that question at some point; I don't care what your role is. And if you can answer it well, that is the difference between getting promoted and getting laid off.

 

Seth Walker  22:20  

Exactly right. And for some teams, it's easier. IT teams are managing the infrastructure of your entire company; it's kind of easy to say that's valuable. But data scientists, we straddle a line between technology and business operations; we have to sit halfway in both worlds. And so, while we can get away a little bit with, oh yeah, it's very exciting, it's AI, very cool, at the same time you have to be thinking about it from a business perspective, because ultimately that's what your solutions are meant to drive.

 

Rob Stevenson  22:53  

Yep, definitely. Okay, that explains the how over there at Carrier. I've got to ask about the what. I need to know what you're building over there.

 

Seth Walker  23:00  

Yeah, so we're building all kinds of stuff, right? It's really hard to even decide where to start, but we typically differentiate into two buckets. And I have to first preface this by saying we are not the only team at Carrier doing AI, so I don't want to do a disservice to some of the other teams that are doing really great work here. But we focus on two different areas: our product teams, and then our enterprise business and operations types of teams. So, obviously, one big area where we're focusing is generative AI. There's been a ton of excitement around generative AI; the technology has made leaps-and-bounds improvements in a very short amount of time. Very shocking, honestly, the extent to which it matured with almost zero warning. And then, of course, the hype that comes with that, and the potential and the ideas that are percolating very quickly as a result of this transformation in the technology, have produced a lot of excitement. So we're obviously making a large push into this. But where I think we're maybe thinking of it a little bit differently than a lot of companies, and where I personally have been pushing us to go, is that, just as was true for classical AI, where you use it in very specific ways to enhance specific operations and specific business capabilities, the same thing can be true for generative AI. When we think of generative AI, we think of ChatGPT. We think of going to the website and saying, give me an itinerary for my vacation, or maybe even write me some code. And that's great, and it's incredibly powerful. But when you think about it from a business user perspective, what does that do for them? In many ways, with the exception of some power users, not always much. It's amazing and it's incredible, but it's not necessarily geared towards solving a particular, specific problem. So when we think about generative AI applications, we're thinking about: what are the specific types of problems that we can solve with generative AI? One type of problem that we can easily solve with generative AI in particular is accessibility to knowledge. That's kind of an easy one; it actually is a chat-based application. But we have tons of information out there. We have tons of documentation, whether that's HR documentation, IT documentation, or even something as complicated, from an engineering perspective, as the technical documentation around our products and the maintenance of those products. So how can we take this massive amount of information that is very difficult to sift through and make it more accessible to our employees? Again, this is getting to a specific thing and a specific need that they have. And so we're doing a big push into knowledge bases and things like that, like vector databases, doing search on that and trying to optimize and understand the best way to get that done.
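As a rough illustration of that retrieve-then-answer pattern: a toy sketch follows. A production system like the one described here would use a neural embedding model and a real vector database; TF-IDF similarity stands in just to show the shape, and the documents and question are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for internal HR / IT / technical documentation.
docs = [
    "To reset your VPN password, open the IT self-service portal and choose Reset.",
    "PTO requests are submitted in the HR portal under Time Off, two weeks ahead.",
    "Heat pump installs require refrigerant line insulation per the service manual.",
]

vectorizer = TfidfVectorizer().fit(docs)
doc_vectors = vectorizer.transform(docs)

def retrieve(question, k=1):
    """Return the k documents most similar to the question."""
    sims = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    return [docs[i] for i in sims.argsort()[::-1][:k]]

question = "Where do I submit PTO requests?"
context = retrieve(question)[0]
# The retrieved passage is handed to the generative model as grounding:
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The design point is that the generative model only ever sees the handful of passages the search step surfaces, which is what makes a huge documentation corpus tractable.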
Another way in which we're doing this is through actual batch processes. This is where I think a lot of people are missing some of the benefit: people are constantly generating ideas around chat, but you can do things in batch, and you can abstract away some of the complexity for the end users. One of the things that we've found through experimentation with generative AI models is that they're very sensitive to your prompting. I mean, there's a reason that prompt engineering is becoming such an important skill to have. And so if your users aren't well versed in the proper way to engineer their prompts, or don't understand the level of experimentation that might need to go into prompting in order to get good outputs from your model, then you are setting yourself up for potential failure. But if my team, which does have the expertise, can create batch processes around specific needs of the business, where we've already done that experimentation and we understand precisely what kind of prompt engineering, or what modeling architecture or what foundation model, is best for this particular use case and for getting optimal answers, then we can take all that complexity away from the end user. An example of this that I think a lot of companies are pursuing is call summarization. We get a lot of calls into our call centers, and we want to be able to take in those audio files and extract insights from them. In the past, extracting insights from them was clunky, because you had to have a person go through and listen to the calls, or we were using outdated NLP techniques to look at what words were being used the most, and we all know you don't necessarily get great insights from that. But generative AI, with its ability to, and I say this with air quotes, for those of you who can't see me, "understand" context, is able to better understand what it is you're trying to pull out. So we can get greater insights, and we can also produce summaries of the calls, which takes the lift away from the call center representative, who would otherwise have to go through the long process of notating the call and summarizing what happened. This is something the end user never even has to touch; we handle it in the background.
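A rough sketch of what such a batch job might look like: the team's tested prompt is baked into one template and run over transcripts, so the representative never writes a prompt. The client library, model name, and template here are illustrative placeholders, not Carrier's actual setup.

```python
from openai import OpenAI  # illustrative provider; any hosted LLM client works similarly

client = OpenAI()  # assumes an API key is set in the environment

# The team's prompt experimentation is frozen into one tested template,
# so the end user never has to engineer a prompt themselves.
TEMPLATE = (
    "Summarize this service call in three bullets: the customer's issue, "
    "the resolution, and any follow-up owed.\n\nTranscript:\n{transcript}"
)

def summarize_batch(transcripts):
    """Run the tuned prompt over a nightly batch of call transcripts."""
    summaries = []
    for t in transcripts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whichever foundation model tested best
            messages=[{"role": "user", "content": TEMPLATE.format(transcript=t)}],
        )
        summaries.append(resp.choices[0].message.content)
    return summaries
```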
Another one, which was actually mentioned by the CEO of AWS during his keynote speech at re:Invent, is using generative AI for OCR. This is another batch application of generative AI. A lot of the complication and challenge with traditional OCR techniques is that you have to be very explicit about the information you're trying to extract off your documents: the location on the document where the information is, the exact wording of what you're trying to extract. It's very sensitive to those types of changes. So that's a challenge. One of the use cases we have is that we want to allow customers to upload their utility bills so that we can help them create better efficiencies around their energy consumption. But if you've ever seen utility bills from different companies, they're all over the place. They even use different words. They might say "total usage" or "total consumption"; those two things mean the same thing, and the numbers they're reporting are represented on the same scale, but they're completely different words in completely different formats. Generative AI understands that usage and consumption in this context mean the exact same thing. So if we can use generative AI to abstract away some of the complexity of OCR, where we have to do minimal coding in order to extract that information, that's incredibly powerful. It means OCR requires so much less work to make an effective solution. And there are a million use cases like this. So I always talk about classical AI as having core foundations that everyone knows: supervised learning, unsupervised learning, anomaly detection, forecasting, optimization, all these different types of foundational capabilities that you can employ with classical AI models. We should be thinking the same way about generative AI. With generative AI, it's things like chat, content generation, summarization, document analysis and Q&A, coding, research, and analytics. And then eventually, of course, as I mentioned in my talk at AWS, the holy grail would be the future of AI agents. That's how we need to be thinking about AI from a generative perspective and where we're heading in the future. And of course, we're doing a ton with traditional AI methods as well, with a lot of focus on forecasting for sales and supply chain, demand forecasting, things along those lines. We're also doing a lot when it comes to operational improvements and productivity improvements with traditional AI, in terms of prioritizing workloads based on what outcome it is we're trying to achieve. I can talk more about it, but I feel like I've gone on a long time.
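To make the utility-bill example concrete, here is a hedged sketch of that kind of extraction. The provider, model name, and schema fields are placeholders rather than anything Carrier has described; the point is that one prompt maps "total usage" and "total consumption" to the same field regardless of layout.

```python
import json
from openai import OpenAI

client = OpenAI()  # provider and model are placeholders, as above

PROMPT = (
    "Extract fields from this utility bill and return JSON with keys "
    '"billing_period", "total_kwh", and "total_charge_usd". Treat wording such as '
    '"total usage" and "total consumption" as the same field.\n\nBill text:\n{bill}'
)

def extract_bill(bill_text):
    """Map one bill, whatever its layout, onto a fixed schema."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder foundation model
        response_format={"type": "json_object"},  # ask for machine-readable output
        messages=[{"role": "user", "content": PROMPT.format(bill=bill_text)}],
    )
    return json.loads(resp.choices[0].message.content)

# Two layouts, two wordings, one schema:
print(extract_bill("Jan 2024 ... Total Usage: 1,240 kWh ... Amount Due: $182.40"))
print(extract_bill("Billing period 01/24 | Total consumption 1,240 kWh | $182.40"))
```

Compare this to template-based OCR, where each new bill layout would need its own field positions and keyword rules coded by hand.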

 

Rob Stevenson  30:22  

It sounds like you have a lot on your plate here, lots of applications. And I want to ask you, Seth, because it sounds like you have such a finger on the pulse of what's possible currently, thinking beyond chatbots with generative, for example: I would love to know what you're excited about. If you were to take your Head of AI at Carrier hat off for a minute and just imagine the 14-year-old boy inside of you who is excitedly reading research papers like, oh wow, this thing is kind of cool, what in the space right now just gets you excited, even if it has nothing to do with HVAC or applications at Carrier?

 

Seth Walker  30:54  

That's a great question. I mean, for me, it's more about how these tools start to become better at anticipating people's needs and start making decisions on our behalf. If you've had any experience with the Microsoft Copilot stuff, it's amazing, right? It can do really cool stuff. I love the ability in Teams to have a recap of a meeting for me. Many times I'm late to meetings these days, and it will tell me, right there and then, a recap of everything that's happened up to that point. That's amazing. But what it can't do is use my calendar; it can't schedule meetings for me. I'm excited for it to get to that type of level. And I know maybe that sounds like not what the 14-year-old would have wanted, but you can take that and extend it to anything. Actually, if I'm being honest with you, I'm a huge video game nerd. I love playing video games. So to talk about 14-year-old Seth, the thing I'm most excited about is the potential to have characters inside video games that will actually respond realistically to you, so you get away from the world of canned responses and your interactions with those characters become more dynamic. There was this research paper that came out recently that was using GPT to create a simulated world. Basically, you had these different avatars, and they put them in a little town that they created, with a video-game type of environment, and each of these characters was controlled by a different instance of ChatGPT. They were able to move around the environment and interact with the other characters, and they had some form of abstracted memory and things like that. But all that to say, they actually acted like people. They woke up in the morning and drank their coffee, and then they would go to the coffee shop or to work, and they would work and they would interact. They even organized a party at one point. Imagine that in a video game world, right? If you could have a world that was so dynamic, in an open-world setting, where you're talking to characters, and based on that interaction they make independent decisions about things, and those independent decisions can have ripple effects. Now, I can imagine the complexity that would lead to: how do you create a narrative around something that you can't control? It would be difficult to build those guardrails, and there are all kinds of implications from a compute perspective in making that efficient. But that's what my 14-year-old self would be excited about, probably even younger. And again, you take that and apply it to any kind of technology we have today. Things like Siri are great; these types of applications are amazing. Take it to the next level, where the AI is able to better understand your desires or your needs, and then couple that with the ability to accurately take action on the device, or whatever it's controlling, and help optimize our day-to-day environment and take away some of the mundane aspects of our lives. That's the kind of thing that gets me excited.

 

Rob Stevenson  33:36  

Do you remember what the title of that paper was? I'd love to put it in the show notes.

 

Seth Walker  33:40  

Oh, my gosh, I don't remember it offhand, but I will send it to you after this. Really cool paper. You can find it on arXiv.

 

Rob Stevenson  33:46  

Okay, cool. Yeah, I definitely want to read it. It's just, you know, a ChatGPT ant farm, basically. It was incredible, right? Now they're organizing parties; when they start organizing political parties, it may be time to intervene. Until then, it's just a fun thought experiment. Seth, man, this has been packed with information. Thank you so much for being here and for sharing all of your experience. I think this is going to be fantastically helpful for the folks out there who want to make sure that their work is being seen and heard and utilized in a meaningful way. So thank you for joining us and being on the show today.

 

Seth Walker  34:14  

Yeah, absolutely. Thank you, Rob, for having me. This was really great, a very unique experience. I love listening to podcasts, so it's kind of cool to actually be on one. I'll definitely be sharing this with my kids.

 

Rob Stevenson  34:28  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to Sama.com.