How AI Happens

Google Cloud's VP Global AI Business Philip Moyer

Episode Summary

Whether you’re an individual or an organization, the recent advancements in AI are bound to have left you with a few mixed feelings: from concerns over the unanticipated harms that could arise with this new technology, to excitement for all the opportunities it could bring. Joining us today to explore this topic is Philip Moyer, VP of Google Cloud's Global AI Business.

Episode Notes

 Philip recently had the opportunity to speak with 371 customers from 15 different countries to hear their thoughts, fears, and hopes for AI. Tuning in you’ll hear Philip share his biggest takeaways from these conversations, his opinion on the current state of AI, and his hopes and predictions for the future. Our conversation explores key topics, like government and company attitudes toward AI, why adversarial datasets will need to be audited, and much more. To hear the full scope of our conversation with Philip – and to find out how 2024 resembles 1997 – be sure to tune in today!

 

Key Points From This Episode:

Quotes:

“What's been so incredible to me is how forward-thinking – a lot of governments are on this topic [of AI] and their understanding of – the need to be able to make sure that both their citizens as well as their businesses make the best use of artificial intelligence.” — Philip Moyer [0:02:52]

“Nobody's ahead and nobody's behind. Every single company that I'm speaking to, has about one to five use cases live. And they have hundreds that are on the docket.” — Philip Moyer [0:15:36]

“All of us are facing the exact same challenges right now of doing [generative AI] at scale.” — Philip Moyer [0:17:03]


“You should just make an assumption that you're going to be somewhere on the order of about 10 to 15% more productive with AI.” — Philip Moyer [0:25:22]

 

“[With AI] I get excited around proficiency and job satisfaction because I really do think – we have an opportunity to make work fun again.” — Philip Moyer [0:27:10]

Links Mentioned in Today’s Episode:

Philip Moyer on LinkedIn

How AI Happens

Sama

Episode Transcription

Rob Stevenson  0:04  

Welcome to how AI happens. A podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers, as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson. And we're about to learn how AI happens. Oh, right podcast lander. Welcome back to how AI happens. I have another fantastic guest lined up for you today. He is the global VP of AI business and AI engineering over at Google ever heard of it? Philip Moyer is his name. Philip, welcome to the show. How are you today?

 

Philip Moyer  0:47  

I'm doing great. Rob, thank you so much for having me.

 

Rob Stevenson  0:49  

Just a moment ago, I just double checked your title. And we had a good chuckle about how like, the more senior you get, the less we tend to care about titles, and you quip that titles and jokes don't get more interesting, the more that you speak about. So maybe we should just sprint through this and not talk about what you do. But I guess just like to introduce you to the audience a little bit and understand more about who you are and what you're working on. Would you mind sharing a little bit about your current role and kind of how you got there?

 

Philip Moyer  1:14  

Sure. So I'm part of the Google's AI engineering teams. And what I do is I have a team of black belts that work with some of the largest customers in the world. And then I work as well when commercial strategies for our AI business.

 

Rob Stevenson  1:26  

So that takes a lot of shapes I imagined. But you recently wrapped up this worldwide Gen AI listening tour for for lack of a better term. I don't think that was like an official trademarked t shirt with tour dates on it or anything. But it sounds like he was kind of taking stock of how Google's customers are intending to use Gen AI. Is that a fair characterization?

 

Philip Moyer  1:47  

Yeah, absolutely. I got an opportunity last year from roughly about end of March to December to talk to 371 customers in 15 countries and five different continents. So I got a really incredible opportunity to understand how people are thinking about AI, how they're using AI, their hopes, their concerns, the opportunities, the problems with this generation of generative AI.

 

Rob Stevenson  2:10  

Interesting, I want to ask like a very vague question like what did you learn? But I guess I should be a little more specific. First, when I looked in understand like, who are the customers like what size companies and and who are these individuals that you're serving?  

 

Philip Moyer  2:22  

It's been incredible, because I've had an opportunity to meet everything from startups, some of the biggest unicorns that are out there some organizations that are just starting to build applications using generative AI, to some of the biggest companies in the world. Carrefour is a great example that I spent some time with over in France, organizations like Verizon here in the United States, organizations like Woolworths down in Australia, a lot of electric brands over in Europe, a lot of the manufacturing organizations over in Southeast Asia, some of the conglomerates. And then as well, a lot of the government's what's been so incredible to me is how forward thinking and how forward thinking a lot of governments are on this topic and their understanding of the really the need to be able to make sure that both their citizens as well as their businesses make the best use of artificial intelligence,  

 

Rob Stevenson  3:09  

when you say best to use, does this mean they are actions to use it for business gains? Or are we talking about accountability? What what is their concern?

 

Philip Moyer  3:17  

It's really interesting, I spent a lot of time on this topic last year around the risks and the opportunities of AI and what needs to be managed? Well, I call it kind of the management framework of AI. It's everything from the building of the models, the training of the models, and the use of the models. There are risks and opportunities in each one of the areas. As you can imagine, a lot of the press that's out there today is around things like copyrights that go into the building of the models. But when you're building those models, it's equally important to understand who's building them. What's the provenance of the information? What's the freshness of the information? What's the accuracy of that information? Are you doing it based on an average amount of information on the internet or the information that's behind your paywall, or firewall? And so that building of the models in a tight coupling with GPUs and TPUs and the sustainability around that there's a whole variety of risks and opportunities in that building space. In the training space. Increasingly, I think that organizations are going to want to understand who's training a model inside of their organization? Are they poisoning the model? Can you have an insider threat to a model where I can train a model that says, The bank's customer service phone numbers, my phone number, when you do health care, like when we released med POM here at Google, we had to have a diversity of training clinician types. So nurses, doctors, researchers, we had to have a variety of wide diversity of patient types to understand, you know, is this answer giving the right kind of an answer for this type of population? And then as well, we had to have a whole variety of work that went into what's called adversarial datasets like asking these models some of the most awful questions you could imagine. And then in the usage of AI, what people are finding is if they use the wrong model, it can be super expensive. Like if you have to, you're supporting millions of customers with really large models, these things can get expensive. So very quickly, when you're using these models you have to worry about, can certain people inside of your company ask certain questions of certain models. So you might not want certain people to be able to ask questions of your contract contracting AI software or model or your your HR model. So model creation, model training and model use, or everyone from governments and companies around the world are recognizing there's both risks and also requirements that you need to manage.

 

Rob Stevenson  5:27  

When you speak of adversarial datasets do you use me and like you're trying to anticipate how someone might use this stuff or evil? Like how can this be misused? And let's not just wait to find out let's try and do some defense before that happens.

 

Philip Moyer  5:39  

Yeah, increasingly, I think that your adversarial dataset is going to need to be audited Gemini the really large multimodal model we just released, and we're super proud of the work that we did to have third parties actually audit our adversarial dataset, an adversarial data, a question would be, hey, teach me how to build a bomb. That's where you go right away, do something violent, or do something illegal, or do something that's going to influence you know, like, push out false information, as example, or deep fakes? That's what we call our adversarial data set. And when you are throwing these questions in the training process, and making sure that the model doesn't respond to those kinds of questions that they basically declined to answer the question, or they declined to, you know, to be involved in any kind of information exchange related to that kind of content.  

 

Rob Stevenson  6:24  

Right. And it says, like the how to build a bomb example is a good one. But it says can also be as low stakes as just like you mentioned something about HR policy or something about oh, like, I shouldn't be able to ask my company co pilot LLM, right? chatbot, whatever to hey, how much is so and so make, right? Or what are the details of this contract with those customer things as low stakes is that

 

Philip Moyer  6:44  

great examples, you know, in the healthcare space, you know, if you want to say, hey, does this individual have a following genetic condition? Generally, a model will say, I can't answer that. But you say, what's the dosage of this individual's medication, you can infer a genetic condition. And so it's something as nuanced as that. And one of the challenges I think people are realizing is that we unstructured content is just that unstructured. Like, we've done a lot of really good work in enterprise technology of securing row and column level data. So you can say, like an organization like a city or JP Morgan, the investment bank can ask the wealth management corpus of data, any kind of information, and we've been able to secure that. And it's, it's really well structured. When all of a sudden, you ingest a whole bunch of investment banking reports, or a whole bunch of wealth management reports that are unstructured in the form of just content documents. We don't have good structure like good Row Level column, kind of like in a contract, I can't say you can't ask a question of this section of the contract. That's still hard. And in generative AI, it's all or nothing right now. Either you ingest the whole contract or you don't. And so I would just say that some of these things, I think people are kind of waking up to these, like, how do I control the model from doing things? It's not supposed to? How do I control usage? How do I actually audit the questions that are asked of a model, just think about the auditability of the security of most large companies, you can see where people log in from, you can see what people are doing. And they do that just to kind of secure the perimeter and secure the content information. Right now, we don't have a lot of auditability of these models. And so in right highly regulated industries, starting to see a lot of activity as it relates to the auditability. And the transparency, the adversarial datasets associated with the use and training of these models.

 

Rob Stevenson  8:26  

When you think of adversarial datasets vary tremendously. industry to industry like that question at what is so and so's does this medication is a good example for healthcare but not relevant to you know, self driving car company, but there's something equally adversarial, you would need to plan for. So how much of this responsibility is Google's? And how much of it? Is the companies deploying the technology? Or do you view it? It's both?

 

Philip Moyer  8:50  

Yeah, it's a little bit of both. I mean, Google, what we do is that we rate every single question and answer that goes into a model against a roughly about 16 different things. So we actually give a rating, a numerical rating of saying this is hate speech. This is toxicity. This is violent speech. This is obscenity. And every time we implement a language, we are in a row to implement approximately 133 languages. So we have had 33 available and we're in the process of releasing over 100. Right now, every single time we do a language, we actually rate in that language against those 1616 items. So is this obscene in Korean? Is this hate speech in Japanese? Is this toxic in French? So we actually give that rating and we reveal that rating to users. So if they decide they want to have a higher bar for what questions or answers are going to be answered, they can do that. We will not answer questions above a certain level of toxicity or hate speech or otherwise. And so it definitely is a shared responsibility model where we're doing our best around some broad categories and then domain by domain and use case by use case companies are going to have to implement that genetic profile as an example, or in their contracting process, they can't you know, implement. So it'll be we'll do some of it, and customers will do some

 

Rob Stevenson  10:07  

Gotcha. So there's a certain amount of it off the shelf that Google provides. But then there's also here are the settings right here is the threshold for what we consider obscene, which you know, is in the eye of the beholder, I suppose. And then at the extreme level, there's just like, Hey, you can't use this for this, I don't care who you are, then then it's up to the individuals, the companies to whatever is specific to them in their domain, that's now your, your responsibility.

 

Philip Moyer  10:29  

If you think about a gaming company versus a healthcare company, like the makers of Call of Duty or fortnight, you're gonna have a lot of violent speech, in and around that topic, but you don't want that same level of violence speech in a healthcare setting as an example, or in a financial services setting, or in some other settings. So, so yeah, it really will be controlled, we'll give tools provide tools. And then the users will implement based on those tools. Like one of the really interesting things that I think it's interesting emergence right now is a concept that we call a model card. It's a metadata specification around our models. And so we say, Hey, this is when the model was trained. This is the safety harness that we have. This is the adversarial dataset. This is the what we've used to be able to do an episode dataset. This is the freshness of the model as an example. And increasingly, we build in some technology so that any model that you're running inside of our vertex platform, I can say, what's the freshness of this model? What's the safety harness. And so we built this like a data spec. I know that organizations like open AI and Microsoft are doing the same thing hugging face are doing the same things, I think that we will converge on an industry data specification that says this is the provenance of the information that's in this model. And this is the provenance of the safety harness that's in the model. And that's going to be important. And so I think most regulated entities are going to need to have are going to need to register the models that run inside of their organization. So they can say this is a registered model, or this is an unregistered model, we might not want to run out and register models, because it doesn't have the transparency, you need to know that it's safe

 

Rob Stevenson  12:02  

register with whom the government know, registered with the company.

 

Philip Moyer  12:05  

You know, in the same way, in large organizations, there has been this long standing technology called a CMDB, which is a configuration management database, where if you're JP Morgan, as an example, and you have 1000s of applications that are running, you need to know what software is running in your environment. So it needs to get registered somewhere. But it's actually running. It's a valid set of code that's running on people's desktops as an example. And so most large organizations have a notion of the software that's been registered and valid to run in their environment. And in the software that's not allowed, especially in like regulated entities or government entities, you know, the software that's running in your environment. And so you'll know the models that are running in your environment as well. I think people will expect that you register, like with the corporate IT department that I'm going to run this model, and so that they know there's not something rogue that's running in their environment.

 

Rob Stevenson  12:56  

Okay. It's almost like HR compliance at that point. Right.

 

Philip Moyer  13:00  

Exactly. Yeah. I mean, there's a large company, you don't want that models running in your environment. So you got to have something that says, What's, what's a known model? And what's an unknown model?  

 

Rob Stevenson  13:10  

Right, right. So you got to go on this listening tour and survey companies of just about every size, probably tons of different use cases. I'm curious what kind of common threads you were finding? What were some of the challenges that kept coming up over and over again, that you were hearing, even regardless of company size or industry,  

 

Philip Moyer  13:26  

early on, it was understanding what is generative AI? Is this thing explainable? Is it transparency? Is it something that's going to give us unintended consequences? And so I would say really, in that, in the springtime of last year, I spent a huge portion of my time kind of explaining what is generative AI? Then we started getting into the era of security, because if you recall, back in that time period, most people were using consumer grade AI, where I had one company over in Germany, tell me that they were toy manufacturer, and they push their price list into one of the big public models. And you could ask the model, what's the profit margin they're making on every toy? And so then he got into this, like, how do I secure my information and don't let Google or Microsoft or you know, anthropic or otherwise steal all my IP. So then we got into the security phase. And I spent a lot of time explaining that we use what's called an adapter layer, which allows you to maintain all the training in a secure layer inside of your Google tenant that only you control. And Google doesn't know what's inside of it as an example. And so, so I got from what is generative AI to how do I secure the AI? And then very quickly, I will tell you, as I got into the, I would say into the fall time period, it was a lot around how do I start my first use case in an unforgettable way. Now you've seen all the scary stuff that comes out like generative AI can make up medicines, it can take two Latin words put together and say you should treat this with this medicine doesn't exist on the planet Earth. So people were like, how do I pick something that I'm not going to regret doing? And so it was kind of how do I get that first thing started? And then as I got into the like a later fall All, I had a lot of people going saying to me and this was lots of boards of directors, they were like, Oh, my God, who's ahead in this space is my competitor, outflanking me. And how do I scale this? How do I do this at scale? Because I have hundreds and hundreds of ideas, but Oh, my God, like all the things we just talked about, you know, around model building and training and usage, people were starting to discover, they're like, Oh, my God, this is gonna be hard to scale. So everyone was worried about who's ahead. And in the fall, I had to tell a lot of boards of directors, I said to him, Listen, nobody's ahead. And nobody's behind every single company that I'm speaking to, has about one to five use cases live. And they have hundreds that are on the docket. And you have to imagine that it's roughly 1997 in the internet, in 1997, it was MSN and America Online. And people thought that those two platforms were going to control the world at the time,

 

Rob Stevenson  15:56  

wow, I haven't heard MSN and yours. That's such a throwback.  

 

Philip Moyer  15:59  

Yeah. And so when I was at Microsoft at the time, and I said, it feels exactly like 1997. And Amazon has been born and Google hasn't been born. And eBay hasn't been born and all these wonderful companies, and Yelp and Uber, and everybody else has yet to be born. And so just recognize that we're in 1997. And we are all every company I talked to, has 100 use cases they want to do. And they're trying to figure out how to do this at scale. And so that's really where we're at right now. I think that 2024 is going to be a lot about how do we scale doing accurate and safe AI for the enterprise? Like, how do we do it at scale?

 

Rob Stevenson  16:36  

It's so interesting that people are terrified that they're like falling behind. And you basically have to tell them that you're all at the same stage or also that the idea of it being a race is irrelevant. It sounds like that's kind of the the very non technical therapy, you're giving these people.

 

Philip Moyer  16:51  

It is. And I have to say it's consistent. It's financial services, it's healthcare, it's retail, it's luxury brands, it's gaming companies. I mean, literally, all of us are facing the exact same challenges right now of doing this at scale.

 

Rob Stevenson  17:03  

So when you speak of doing it at scale, this is the concern for all these companies who have between one and five live and 95 others that they would like to. So how is Google action to help them deploy generative in the enterprise at scale.  

 

Philip Moyer  17:16  

So I tell everyone that we're on a journey, we being the whole industry on a journey to rewrite the entire computing stack. So there's a whole brand new layer of silicon has to be put down around the world. Then there's a new class of software, it's called foundation models. There's a new set of management tools that you have to do LM operations at scale, like things like prompt management at scale. There's a new storage algorithms called a vector store that you're going to see people have to vectorize their contracts and their HR records and their datasets. And so brand new storage algorithms called vectors vectorization. And then there's applications. If you rewrite the silicon and rewrite the management tools and rewrite the storage algorithm, and you have this new class of software called models, you're gonna rewrite every piece of software that's out there to use generative AI to be able to give natural language interfaces. And so what Google is doing is that we are providing technologies at every single layer. So we are a proud partner, and Vidya and we also have our own TPUs. So we have GPUs, and TPUs. And we have this, we have a carbon neutral footprint around the world because these models consume huge amounts of power. And so that's a sustainable layer of silicon, we're deploying it around the world in GPUs, and TPUs. We're both providing our own models, things like Gemini being multimodal model, palm, and Cody, as well as we provide over 130 other third party models. And so we're making sure there's a big ecosystem of models that you can pick the right model for the right job. We have a set of tools called vertex that help you do LM operations, things like prompt management and embeddings, and fine tuning and reinforcement learning with human feedback tuning. And then we also have tech tools that make it really, really easy for you to vectorize your content, we have a thing called Enterprise Search, where I can just throw all my contracts in it, and answer questions just on that contract database, or I can throw, you know, a whole bunch of data in it from a customer service department. And it doesn't go off. And hallucinated just answers questions from that content. And then we're also building a set of applications. Obviously, our workspace products we're adding do at you know, we're adding this capability to all of our workspace products. We're also built what's called an anti money laundering tool that actually helps you like eliminate false positives and more accurately identify anti money laundering your contact center applications. So Google is providing technologies at every layer in this rewrite of the stack.

 

Rob Stevenson  19:34  

Why do you suppose that the stack needs to be rewritten rather than just put another couple Jenga blocks at a 90 degree angle and the top of the stack?  

 

Philip Moyer  19:43  

Most of the world like when you think about like the data lake houses and the data warehouses and the data, pick your favorite, favorite metaphor for data, you're trying to pour things into these KPIs and these dashboards, these beautiful dashboards, and now I'm able to ask questions like, Is there an invoice that looks like this that we've ever paid in the past? And that question is really hard for a structured store to answer. Generative is great at that. Because you can say, oh, you know what? This invoice seems to have three things that were delivered on the exact same date as the following things over here, the exact same product description, the exact same location, the exact same size, and the exact same quantity. That's something that doesn't exist in an application today, like it doesn't go to that level. Or I can say, Does this invoice match the contract? Like extraordinarily difficult thing inside of the retail industry, when you have 20 drills show up as opposed to the 100 that you ordered? But these are 20, higher quality drills? Like how do I even associate that with the contract and in the contract, it says, you know, equal or greater product I can deliver to you. So is this an equal or greater product. So the things that we can do with generative AI, there's a whole new class of applications that we can reason with, in a way where in the past databases where either it was very binary, either the number was or wasn't. With generative AI, you can have the application layer do reasoning that you couldn't do in the past, I use an example of I took a an image of store shelves with lots of chips on it, I threw it into our Gemini model, I said, Hey, what are the names of the chips on the third row, and are any bags missing? Now design a bag of chips that would stand out from all these other chips, and the model did all of those, and it actually counted roughly the numbers that were missing and had the exact names. And then it actually created a net new product to help me be creative. You can't do that with databases. And most of our applications have been built on these like highly structured stores with highly structured dashboards. And so when you have that now, it's like introducing a new employee into your environment, it has the ability to reason and be creative and do rote work, you know, like looking at counting bags of chips that you would have to send somebody in to do and all of that stuff, our existing application layers don't do and so you want to integrate that you want to integrate that into all the applications that you have.

 

Rob Stevenson  22:07  

And does that all need to be trained on a company's proprietary data?  

 

Philip Moyer  22:10  

Yeah, yeah. And this is one of the reasons why I tell people that we spent most of the 90s, putting things into databases. And then we'd spent most of the late 90s and early 2000s, putting an HTML interface on and then a mobile interface. And now we're about to go through what I call the vectorization of information of knowledge inside of companies, we're going to vectorize, the customer service department, we're going to vectorize, the contract database, we're going to vectorize, our merchandising strategy. So vectorization is going to be done on the company's data itself. So yes, absolutely. And the information that's the most highly curated is the information that's behind the firewalls and behind the paywalls. I think about Bloomberg as an example, Bloomberg, I think on their website, they said they have something like on the order of like 100 petabytes of information that's been curated. And they have really extraordinary information, they have news broadcasts, they have information around discounted cash flow, they have information around private assets that they've curated, Moody's is the same way, you know, where they have provenance of every single company for 475 million companies, they can tell you who the ownership structure is based on curated information they did. And so that information needs to be put into a mechanism into kind of vectorization, where you can now ask questions and ask that information to reason using these generative AI models. And so I expect that most companies are going to are going to vectorize, their company level information, they'll vectorize, their division level information, and then even employees will even have their own AI models that kind of follow them professionally inside of the company and just as an professional throughout their career. And so I think we'll we'll get AI to be extremely personalized as well. So we'll all have models,

 

Rob Stevenson  23:58  

surely auditability gets a little more complicated with the vectorization process. No?

 

Philip Moyer  24:04  

It does. That comes back to our original conversation is that as you're training that model, who's doing the training inside of your company, what questions are they asking of it? What are the questions that people are actually asking? So if I say should I extend credit to this company? Or should I extend credit to this individual that needs to be auditable in the same way that it can be audited today in a normal lending process? Regulators will come in and say show me your risk tables show me how you make decisions on these particular credits. You're gonna need to audit the models as they make decisions on credit or as they make as they make recommendations on care. Or as they make recommendations on things like PII data for personalization, if it's in something like media, or it's in something like something like retail, so yes, you'll need auditability of that both the questions the answers as well as the training.

 

Rob Stevenson  24:53  

Gotcha. Well, Philip, this has been packed with information here and we're creeping up an optimal podcast length before I let you go though. I want to ask you kind of a big question. And I apologize for its vagueness. But what are you most excited about as you speak to all these companies who are presumably about to be deploying generative and lots of different use cases, what gets you truly excited?

 

Philip Moyer  25:14  

I talked to a lot of board directors and they'll say to me, you know, how much more productive can my team be? I'll say to them, you know, you should just make an assumption that you're going to be somewhere on the order of about 10 to 15%, more productive with AI. We've had some statistics where we saw six to 11% increase in some of our software developers, we've worked on organizations that do marketing content platforms, where they're able to create 15 20%, more marketing content. And that's interesting, productivity is really important for a lot of organizations, and it's really going to do some wonderful things. But the ability to be able to increase proficiency for people in their job in the education industry, we're seeing that you can bring somebody up to an average level of proficiency in about a third of the time that would normally take. So if I have to train a customer service department, a new rep, customer service rep, I can bring them up in a third of the time. So if it would take me six months, I can bring them up to speed in two months. That is amazing for things like job mobility, for people to be able to do more in their roles and be able to really be able to take on more for their company and go into a lot of interesting areas that maybe a company couldn't go into in the past. The third thing, though, that I get really excited is what we call is job satisfaction. People that are using AI on average, 85 90% more satisfied with our job. And I tell people, we have the opportunity to make work fun again, because AI can take away a lot like that just really awful stuff like Oh, I'm putting out a marketing piece of content. Is it compliant? Am I violating any of our compliance standards? Or I'm doing anti money laundering? I gotta keep clearing, no, that wasn't money laundering. That wasn't money laundering, I'd keep clearing all these things are Oh, my God, I'm producing marketing content, I have to take the same marketing content and put it out in 17 different languages and five different formats. And I got to reformat this content into 140 characters all these times. And so taking that road stuff away, I get excited around proficiency and job satisfaction, because I really do think like I said, we have an opportunity to make work fun again.

 

Rob Stevenson  27:13  

I love that you say that. Because when people tell me people ask me is AI coming from my job? And I usually tell them, No, it's coming for the parts of your job that you hate. So I'm glad to hear you validate that because you would have a clearer idea of it, then I would. That is really encouraging. Philip, this has been really, really great talking with you. Thank you so much for taking the time and sharing your experience and wisdom with me today. I have loved chatting with you.

 

Philip Moyer  27:32  

Absolutely. Rob, thanks so much for having me.

 

Rob Stevenson  27:36  

How AI happens is brought to you by sama. Sama provides accurate data for ambitious AI specializing in image video and sensor data and notation and validation for machine learning algorithms in industries such as transportation, retail, ecommerce, media, med tech, robotics and agriculture. More information, head to sama.com