How AI Happens

Zapier Lead AI PM Reid Robinson

Episode Summary

Here to shed more light on issues of generative data quality is the Lead Product Manager for AI at Zapier, Reid Robinson. Reid begins by explaining how he ended up at Zapier and how his journey as a founder helps him in the work he does today. Then, we discuss whether AI can be differentiated from automation, how the Zapier team engages with ML accuracy and generative data, how Zapier uses generative data with its clients, and why AI is still best enjoyed by those who already have some technical knowledge. To end, Reid hints at his next big idea and venture, and he shares some important advice for listeners who want to do more in today’s world of machine learning.

Episode Notes

 

Key Points From This Episode:

Quotes:

“Sometimes, people are very bad at asking for what they want. If you do any stint in, particularly, the more hardcore sales jobs out there, it's one of the things you're going to have to learn how to do to survive. You have to be uncomfortable and learn how to ask for things.” — @Reidoutloud_ [0:05:07]

“In order to really start to drive the accuracy of [our AI models], we needed to understand, what were users trying to do with this?” — @Reidoutloud_ [0:15:34]

“The people who are being enabled the most with AI in the current stage are the technical tinkerers. I think a lot of these tools are too technical for average knowledge workers.” — @Reidoutloud_ [0:28:32]

“Quick advice for anyone listening to this, do not start a company when you have your first kid! Horrible idea.” — @Reidoutloud_ [0:29:28]

Links Mentioned in Today’s Episode:

Reid Robinson on LinkedIn

Reid Robinson on X

Zapier

CocoNFT

How AI Happens

Sama

Episode Transcription

Rob Stevenson  0:00  

This idea of the LLM-enabled copilot comes up constantly. We're about to see a whole new class of entrepreneurs starting companies that are built just around that. And it sounds like you did it in a couple of days, just plugged it into Slack. So,

 

Reid Robinson  0:13  

Oh, forget a couple of days, you could do that. I want to be clear, anybody listening to this can do that within like 10 minutes.

 

Rob Stevenson  0:21  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. All right, we are back here at How AI Happens with another fantastic guest I can't wait for y'all to meet. He is the Lead PM of AI over at Zapier, Reid Robinson. Reid, welcome to the podcast. How are you today?

 

Reid Robinson  1:01  

Good, thanks for having me on, Rob. And hello, everybody. You've got Reid here.

 

Rob Stevenson  1:05  

You've got Reid! Do you ever do any bits, like "Reid right" or "left on read" or anything like that? I feel like it lends itself to that.

 

Reid Robinson  1:11  

My Twitter handle, and my handle elsewhere, is Reidoutloud.

 

Rob Stevenson  1:15  

That's good. Yeah, name-related puns: outstanding. You have a really interesting background, and it's kind of non-traditional, although what does traditional even mean in our space anymore? You know? But we'll get into that. You're doing some really cool work over there at Zapier. So first, I would love it if we could learn a little bit more about you. Would you mind sharing about your background and kind of how you came to be in this role?

 

Reid Robinson  1:34  

Yeah, happy to. So yeah, as I mentioned, I'm doing product management here at Zapier, overseeing kind of everything on the AI side. I describe myself as a bit of an oddball person internally, where I don't technically own any particular product, but I try to basically help us advance everything we're doing with AI within the company. And that incorporates not only our product development and strategy, which started out in more of the AI R&D area, but it quickly became about internal enablement for AI. I also came in to own our AI partnership strategy, so working with OpenAI, Google, Microsoft, and the kind of whole breadth of AI partners out there, as well as even down to the nitty-gritty of our legal and privacy conversations on what stances we want to take for AI. So yeah, I get to do quite a bit here. And as mentioned, I definitely do have a bit of a non-traditional background, I guess. I started in the world of hardcore cold-calling sales, actually, that's what I started my career with, and then got into tech thanks to someone who had left one of the companies I was with. I started with the classic SDR wave start in tech, worked my way through solutions consulting, over to product partnerships, which is how I got over to Zapier. And I actually left Zapier because I did a hackathon with a buddy of mine from Zapier when the NFT craze was really hitting, and we ended up winning some prize money, and we had users wanting to use our product, which was really just an on-ramp for creators to get started in the world of NFTs. And yeah, we raised money, we ran with it as a side project at first, Zapier was pretty cool about that, and then we left to run it full time, sold it within that year, and then I got to rejoin Zapier to help do, as it sounds, a lot of fun stuff there.

 

Rob Stevenson  3:25  

That's kind of the dream, right? Like, especially given the NFT cycle that we experienced: you got in, built something, sold it, got out. You probably feel pretty smug about that, don't you?

 

Reid Robinson  3:36  

In some ways. I'll be honest, the world of NFTs is actually what introduced me to generative AI, which is kind of cool. I got into generative AI, at least as we currently talk about it, because of what artists and creators were doing. For those not familiar, the NFT world has a pretty big ecosystem of artists doing really cool stuff with AI. I had actually done a generative AI stamp album, it was one of the things that I worked on, and I've also collected quite a bit. But quite honestly, going back to my year and a half as a founder: I loved building and nurturing and working with a team, and just doing cool stuff and hearing from users that this made an impact on them. I loved that. I loved that side of the work. I get comments every now and then from some of the employees we had, asking when I'm doing my next startup so that they can work for me again, which is, quite honestly, the biggest comfort, the biggest thing that gives me happiness to hear back from that time.

 

Rob Stevenson  4:30  

Yeah, it's quite an endorsement, right?

 

Reid Robinson  4:31  

Yeah, it feels good. Although some of it makes me sad.

 

Rob Stevenson  4:34  

Yeah. But again, you're gonna have lots of jobs in your life, probably, and you have lots of opportunities to do cool things, so maybe it doesn't have to be sad. It's interesting that, yeah, the generative aspect of NFTs kind of gives you the bug. Is that where you sort of developed these technical chops? Because, you're right, when you said that you had a non-traditional background, I thought you were gonna say, like, oh, I didn't go to college, you know, or I didn't study computer science. But to start as an SDR, cold call, cold email, that is far more non-traditional.

 

Reid Robinson  5:03  

Yeah, I love it, though I have often struggled. I do a bit of coaching with some people internally, and I was doing more of the manager life before. And one of the things that I constantly observed is, sometimes people are very bad at asking for what they want. And I would say, if you do any stint in, particularly, the more hardcore sales jobs out there, it's one of the things that you're gonna have to learn how to do to survive. You have to be uncomfortable and learn how to ask for things, as you realize nobody just magically hands you money. I mean, maybe if you're selling AI API keys these days it's a little bit better. But back then I was selling conference tickets, and people had like 24 hours to book with us, and you're cold, literally dialing them through their operators, so nobody wants to talk to you.

 

Rob Stevenson  6:00  

Yeah, you cannot be afraid of imposing, because sales itself is an imposition, right? Even in the case where it's something that person might really need or can make use of, it's like, hey, here's this thing that you didn't know existed, I'm going to tell you why you desperately need it. The whole thing is an imposition.

 

Reid Robinson  6:16  

I loved, you know, anything in that world. We had something called the one-call close, which was essentially where you randomly dial someone, and they liked what you've told them so much that they just give you their credit card number on the call, for multi-thousand-dollar purchases, and sign themselves up to travel, right? And I got two of those during that time period. Man, it was amazing. Just to have someone be like, you're telling me there's a conference on this exact topic that I love, that people like me will be at? And I'll say, yeah, that's exactly it. Yes.

 

Rob Stevenson  6:44  

Shut up and take my money.  

 

Reid Robinson  6:49  

Yeah, it was remarkable. But yeah, most of the time, not that. But anyway, going back to the point. Yep, sure.

 

Rob Stevenson  6:55  

Tell me about developing those technical chops, because now it sounds like you're kind of an AI Swiss Army knife over at Zapier. So where did this technical experience come from?

 

Reid Robinson  7:04  

Yeah, I don't know, you've got to credit a bit of neurodiversity or something. People always ask me what I think I have; shiny object syndrome might be another way to put it. I certainly fall into that a bit, and I've learned in my career that it hurts me in certain ways I've needed to learn how to recognize and grow from. And in other ways, it really helps, because I could be sitting there on one project and just think of some random idea that I want to experiment with, see what's possible, try to push that forward, and come out with some sort of insight or know-how on something new that we weren't previously trying as a business, or even just as myself. And, I don't know, that weird knack of how my brain seems to operate, I've learned, does lend itself very well in that way. I love tinkering. I love going from an idea to the point of, like, just the early cusp of creation. That spark stage is just what gives my brain a lot of dopamine, I guess. And so that's really helped me develop, as I've never really taken a coding course or anything like that. But, you know, at this point, I've developed apps on Zapier. I do a lot of technical stuff in that world. I was just in terminal today; I still have no idea what I'm doing in terminal. And yeah, I think it's just a lot of desire to see your idea come to life, and having to learn how to make that happen.

 

Rob Stevenson  8:34  

Yeah, I find just-in-time learning is often the best, right? And then, can you learn enough to be dangerous, to ship something, to execute, and maybe even learn enough to know when it's time to call in an expert? Right?

 

Reid Robinson  8:46  

Yeah, exactly. As soon as I start publishing my API keys to a public GitHub repo, I'm like, Oops.

 

Rob Stevenson  8:52  

Well, they'll tell you on GitHub, you know, the extent of your deficiency, right? Exactly. Like, oops. Yeah. So, given our audience, I assume they are pretty familiar with Zapier. If you have ever tried to plug one piece of software into another, I assume you have used this tool, or at least are familiar with it. And I want to understand from you a little bit more about the AI at play behind the scenes there. And first, I'd like to ask, because of my, granted, limited and not particularly recent experience with Zapier: where would you draw the line between AI and mere automation?

 

Reid Robinson  9:31  

I mean, I really wouldn't say there's a line anymore, right? I think what we're largely seeing is, well, two sides. Number one, I think what we've been seeing for the last year and a half in particular, when we first got the OpenAI app onto Zapier, and this was back before the chat endpoint, it was just the old-school completions. Shout out to anybody that remembers davinci-003. People were putting AI into their workflows, right? Like, they were incorporating an element of non-deterministic behavior. That was a pretty novel breakthrough for a lot of people, particularly then, just what you could all of a sudden start to do with your workflows and what you could make possible. And I think we continue to see that as probably one of the largest forces. I think the other side, though, that has certainly emerged is the whole agentic side, which is more like: now you're having an AI system control the automation and actually make choices about what automation happens. The way we often talk about that is: do you have determinism at the core, or do you have non-determinism at the core?

 

Rob Stevenson  10:44  

What is the difference between those two things, philosophically and in this context?

 

Reid Robinson  10:48  

Yeah, so to give you an example: in a traditional Zap, just a structured workflow, you might have, to give you a really common one that we saw a lot, a new email come in, right? You might have a new email come in from someone at, I don't know, a company that wants to be featured on a podcast, right? You might then have a step in the workflow that is a ChatGPT step, or a Gemini step, or an Anthropic step, whatever, that has a prompt that essentially tells it how you would typically like to reply to that email. Maybe you have it look up information about the company, figure out how big they are, what market they serve, all sorts of things. You then have the model output a draft body of an email response, and then your following action would be creating a draft email reply in that same thread. So that, you know, when you come into your workday, you have email drafts to review; they've already been roughly drafted, close to your style, as well as you can get the model to do. That's an example of deterministic behavior: it's, you know, email in, ChatGPT, email out, right? Whereas a non-deterministic core is: you give an agent, an assistant, whatever, the ability to check your emails, the ability to do some research, the ability to craft emails, the ability to maybe DM you in Slack, and you just kind of tell it, I want these things to happen, go use the tools that are at your disposal to fulfill my wishes, and see what happens.
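The "deterministic core" workflow Reid walks through can be sketched as a fixed pipeline with a single model call in the middle. This is an illustrative sketch, not Zapier's implementation; all function names are hypothetical, and the LLM step is stubbed out so the example runs offline:

```python
# Sketch of a deterministic-core workflow: trigger -> enrich ->
# LLM draft -> save draft, always in the same fixed order. Only
# the draft_reply step is non-deterministic in a real system.

def lookup_company(domain):
    # Stand-in for an enrichment step (CRM lookup, web search, etc.).
    return {"domain": domain, "size": "unknown", "market": "unknown"}

def draft_reply(email_body, company, style_prompt):
    # Stand-in for the LLM step (ChatGPT, Gemini, Claude); a real
    # Zap would send a prompt to a model API here.
    return f"Thanks for reaching out about the podcast ({company['domain']})."

def on_new_email(email):
    """Run the fixed pipeline for one incoming email."""
    company = lookup_company(email["from"].split("@")[-1])
    body = draft_reply(email["body"], company,
                       style_prompt="reply in my usual tone")
    return {"thread_id": email["thread_id"], "draft_body": body}

draft = on_new_email({"from": "guest@example.com",
                      "body": "We'd love to be featured on the show.",
                      "thread_id": "t-123"})
print(draft["draft_body"])
```

The contrast with the non-deterministic core is that here the sequence of steps is fixed in code; an agent would instead choose which of these tools to call, and in what order.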

 

Rob Stevenson  12:19  

I'm glad you mentioned ChatGPT, because it's not merely been an opportunistic thing to pipe into your product. If memory serves, you were, like, a launch plugin for ChatGPT. Is that right?

 

Reid Robinson  12:32  

Yes, we've done launches with them three times now with OpenAI, both for plugins and, most recently, for GPTs.

 

Rob Stevenson  12:39  

With the extent of that partnership, was it just, like, you kind of checking, okay, this is accurate enough, our customers can use this? Or how did you know that it was gonna

 

Reid Robinson  12:48  

be a good fit? Let's rewind to plugins. That feels like a lifetime ago, but it was just over a year ago. The plugins launch was an exciting one for us. For anybody not familiar, Zapier launched with plugins with something called AI Actions in the plugin, which enabled you to give ChatGPT the ability to execute any of the 20,000 actions on Zapier, again, in a non-deterministic way. Essentially, you'd say, I want to give ChatGPT the ability to send emails on my behalf, or draft emails and send Slack messages, and then it would try to determine what your request was, and then Zapier would turn that into an API request. How we got to assess what level of accuracy was good? That's a good question. Quite honestly, at that stage, we just wanted to see what people could do with the technology. I think we got it to the point where, from our R&D efforts, it went from a pipe dream of, could we get a model to do this, could we get a system to do this? And we did. And then we said, okay, can we make it work? All right. And when we launched, we were at about 50% accuracy. And that was 50% accuracy on a very loose metric of what accuracy means. I think at the time we were really just using: did the API request return a 200, and did the user not give it a thumbs down? Right? So pretty loose. Like, did it do the right thing? Whole other ballpark.

 

Rob Stevenson  14:14  

Did it succeed? Yeah. And did the person like it? Right, okay.

 

Reid Robinson  14:18  

Very different metric. So we had a very rudimentary understanding and a very rudimentary perspective on how that would work. Particularly, again, it was so new at the time, we didn't know what use cases customers would really want to do that with. Turns out it's really not too different from what they do with workflows, is what we learned.
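The "loose" launch metric Reid describes, an HTTP 200 response plus no user thumbs-down, is simple to state in code. A minimal sketch, with assumed field names rather than Zapier's actual schema:

```python
# Loose accuracy: the fraction of requests that returned HTTP 200
# and were not explicitly thumbs-downed by the user. Note this says
# nothing about whether the action was actually the right one.

def loose_accuracy(requests):
    if not requests:
        return 0.0
    ok = sum(1 for r in requests
             if r["status"] == 200 and not r.get("thumbs_down", False))
    return ok / len(requests)

log = [
    {"status": 200},                        # success, no feedback
    {"status": 200, "thumbs_down": True},   # succeeded, but disliked
    {"status": 500},                        # API error
    {"status": 200},                        # success, no feedback
]
print(loose_accuracy(log))  # 0.5
```

As the conversation notes, "did it do the right thing" is a whole other ballpark: that requires labeled judgments of intent, not just response codes.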

 

Rob Stevenson  14:37  

Gotcha. It's an interesting larger point of, how accurate is accurate enough? And I guess it depends on the use case and how high the stakes are. But how do you kind of think about accuracy over there?

 

Reid Robinson  14:48  

Yeah. So I think for us, when we're looking at accuracy, it certainly comes down to how it is incorporated into the product we're thinking about, and secondly, what stage of product we are trying to get to market, and how much we even know about how we could make it accurate without starting to get it in front of users and seeing what happens. So for instance, for anybody that's used Zapier recently, or in the last couple of months, we built something called the AI Zap builder, internally referred to as the zap guesser, essentially, which is the service where you say, hey, every time something happens in Slack, I want you to add it to a Google Sheet and send it to Trello, and it would create a draft outline of a workflow for you and get it in front of you. When we first started working on that and getting it in front of users, it was pretty early days. But it was one of those things where, in order to really start to drive the accuracy of that thing, we needed to understand what users were trying to do with this, what was real. Because we didn't really have a concept of what a natural-language definition or description of a workflow is, right? We can sit around and think about how a user might say that, and we can go through, you know, some past emails that we have from customers and past ways that users talk about it. But it's very different when you put a product in front of them and give them that amount of text room to start running off of, to see what they input. So I don't want to say it's a chicken-and-egg thing, but it's a fascinating one, where you don't even know what to build the right frameworks off of. I think at the time, we were using synthetic data to just generate descriptions of workflows, right? Like, hey, can you just describe this series of workflow steps as if it was a sentence, right, and kind of seeing what it would come up with.
But as I'm sure most people listening to this know, synthetic data is quite different from real human data. Those things can only get so good. And humans are very non-deterministic.

 

Rob Stevenson  16:48  

So when you say different, you mean worse?

 

Reid Robinson  16:52  

Yeah. More available, but worse. Yeah.

 

Rob Stevenson  16:55  

So the generated data you've seen, in what ways has it been worse?  

 

Reid Robinson  16:59  

Yeah, to go through that, first of all, I want to give a shout-out to Kristin Keller, our staff data scientist on our side working on AI stuff. When I start to get into the world of our actual data accuracy and evals and stuff, it's basically her, and I just get to learn from her, which has been phenomenal. But I know on this side of things, again, going through that example: a model being able to predict something from a combination sounds pretty straightforward. But oftentimes, what we found is, for what a user might enter into our Zap generator tool, the model is just wrong. Like, it comes up with a lot of bad descriptions. Sometimes, as best as we can try to prompt it, the outputs that it would have, the classifications that it was doing, are just wrong. And what we ended up realizing was we were spending a lot of human time evaluating the synthetic data to make sure that it was accurate before trying to do anything with it. And for instance, one of the things we found was that a few thousand examples that were not very meticulously manually validated performed worse than a tiny subset of human-generated ones that we had created. That was pretty startling, because at the time, especially, this was last year, there were a lot of reports on how valuable it was to have larger training datasets and a diverse set of datasets. And for us, we quickly found, and I think it makes sense, right, it's a quality-in, quality-out type of thing. It wasn't just about having the synthetic data so that it could give us sheer volume. It just performed worse, because it just wasn't the way that humans actually talked, and that's what the model at the end of the day needed to predict. Or there was the other problem of it just being wrong sometimes: it would just come up with a bad description or a bad classification.

 

Rob Stevenson  18:44  

Yeah, real-world data is always better. For now, anyway. You know, the map is not the terrain. Do you think that that will always be the case? Or is the fact that generated data is worse right now because of its infancy? Do you think it'll get better over time, and eventually maybe even be just as good?

 

Reid Robinson  19:01  

Yeah, I certainly think that models will get better with generated data, and I think the problem of the data just being straight-up wrong will get somewhat better. That being said, we still expect them to make mistakes, particularly on complex tasks they don't have the context or data to really know what to expect on. Again, to keep it on Zapier: your workflow could be like eight or nine steps, right? And a human might just be able to describe that in a succinct way. You try to throw that into an LLM today, and it's gonna give you this almost CSV-feeling kind of output. Obviously, you can prompt it a bit, but it's still not going to be how a human would typically describe it, because humans are gonna have a different perspective on what they're trying to go for. So I don't expect that to completely go away. And the second thing is really the fact that it doesn't look like human data. Again, even with the best prompting we possibly could do, it still comes down to the fact that humans talk weird, I guess. And particularly when you're asking a human to describe a complex process, humans are going to have very different ways to talk about that, depending on their job, their training, their background, their language, so many different factors. So, you know, just having the diversity of ways that humans think as part of that is a lot more valuable than what a strict model is gonna get at this point. So that one is unlikely to go away.

 

Rob Stevenson  20:33  

Because we're speaking about synthetic data and generated data, I've got to ask: how are you kind of using generative AI over there? Is it working behind the scenes? Are you offering it up to customers? How is that taking place?

 

Reid Robinson  20:44  

Yeah. All right, so Zapier has got like two main camps on that, or three, I guess. One is using it within a workflow, which is probably the easiest way to talk about it. That's what we talked about before. That's, you know, you want to be able to do something with a model in the middle of a deterministic workflow, right? A lot of use cases for that. Content is probably the number one use case, right? Just anything related to content. Customer communication as well, whether you have, for instance, trained an OpenAI assistant on your documentation and prompts. Apologies to all the engineers listening to me say the word "training"; that is just how users talk about this. They love saying they're training their models, but I'm like, you gave it a prompt. But anyway, using the Assistants API with files and prompts, and being able to incorporate that to help you reply to customers, and reply to inbound, or outbound email creation, is really cool. All sorts of Slack bots; there's a huge amount of Slack bots on Zapier with knowledge bases. I built out one for our engineering team that does that for what we call our engineering index, like our internal repo. So if people have questions, they can actually ask it, and it goes out, does the research within an assistant, comes back with the answer, and replies in a Slack thread. So, cool stuff you can start to do there. The other side, what our engineers build, is, you know, I call it more under-the-hood type of stuff, and that's sometimes things that users see and feel. The AI Zap builder is a great example. We built out a chatbot builder for folks. Trying to meet users at the right time and place, where AI can actually solve a pain point, is probably one of the bigger areas that we focus on.
For instance, formatting text in a workflow has always been a long-standing annoying thing to do in Zapier, because typically, if you got something that came in from a customer, but your Typeform had full name, and your HubSpot needs first name and last name as two separate fields, well, prior to these AI components that we were working on, you needed to figure out that you needed to use a step in Zapier called Formatter, and then click on Text, and then click on Split Text, and then map the right field. And, you know, it was a lot to even figure out how to do. And now, as you're working through your workflow in Zapier, we just ask, do you want to format this value? And then you just use natural language and say, yeah, I want to split this, and our system will just add the right Formatter step for you. We actually do the same for code generation. We saw, when ChatGPT came out, the usage of our code steps in Zapier really went up, almost at the exact same point. And so we actually met users in our editor with a kind of chat code assistant; you know, we gave it some information on how our code steps operate to help them along. And that really helped a lot of users use code steps in their workflows. So that's one side, AI under the hood. And then lastly, we have an initiative that internally we called "disrupt the core", and this is where you have that team of hackers building something as if they were trying to destroy the company, right? Which is awesome. I love that. And for us, this is what we've now launched as Zapier Central, which is an agentic workflow product. At its core, it uses kind of that tech that we had early on, the same AI Actions from the plugin.
And it enables a user to just build bots for their workflows without any of that; you're not doing any field mapping, you're not building the workflows. You're really just describing to a bot what you would like it to do and authenticating your apps. We even built out a function that allows for kind of RAG over apps as well, so you don't actually need to upload your files from Airtable or Google Sheets. You're just connecting them, and it's going to do RAG over them as part of that. And we're experimenting with how far we can push that, like, how close can we get to real, valuable workflows to the point where you don't actually need to build a workflow? Yeah, we just launched a Chrome extension for Central. That's actually quite fun. So those are the three buckets, if you will.
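The full-name split that Reid says the Formatter step now adds for you is a small, concrete transformation. A sketch of that transformation, not Zapier's actual Formatter logic, with hypothetical field names:

```python
# Split a single "full name" field (e.g. from Typeform) into the
# separate first/last fields another app (e.g. HubSpot) expects.
# Multi-word surnames are kept together in last_name; an empty or
# single-word input degrades gracefully.

def split_full_name(full_name):
    parts = full_name.strip().split()
    if not parts:
        return {"first_name": "", "last_name": ""}
    first, *rest = parts
    return {"first_name": first, "last_name": " ".join(rest)}

print(split_full_name("Ada Lovelace"))
# {'first_name': 'Ada', 'last_name': 'Lovelace'}
```

The point of the AI-assisted builder is that a user says "split this into first and last name" in natural language and the equivalent of this step gets inserted and mapped for them.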

 

Rob Stevenson  25:07  

Gotcha. When you mentioned the Slack bot that is trained on, like, internal engineering data: this idea of the LLM-enabled copilot comes up constantly, you know. I feel like we're about to see a whole new class of entrepreneurs starting companies that are built just around that. And it sounds like you did it in a couple of days, just plugged it into Slack. So,

 

Reid Robinson  25:28  

Oh, forget a couple of days, you could do that. I want to be clear, anybody listening to this can do that within like 10 minutes. I literally just built one today. I had a buddy of mine who wanted to build an app onto Zapier, and I, in my mind, wanted to try this experiment. And so what I did was, I worked with ChatGPT to generate a terminal script that would allow me to get all of the files from a GitHub repo that I had in a zip folder. The problem is, if you try to upload that folder to a GPT, or the Assistants API, it doesn't really work, because it's too many files in a weird format. But apparently ChatGPT and terminal can convert all of those folders into a single text file, with markdown separation of the information, to make it easier for the model to understand for retrieval, and you just build that as a GPT, or as a Zapier bot that you can then use. And I literally did that between meetings this morning. Like that.
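Reid's actual ChatGPT-generated terminal script isn't shown in the episode, but a rough sketch of the idea, flattening a repo into one retrieval-friendly text file with markdown headers separating the files, might look like this (the function name and header format are assumptions):

```python
# Flatten every file in a repo directory into a single text file,
# with a markdown heading marking where each file begins, so a
# retrieval model can tell the files apart after upload.

from pathlib import Path

def flatten_repo(repo_dir, out_file):
    chunks = []
    for path in sorted(Path(repo_dir).rglob("*")):
        if path.is_file():
            rel = path.relative_to(repo_dir)
            chunks.append(f"## {rel}\n")          # per-file separator
            chunks.append(path.read_text(errors="replace"))
            chunks.append("\n")
    Path(out_file).write_text("\n".join(chunks))

# flatten_repo("my-repo", "repo_flat.txt")
# then upload repo_flat.txt to a GPT or an Assistant as one file
```

Binary files would need filtering in practice; `errors="replace"` just keeps the sketch from crashing on them.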

 

Rob Stevenson  26:33  

Don't spend $20,000 on that software.

 

Reid Robinson  26:36  

Yeah. And I shared it with our developer platform team, because at that level, what we could do now is potentially experiment with putting that into our Zendesk flow, right? So if tickets come in that that bot might be able to help with, ones that are more technical, it could take a first stab at a reply and give our team a draft, which we actually do have some of in our own Zendesk sidekick. So there are some cool capabilities. I'd say the people who are being enabled the most with AI in the current stage are the technical tinkerers. I think a lot of these tools are too technical for average knowledge workers. But if you're at the kind of more technical edge, like, not an engineer, but technical enough, I'd say you're in the sweet spot currently for who AI is helping the most.

 

Rob Stevenson  27:27  

Yeah, that describes you and me, I think. So I'll pat myself on the back here at the end of the episode. But before I let you go, since you're an entrepreneurial guy, and you have former employees asking you when your next company begins, I want to ask you when your next company begins, but if you were to begin one, what space would you want to work in? What kind of company would you want to start?

 

Reid Robinson  27:48  

This is funny. I'll give one quick thing first: when I did my startup, we launched about three weeks before my first kid was born. So quick advice for anybody listening to this: do not start a company when you have your first kid. Yeah, one big thing at a time. Horrible idea. Yeah, exactly. So these days, when I think about starting a company, I really try to think of ones that you can do with as little investment needed as possible, and ones where you're going to be able to find a community of 1,000 to 5,000 people that are just pumped about what you're doing and paying you 20 bucks a month. Something like that is honestly, I would say, the dream-style one that I think through right now. And in terms of spaces, I think education is going to be really fascinating. I think through a lot of the longer-term impacts of AI in the workplace and what it's going to mean, and I think, for better or worse, we're going to see a huge need for a lot of education tech coming online very quickly, as you have workers who are not traditionally impacted by large shifts in technology all of a sudden being impacted, and what that's gonna mean for education and how you learn skills and knowledge. And I think there's a lot of ways that tech can help with that. The other side is also just taking this AI tech into niche industries. Like, one of the things working at Zapier, I'm always fascinated by the breadth of tech that's niche, right? Like CRMs for hair salons. That's a thing. So, what does everything we just talked about, with these models that can help you do things, look like for folks who are contractors, right? Like, a buddy who's a plumber, or an electrician; what does their life look like in a way that might actually benefit from these sorts of things?

 

Rob Stevenson  29:34  

Yeah, that's a good outlook. There are all these niche areas that just need you. We're so used to working in tech, we're just like, everyone has this functionality, right? And then you look at, like, what is the POS that, you know, they're using at the bar, and like, I think we could beat this, you know?

 

Reid Robinson  29:48  

Yeah, that's fun to work on.

 

Rob Stevenson  29:50  

Fun to think about. Fun to think about, and fun to have you on this episode with me here, Reid. So, as we creep up on optimal podcast length, I'll just say thanks so much for being here, for sharing what you're working on over there, and your outlook on some of this stuff. I've really, really enjoyed chatting with you.

 

Reid Robinson  30:04  

Great chatting with you, Rob. Thanks, everybody, for listening.

 

Rob Stevenson  30:08  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.