How AI Happens

Assessing Computer Vision Models with Roboflow's Piotr Skalski

Episode Summary

Piotr discusses his criteria for evaluating computer vision models, as well as a breakdown of what makes Meta's recent "Segment Anything" Model exciting.

Episode Notes

Today’s guest is a Developer Advocate and Machine Learning Growth Engineer at Roboflow who has the pleasure of providing Roboflow users with all the information they need to use computer vision products optimally. In this episode, Piotr shares an overview of his educational and career trajectory to date: from starting out as a civil engineering graduate, to founding an open-source project that was way ahead of its time, to breaking the million-reader milestone on Medium. We also discuss Meta’s Segment Anything Model, the value of packaged models over non-packaged ones, and how computer vision models are becoming more accessible.

Key Points From This Episode:

Tweetables:

“Not only [do] I showcase [computer vision] models but I also show people how to use them to solve some frequent problems.” — Piotr Skalski [0:10:14]

“I am always a fan of models that are packaged.” — Piotr Skalski [0:15:58]

“We are drifting towards a direction where users of those models will not necessarily have to be very good at computer vision to use them and create complicated things.” — Piotr Skalski [0:32:15]

Links Mentioned in Today’s Episode:

Piotr Skalski on LinkedIn

Piotr Skalski on Medium

Make Sense

Roboflow

Segment Anything by Meta AI

How to Use the Segment Anything Model

How AI Happens

Sama

Episode Transcription

Piotr Skalski  0:00  

We are drifting towards the direction where users of those models will not necessarily have to be very good at computer vision to use them and create complicated things.

 

Rob Stevenson  0:14  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence.

 

Rob Stevenson  0:23  

You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens is Developer Advocate slash ML Growth Engineer over at Roboflow, Piotr Skalski. Piotr, welcome to the podcast. How are you today?

 

Piotr Skalski  0:52

Thank you very much. Thanks for having me. I'm based in Poland. So for me, it's quite late. So I'm a bit tired, but still excited to be part of the podcast today. So yeah,

 

Rob Stevenson  1:03  

Yeah, thanks for meeting me halfway here. It's the very end of your day and the beginning of mine, so thanks for sticking with me and burning the midnight oil over there in Poland. You are a Developer Advocate slash ML Growth Engineer, and I'm not sure that completely explains it; you are more than your title, and in this case, you're more than two titles. How would you explain your current role at Roboflow?

 

Piotr Skalski  1:25

Yeah, I always have a little bit of trouble explaining what it is, but it's really close to being a Developer Advocate. ML Growth Engineer is just a fancy name, mainly because I'm partially on the marketing team and partially on the engineering team, and I focus mostly on our users who are building stuff using machine learning. My responsibility is basically to be the person that, when you think Roboflow, you think about me; when you have a problem with our product, you can reach out to me and we can discuss it. And my responsibility is to show you how to use our product in conjunction with all sorts of other models. So this is basically my responsibility.

 

Rob Stevenson  2:15  

Got it. Thanks for clearing that up a little bit. So would you mind sharing a bit about your background? Because before you were a Developer Advocate, before you were showing up on Roboflow's YouTube channel assessing computer vision models, you were yourself a practitioner, and you were more rolling up your sleeves. Could you share about that background?

 

Piotr Skalski  2:32

So yeah, originally I actually finished civil engineering. That was the degree I finished when I was at university the first time, because I went back: after a year of working in that field, I decided I wanted to do something else, and I went back to the university to do computer science once again. But long story short, after a few months I got my first job in machine learning slash computer vision, and I never looked back. For the past five, six years I was doing mostly computer vision projects, in many cases multiple at the same time. So working for many companies simultaneously, large ones, small ones, startups, different sizes, but I was always the guy who was responsible for researching the models and deploying them. So I have quite a broad background when it comes to computer vision in practice. And I was looking for new options. I was always quite close to educating other people, but mostly in written format. I was doing blogs on Medium for a few years, where I recently crossed 1 million reads, which is a milestone. So there's that. And because of my open-source project called Make Sense, which is an annotation tool, open source, written in TypeScript, people at Roboflow were quite aware of my existence. And when I sent the resume, they were happy that I'd done that, and I joined quite fast. So that's the story behind me joining the team.

 

Rob Stevenson  4:19  

Will you tell me more about the annotation side project you had?

 

Piotr Skalski  4:23

Long story short, my first job after I switched from civil engineering to software engineering was in JavaScript; I was a front-end engineer, but it was a pretty complicated product. So I was not responsible for typical stuff like, I don't know, CSS and all that. It was a company that was building an image-processing tool in the browser, so there was a lot of logic: you can imagine grouping objects, rotating groups of objects, all sorts of scaling, copying, pasting, all sorts of stuff that happens inside that editor, in the engine itself. So it was not really me coding HTML, rather a lot of logic in TypeScript. So I had a background in TypeScript. But what I was really doing in my free time was first statistics, then machine learning, and then computer vision. I was very passionate about computer vision at that point, I just didn't have an opportunity to showcase it. So I decided, okay, I would like to combine what I know, which is the JavaScript part, and what I'm interested in, which is computer vision, and try to build something. At that point, it was like five years ago, and if you go back in time and see the scene of annotation tools that you had five years ago, it is like day and night compared to what you have right now. Literally, there were just a few tools that ran in JavaScript in the browser; most of them required installation, and they were based on some ancient technologies. So if you installed them on Mac, they behaved completely differently than if you installed them on Linux, for example: buttons not working, all sorts of different weird behaviors. So back then I decided, okay, that's a pretty good idea. I was even thinking about maybe commercializing it and building some company around it, but it never materialized, so I just kept it fully open source. At this point it has, I don't know, 2,500 stars, so two and a half k. For something that I started when I was still living in a dorm room, and I don't code at all in that project anymore, it is respectable. And I checked, because I have Google Analytics plugged in, so I know more or less how many people visit: I still have, I don't know, around a dozen thousand visitors weekly. So the tool is still very much alive, although I don't do anything. And back then I built it smartly enough that I don't pay a big bill on AWS for it. So yeah, it is something that exists that helps other people build projects for free and grow their deep learning skills. From time to time I see somebody on YouTube, for example, doing a tutorial about computer vision, object detection, or other things, and they use my tool, and then I'm super happy. So yeah, that's the story behind it.

 

Rob Stevenson  7:33  

Do you think you'll ever revisit it and maybe try and make it into a business? Or are you really pleased with the open-source aspect?

 

Piotr Skalski  7:39

So five years ago, when I really started to see the growth in stars, for the first few months I was really thinking about it, because honestly, looking at it in perspective, that was the right time to double down on it. You know, I had this gene of creating a company, being kind of like that entrepreneur, but I was too young and maybe too scared to double down. If I had my current mentality back then, I'm pretty sure I would have gone for it. But I didn't. And at this point, right now, there is absolutely no point for me to do that, because any competitive advantage that I had by being one of the first who did that is already gone; that train is already way gone. So I don't really think so. Still, it's super cool to have this kind of project in your portfolio. I was actually offered a few times to sell that project for a reasonable amount of money, and I didn't. And given the fact that literally every job that I got over the last five years, and I was even thinking about this some time ago, every job that I got for five years is in some way tied to that project. People write to me: hey, I saw that project, maybe you could help us. So it's such a cool thing to have in the portfolio, there's no point really to sell it. Maybe it's good money, but it's not money that will change my life, and at the same time it's such a good marketing tool for me to get other job offers that selling it makes no sense. So it will stay open source. That's what I think.

 

Rob Stevenson  9:22  

Yeah, yeah, I love that approach. It sounds like, by it being accessible, you realizing that you're giving this information out there for free and people can just have it, it reflects well on you. It shows that you have this expertise and knowledge, which is not dissimilar from what you're doing now with the advocacy part of your role at Roboflow. I'll sum it up in a quick way that's probably hamstringing it, but people can check it out for themselves. Basically, you publish quite frequently on the Roboflow YouTube channel, talking about lots of things, but primarily you are assessing new computer vision models. And so people can go check that out, and they should, because as these models come out, as they are published, you are covering them, you are showing how to use them, which is fascinating to watch. I'm curious to hear from you how you like to assess these tools. What are your criteria for assessing whether a model is doing something new and exciting or worthwhile, or has use cases? How do you sort of break these models down for folks?

 

Piotr Skalski  10:20

So maybe just one sentence to add to what you said about what I'm doing over there. Not only do I showcase those models, but I also try to show people how to use them to solve some frequent problems. So it's not only about showcasing the model and showing how it works, how to load it, and how to run inference, but also how I can use it in use cases that are quite typical. But coming back to your question, how do we assess those? It all depends on the model itself, on what it does. There are some use cases that are pretty well defined, like, I don't know, real-time object detection. If I have an opportunity to play with a new model from that space, then, because I was doing so much stuff with real-time object detection over my years as a computer vision engineer, I already have a list of things that I look for, a list of things that I know may surprise me positively or negatively and can open some routes for that model to thrive. But there are also some models that are so unique, that have a completely different approach to solving some problem, and then, obviously, I have some things on my list, but it's mostly about how I can creatively use that model to either solve some use case that is already well defined (maybe there is a new model, maybe I can apply it in some non-obvious way in object detection, for example), or it's about having an open mind and being able to connect the dots, knowing that, okay, this model can actually do something that was never done before. I'm obviously trying to do that on my own, but I also try to share it with the community, either on LinkedIn or on Reddit, and people over there in the comments can come up with all sorts of crazy ideas, and that's great.
So, for real-time object detection, the things that I'm most interested in: sure, obviously, if it's real time, then it needs to be real time. Strictly speaking, real time doesn't really exist, but with some assumptions you can say real time is 30 frames a second. If it's too slow, then maybe we can still use it in some use cases, but it will complicate our life. So any time a model can really fulfill that requirement for real-time object detection, I check that off my list and we can discuss other things. Obviously, if we want to use it in production, it needs to be reliable, and by that I mean the accuracy. There is no point in using a new model if it's worse than the previous state-of-the-art model. So there is also that. But there are also other things that are less obvious; those two points come to everybody's mind quite intuitively. If you use those models and you spend years in the community, what you care about is support, for example. This is something that is happening all the time in the computer vision space, and I believe in other fast-growing machine learning spaces too: people don't support their work. They release a paper, release code. Very often, although I have a lot of respect for people who release code as open source, that code is far from perfect from an engineering perspective. And they just forget about it; they move on to the next paper, next project, next model. So sure, if the model is state of the art, that's great.
But if nobody is supporting it, then I guarantee you that in six months, when all the dependencies move to the next version and the next version and deprecate something, nobody will support that model, nobody will fix those problems. Sure, you can have people who are proactive and just fork the project and support it for the community; that happens. But it's certainly much easier when you have organizations that maybe don't promise long support for the model, but have a history of supporting models for a long time, groups of people who have demonstrated in the past that they will not abandon the project after a month. That's a big plus on my side, because it takes a lot of headache from my head onto somebody else's, and that's perfect. What else is there? There are some engineering things, like, is the model actually easy to install, easy to use? Frequently what happens is there is no package. In Python we have pip packages; that's the system for distributing code. Very often those models don't have any package, which makes installation painful. And I'm not talking about a single one-time installation, because one time I can handle; I will figure out how to plug all those things together. But if I want a reliable system that will run for years, and I want to automate stuff, so I want Docker images and CI/CD pipelines that can build those images, test changes, and all sorts of other things, and I don't have a pip package, I have a lot more headache than if I have it. So I'm always a fan of models that are packaged and have some better quality of engineering. I would gladly give up one percentage point on the mAP metric and not use the state-of-the-art model, but have somebody on the other side who cares about that project, who develops it, who maintains all those things. Those are also things that I care about. And coming back to what I said, there are also sometimes models where I don't have those guidelines and I can just guess, but they create so much buzz that you internally know the model will do a lot of good things in the community. Then it's a completely different approach: it's more about being able to recognize the properties of that model that are unique, and you simply need enough experience, because you've seen tens, dozens of models in the past, and here is something that is completely unique. Obviously, I gravitate towards those models; that's what's happening.
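
To make the "packaged model" point concrete, here is a minimal sketch of what a pip-packaged, real-time detector looks like in practice. The ultralytics package is used purely as one illustrative example of good packaging (it is not named in the conversation), and the weight name and image path are placeholders.

```python
# Minimal sketch: a pip-packaged detector is install-and-go.
# Assumes `pip install ultralytics`; "yolov8n.pt" is downloaded automatically
# on first use, and "street.jpg" is a placeholder image path.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small pretrained real-time detection model
results = model("street.jpg")     # run inference on a single image

for box in results[0].boxes:      # iterate detected objects
    label = model.names[int(box.cls)]
    print(f"{label}: {float(box.conf):.2f}, xyxy={box.xyxy.tolist()}")
```

Compare that with cloning an unpackaged research repo, pinning its dependencies by hand, and patching it when they break; that maintenance burden is exactly the support and packaging criterion described above.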

 

Rob Stevenson  17:21  

Well, one that you gravitated to recently, that I think meets all those criteria you outlined, right? It's easy to use, it's supported by an institution that will presumably continue to support it, and its use case is immediately apparent, but it's also interesting and novel: Meta's recent model, Segment Anything, which I think came out last week. You and I were about to record, and then the model came out, and you were like, "Rob, my world has just been rocked. I have to cover this, we need to record later," which was totally fine. But your most recent video is about how to use the Segment Anything Model by Meta, and I'll include a link for people to check that out if they want to get into how they can fire up their own instance. But I'm curious: we know it meets those three criteria you outlined, but what is exciting about the model technologically? What has you fired up about this specifically?

 

Piotr Skalski  18:09

So typically my calendar is pretty organized and I know when I do what, but from time to time the nature of my job is that I need to react. When something exciting happens, I need to reorganize my whole calendar, because that most likely means that for the next few days I will try to create demo projects, try to record a video, try to write a blog post about it, read the paper. So that was the case. So yeah, once again, I'm sorry for that. As for Meta's new model, yeah, it's really exciting to see something like that. The model is creating the buzz because it's doing things completely differently than we are used to. Like I said, we have some already recognized use cases where we feel comfortable as computer vision engineers: okay, I have object detection, I have segmentation; okay, our clients have this need, maybe we will apply this, maybe we will apply that. We already have those recognized routes we feel comfortable in. And here we have a new model that is approaching those things differently. I'm not saying it's the first model that approaches those particular cases in this specific way, but it's very good at it, partially because Meta spent a lot of time building a completely new dataset. And that's just the thing with deep learning: very often, even models that are not groundbreaking from the engineering or architecture perspective, if you take them and train them on a high-quality, large dataset, you will get good results. That's what they did. They have, I believe, 1 billion masks on, I believe, 11 million images, so that's a bit less than 100 masks per image on average. You can imagine that's a lot of work to create something like that. They approached, like I said, the problem of segmentation completely differently. What we usually see is that we have a polygon or mask and we have a label, and we know that here's a person and we have the mask that describes the person, and here is the dog and we have the mask that outlines the dog. Cool. Well, they said: I don't really care about what it is, I care that it is a thing that is there. What that means is, in the past, when we talked about, I don't know, for example, YOLACT or Mask R-CNN, those models could recognize things based on the dataset they were trained on, in the sense that they had a limited amount of classes they were capable of detecting, and if you showed them something completely new, they would fail, because they don't recognize the properties of the thing as a general concept, they recognize those specific classes. And SAM is doing things completely differently. We don't really have labels with names of objects; we have a lot of masks describing all the different items in the image. And it can recreate that: it can actually detect plenty of objects in the scene without knowing specifically what they are. But because we have other models, we can leverage that to solve new things, to build new things, and to solve old problems in new ways. So there will be plenty of spaces where that model will shake up the current status quo, and the first ones that come to my mind are, obviously, image editors, video editors, labeling tools. In the space where we work, I can tell you that everybody right now is scrambling to add that model to the set of capabilities of those editors, because annotating polygons is very time consuming. If you want to do it, it takes ten, sometimes even more, times longer than a bounding box. And just imagine that you can draw the actual bounding box, but instead of having only those four points, you have an accurate mask around that object. So yeah, it's very exciting.
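
For readers who want to try the class-agnostic behavior described above, here is a hedged sketch using Meta's open-source segment-anything package: SAM proposes masks for everything it finds in an image, with no class labels attached. The checkpoint file name below is the ViT-H weight Meta distributes; the image path is a placeholder.

```python
# Sketch: class-agnostic "segment everything" with SAM.
# Assumes `pip install segment-anything opencv-python` and a downloaded
# SAM checkpoint; file paths below are placeholders.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)  # SAM expects RGB

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

masks = mask_generator.generate(image)  # one dict per proposed region, no labels
print(f"SAM proposed {len(masks)} regions")
for m in masks[:3]:
    # each entry carries a boolean "segmentation" array plus area, bbox, and a quality score
    print(m["area"], m["bbox"], m["predicted_iou"])
```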

 

Rob Stevenson  22:45  

You wouldn't be drawing a bounding box, right? Like you would be clicking on an item, and then...

 

Piotr Skalski  22:49

You have two options to provide an input. You can either click wherever on the image you see suitable, and the model can kind of extrapolate towards the bigger object, treating the point that you selected as part of that object. There is a small problem with that, because the model itself understands ambiguity, in some sense. So you can imagine, people don't see me right now, but I have a t-shirt that has some sort of logo on it, and just imagine that you selected a point on that logo. In reality, what is it that you are after? Are you after that logo? Are you after the t-shirt? Or are you after the person that is wearing the t-shirt? So with a point you increase that ambiguity, because you have those multiple levels that you can go after. The cool thing is that the model understands that ambiguity exists, and you can actually ask the model to return all those masks to you, and you will be responsible for post-processing them. That gives you an opportunity to select the one that you are interested in. The problem with most of the architectures right now is that even if they have some clever way of segmenting stuff, usually they don't give you the choice. They just have some idea that they hard-coded into the model, and the model behaves that way, so it will always return, I don't know, the higher-level object, always the person. Here you have that level of granularity and you can select which of those levels you would like to operate on, and that's absolutely awesome. I think that's another thing that will certainly open up possibilities, and I was playing a little bit with that in my latest demo. And that's why the bounding box is better in some ways, because it gives you this constraint, and the model will try to find the object that fills that bounding box as much as possible, so in many cases it's less ambiguous. Although you can handle the ambiguity on the code side, and maybe in a labeling editor it's actually better to do it that way. I would say that the customers will decide which one is better, and most likely, at some point, all those annotation tools will either implement both ways or just figure out a good way of solving that problem from the customer's perspective. But there are two possibilities; like you said, you can do it either with points or with bounding boxes.
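
The point-versus-box prompting and the ambiguity handling described here map onto SAM's predictor interface. Below is a minimal sketch assuming the same segment-anything package and checkpoint as above; the click coordinates and box are made-up placeholders.

```python
# Sketch of SAM's two prompt types. A point prompt with multimask_output=True
# returns several candidate masks (logo vs. t-shirt vs. person) for the caller
# to choose from; a box prompt constrains the region and is less ambiguous.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)

# 1) Point prompt: one foreground click, ask for all plausible interpretations.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[420, 310]]),  # placeholder (x, y) click
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)
chosen = masks[np.argmax(scores)]         # or present all candidates to the user

# 2) Box prompt: the box itself resolves most of the ambiguity.
box_masks, _, _ = predictor.predict(
    box=np.array([350, 200, 600, 700]),   # placeholder x0, y0, x1, y1
    multimask_output=False,
)
```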

 

Rob Stevenson  25:36  

Now, the use case for image editing and video editing was immediately what stood out to me. And I am not a machine learning engineer, right? However, here I am looking at this model and thinking, well, I could really use this, this could really help me in my work. Tell me if this is a myopic way of viewing it, but does this, you think, signify a trend whereby these models are being pointed at traditionally non-technical professionals?

 

Piotr Skalski  26:04

So first of all, I actually had quite an interesting conversation on LinkedIn. Over the weekend, I posted a meme about people who talk about AI, and that's a gigantic group of people, while people who actually implement AI are just a fraction of that group, and it blew up on LinkedIn. We had plenty of comments on that meme, and many of the people were non-technical. This is something that rarely happens to me when I post something; usually the people who respond to what I post are relatively close to the epicenter of the things happening here, close to ML, close to computer vision. And with those models, it is clear that there is a group of people who are either non-technical, or are coders but not really related to the field of computer vision, who immediately see value in that model. So this is another thing that gives you the hint that this model might be a bit more than just another model: when you feel the buzz, and that buzz is going way outside of your typical circle of people who are just interested in "oh, that's just another model architecture." You immediately felt that there are other people. And actually there were a lot of conversations in the comments where people pointed out that, in many cases, people who create those models don't even think about the use cases; they are just doing it for the sake of doing it. Maybe not in the case of Meta AI, because obviously they are part of a larger organization; I doubt they are doing something without having an idea, and I think that idea is VR. Yeah, we all know that they'll likely want to use it in a virtual reality setting, and I see how that can also be very useful in those scenarios. But because that model is so unusual and gives you so many new options, it will definitely fit in many spaces. So it's quite typical that people who build those models in many cases don't really care. I have a lot of respect for them, because I know that I wouldn't be able to do those things. It's a different thing for me to read the paper and understand what they wrote than to actually come up with that crazy idea and be able to implement it for the first time. I'd say there is a steep learning curve between those two levels, and I doubt that I will ever reach that next level, because I simply feel quite comfortable where I am, serving as the translator between those higher in the ranks and those lower in the ranks. But they often treat it as a puzzle: oh, I want to build something that solves it this way, and I did it and I feel happy with it, I have a new state-of-the-art score, and, like I said, on to the next project, I don't care a lot about this anymore, I'm solving other things. And here there are other people. Like I said, I immediately saw the use case in video editing; I immediately saw the use case in graphic design. So it happens, and I think that it's very good that it happens. And coming back to your original question, or at least the second part of it, I think that we are slowly but surely drifting towards language models. I have this very cool example with a different model; it's called Grounding DINO, also a pretty new one. When I was a computer vision engineer, I used to solve problems like: count how many people are traveling on the sidewalk on the left side of the road.
When you think about it, I need to detect people, I need to write logic that will somehow filter out those people who are on the left sidewalk, and then I need to write logic that will count those people. Or, I don't know, a different use case: count how many people are sitting on chairs, for example. Then you need to detect chairs, you need to detect people, and you need to come up with the logic to be able to say that some person is actually sitting on a chair, not standing next to it; they need to be over the chair, there has to be some overlap, plenty of things that you need to care about, and you need to write that logic. For an experienced computer vision engineer, that takes, I would say, days or weeks, it depends; if you want that logic to be bulletproof, you need to test it, so maybe weeks. Here we've got a new model where you can describe stuff with language. So I can actually create a prompt, and in that prompt I can say "detect people sitting on chairs," and that model will return all the bounding boxes of people that are detected, that are actually sitting on chairs. I don't need to write all that convoluted logic to figure out who is sitting, who is not sitting, what's happening; I can describe that with language. Just by that, you can imagine the difference in experience level of the person who used that model to solve an actual business problem, A versus B. In A, you need to have experience and weeks of time; in B, you have the model, you write a correct prompt, and you can basically have the same outcome. So because of that, because of the trend of using prompts to describe what you want to do, and having those models that are clever enough that they can figure it out, I think that we are drifting towards the direction where users of those models will not necessarily have to be very good at computer vision to use them and create complicated things, because they can be good at software engineering, have access to the right model, and know how to use it, and they can create very complicated stuff. So yeah, there is certainly a trend here.
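
As an illustration of the prompt-driven workflow contrasted here with hand-written logic, the sketch below uses the open-source GroundingDINO repository's inference utilities. The config and weight paths follow that repo's README but are assumptions, the thresholds are typical defaults rather than tuned values, and the image path is a placeholder.

```python
# Sketch: zero-shot detection from a text prompt with Grounding DINO,
# instead of hand-coding "is this person sitting on a chair?" logic.
# Assumes the GroundingDINO repo is installed and its SwinT config and
# weights are downloaded to the paths below (placeholders).
from groundingdino.util.inference import load_model, load_image, predict

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "weights/groundingdino_swint_ogc.pth",
)
image_source, image = load_image("cafe.jpg")   # placeholder image path

boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="people sitting on chairs",        # the "business logic" lives in the prompt
    box_threshold=0.35,
    text_threshold=0.25,
)
print(list(zip(phrases, logits.tolist())))     # matched phrase and confidence per box
```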

 

Rob Stevenson  32:29  

So attaching a language model to a computer vision model, this is going to be the Rubicon for non-technical people to use it, right? And that's kind of what we're seeing with ChatGPT. In the example you gave before, where it's like, oh, I can just tell the model "segment Piotr," and it's going to know that means his face, his neck, his t-shirt, his hat, his AirPods even, right? It's not just going to give me your face automatically; it'll be like, do you want his hat to be included or not? Do you want the AirPods included or not? So it knows that that's maybe part of you, but also that I might be asking for something different. And then giving the user the option to decide, that feels like the point at which someone like me, who doesn't have experience in engineering or writing code or anything like that, will be able to use a model like this. Just as a side note, in Photoshop and Pixelmator and other image editing tools, the segmentation has existed for a while with the magic wand tool: you can click, and then if you drag your mouse away from the thing you're trying to select, the segmentation will broaden according to the tolerance. And it's always worked like, okay, not great, you know, but that is the same thing that we're seeing here: what is the tolerance for what you're trying to select? The tolerance for Piotr would need to be pretty high to include your hat and AirPods, for example.

 

Piotr Skalski  33:48

Yeah, but this time, instead of having, I don't know, just a hard-coded amount of pixels that you pad, the segmentation is more substantial, because it's not like adding a layer of redundant stuff around the segmentation; it's like, do you want to include this thing or this thing? So it gives you this level of control, first of all, but also it's much more intuitive to use without knowing the software engineering that could maybe later on fix some issues or something like that. Here it's like, okay, I would like to include the face and the t-shirt and the hands and the hat, thank you very much, and that's awesome. And as for ChatGPT, by the way, this type of model that can take image and text is called a multimodal model, and it's a thing right now, obviously because of GPT. Obviously those models existed before, but GPT has this capability of just broadcasting to a large amount of people that something exists; they didn't know it before. And we still don't have access to ChatGPT with image input, so we can only guess what it will be. But one of the guesses is that some of the things we discussed today will be included in that model. So I'm actually very excited to get my hands on that model, because I'm curious whether or not it can do those things, like, can I just say, give me bounding boxes for people sitting on chairs? Because there are some models that are capable of that. But obviously GPT is a very broad model; it can solve multiple things, and because of that, it might not be the best at solving specific cases. We'll see. Certainly, even if it will not be the best at it, it will still give plenty of capabilities to people who have no experience in software engineering, and that is awesome, in my opinion. But I also know plenty of people who are a little bit pissed about it, because up until now they had this license to solve those problems, because they were computer vision engineers, and suddenly other people will be able to do that. But I guess we are living in an age where a lot of those licenses will no longer apply, all the more because it will democratize access to many of these capabilities. So it's pretty interesting what's happening.

 

Rob Stevenson  36:22  

Yeah, yeah, that's what we want. I think the democratization can only be good, that more widespread access to this tech. So Piotr, this has been a fantastic conversation. You laid out here for us how to assess models and why models are exciting. So at this point, I'll just say thank you so much for sharing your experience and wisdom and perspective. I've loved chatting with you today.

 

Piotr Skalski  36:40

Thank you very much. I was a bit tired, I hope that it's not noticeable, but apart from that, it was a pleasure to talk with you. These are also really cool topics; I could talk about them for hours. So, my pleasure.

 

Rob Stevenson  36:56  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.