How AI Happens

Transfer Learning & Solving Unstructured Data with Indico Data CTO Slater Victoroff

Episode Summary

Slater Victoroff, Founder and Chief Technology Officer at Indico Data, explains his company's approach to transfer learning, how they are solving the problem of unstructured data, and the current limitations in the field of AI.

Episode Notes

Irrespective of the application or the technology, a common problem among AI professionals always seems to be data. Is there enough of it? What do we prioritize? Is it clean? How do we annotate it? Today’s guest, however, believes that AI is not data-limited but compute-limited. Joining us to share some very interesting insights on the subject matter is Slater Victoroff, Founder and Chief Technology Officer at Indico, an unstructured data platform that enables users to build innovative, mission-critical enterprise workflows that maximize opportunity, reduce risk, and accelerate revenue. Slater explains how he came to co-found Indico Data despite a previous admission that he believed that deep learning was dead. He explains what happened that unlocked deep learning, how he was influenced by the AlexNet paper, and how Indico goes about solving the problem of unstructured data.  

Key Points From This Episode:


“Deep learning is particularly useful for these sorts of unstructured use-cases, image, text, audio. And it’s an incredibly powerful tool that allows us to attack these use cases in a way that we fundamentally weren’t able to otherwise.” — @sl8rv [0:02:44]

“By and large, AI today is not data-limited, it is compute limited. It is the only field in software that you can say that.” — @sl8rv [0:19:27]

“That’s really this next frontier though: This is where transfer learning is going next, this idea ‘Can I take visual information and language information? Can I understand that together in a comprehensive way, and then give you one interface to learn on top of that consolidated understanding of the world?’” — @sl8rv [0:26:05]

“We have gone from asking the question ‘Is transfer learning possible?’ to asking the question ‘What does it take to be the best in the world at transfer learning?’” — @sl8rv [0:27:03]

Links Mentioned in Today’s Episode:

"Visualizing and Understanding Convolutional Networks"

Slater Victoroff

Slater Victoroff on Twitter

Indico Data


Episode Transcription

Slater Victoroff  0:00  

We have gone from asking the question, is transfer learning possible, to asking the question, what does it take to be the best in the world at transfer learning?


Rob Stevenson  0:12  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence.


Rob Stevenson  0:20  

You'll hear from AI researchers, data scientists, and machine learning engineers, as they get technical about the most exciting developments in their field and the challenges they're facing along the way.


Rob Stevenson  0:33  

I'm your host, Rob Stevenson. And we're about to learn how AI happens.


Rob Stevenson  0:42  

Joining me today on How AI Happens is the founder and Chief Technology Officer over at Indico, Slater Victoroff. Slater, welcome to the podcast. How are you today?


Slater Victoroff  0:50  

Thanks so much for having me, Rob. I'm doing great. How about yourself?  


Rob Stevenson  0:53  

I'm doing really, really well. I'm excited to get into this stuff, because there's so much around the world of data that we can get into. I've spoken to a bunch of AI professionals now on the show, and no matter what vertical they're in, whatever the application of their technology, a lot of them are going through some of the same problems, and data, data, data is what comes up. Is there enough of it? What do we prioritize? Is it clean? How do we annotate it? Et cetera, et cetera. That's kind of where a company like Indico comes in. We'll get to that, though. Before we do, I would love to just learn a little bit more about you. Would you mind sharing a little bit about your background and kind of how you came to found this company?


Slater Victoroff  1:28  

Absolutely. The way that I would explain the founding of Indico all comes back to something I said to a professor of mine in 2012: "The war is over; deep learning lost." Now, I call that moment the most wrong I've ever been. But in 2012, it wasn't a particularly strange point of view to have. Right around that time, I was a sophomore in college, and I very luckily contributed to a somewhat impressive paper in ML, kind of my first real attempt to contribute to the field. So of course, at that point, I knew everything there was to know about the field. And I started doing Kaggle competitions with a friend of mine, Alec Radford, and the two of us would go on to found Indico. For the first six months, we were really using these traditional ML techniques: boosting and regressions, traditional scikit-learn sort of stuff. But Alec had this notion in his head, for a very long time, that there was something a lot more interesting and appealing in deep learning. And of course, me knowing everything in the field, having already written it off, I said, "That's cute. You go play in your corner, and I'll be over here doing the real work." Then something interesting happened after the first six months or so, which is that the traditional techniques we were using stopped being effective, and deep learning really started to come into its own. This was after AlexNet, so we had some sense that this was possible. But really, after those first six months, the traditional techniques never won again. At first I thought maybe this was sporadic, maybe a one-off, but I started to realize, by trying really, really hard to prove otherwise, that deep learning really did offer this kind of step change in how we approached certain problems.
And maybe to be clear, I'm not one of those people who believes deep learning is some sort of universal magic panacea; I'm very anti that camp. But deep learning is particularly useful for these sorts of unstructured use cases: image, text, audio. It's an incredibly powerful tool that allows us to attack these use cases in a way that we fundamentally weren't able to otherwise. And for me, this huge language geek, once I flipped over to the other side, I was incredibly excited. I'm like, wow, these are amazing tools. I'd wanted to know how to approach these sorts of problems for so long. So we said, okay, let's try to put this into the real world. Let's try to get some people to pay us for a couple of these projects. And what we found really, really quickly was that while academically this technology was amazing, there were some massive barriers to actually bringing it into production in an enterprise environment. Even when you had a lot of hardware at your disposal, it was still very, very difficult. That really was the genesis of Indico: this question of, can we solve the problem of making this new technology more accessible? Now, we've been in business for close to a decade at this point, so there have been a few evolutions in the vision over time. In the early days, the most ambitious version of this we could possibly imagine was making APIs for developers. We said, okay, you don't have to be a PhD, you don't have to be Alex Krizhevsky, to do this; you just have to be an ordinary developer. And over time, if you look at Indico today, we've really increased our ambition quite significantly. I would say that the way we're trying to solve this problem today is empowering even a non-technical user to take control of this technology in a way that is transparent and empowering.
And I don't think pretty much anyone else is actually focused on that problem: not just, how do I deliver a use case to these people, but how do I actually make this technology usable and transparent to someone who doesn't have any technical foundation for it? For a quick history lesson, Slater, would you mind explaining what happened that kind of unlocked deep learning, and made that initial comment you made, that the war was over and deep learning had lost, so wrong? It's a really great question. And it's a couple of things that coincided all in one moment. But maybe first, to paint what the situation was in 2012. In 2012, deep learning, which in some senses is quite an old technique (the first basic perceptrons were introduced in, I want to say, 1955, and in some people's minds deep learning is still those perceptrons), had a lot of its basic concepts around for a very, very long time. But for various reasons, and it's actually a host of reasons (part of this is computation, part of it is the algorithms weren't very good, part of it is we weren't really sure how to benchmark these techniques), deep learning just didn't work, in all of the ways that we measured AI. It was compute expensive, it was very power hungry, it was not data efficient, and it didn't even get you to a good place in efficacy. And it was so unpopular that there was a series of papers, mostly coming out of MIT, about how deep learning is so stupid, deep learning is the worst. That kind of happened all through the '70s and '80s. All of this got to the point where it was so unpopular that there were really only three researchers and research groups in the world still interested in deep learning in the early 2000s.
And this was the University of Toronto, the University of Montreal, and NYU; those researchers went on to become the heads of Google's and Facebook's AI programs. But that was really the environment we were in. This was some really weird niche area: people had been working on deep learning for a long time, no one had been able to get it to work, and we had these other techniques that, in the way we tested them, seemed to work much better. So why would I even spend the time to learn about deep learning? That's a dead end in my mind. But these three folks, and that's Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, it turns out they knew better than everyone. They kept working and working and working, and it was a series of incremental algorithmic improvements over time, incremental steps, a lot of which we don't even still use today, but they were really important in getting these first networks working. And then one of the other really big breakthroughs was the GPU, which more or less overnight (though it took a long time to figure out how to actually get your code onto the GPU and run these models) changed the picture. There's this really nice quirk of math, which is that the math you need to do for deep learning is really, really close to the math that GPUs are very good at. So you got this 30-40x increase in compute power in a really short period of time. And in an era where Moore's Law has been over for quite some time now, you're not seeing those kinds of bursts in compute power happening anywhere else. It turns out that compute really was the bottleneck for a lot of the historic deep learning techniques: we just didn't throw enough compute power at them, we just didn't actually give them a shot to work. And the big turning point there, which is why I keep talking about 2012, is the AlexNet paper.
Alex Krizhevsky, in 2012, had this deep learning entry to ImageNet that just mopped the floor with the competition. And a handful of years later, deep learning was so good at that competition that we had to discontinue it, because we were like, we're so good that it's not even useful to look at this as a metric anymore. It was like when they blocked Jordan from participating in the dunk competition. Yeah, it's just like, look, it's not going to be interesting if we look at it this way. We're too good. Exactly. So what is the application, then, of deep learning that Indico is using? Yeah, so our focus really is on this human-machine interaction. Our philosophy is that deep learning really is critically useful in these unstructured use cases: images, text, documents, audio, and things along those lines. And maybe just to take the example of contract analysis: there are a couple of things going on there, but one of the things that's really key is that when you think about the end use case for a contract, there are going to be a lot of different ways you use a contract in your company. I might be checking the terms before approval to say, hey, yeah, this is good to sign. I might have to do an audit of all my historic contracts to figure out, hey, where do the data rights look like X, Y, or Z? Maybe I didn't think very much about that at the time. There are all sorts of reasons you might be sifting through a contract.
Now, the problem is that in all those processes today, the way they're traditionally done, it's kind of like you fill out a spreadsheet. You've got very little transparency into it. Probably Bob and Jill and Sue, all the people doing this process, are each doing it in a slightly different way, and they're doing it in this extremely manual way. Even though deep learning theoretically works really, really well for these kinds of problems, there's no way for them to actually get this sort of intelligent AI assistance in the work they're doing, even though it's theoretically a really good fit. And that's really the gap that Indico fits into. The analogy we use is that it's almost like an AI-powered bionic arm for the knowledge worker. Basically, you load your data in, whether that's contract documents or pictures of trash or tweets, and you've got a particular task that you're trying to do, probably extracting some data or routing some data; you've got a lot of different things you can do in the platform, but some specific task. Then in Indico, you basically start doing that task, and in the background we've got a deep learning based model (we'll talk about it in a little more detail) that is basically watching what you do and learning, in a lot of ways: okay, how should this be done? How do I drive consistency here? And how am I then going to help you do this in a much more efficient and effective way, and obviously try to take as much of that work off your shoulders as possible? Is that what is meant by, quote unquote, solving the problem of unstructured data? That's exactly what we mean. When you look at how people traditionally break up the unstructured data scheme, it's like: I'm company X, and I want to build a sentiment analysis engine.
That's how people traditionally think about these unstructured use cases: highly, highly fractured. The question of exactly how you're going to get after that is actually pretty multifaceted, even though sentiment analysis is the most boring, basic hello-world kind of thing you can think of. It turns out everyone actually thinks about sentiment analysis a little bit differently. And believe it or not, if you're thinking about building any unstructured use case today, just because of the traditional requirements (the amount of data you've got to label, the number of services you've got to plug in, you've got to have a data science team, you've got to have this whole compute infrastructure), it actually costs about $10 million for an organization to get one use case into production at a relatively large company, which is kind of nuts if you think about it. Now, to Google, where every incremental bit of accuracy on AdWords targeting is just printing money by the truckful, that's totally fine. $10 million is absolutely nothing to them; even blowing a billion dollars to get a couple dozen of the world's best researchers is an incredibly profitable, brilliant decision for them. That's not the case for most organizations. The vast majority of use cases are not these two or three at the top that print money, like ad targeting for Google and Facebook. And so everyone after that has to approach AI in a fundamentally different way. So really, what we're doing from a structural perspective, if you will, is taking that average cost to build out a new unstructured use case down from $10 million, in this old, highly manual way of doing it, to around 100 grand. A huge part of that is that we need less data. A lot of it is that we're taking care of the MLOps piece as well.
Obviously, you've got to have data annotation and things like that integrated into the product in order for that to work. But that's the net-net, really, of what our customers are getting.


Rob Stevenson  13:16  

Is this downsizing, shall we say, in needs of investment and needs of hardware indicative of where the space is going? When I look at the way other sorts of technologies have progressed, a computer used to be the size of a racquetball court, and now it's in my pocket; there has been this pressure to make things smaller, more affordable, more accessible. Do you think that is a necessary outcome for AI technologies, or will there always be some need for huge amounts of investment and huge amounts of resources at your disposal?


Slater Victoroff  13:47  

It's interesting in AI, because I'm sort of of two minds; I think the answer is sort of yes on both sides. On the one hand, there is a lot of research being done around how you can make these models more efficient. On the other hand, when you look from the perspective of what is actually coming out of the vast, vast, vast majority of companies out there, it's this big singular-model mentality. You really don't see people dealing with the problem of, how do I deploy 1,000 concurrent models? It's, how do I deploy one model and have it process all of the internet's content? And I think that's actually really interesting; to your point, it's really different from how a lot of other fields have been going. While AI is becoming more accessible, I think the heights are also growing in AI. So what I would say has happened, just practically, and this is where things sit today (I think they've probably sat here for the past two or three years, though that very well could change), is that we've adopted a consistent cost of an experiment. We're sort of like, okay, we are okay spending somewhere between two weeks and two months letting this model train, and then you back-solve: okay, what's the size of our data center? And you're like, all right, this is the size of the model we're going to make. So it's almost the inverse today. You do see a really big consolidation. OpenAI, I think, is a great example, where you're building these really massive data centers to train up absolutely stupid-sized models, really to understand what happens when you do make things this much bigger. I don't want to keep going in that direction. Certainly, making models bigger makes them better; you can't argue with that, it is simply true.
But I'm also a big believer in the lottery ticket hypothesis. The lottery ticket hypothesis basically says that the reason we're getting this incremental improvement from these bigger models is not that having a bigger model is fundamentally better, but rather that a bigger model is basically allowing us to randomly choose among more smaller models. And that kind of implies that there are better ways of creating these, of initializing these, and we just haven't necessarily figured them out yet. The smaller model was out there to be found; it just took more resources to find it. Exactly right. And I think that not that many people are focused on that problem today. People are aware that it is a problem; obviously, Indico is extremely focused on it, and I hope more people continue to be focused on it in the future. But I would say the tide has not yet swung very far in that direction. Maybe the world out there is still 80% in this $10-million-per-project, sentiment-analysis-is-crazy-technology sort of worldview, and maybe now 20% of us have gotten over into this Indico mentality of, you can build these out for 100 grand. And we're obviously trying to get that even lower, trying to make that easier. But right now, when you look at a lot of the rest of that 20%, a lot of people are still struggling quite a lot to even execute at that 100-grand price point, to the point where the failure rate you see in our space right now, or rather, I'll say the success rate of getting into production and actually hitting success metrics, is 11% across our industry. Really bad. At Indico, we're actually at a 97% success rate in production, so we're very, very happy about that.
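The lottery ticket idea Slater describes can be made concrete with a toy sketch of magnitude pruning, the mechanism those experiments use to expose a small subnetwork hiding inside a big one. Everything below is illustrative, not Indico's code: in the actual hypothesis the winning mask is found by training, pruning, and rewinding the surviving weights, which this sketch elides.

```python
import numpy as np

rng = np.random.default_rng(1)

# A "big model": one dense weight matrix standing in for a trained network.
big_model = rng.normal(size=(64, 64))

def magnitude_prune(weights, keep_fraction):
    """Zero out all but the largest-magnitude weights, returning the
    sparse weights and the boolean mask of survivors."""
    k = int(weights.size * keep_fraction)
    threshold = np.sort(np.abs(weights).ravel())[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Keep only 10% of the weights: the sparse "ticket" that, per the
# hypothesis, the big model was implicitly sampling for us.
sparse_model, mask = magnitude_prune(big_model, keep_fraction=0.1)
print("weights kept:", mask.sum(), "of", mask.size)
```

The point of the exercise is the direction of the claim: the sparse network was already there, and the oversized dense model is mostly a search procedure for finding it.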
But I think it goes to show that it's a new market, people are taking a while, it is difficult to make this technology more accessible, and there are not as many people working on it as I'd like. What does that number mean, the 11%? Like, one, where does that come from, and two, what does that represent? Is it that 11% of the time there's an actionable outcome from training through to production, or what does that refer to? It is pretty outcome oriented. Maybe first, that number actually comes from a couple of different spots; it's confirmed, plus or minus a couple percent, between Gartner and Forrester and HFS and a couple of other analysts. So as disheartening as it is, it's pretty well validated. And really, what it measures is the alignment between the initial expectations that are set for the project and what is eventually achieved in production. Often that comes down to some efficacy rate; usually it has more to do with an ROI than anything. But I guess the ultimate judge is whether the thing stays in production, and whether the people who were part of it leave it in production and say that it achieved its desired goals. So really, what they're saying is that roughly 90% of projects are not hitting that pretty basic metric. And to break it down a little further, something like 60 to 70% of projects are failing before they hit production at all, and the remainder are getting into production and then failing at some point in production.


Rob Stevenson  18:43  

Okay, got it. This barrier to entry, then, this high price point and this high likelihood of failure, means that AI is really only happening in exclusive, well-funded areas that can afford to fail. And in the same way as, again, with the racquetball-court-sized computer, it was a non-trivial thing to assemble that and to have processor power; now processor power is somewhat of a trivial thing. So what is the commoditization that needs to happen in this field for opportunity to be more widespread? Is it data?


Slater Victoroff  19:16  

I think that's a part of it, but I think it's not sufficient. Data is maybe the easy answer, because certainly there is a data future I could paint that would solve the problem. Yes, if there were widely accessible data, in gobs and gobs and gobs, for all of these things, that would be one thing. But I unfortunately don't think that's very practical; solving this by generating more data is just not how I view it. And here's what I actually think is very interesting: in pretty much every industry, compute is like a non-factor now. The exception is AI, though, and this is something that I think a lot of people don't recognize. I would say that by and large, AI today is not data-limited; it is compute-limited. It's the only field in software, I think, where you can say that. Even me as an individual, on my laptop, I can pretty trivially collect more training data in a couple of days than I could even use in a training regimen. It's like I can, and have, created datasets larger than Microsoft could use to train their models, again, in a couple of days from a laptop. So there's enough data out there; we've got the internet, and the internet has a lot of data, and it turns out it's pretty accessible. But everything we do is compute-limited, first and foremost. And the second piece, I would say, is that we are supervision-limited. It's really a question much more about business and non-technical processes: defining, what does success look like? How are we actually supposed to do this process? And it turns out that's actually a very hard thing to do.
And I would say that once you're in a situation where you've resolved compute, supervision becomes the limiting factor, and then data sort of solves itself if you can get those two done. Would resolving compute remove the need to have clean, annotated data, or does it just assume perfectly clean, annotated data? There's no situation in which you can remove the need for clean, annotated data. But let me refine that a little bit, because I think a lot of people have some mistaken assumptions there. The way that I would analogize AI is that you are programming with data. The thing is, there is no programming language out there that is going to mean you don't have to write code. That's kind of what I hear when people ask that question, like, do we ever get to not annotate? It's like, no, that doesn't make sense; you're kind of missing the forest for the trees. And I do get that people have this notion of, oh, I just want it to magically, autonomously work and do the stuff. But I think that's not quite the right way to think about it. If you think about it much more from the perspective of "I am programming with data," there are a couple of pieces missing compared to the programming experience we have today, and that's why AI really does end up failing in a lot of cases. First and foremost: if I'm programming with data, I need to be able to debug that data. I think a lot of people have this notion that they're going to go out and get some pristine, labeled, annotated data on the first shot, that it's all going to be right, and they're just going to train their stuff. It doesn't work. There's no possible world in which that works, any more than you're going to write your program and have it run with no mistakes the first time.
So I think what people miss is, A, it's much more important to build tools around curating the data you've got than to go out and just get gobs and gobs and gobs of it; you've actually probably got enough data. And that's the second point: I think people try to combat quality with quantity, and that's actually a really bad strategy here. A lot of people misinterpret a lot of the research out there and say, if I've got noisy annotations, it's okay, so long as I've got a lot of them. That's not quite right, because your noisy annotation is still a definitive assessment of truth. You're giving instructions in your messy data, just like you're giving instructions in code, and having more code doesn't give you a better program. Instead, it's this notion of, okay, how do I actually come up with an ML scheme where I have a terse language, so I need as little data as possible to define it, and where I've got these really rich tools to debug the data that I've got, and make sure that I'm actually constructing this, building this program, in the right way?
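Slater's "debug the data" point can be sketched in a few lines. The document IDs, labels, and `review_queue` helper below are hypothetical, purely for illustration: the idea is simply to surface low-agreement examples for human review, rather than drowning the noise in more annotations.

```python
from collections import Counter

# Hypothetical annotation log: each example labeled by several annotators.
annotations = {
    "doc-001": ["positive", "positive", "positive"],
    "doc-002": ["negative", "positive", "negative"],
    "doc-003": ["neutral", "neutral", "neutral"],
    "doc-004": ["positive", "negative", "neutral"],
}

def review_queue(annotations, min_agreement=1.0):
    """Return (doc_id, majority_label, agreement) for every example whose
    annotator agreement falls below the threshold."""
    flagged = []
    for doc_id, labels in annotations.items():
        majority_label, top_count = Counter(labels).most_common(1)[0]
        agreement = top_count / len(labels)
        if agreement < min_agreement:
            flagged.append((doc_id, majority_label, round(agreement, 2)))
    return flagged

for doc_id, majority, agreement in review_queue(annotations):
    print(f"{doc_id}: majority={majority}, agreement={agreement}")
```

The "terse language" part of his argument is the complement: the fewer labels the scheme needs, the more affordable it is to make each one of them trustworthy.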


Rob Stevenson  23:40  

So this notion of perhaps prioritizing data, less being more as long as it's cleaner, better annotated, et cetera: this was originally a computer vision problem, correct, in terms of determining which data to prioritize? Is that still the case?


Slater Victoroff  23:55  

The most influential paper for me, I think, in my whole life came out in 2014, called "Visualizing and Understanding Convolutional Networks," by Zeiler and Fergus. I probably shouldn't say that, you know; Clarifai, they're one of our competitors now, and he's the CEO. But that paper was where we got turned on to this notion of transfer learning. This is convolutional neural nets, so obviously it's in the computer vision space, and it was where I think we got really turned on to this notion that you can start training a model with very big, broad data, where quality is actually not so important, and that then allows you to layer very, very clean guiding data on top of it. They had this amazing result with just six examples each, I think, of cats and dogs; they were able to set a state of the art in telling the difference between cats and dogs with 12 examples in total, which was just eye-popping. And I think the other thing they really dispelled, which becomes key to this whole notion, is the idea that deep learning is a black box and is unexplainable. Back in 2014, this paper was sort of the definitive guide to debunking that notion, because they basically slice up the network in every possible way you can conceive of. They're like, all right, what part of the image is it paying attention to? How is it digesting that? What is it good at recognizing, what is it bad at recognizing? They're like, all right, neuron 52846 is the one recognizing this variant of beagle face. Just incredible, incredible detail. And what they really turned us on to was the notion that maybe this was generically applicable. Again, in 2014 this was a very cutting-edge notion even in computer vision.
I think that even as of 2018 or 2019, if you were building a computer vision model that wasn't fundamentally based on these transfer learning techniques, you were pretty far behind the times. But it's only really today that that has even moved over into the natural language processing space. BERT came out, and GPT; these are all transfer learning techniques in language, in the same vein that you originally saw in computer vision back in the day. It took years, in a space moving at lightning speed, to really be able to move from image over to text, but that really was where the inspiration came from. And now there's a very interesting additional evolution happening on top of that, where people are asking this question in documents, for instance: when I've got a complex table, or maybe I've got an appraisal of a house with images of rooms in there, I really want to be able to think across this visual information and this textual information. That's very natural for humans, so it makes sense that you'd want to do it. That's really this next frontier; this is where transfer learning is going next, this idea: can I actually take visual information and language information, understand them together in a comprehensive way, and then give you one interface to learn on top of that kind of consolidated understanding of the world? So this shift, I guess, or refocusing onto transfer learning, has that affected your own approach at Indico? Transfer learning, when we were starting out, was Indico's big bet. We were like, transfer learning is going to be the thing; trust us, this is where it's at. I think, thankfully, we were right. I don't think anyone would question that we were correct on that.
But I think one of the things that's really interesting is that it's gone from sort of a buzzword into a really deep domain. It used to be this question: are you doing transfer learning, or are you not doing transfer learning? Now everyone is doing transfer learning in some way, but it is as much art as it is science. That's often how I describe it: we have gone from asking the question "is transfer learning possible?" to asking the question "what does it take to be the best in the world at transfer learning?" And I think academia has shown it's a deep enough space that that makes a lot of sense. The two things that have really come into crisp focus for us around that are, one, machine teaching, which we've kind of implicitly been talking about this whole time. How do you supervise these models differently? How do you more closely align supervision with the way that humans teach each other? Because I think everyone would agree that that's better. And then the other one is multimodal fusion, which we were just talking about, which is this question of, okay, how do we actually combine different modalities like text and image? Again, it's all about letting the human train that thing more easily; it's all about putting more control in the hands of the people. Explainability is useful insofar as it allows you to have control. And so we believe very much in tying together those notions: if you've got a notion of explainability, you've got to have some control panel for that on the back end as well.
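The multimodal fusion idea mentioned here, one interface learning over text and images together, can be sketched as simple late fusion: embed each modality separately, then concatenate into a single joint vector that one downstream model trains on. This is an illustrative toy, not Indico's implementation: the hashed bag-of-words text encoder and the frozen random-projection image encoder are both stand-ins for real pretrained encoders.

```python
import numpy as np

def embed_text(tokens, dim=32):
    """Stand-in text encoder: hashed bag-of-words into a fixed-size vector."""
    v = np.zeros(dim)
    for t in tokens:
        v[hash(t) % dim] += 1.0
    return v

def embed_image(pixels, dim=32):
    """Stand-in image encoder: frozen random projection of raw pixels."""
    W = np.random.default_rng(1).normal(size=(pixels.size, dim))
    return np.tanh(pixels.ravel() @ W)

def fuse(tokens, pixels):
    """Late fusion: one joint vector a single downstream head learns from."""
    return np.concatenate([embed_text(tokens), embed_image(pixels)])

# One "appraisal" document: some text plus a fake room photo.
rng = np.random.default_rng(0)
doc = fuse(["appraisal", "kitchen", "granite"], rng.normal(size=64))
print(doc.shape)  # a single 64-dim representation spanning both modalities
```

The design choice being illustrated is that everything downstream (the classifier, the human supervision, the explainability tooling) sees one consolidated representation rather than two separate models to teach.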


Rob Stevenson  28:40  

Slater, we've covered so much ground here, and I want to cover just a little more before I let you go. I want to indulge the 14-year-old version of you that first started to get really excited and nerd out about this space. When you take stock of where AI is now and where some of the growing opportunities are, what gets you really excited about what we can see from this space in the short to medium term?


Slater Victoroff  29:05  

Well, maybe one thing I will say to all the 14-year-olds out there is that when I was 14, I didn't know how to program a line of code. I did not know what AI was. So if you're 14 and you don't know what you're doing, that's fine. I'm much more excited about AI now than I was at 14, which I will take as a good thing; actually, I think that's probably a plus. But what I'm intensely excited about today: machine teaching is a big piece, multimodal fusion is obviously a big piece, transfer learning is a big piece, but to me it is all in service of this question of how you solve the human-machine interaction problem. We've gotten to the point where the algorithms are capable of things that were just impossible even a short amount of time before, and really the research question has pivoted around this notion of: how do we expose this to humans? How do we decompose problems in a way that is both straightforward for a human to understand and control, and also really effective for ML algorithms to learn from on the back end? It's a great example of something that, a couple of years ago, we were so far from being able to do that it didn't even make sense to ask the question. It was like, this is crazy, come back later. But today, we've got the signs that show this is totally possible. We've just got to do it.


Rob Stevenson  30:20  

I love that, Slater. This has been fantastic. Thank you for being here and sharing your expertise with me today. I've loved learning from you.


Slater Victoroff  30:25  

Thanks so much for having me. It was a total pleasure.


Rob Stevenson  30:33  

How AI Happens is brought to you by Sama.


Rob Stevenson  30:38  

Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture.


Rob Stevenson  30:56  

For more information, head to