VP of Technology Ben Upcroft explains Oxbotica's vision of "Universal Autonomy", and the process of developing software that enables vehicles to seamlessly transition between on-road and off-road environments in a safe and autonomous manner.
Oxbotica is a vehicle software company at the forefront of autonomous technology, and today we have a fascinating chat with Ben Upcroft, the Vice President of Technology. Ben explains Oxbotica's mission of enabling industries to make the most of autonomy, and how their technological progress affects real-world situations. We also get into some of the challenges that Oxbotica and the autonomy space, in general, are currently facing, before drilling down on the important concepts of user trust, future implementations, and creating an adaptable core functionality. The last part of today's episode is spent exploring the exciting possibilities of simulated environments for data collection, and the broadening of vehicle experience. Ben talks about the importance of seeking out edge cases to improve their data, and we get into how Oxbotica applies this data across locations.
Tweetables:
“Oxbotica is about deploying and enabling industries to use and leverage autonomy for performance, for efficiency, and safety gains.” — @ben_upcroft
“The autonomy that we bring revolutionizes how we move around the globe, through logistics transport, on wheeled vehicles.” — @ben_upcroft
“The idea behind the system is that it is modular, enables a core functionality, and I am able to add little extras that customize for a particular domain.” — @ben_upcroft
EPISODE 31
[INTRODUCTION]
“BU: All vehicles, anywhere, in any place, in any location, in any environment should be able to leverage autonomy. And we've designed autonomy in a way that is agnostic to the hardware. So that's the vehicles, the sensors. Agnostic to the types of environments that we're operating in, be it off-road or on-road.”
[00:00:25] RS: Welcome to How AI happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers, as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson. And we're about to learn how AI happens.
[INTERVIEW]
[00:00:54] RS: Joining me today on How AI Happens is the VP of Technology over at Oxbotica, Ben Upcroft. Ben, welcome to the podcast. How are you today?
[00:01:02] BU: I'm great. Thank you very much. Thank you for having me on the show.
[00:01:05] RS: So pleased to have you. I'm at the beginning of my work/podcasting day here in Denver, Colorado. You're at the end of your day, toiling away on artificial intelligence in Oxford in the UK. What was your day like? What's a typical day of work for you?
[00:01:20] BU: I think toiling might not be exactly the right word. It's actually super fun. I'm really privileged to be in such a position to be able to play with autonomy, play with robotic vehicles, being in the age today where we see artificial intelligence and machine learning come to our roads, come to our mines, come to the domains that we're operating in. So it's the opposite of toiling. Absolute fun, enjoyment. Of course, it has its challenges. But it's an area that I've always wanted to be in.
[00:01:46] RS: Well put. I love that. And I love hearing that you do have that joy still for the work. In your position, is your day a lot of meetings? Do you get to tinker a little bit with the technology? Are you in the codebase? What kind of fills up your time?
[00:01:59] BU: I try to do all of it. It's a bit of both, bit of meetings. We've got a large team of developers, technicians, engineers, that go from everything from algorithmic development, all the way to deploying on to our vehicles and making sure our autonomy works with our vehicles and fleets and our customers. So meetings is a part of it. But you know what? I learned a whole bunch from all those meetings. I learned about things that I'd never be able to do alone. It scales in such an amazing way.
I get to jump into the codebase. I get to jump into different aspects of our codebase, everything from C++ development on vehicles, to Python machine learning off-board. So it really is a really lovely blend of being able to do both the management side of things and the technology as well and pull that all together. That's part of – I think, the challenge that I have personally, is how you bring really complicated problems like autonomy together with humans and the complications that you have with sets of humans and making all that work together to a common direction and a common goal.
[00:03:01] RS: Yeah, that's fascinating. I want to get into your role a little bit more. I guess, first, maybe it would do to explain Oxbotica a little bit, because it's a really exciting company with an ambitious and fascinating vision. Would you mind explaining a little bit about the company and what you all are trying to accomplish out there?
[00:03:20] BU: Yeah. Oxbotica is about deploying and enabling industries to use and leverage autonomy, autonomy for performance, for efficiency and for safety gains. We're not a vertical in which we're trying to do the whole industry. We're not just a robo-taxi company. We're not just a mining company. We're not just a grocery deliveries company. What we do is provide the autonomy that enables those industries to use that system and leverage what I just talked about. It's a software platform that allows for full autonomy, everything from the sensors, all the way to driving through the world. That world could be on-road. It could be off-road. On-road in an urban environment for taking passengers via shuttle, or taking groceries from a warehouse to your door. Doesn't matter what we're delivering. We don't mind if it's a human or milk and the groceries that you have.
We take that a little bit further. We take it onto off-road, where you're delivering rocks in mines. Point A to point B, it doesn't matter to us. That's about how our autonomy platform allows us to be agnostic to the domain, to the vehicles that we're working with, to the sensors that we work with. So it really is this ability to be able to use autonomy in any of these industries to leverage what we've got, Oxbotica inside.
[00:04:40] RS: Got it. So you just said not just robo-taxi. Is it still in the domain of vehicular transport? Is that kind of the main application, though? It's like to be used, basically, in logistics? In the movement or transport of something? Would you say it's still under that umbrella?
[00:04:55] BU: Absolutely. And we see that the autonomy that we bring revolutionizes how we move around the globe through logistics transport on wheeled vehicles.
[00:05:04] RS: And is that because of improvements to efficiency and safety? What is responsible for the revolution?
[00:05:10] BU: Where do I start there? So efficiencies. Efficiency is huge. There's safety as a part of it. Look at what we can do to help with safety in all the different domains that everyone's really worried about: mining, industrial domains, refineries, solar farms, airports, ports, on-road. The number of accidents that happen can be dramatically reduced through autonomy. The autonomy allows a whole bunch of advantages in that space. It's always active. It's always understanding what's going on in the world. And it's got this 360-degree panoramic view coming in at a very high rate. Instead of a human looking at tiny little mirrors, tiny little squares every now and then to be able to get that kind of scene understanding, we're doing that at such high frame rates with such sophisticated sensors that it enables this unprecedented capability for safety, for efficiency and for performance.
If you look at some of the domains that we're talking about, they're designed around being able to move as much as you can with one driver. What if you decided to change that and that driver wasn't part of the whole equation? That driver actually might be part of an equation that allows many, many vehicles to drive from a remote location. Suddenly, you've taken out a whole big chunk of how a vehicle is designed, how it's built. Because I've changed the shape of the vehicle, I might change the routes that that vehicle operates on. They might be smaller. They might be modified in a different way.
Then I start changing what that domain is about, how we design those domains. It could be a city. It could be an urban environment. It could be an airport. A lot of what we design is around how we move things around the world. The implications of autonomy, I think, are just so expansive. I can't wait to see what happens in the coming years.
[00:06:54] RS: Yeah. The limitations of having a human driver, they are myriad. And I think the mind naturally goes to the design of the cabin in a car: the seats are facing forward, there's this driver's seat, right? You would remove that necessity and rethink the design. But there's other things, too. As you say, the movement of things from one place to another, logistics, it's easy to overlook this entire industry. But like, literally, every single item around me is the product of logistics. Like it had to move through space and time to get to me. So it touches everything.
My point, though, is that another example of a limitation, I spoke with an individual who was working on the Virgin Hyperloop. RIP. I do believe they've shut down now. But he told me one of the limitations was that it could only accelerate so quickly, because it can't be designed to accelerate faster than a human body can withstand, I guess, inertia. And so it's like, yeah, we can't go this fast, because a human body would be turned to mush in the cabin. But freight doesn't have that limitation. So you could conceivably move things a lot faster, too, if you just get rid of the need to account for a human body.
[00:08:05] BU: Yeah. So safety and comfort kind of a balance, right? When you've got a human in the loop. Once comfort isn't required, you need to, of course, make sure that the system is safe and safe for everyone around it. But the ways that vehicle can move and maneuver, even stop and wait for things to happen and allow the world to pass by for a little while is so much less important now than for a human that has to – Is on the clock, that kind of thing.
But now, take deliveries, for example. I no longer have to worry about the time that a human is in there, having to understand that they've got a 12-hour shift and they've got to make sure that this many deliveries happen. I think the way we think about that changes as well because of how you can move and maneuver a vehicle without humans. And of course, I have to worry about eggs in grocery deliveries.
[00:08:57] RS: That's a great point. So I do enjoy this utopic vision of a much more efficient, much safer, fully autonomized vehicular transport world. What do you think are some of the limitations preventing us from being there? What's kind of holding back autonomy from being more widespread?
[00:09:16] BU: I think there's a few aspects. It's not just about creating autonomy. I think we're seeing autonomy. We're able to drive out on roads now, public roads. We're able to do this autonomously. We're able to do this in domains, in airports, ports, mines, refineries. We're able to do this autonomously now. It's around how you create an ecosystem and the confidence in that ecosystem to start swapping over to autonomy and using those benefits, leveraging those benefits. That ecosystem includes regulatory bodies. Ensuring that we have the policies in place, the legalities in place, the regulations in place, council buy-in, government buy-in for what autonomy can do. And that they're confident that it is safe, and more than likely safer than what currently happens now.
[00:10:00] RS: Is that safety currently a limit of technology? Or do you think it's more public perception and trust?
[00:10:07] BU: It's around public perception and trust, and putting in place the evidence that you can build that trust on. The safety that's needed is at multiple levels. It comes from not just being able to show that your vehicle is autonomous and can drive in this particular domain. You have to demonstrate. You have to have strong conviction and evidence around how your system is designed. This is a new technology. People have to understand that you've thought through carefully how you've architected such a system. And what I mean by that is, you need to be able to show that you have redundancy and independence in a whole bunch of pathways of your system. From the sensors, I must have multiple sensors of the same type. So if one goes down, I've got others to back that up. I must have multiple pathways in my algorithmic and computation path.
Why I need to do that is I need to be able to check and validate my system and my algorithms online, and be able to test if something has gone wrong, and be able to flag that so that I can allow my vehicle to either operate accordingly or go into a maneuver that allows me to ensure that it's safe.
We've created a system that allows us to do that all the way through, from sensors, through algorithmic pathways, to what we ultimately provide as a trajectory that the vehicle follows. You also need to have all the evidence that backs that up. And we're one of the few companies in Europe that have demonstrated this and worked with the government to be able to show this evidence through some of the standards that are now coming out in a global sense.
So safety is being built into our design and in our architecture right from the start. It can't just be an add-on. It can't just be something that I think about later. And you have to demonstrate that. You have to be able to demonstrate that to the regulatory bodies, but also to the public and make sure that there's trust from the public.
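As an illustration of the redundancy and online checking Ben describes above, here is a minimal Python sketch. It is not Oxbotica's code; the pathways, signals, and threshold are hypothetical stand-ins for the idea of two independent estimates being cross-checked online, with a fall-back to a minimum-risk maneuver when the check fails.

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    speed_mps: float   # estimated ego speed from one pathway
    valid: bool        # did this pathway produce a usable output?

def pathway_a(lidar_frame) -> Estimate:
    # Hypothetical pathway 1: speed from lidar scan matching.
    if lidar_frame is None:
        return Estimate(0.0, valid=False)
    return Estimate(speed_mps=lidar_frame["matched_speed"], valid=True)

def pathway_b(wheel_odometry) -> Estimate:
    # Hypothetical pathway 2: speed from wheel odometry.
    if wheel_odometry is None:
        return Estimate(0.0, valid=False)
    return Estimate(speed_mps=wheel_odometry["speed"], valid=True)

def cross_check(a: Estimate, b: Estimate, tolerance_mps: float = 0.5) -> bool:
    # Online validation: both independent pathways must be alive and agree.
    return a.valid and b.valid and abs(a.speed_mps - b.speed_mps) <= tolerance_mps

def select_trajectory(a: Estimate, b: Estimate) -> dict:
    # If the check fails, degrade to a minimum-risk maneuver rather than
    # trusting either pathway on its own.
    if cross_check(a, b):
        return {"action": "continue", "speed_mps": (a.speed_mps + b.speed_mps) / 2}
    return {"action": "safe_stop", "speed_mps": 0.0}

if __name__ == "__main__":
    lidar = {"matched_speed": 5.1}
    odo = {"speed": 5.0}
    print(select_trajectory(pathway_a(lidar), pathway_b(odo)))   # continue
    print(select_trajectory(pathway_a(None), pathway_b(odo)))    # safe_stop
```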
One of the high-level things that we think about in terms of the trust and confidence for the public is, when people get in our vehicles, how quickly they start talking about something different to autonomy, because it's an exciting thing right now. Like, to jump in a vehicle that doesn't have a driver, it's really cool.
And very, very quickly, what we look for is that passengers start talking about something else other than the autonomy system. And the time it takes for them to do that gives us an understanding of how well we're doing. From a high level, from a subjective point of view, but on average, it's about 10 seconds that someone's super excited about the vehicle. The vehicle takes off, and in about 10 seconds, they're completely bored. They're talking about something different because the vehicle is doing what it's supposed to be doing, what they expect, how they expect a vehicle to behave. And that's our kind of metric, that 10-second kind of rule. If it's any more than 10 seconds, what are we doing wrong?
[00:12:49] RS: Yeah. Yeah, that's interesting. Because when you get on an airplane, you don't look at the person next to you and go, “Can you believe we're about to fly through the air?” It's just normal. You just put on your headphones and you start looking for a show to watch or read your book, right? It's become normal. It's not amazing to get into a giant hunk of metal and soar through the air anymore, right? It's just become a standard part of transportation. And so we should hope for the same for autonomous vehicles, right?
[00:13:13] BU: To be honest, I still jump in a plane and go, “I know the physics. I know how it all works. And I'm still amazed that a big tube with some flaps at the side actually allows us to hover in the air, fly in the air.” So, I agree. I can't wait till we get there in the autonomous sense as well, where people just accept and assume that they're going to be picked up in an autonomous platform and that all their groceries are going to be brought to them in an autonomous vehicle. I don't think it's too far off.
[00:13:39] RS: It is good to keep that boyish wonder, I think, and not be too jaded on the amazing things happening all around us. On a related note, not understanding how this tube of metal works. Just side anecdote. An aeronautical engineering professor at Purdue told me, “Rob, never get in a helicopter. I have no idea how they stay up.”
[00:13:59] BU: They are remarkable. I agree.
[00:14:02] RS: So, yeah, terrifying and exciting in any case. I'm interested in the way Oxbotica is developing its fleet, for lack of a better word, of vehicles. Because as you say, it's not just about robo-taxis. There are all these other applications. It seems right now that the goal is to move the actual vehicle from A to B safely and autonomously. When it gets there, though, and perhaps it's not – in the example of a mining vehicle, or perhaps a construction vehicle, now this vehicle has to interact with its environment in an advanced way. Is that part of the roadmap for automation as well? So it's not enough to just get the crane to the construction site. But now we want it to be able to move and do the construction work on its own, too?
[00:14:43] BU: Yeah. To be able to do those endpoint type problems, absolutely. Now some are easier and some are harder than others. So the impact that you make right now might be different to the impact that you make with vehicles that are interacting with the terrain or other parts of the world.
So I guess right now we focus more on where we're doing less interaction with the terrain itself at the endpoints. But we also understand that you have to do that, for example, in mining. You need to be able to tip the dirt out at the endpoint. You need to accept that dirt from a shovel at the start point.
There are points which are customized, and that's where we use our ecosystem, our customers, our partners that we work with, to bring that kind of expertise in. We don't claim to be experts in mining. We don't want to be experts in mining. We don't claim to be experts in taxi logistics, or grocery deliveries. We create an ecosystem of partners that allow us to leverage the expertise of that domain that people already have been working in, know how to do that, and work closely with them to be able to ensure you can do those customized-specific parts of a particular domain or a particular industry requirement.
[00:15:53] RS: Yeah, that makes sense. Sorry, I'm kind of jumping around a little bit here. I'm really interested also in what you had said about it's not enough to develop the technology that works. You have to do it with safety in mind. You have this thoughtful approach on building trust with the public, on publishing data, on demonstrating exactly how this works. So the making sure that it works, the proving through all this documentation, engendering trust, etc., is one piece of it, perhaps two pieces of it.
There's also this notion, with any AI technology, about being thoughtful of how it's going to impact the world. Imagine that autonomous vehicles are everywhere. They're completely ubiquitous. That's going to have a significant change on cityscapes, on the way the world works when you step outside. Is that something that you keep in mind as you design this technology at Oxbotica?
[00:16:44] BU: Absolutely. At the moment, given autonomy and its nascency as a production system working with the world now, I think it'll require a demonstration of autonomy at scale before we start thinking about how we change the environment, change the cityscape, the minescape, the industry, wherever we're moving and using logistics. It'll take a while for that to happen. But I think that's when industry operators, city designers will start leveraging, “Oh, we've got autonomous vehicles that work now. How would I make that and leverage that for how I design my next town, my next mine?”
For example, if I could change the size of vehicles using autonomy, because I can take the driver out, for example. In a mine, in an open pit mine, which is a big cut mine that puts a massive hole in the world, I can change the road size. I can shrink that down. I can cut at steeper angles. And I can have a lot less footprint. The impact on the environment is a lot less. It's huge what we could do from just that alone.
In terms of cityscapes, do I need to park anymore? Maybe. Maybe not. It depends on what happens to the vehicles when they drop them off. They still have to drive somewhere. But parking is different. There's no doubt about that. How we store our vehicles is different. Does every single person have to store a vehicle anymore? Can we share vehicles? And that's a part of an investigation that's coming through some of these ecosystems that we've created, some of the operators that look at passenger shuttles, look at how they want the town to operate, the city to operate. So I think it's going to lag a little bit. It's not probably part of what Oxbotica does itself. But we are absolutely working with the ecosystems that we've created, the partners that we've created to enable them to leverage that.
[00:18:33] RS: Don't you think that when autonomous vehicles are more ubiquitous, that it would be too late? Don't you think people need to be planning for it right now?
[00:18:41] BU: I think we're always evolving. You're right. For a particular city that's locked in, we may not be able to change that. But I think we're always evolving. I think we're always thinking about how we can change the next design. I don't think we're ever static. So no, I don't reckon it's ever going to be too late. It just might be a bit slower in some cases.
[00:18:58] RS: Yeah, that's fair. So this application of your software to all manner of vehicles, and the way they will reinvent the design of those industries. And you gave the examples of the way that a mine is dug. Is this what you mean when you refer to a holistic view of automation? How does that play into the way you're designing your technology?
[00:19:18] BU: I was talking about universal autonomy. So a holistic view of autonomy is that we think that all vehicles, anywhere in any place, in any location, in any environment should be able to leverage autonomy. And we've designed autonomy in a way that is agnostic to the hardware. So that's the vehicles, the sensors. Agnostic to the types of environments that we're operating in, be it off-road or on-road. That universality part of it enables us to not have to change very many lines of code to be able to work in a mine, or a refinery, or for grocery delivery in an urban town. The way we've designed the software was that we didn't bake-in any assumptions from the start to say that we're only going to be driving on roads. That's okay to do. What we want it to be able to achieve is autonomy across all these different domains. So we didn't bake that in.
To give you a real example, if I had used lane markings as a way to be able to localize my system, that would not work. It would work perfectly on-road. It wouldn't work off road. There are no lane markings. Of course, we leverage lane markings. Of course, we use lane markings when we're there, but we haven't baked it into the base core system so that I rely on it.
So the idea behind the system is that it's modular. It enables a core functionality. And I'm able to add on just the little extras that customize for a particular domain. So if you were to come talk to Oxbotica today, you'd be able to jump in a vehicle on-road, you'd be able to jump in a vehicle off-road. And you'd have multiple platforms that you can do that on, and the code will be the same. There'll be no changes in that code.
We configure our platform to be specific for a particular domain and to take advantage of some of the scenes in those domains. But none of it is baked-in. None of it is locked-in. So the universality part of it is that we have created a system that easily allows us to scale across different environments, different domains, different vehicles, different sensors.
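To illustrate the "core plus configurable extras" idea in code, here is a hypothetical Python sketch; none of these names or scores come from Oxbotica's stack. The same domain-agnostic cues run everywhere, and a lane-marking cue is switched on purely by a per-domain configuration, so nothing about roads is baked into the core.

```python
from typing import Callable, Dict, List

# Hypothetical localization cues. The core cues work in any domain; the
# lane-marking cue is an optional extra that only helps on-road.
def geometry_cue(scan: Dict) -> float:
    return scan.get("geometry_score", 0.0)

def appearance_cue(scan: Dict) -> float:
    return scan.get("appearance_score", 0.0)

def lane_marking_cue(scan: Dict) -> float:
    return scan.get("lane_score", 0.0)

CORE_CUES: List[Callable[[Dict], float]] = [geometry_cue, appearance_cue]

OPTIONAL_CUES: Dict[str, Callable[[Dict], float]] = {
    "lane_markings": lane_marking_cue,
}

def build_localizer(config: Dict) -> Callable[[Dict], float]:
    """Assemble a localizer from the core plus whatever the domain config enables."""
    cues = list(CORE_CUES)
    for name in config.get("extras", []):
        cues.append(OPTIONAL_CUES[name])

    def localize(scan: Dict) -> float:
        # Toy "confidence" score: average of the enabled cues.
        return sum(cue(scan) for cue in cues) / len(cues)

    return localize

# Same code, different configuration per domain.
urban = build_localizer({"extras": ["lane_markings"]})
mine = build_localizer({"extras": []})

scan = {"geometry_score": 0.9, "appearance_score": 0.8, "lane_score": 0.95}
print(urban(scan), mine(scan))
```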
[00:21:21] RS: Okay, fascinating. So forgive my naivete, because I'm neither an engineer nor a technologist. So the idea here is that it is more about processing data from sensors as it comes in, as opposed to any preconceived determinations based on training data. Is that fair to say? Like, if something can go on-road to off-road, that means that there's more training that has happened, right? A larger data set. But the idea is that it can shift seamlessly that if you – For example, like some autonomous vehicles, if they get off of a highway, they're broken, right? Like they can't even handle suburban streets. But the idea would be for your vehicles to go anywhere.
[00:22:01] BU: Exactly. And the way they can go anywhere is that we use generic features, either learned or handcrafted, to be able to input into the different types of algorithms. And we have multiple algorithms. Some are traditional, or modern optimization techniques, and others machine learning, deep learning techniques. Some need training. Some don't. What that means is I'm able to go into any domain and I can use the information that I'm getting online to help me in the future and just get better in that particular place.
One way of thinking about this is, if I have a bus route in London, that bus is never ever going to drive in Kuala Lumpur, Malaysia. It's never going to go there. So I should become an expert in London for that particular bus. And the vehicle that's going to work in Kuala Lumpur should become an expert in Kuala Lumpur. What's really cool is I can now use both the learnings from both those and combine them. But I don't need to be able to be an expert across both of them straightaway. Does that make sense?
[00:23:06] RS: Yeah. There's sort of a fine area in the middle of that London, Kuala Lumpur Venn diagram, right? And that's what you would kind of apply to both. But they don't need to be trained in the same domain, I suppose. Could you give an example? What's the cross-contamination, I guess, there that you could use to improve either device?
[00:23:23] BU: Yeah. Often, off-road industries are not always in the prettiest locations. And they deal with really difficult environmental conditions. Could be fog, dust, storms, snow. They might not always happen in an urban environment. Dust –
[00:23:38] RS: Certainly in London.
[00:23:40] BU: Yeah. Dust, it just doesn't happen in a lot of urban environments. You have to drive for a very, very long time, for many, many billions of miles, before you come up against that edge case in an urban environment. But in a mine, I happen to come across them every few days. I can use those learnings, transfer them across, so that when that freak dust storm comes through our urban environment, I'm all ready to deal with it. It's really interesting. All vehicles can learn from each other.
When I deploy a new vehicle, it's not a new driver. It's got all the experience from all the other vehicles, fleets, and across those domains. It amplifies the safety case around autonomy. If I have a human learner driver, they start from scratch. Pretty rarely has anyone, their friends or family, told them much about their experience of driving. It's like jumping in the car, and you have to learn pretty much from scratch. It's rare that I've ever told anyone about my experiences of driving to help someone else drive. Whereas with autonomy, all vehicles, all fleets are learning all the time, and not one of them is going to be a beginner driver.
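A minimal sketch of what that fleet-wide sharing of experience could look like, assuming a simple shared store keyed by rare condition; the class and field names below are invented for illustration, not taken from Oxbotica's system.

```python
from collections import defaultdict
from typing import Dict, List

class FleetExperience:
    """Hypothetical shared store of edge-case examples pooled across deployments."""

    def __init__(self) -> None:
        self._examples: Dict[str, List[dict]] = defaultdict(list)

    def contribute(self, condition: str, example: dict) -> None:
        # Any vehicle, in any domain, can add an example of a rare condition.
        self._examples[condition].append(example)

    def bootstrap(self, conditions: List[str]) -> List[dict]:
        # A newly deployed vehicle starts with every relevant example the
        # rest of the fleet has already seen, rather than from scratch.
        return [ex for c in conditions for ex in self._examples.get(c, [])]

fleet = FleetExperience()
fleet.contribute("dust", {"domain": "mine", "frame_id": 101})
fleet.contribute("fog", {"domain": "urban", "frame_id": 7})

# A new urban shuttle inherits the mine's dust experience on day one.
print(fleet.bootstrap(["dust", "fog"]))
```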
[00:24:49] RS: Yeah. It's like when Neo opens his eyes halfway through the first Matrix film and is like, “I know kung-fu.” “Show me.” Right? It's like you don't do that as a human driver. Like, you wouldn't say, “Ben, I know how to drive in fog.” But an autonomous vehicle can, for example. That's such an interesting cross example of, “Oh, we can take the dust data from the mine and apply it to our urban environment driving.” Because these edge cases, like fog, for example, are common enough, but frustratingly uncommon if you're trying to train technology, right? It's a natural phenomenon, so you can't just say, “Oh, there's fog over here. Let's go collect this data.” You kind of have to wait and get it when it happens.
And I'm wondering, when it comes to collecting data, could you just drop a miniature autonomous vehicle with all of the sensors that a large one would have and be like, “Alright, this is cruising around a swamp in Bali, because we know it's quite foggy there. And we'll harvest our fog data in a rainforest, and then we'll be able to use that on our streets.” Is it that simple?
[00:25:44] BU: It is actually. And I'm going to take it a little step further. We do this right now. And this allows us to scale with some of our customers, some of our partners. We've got partners like ZF, who's a tier one supplier to OEMs. In our case, they're going to create, at production scale, autonomous shuttles for us, in which we'll put our autonomous software.
We have partners that do groceries and deliveries, like Ocado, which does the logistics systems and the automation in warehouses, with our autonomy going from warehouse to door around the world. Another one of our partners is BP. So we're operating around solar farms, wind farms, refineries. We're already collecting data all the time from those areas. And our autonomy system learns from that, and the fact that all those different environments, and the learnings from those environments, can be incorporated is fantastic.

But you still have to drive billions of miles to get the kinds of edge cases that we're talking about. So we've come up with a solution where we do virtual validation and verification, which we'll be releasing soon. It enables you to collect, in a simulated environment, edge cases that provide the coverage across the domains that you're about to deploy to, before actually even putting your vehicle into that domain. It's super exciting. And it enables us to use and exploit some of the machine learning and deep learning techniques that have come about over the last number of years to synthesize information, scenarios, data of these edge cases that you'd never come across in the kind of timeframes that you're looking at, unless you do this over billions of miles with thousands of vehicles. We're able to do this in a virtual way and use that to provide even more experiences for our vehicles, and also to provide the kind of confidence for the operators and the customers that we're working with, for the domains that they're operating in. So it's super exciting times.
[00:27:36] RS: Yeah. Can you speak a little bit more about how that works? Because it sounds like you're taking non-vehicle data. If you were to put that data side by side with something that a sensor on a vehicle might take in, it probably would look different. How are you able to translate it?
[00:27:49] BU: In fact, it looks identical. We generate data that looks identical to sensors, from a camera, from a lidar, from a radar, from GPS, IMU, wheel odometry. We synthesize exactly what it would look like. And then not only synthesize it for one particular scenario. We synthesize it and adapt it for many, many types of scenarios, not just by blurring or slightly modifying that particular scenario. We're actually sampling from a space that allows us to understand what's not normal. It's really easy to come across normal. And that's why you have to drive billions of miles, right? I drive many, many miles. They're all normal. And hopefully, I come across something that's not normal so I can get that experience. Why not sample from the not normal and get that really, really quickly, so that I don't have to drive those billions of miles?
So we use deep learning methods that allow us to generate and synthesize that kind of information. And then we can bring that in and test our autonomy stack, our software system, in simulation. And our autonomous system doesn't know if it's in simulation or not. It can't tell the difference. That allows us to test through those scenarios, find the areas that we might have to improve upon, or validate and verify that we can operate in all the types of conditions and environments for that particular domain. So we can deploy and drive some miles, but not the billions of miles, to get a full validation and verification of our system.
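Here is a toy Python sketch of the idea of sampling from the "not normal" and running the same stack against synthetic frames. The sampler, renderer, and perception stub are all hypothetical placeholders; a real system would use learned sensor models rather than these one-line stand-ins.

```python
import random

def sample_scenario(rare_bias: float = 0.8) -> dict:
    """Hypothetical scenario sampler biased toward the 'not normal'.

    Normal driving would mostly draw low fog/dust values; here we deliberately
    oversample the tails so edge cases show up in minutes, not billions of miles.
    """
    def tail_heavy() -> float:
        # With probability rare_bias, draw from the extreme end of the range.
        if random.random() < rare_bias:
            return random.uniform(0.7, 1.0)
        return random.uniform(0.0, 0.3)

    return {"fog_density": tail_heavy(), "dust_density": tail_heavy(),
            "sun_glare": tail_heavy()}

def render_synthetic_frame(scenario: dict) -> dict:
    # Stand-in for a learned sensor model producing camera/lidar/radar data
    # that the downstream stack cannot tell apart from real sensor data.
    return {"visibility": 1.0 - max(scenario["fog_density"], scenario["dust_density"])}

def perception_stack(frame: dict) -> bool:
    # The same function would run on-vehicle; it cannot tell sim from real.
    return frame["visibility"] > 0.25   # True = scene understood well enough

if __name__ == "__main__":
    failures = []
    for _ in range(1000):
        scenario = sample_scenario()
        if not perception_stack(render_synthetic_frame(scenario)):
            failures.append(scenario)          # areas to improve before deployment
    print(f"{len(failures)} edge cases found out of 1000 sampled scenarios")
```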
[00:29:12] RS: How much real-world data do you need before you can synthesize accurate data?
[00:29:20] BU: Not very much at all. You need some, but it's not the brute force, massive amounts that you often hear about for deep learning. We can create it off very small amounts of data. And I'm talking a few thousand frames of images, and laser point clouds, and radar point clouds. And we can generate the coverage that you need for the particular domains that you might be operating in. That's the key, right? You need to get coverage of all the edge cases for those domains to be able to say, “Yes, this system has been validated. It's verified. I'm able to deploy this in a safe manner.”
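One simple way to reason about that kind of coverage claim is to bin a domain's operating conditions and check which combinations your real and synthetic frames have hit. The axes and bins below are invented purely for illustration, and this is only a sketch of the idea, not how any particular validation pipeline defines coverage.

```python
from itertools import product
from typing import Dict, List, Set, Tuple

# Hypothetical operating-condition axes for one domain, each split into bins.
AXES: Dict[str, List[str]] = {
    "visibility": ["clear", "fog", "dust"],
    "lighting":   ["day", "dusk", "night"],
    "surface":    ["dry", "wet", "snow"],
}

def bin_of(frame: Dict[str, str]) -> Tuple[str, ...]:
    # Map one (real or synthetic) frame's conditions onto a coverage cell.
    return tuple(frame[axis] for axis in AXES)

def coverage(frames: List[Dict[str, str]]) -> float:
    """Fraction of condition combinations seen at least once."""
    all_cells: Set[Tuple[str, ...]] = set(product(*AXES.values()))
    seen = {bin_of(f) for f in frames}
    missing = all_cells - seen
    if missing:
        print("uncovered cells, e.g.:", sorted(missing)[:3])
    return len(seen) / len(all_cells)

frames = [
    {"visibility": "clear", "lighting": "day", "surface": "dry"},
    {"visibility": "dust", "lighting": "night", "surface": "dry"},   # synthetic
]
print(f"coverage: {coverage(frames):.0%}")
```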
[00:29:54] RS: Would you say there's been this shift in AI from more data is better, to less is more? Like, the more specific, the more relevant you can get? Like it's not a war for data. You need to be more precise.
[00:30:09] BU: I don't know if there's a shift. But I think there's a way of thinking about how can I do this efficiently? What makes this more efficient? There are only a few companies in the world that can have the kinds of data sizes that we used to need for machine learning. Now, we're coming up with methods that allow us to transfer from one scenario to another scenario, or one type of action to another type of action. I think that's expanding. I think that's changing how we use the type of data, how we inform the inputs into a model so that we get the kind of outputs that we need without having to collect all the data from all around the world.
[00:30:43] RS: Yeah, yeah. Makes sense. Ben, this has been fascinating chatting with you. Before I let you go, I want to ask you just to riff a little bit on what you're most excited about in the industry. It could be related to autonomous vehicles or not. When you just take in the scope of what's possible right now and what's going on in the field of artificial intelligence, what gets you really excited for the future?
[00:31:05] BU: I started my career in physics. And I've loved math, and I've loved physics forever. And I got to the end of my degree in physics and my PhD, and it was exciting. And we were finding out new things. But I was always drawn to how does the brain work? How the heck does that happen?
And I made a big jump from physics into robotics, with my fingers crossed, hoping that it was the right jump to make. And I've never looked back. The excitement and the ability to understand how intelligence may work, and then try to, maybe not replicate, but try to emulate or use the kind of understanding that we do have of intelligence, and create a platform that is intelligent and interacts with the world in the way that we're doing in the community for autonomy, gets me up every single morning. It's so exciting, the understanding of how you might improve an algorithm to do prediction. That I can predict the scene that I'm looking at out to a few seconds, and then take that to tens of seconds, is something that I absolutely enjoy. I am so privileged to be part of this community and have the ability to shape it and be part of the journey over the last 20 years of where autonomy has gone, where robotics has gone. I can't wait to see what we do with AI and machine learning beyond autonomous vehicles. What can robotics bring to the world after that? It's a very exciting time to be in.
[00:32:40] RS: Ben, thank you so much for being here. I really love chatting with you today.
[00:32:43] BU: Thank you very much. It's been a pleasure.
[OUTRO]
[00:32:52] RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI. Specializing in image, video and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics and agriculture. For more information, head to sama.com.
[END]