How AI Happens

dRISK CEO Chess Stetson & COO Rav Babbra

Episode Summary

In this episode of How AI Happens, we are joined by autonomous vehicle experts Rav Babbra and Chess Stetson. Respectively, Rav and Chess are the COO and Founder/CEO of dRISK, and in this episode they discuss their work at dRISK and their mission to solve a critical aspect of autonomous driving: making AVs 100x safer than human drivers.

Episode Notes

dRISK uses a unique approach to increasing AV safety: collecting real-life scenarios and data from accidents, insurance reports, and more to train autonomous vehicles on extreme edge cases. With their advanced simulation tool, they can accurately recreate and test these scenarios, allowing AV developers to improve the performance and safety of their vehicles. Join us as Chess and Rav delve into the exciting world of AVs and the challenges they face in creating safer and more efficient transportation systems.

Key Points From This Episode:

Tweetables:

“At the time, no autonomous vehicles could ever actually drive on the UK's roads. And that's where Chess and the team at dRISK have done such a great piece of work.” — Rav Babbra [0:07:25]

“If you've got an unprotected cross-traffic turn, that's where a lot of things traditionally go wrong with AVs.” —Chess Stetson [0:08:45]

“We can, in an automated way, map out metrics for what might or might not constitute a good test and cut out things that would be something like a hallucination.” —Chess Stetson [0:13:59]

“The thing that makes AI different than humans is that if you have a good driver's test for an AI, it's also a good training environment for an AI. That's different [from] humans because humans have common sense.” — Chess Stetson [0:15:10]

“If you can really rigorously test [AI] on its ability to have common sense, you can also train it to have a certain amount of common sense.” — Chess Stetson [0:15:51]

“The difference between an AI and a human is that if you had a good test, it's equivalent to a good training environment.” — Chess Stetson [0:16:29]

“I personally think it's not unrealistic to imagine AVs getting so good that there's never a death on the road at all.” — Chess Stetson [0:18:50]

“One of the reasons that we're in the UK is precisely because the UK is going to have no tolerance for autonomous vehicle collisions.” — Chess Stetson [0:20:08]

“Now, there's never a cow in the highway here in the UK, but of course, things do fall off lorries. So if we can train against a cow sitting on the highway, then the next time a grand piano falls off the back of a truck, we've got some training data at least that helps it avoid that.” — Rav Babbra [0:35:12]

“If you target the worst case scenario, everything underneath, you've been able to capture and deal with.” — Rav Babbra [0:36:08]

Links Mentioned in Today’s Episode:

Chess Stetson

Chess Stetson on LinkedIn

Rav Babbra on LinkedIn

dRISK

How AI Happens

Sama

Episode Transcription

Chess Stetson  0:00  

With an AI, it's a completely different issue. You don't need to just be testing on the rules of the road; in fact, computers are good at encoding rules. You need to be testing it on its ability to have common sense.

 

Rob Stevenson  0:11  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens are two gentlemen doing amazing work in the autonomous vehicles field. First up, on my right, is the COO over at dRISK, Rav Babbra. And also joining us, on his right, is the founder and CEO over at dRISK, Chess Stetson. Welcome to you both.

 

Chess Stetson  0:56  

Thanks very much, Rob. Great opportunity.

 

Rob Stevenson  0:58  

And I should call out at the beginning, for the listener out there: if you have been working in the autonomous vehicles field, or if you have ever purchased a very handsome cowboy hat, it's likely that Chess has affected your life in some way or another. You are a Stetson, like the Stetson hats, and there is relation there, correct?

 

Chess Stetson  1:14  

Well, I'm from Texas, let me put it that way.

 

Rob Stevenson  1:17  

There's only one Stetson in Texas, I think, is what they say over there. But in any case, we will not be talking too much about cowboy hats today; we have so much to get into. Let's start with a bit about dRISK, because you're doing amazing work over there, and I don't want to butcher the 50-word pitch here. So maybe, Chess, would you mind sharing a little bit about the company? And then we'll get into how you both wound up there, and joined and started the company, and all the work you're doing.

 

Chess Stetson  1:40  

Sure. So dRISK is solving a very central part of the autonomous vehicle problem, but we think it's the most important part: helping autonomous vehicles get to the point where they are 10 times as good as a human driver, or perhaps even 100 times as good as a human driver. If a human driver can solve what we think of as four to five sigma worth of events, sigma being the classical statistical measure of standard deviations, at 99.999%, what dRISK is doing is getting it so that autonomous vehicles can solve for that extra decimal, that extra nine or nine-nine. And that's the difference between AVs as we have them right now, where they're really great demos. In the best cases, they can drive around the city and almost never get into a collision. But when they do get into a collision, it's really hilarious, and sometimes not hilarious, sometimes something you really want to avoid. Between that current state and what the real future is, which is a commercially viable AV, with inexpensive sensors, that is a much, much better driver than any human. That's the last part of the problem that dRISK is helping to solve right now.
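
To make the sigma arithmetic concrete, here is a minimal sketch of what n-sigma event coverage means, treating sigma as one-sided coverage of the standard normal distribution. This is purely illustrative and is not dRISK's methodology or code.

```python
# Illustrative only: what fraction of events does an "n-sigma" driver handle?
from scipy.stats import norm

for n_sigma in (4, 5, 6):
    coverage = norm.cdf(n_sigma)  # one-sided normal coverage at n sigma
    print(f"{n_sigma} sigma: handles {coverage:.7%} of events, "
          f"misses ~1 in {1 / (1 - coverage):,.0f}")
# 4 sigma misses ~1 in 32,000 events; 6 sigma misses ~1 in 1 billion.
```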

 

Rob Stevenson  2:46  

Got it. Thanks for the overview. We will dig in there in a lot of ways, but I want to make sure that we get to know everyone first. So Rav, let's take it over to you, because we would love to hear a little bit about your background and how you wound up, or how dRISK wound up, on your radar. Or LiDAR, I suppose.

 

Rav Babbra  3:01  

Yes, that's true. It's a bit of a long story, though, Rob. I spent the first 20 years of my career working at a company that no one remembers, a company called Nokia, obviously a great innovation pioneer of its time, almost a Google of its time. And when I was made redundant in 2011, I really had that passion for innovation, world's firsts, et cetera, et cetera. I went off and joined the UK Government, at a body called Innovate UK, who were the government arm responsible for handing out government grants. And the government of the UK at the time were very, very interested, and still are interested, in autonomous vehicles in the UK. They want to solve problems like mobility problems in the UK: living far away from railway stations, living far away from main roads, that sort of thing. I think they call it transport poverty. Of course, the UK is an island, and the average number of vehicles here is 2.2 per household. Now, of course, that can't keep going. And of course, with offices in London, it takes ages and ages to commute to London now, and when you get there, you need a car parking space. So Chess and I recently worked out that in the UK the average car only moves for about 6% of its time, and the UK Government have picked up on that. If it only moves 6% of the time, something clearly is wrong with the equation there. Now, when I was working at Innovate UK, a company called dRISK applied to a competition that I was running, called the CAVSim competition. And when I read their competition application, I thought, wow, these guys are doing something special, something that will change the world; I must work with them. So I had possibly the best interview ever in history. It was actually on the top of the, what was the dam called, Chess?

 

Rav Babbra  4:51  

The bridge... it was the Hoover Dam. Not a hard word for an American to remember.

 

Rav Babbra  4:58  

So I had my...

 

Rob Stevenson  4:59  

Name that dam, I dare you.

 

Chess Stetson  5:03  

Three Rivers Dam? Okay. Oh yeah, the Hoover Dam.

 

Rav Babbra  5:06  

Sorry, guys. So I had my interview with Chess at the top of the Hoover Dam whilst I was out near Vegas, and yeah, it really captivated me. I had to work for this company. And history speaks for itself: I've been there for three years and one month.

 

Rob Stevenson  5:19  

What was the competition?

 

Rav Babbra  5:23  

So the competition was called CAVSim. At the time, the UK Government had realized, I think the world had realized, that to test an autonomous vehicle, if you do the math, you're going to have to drive 15 billion miles to be able to check that it can respond to everything that can happen, all the risky scenarios that can happen in front of that vehicle. Now, of course, nobody's going to drive 15 billion miles, and even with a fleet of 10,000 vehicles, 15 billion miles is quite a long way and quite a long time. So I think at that point the sector had finally accepted that simulation, or testing in simulation, was the only way forward, to do it faster than real time. So the UK Government spun up something called the CAVSim competition, which was basically to use the simulation environment to be able to improve the connected and autonomous vehicle space, and dRISK applied with a competition application which was nicely titled "The World's First True Driving Test for the Autonomous Vehicle." The idea behind that, Rob, was that the UK Government set the sort of problem statement, and the problem statement they had at the time was basically that they were worried that somebody like a Tesla, somebody like a Cruise, would arrive in the UK, park outside the Department for Transport head office, and say: here's our vehicle, it's an SAE Level 5 vehicle, why can't we drive it on the UK's roads? Now, exactly at that time, we went through the process problem with the UK Government. And the process problem at the time was that they couldn't say no, because there were only two hurdles that a new manufacturer has to get over to be able to drive on the UK's roads. Firstly, the driver must have a driving licence. Well, of course, that fails straightaway, because this vehicle that Mr. Musk has brought into the UK doesn't have a driver. And the second problem was that any vehicle, every vehicle, coming into the UK must pass what they call a homologation test. And that homologation test basically goes through the vehicle's structure, and it asks: does the vehicle have a functioning steering wheel? Does the vehicle have a functioning set of pedals? Of course, the answer is going to be no. So at the time, no autonomous vehicles could ever actually drive on the UK's roads. And that's where Chess and the team at dRISK have done such a great piece of work. We supplied to them last September a piece of software that basically resides on the Department for Transport's shelves right now. And what they're able to do is scrape a handful of scenarios that we've collected for these sorts of tests, scenarios collected from real-life events that have taken place on roads all over the world, captured from CCTV, from front-facing dashcams, from insurance reports, from accident reports, and from interviews with drivers. And they could present those to Mr. Musk, or Mercedes, or whoever, and say: here are some scenarios, go and try your testing in simulation and see how you get on. And of course, Rob, if they come back and say we passed 99% of these, it gives the UK Government some confidence, and also the ability to benchmark against other systems.

 

Rob Stevenson  8:31  

So it sounds like the approach taken by dRISK, or prescribed by dRISK, is not: here's how to make a right turn. It's: here are examples of all the terrible things that could go wrong in a right turn; let's train you to avoid those, and then you will be able to make a right turn. Is that right?

 

Chess Stetson  8:44  

Yes, exactly right. Yes.

 

Chess Stetson  8:46  

And it's fitting, right? A right turn in the UK is the same thing as an unprotected left turn here in the US. So yeah, if you've got an unprotected cross-traffic turn, that's where a lot of things traditionally go wrong with AVs. They've fixed a lot of them, but there's a ton more they still haven't fixed, and you see it anytime you travel in an FSD Tesla, or even if you travel in the really, really highly constrained ones, a Waymo or a Cruise: you will still see it doing things that show that it doesn't have a lot of common sense on a cross-traffic turn. They've worked out a lot of the common situations, but it's the really, really uncommon situations, where common sense would help you a lot, that AVs still don't quite have. And that's what we're trying to solve for.

 

Rob Stevenson  9:26  

Yeah, it sort of turns the approach on its head, because, maybe this is naive, but my understanding of how AVs were being trained was: all right, let's teach it how to drive in a straight line. Whereas your approach is: let's teach it how to drive on the road, and then it will naturally drive in a straight line, or make a right turn, as the case may be. So, when you go to collect these extreme edge cases; first, let's start with what makes something an extreme edge case.

 

Chess Stetson  9:51  

Certainly, it happens infrequently. So that's almost, but not quite, by definition. Things that happen all the time are things that we design roads around, that we design rules around. So almost by definition, things that happen frequently are not edge cases. Whether it's less than 0.1% of the time or 0.01% of the time, it's somewhere in that long tail, and that will make it an edge case for us. Also, it's usually something that involves high risk. Now, we have a special mathematical way of balancing those two things. High risk being: if you had the vehicles move a few centimeters differently than they are right now, then you'd end up with a collision that would cause injury or property damage, hopefully not a fatal injury. But we have a measure for risk, and we have a mathematical way of balancing those things. Of course, different people have different definitions, and we have a framework, a whole software platform, that's a little bit flexible to that and can allow different AV developers or different regulatory authorities to define what they think of as risk. But ultimately, it's some combination of: it's infrequent, and if it goes wrong, it's going to hurt you or it's going to damage your property. And both of those things together can allow us to map out a test space, if we have enough data on how those things do end up happening. And we collect them, as Rav said, from places where they tend to happen: unprotected cross-traffic turns at busy intersections, on-ramps, off-ramps, places where they've already happened, accident reports. That's where we collect the evidence of the edge cases. And then we put them all into one place, so they can be tested against.
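
As a rough illustration of how infrequency and risk might be balanced in a single score, here is a hypothetical sketch. The field names and the formula are ours for illustration only; dRISK's actual metric is more sophisticated and, as Chess notes, configurable per developer or regulator.

```python
# Hypothetical sketch: combine rarity and risk into one edge-case score.
from dataclasses import dataclass

@dataclass
class Scenario:
    frequency: float   # estimated occurrences per million miles
    min_gap_m: float   # closest approach between agents, in meters
    severity: float    # expected harm if contact occurs, scaled 0..1

def edge_case_score(s: Scenario) -> float:
    """Higher = rarer and closer to a harmful outcome."""
    rarity = 1.0 / max(s.frequency, 1e-9)       # long-tail events score high
    proximity = 1.0 / max(s.min_gap_m, 0.01)    # "a few centimeters" from collision
    return rarity * proximity * s.severity

# A once-in-ten-million-miles near miss at 5 cm with high potential severity:
print(edge_case_score(Scenario(frequency=0.1, min_gap_m=0.05, severity=0.9)))
```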

 

Rob Stevenson  11:31  

Rav mentioned a second ago that you're also collating data from, like, insurance reports. So are you taking text data and then turning that into something that can train an autonomous vehicle?

 

Chess Stetson  11:41  

Yes, we do a certain amount of natural language processing. We have a set of rules that we can use to look at a claim, like an insurance claim or a police report, and reconstruct the accident that happened. And we can do that at different levels of fidelity: if we know a lot about the location, we can do it with a lot of accuracy; if we don't know much about the location, then we do it at lower fidelity. But then we also have a way of balancing that when it comes out in the final test.
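
As a toy illustration of the rule-based flavor of this step, the sketch below pulls a few structured fields out of an invented claim narrative. The claim text, field names, and patterns are all made up for illustration; dRISK's production NLP pipeline is not public.

```python
# Toy illustration of rule-based accident reconstruction from a claim
# narrative. Everything below is invented for illustration.
import re

CLAIM = ("Insured was travelling north on the A40 at approx 30 mph when a "
         "third party pulled out from a side road and collided with the "
         "insured's nearside front.")

RULES = {
    "speed":     re.compile(r"\d+\s*mph"),
    "heading":   re.compile(r"travelling\s+(?:north|south|east|west)"),
    "manoeuvre": re.compile(r"pulled out|changed lane|reversed|braked"),
    "impact":    re.compile(r"(?:nearside|offside)\s+(?:front|rear)"),
}

# Structured fields like these can seed a lower-fidelity reconstruction
# when little is known about the exact location.
fields = {name: (m.group(0) if (m := rx.search(CLAIM)) else None)
          for name, rx in RULES.items()}
print(fields)
```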

 

Rob Stevenson  12:07  

That's fantastic. And it allows you to be more precise, right? When you get CCTV footage, the overwhelming majority of the time nothing's happening, or nothing interesting to you, anyway, is happening. Whereas, presumably, a collision report is exactly the kind of thing you need to be tuning into.

 

Chess Stetson  12:21  

That's right. So by post hoc taking the bad things that happened, we can prevent them from happening in the future. And there's another thing that we do also, which is a lot of remapping of a scenario that happened in one location to another location. And it's not quite generative AI, because generative AI has the problem of hallucinating: it can make up things that would never happen, and it could really distract your self-driving car if you used a lot of generative AI. At least as we mean it right now; generative AI itself is sort of a broad term, so there's a lot of ways to think about that. But what we do is we have an AI-based technique for warping a scenario that happened in one location onto another location. So in contrast to the way autonomous vehicles have traditionally worked, where you drive around one location as much as you can, collect all the weird things that can happen when you're driving, and then put them in a huge data lake alongside a much, much huger amount of boring driving, we can predict not just what has happened in the region, but what's going to happen, by doing a good job of taking scenarios that have happened in one place and mapping them onto another place. So that's something we do a lot of.
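
One simple way to picture "warping" a scenario onto a new location is to express each recorded trajectory in road-relative coordinates (distance along the centerline, signed lateral offset) and re-project it onto the target road's geometry. The sketch below does exactly that and nothing more; dRISK describes their technique as AI-based, so treat this as a geometric stand-in, not their method.

```python
# Geometric stand-in for scenario warping: source road -> target road.
import numpy as np

def warp_trajectory(traj_xy, src_cl, dst_cl):
    """Map points recorded near src_cl (source centerline, shape (N, 2))
    onto dst_cl (target centerline) via (arc length, lateral offset)."""
    def arc(cl):
        return np.concatenate([[0.0],
            np.cumsum(np.linalg.norm(np.diff(cl, axis=0), axis=1))])

    s_src, s_dst = arc(src_cl), arc(dst_cl)
    warped = []
    for p in np.asarray(traj_xy, dtype=float):
        i = min(int(np.argmin(np.linalg.norm(src_cl - p, axis=1))), len(src_cl) - 2)
        t = src_cl[i + 1] - src_cl[i]
        t = t / np.linalg.norm(t)
        off = p - src_cl[i]
        d = t[0] * off[1] - t[1] * off[0]          # signed lateral offset
        s = s_src[i] / s_src[-1] * s_dst[-1]       # proportional arc length
        j = min(int(np.searchsorted(s_dst, s)), len(dst_cl) - 2)
        u = dst_cl[j + 1] - dst_cl[j]
        u = u / np.linalg.norm(u)
        n = np.array([-u[1], u[0]])                # left-hand normal
        warped.append(dst_cl[j] + d * n)
    return np.array(warped)
```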

 

Rob Stevenson  13:28  

In that example of warping an edge case from one location and putting it in another, how can you be certain that the simulation is real-world accurate?

 

Chess Stetson  13:40  

We have a metric space that we calculate. We calculate, post hoc, a lot of metrics on what's going on: how close two vehicles ended up to each other, how close the average number of vehicles in the scene were to each other. And we can say something like, for example: if this remapped scenario has a lot of the same statistics as other scenarios that happened in that space, but one thing different, like two cars got way too close to each other, it might be a good test. And so we can, in an automated way, map out metrics for what might or might not constitute a good test, and cut out things that would be something like a hallucination, and then have a certain amount of human curation. So, I don't know how much we've shown you of our tool, but this tool allows us to look at what we call embeddings, kind of like the way an AI would see the world: big projections of data, maps of clouds of different parts of the feature space. And sometimes we can automatically cut out large swathes of good or bad scenarios in the metric space, even if the metric space has thousands of dimensions.
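
A minimal sketch of that automated check might look like the following: compare a warped scenario's metrics against the statistics of observed scenarios and flag it by how many dimensions deviate. The threshold and the three-way labeling are our invention for illustration, not dRISK's actual curation logic.

```python
# Hypothetical filter: one extreme metric can make a good test;
# many extreme metrics at once looks like a hallucination.
import numpy as np

def triage(real_metrics: np.ndarray, candidate: np.ndarray, z_max: float = 4.0) -> str:
    """real_metrics: (n_scenarios, n_metrics) from observed data;
    candidate: (n_metrics,) for one warped scenario."""
    mu = real_metrics.mean(axis=0)
    sigma = real_metrics.std(axis=0) + 1e-9
    outliers = int((np.abs((candidate - mu) / sigma) > z_max).sum())
    if outliers == 0:
        return "typical"      # statistically like what already happens there
    if outliers == 1:
        return "good test"    # plausible, but with one stressing deviation
    return "suspect"          # deviates everywhere: send to human curation
```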

 

Rob Stevenson  14:40  

Gotcha. So I guess I should call out that you are not necessarily developing software for autonomous vehicles to use; you are developing a standard that they can apply, right? A test for how safe autonomous vehicles are. And part of developing the test is coming up with all of these edge cases and kind of determining how an autonomous vehicle should operate. Is that correct?

 

Chess Stetson  15:02  

Well, that's what the competition that Rav mentioned originally set out to do. But I would demur a little, in the sense that I think that we are developing a training tool for autonomous vehicles. Some of our customers use it that way; some of our customers only use it as a post hoc test. The thing that makes AI different than humans is that if you have a good driver's test for an AI, it's also a good training environment for an AI. That's different from humans, because humans have common sense. We know that when we give you a driver's test, we're just testing you on your ability to follow the rules of the road, and we know that when you get on the road, you're going to do everything you can in order to avoid getting killed. And all you have to do is make sure that you know the difference between a double line and a single one. With an AI, it's a completely different issue. You don't need to just be testing on the rules of the road; in fact, computers are good at encoding rules. You need to be testing it on its ability to have common sense. And if you can test it on its ability to have common sense, if you can really rigorously test it on its ability to have common sense, you can also train it to have a certain amount of common sense. It involves a much huger test base; it's much longer than the driver's test. It's at least hundreds of millions of scenarios, if not many, many more. But once you've got them, you can train it on 25% of those, 10% of those, test it on another 10% of those and make sure it's good, then later on test it on another 10% of those, collect some more tests and another 10%, and continue building out the way you would build out any AI: training, and then testing, on a really rich collection of data. So it actually is both. The difference between an AI and a human is that if you had a good test, it's equivalent to a good training environment.
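
Here is a minimal sketch of that rotating train/test discipline over a large scenario library. The fractions come straight from the conversation; everything else (function name, number of rounds, shuffling) is ours for illustration.

```python
# Illustrative rotating splits: train once, then keep testing on fresh,
# never-before-seen slices of the scenario library.
import random

def rolling_splits(scenario_ids, train_frac=0.25, test_frac=0.10, rounds=3, seed=0):
    ids = list(scenario_ids)
    random.Random(seed).shuffle(ids)
    n_train, n_test = int(len(ids) * train_frac), int(len(ids) * test_frac)
    train, held_out = ids[:n_train], ids[n_train:]
    for r in range(rounds):
        test = held_out[r * n_test:(r + 1) * n_test]  # unseen 10% each round
        yield train, test

for train, test in rolling_splits(range(1_000_000)):
    print(len(train), "training scenarios,", len(test), "fresh test scenarios")
```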

 

Rob Stevenson  16:43  

Okay, thank you for clearing that up. I'm really glad you called out this difference between a human driver and an AI driver. Because when you take your driving test, and it's been a while since I took one, they didn't give me a scenario where, oh, now a bouncing ball comes out from between two cars and a small child is chasing after it; what do you do as the driver? It was just: can you obey a stop sign? Can you turn right? Can you parallel park? All these things. But an autonomous vehicle is being trained on all of these really, really rare but important moments. So it comes back to this idea that the standard for an autonomous vehicle is so much higher than the standard for a human driver. More of a philosophical question, I guess, but do we expect that to continue? Is that how we should be looking at things? Or, as a function of time, will people begin to trust these things more, and expect them not to necessarily be orders of magnitude safer than a human driver?

 

Chess Stetson  17:37  

I think that there are multiple views on that in the testing space. I think that almost everybody that thinks about autonomous vehicle testing and homologation agrees they've got to be at least as good as a competent human driver, probably a bit better than a competent human driver. I think everybody would probably agree with that. I mean, Rav, you'd agree with that, right? In the UK CCAV space, most people would agree an AV needs to be at least that good right now, right?

 

Rav Babbra  18:03  

I think the benchmark is a human driver, the average human driver, whatever that means; it must be at least that capable. But just to echo something I read on LinkedIn this month, Chess: I think a certain mayor in San Francisco commented that a certain operator around those areas is six and a half times worse than a standard San Francisco driver right now, right?

 

Chess Stetson  18:29  

So there's been a lot of controversy about that statement. But whatever the threshold is, probably better than a competent human driver is the minimum possible threshold that even the most relaxed people in the space would agree to. We have a view inside of dRISK, which is that if we think that a decent human driver is between what you might think of as four and five standard deviations out in the space of possible cases, then a good AV has got to be six standard deviations out. I personally think it's not unrealistic to imagine AVs getting so good that there's never a death on the road at all, apart from somebody that has a heart attack whilst walking on the sidewalk. I think that that's actually a realistic future. I think a lot of people disagree with that, so I'm going to put that out as my personal view.

 

Rob Stevenson  19:17  

Okay, yeah, that's good to call out. So that example of the San Francisco mayor, that was on the heels of a tragic example where a dog was struck and killed by an autonomous vehicle. And I remember reading the news article and thinking, this is newsworthy because it's an autonomous vehicle, and that's the only reason. And, of course, the software developer was like: we're not at fault, this was a tragedy, this kind of thing happens, cars are dangerous. But the tolerance for it is so much lower with an autonomous vehicle. If a human driver had struck a dog, and that probably happens, unfortunately, every day, it's not news. But it's news because it was an autonomous vehicle. So I just wanted to call out this disparity between the tolerance for a human driver versus an AV driver.

 

Chess Stetson  19:59  

Yeah, I mean, I think the direction behind your question is: are people going to accept it more? And sure, they might. But one of the reasons that we're in the UK, and you can tell I've got an American accent, I happen to be joining you from Southern California at the moment, but one of the reasons that we're in the UK is precisely because the UK is going to have no tolerance for autonomous vehicle collisions. In the US, we've been beta testing autonomous systems with real drivers, and there's good parts of that and bad parts of that. But in the UK, there was a collective decision, by what I'm going to call the CCAV community, the Centre for Connected and Autonomous Vehicles community, and by the government, and by the hysteresis of the existing legal framework that Rav already described, that you're not going to have any vehicles on the road until they can really be tested beforehand. That is to say, without a human driver, without a safety driver. And that basic breakdown is the necessary breakdown for AI: you need to have third-party testing to have good AI. You can't just have AI trained and tested by the same people, because you get an echo chamber, and then when it gets out into the real world, it's going to do weird stuff. If you've got that basic breakdown, and if you've got a system, an AI training economy, where the parties responsible for making sure that the AI is good are different from the parties that are building the AI, then you've got the basic framework for good, responsible AI. And I think that you won't have to worry about whether people accept AIs slightly better than human drivers, because, again, I think you're going to get AIs that are much, much better than human drivers. I think we can expect that. But in a way, it rests on having very little tolerance for mistakes.

 

Rob Stevenson  21:42  

Yeah, of course. That's a good call-out that maybe the public perception is not so important as whatever the oversight is in assuring safety. And there's public backlash to all these kinds of things. A video came across my feed the other day about people responding to seatbelt laws and DUI laws in the United States, and they'd be like, what is this, a communist country? And it's like, we're trying to make people safer. And even so, in that case, it was still not accepted. So as long as there's some kind of oversight and assurance of safety, whatever is newsworthy maybe is not so important. I do want to hear a little bit more about the actual test itself. Rav, could you maybe share some details about the dRISK assessment for an autonomous vehicle? What is the passing grade?

 

Rav Babbra  22:22  

Good question. So I think the passing grade is something that we would allow the UK Government to assess. Right now, they don't even have a benchmark between certain companies' AV systems. So I think that would be something that we would work with the UK Government on: by testing the first few companies coming through, then maybe reproducing that testing, or maybe the failed tests, on a private test track, and then setting that benchmark, and maybe using some lessons learned from the aviation industry or the nuclear industry to try and better that again and again, so that we're not taking any steps backwards. We're only ever taking steps forward towards safer and more functional autonomous vehicles.

 

Chess Stetson  23:09  

Can I also jump in on that one, please? Yeah, so I'm going to be a little bit bolder and, first of all, give a number, and then also be a little bit real about where training and testing of AVs is. I'm going to say, again, I think it's a number of standard deviations out in what we would call a metric space. So people talk about six sigma, and we're not the only company to mention this, although we might be the first: there's some notion of the number of things that you can do, up to 99.999% of events. And you can actually characterize scenarios and their relationship to each other on a landscape, and as you move out from common ones in that landscape to the edge, you can calculate, you can interpolate, the probability of the scenarios, and we can give you a score. We can say that you are a four sigma driver, a five sigma driver, a six sigma driver. And this is something that we are still advocating for. That's our score; other people might envision different scores. And we are advocating for this kind of score with transport authorities, and I'm not going to call out any one of them right now, because we're still in talks. But we, in the sense that the UK has funded us to come up with a driver's test, are advocating for a particular way to do it. And we think anything of that flavor, whether it's our suggestion or a consortium's suggestion, is going to be good. Now, the real part of this is there's not a single AV developer that has subjected itself to external testing yet. Not because we don't think it's a good idea, or because the British government doesn't think it's a good idea; it's simply not the way the space has evolved. So at dRISK right now, we're working with an AV developer that is going to be showing the UK Government how good external testing, third-party validation, can be. That's the course that the UK Government has taken, which I think is a great course. We've got two deployments coming up where we'll be acting as the third-party validator and then showing it to a transport authority to assess whether it's a good validation technique. So I think you're going to see that sometime in about 2024: you're going to have a system that has got a six sigma score report, where a driver could reasonably come out of the vehicle, running in two different places in the UK, Cambridge and several places around the Midlands. And at that time, we'll be able to say: yeah, here it is, this is a bomb-proof AV with a six sigma performance score. So I think that's where it's going to be. But real talk: no AV developer has subjected themselves to something like that yet from a third-party validator that's recognized.
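
Reading the score in the other direction, here is a toy version of grading a driver by pass rate. The function is ours for illustration, treating the score as the inverse of the one-sided normal coverage from the sketch earlier; it is not dRISK's scoring.

```python
# Toy grading: turn a measured pass rate over the scenario landscape
# into an "n-sigma driver" figure (illustrative, not dRISK's scoring).
from scipy.stats import norm

def sigma_score(passed: int, total: int) -> float:
    return norm.ppf(passed / total)  # inverse one-sided normal coverage

print(round(sigma_score(999_999, 1_000_000), 2))  # -> 4.75 (a ~5 sigma driver)
```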

 

Rob Stevenson  25:33  

It seems like eventually they will have no choice. This is just how certification and legal approval of systems like this works. As Rav called out, if it were to mirror the way that airplanes are regulated, eventually they will have to submit to this kind of test. Why have they not yet?

 

Chess Stetson  25:51  

I think it's because of the funding framework. So most AV development has been driven by Silicon Valley funding, and it's kind of like cowboy land: they take it on themselves that they can solve the entire problem. Their engineers are also very much like that. A lot of us are engineers, I like to think of myself as an engineer, and we love solving problems all on our own. So you put those two things together: you put people that can spend tens of billions of dollars, up to hundreds of billions of dollars, on a set of companies, tens of billions of dollars on a given company, and the company is completely incentivized to try to solve the problem itself. It doesn't want to subject itself to external validation. And you've got to hand it to them: they are running AVs around the cities now. Even if a lot of people would say that those AVs are kind of brittle, and they're kind of burning money like wildfire, they're just constantly burning money doing this, they've gotten it to a certain level of technological tour de force, which, even if it was brute force, was still a tour de force. Now, is it going to be enough to get over the line? Well, maybe we're biased, but we don't think so. We think that the natural thing to do is third-party validation, and you can't really have a good AI without it, especially not something that's got to be safety-critical. But I think that the Silicon Valley way of doing things assumed that if you put enough money into it, and you put all the smartest people behind it, you could build something that was so incontrovertibly better that everybody else would just fall in line, and then the first-mover company would be able to make all the money. I think that's proving itself to not be a good way to do it. But, you know, turning the ship is taking a while. So we think what you're saying is inevitable, but I don't think everybody else has thought it through yet.

 

Rob Stevenson  27:28  

I mean, there are so many examples of it. Like, the emissions test is a good one: you have to take your car to have its emissions measured and make sure that it's compliant. And if Nissan just said, oh, trust us, our emissions are compliant, we don't need to test it, no one would believe them, right? You have to submit yourself to this kind of test. It feels like, just because the autonomous vehicle is new and shiny and sparkly, it's not going to be immune to that same kind of regulation, surely.

 

Rav Babbra  27:51  

That's exactly right, Rob. The UK Government is currently toying with this, and they're in a difficult position. They don't want to stifle innovation by setting standards, a kind of benchmark, that would be either too easy or too hard. But at the same time, they want to be able to define some way of getting better and better. Like I said, there's this balance between stifling innovation, leaving it to be the cowboy land Chess described and letting those people drive forward towards that goal, or setting a standard and just allowing people to game up to that standard and do nothing else.

 

Rob Stevenson  28:25  

I see. Yeah. So it's a delicate dance, I suppose, that they're playing between innovation and regulation, as always. Going back to some of the edge cases a little bit: I'm curious to hear, what are some areas where autonomous vehicles are still really struggling? That is the fun part to me. It's like, yeah, we know that they can stop at a stop sign and obey the speed limit, but what is the final frontier of AVs being able to drive on the road? What are the really hard things for them still?

 

Chess Stetson  28:47  

So, one of the ones that happened recently, and it's funny, because now they're all up on LinkedIn and Twitter and so on: a number of the ones that have been really high-profile recently are things that we worked on with our customers ages ago. I remember there was one AV model in San Francisco that drove right past a downed electrical wire. And that's one of many debris examples that we have in the knowledge graph of edge cases we've assembled, and we have been working with AV developers on that, and many other similar kinds of examples, for years now. We don't make a practice of publishing our tests; you'll see probably a few more of them from us, just for outreach reasons, but in general, we want them to be unexpected. But it was funny to see that one come out and then realize the AV developers we've been working with are immune to it; they can solve that problem, because they had already been exposed to it, or ones very like it. But the model in question, it's not an AV company that, as far as we know, has subjected itself to any kind of third-party testing. So they're brittle in that way, and they ran past some downed electrical wire, and they ran past police tape. And what's funny is they didn't even go and show post hoc that they could solve the problem later on, which is another funny quirk about the way these AV companies work at the moment. So that's a good one there. You want another? Where do you even start? A tumbleweed getting caught in the draft between two platooning Class 8 semi-trucks going across the country. Or tiny puppies running off-leash in a parking lot. Rav, I don't know, anything that occurs to you at the moment? There's so many in the knowledge graph, I almost want to hunt through the system.

 

Rav Babbra  30:27  

I'll give a couple of examples of some of the ones that I found particularly interesting, and I hope you don't cast aspersions on me for the ones that I've picked in particular. We recently ran a public survey to gather things that people had seen on roads, using a partner that we work with called DG Cities in the UK. They went out and interviewed the public and asked: what's the weirdest thing you've ever seen on the road? And one guy explained that he was following a truck that was carrying portaloos; I think they're called porta-potties in the States. And he could see that they were slowly vibrating loose, and these things fell off the back of the truck into the path of this driver. That was certainly an interesting one. So we've had that one.

 

Chess Stetson  31:09  

How did they refer to it? They said it started to "shed its load," no?

 

Rav Babbra  31:15  

But other ones that we found particularly interesting in recent times; and Rob, you mentioned this before: as a human driver, if you're going down a lane with cars parked on each side and a ball bounces out between two parked cars, you probably make an assertion, or an assumption, that a child is going to follow that ball. An AV doesn't have that moment; it doesn't have that thinking brain. Another example: near me, we have some very delicate and intricate small country roads. At nighttime, a human driver uses the reflections in the cars that are parked on the road, to see if any headlights are reflecting, to see if anybody's coming. Of course, an AV won't be able to do that just yet. So, again, one to you. Big stacks of leaves in the middle of the road, that's another example; and we've had examples of things like that actually standing up and being a person, maybe a homeless person standing up in the middle of the road. What else have we had? We've had foxes fighting in the road, we've had wheelie bins blowing across the road, we've had manhole covers that have been either stolen or removed. And of course, would an AV see and note that that manhole cover wasn't there just now? Maybe we would as human drivers, but not the AV.

 

Chess Stetson  32:29  

I will also say that I think that there's a larger class. I mean, everything we've just talked about is kind of in that fun territory, and they're real, and they need to be tested on. But they all make us think of some particular asset or some particular event or something that looks different. There's a huge class of scenarios where you as a human can just predict where agents are going to go, even if they're really common agents: cars, trucks, pedestrians, VRUs as we say, vulnerable road users, pedestrians on scooters. You can just see where somebody's going to go. You as a human have the basic common sense to know where they're going to go, and to know which are the high-risk humans. You don't have to pay attention to every single player in the scene; you know which players in the scene to pay attention to, and you can see where they're going to go. And maybe there's like three or four of them: you can see a car is trying to make a turn, a VRU is going to try to go around it, and there's another vehicle that they're both trying to avoid and plan their behavior around. There's a huge class of scenarios like that that we're constantly working with customers on, where it's really tricky for them to get the controls right around all of them. And getting it right is critical to good driving, because you need to be quick. Like a quick Uber, you need to get people from here to there and make it efficient, not drive like a grandma, not get in the way of emergency vehicles. And so you need to be able to differentiate between those scenarios in which you really do need a lot of care, where you need to slow down and wait for everything to figure itself out, and those scenarios where you don't need to slow down, so that you can get there efficiently. And AVs still aren't good at that very large class of scenarios where it's not a weird thing in there. There's no Christmas tree falling out of an airplane; it's just a lot of things moving, in a way where common sense can tell you what's going to happen, and the AV doesn't quite have it.

 

Rob Stevenson  34:09  

Right, right. But for the AV to have it, it needs to be trained on, for example, a Christmas tree falling out of an airplane.

 

Chess Stetson  34:15  

Well, maybe. Maybe it extends that much; that might depend on the system under training. But at the very minimum, it needs to be trained on all the other ways that things can move in real life, in order to be able to extend, or to extrapolate, to the one that it's looking at right now, as opposed to just training on exactly that one, or training it on much more anemic, lower-complexity scenarios.

 

Rav Babbra  34:43  

So, one example, just to jump in there, that may make some sense: we collect data from India, and India is one of our richest sources of data. Recently we collected one example on a highway where the car in front suddenly swerves, because what it realized, in India, where there's a car tailgating another car, is that there's a cow sitting in the fast lane of the highway. A very important lesson was learned there, of course, by the driver, but we were able to train on and use that in our system. Now, of course, my children asked straightaway: Dad, what relevance does that have to London's roads?