How AI Happens

Autonomous Aerial Disaster Relief with Head of Engineering Ian Foster

Episode Summary

Have you ever considered the risk to the humans involved in disaster response? Joining us today on How AI Happens is the Head of Engineering at Animal Dynamics Limited, Ian Foster, to discuss the technology they’re developing in the hope of eliminating human involvement in life-threatening deliveries, rescues, and disaster response.

Episode Notes

Ian discusses the unique problems aerial autonomous vehicles face, how airspace segregation affects flying, how the vehicles land, and how they know where to land. Animal Dynamics' goal is to phase human involvement out of its technology entirely, and Ian explains where humans currently sit in the process before telling us where he sees this technology fitting in with disaster response in the future.

Key Points From This Episode:

Tweetables:

“Drawing inspiration from the natural world to help address problems is very much the ethos of what Animal Dynamics is all about.” — Ian Foster [0:02:06]

“Data for autonomous aircraft is definitely a big challenge, as you might imagine.” — Ian Foster [0:16:17]

“We're not aiming to just jump straight to full autonomy from day one. We operate safely within a controlled environment. As we prove out more aspects of the system performance, we can grow that envelope and then prove out the next level.” — Ian Foster [0:19:01]

“Ultimately, the desire is that the systems basically look after themselves and that humans are only involved in telling the thing where to go, and then the rest is delivered autonomously.” — Ian Foster [0:23:45]

“The important thing for us is to get out there and start making a difference to people. So we need to find a pragmatic and safe way of doing that.” — Ian Foster [0:23:57]

Links Mentioned in Today’s Episode:

Ian Foster on LinkedIn

Animal Dynamics

How AI Happens

Sama

Episode Transcription

[INTRODUCTION]

[00:00:03] RS: Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson. We're about to learn, How AI Happens.

 

[INTERVIEW]

[00:00:31] RS: Here with me today on How AI Happens is the Head of Engineering over at Animal Dynamics, Ian Foster. Ian, welcome to the show. How are you today?  

[00:00:39] IF: I'm great. Thanks, Rob. Good to be here.  

[00:00:41] RS: So pleased to have you. It is the beginning of my day. It is the end of yours. We're a transcontinental podcast now, it would seem.

[00:00:47] IF: It’s very exciting.  

[00:00:48] RS: It is. I thank you for sticking with me here and dealing with the time change, but there's so much awesome stuff to go into, because of the fascinating work you're doing over there at Animal Dynamics. We will get to that. First, let's get to know you a bit. Ian, would you mind sharing a little bit about your background, and then how you came to your current role at Animal Dynamics?  

[00:01:06] IF: Sure. Yes, I'm an automotive engineer by training. I was fascinated by cars from a young age. I worked initially in motorsport and automotive, on transmissions and powertrain, so very much in this mechanical world. I did that for quite a long period of time, generally solving the same sorts of problems all the time. You're faced with the same issues you're trying to overcome, maybe the power is a little bit higher, maybe you want to go a little bit lighter, or a little bit faster, or whatever, but it's the same challenges you're being posed all the time. I fancied doing something different, and found out a bit about Animal Dynamics, and what could be more different than what Animal Dynamics does.

We're developing highly efficient, autonomous systems for land, sea, and air, and solving problems very much from first principles. It's a blank-sheet-of-paper kind of task. There's no real preconceived or preordained direction. The limitations are much more varied than in the automotive space. Obviously, drawing inspiration from the natural world to help address those problems is very much the ethos of what Animal Dynamics is all about. Not everything we do is bio-inspired. We're very pragmatic about finding the right answers for any given problem, but at the end of the day, nature has had millions of years to hone its solutions. So there's plenty that we can learn from in that space.

 

[00:02:28] RS: Well pointed out that there is no real playbook for the type of company and the type of tech you're building. I think everyone's probably been in meetings where a chief marketing officer or VP of engineering can say something like, “Look, I've been building ad tech products for 30 years. This is how we've done it before.” Right? That's sometimes really helpful, but it's not the case with the technology you are developing. We've been teasing it now for a couple of minutes. Let's talk about the actual tech you're developing. Would you mind just sharing the mission of Animal Dynamics and how that's being carried out?

 

[00:03:01] IF: Yeah, sure. Like I said, fundamentally, Animal Dynamics is developing autonomous systems that solve challenging problems. There's an awful lot going on in the world today. Things are changing very quickly. There's conflict. There are climate issues. There's forced displacement of people all over the world. Acute food insecurity. As a result of all of those things, there's definitely a need for solutions that can protect human life and assure the delivery of essential supplies to all sorts of different environments. A lot of those environments are quite dangerous.

In many cases, the best solutions for delivering things to people in need are going to be aerial solutions, and they're going to be autonomous solutions. We're working on those systems. Hand in hand with that, we're working on the aspects that go around them: the infrastructure that's going to be needed to support and enable them. We're also trying to understand and work through the regulatory frameworks that will keep all of us safe as we roll out those sorts of products. It's a large mission.

[00:04:09] RS: Could you maybe give a real world example of where Animal Dynamics has offered a solution? What is the real world occurrence, whether it's a natural disaster or some civil unrest, something like that? What is the delivery mechanism? What is the technology actually delivering?  

[00:04:23] IF: Yeah, sure. You gave one example yourself right there, which is a natural disaster, in the event of a flood or a landslide that has taken out the infrastructure by which people would normally be supplied with the food, the water, the medical supplies that they need. It's people who've been cut off, or it's areas where the infrastructure doesn't support the delivery of those sorts of life-saving supplies. The systems that we're developing at the moment aim to fill that need, aim to get those deliveries to the people that need them, but at the same time, keep human beings as much as possible out of harm's way. That's where the autonomous aspect comes in. The reason we want it to be an autonomous system is that you're not sending humans into a dangerous area at the same time as trying to provide for people who are already in danger.

[00:05:12] RS: Got it. The “why air is best” makes sense, because in these situations, we've all seen the news footage: this highway overpass has collapsed, right? Or this road has been flooded over. Great. The trucks are not delivering the medicine, right? Or the hospital beds or what have you. That explains why air is best. Then can you speak a little more on, and it's maybe a silly question to ask on an AI podcast, why is it better to not have pilots?

[00:05:39] IF: Yeah. It's a perfectly valid question. To some extent, it's easier to put a human inside this thing and let the human work out everything that needs to happen, right? It's a big challenge to make something autonomous. But the environments that we're moving into are dangerous by definition, in the event of a conflict or a natural disaster. These are uncertain conditions. There could be long distances that you're potentially traveling over. You could be the first into an area that has changed beyond all recognition over the last 24 hours. Everything that you knew about it could now be different.

As a result, humans are great. We're incredible beings, and we can do amazing things, but when we're in the process of trying to protect human life on the ground, you don't want to put someone in danger in order to be able to do that, if you can avoid it. Taking the pilot out of harm's way and putting them somewhere else, somewhere safe, means they can operate this vehicle from somewhere that's perfectly safe, and you're minimizing the risk of the system that you're deploying. That goes for logistics, resupply, delivering that humanitarian aid in the situations that we're talking about there.

It also goes for other use cases. You could talk about aerial crop spraying, for example. That's a pretty dangerous job. Agricultural aircraft operations have a pretty poor accident record. Over the last five years, they've killed on average something like nine pilots a year in the US alone. That's an area, again, where if you can take that person out of harm's way and have an autonomous vehicle in there instead, then you're protecting life.

[00:07:08] RS: Even in the case of those pilots flying over a farm delivering pesticides, or what have you, that's probably a safer scenario, flying over a giant open field, than the ones you're speaking about, right? So if it's dangerous for those pilots, it's even more so flying into the aftermath of a hurricane or perhaps even a warzone. It makes all the sense in the world when you explain it that way: the opportunity to take the danger to human lives out of the equation, at least in the pilot capacity. Now, we've spoken a decent amount on the show before about some of the challenges associated with automated driving. I would love to hear you rattle off some challenges of putting that in the air. What are the unique challenges for an aerial automated vehicle?

[00:07:50] IF: I mean, a lot of the challenges are very similar. You need to plan a mission or a route. You need to understand where you're going. You need to monitor the environment around you, assess how that's changing, and react to those changes. If you deviate from your plan, you need to manage that. You need to stay safe. All of those things are common, but there are obviously some pretty key differences. Simplistically, when you're driving a car, you're on the ground. You're working in 2D: forwards, backwards, side to side. When you're talking about an aerial vehicle, there's a third dimension to that, the vertical dimension. That potentially gives you a lot more to look at and to keep track of. There's a lot more to look around you and try and spot.

At the same time, it gives you another direction that you can move in to take avoiding action from something, obviously depending on what the performance and the capability of your platform is. You can potentially move up or down in order to take avoiding action. It can space your obstacles out more. They can be further apart, but at the same time, they could be faster moving and coming from any direction. The perception challenge is broadened all around the vehicle. In terms of vehicle movement, stopping isn't really an option. On a ground vehicle, the vehicle can come to a stop if something goes wrong. With an aerial vehicle, you need to keep operating safely until you reach the ground in a safe manner.

That's an area where some aircraft can struggle, something like an eVTOL. It can potentially struggle with that if thrust is lost; that can become a real challenge to try and manage. The platform that we're developing at Animal Dynamics, the Stork platform, which is our aerial logistics vehicle, is based on a parafoil wing. It can carry quite a high payload, 135 kilos, over long distances, up to 400 kilometers. Because it has that powerful wing, it's capable of gliding unpowered for quite long distances. In the event of, say, a total loss of thrust, you've still got control over the air vehicle. You can still guide it safely down to the ground and make a fairly normal landing in that instance, which some aircraft can't necessarily cope with.

[00:10:00] RS: It’s a crucial difference between one's normal conception of a winged aircraft versus a helicopter, or what a drone basically is. A friend of mine is a helicopter mechanic. He mentioned that if something goes wrong in an airplane, you can glide it down, but your helicopter has fallen right out of the sky. With the craft you're developing, there's that added element of safety. Now, is all that open airspace in that third dimension an advantage, just because there's more space that your craft can move in? Or is it a limitation, because there aren't things like stop signs and stoplights for computer vision to hone in on?

[00:10:32] IF: To some extent, it would appear like it's an advantage, because it's a nice big open area, and it looks like it's not regulated in terms of what direction you can go in, but airspace is segregated. There are areas, there are keep-out zones, that you have to avoid, depending on what type of aircraft you are operating. There's segregation by altitude as well. There are altitudes you can't necessarily go above or below, depending on exactly what ground conditions you're flying over, whether you're near an airport, whether you're near a built-up area, what your use case allows you to do, and what the regulatory framework has signed off. It looks like it's completely open, like you can go wherever you want, but actually, it's well defined as to where you can and can't fly. There are still rules to follow.

[00:11:18] RS: Right. Is that like a specific border that you can train the technology not to cross? It does feel like it'd be a little more challenging than saying, for example, this car should not drive into this wall or this fence, right?

[00:11:30] IF: It's definitely something that you need to add to your mission planning layer. It's definitely a set of data that you need to bring in when you're planning your route: the no-fly areas, the areas that you can't go inside, the altitudes you can't go above or below. Yeah. That all needs to be part of your mission planning, and therefore, in the event of any emergency condition, you need to make sure you still don't conflict with any of those particular limitations.
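To make this concrete, here is a minimal sketch of the kind of constraint check a mission-planning layer might run over each waypoint. It is purely illustrative, not Animal Dynamics' code: the zone coordinates and altitude band are invented, and it assumes the shapely geometry library.

```python
# Illustrative sketch only -- not Animal Dynamics' code.
# Checks each planned waypoint against no-fly polygons and an
# altitude band, as described in the mission-planning discussion.
from shapely.geometry import Point, Polygon

# Hypothetical keep-out zone (lon/lat) and altitude limits in metres.
NO_FLY_ZONES = [Polygon([(-1.30, 51.75), (-1.20, 51.75),
                         (-1.20, 51.80), (-1.30, 51.80)])]
MIN_ALT_M, MAX_ALT_M = 150.0, 1500.0

def waypoint_is_legal(lon, lat, alt_m):
    """Return True if the waypoint violates no airspace constraint."""
    if not (MIN_ALT_M <= alt_m <= MAX_ALT_M):
        return False
    point = Point(lon, lat)
    return not any(zone.contains(point) for zone in NO_FLY_ZONES)

# A route is only flyable if every waypoint passes the check;
# the same check can be re-run on any emergency re-route.
route = [(-1.25, 51.70, 400.0), (-1.15, 51.78, 600.0)]
print(all(waypoint_is_legal(*wp) for wp in route))
```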

[00:11:55] RS: There is no as-the-crow-flies route for this, right? At the same time, there's also no Google Maps or Waze for you to plug into and say, “Okay, take this route.” Right? It would seem like it would have to be bespoke.

[00:12:07] IF: Yeah, absolutely. It is a bespoke route planning solution that you need to put in place. We're trying to put in the base level of functionality, which allows you to avoid all of the no-fly zones that we've just been talking about and takes you on the most efficient and effective route. Then there may be customer-specific aspects, other features they want to add in. They might want something to be as fast as possible, or as efficient as possible, or all the things that Google Maps offers you. We do want to reflect some of those things, but it's starting from scratch. It's not something that's out there and available. It's something that we need to create.
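Since the planner has to be built from scratch, here is the textbook starting point such planners often grow from: A* search over a grid in which no-fly cells are simply impassable. Again, an illustration only, not the Stork planner.

```python
# Textbook A* over a grid; no-fly cells marked 1 are impassable.
# Purely illustrative -- not the Stork planner.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # goal unreachable without entering a no-fly cell

grid = [[0, 0, 0],
        [1, 1, 0],  # a "no-fly" band the route must go around
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

A real planner would swap the unit step cost for fuel, wind, and altitude-dependent costs, which is where the customer-specific preferences Ian mentions would plug in.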

[00:12:44] RS: Right. You mentioned a moment ago how a car can abruptly stop if it needs to. An aerial vehicle cannot, right? It needs to come to a safe landing. Is it harder to park a car autonomously or land a plane autonomously?

[00:13:01] IF: They're both significant challenges. That's definitely true. When you're parking a car, there are other road users. There are pedestrians, potentially. There are definitely things that you need to keep track of. Then there's the dynamics of the vehicle itself: the control of the vehicle, the control of the powertrain and the steering, and all the other things that go into making that maneuver. But you'd be a bit less worried about wind gusts, for example, when you were parking your car than you would be when you're trying to land an air vehicle autonomously. There are probably a few more disturbance inputs into that maneuver when you're landing an air vehicle. Also, there's a third degree of freedom that you have to worry about. I wouldn't say it's necessarily an order of magnitude more difficult, but I think it's probably more difficult to land an air vehicle. Yeah.

[00:13:44] RS: Ian, isn't every input technically a disturbance input?

[00:13:47] IF: Yes. Yeah. I mean, there's the deliberate inputs that the vehicle is putting in, but then there's the external disturbance inputs. Yeah.  

[00:13:53] RS: Things like wind, air pressure.  

[00:13:55] IF: Yes.  

[00:13:55] RS: Other aerial bodies.  

[00:13:58] IF: Yes. All the things that the perception system is perceiving around the vehicle that it needs to take account of, and that it needs the control system to be able to take account of.

[00:14:06] RS: I'm personally constantly disturbed by inputs, so I understand that compulsion on the part of these drones. But could you speak a little more about the automated landing? Are these vehicles detecting runways, or how are they managing to land themselves?

IF: The vehicle that we're developing, the Stork vehicle, is intended to not need a huge amount of ground-based infrastructure, because of the sorts of use cases that we've talked about, the humanitarian aid delivery and those sorts of operations. These aren't necessarily areas where you're going to have a pre-prepared runway to go and land on. It needs to be able to deliver, or to land, in as many places as possible.

We're trying to land somewhere that's relatively unprepared, as much as possible. In terms of making that landing, the end goal is to have a system where you can point it where you want it to go, click somewhere on a map, and the system will fly there, get to that position, and assess whether or not that's somewhere it can land. If it can, then it'll go ahead and land there. It'll pick out all the features that it needs to in order to make sure it lands safely and brings itself to a stop. That's a process that we need to build up to. We need to start off with a proportion of that capability and build towards it. The initial rollout of that capability is something short of that, where we direct it to a landing point, and when it gets to that landing point, maybe it takes an image of it, maybe it sends that back to base and says, “Is this okay?

It looks like it's okay to me, do you agree that this is still an okay place for me to land?” Then we take that learning, and we build up from there. As we improve our perception systems' recognition of features on the ground, whether that's through a LiDAR system that's picking up particular features, or through optical sensors, cameras using image detection and recognition of various different objects on the ground, we build up that capability. We're using those different sensor inputs to assess whether or not that landing site is still appropriate, whether or not that's somewhere that we can put down.
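What Ian describes here, a vehicle proposing a landing site and a human confirming it, can be sketched as a simple propose-and-confirm flow. The sketch below is purely hypothetical: every function name and threshold is invented for illustration, and none of it is Animal Dynamics' code.

```python
# Hypothetical sketch of the "propose, then confirm" landing flow:
# the vehicle assesses a site onboard, then asks an operator to
# confirm before committing. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class SiteAssessment:
    site_id: str
    max_slope_deg: float
    obstacles_found: int

def vehicle_assess(site_id: str) -> SiteAssessment:
    # Stand-in for onboard perception (LiDAR / camera analysis).
    return SiteAssessment(site_id, max_slope_deg=3.2, obstacles_found=0)

def onboard_opinion(a: SiteAssessment) -> bool:
    # "It looks like it's okay to me" -- simple onboard thresholds.
    return a.max_slope_deg < 5.0 and a.obstacles_found == 0

def operator_confirms(a: SiteAssessment) -> bool:
    # Stand-in for downlinking an image to base for human sign-off.
    return True  # imagine a human reviewing the image here

assessment = vehicle_assess("LZ-7")
if onboard_opinion(assessment) and operator_confirms(assessment):
    print(f"Cleared to land at {assessment.site_id}")
else:
    print("Diverting to alternate site")
```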

[00:16:06] RS: As far as I know, there's no Google CAPTCHA for “click on all the tiles that have safe landing zones.” Where is the data coming from? How are you going to train the technology to notice the safe landing zone?

[00:16:17] IF: Data for autonomous aircraft is definitely a big challenge, as you might imagine. For landing sites, there are sets of satellite data publicly available that cover the whole planet. Some of those datasets are already segmented, divided up into different categories of land use. We can create, and have created, machine learning algorithms that will take those datasets and segment them for ourselves, which allows us to do route planning and to select landing sites from satellite data. You obviously need to overlay other sets of data and fuse those together in order to get the full picture: ground elevation data, for example, to assess gradients, land use data, no-fly zones, and things like that, which allow us to plan the route, but also to then select the landing site.
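Here is a minimal sketch of the layer fusion Ian describes, with toy numpy arrays standing in for a land-use segmentation output, a slope raster derived from elevation data, and a no-fly mask. The class IDs and thresholds are invented for illustration.

```python
# Illustrative fusion of raster layers into a landing-suitability
# mask: land-use segmentation overlaid with gradient and no-fly
# data. All layers here are toy 3x3 arrays.
import numpy as np

GRASS = 1  # hypothetical class ID from a segmentation model

land_use = np.array([[GRASS, GRASS, 0],
                     [GRASS, 2,     0],
                     [GRASS, GRASS, GRASS]])
slope_deg = np.array([[2.0, 1.5,  9.0],
                      [3.0, 2.0,  8.0],
                      [1.0, 12.0, 2.5]])
no_fly = np.array([[0, 0, 1],
                   [0, 0, 1],
                   [0, 0, 0]], dtype=bool)

# A cell is a candidate landing site only if every layer agrees.
suitable = (land_use == GRASS) & (slope_deg < 5.0) & ~no_fly
print(np.argwhere(suitable))  # (row, col) of candidate cells
```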

It's an even bigger challenge than that, because ultimately, the real world is changing. There might be datasets out there, but at a very basic level, the northern hemisphere in winter looks very different to how it looks in summer in that satellite data. The algorithms that we're training on that data need to understand how those landscapes change as the seasons change. Then again, to go back to the use cases we've talked about, in the event of a natural disaster, what you're looking for in terms of a landing site might not be there anymore, because there's been a flood, or there's been a landslide, or something similar. The ground picture can change wildly. The mission planning that you do based on readily available, pre-prepared data gets you so far. It gets you a planned mission, but you still have to assess the ground that you're flying over as you're progressing, as the mission unfolds.

We can use test vehicles to gather data in flight. We can fit a combination of sensor types to test vehicles, radar, LiDAR, cameras, and gather that data for ourselves. Then there are also, obviously, simulation environments. We can use simulation to challenge our systems against simulated events to assess their behavior and performance. For those simulations, you need good models of the sensors that you're using and good models of the rest of the system. Then you can build up a full system model and check its response. That could be a simulation of just the perception system in isolation, or it can be an end-to-end simulation: the perception system perceiving something, then the control system initiating a response, and then the vehicle dynamically responding to avoid something, or during a landing run. You can assess the vehicle's behavior end to end.
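As a rough illustration of the end-to-end simulation idea, here is a deliberately crude closed loop in which a biased sensor model feeds a proportional controller, which feeds toy vehicle dynamics. Real sensor and dynamics models would be far richer; this only shows the loop structure.

```python
# Skeleton of a closed-loop, end-to-end simulation: sensor model ->
# controller -> vehicle dynamics, stepped over time. Every model
# here is a deliberately crude stand-in for illustration.
def sensor_model(true_altitude, bias=0.5):
    return true_altitude + bias          # imperfect perception

def controller(measured_altitude, target=100.0, gain=0.1):
    return gain * (target - measured_altitude)  # climb/descend command

def dynamics(altitude, command, dt=1.0):
    return altitude + command * dt       # toy vehicle response

altitude = 140.0
for step in range(60):                   # one simulated minute
    measured = sensor_model(altitude)
    command = controller(measured)
    altitude = dynamics(altitude, command)
print(f"altitude after run: {altitude:.1f} m")  # settles near target
```

Note that the sensor bias pulls the steady state slightly off target, which is exactly the kind of whole-system effect an end-to-end simulation exposes and an isolated perception test would not.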

Fundamentally, just to touch on the point that I mentioned before, it's about a staged rollout of capability. We're doing that in controlled environments. We're not aiming to just jump straight to full autonomy from day one. We need to take a staged approach, building the capability, building the knowledge that we have, and ensuring that we steadily expand our operating envelope. We start with a proportion of the capability. We operate safely within a controlled environment. We gather data the whole time that we're doing that. Then as we prove out more aspects of the system performance, we can grow that envelope, expand the operating environment that we're able to work in, and then prove out the next level.

That means that we can actually get these systems out there and working in the real world in controlled use cases, and actually start to make a difference to people's lives. We can start to make a difference out there with our products before the full capability is available. We can still deliver something that's going to make a difference to people, even if we've only got a proportion of that capability. But the key to that staged rollout is keeping humans in the loop at the right level as we progress.

[00:20:03] RS: I'm glad you mentioned human in the loop, because I wanted to make sure we spoke about that. Where does the human in the loop involvement make the most sense? Then how do you decide where to implement it?  

[00:20:14] IF: As we're developing the Stork aerial vehicle, this is all part of the systems engineering process. The various different actors who interact with the air vehicle are all part of the overall system. They all provide inputs. They take outputs. They give and receive information, just like any other part of the system. The process of designing the system looks at the flow of data, or the flow of that information, around the system. You need to understand what the limitations of each part are. Where is it best to get the information from? Can we rely on that sensor at this moment? Or can we rely on that actor at this moment? Where is our redundancy? What's the latency of the signal that we're receiving?

While you're developing the system and understanding the system like that, you can choose where you'll need human input, because we don't quite trust the signal that we're getting from that sensor yet, or where you need a human at least to oversee the input that's coming from that sensor and provide a check on the information that's coming from that sensor or that system. That's all modeled during the design process. It's revisited as the system is changed and upgraded. You make sure you do the right safety analysis to make sure that we're safe to operate.

The design process tells us where we'll need human oversight for a decision. At the earliest stages of development, those instances are going to be more common. They're going to appear in more areas, because you're still in the process of building confidence with the different parts of the system that are providing the information and making the decision. The modeling also tells us whether there's going to be enough time for that human to actually make that intervention. That can be a risk. For example, with the early implementations of autonomous driving, that's something that we do see in those sorts of situations.

The system is working fine, the vehicle has control, but at the point where the car decides it doesn't know what to do and hands back control, the human is supposed to then take over. But if it were something really straightforward, the car would probably know how to handle it, right? The car would probably already be able to deal with it. There's a good chance that when it needs to hand back to the human, it's actually something quite complex that it doesn't know how to handle. That might require 100% of the driver's concentration. The driver goes from 5% to 100% in a split second, and that's a pretty tricky shift to make.

When you're designing for humans to be in the loop, we need to make sure that the warnings, or the requests for information or input, come in plenty of time to allow the human, just like any other processor, to process that information, figure out what response they need to give, and then give that response.
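The timing analysis Ian mentions can be thought of as a simple latency budget: the warning, the human's reaction, and the corrective action must all fit before the event. A back-of-envelope sketch, with all numbers invented for illustration:

```python
# Back-of-envelope check of whether a human has time to intervene,
# in the spirit of the timing modeling described above. All the
# default latencies are invented illustration values.
def human_handoff_feasible(time_to_event_s,
                           alert_latency_s=0.5,
                           human_reaction_s=2.5,
                           action_execution_s=1.0):
    """True if warning + reaction + action fit before the event."""
    budget = alert_latency_s + human_reaction_s + action_execution_s
    return time_to_event_s > budget

print(human_handoff_feasible(10.0))  # comfortable margin -> True
print(human_handoff_feasible(2.0))   # too late to hand off -> False
```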

[00:22:58] RS: If you have a human-in-the-loop process whereby the human goes from sitting on the bench cold to sprinting onto the field and making a split-second intervention, then you don't really have autonomy, because they would have needed to be paying 100% attention that whole time to successfully make the intervention. Correct? That's interesting. As you said, it was an early problem with autonomous driving: at what point do you give control back to the human? And you set them up to fail a little bit if it's like, okay, you have to be this involved the entire time to make this judgment. Is the goal then for the human to be continually less in the loop? As you've talked about this phased rolling out of the autonomy of your tech, obviously, over time, less and less human involvement is necessary, correct?

[00:23:42] IF: Yeah. That's definitely the aim. Ultimately, the desire is that the systems basically look after themselves, and that humans are only involved in telling the thing where to go, and then the rest is delivered autonomously. As I say, the important thing for us is to get out there and start making a difference to people. So we need to find a pragmatic and safe way of doing that. To begin with, keeping humans in that loop, and probably quite a lot, is inevitably the way of doing it. But yes, the aim is ultimately to ramp that down and ramp up the autonomy of the vehicle: take more decisions and do more assessment of the signals automatically on the vehicle, and take the human decisions out.

[00:24:26] RS: Could you give us some examples of where precisely in the loop the humans are at this moment?  

[00:24:31] IF: We're still in the development phase at the moment with the vehicle, so humans are quite heavily in the loop right now. We're still looking to double-check, effectively, the decisions that the vehicle is making. The vehicle is typically making recommendations at the moment and then getting sign-off from the human to say, “Yes, you've made the right decision. Go ahead.” We're trying to build in the ability of the system to make an assessment from the start, but the human will have oversight, essentially, and confirm that that's the right way to go for quite a few decisions. There'll be a whole bunch of flight control stuff that the vehicle looks after for itself. Key things, like coming in to land and making a landing site assessment, are going to be checked to begin with, probably, just to make sure, because that's where you're entering into the areas of highest risk.

[00:25:19] RS: Not dissimilar from commercial aircraft, by the way, right? I understand that the humans are really involved in takeoff and landing, and then the plane flies itself; once you reach cruising altitude, it's doing a lot on its own. Is that accurate?

[00:25:32] IF: That's how I understand it. Yeah. As I say, in terms of aerospace, my experience is largely around Animal Dynamics, but that's what I understand too. It's keep the humans in the bits that are of highest risk, and then allow the system to take over when appropriate.

[00:25:47] RS: Before I let you go, Ian, I want you to indulge me and take this technology to its utopian sci-fi climax. What do you think is the long-term opportunity for this automated disaster response, with regard to Animal Dynamics, but then also in terms of aerial autonomy writ large?

[00:26:07] IF: In terms of the future, it's probably a question of scale. It's a question of the number of situations and areas that this technology can be deployed in, and the reliability of it delivering in those areas with progressively less human involvement. So it's scaling up from a single vehicle operating on its own to multiple vehicles operating in a coordinated fashion, working together and swarming effectively, working cooperatively to concentrate their effort in the areas where it's most needed. There are aspects of bio-inspiration that can be taken in those areas too, in terms of the control of the vehicles, in terms of the control of fleets of vehicles.

We can look at insects swarming and the way that they combine in order to maximize their impact, and things like that. It's probably controlling multiple vehicles, fleets of vehicles, swarms of vehicles, that is going to be the direction this is going. Once we've got the unit behavior, once we have one of these systems working correctly, you can then connect multiples of them. Then the impact of them can be multiplied, and the problems that they can solve can be multiplied in many different areas.
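The insect-swarming inspiration Ian mentions is often illustrated with a boids-style model, where simple cohesion and separation rules produce coordinated group motion. A minimal sketch with arbitrary gains and distances, not Animal Dynamics' control law:

```python
# Minimal boids-style update (cohesion + separation), the classic
# bio-inspired model behind swarming behaviour. Gains and distances
# are arbitrary illustration values.
import numpy as np

def swarm_step(pos, vel, dt=0.1, coh=0.05, sep=0.2, sep_dist=1.0):
    centre = pos.mean(axis=0)
    for i in range(len(pos)):
        vel[i] += coh * (centre - pos[i])          # steer toward group
        for j in range(len(pos)):
            if i != j:
                offset = pos[i] - pos[j]
                d = np.linalg.norm(offset)
                if 0 < d < sep_dist:
                    vel[i] += sep * offset / d     # avoid collisions
    return pos + vel * dt, vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(5, 2))   # five vehicles in 2D
vel = np.zeros((5, 2))
for _ in range(100):
    pos, vel = swarm_step(pos, vel)
print(pos.round(2))
```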

[00:27:23] RS: Got it. Well, Ian, this has been a fantastic discussion. I'm fascinated by the tech you're developing, and the mission is really honorable as well. Not merely trying to scrape civilians away from United Airlines; you're also trying to deliver help to people who really need it. Congratulations on what you've accomplished so far. I'm just so pleased that you agreed to meet with me. This has been a fantastic conversation. Thank you for being here today, Ian.

[00:27:45] IF: No problem. Thanks very much.

[OUTRO]

[00:27:47] RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, MedTech, robotics, and agriculture. For more information, head to sama.com.

[END]