How AI Happens

Building Responsible AI with Mieke de Ketelaere

Episode Summary

Mieke is an Adjunct Professor for Sustainable Ethical and Trustworthy AI at Vlerick Business School. During this episode, Mieke shares her thoughts on how we can go about building responsible AI systems so that the world can experience the full range of benefits of AI.

Episode Notes

 The gap between those creating AI systems and those using the systems is growing. After 27 years on the other side of technology, Mieke decided that it was time to do something about the issues that she was seeing in the AI space. Today she is an Adjunct Professor for Sustainable Ethical and Trustworthy AI at Vlerick Business School, and during this episode, Mieke shares her thoughts on how we can go about building responsible AI systems so that the world can experience the full range of benefits of AI.

Key Points From This Episode:

Tweetables:

“The compute power had changed, and the volumes of data had changed, but the [AI] principles hadn't changed that much. Only some really important points never made the translation.” — @miekedk [0:02:03]

“[AI systems] don't automatically adapt themselves. You need to have your processes in place in order to make sure that the systems adapt to the changing context.” — @miekedk [0:04:06]

“AI systems are starting to be included into operational processes in companies, but only from the profit side, not understanding that they might have a negative impact on people especially when they start to make automated decisions.” — @miekedk [0:04:52]

“Let's move out of our silos and sit together in a multidisciplinary debate to discuss the systems we're going to create.” — @miekedk [0:07:52]

Links Mentioned in Today’s Episode:

Mieke de Ketelaere

Mieke's Books

The European AI Act

Sama

Episode Transcription

EPISODE 47

[00:00:03] RS: Welcome to How AI Happens, a podcast where experts explain their work at the cutting-edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson. We're about to learn How AI Happens.

[00:00:30] RS: Here with me today on How AI Happens is the Adjunct Professor for Sustainable Ethical and Trustworthy AI over at Vlerick Business School. She's also a repeat author on all things AI and machine learning, Mieke de Ketelaere. Mieke, welcome to the podcast. How are you today?

[00:00:45] MDK: I'm good. Thanks for having me.

[00:00:47] RS: So pleased to have you. Million directions we can go here in this conversation. I guess before we do any of that, would you mind sharing a little bit about your background, your journey in this space to set a little context for the folks at home?

[00:00:59] MDK: Sure. Well, I'm an engineer by education. I studied robotics and AI, and I'm really referring to the deep learning part of AI, back in '92. That says something about my age, and also about the fact that AI isn't new at all. However, at that time, it wasn't a very hot topic and there was little investment, so I jumped to the other side of technology, which was the internet startups that were forming around digitalization. I spent 27 years in that area, working for big corporates, before jumping back to my original dream of making sure that we create systems that can do things by themselves, namely AI. So that's what I've been doing for the last five years, hands on 100% in the AI space again.

[00:01:40] RS: What made you decide it was time to get back to your dream, as you put it?

[00:01:45] MDK: Well, what I noticed is that, yeah, AI became a hype, people started to talk about it in the media, companies were starting to create events with AI in the title. Initially, I just thought, well, let's have a look at what has changed since my studies until I realized not that much had changed. The compute power had changed, and the volumes of data had changed, but the principles hadn't changed that much. Only some really important points never made the translation. 

Really important points about the fact that mobile [inaudible 0:02:16] comes with a certain accuracy, that it isn't a rule-based system that always works exactly the same. Simple things like that weren't translated correctly, so people started to talk about it in a way which wasn't really the way it works. So I thought, this is dangerous, this is really dangerous. That's where I started to say, "Well, engineers need to make a little effort and explain how it exactly works, what it can do and what it can't do."

[00:02:41] RS: I guess, maybe an open-ended question, but what can it do, and what can't it do? Where are we with it right now?

[00:02:46] MDK: Okay, that's always interesting. Whenever we talk about AI, maybe we first have to say which definition of AI we're going to use today. Unfortunately, we're still in an era where there are many definitions floating around. So if we take AI as a system that has the ability to learn by itself, by looking at a huge volume of data and seeing some correlations in that data, seeing some patterns in that data, then we can say that these systems do fantastic things, they do things better than a human brain can do, but they always have a certain error margin. They come with a certain accuracy: 97% accurate, 93% accurate, 87% accurate, etc.

The fact is that, although they give a lot of advantages, because they don't need a lot of human thought in creating rules upon rules and program code, we do know that their answer might come with a certain error. So this is one of the things that never made the translation. A second thing that, for example, never made the translation is the fact that the systems work in the context on which they have been trained, based on the data they've received from that context. So if they're trained on that data, they will behave correctly in that same context, but if you change the context, or if the context changes by itself, for example, our world has moved on in the last years because of COVID, well, these systems are going to start to misbehave.

They don't automatically adapt themselves. You need to have your processes in place in order to make sure that the systems adapt to the changing context. So simple things like these, which we see in the very first chapters of a data science course, never made the translation, and that's where it gets a little bit dangerous.

[00:04:24] RS: Why does it get dangerous? 

[00:04:26] MDK: It gets dangerous, because as people do, people tend to focus on the profit side of things. It's always like that in technology. We always first look at the profit side. That was the case when we started to drive cars. We first used the car before looking into the fact that maybe we need to have some safety measures to protect ourselves and to prevent accidents from happening. So as business always looks at profit first, which is normal, because business needs to make profit, AI systems are starting to be included into operational processes in companies, but only from the profit side. Not understanding that, for example, they might have a negative impact on people, especially when they start to make automated decisions.

The common mistakes around bias only appeared and became visible after a couple of years. The fact is that technology, although it does very good things and might bring you profit, also comes at a cost, an energy cost. If I refer again to the car: initially, our car was doing fantastic things, but the consumption of our motor wasn't very much in balance with sustainability goals, so we worked on the motor. Well, the same thing with AI. AI does fantastic things, brings a lot of profit, but also comes at a certain energy cost. We need to work on the motor of AI. We need to work on energy-efficient hardware, energy-efficient algorithms, etc. These were the things that we didn't get right, because we were in this hunt for profit from a technology point of view.

[00:05:55] RS: The car comparison, I think, is very apt, because the seatbelt was a late invention, right? I think vehicles had permeated the market, and 60, 70 years later, we decided to invent the seatbelt. And even earlier than that, I'm sure traffic signs and speed limits came only after lots of people were hurt driving cars –

[00:06:15] MDK: Correct.

[00:06:15] RS: It's a really accurate comparison, I think. Though with AI, it's not quite as simple in practice as inventing the seatbelt, right? How do you approach that question of building responsible AI?

[00:06:27] MDK: Well, building responsible AI, first of all, means moving it away from a pure engineering point of view. What engineers love to do, and I'm counting myself in this because I'm an engineer, is solve problems with technology. But we don't always think the whole thing through at the design phase. We don't always look at the fact that it might have a negative impact on, for example, a certain group or a certain area, etc. So what we need to do is understand our limitations, however good we are as engineers at solving technology problems.

We are limited in understanding the impact they might have on society, on ethical implications, etc. So it is simply a matter of, at the design phase, before we switch on our systems, having a little collaboration: sitting together in a multidisciplinary debate with psychologists at the table, sociologists at the table, etc. on one side, but also with people from the legal side. Because at the moment, what you see is that when AI starts to misbehave, at that point in time when damage is done, we start to look at the legal side of it, and that's too late.

We should move everything like this, the cybersecurity part, the ethical part, even the user experience part, right to the design phase. Sit around the table with engineers, but also with all the other disciplines that can be impacted by the decisions, and discuss things through before deciding to switch on the systems. Let's move out of our silos and sit together in a multidisciplinary debate to discuss the systems we're going to create.

[00:07:59] RS: As you mentioned, the legislative piece of this will always act too slowly, right? I think we've even seen that in the last 15, 20 years with the permeation of so much technology. Is that the only way to really hold people accountable? Can we just expect people to build technology responsibly because they ought to? I fear that the incentives are misaligned.

[00:08:18] MDK: No, no, no. I absolutely agree with you. First of all, it's a complex technology. A lot of the time, algorithms are hidden within an application, so they're invisible. They aren't as visible as a car. A car accident is very visible, very understandable. Algorithms aren't. I think that's one challenge. The second thing is that it's also technology that evolves very fast. In fact, it's a scientific discipline only 70 years old, but a discipline that moves very, very fast. Even for somebody who is in there 100% of the time at the moment, it is difficult to keep up with the pace of the evolution and the innovations in that area.

Is legislation the only way to do it? No, not at all. I think there is, first of all, a very responsible part in it for the engineer: to translate to the back end what we're doing, what they're doing, why we're doing certain things. The second thing is that there's also a need for everybody around us who wants to use this technology to get to a certain level of understanding of how it works. Let me just compare it to the microwave. If we look in our homes, most people have a microwave. Do they exactly understand the physics? No, they don't. But they know how to use it correctly, what materials to use in it, etc.

To a certain level, we need to make sure that people who want to use smartwatches and smart assistants and autonomous vehicles also understand how they work, to a certain level, so they can take their own responsibility and understand the limitations of this technology. The third in line is indeed that, for those people, and there will always be people using it incorrectly, we have some frameworks within which we can work correctly. These are the ones that are still missing.

[00:09:59] RS: How would you outline that framework?

[00:10:02] MDK: Well, the framework is a very challenging one. It's also the one that Europe is struggling with, but on the other side, I'm very proud of being European, because at least we look into this. The point is that AI covers all different types of sectors. It's not like medicine, which covers the human body and everything around it. AI covers the health sector, space tech, biotech, clean tech, etc. So that's already a challenge by itself. Then additionally, it also covers a lot of different types of technologies, from knowledge graphs to deep learning technology to genetic algorithms, etc. So you have a matrix of complexity upon complexity.

How would you do this framework? By making sure that those who create the framework do this in an [inaudible 0:10:44] approach, and maybe have different versions of it for different sectors, or different versions for different types of technologies. Putting it all into one framework, like we've done here at the moment with the European AI Act, isn't, in my opinion, my personal opinion, the best way to go, because of the complexity of the matrix in which we're working. The European AI Act is just one act covering all sectors and covering all types of technologies covered by the definition of AI, and that's a very difficult thing to do.

[00:11:16] RS: Is the definition of AI they're operating on the one that you outlined at the beginning of this episode, or how are they framing the question?

[00:11:23] MDK: Well, I would really advise everybody to read the first pages of the European AI Act, to see that they are also struggling with the definition of what AI is. Should it just cover machine learning and deep learning? But then it seems to cover only the current part of our learning algorithms, being mainly supervised learning. And what about the new learning algorithms? Are we going to use transfer learning? As I mentioned, you can't blame it on Europe. I think they do a fantastic thing by making the first steps, but the technology is evolving so fast that it's a very difficult thing to put into one act that's going to be accurate at each point in time.

By the time they get the feedback from all the regions on what should be changed in the text, we are four years ahead. At that point in time, AI has already completely changed. I mean, transformer models, which everybody talks about right now, weren't covered by the AI Act, because they didn't even exist two, three years ago. That's a difficult challenge. That's not like in medicine, where you got 3,000, 4,000 years to work out the context in which practitioners or health practitioners can work. They had so many years to work on this. We've only had some years, in an area which changes year after year after year. So it's a challenge. It's a very big challenge.

In my opinion, the responsibility and accountability remain a lot with engineers, but also with businesses wanting to get profit out of it. The point I was making before is that I'm okay that you can make profit out of it, but then you need to know it yourself to a certain level, and you need to take accountability yourself as a company to understand the limitations and in which context you can work with it.

[00:12:58] RS: Yeah. The shifting nature of this tech, making it slippery to legislate, does that mean we need to just continually reintroduce new legislation or is it a language issue? Can we put up a big enough umbrella that we can write laws for things that don't yet exist?

[00:13:16] MDK: Well, that's the challenge indeed, that there isn't one. That's also why the key role in my book is something I call the AI Translator, because what you have here is, on one side, the engineers moving on very fast with a language that is not understood by other people. You know, algorithms, alphas, betas, gammas, etc. That's not the standard language we're speaking. So you need to have a middle role, a person who can translate the work that is done by an engineer to the other areas, to the other domains I was mentioning, who can make sure that people start to collaborate and discuss things properly together.

This is what is really missing at the moment. People are writing books about AI ethics, and when you read through them, you go, "Well, this is not how it works." I would almost, and I'm going to say it incorrectly, but I would almost urge that we stop a little bit with research on AI and that we first figure out together, all together, where we are at and which direction we should take. But what you see is that the digital gap on AI between those creating the systems and those understanding the systems is getting bigger and bigger and bigger.

I will literally take six weeks of holiday now myself to study. I would love to be outside, in the pool, enjoying the beach, but I see that if I don't take this time now to study the latest models and the latest algorithms and everything that's happening, I will be out of it myself when I go back in September, and this is not okay. This pace we're keeping right now on AI should be calmed down, because I think otherwise we will also be creating conflicting systems, with people not understanding each other anymore.

[00:14:52] RS: When you read some of the dialogue happening around AI ethics, and, like you mentioned, you read some of these books, you think that they're just a little divorced from reality, or perhaps from the practicality of how these tools are developed. What do you think the dialogue is missing when you say that they're a little removed from reality?

[00:15:07] MDK: Well, they're missing the real understanding of how the systems work. You see a lot of focus on bias at the moment. To be very honest, bias in AI has been tackled through research; there are already a lot of insights, approaches, and methodologies on how to make sure that you remove as much bias as possible from your systems. You can't make your systems bias free, but you can take the right actions.

For me, there are many other issues next to bias. There's also the use of deep fakes in areas where we didn't expect them. We talk about deep fakes in news, we talk about deep fakes in videos, but to be honest, in Europe at least, deep fakes appear, for example, at car insurance companies. Deep fakes appear in medical contexts, and recently we saw a case of deep fakes being incorrectly used in e-commerce. These points haven't been tackled by AI ethics, but that's also ethics; it's business ethics. Bias is towards the human person; this is towards business. So what you see there is that the people writing the ethical guidelines are unaware of this, because it's not visible.

Also here, for me, the AI Translator has enormous work to do in order to make sure that people understand what can happen if you share your systems in a free, open environment, because that's what's happening. That's why China, for example, or even Google, is now putting their deep fakes behind APIs, because they understand the limitations very well, but this is only just starting right now.

[00:16:37] RS: When you say AI Translator, do you mean an AI apologist or some kind of a marketing role? What is the role of the AI Translator?

[00:16:46] MDK: It's definitely not a marketing role. For me, it's more like an engineer who has an ethical foundation in his human behavior, understanding that profit is the easy way out. I could choose for myself just to go for the profit; I would just not worry about the back end not following. However, I think there are many engineers who also think about the impact we have on global society. Basically, the AI Translator comes more from them, as we understand, to a certain level, the systems we are creating. But we also understand the role that we have in society.

We make sure that we can collaborate, we find the right language, the right discussions, and the humbleness to make sure that those we need to help us create systems that are going to be accepted and have a place in our society are integrated. So that's more the point. It's not a marketing term, and it's not a fluffy term; it's really making sure that what it does, what it doesn't do, where we get it wrong, where there are potential risks, that these things get translated correctly.

[00:17:46] RS: It's a technical individual who perhaps serves as a moral compass?

[00:17:51] MDK: Yeah. A personal moral compass, who gets more energy from this than from making profits.

[00:17:55] RS: Right, right. Would this be someone that you would hire at your AI-developing company to oversee things, like an AI Czar situation?

[00:18:05] MDK: Yeah, exactly. Since my book, there's actually a first course here in Belgium also called the AI Translator. It's a postgraduate course, where they zoom in to a certain level of detail on the different topics that an AI Translator should know. But you also see the first job descriptions appearing which are called AI Translator, and which basically link, in that context, the business and the engineers, because that's where the disconnect is happening most of the time.

[00:18:31] RS: Could you share a little bit about the course load for that individual?

[00:18:35] MDK: Yeah, absolutely. What they basically get is the AI canvas, and they actually have to create one use case. But they don't just do it from a technical point of view, saying which data we need and which data we're going to use; they also get information about: how do you handle liability? How do you look at the fairness of the decisions made by the use case that you have? How do you look at the GDPR aspects of the data that you're going to be using?

All the things that are linked to the use case they're proposing are going to be addressed through different evening sessions, as it's an evening course. They get people from the business talking to them, they get people from the government talking to them, they get legal people talking to them. So with the different pieces of information, at the end of the year they are able to create a use case that is fundamentally prepared by design in order to land correctly in society. Or sometimes they even decide not to go ahead with the use case, whereas in September, when they started doing the exercise, they thought it's all fine, this is an easy one. So that's very interesting to see.

[00:19:38] RS: It's definitely encouraging, because when I have these conversations about building AI responsibly, the lingering question for me is: whose job is it? The answer is, "Oh, well, it's everyone's job," which usually, practically, means it's no one's job. If engineers are too busy developing the actual product, do they have time to really raise their hand and say, what are we actually building? What are the implications here? Some will, of course, but for it to be someone's actual role, that's an investment, right? That's more of a line in the sand, like, "Hey, we're going to commit to building this in a meaningful way." So I just wanted to call that out.

[00:20:12] MDK: Yeah. That's why the AI Translator doesn't fit a typical role within a company at the moment, because you are working in silos. You either belong to the engineering department, or you belong to marketing, or you belong to the legal department, and a role like this is transversal. It should almost rely on a direct line to the CEO of the company trying to implement the AI solution or trying to create the AI-driven product. Because, and I might sound negative here, I absolutely embrace the technology, I absolutely embrace the value AI can bring to society, if it's done correctly.

If you have this role in your company, reporting to the CEO, understanding all the bridges that need to be built between the different departments, understanding the fact that if you're in one of these silos you won't be able to build these bridges, then you do a fantastically good job. It's not just a tick in a box, "I have an ethics committee," etc. No, that doesn't work. It's really somebody actively building bridges and reporting to the top, and it's the top who finally decides if you're going to go ahead or not. At the moment, what you see is that the top doesn't understand; they have heard about it on a golf course, etc.: "We have to do AI."

It passes through to the engineers, the engineers create something, they don't think about the legal aspects, they don't think about the impact on ethical points. Then it's only when a system gets operational and the brand gets damaged because of a certain incident that it gets to the CEO level. So what I say is just turn it around: start at CEO level, put this AI Translator next to the CEO, and the only thing this person does is make sure the bridges are built and that what we're creating is done in a fully transparent, ethical way. I think that's the best way forward.

[00:21:54] RS: That is a fantastic way to put it, Mieke. Also I don't think we're going to find a better bookend to this conversation than that. I really could keep going with you for hours, but we have books to write and podcasts to edit and all these other things. So Mieke, at this point, I would just say, thank you so much for being here with me. I've loved this discussion with you today.

[00:22:09] MDK: Okay. Well, it's my pleasure. Thank you very much.

[00:22:12] RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, MedTech, robotics and agriculture. For more information, head to sama.com.