How AI Happens

PwC UK's AI for Good Lead Maria Luciana Axente

Episode Summary

Ethics in AI is considered vital to the healthy development of all AI technologies, but this is easier said than done. In this episode of How AI Happens, we speak to Maria Luciana Axente to help us unpack this essential topic. Maria is a seasoned AI policy expert, public speaker, and executive with a respected track record of working with companies whose foundation is in technology. She combines her love for technology with her passion for creating positive change to help companies build and deploy responsible AI.

Episode Notes

Ethics in AI is considered vital to the healthy development of all AI technologies, but this is easier said than done. In this episode of How AI Happens, we speak to Maria Luciana Axente to help us unpack this essential topic. Maria is a seasoned AI policy expert, public speaker, and executive with a respected track record of working with companies whose foundation is in technology. She combines her love for technology with her passion for creating positive change to help companies build and deploy responsible AI. Maria works at PwC, where her work focuses on the operationalization of AI and data across the firm. She also plays a vital role in advising government, regulators, policymakers, civil society, and research institutions on ethically aligned AI public policy. In our conversation, we talk about the importance of building responsible and ethical AI while leveraging technology to build a better society. We learn why companies need to create a culture of ethics for building AI, what types of values encompass responsible technology, the role of diversity and inclusion, the challenges that companies face, and whose responsibility it is. We also learn about some basic steps your organization can take and hear about helpful resources available to guide companies and developers through the process.

Key Points From This Episode:

Tweetables:

“How we have proceeded so far, via Silicon Valley, 'move fast and break things.' It has to stop because we are in a time when if we continue in the same way, we're going to generate more negative impacts than positive impacts.” — @maria_axente [0:10:19]

“You need to build a culture that goes above and beyond technology itself.” — @maria_axente [0:12:05]

“Values are contextual driven. So, each organization will have their own set of values. When I say organization, I mean both those who build AI and those who use AI.” — @maria_axente [0:16:39]

“You have to be able to create a culture of a dialogue where every opinion is being listened to, and not just being listened to, but is being considered.” — @maria_axente [0:29:34]

“AI doesn't have a technical problem. AI has a human problem.” — @maria_axente [0:32:34]

Links Mentioned in Today’s Episode:

Maria Luciana Axente on LinkedIn

Maria Luciana Axente on Twitter

PwC UK

PwC responsible AI toolkit

Sama

Episode Transcription

EPISODE 48

[00:00:04] RS: Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn How AI Happens.

[00:00:31] RS: Here with me today on How AI Happens is Tech UK’s chair of data analytics and AI leadership, as well as an advisory board member for the All-Party Parliamentary Group on Artificial Intelligence and the Responsible AI and AI for Good Lead at PwC UK, Maria Luciana Axente. Maria, welcome to the podcast. How are you today?

[00:00:52] MLA: I'm good. Thank you very much for having me, Rob. Greetings from London.

[00:00:56] RS: Yeah, a boiling hot London, it would seem, too. Are you managing to stay cool?

[00:01:00] MLA: Yes. Actually, I escaped to the seaside, and sometimes I forget that we are actually on an island and we have access to the beach, close access to the beach. So yes, I was on the beach for a few days, so I didn't feel the heat. By the time I came back to London, the weather was much cooler, and now we're back to the usual grey-sky, 23-degree temperatures type of English summer, which we like so much.

[00:01:27] RS: Yeah, of course. Oh, great. Glad to hear you were able to beat the heat a little bit, and there's loads to talk to you about, Maria. First of all, though, you have so many different roles, really. You are sort of involving yourself in a handful of different AI organizations. So, could I hear it from you, just how would you characterize your roles across these organizations? What is it you're most focused on right now?

[00:01:50] MLA: Let me start with how I got here in the first place, because describing my journey will make sense of why I feel I have a responsibility to use my voice for driving higher awareness and engagement when it comes to ethical technology, not just AI. So I'm Romanian. I was born and raised in Transylvania, and no, I'm not a vampire, as you can see. I'm in the light and I'm not catching fire. And when I was about 14, I decided that I would like to be a fighter jet pilot. I think I'd seen Top Gun and I was utterly fascinated with planes.

At the time, in the mid-90s in Romania, girls were forbidden to pursue this career. I was so annoyed that, rather than stay and fight, I decided to find the next most outrageous thing I could do. Hence, I went and studied computer science. So, very early on in my life, I learned how to code and how the internet works and how various elements of the digital world come together, only to understand later in life that it was something of a longstanding friendship that was forming. I didn't understand at the time how useful it would be for me to study computer science so early on. But as I progressed in my career, job after job, interacting with technology, either building it or implementing it or educating around it, was the theme of my career.

In parallel with that, I decided that obviously, I am well suited to the business world. So, besides studying business in Bucharest, I ventured into the world of setting up businesses, transforming businesses, setting up my own consultancy, all sorts of ventures. But in all this myriad of different jobs, working in music production, with TV and actors, but also retailers and not-for-profits, there were two elements that were constantly there. One is my interaction with technology. And secondly, my quest for meaning.

I knew early on in my career that I would like to find a job where I would find meaning and purpose. At times, I was so torn by this quest for meaning that I really wanted to leave my job, which was in a for-profit, in order to pursue a not-for-profit and achieve that quest for meaning. Further down the line, when I decided that I didn't have enough challenges, I moved to London and pursued an MBA program. As part of my MBA program, I focused on business ethics, corporate governance, and strategy, which gave me a little bit of a glimpse into how to understand organizational values and how to align all those organizational values with everything a company does, including technology. And then I joined a technology consultancy, which, step by step, led me to a point where the big question was, “How do we build and use technology responsibly?” That question helped me shape every single project I was involved in. It just happened; it wasn't my main job. It was on the sidelines. It was a question that I always had with my clients, for my clients.

But in 2017, I got an interesting invitation to join a newly formed Center of Excellence on AI being established at PwC. At the time, the invitation sounded like, “Hey, this is a cool venture we're doing. We don't know much about the roles or what sorts of team members we need. Would you like to join?” And I said, “Aha, sounds like an adventure.” Artificial intelligence, that was really intriguing. And that's how I joined the Center of Excellence. I was lucky enough, and I will be forever grateful to my boss, because he gave me the freedom to go and explore different nuances and facets of AI beyond the technical layer, and to be able to say, “If we have this unique technology with unique characteristics, there is surely a lot of transformation and disruption this technology will bring, and certainly impacts and implications we need to account for when developing it. Hence, we need to build it differently. Hence, we need to have a different narrative.” And that's where the journey towards responsible AI started for me.

One thing led to another. I joined forces with colleagues from around the world. We ended up building a responsible AI toolkit that would, in fact, bring to life our philosophy of responsible AI and how we build technology that is aligned with a set of values. In this process, I found myself fulfilled because I had a job that allowed me to have purpose. And therefore, I said, “If this job allows me to make a difference in the world, then I should give my attention and time to other initiatives that will benefit from the experience and the knowledge we are developing as a team in PwC.”

So, I started having conversations outside PwC, with academia, with policymakers, with regulators, and one thing led to another because I'm passionate about my job. I'm also curious, so I volunteered myself into different working groups. From this, I ended up being formally invited to be part of various initiatives that now allow me to have a stronger voice, a mandate, and authority to be able to take this message to more and more people and inspire others to get onto this journey.

[00:07:49] RS: I'm pleased to hear you have found something where you are able to use your powers for good. That's somewhat common in the folks I speak with. AI and data science and machine learning as a skill set is very in demand. But it doesn't strike me that most people get into the space just because it's a high-paying job, right? Or it's an in-demand job. In your experience, are AI and machine learning professionals, are they more in it for kind of what you're describing, about building technology that is meaningful, or is it kind of an even split?

[00:08:26] MLA: I would say that it's split, but only because my experience is heavily skewed towards professionals with my skill set, especially outside PwC. But also in PwC, where we have a culture that empowers people to find this purpose in their career and in their lives, whether working in artificial intelligence or ESG, or any other type of role that PwC creates. I would say it's much more balanced in terms of choices. Of course, it's a lucrative and very appealing career, very much in demand. You have more roles than people to fill them, so the demand is higher than the supply, which justifies the salaries.

But at the same time, I have met people who are equally, if not more, passionate about this technology than me, who have put their whole lives, passion, energy, and knowledge into promoting a different approach to technology, and who have been studying or building this technology for much longer than I have. They are the ones who give me not only the motivation and the knowledge, but who also inspire me to go and challenge my personal boundaries. Those professionals are the ones who, like myself, have put themselves forward to set up new initiatives, to create new working groups, to be part of conversations that will raise awareness about what AI is, the dangers of AI, but also how technology already damages and harms people in various walks of life. And the fact is, how we have proceeded so far, via Silicon Valley, ‘move fast and break things,’ it has to stop, because we are in a time when if we continue in the same way, we're going to generate more negative impacts than positive impacts.

But on the other hand, I really have to recognize that many people will come here for the glory, will come here for the money, which is okay. Because once they step in, and they have the right culture, they will be almost infected with positivity. I've seen people change their minds and begin to search, as you said, for purpose. They come into this domain for the money, for the glory, only to discover that there's a higher purpose, there's a higher mission here, and they will leave well-paid jobs at big companies that don't align with their personal values. I think that even those who come for the glory will end up being infected by this desire to build responsible technology, which makes me really optimistic. There's much more goodness that exists in the world, and especially in the people who are charged with developing this technology.

[00:11:33] RS: The goal then would be to create a system in which even an opportunist couldn't help but contribute positively. Right? How then do we create a system that infects people with positivity?

[00:11:48] MLA: It's all down to culture, organizational culture, hence why I prefer PwC. And in no way am I here to glorify my company. It's more a reflection of the type of organization, my personal opinion on the type of culture PwC is: you need to build a culture that goes above and beyond technology itself. It goes into recognizing each individual for their own strengths and skills and the value they bring to that organization. Treating them as equals, allowing them to flourish, supporting them to flourish, providing them with all the necessary resources, not just the paycheck at the end of the day, but with trust and confidence that they are doing the right things. That, slowly in time, will create an open, trusted, and much more constructive relationship between employee and employer.

For big companies that operate globally and employ thousands and hundreds of thousands of employees, that change is difficult. I would say it's not impossible. It's much easier to do it when you're a startup or scale-up. But everything starts with how you see the people you work with, what sort of attitude you as a leader have towards them, and what sort of support network and resources you have available to allow those people to bring the best of themselves to work. If you do that, of course, they will end up embracing values. They will not have to be forced to follow rules, ethical rules, to build technology. They will do it because they know this is the right thing to do. If they don't know what the right thing to do is, they will go and explore it. They will go and research it. They will go and engage with those who have this knowledge, and they will bring it in within the organization and make it work for that organization. That's the most sustainable way of building responsible technology, responsible AI: relying on people's good behavior in doing the right thing. Where the right thing is nowhere to be found, it needs to be standardized, and providing that standardization allows them to take a sustainable and cohesive approach.

But then beyond that, beyond the rules, there will be people who make the right decisions, and that's when the magic happens. We've seen it, I'm sure. I've seen cases of companies like Microsoft, which has been deeply transformed by one leader and one leader alone, Satya Nadella. And I've heard countless accounts of how much Microsoft has changed under his leadership. But we also see the goodness that exists in an organization through the perspective of its products, and there are a few companies that come to my mind: Yoti, which is a company based in the UK that builds age-verification products, and a few others. There are many companies out there that have been able to build such cultures, regardless of their size, starting with Microsoft and going all the way down to smaller companies like Yoti, Hugging Face, or DuckDuckGo, and that demonstrate how good their culture is through the products and services they offer. And that's the best indication you have of an organization that is very much focused and driven by values and missions.

When the products and services speak for themselves, you don't need to demonstrate anything. You don't need to put out any statements. The way the products are being used, and the impact they have on their user base and stakeholders, speaks volumes. On the opposite side, you'll have many companies that will deploy a ton of comms in an attempt to demonstrate they actually have ethical products and services. I will let our listeners decide how efficient this approach is.

[00:16:06] RS: How efficient or how reliable and honest too, right? Related, I think here, is the responsible AI toolkit you mentioned earlier, because when we talk about how do we standardize a process to ensure that technology is built somewhat with a sense of guardrails, right? You mentioned that it comes from a sense of technology is built with values in mind. Can we maybe start with what those values are, and then we can drill down into what else the responsible AI toolkit prescribes?

[00:16:39] MLA: Values are contextual driven. So, each organization will have their own set of values. When I say organization, I mean both those who build AI and those who use AI. So, it becomes paramount whether those values are honesty and integrity and beneficence, for example, or what my clients value, like care and sustainability, or inclusion. At the end of the day, it's important that each organization recognizes that those values need to be translated into everything they do, including the processes around the build and use of AI and how they build and use AI. The way they select data, the way they build the algorithm, the decisions that are being made step by step in that process, they need to be aligned with those values, and this is much, much easier said than done, because it requires a different approach and a different decision and action process that is continuous.

Hence, going back to what I said earlier about fostering ethical behavior rather than relying heavily on rules, because that allows us not to be paralyzed into inaction. So, what do we do now that we have this decision? Now that we've chosen fairness metrics, what do we do next? Where do we apply them? It becomes BAU, ‘business as usual,’ if you use a combination: rules at first, until people get the confidence that they are abiding by some high-level ethical rules, while at the same time giving them the confidence that, in the long term, they will be able to make those decisions themselves.

So, that's at the core of responsible AI: being able to take a set of values, which is unique for every single context where AI is being used, and being able to translate them into design and governance requirements. As I said, as a logic it's pretty simple. It's a bit more difficult to put into practice, because what it means is that once you understand what your values are, you then have to map them to the key ethical principles, and in fact, those ethical principles allow those involved in the world of AI to understand what is good and what is bad when it comes to building and using AI.

It was seven or eight years ago when a group of philosophers, computer scientists, and AI practitioners came together to define the first few rules or ethical principles of AI. And those ethical principles allow anyone who's involved in the process to say, “If I have a value of trustworthiness, I will then be able to take it and translate it into more detailed instructions that will allow me to put it into practice.” That's how the ethical principles are being used. They act as a translation between high-level abstract values and instructions, translated even further, which can be used by engineers, data scientists, or compliance people to understand how to best design that solution to be aligned with a specific value.

That's what we've tried to do with responsible AI: recognize that each organization has a specific set of values, and secondly, that there is a universal set of ethical principles. We did a research scan of 200 different ethics documents about the ethics of AI, data, technology, robotics, computer science, and engineering, identified 155 ethical principles, and when we analyzed them, we aggregated them into nine principles. Those are, I would say, the nine universal principles. We did that research in 2018, and as the topic progressed, they very much mirrored the principles we have since seen from the European Commission, the OECD, and later, UNESCO, which tells us that at least we have universality in terms of the key ethical principles. But what's really different is how you map them against your values, and then how you translate those into specific requirements in the context you operate in.

What else comes with that is a set of extra activities that need to be put in place in order to say, “Oh, I need to govern this process differently.” I need to have new experts with new responsibilities, like ethicists, or engage with different types of scientists, human or social scientists, based on the type of use case. I need to bring my legal team into this, because yes, we don't have clear legal requirements, but there might be legal provisions in other laws that will more likely impact that solution. So, you have to cover that.

Secondly, you have to really pay attention to the risks that AI generates and be able to address them. And also, be able to work ahead of regulation and to say, “Oh, if I anticipate where regulation is going in different jurisdictions, then more likely I will develop a product that will pre-empt that requirement, because I understand where this is going.” That's the responsible AI toolkit: a set of different assets that allow a company, first of all, to perform this translation between the values, the ethical principles, and the products themselves, but then to create a set of organizational capabilities that allow them to manage risk and to have a governance structure with clear roles and responsibilities. And ultimately, to be able to create a culture, an ethical culture, where developing AI aligned with the set of values is business as usual.

So, responsible AI is also a transformative, or transformation type of, toolkit that allows organizations to refresh the way they operate to be suitable for the world of AI. On top of that, yes, of course, we have a couple of other assets that we're looking to test more on the assurance side, auditing AI solutions, as we've seen others doing. But overall, it will be a set of Lego bricks that allows companies to update the way they operate, with AI and for AI, so that they will not be caught by surprise and are able to manage both the benefits and the risks.
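[Editor's note: For listeners who want a concrete feel for what translating a value into a design requirement can look like, here is a minimal, illustrative sketch in Python of a demographic-parity check, the kind of fairness metric Maria refers to above. The data, column names, and tolerance threshold are hypothetical and are not taken from PwC's responsible AI toolkit or from the episode.]

```python
# Illustrative sketch only: a demographic-parity check of the kind a team
# might run after choosing a fairness metric. All names and numbers here
# are hypothetical, not from PwC's responsible AI toolkit.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Gap between the highest and lowest positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical predictions from a loan-approval model.
predictions = pd.DataFrame({
    "gender":   ["f", "f", "f", "m", "m", "m"],
    "approved": [1,   0,   0,   1,   1,   0],
})

gap = demographic_parity_gap(predictions, group_col="gender", pred_col="approved")
TOLERANCE = 0.2  # hypothetical threshold agreed with the governance owner
print(f"Demographic parity gap: {gap:.2f}")
if gap > TOLERANCE:
    print("Flag for review: approval rates differ across groups beyond tolerance.")
```

The point of a check like this is the governance step around it, as Maria describes: which metric was chosen, what threshold was agreed, and who reviews the flag are organizational decisions, not outputs of the code.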

[00:23:53] RS: As you say, the specifics of turning principles into development activity are going to be different for every organization depending on like you say, their own organizational culture and their product and the risks associated with it. What questions though, can a practitioner ask regardless of where they are, when they are beginning to develop certain technologies, when they are beginning to develop AI, for example, to ensure that they operate within some of these universal ethics?

[00:24:21] MLA: It depends very much on what sort of organizational rules exist already. I think most companies, at least from what we've observed, have already started asking those ethical questions in one form or another, either relying on the principles that have been drafted by the OECD or UNESCO, or using various toolkits. The toolkits, in a way, are just a set of questions that an engineer or technologist can ask in a certain context, and those are freely available and they're very good prompts. They will start asking questions: “Why are you developing this application? What is the problem you're trying to solve?” And sometimes it sounds like such a simplistic question. People will say, “Of course, I know why I’m building it.” But when you start going into the depths of, but what exactly is the problem? What exactly is the context where you are looking to deploy the solution? That's where most of the misuses of technology have lacked significantly: in considering what the problem is and in which context the problem emerges.

So, starting by asking yourself, “What are you trying to solve? And if I solve it in this way, what sort of impact do I have, both positive and negative?” would be the best starting point. And then we go further down, and the next question would be, “Does this solution use personal data?” Because if it uses personal data, you will need to ask many more questions about, “Who's missing?” Or, “Who's not represented in the data?” besides the usual questions about who all the stakeholders involved in the process are. If the application doesn't use personal data, it's a little bit easier, because you wouldn't have to go down the path of inquiring into the impact on privacy or fairness or beneficiaries. But there will be other considerations around the safety of that solution, the robustness, transparency, and how that particular solution comes to be part of a wider process, because in the end, a business and operational process is formed of machines and humans working together.

So ultimately, you still have a human impact, but in a different manner. We start with this question, “What’s the problem we're trying to solve? What's the impact we're most likely going to have by building the solution in this way?” And thirdly, decide whether this is an application that uses personal data or business-type data. And from there, you go step by step, asking more questions about that solution, but those would be the best ones to start with.

[00:27:16] RS: The complexity of this is revealing itself to me even before you get into the technical difficulty of developing algorithms, of actually making the technology do what you want it to do. Because when you ask yourself those questions, about risk, about how might this affect someone, you have blind spots as a developer, based on your life experience, and you can really only indulge how it affects people who are exactly like you. So, before you can really build a piece of technology that affects populations equally, you kind of have to solve diversity hiring in your own organization, too. That's completely unrelated to the problem of this technology specifically, and I feel like it really proves what you were saying that everything else feels downstream of your organizational values. Is that a fair characterization?

[00:28:08] MLA: Yes. But we shouldn't have to rely only on boosting the diversity and inclusion of the development teams, because we know there's a real problem, especially when it comes to the representation of women working in technology in general, let alone AI. I would say that we don't have time to wait. We need to have alternative solutions to the diversity and inclusion problem, or the lack of inclusivity, that has been displayed by so many applications that we have seen in the public domain. God only knows how many others exist out there that have not been discovered, because we didn't have activists or forensic specialists who would go there and explore how specific applications discriminate. I would say that each organization, besides aiming to have a diversity and inclusion agenda, should go above and beyond the technical team and into the depths of how to create a culture of inclusivity. Going back to what we said about the overall values that will allow people from different backgrounds to have a voice: what's the point of hiring women if women are silenced when they are part of the development team? What's the point of hiring any other type of minority if they are there just to represent the numbers? You have to be able to create a culture of a dialogue where every opinion is being listened to, and not just being listened to, but is being considered. And that's not that easy. It's easier to hire people from diverse backgrounds. It's more difficult to be able to hear different perspectives, to analyze them, and from there, to extract and make decisions.

Making decisions is probably one of the most difficult elements around AI, because suddenly you see this decision-making being required to be made much more collaboratively than it was before. Not just in terms of having evidence or different perspectives, but allowing people to weigh in on, in some cases, the decisions that leadership might make. And that might end up clashing with the business imperatives, with the business objectives of that organization. So therefore, how do you manage all these processes, where you allow people from diverse backgrounds to speak, allow them to voice their opinions, make a distinction between those personal opinions and the opinions that seek to represent those who are underrepresented at the table, and then be able to make a decision that is both timely and effective? I think timeliness and effectiveness, it's a big gamble, right? I think you also have to have the courage, at some point, to go with someone's decision on this. And then we'll have to adapt, we'll have to test and see whether our decision is the correct course of action, or whether it will require further adjustments as we go along, knowing that AI is continuously adapting and drifting.

So yes, the challenge is more than real, and it has less to do with the technical fixes, or the algorithm. It's more about how humans engage in building technology: what sort of opinions are being considered, what sort of decisions are being made, and how the decisions are being made. And then what sort of approach you use long term, not just as a one-off. AI is a live tool that operates 24/7. Once it's in production, you will need a way to monitor it. You will need a way of redressing various drifts. Therefore, how do you create an environment where you not only engage with those people as a one-off at the beginning, but understand what matters to the different groups that will be impacted by your AI solution, and are able to continuously represent their interests as you go along?
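[Editor's note: As a rough illustration of the continuous monitoring Maria describes for models in production, here is a minimal sketch of a drift check using the population stability index (PSI). The synthetic data, the 0.2 threshold, and the specific metric are assumptions for the example, not guidance from the episode or from PwC.]

```python
# Illustrative sketch only: a population stability index (PSI) check comparing
# a production feature's recent distribution to its training-time baseline.
# The synthetic data and 0.2 threshold are assumptions, not from the episode.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between two samples of the same feature; larger values mean more drift."""
    edges = np.histogram_bin_edges(np.concatenate([baseline, current]), bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions so empty bins don't produce log(0) or division by zero.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(loc=0.0, scale=1.0, size=5_000)  # distribution at deployment
recent_scores = rng.normal(loc=0.5, scale=1.0, size=5_000)    # live traffic has shifted

psi = population_stability_index(baseline_scores, recent_scores)
print(f"PSI: {psi:.3f}")
if psi > 0.2:  # common rule-of-thumb cut-off for significant drift
    print("Drift detected: trigger a review or retraining per the governance process.")
```

A check like this only covers the statistical side; as Maria notes, deciding who reviews a drift alert and how affected groups stay represented is the organizational part of the process.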

I think that's more or less what I'm trying to say, although it sounds a little bit clunky and messy. AI doesn't have a technical problem. AI has a human problem. And the human problem is not so much with society. Before we go and save the world and address all the inequality that exists out there, let's start with our own houses, our own organizations, and fix the problems there. Then recognize that the problems we have might end up being reflected in and absorbed by AI. If our organizations are 70%, 80% white guys, white dudes, who enjoy playing computer games and drinking beer, of course we'll end up favoring white dudes when we build AI. And even if there are girl developers, if they are not supported to bring their own selves to work, in the end, they will end up drinking beer and playing computer games and saying, “Oh, whatever. Let's just build this to suit the white dudes, because they are the majority. Therefore, the minority has to adjust to the majority.”

So those are the types of questions that should be asked: how do we solve the human problems that end up being reflected in the AI, rather than going, “Oh, AI has a problem here.” Of course, AI has limitations. Of course, your listeners know that really well. Probably the rest of the world still dreams about AI solving every single problem, including becoming sentient, so that rather than going to the shop and getting a pet, we'll have a pet we can just go and talk to, a large language model as it is. So yes, let's fix the humans first before we go and blame AI, because that's where the issue is.

[00:34:21] RS: I'm glad you said it's not a technical problem, because I wanted to ask you whose job it is to put these standardizations in place. It's not enough to say it's everyone's job, because when you say it's everyone's job, what that really ends up meaning is that it's no one's job. So, I'm wondering if there's a need for non-technical hires. If this is like a specific hire of an HR or people department in a company who is developing artificial intelligence, are there non-technical hires that need to be made? How do you make sure this is someone's responsibility in an organization, like an AI ethics czar? Is that a role that we should hire for?

[00:35:02] MLA: First of all, the first question is, whose responsibility is it? It has to be the big boss, no matter what you call him or her: Chairman, CEO, Director. The leadership of that organization has to own this and has to demonstrate how they own it. They will ultimately delegate that or create a position like an ethicist. I’m not a fan of an AI ethics officer, or anything to do with that, especially an AI ethics officer, because many companies already have an ethics officer, a wider ethics officer. AI is a technology that is part of the wider construct of that organization.

So, it has to be aligned with the wider business ethics, the organizational ethics agenda. But at the same time, there has to be a designated owner, whose job is much more tactical in nature, who will be a specialist in this area and able to represent the leadership in the day-to-day conversations. And they themselves, depending on the size of the organization, alone or alongside a team of ethicists, will be the ones who support the technical team in navigating this complexity. It's so interesting to see that some organizations have gone and hired ethicists but are not using them fully, leaving their developers, even when given a set of rules and standards, still wondering which course of action to take: “Because I am not a sociologist, I really don't understand the subject. So, I really do need support.”

I think it's really interesting to see this dynamic being played out, especially in tech companies, and how much value is being given to ethicists, who have the ultimate knowledge and responsibility to drive and to guide technical teams in finding answers to those questions. So yes, leadership. Really important. They need to own it. They need to show they own it. And when I say leadership, it's not just the CEO; leadership could be product directors, for example, especially in big companies. And then have an ethics responsible, a responsible AI officer, or an ethics lead, who will be able either to provide advice and support to the technical team themselves, or to coordinate the wider team of ethicists that provides this type of support, and who will be able to bring everyone together. To carry out ethical analysis, step into the ethical dilemmas, and help people navigate those dilemmas and find and agree on a course of action.

[00:38:07] RS: Maria, there's loads of work ahead of us, it sounds like. But I think after this episode, our listeners at least have a starting point. I will absolutely link to the responsible AI toolkit, and maybe even the nine universal ethical principles you mentioned. I think that's some important reading for folks. I'll try to find something that outlines that as well so that people can read a little deeper. Is there anything you want to plug here at the end of the episode before we wind down?

[00:38:30] MLA: I'm optimistic about the speed of change when it comes to considering values, value alignment for AI. That's mainly because, as I said at the beginning of the episode, when you create a culture, or an instance, where you give people the right resources to explore, but at the same time to bring their best work, they will understand, because they are building this technology, both the benefits and the risks, and they will slowly come over to this side of the story. Even if they were not educated in it, because we know ethics was optional in many curricula, computer science curricula, they will end up embracing it, because they know that's what the company wants, but also because they have what it takes to build technology in that way. I wish that in a few years, I will be literally out of a job and we will not use the responsible AI term, because all the AI that we're building is going to be responsible. Therefore, my expertise might not be needed, and then I'll need to find another job, which might be a bit of a problem.

[00:39:49] RS: You heard it here folks. Let's create a world where Maria is unemployed. This has been a fantastic conversation. I've loved learning from you Maria, thank you so much for being on the podcast and sharing your expertise with me today.

[00:40:02] MLA: Thank you for having me.

[00:40:04] RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, ecommerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.