There will always be a certain level of risk associated with building AI models, but it is the responsibility of the creators to minimize those risks and ensure sustainable development. Today, we are joined by Meeri Haataja, the Founder of AI governance platform Saidot, to discuss all things responsible AI-building and risk mitigation.
In this episode, you’ll hear about Meeri's incredible career, insights from the recent AI Pact conference she attended, her company's involvement, and how we can articulate the reality of holding companies accountable to AI governance practices. We discuss how to know if you have an AI problem, what makes third-party generative AI more risky, and so much more! Meeri even shares how she thinks the EU AI Act will impact AI companies and what companies can do to take stock of their risk factors and ensure that they are building responsibly. You don’t want to miss this one, so be sure to tune in now!
Quotes:
“It’s best to work with companies who know that they already have a problem.” — @meerihaataja [0:09:58]
“Third-party risks are way bigger in the context of [generative AI].” — @meerihaataja [0:14:22]
“Use and use-context-related risks are the major source of risks.” — @meerihaataja [0:17:56]
“Risk is fine if it’s on an acceptable level. That’s what governance seeks to do.” — @meerihaataja [0:21:17]
Meeri Haataja: So that's very important. No one-size-fits-all governance. That will definitely stop the innovation, I can promise.
Rob Stevenson: Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens.
All right, welcome back, all of you wonderful data scientists, machine learning engineers, and AI enthusiasts of every ilk and persuasion. Welcome back to How AI Happens. It's me, Rob. I have another fantastic guest lined up for you today. I can't wait to speak with her. She has a ton of experience in our space. I'm going to stumble through it here at the top of the episode like I do, and then ask her to correct me afterward. But she has had data roles at Accenture, she was Director of Artificial Intelligence for the OP Financial Group in Helsinki, and she serves on a number of different boards and chairs various committees, including as Chair of IEEE's AI Impact Use Cases initiative. She's a member of the Safety Advisory Board over at Snap, and she is the CEO and Co-Founder of Saidot. Wow, what a litany of titles. Meeri Haataja, welcome to the podcast. How are you today?
Meeri Haataja: Thank you. I'm great. Pleasure to be here.
Rob Stevenson: You know, that was, like I said, quite a litany. Did I do justice to that? Is there anything on your curriculum vitae that we should call out here before we carry on?
Meeri Haataja: I guess, like, I'm a practitioner, I'm very operational, so maybe that part is not so visible in all of those titles: the everyday work is about working with data scientists and helping them operationalize quality management and governance.
Rob Stevenson: I'm glad you called that out, because those are the folks I like to speak to most on this podcast, the people who are still very connected to the work. So you get to roll your sleeves up and get in there, with data scientists still. Is that what you're doing?
Meeri Haataja: Yeah, that's my everyday work, to help these teams, people who build models and AI systems and deploy them, to do good AI governance. So yeah, I absolutely love working with data scientists and everyone else who is involved in building AI-based systems.
Rob Stevenson: That is who is out there in podcast land listening. So, wow. What a perfect guest, it's almost like it was planned. You know, you have so much awesome work you're doing and we'll get into that, but you just got back from this, I guess, sort of, kind of a conference, a giant meeting for AI Pact, which is, like, the EU's first-ever organization conducting a legal framework around the topic of AI. I was hoping you could share with us a little bit about the event and how it went.
Meeri Haataja: AI Pact, actually, yeah. It's a voluntary commitment scheme by the European Union's AI Office, which is a really important new organization established for the activities related to the AI Act, the new AI regulation; that is what it has been put in place for. So yeah, AI Pact. This event was about launching the AI Pact last week in Brussels, and really bringing together a lot of companies who have committed to voluntarily take action towards compliance with the new regulation that is in force starting the 1st of August this year. But there are transition times, so there is a bit of time for companies to act. AI Pact is really inviting companies to take proactive measures towards compliance.
Rob Stevenson: So were you there on behalf of Saidot, on behalf of IEEE, on behalf of yourself? Who are you kind of representing?
Meeri Haataja: I was representing Saidot. So we also took part in AI Pact. It was 114 companies who took part. I think it should still be open to participate, or that's what I assume, but really more than 100 companies participated on the kickoff date, and it was an amazing opportunity to connect with all of those companies, a lot of big tech companies, and we had an opportunity also to share our statements on why this is important for us, among OpenAI, Telefónica, and some other pretty amazing companies.
Rob Stevenson: So tell me what was the statement? How did you participate?
Meeri Haataja: The statement, if I gave the statement, it would be several minutes; we probably don't have time to go through it in its entirety. But really, the key message is that with AI there come a lot of amazing opportunities that everyone is after, but there are also major risks, and it's really about the accountability of the players in the ecosystem who are building this, who need to really pay attention to that, identify those risks, and put in place measures to mitigate them, to really make sure that we are able to take those to an acceptable level. So for us, the AI Act in general is a very good step, a regulation nudging companies to take action on this. And AI Pact, we really wanted to commit to AI Pact from the perspective that, obviously, it's going towards compliance with the AI Act, but also it's very important to share experiences between different companies on what it is to implement compliance with this new regulation, what works, what the problems are, and also to share this information with the AI, ah, Office from the Commission. So really, it's about networking and sharing experiences, because a lot of the success or failure of a new regulation is about how it's going to be enforced. So I'm looking at this from that perspective.
Rob Stevenson: Yeah, the enforcement is something that we maybe haven't spoken about much on the show before. We've spoken plenty about the need for guardrails and the need for accountability and governance. But what happens when people inevitably run afoul of it? So where are we with enforcing these sort of rules? How would you describe the reality of holding companies accountable?
Meeri Haataja: I think we are in the very beginning. So generally, we are in a situation where we have been able to put in place some standards and even some regulations, for a narrow scope or, now in the context of AI, a pretty broad scope, and there is an idea of the enforcement. But I guess a lot of questions remain, starting from who will enforce these in different member states in the EU, who those supervisory authorities are, and, from that perspective, what are the things that they emphasize and highlight? What is important for them? If the supervisory authority is the privacy or data protection authority, or if it's the cybersecurity one, obviously their perspectives on the problems are pretty different. So that's going to be a really interesting journey, and I totally hope that there is a lot of dialogue between the industry and supervisors on this topic. We'll see. It's going to be interesting.
Rob Stevenson: We will see. I suspect, Meeri, that all those questions, and the fact that we are at the early stages of this, are behind some of your own motivation for founding Saidot. And I wanted to spend some time speaking about the organization and kind of where you are right now; I feel like that will set the context for the rest of the conversation. But you know, you'd had this career with technical roles, you have this background, and whereas you could have probably spent your whole career doing these technical, director-of, VP-of roles and just ridden into the sunset, cashing a nice paycheck and building AI products forever and ever, amen, instead you decided to found this company around AI governance. So I'm just curious, for you, in your career, what was that turning point where you were like, you know what, rather than being internal at these companies building AI products, I want to, one, found my own company, and two, it should be in this area of governance?
Meeri Haataja: Yeah, I guess it was some kind of evolution. During the time when the whole industry prepared for data protection and GDPR implementation, this also started to surface and, like, you know, compliance came onto my table. I was working in the financial sector in a major bank, and I was responsible for taking AI into use across the different business processes over there and then ensuring GDPR compliance. But at the same time, it was not only about GDPR; Cambridge Analytica also happened, and in our industry we saw a first case: one player in the market actually got fined for a discriminatory credit algorithm. Those few moments around 2016 to 2018 were really defining and, like, you know, made me think differently about AI and its impacts. I've been very open that before that, I didn't know that non-discrimination law had something to do with my work or my colleagues' work. Neither did anyone else in the industry. So the whole responsible AI and ethical AI topic started to surface, and I recognized that I had seen so little from my perspective, and not only me, but the industry in general was looking very narrowly, and we needed to change how we were doing this and really take product quality seriously. So that was the first sort of recognition of the importance of the topic: something needs to be done, and the success depends on whether we are able to find ways to actually operationalize this. That was the trigger for founding Saidot: really looking from the data science teams', the AI teams' perspective and asking, okay, how do those principles get into practice in the everyday life of these folks? And then it became very clear to me that these teams will need support with all of that compliance, all of that new burden that is coming onto their shoulders, and also need to find ways to do that in a meaningful, smart, effective manner. So yeah, those were kind of the triggers: seeing that this is very important, and also that unless there are good solutions, there will be no good outcome, and so seeing that there is an emerging market for what we're doing.
Rob Stevenson: So when you begin speaking with companies who are interested in putting up the guardrails, as we say, where do you begin? How do you begin to, like, assess their development process and their own sort of organizational culture to see where they're at with this?
Meeri Haataja: It's a really interesting question. What I've recognized is that it's best to work with companies who know that they already have a problem. It's really difficult if there is a company that doesn't have any problem; they don't see that they need to do anything on this. So primarily I'm talking with teams who see that, okay, this is important. The primary driver for that at the moment is that something is stopping them from deploying their gen AI products, or AI products in general, at the pace that they would like to. So there is confusion in businesses about not getting the value out of gen AI: a lot of expectations, but not being able to really realize or materialize that value. Another driver is really the regulation. There starts to be good awareness about regulation, and that is regulations in general, but now, at this point in time, very specifically the AI Act acting as a force for change. So it depends from which angle the company starts, but that's typically the starting point. And then the question is, okay, whose responsibility is that? What does AI governance mean for us? How do we do this consistently in a large organization, across our potentially very large portfolio? And many times we start answering those questions with what we call an AI policy: putting in place a company-specific AI policy or AI governance framework, which is sort of an internal standard for the organization. What does AI governance mean for us? Who does what, and how? So that's typically the starting point, how we start the journey with customers.
Rob Stevenson: It makes sense that it would be easier, or perhaps faster and more productive, to speak to the companies who already are aware there's a problem. I'm curious what that sounds like. Like, when someone reaches out to you and they're like, ugh, Meeri, we've got... I was digging through the output of one of our algorithms and it's bad, you know. Like, how do they know they have a problem? What does it even sound like?
Meeri Haataja: It's before that moment. So maybe there has been an attempt to take a product to the market, and at the last minute, legal or compliance has stopped the process, and then there is confusion: okay, what do we need to do? How do we do this? What are even the requirements, and how should we do this? It can also be the other extreme, that there is such a strong awareness about all kinds of risks related to gen AI, third-party risks, hallucinations, everyone is talking about those, copyright-related problems, data protection, and so forth. So that might also be one situation: companies so afraid of these risks that they are almost, like, paralyzed and not able to do anything. That's a little bit of a sad situation, because responsible AI is, first, using AI in a responsible manner, and I find it really sad if awareness of the risks leads to a situation where you don't do anything, you just stop. But luckily, that's an exception. So primarily it's about recognizing that, okay, we have, like, a third-party risk related to how we're using these OpenAI or other models, or we're taking into use an AI-based product that has AI features, and we really don't know enough about what the related risks are. So let's start to explore how we should do this and find measures to do this in a proper manner, consistently.
Rob Stevenson: Now, is generative AI inherently more risky, or does the risk factor come from the fact that it's just being used by more people? It's a user-facing deployment of AI that we haven't seen before?
Meeri Haataja: I don't know, it's a really hard question, because I'm always looking at risk in a way where it's really hard to talk about one or two risks. It's always the set of risks involved in an AI system, which is not only about the technology but also the use case where you deploy it. So equally, we are seeing different kinds of risks involved in generative systems and other types of AI systems. But there is definitely more third-party risk involved, and some very specific types of risk, like hallucination, as a problem that has really emerged in the context of gen AI. But generally, I think third-party risks are way bigger in the context of gen AI, because pretty much everyone uses pre-trained models that are provided by other players, and there are also way more AI applications or products that contain gen AI features. So from that perspective, sort of deployer risks, or the risks that are coming to you as an enterprise because you are taking into use someone else's products or models, or deploying those in your context, is definitely something that is very unique and kind of new to this gen AI time.
Rob Stevenson: So could you share what you mean, that the risk is more on the third-party side?
Meeri Haataja: I mean, before the generative AI boom and the availability of these third-party, pre-trained, large foundation models, the emphasis in enterprises was much more on building their own in-house, tailored models for their specific use cases. Now the AI portfolios, first of all, are way bigger; they have multiplied in size compared to a couple of years back, when there were primarily models that were built for your specific use cases from the beginning. So that has really influenced this: there is just way more volume, and volume is a very important driver for the governance and this whole risk discussion. Sorry, ah, I forgot the original question.
Rob Stevenson: Yeah, I'm just curious why you were saying that the risk is more in the third party usage.
Meeri Haataja: Yes. So, practically, it's just that a bigger part of the AI portfolios in companies is built on taking into use, fine-tuning, and applying third parties' models, or even third parties' products.
Rob Stevenson: This goes back a little bit to, like, who's responsible? You know, like, if I take a pre-made model off the shelf and I do something nefarious with it, or even if I'm using it and the output is biased in some way, can I just be like, hey, don't look at me, I just got this off of GitHub, you know, or I got this from one of the big players? Like, a lot of the huge software-developing companies are putting these things out there. And so, like, if I borrow, you know, Intel's toolkit, for example, and then it's found to have biased output, is it my fault? Is it Intel's? Where do you come down on that?
Meeri Haataja: This is actually one area where the AI Act is really establishing some good practices, because the focus has been really heavily on looking at the value chain and trying to see what is necessary in order for the value chain to work in a responsible, trustworthy manner. So from that perspective, of course, it is important that different players have enough information from the other players: what am I using, what is this model, how has it been trained, what are the risks involved, where is it supposed to be working, how is it supposed to be performing, and then what are the limitations, where shouldn't I be using it? This transparency is really important to support you, as a user of that model, in taking accountability for that use. So you can definitely not outsource that responsibility to the previous players in the value chain. Actually, a lot of AI risks materialize when you take a system into use in a certain context. So that's definitely something that no AI system or AI model deployer can sort of hide from, because that risk most probably doesn't exist or materialize if you don't take it into use in the way that you are doing. So really, these use and use-context-related risks are a major source of risks that we need to tackle when we do AI governance. That's really the responsibility of the one who controls the use. But of course, you cannot do that alone, so you need to collaborate with the provider of the model that you're using; you need to have, most importantly, enough information about how it works, what the limitations and risks are, and how it has been tested. So it's collaboration, but this is really, really where there's a lot of work to do in the industry in general, to have these practices of sharing the right level of information with different players in the ecosystem, and we are really looking forward to the impact that the AI Act is going to have on this, because there are some pretty heavy expectations for transparency.
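[Editor's note: to make the value-chain transparency idea concrete, here is a minimal, hypothetical sketch of the kind of information a downstream deployer might want from a model provider. The field names and example values are illustrative assumptions, not taken from the AI Act's text or from Saidot's platform.]

```python
from dataclasses import dataclass

@dataclass
class ModelTransparencyRecord:
    """Illustrative summary of what a deployer might need from an upstream provider."""
    model_name: str
    provider: str
    training_data_summary: str   # how the model was trained, at a high level
    intended_uses: list[str]     # where it is supposed to work
    known_limitations: list[str] # where it should not be used
    evaluated_risks: list[str]   # risks the provider identified and tested for
    performance_notes: str       # how it is expected to perform

# Hypothetical example entry a provider could hand to a deployer.
record = ModelTransparencyRecord(
    model_name="example-foundation-model",
    provider="Upstream Labs (hypothetical)",
    training_data_summary="Web text corpus; details per provider documentation.",
    intended_uses=["customer-support drafting", "internal summarization"],
    known_limitations=["not evaluated for medical or legal advice"],
    evaluated_risks=["hallucination", "training-data copyright exposure"],
    performance_notes="Benchmarked on provider's internal QA suite.",
)
```

The point is the division of labor Meeri describes: the provider supplies this record, but the deployer still owns the risks that arise from the specific use context.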
Rob Stevenson: Yeah, yeah, definitely. I wanted to ask you a little more about the EU's AI Act, because there is this challenge that the private market moves incredibly quickly, governments typically move slowly, and now AI is moving faster than ever, right? I keep hearing that on the show over and over again. And so it feels like we're set up for this even more pronounced imbalance between the rate at which technology is deployed and the rate at which, you know, governing bodies can monitor it and mitigate risk. So thrust into that challenge is this EU AI Act. Do you think it goes far enough? Is it sort of just a good start? From your perspective, what do you think is going to be the impact of the EU AI Act on innovation and the companies in the EU?
Meeri Haataja: Yeah, that's probably one of the most-asked questions: whether AI regulations will hinder innovation, and to what extent, if they will. Time will show us. I see that innovation is at the moment being hindered by the unclarity of rules and standards, by not being able to see where the safe space to innovate is. So really, what the AI Act attempts, and what governance in general attempts, to put in place, and we see this when we work with companies, is really that safe space for innovation. Knowing that, okay, this is the area, these are the limitations, that's where we shouldn't go, this is where we can innovate and be creative. Having that definition in place takes away a lot of this burden of unclarity, of not being able to really see what good looks like and, ah, where to play. So that's definitely one thing: whether it's the AI Act or governance in general, I see that that is necessary for driving innovation in companies. Then there are a lot of ways it can go wrong; any regulation can definitely end up radically slowing down innovation in a way that we don't want. One of the really important aspects of this, which I keep talking about with companies when I work with them, is that we really are not governing to remove all the risks. So, really, not to avoid all of the risks, but we want to keep the risks in control. That's really important to keep in mind. Sometimes when we focus on risk and on preventing risks, we might even go too far and, you know, start to see risks where there are no real risks involved. So I think it's really, really important to keep in mind that risk is fine if it's on an acceptable level, and that's really what governance seeks to do: to take it to the acceptable level. Also, one thing that is very important is that we don't apply the same standard of governance to every system. Good governance practices and good AI regulations are typically focused on putting in place risk-based governance. That means that where there is unacceptable risk, we shouldn't do that at all: no innovation in those areas that are really actively harming people's fundamental rights, or actively harming people's health, and so forth. No innovation wanted in that area. But then there are different levels of risk under that one, and the governance should follow the risk level. So that's very important, whether it's regulation or the implementation of AI governance in companies and enterprises: to create that visibility over your entire portfolio. These are the systems we're building ourselves, these are the systems we are deploying, and then, what is the risk level? And then apply the right level of governance to each of those different categories or classes of systems. So that's very important. No one-size-fits-all governance. That will definitely stop the innovation, I can promise.
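[Editor's note: the risk-based idea Meeri describes can be sketched in a few lines of code. The tiers below mirror the EU AI Act's broad categories (unacceptable, high, limited, minimal), but the keyword mapping and the control lists are purely illustrative assumptions, not the Act's actual legal tests.]

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited: no building or deployment at all
    HIGH = "high"                  # heavy governance: assessments, logging, oversight
    LIMITED = "limited"            # transparency obligations, e.g. disclose AI use
    MINIMAL = "minimal"            # lightweight, standard product-quality practices

# Illustrative, simplified mapping from use case to tier; a real classification
# would follow the AI Act's annexes and legal analysis, not a keyword lookup.
USE_CASE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "customer-support chatbot": RiskTier.LIMITED,
    "internal spam filtering": RiskTier.MINIMAL,
}

def governance_controls(tier: RiskTier) -> list[str]:
    """Return proportionate controls for a tier (example controls only)."""
    controls = {
        RiskTier.UNACCEPTABLE: ["do not build or deploy"],
        RiskTier.HIGH: ["risk assessment", "human oversight", "audit logging"],
        RiskTier.LIMITED: ["disclose AI use to end users"],
        RiskTier.MINIMAL: ["standard quality management"],
    }
    return controls[tier]

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {governance_controls(tier)}")
```

The design point is proportionality: the controls scale with the tier, instead of one checklist being applied to every system in the portfolio.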
Rob Stevenson: Yeah, that is really interesting, this idea of, like, a sliding scale: sort of, okay, where there is less perceived risk, we're less concerned about restricting your activity, right? It just feels very sensible. Meeri, we are creeping up on optimal podcast length here. Before I let you go, I wanted to ask you, you know, for the folks out there listening who are building and deploying their own AI right now, what can they do to sort of take stock of their organization, their own risk factors, these sorts of things, to make sure that they are building in a responsible way? Short of, of course, just reaching out to you and asking you to, you know, investigate their operation.
Meeri Haataja: My guidance would be: very often, of course, the first step is to just make sure that there is enough understanding and information, like knowledge and skill set, about this area. But I would really, really guide towards creating this understanding within your organization. It's probably a collaboration between the AI folks, compliance, risk, different stakeholders, but really putting in place your company-specific understanding about risk and the different levels of risk that you should apply in your context, and really striving for that big picture about what is risky in how you use AI or how you build AI, and what the areas are where you are doing this kind of work. And then, starting from there, define further: how do we do data management, how do we do risk management, how do we do policy- or legal-related work in the context of these systems, how do we do third-party management? Then iterate. It's always the best learning process to just get started and try it out in the context of individual use cases, and there will be a steep learning curve. After you have done governance for a few cases, it basically becomes part of everyday work, and that's what we are looking for: just regular product-quality management.
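[Editor's note: a minimal way to picture that big picture over your portfolio is a simple inventory recording each system, whether it is built in-house or sourced from a third party, its use context, and its assessed risk level. The structure below is a hypothetical sketch, not Saidot's product or any standard schema.]

```python
import csv
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row in a hypothetical AI portfolio inventory."""
    name: str
    owner: str        # accountable team or person
    origin: str       # "in-house", "fine-tuned third-party", "third-party product"
    use_context: str  # where and how it is used; a major source of risk
    risk_level: str   # e.g. "high", "limited", "minimal" after assessment

portfolio = [
    AISystemEntry("loan-default-scorer", "credit-risk team", "in-house",
                  "consumer credit decisions", "high"),
    AISystemEntry("support-copilot", "customer-care team", "third-party product",
                  "drafting replies to customers", "limited"),
]

# Persist the inventory so AI, compliance, and risk teams share one view.
with open("ai_portfolio.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "owner", "origin", "use_context", "risk_level"])
    for entry in portfolio:
        writer.writerow([entry.name, entry.owner, entry.origin,
                         entry.use_context, entry.risk_level])
```

Starting with a couple of rows like this, and iterating per use case as Meeri suggests, is enough to make the risk levels visible before heavier governance tooling is in place.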
Rob Stevenson: They say every time you do a little AI governance it gets a little easier, right?
Meeri Haataja: Yes, that's a promise.
Rob Stevenson: Meeri, this has been great. Thanks for that advice here at the end, and thanks for being here on the show and sharing your experience and your wit and wisdom with us. This has been a delight. So thanks for being here today.
Meeri Haataja: Thank you so much. It's been a pleasure.
Rob Stevenson: How AI Happens is brought to you by Sama. Sama's agile data labeling and model evaluation solutions help enterprise companies maximize the return on investment for generative AI, LLM, and computer vision models across retail, finance, automotive, and many other industries. For more information, head to sama.com.