Valeria holds a PhD in information systems and has a wealth of knowledge and experience on the topic of responsible AI. She shares the findings of her doctoral research at the University of Auckland, what drew her to responsible AI, and how she encourages AI leaders to prioritize ethical design.
AI is an incredible tool that has allowed us to evolve into more efficient human beings. But the lack of ethical and responsible design in AI can lead to a level of detachment from real people and from authenticity. A wonderful technology strategist at Microsoft, Valeria Sadovykh, joins us today on How AI Happens. Valeria discusses why she is concerned about AI tools that assist users in decision-making, the responsibility she feels these companies hold, and the importance of innovation. We delve into the common challenges these companies face across people, processes, and technology before exploring the effects of the democratization of AI. Finally, our guest shares her passion for emotional AI and tells us why it keeps her in the space. To hear it all, tune in now!
Tweetables:
“We have no opportunity to learn something new outside of our predetermined environment.” — @ValeriaSadovykh [0:07:07]
“[Ethics] as a concept is very difficult to understand because what is ethical for me might not necessarily be ethical for you and vice versa.” — @ValeriaSadovykh [0:11:38]
“[Ethics] should not come in place of innovation.” — @ValeriaSadovykh [0:20:13]
“Not following up, not investing, not trying, [and] not failing is also preventing you from success.” — @ValeriaSadovykh [0:29:52]
And that's what's happening with ChatGPT, right? We pose the question, we get the response, we take the response, we don't even question the response anymore. And that's the problem. What that means is that our cognitive thinking is diminishing. And by the way, I'm exactly the same example; my cognitive thinking is also diminishing, right? We are thinking less and less.
Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stephenson, and we're about to learn how AI happens.
My guest today on How AI Happens has a rich history of experience in the AI and ML space, both private and academic. She holds a PhD in information systems from the University of Auckland and continues research in the areas of decision-making, ethics, and adoption within AI and ML. Currently, she serves as a technology strategist over at Microsoft.
Valeria Sadovykh, welcome to the podcast. How are you today? Good, good. Thank you, Rob, so much.
I'm extremely excited to be here and looking forward to our discussion, and hopefully we'll come up with something at the end. Yeah, yeah, I'm looking forward to it as well. Did I do your curriculum vitae justice there? Is there anything I left out you would want to make sure we call out? No, those are the most important things: Microsoft, PhD, human decision-making, AI, responsible AI, ethical AI. These are kind of the keywords I've been throwing around lately.
Yeah, yeah, me too, as it happens. Also, we should remind ourselves about biases. Bias is also a big topic at the moment.
Oh, yeah, of course, of course, we will get there for sure. And I want to learn a little bit more about your PhD, because it wasn't maybe one-to-one related to the field, but I feel like it explains how you wound up in this space. So would you mind sharing a little bit about your study at the University of Auckland? Definitely.
I think the most exciting part of my life, of my entire life actually, is the PhD research that I've done. And sometimes I go back and regret that I didn't do it full time, that I wasn't an academic full time, because I was studying and working at the same time.
However, my PhD research focused solely on human decision-making. I looked into how people make decisions in online social networks. And my question was why our innermost decisions, and specifically I looked into health and finance, were outsourced to the wisdom of crowds when we know that the crowd does not get it right.
Correct. So that question fascinated me, because health and finance are the most intimate things in our lives. However, we are asking people we don't know about our personal matters.
So that was the first thing. And then the second thing I was thinking about was, how do we make those online technologies and tools more usable for our decision-making? Because it's a known fact that the internet can influence our decision-making. It can manipulate us, right? Specifically within areas such as finance or even health, where there is so much marketing going on in terms of different drugs, different procedures, even the example of weight loss.
Right. So I was always questioning how we can design it in a better way, so that people make their own decisions rather than take decisions that are already available online. That was my first thing.
And the second thing: I used the famous theory of the rational man and bounded rationality from Herbert Simon. He's an American scientist who got a Nobel Prize for this. It says that the human mind goes through three phases in the decision-making process: intelligence, design, and choice.
And later on, because of technology, we added implementation and post-deployment analysis, however we want to call it. So the fact is, when I looked into these communities, into people's online communication, how people take information from online and apply it in real life, the results were quite dramatic, right? We realized that people stop thinking; they don't have a design phase anymore. We all go through the intelligence phase: we have a question, we post the question, then we receive the answer, and we apply the answer to ourselves.
And that's what's happening with ChatGPT, right? It's the same thing: we post the question, we get the response, we take the response, we don't even question the response anymore. And that's the problem. What that means is that our cognitive thinking is diminishing. And by the way, I'm exactly the same example; my cognitive thinking is also diminishing, right? We are thinking less and less.
We might be thinking in different directions, but some of that brain functionality is not used as much. So our design phase is basically disappearing. We are not articulating, we are not discussing our problems, we don't go to doctors or specialists, we don't even look for experts anymore to get a different opinion.
We look for examples of success on the internet, and we apply them directly to our own case. So that's basically what the study found: that the design phase does not exist. Some of the online social networks or communities are supposed to have a different type of structure that makes sure the information provided there is compliant, that it's regulated, that it's bias-free, that it goes through certain quality checks.
However, that's an academic way of thinking; in the real world, it's really difficult to achieve. I am definitely guilty of just copying and pasting an answer and running with it. Yeah, of course.
Yeah, making decisions like that. And if I find myself in a pickle, I think to myself, surely someone out there has had this problem already. What did they do? We have more information, we have more access to other people on which to base our decisions.
But it sounds like your thesis was that it's not necessarily better information. No, it's not necessarily better information. And be thoughtful of the fact that we're all living in an echo chamber of AI, right? AI is influencing us and putting us in a certain position, within a certain network, that actually amplifies our biases.
We have confirmation bias and we hang out in the same environments, right? Because Google search knows what we're searching for. So whatever problem we post there, the answers will be tailored to the environment we have already settled into. So we will never be able to jump to a different environment or learn something new.
So we will always be surrounded by people who confirm our opinion, which is actually really dangerous, right? It's manipulation, it's influence. And it's fine when you're choosing a restaurant, right? Okay. But there are bigger decisions, specifically at the organizational level as well. So there's a higher risk there.
And what about the biases, the gender, the race, the personal information that is getting leaked? So it's getting more and more complex when we start looking at that at a bigger scale. The filter bubble is how it's referred to, right? The endless loop: you are recommended information based on the things you are likely to click on historically, right? Yeah.
And history changes, and we as humans change as well, but we have no opportunity to learn something new outside of our predetermined environment. Right, right. So was that the way you connected the dots to your career in AI and ML? Like, okay, part of the problem with trying to make your decisions online is that you have artificial intelligence serving you up information that's not necessarily reliable.
There were quite a few instances. First, I was in consulting for some time, for a decade, at the Big Four consulting firm PricewaterhouseCoopers, where I basically served as the global delivery lead for digital transformation program support. That's where I realized a bit of the people component: it's not that easy for people to accept new technology, to move with the technology.
So that was the first part, where I saw that the change management theories and organizational theories coming from an academic background are not necessarily applied in the practical world. That was one part: technology implementation, technology usage, technology design, and what I had actually looked into in my academic work. The second part was information processing and information design: how wise it is, how impactful it is in our personal lives and our organizational lives, because decisions are being based on the information that is available, but it's not necessarily quality information.
So you mentioned that you wish you had taken this research further, that you had kind of made this your full-time approach, but surely you still could. Is it still kind of informing your work? Is it still sort of structuring the way you make decisions, not to put too fine a point on it? So I moved a bit more into ethical AI and responsible AI, specifically thinking about how we actually design those tools, bearing in mind that people without the education, or not necessarily without education, but people who are unaware of how the tools were designed or created, can actually use them safely, and how they can have more variety in their decisions and actually enhance their decisions.
But that requires different components; that requires education of the end users. We're also looking at the phases of development of AI and machine learning: where do we put certain mechanism check-ins, how do we design the model, who is designing the model, how do we test the model, how do we train the model, where is the data coming from, who are the end users, and who is maintaining the model. So I think that's where I've moved now. But from a theoretical perspective and from a practical perspective, I still see how AI is getting implemented in big organizations and small organizations, and how ethics in some cases is important, and in some cases there is no opportunity to care about ethics.
It's not that people do not care, but the use cases might not necessarily seem relevant to ethical concerns, and then there are always questions of resources, questions of education, questions of the data, and then ethics becomes the later part, the good-to-have. That's it, the incentives are not aligned, right? And this challenge of the filter bubble, for example, was wrought by this idea that clicks are god, right? Like, we want people to click on something, so what do we serve them? The thing we know they've already clicked on before, right? So this is a fundamentally different approach: this is less sensational, this is more accurate information, but it is going to get fewer clicks. So I'm curious for you, when you see that lack of incentive there, what kind of conversations are you having to make ethical design less of a nice-to-have and more of a fundamental part of the design process? I always draw a parallel between ethics and the concept of quality.
Back in the day, quality was a good-to-have, right? If we look at Ford manufacturing and the whole Industrial Revolution, quality eventually became a set of standards, right? Then we came to the cycle of sustainability: the ethical concerns, the sustainability concerns, the environmental concerns also became standards. And now we're going through another wave where ethics, guidelines, and regulations are taking shape. However, ethics itself as a concept is really difficult to understand, because what is ethical for me might not necessarily be ethical for you and vice versa, right? So that's why I think sometimes it's better to use the word responsible, because at least you will have some sort of guidelines that will help you act responsibly towards society, towards the end users, towards the designers, the organization, and the person itself.
So I think that's where we're going with the responsibility component. But there is another danger that no one talks about with the new technologies that are coming into place, which is the lack of authenticity. How do we know how authentic we are? We are creating some sort of socio-economic gap where, first of all, we cannot check what is real and what is not, because how do I know about the content that is coming in, the emails that are coming in, when 50% is becoming computer generated, right? Human interaction is becoming more and more expensive. Physical interaction between humans is even more expensive, if you look at the tendencies now that we're moving everything online.
We don't have this physical touch. So that creates some sort of problem from a psychological perspective: AI and these new emerging tools start impacting human beings in different ways. And then the second part is what organizations are getting quite concerned about as well. Organizations are concerned in a lot of senses, but authenticity and real creativity, that's what might now be lacking from individuals.
Can you speak more on that last point? Like, why is it crucially important that those things are lacking? Why might authenticity, creativity, and emotion be lacking in an organizational context? Basically, that's where we create, that's where we build something new, that's where we innovate. And obviously now everyone is using, whether it's Bing or ChatGPT, when you write something, it's a perfect tool to check your grammar, right? Or it's a perfect tool to do some sort of adjustment, to shorten the text. However, it's becoming soulless.
It's becoming less interesting. It's becoming less interactive. And that leads to a certain effect where people are already becoming kind of isolated.
So by bringing more and more tools into our lives, we're facing some sort of diminishing of human interactions. That will be a tipping point of incentives, don't you think, when the quality is diluted to the point where someone says, I can tell this was all automated. This is soulless.
I'm craving some kind of actual human expression, even if it's in something as mundane as like a Black Friday coupon in your email, right? That is an incentive for less tooling and for more human interconnectedness. Is that what you're saying? But the thing is, it will get so expensive. To do what? To hire copywriters? No, no, no.
To have the human interaction. I mean, I don't know whether it is a problem or not. Maybe it's economic growth, right? An economic breakthrough.
But humans are getting too expensive to hire for those tasks. Even we, as human beings, cannot be by ourselves, and we seek human interaction. We understand that it's much cheaper to interact with the technology.
Are humans expensive or are corporations greedy? Well, it will be two in one. I will give you an example: there are hundreds of tools that help you with depression, and they are AI tools, and they are cheaper than psychologists. So organizations are greedy, of course.
The ones that are focused on creativity and some sort of expansion into new products and things like that will prefer human beings, because that's where emotions come in, where the gut feeling comes in, which machines are not capable of replicating, and that's where you actually create new, innovative things. Love, for example: love is one component of the emotions that creates a piece of art. So this is connected to sort of economizing away human labor, no? Like this idea that we're automating tools, we're removing the need for certain human jobs and replacing them with AI products.
Is that the fear? Which is not too bad; it's progression, right? We are progressing further in life, it's evolution, we're changing the way we live. But there are negative things that come with it as well. So how are we going to preserve what we have? And do we really want to preserve what we have? Do we really want to preserve human interaction? Do we need this human interaction? How authentic do we want to be in our communication by email, without lines, without different types of personas? Or maybe there is no need to be authentic anymore.
It's too soon to say, right? This stuff is all so new. There are no longitudinal studies; there's nothing like, here's the 20-year effect of using only tools instead of going outside, getting vitamin D, and talking to a psychiatrist. And the problem with that is that we had a view of real life, physical life.
We've been there, we were born there. But now we're slowly transferring into the online life. There is no more real life.
We are all interacting in different ways. We're learning how to interact in different ways, how to build the connection and so on. And the future generations, they will be completely living in a different world.
Yeah, of course. So what do you think is the responsibility of companies developing and deploying this technology? If creating things in an ethical way is just a nice-to-have, do they have more of a social responsibility to not let it be a nice-to-have? Well, organizations are machines, right? They are profit machines in the first place. So in terms of staying ethical and maintaining authenticity, I don't think it's actually on the list of things to do.
However, quite a few organizations are really focusing on making sure that their tools and AI mechanisms actually comply with regulations, because it might backfire on them. They are checked for biases, because we have had hundreds of cases where a brand got damaged because some bias came up through the tooling. But also remember that all of the biases that come up are actually human biases.
So no matter what, we are at fault for what's happening in the world, and it will take generations, and it will take different mindsets, to become a more neutral type of human being, probably, if that's possible. It's also important for them, in order to stay competitive, to be ethical and responsible and to take care of the end users. It's really important for them to stay transparent about how those models use what type of data in order to provide certain decisions.
For now, we don't know how all of the tools that we use give us recommendations. We don't know where the data is coming from. We can only assume, and we can only assume because we think about it.
But if we don't think about it, we just take, again, whatever comes. We type something, a request, the response comes, and we take it. But we don't know where the information comes from.
But I think later on, we will become a bit more equipped to ask the questions, to ask the giants, the giant organizations, to provide some sort of information. Where is the data coming from? Does it use personal identification or not? Is it government compliant? What policies does it require? What type of education? Have the developers been certified or not? So far, we are early adopters and we don't have those strict regulations in place yet. However, a few organizations are paying attention to that, I can definitely say.
Or at least they try to educate themselves on what it is about and why they have to care. But also don't forget, there are some tools that are used strictly for operations, right? Automation, some sort of reporting, where we also have to remember that ethics sometimes should not come in place of innovation. And the AI should be used.
Yes, it might reduce some workload, but it's efficiency, it's effectiveness, and organizations strive for this. Now, is this the kind of stuff that you are tasked with exploring at Microsoft? Or is this the stuff that keeps you up at night? Is it both? No, this is the stuff that I think about after I do the Microsoft job, because the Microsoft job is actually probably more straightforward, right? Usually it's demand and supply: we have a technology, the customer has a problem, and they try to solve the problem.
So it's actually much easier in theory. Right, a little more cut and dried. In practice, sometimes there are certain issues.
Of course, every time we deploy certain tools, we go through the responsible AI piece: communication, explainability, et cetera. However, to add to that, most of the organizations that are using AI, or at least know what to do with AI, actually know the basics of responsible and ethical AI. They try to know them, they try to stick to the rules.
Some of them have executive functions for ethical AI, or in terms of governance processes, they start adding additional processes. That's what I see in the market: it's not completely ignored. People are trying to make an effort.
And I'm sure when you go on LinkedIn, you can see lots of posts specifically talking about what each organization is doing to be ethical, to be mindful, to make sure that employees know the consequences of using different tools outside of work or within work. Okay, yeah. So it is good to hear there's some general wariness on the part of companies deploying this tech.
In your case, when you share with them some of that communication around responsible AI, explainability, et cetera, is that sort of like a warning label on a carton of cigarettes? Or does that imply some actual physical guardrails? It depends on the tools and on the request, because not every AI and machine learning application requires those discussions. Some of them might be as simple as producing a report, right? Which does not even have any personal data, just numbers.
Where would you draw that line? Which are the ones that require that sort of warning? Definitely the ones that touch end-user communication and impact customers, and the ones using personal identification data, which can be anything from race to age. Also the ones that do some sort of decision-making, this is one of the main things, where that decision-making can potentially influence a human being or some sort of outcome. So usually every organization, if they don't necessarily have a kind of center of excellence in AI ethics, will have a governance department that goes through a certain risk metrics mechanism, where they assess the tools and the technology and define which ones have to go through more scrutiny on ethical AI.
So in addition to the responsible piece, what are some of the other common challenges that companies face when they are plugging in some of Microsoft's tech? I don't think it's only Microsoft tech. Any tech you plug into an organization will have some sort of challenges, right? Whatever you're going to do, whether you're going to change SAP or anything else, in any technology project there are three components that create complexity: people, process, and technology.
And it all revolves around those. In the people component, you will have change, right? Change management: people will not be happy with the new processes, people will not like it, or a certain level of education will be required, along with stakeholder management and support processes. Then there is the change of the processes themselves, the organizational restructuring, the change in the way things used to be done, the regulation, the compliance. And of course the technology: the cost, the time, the resourcing.
Right, right. Yeah, that makes sense. That's helpful.
What do you think, when you think of those three things, people, process, technology, specific to deploying AI and ML, how do those challenges start to show up in those three buckets? Well, the first might be process: there are no set stage gates within the development of AI, for example, the implementation of AI, the training of AI, the monitoring of AI, because it's continuously progressing, right? The tools and technology keep changing, so the data keeps changing, right? It's not like one chunk of data you put into the system and it just uses it. No, it's a continuous change of the data, continuous labeling. So I think from the process perspective, certain stage gates are needed for responsible and ethical AI.
There is also the complexity of the models themselves, the cost of the data they're using, and the cost of the computing power, which is increasing. It's huge. Not that many people actually talk about how expensive machine learning and all of the AI algorithms are; even just to do a POC, a proof of concept, takes resources, takes money out of organizational pockets. So that's the challenge they're experiencing: is it worth it? I'm glad you mentioned the expenses associated, because we've made some hay on this show about the importance of democratizing access to AI, for example.
However, when there are these large expenses associated with it, how much can you democratize? Because the barrier to entry then is a significant financial one. So do you expect some of these costs to decrease over time, as we've seen with other kinds of technology? It's a really good question. And let's think about it together, right? The demand for AI is increasing.
The data is just skyrocketing in terms of volume, right? The computing power to process the data is increasing as well. The market is asking for the tools to be delivered quicker and to be available quicker as well. So, I mean, obviously some technologies that are already developed and have been on the market for some time will get cheaper, but the new, shiny ones will always be expensive.
So that's a normal rule. However, we can assume that some computational work, some templates, data manipulation, data loading, those types of activities might get cheaper, because people will get more familiar with them. The resourcing cost might also be different, due to the fact that so many people are now studying data science; anyone you ask about what they want to do in their career, it's always going to be something around information processing, data science, computer science, et cetera, right? So we will have more and more engineers available. Who knows, maybe, maybe not, but I assume more people are actually choosing those degrees.
So yeah, some costs will go down, some might not. I mean, I've seen examples where organizations with substantial resources, one of the Fortune organizations, simply said that they won't be able to innovate because it's too expensive. And let's leave the big giants out of it, right? Microsoft, Google, NVIDIA, AWS, Meta, all of them have a different mindset, a different environment, and even the niche startups, the data analytics startups, have a different way of living and surviving.
But if we're talking about traditional companies, which can still be big, banking or whatever, big entertainment companies, they also have profit and loss, right? They will see that some of the investment in this cool innovation and these shiny things might not necessarily pay back. So what do they do? They need to compete, but at the same time, it's not necessarily that easy. So that's a huge question that usually ends up in the boardroom.
And sometimes those projects are not necessarily, as you said, democratized. In that case, where the expense may be questioned, where it's like, you know, is this worth it? Can we justify these costs? That strikes me as someone not understanding the technology. It's like, this is a shiny, fun tool; saying you have AI, saying you have generative AI, is a great way to raise some money from some VCs.
But what impact is it having on your business? I suppose that's probably where you come in, in your role: you're like, okay, what is the actual need for this technology? Can we set you up to succeed? I think that's the role of quite a few technology consulting firms and technology organizations as well: to actually define the problem and provide a solution that will help.