How AI Happens

Mercedes-Benz Executive Manager for AI Alex Dogariu

Episode Summary

Alex and his team are keen on using AI to increase efficiency at Mercedes-Benz while being hyperaware of using this new technology in the most responsible way possible. In our conversation, we learn why Mercedes-Benz thought it necessary to have an entire team dedicated to AI, how they assign AI teams to specific departments in the company, and how the output of descriptive analytics has been improved by machine learning.

Episode Notes

Mercedes-Benz is a juggernaut in the automobile industry, and in recent times it has been deliberate in advancing the use of AI throughout the organization. Today, we welcome to the show the Executive Manager for AI at Mercedes-Benz, Alex Dogariu. Alex explains his role at the company, tells us how realistic chatbots need to be, describes how he and his team measure the accuracy of their AI programs, and explains why people should be given more access to AI and time to play around with it. Tune in for a breakdown of Alex's principles for the responsible use of AI.

Key Points From This Episode:

Tweetables:

“[Chatbots] are useful helpers, they’re not replacing humans.” — Alex Dogariu [09:38]

“This [AI] technology is so new that we really just have to give people access to it and let them play with it.” — Alex Dogariu [15:50]

“I want to make people aware that AI has not only benefits but also downsides, and we should account for those. And also, that we use AI in a responsible way and manner.” — Alex Dogariu [25:12]

“It’s always a balancing act. It’s the same with certification of AI models — you don’t want to stifle innovation with legislation and laws and compliance rules but, to a certain extent, it’s necessary, it makes sense.” — Alex Dogariu [26:14]

“To all the AI enthusiasts out there, keep going, and let’s make it a better world with this new technology.” — Alex Dogariu [27:00]

Links Mentioned in Today’s Episode:

Alex Dogariu on LinkedIn

Mercedes-Benz

‘Principles for responsible use of AI | Alex Dogariu | TEDxWHU’

How AI Happens

Sama

Episode Transcription

Alex Dogariu  0:00  

This technology is so new that we really just have to give people access to it and let them play with it. We give them guardrails from a legal perspective, from a data privacy and data security perspective. But other than that, let people use it.

 

Rob Stevenson  0:16  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens is the Executive Manager for AI and automation topics over at Mercedes-Benz, Alex Dogariu. Alex, welcome to the show. How are you today?

 

Alex Dogariu  0:54  

I'm good, Rob, thank you very much. Very happy to be here.

 

Rob Stevenson  0:56  

Yeah, pleased to have you. Remind me, where are you broadcasting in from?

 

Alex Dogariu  1:00  

Today it's from Munich, as I live near Munich, but I work in Stuttgart for Mercedes-Benz.

 

Rob Stevenson  1:05  

Gotcha. So it's the end of your day and the beginning of mine, but we're a global, international podcast, so these are the sacrifices we both, in this case, make. Thank you again for being here. I have so much I want to speak to you about, but let's get to know you first. Would you mind sharing a little bit about your background and how you wound up in your current role at Mercedes-Benz?

 

Alex Dogariu  1:22  

Sure, totally. So I didn't have a straight career path, starting out in IT or engineering and then moving into AI here at Mercedes-Benz; I had more of a self-determined path. I started out as a strategy consultant, always in marketing and sales, working for Accenture, and then moved on to be a managing director for a repricing company — we did repricing for more than 10 million products worldwide. That's where I first got in touch with algorithms, AI, and that kind of stuff. At some point, I wanted to get back into corporate and started out back in 2015 at Mercedes-Benz Consulting. There, I basically founded the AI, analytics, and data practice together with another colleague, and we grew it to a certain extent. Then I was asked to join Mercedes-Benz directly, in charge of digital topics and also AI topics. And that's when we started our journey back in 2019, directly at the corporate headquarters.

 

Rob Stevenson  2:20  

So the need for this team at Mercedes-Benz, or any company, really, may be obvious to the listeners, or to people like yourself who understand this technology deeply. But could you take us back to that moment when you were consulting for Mercedes-Benz and you took stock of the organization and thought, okay, what you really need is a team dedicated to XYZ? That was

 

Alex Dogariu  2:40  

the time when a lot of people were experimenting with their first algorithms, first regressions, and also working with chat and voice bots. Especially back in 2015, 2016, there was a real hype around chatbots and AI, driven also by IBM Watson. So we could definitely see that everybody doing the same projects over and over again would not make much sense, and that we'd have to come to more of a platform thinking — a center of competence, center of excellence setup — to bundle the little knowledge that there was in the experts, and also the projects, together into one organizational unit. So that's when we started building the first, you know, hubs, where we tried to enable AI projects to run smoothly and leverage the knowledge, as well as bring, for example, more than one chatbot onto a chatbot platform — so not everybody has to do the same work over and over again, but can leverage what others have already built and build upon it. From there it basically took off in several directions. Analytics got quite big; descriptive analytics had been there for a while. But as you know, with descriptive analytics, you do another analysis and you have a lot of dashboards — they're not really AI-driven — and what do you do with the insights? Nothing. With machine learning, you could do first algorithms for recommendations, for individual one-to-one marketing, and whatnot. And that's when we started really building products that have real impact on the business that we see, and even automate stuff, and that kind of thing. So

 

Rob Stevenson  4:06  

what are some examples of the output of descriptive analytics once you had kind of tacked on some machine learning to really drive business insights?

 

Alex Dogariu  4:13  

Well, as I said, descriptive analytics, per se, is just data visualized. In a certain sense, maybe you make certain correlations visible through some dashboards, but they don't really add value. The cool thing is when you automate stuff and have a machine learning algorithm that uses supervised learning to improve certain kinds of forecasts — for example, for Mercedes-Benz, how many cars you might sell in market XYZ — and you would compare that with your traditional forecasting methods. Or you would use it for pricing, or for discount policies — many, many things where you could use AI to influence, you know, the bottom line of your business. And in automation, you could use bots to automate customer service — when, for example, there are no agents available 24/7, on weekends, holidays, after business hours, you want to have somebody who's still there. And a website can be quite a tough thing to navigate to find the right information, so a bot might help with that. There you could create some efficiencies and also improve customer experience. So the fields are quite diverse, I'd say. And if you then go into production and the purchasing department and whatnot, you can improve supplier management, you can optimize your production and supply chains to reduce downtime — you can identify error patterns early on and then find fixes quite quickly by using AI. So there are a lot of applications throughout the company and the value chain to basically increase efficiency, and in sales and marketing, of course, increase revenue.
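The forecast-versus-baseline comparison Alex mentions can be sketched in a few lines. This is a hypothetical illustration, not Mercedes-Benz's actual method: it fits a simple linear trend to made-up monthly sales figures and compares its error against a naive "repeat the last value" baseline.

```python
# Hypothetical sketch: compare a simple supervised forecast against a
# traditional baseline on monthly sales data. All numbers are made up.

def fit_linear_trend(series):
    """Least-squares fit of y = a + b*t over the series indices."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    return a, b

def mae(actual, predicted):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Twelve months of unit sales in a hypothetical market, plus three held-out months.
history = [410, 420, 435, 440, 455, 460, 470, 480, 490, 500, 515, 520]
actual_next = [530, 540, 555]

# Baseline: naive forecast repeats the last observed value.
baseline = [history[-1]] * len(actual_next)

# "ML" forecast: extrapolate the fitted linear trend into the next months.
a, b = fit_linear_trend(history)
trend = [a + b * (len(history) + i) for i in range(len(actual_next))]

print(f"baseline MAE:    {mae(actual_next, baseline):.1f}")
print(f"trend-model MAE: {mae(actual_next, trend):.1f}")
```

On this toy data the trend model beats the naive baseline by an order of magnitude; the point is the comparison methodology, not the model, which in practice would be a proper supervised learner with many more features.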

 

Rob Stevenson  5:42  

So we've kind of drawn a circle around it a little bit, but we did jump into the weeds here. Would you mind just kind of explaining how you characterize your role? I want to make sure we home in on that before we get too specific.

 

Alex Dogariu  5:52  

My role basically consists of two important tasks. One is the whole strategy around data and AI. I'm trying to do that for marketing and sales especially, but I'm also tagged into some larger corporate initiatives that do it for all of Mercedes-Benz. The second part, next to strategy, is the whole operational part, where I especially work on conversational AI and automation. There we're leveraging classical NLU, speech-to-text, and text-to-speech to build bots of all sorts and styles, and we offer those services to all of the company. At the same time, we mix in automation via robotic process automation, which is not necessarily AI but can include it, depending on whether you have image recognition or workflows that are not deterministic but rather heuristic. And since about 15 to 18 months ago, we also mix in generative AI where necessary, to do content summarization and information retrieval and, you know, creative work and whatever is necessary. So my job is strategy, conversational AI, and automation. And in conversational AI and automation, it's, I'd say, a triangle of mixing conversational AI with automation and generative AI to build solutions that mostly, I'd say, create efficiencies, for my part.

 

Rob Stevenson  7:14  

With chatbots — I have sort of an existential question about chatbots. How realistic do they need to be when it comes to the conversational part? Do they need to, like, pass the Turing test? Or is it more important that you just get the user to where they're going? I'm sort of wondering, when a user encounters a chatbot, is it important that they think it's a human, or is it more like, I'm talking to a bot, here's what I need to say to get where I need to go? What's your take on that?

 

Alex Dogariu  7:39  

We have the firm belief that we need to tell people that this is a bot and they're not talking to a human being. For us, bots are really a helpful, you know, assistant that can help, as I said, outside business hours, on weekends, holidays. And for me, it doesn't have to be human, as long as it gets you to do whatever you want to do. And it's not just fetching information and providing you with an answer — like, can you help me, how do I pair my phone via Bluetooth with my head unit? That's a quite complicated process; you have to do a lot of steps on your phone. Once you've done it, it's super easy, but, you know, as with everything new, you need to understand the rundown of it. And you don't want to look into the owner's manual, you don't want to go to the websites — you just want to get an answer for that specific problem for your car model. Or if you want to know, for example: hey, I'm thinking about buying an electric car, what is the range of the car? Does the EQB or EQC have a higher range? What's the trunk volume? For hybrid cars, for example, you'll obviously always compare trunk volumes, and that kind of stuff. So there are a lot of things about range and whatnot in there, but a bot can retrieve data way faster than any human and help you in that spot. There's also the transactional stuff. So you not only use it to retrieve information efficiently, but you can also do stuff like: I want to book an appointment, I want to buy this car, where's the trade-in offer, what is the nearest dealer to me? And then, depending on the use case, it can forward you to the right form, or it can do it for you. That's the transactional stuff. And then the third part is, of course, navigating the website. We all know the search field — every website has it — but they're usually not very good at understanding your intent, what you want. So that's also one thing that bots can help a lot with.
And then there's the fourth component to think about: bridging the gap to communication with humans. So when no humans are available, because it's, I don't know, 2 a.m., you can still book an appointment to have a personal consultation with an agent or a salesperson or a service agent whenever you want to, and can use that tool to do it. So I'd say they're useful helpers; they're not replacing humans, at least not as of now. And they don't have to be perfectly human — everybody understands that they're not there yet. However, with the ascent of generative AI and its capabilities, the quality of the bots has increased tremendously. By now, GPT-4 and PaLM 2 and so on are very, very good at mimicking human thought processes and emotions, and generating content that sounds like a human being. That changed the whole perception of bots, but also the expectations. So I think customers or users by now expect more from bots than just "get XYZ done for me" or "retrieve that information." So it might be that we need to add more human emotions and a more natural way of communicating to these virtual assistants than we might have done two years ago.

 

Rob Stevenson  10:42  

With ChatGPT, there's sort of a do-your-own-research wink and nudge that kind of happens alongside it — like, you cannot just take this output as the word of God. So with your own chatbots, how are you making sure that the information and solutions they're presenting are accurate? Because, like you say, it's not as simple as, how do I reset my password? It's, I want to order a car, I want to schedule an appointment, I want to connect to Bluetooth. These are more complicated things it's solving. So how are you measuring that accuracy?

 

Alex Dogariu  11:10  

That's a very good question, and one that has been concerning us for the past 18 months or two years, ever since we started looking into foundation and generative AI models. On the one hand, we don't have a bare, naked large language model talking to users. We have a classical NLU approach with bots that does a pre-filtering of incoming inquiries. And only when these inquiries fit the data sources that fed the large language model — not the foundation model itself, but what we fine-tuned it on — do we forward the request to the foundation model, for example GPT-4 or whatever, to answer that question. So that pre-filtering already makes sure that we don't send it questions that it cannot answer. Now, for those that we think it can answer, we have built in several quality gates to make sure the answers are correct. There's a scoring model that compares the generated answer to a predefined data set and also to the data source. So the first step is, of course, information retrieval, then maybe a summarization, and then you have that answer from GPT. You then take it and compare it to the original chunks that you retrieved from the original data set, and make sure, via the scoring model, that they have, you know, closeness or similarity. There are services out there that do that — that extract, for example, entities from generated text and then also look at the links to see if they're valid, if they're not, you know, hallucinated; they check whether the whole sentence structure makes sense from a logical perspective, and that kind of stuff. But we do it through several quality gates and scoring models. And, of course, we can't test for everything. So we have built an automated testing engine that fires as many questions as possible at the AI, has it generate answers, then has those answers reviewed by another system, scored, and compared to each other.
So we can build up a data set that tells us how accurate the system is. And then there are thresholds — classical NLU would have something like a confidence score, and you say, if you're not sure, below a level of 0.7 or something, don't answer the question; say, I don't know, please rephrase it. You can do similar things here as well to make sure that whatever output comes is accurate. And then you can also adjust the temperature, as everybody who works with these generative AI models is probably aware, and tell it, you know, how far it is allowed to deviate from the original data source. That's just a few of the things we do. Beyond that, we do a few more which we have not or do not want to talk about, as we developed those with a lot of effort to reduce the amount of hallucinations and wrongful information. By now, I'd say we don't have hallucinations. But sometimes, depending on the information that is retrieved from the original source, some things might be left out. So you get an answer that's correct, but it might miss maybe one little part. That's something we're still working on — making sure we collect everything that's relevant to answer a user's question. And that's more difficult, because information retrieval is an art of its own.
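The similarity quality gate Alex describes could be sketched roughly like this. It's a toy illustration, not the production system: the bag-of-words "embedding" is a stand-in for a real embedding model, and the 0.7 cutoff mirrors the NLU-style confidence threshold he mentions.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector -- a stand-in for a real embedding model."""
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def quality_gate(generated_answer, retrieved_chunks, threshold=0.7):
    """Let the answer through only if it stays close to some source chunk;
    otherwise fall back, like an NLU bot below its confidence cutoff."""
    answer_vec = embed(generated_answer)
    best = max(cosine_similarity(answer_vec, embed(c)) for c in retrieved_chunks)
    if best < threshold:
        return "I'm not sure -- could you rephrase your question?"
    return generated_answer

chunks = ["Open the Bluetooth settings on your phone and select the head unit."]
faithful = "Open the Bluetooth settings on your phone and select the head unit."
drifted = "Your car supports wireless charging on all trims."

print(quality_gate(faithful, chunks))   # passes the gate
print(quality_gate(drifted, chunks))    # falls back to the rephrase prompt
```

A real pipeline would score against semantic embeddings rather than word counts, and combine this with the entity checks and link validation mentioned above.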

 

Rob Stevenson  14:28  

That's a nuanced problem, because in that example, it might be enough information to answer the question at hand, right? So you have to think a little deeper than that question and answer to say, hey, you need this additional information. Is that right?

 

Alex Dogariu  14:41  

Exactly. And I'll give you an example. When a customer asks us a question about a technical detail or functionality of our vehicles, and the answer contains, for example, warning labels and stuff, we've marked those in the original data, and we made sure that those are always extracted and played out word for word to the user. So we definitely don't miss out on any legal or, I'd say, compulsory things that we need to display with the answer. We chop up the answer into several parts, and only some parts of it are generated; others are really copied out of the original source.
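The verbatim-versus-generated split Alex describes could look something like this. The marker, the example segments, and the stand-in "paraphraser" are all invented for illustration; in the real system the paraphrasing step would be a generative model call, and the markers would live in the source data.

```python
# Hypothetical sketch of the answer-assembly step: segments of the source
# marked as legally required are copied out verbatim, while the rest may
# be paraphrased by a model. Marker and texts are invented.

LEGAL_MARKER = "[[LEGAL]]"

def assemble_answer(source_segments, paraphrase):
    """Build the reply: verbatim for marked segments, paraphrased otherwise.

    `paraphrase` stands in for a generative-model call; it's a plain
    function argument here so the sketch stays self-contained.
    """
    parts = []
    for segment in source_segments:
        if segment.startswith(LEGAL_MARKER):
            # Compulsory text: played out word for word, never generated.
            parts.append(segment[len(LEGAL_MARKER):].strip())
        else:
            parts.append(paraphrase(segment))
    return " ".join(parts)

segments = [
    "You can pair your phone via Bluetooth from the head unit's settings menu.",
    "[[LEGAL]] Only operate the system when road conditions permit.",
]

# Stand-in "paraphraser" that would normally be an LLM call.
answer = assemble_answer(segments, paraphrase=lambda s: s.replace("You can", "You may"))
print(answer)
```

The design point is that the legally required sentence reaches the user byte-for-byte, regardless of what the generative step does to the rest of the answer.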

 

Rob Stevenson  15:20  

So it sounds like, in addition to chatbots, you're also focused writ large on enabling team members with AI tools and processes. Is that right? Absolutely. So everyone wants AI, right? It's sort of this magical Thor's hammer that everyone wants to have on their team. How do you organizationally assess whether there's an opportunity or a need to enable a specific team with a tool like this?

 

Alex Dogariu  15:46  

Again, a very good question. To be honest, I would love to answer that from my personal, subjective point of view, and not from what is maybe the official, I'd say, common understanding or best practice out there. In my opinion, this technology is so new that we really have to just give people access to it and let them play with it. Even if this sounds like — okay, but what is the efficiency gain? How do we know if they use it in a good manner or proper manner, blah, blah — we give them guardrails from a legal perspective, from a data privacy and data security perspective. But other than that, let people use it. It's like the internet: the first time you had search engines, you were lost. What do I do with it? What's a search result? I'm confused. And the same is happening now. Of course, the first thing people do is check large language models for knowledge. They ask questions — but GPT is trained on data from back in November 2021, and if it doesn't have access to the web, you get an outdated answer, or maybe a hallucinated answer. We need to first educate them on what it can and cannot do, and give them certain guardrails — that's my opinion. So step one, for me, was important and is still important: give everybody access to it in a safe environment where the data they put in is not going to train the foundation model — so we don't spill any secrets or internal data — but they can experiment with it. Out of that, we should have an incubator or center of competence, whatever you want to call it, where people can go if they have an idea or found some quick wins or use cases, where they can have a quick check: Is this sound, technology-wise? Can you do it? Is it expensive? Do we need to fine-tune, or can we use something out of the box?
And once they have that understanding and do a POC, then, you know, they can move beyond that and build their own environment, in the cloud or wherever they want to host it, and build that end-to-end product where they want to infuse generative AI. But step one is: get them acquainted with it, get rid of the fear that they have of using it, because a lot of people are still like, I don't want to touch this, this is weird, I've heard this and that about it — is this voodoo magic? Give people access to it, let them figure it out themselves. People are great at figuring out how to use stuff in a good and proper manner. And once they've done that, they will come up with use cases. We don't have to have a central team that thinks everything forward; they will come up with the right use cases, as they understand their business processes and needs best. Based on that, we will then prioritize use cases based on size, impact, costs, whatever, and then go into an implementation mode with a CoC — a center of competence, center of excellence, whatever you want to call it — where real experts help them get that live. This is my personal take on it. So that's why I put a lot of emphasis on educating people about the technology, providing them with information and access. And when I say access, it's not just, here it is, go ahead — but here are guardrails, here's what you shouldn't do, this is what you can expect as behavior from it, and here are first use cases as ideation to start out with. And once they try out the first use cases, they're like, oh my God, this is awesome, this is really good — wow, can I get it for my whole team, my department, my function? Everybody wants access. They're like, this is amazing, I didn't think it worked that well; now I understand why it answered with outdated data, now I know when it hallucinates, I know how to circumvent it — and then they come up with really cool ideas.
And once you get them into the more awesome stuff of really building custom pipelines for their products, where they can ingest their own data sources and whatnot — wow, they really take off. That's when the real fun starts for me, because you start solving real-world issues. Yeah.

 

Rob Stevenson  19:24  

There's like a eureka moment that happens for folks once they've kind of shaken off the fear a little bit.

 

Alex Dogariu  19:29  

Absolutely. And they understand the limitations — I mean, they're not dumb. They understand: yeah, you can use it for research purposes, but it might hallucinate now and then. So one colleague of mine always says it's right 90% of the time. You still need to have the capability to discern whether what it's providing you in terms of information is correct. You can't just take it. There's this example of the lawyer in the US that used it — I think he screwed up because he didn't check it. And you shouldn't do that. I mean, if you know the technology exactly, that wouldn't happen to you, because you wouldn't have false expectations, and you would know to double-check before using the output.

 

Rob Stevenson  20:06  

So are we mainly speaking about enabling the team with third-party vendors? Or is Mercedes-Benz building these tools internally? Or is it both, I suppose?

 

Alex Dogariu  20:15  

So, in my opinion, there is no one solution — there is no "just third party." I think open source plays a key role in this whole topic, and it's growing super fast. I think open source is outgrowing even what the big vendors are offering right now, be it Google, Microsoft, or AWS. This open source space is something to really keep an eye on. Of course, getting access to open source as a large corporation is always a little bit more tricky. You need to host the models yourself; it's not as easy. You know, it's not like you go into a restaurant and they cook the meal for you — it's more do-it-yourself, you need to cook at home. So there's some extra effort in it, but very often it tastes better. So checking out open source is super important. And I would say, definitely, as a corporation, as Mercedes-Benz, we're looking at both what the vendors are offering and what open source is offering, and then we fine-tune models to our needs — which we do a lot, because these models don't know our internal data; they don't have access to our data. Only when we mix that in does the valuable output come, because, as you know, whenever we don't provide a specific prompt or data, it starts to make up stuff; it starts to fill in the gaps. The more we provide it with, the better the result we get. So definitely fine-tuning models, adjusting them for our purposes and our data, and mixing in available models as well as open source and maybe self-cooked ones, is the right approach — use-case dependent, I'd say.

 

Rob Stevenson  21:43  

So any sufficiently successful or advanced or useful open source project — open source models — it feels like it will be corporatized at some point, right? Like, some entrepreneur will draw a circle around it and then package it and sell it to people. And there are benefits associated with that: now there's some kind of compliance, or some sort of expectation — like, if you find this great GitHub project that you love and you want to use, this model will be supported, you know, forever, or at least long enough for you to use it. So there are advantages to using a vendor. But it sounds like you're saying that this open source approach will always outpace that level of corporatization. There's also this notion of models becoming commoditized — these models you can kind of get off the shelf and tweak to your own needs. And because of the way the open source community operates, there's no real need to pay for them. Is that how you feel?

 

Alex Dogariu  22:33  

I don't think payment is the key component here, though it for sure plays a role as well. It's really about what the models offer in terms of quality and abilities to solve the problems that we have — the business problems, you know, that we want to tackle. So whatever model fits best, we'll take. When I said open source outpaces what the large ones are doing, I mean in terms of specialized models and so forth. The bigger ones — like OpenAI with Microsoft, or Google — will always have more resources, for sure; they will have more GPUs and whatnot. But in the end, the small ones — really, the open source community — develop custom solutions for specific problems, which the big ones will never tackle. So looking at that will always be beneficial. And in terms of open source, you said it before: one thing is that they can become outdated, no longer supported. That happened to us as well, with certain things we did in the past with machine learning — well, then you need to switch models, retrain; it's not the end of the world. What I'm more concerned about is the regulatory framework. Governments — for example, in the US or the European Union — want to have certificates for certain AI models, so that they're certified and what they can do is certified. That could be an issue for open source and for smaller startups, because they won't have the resources to go through all the tests and hoops. It's a little bit comparable to the pharmaceutical industry: if you want to have a blockbuster drug, it's so expensive to do because of all the certifications — not only the development; sometimes the certification is more expensive. And the same could happen to AI, which, in my opinion, would stifle innovation. So I would not advocate for it, though I understand the need for at least a certain type of regulation to avoid the Terminator scenario, which everybody is most concerned about. I'm not yet concerned about it.
But I trust the experts that if there is something on the horizon, we should be. As Mercedes-Benz, we, for example, care a lot about this — that's why we have our AI principles. I gave a TEDx talk about it — explainability and whatnot. There's a lot of stuff in there that is definitely applicable to generative AI, and it's not yet fully clear how these models will evolve over time. So keep an open mind: regulatory frameworks and certification might make a corporation like Mercedes-Benz move the safe way and go with the large models instead of the open source ones.

 

Rob Stevenson  24:59  

I'll make sure to link to the TEDx talk you gave in the show notes, because it's definitely worth a watch. But we have a couple of minutes left here, and I'd love to hear you share: what was it you were trying to express and accomplish, at a high level, with that talk?

 

Alex Dogariu  25:11  

I wanted to make people aware that AI has not only benefits but also downsides, and we should account for those — and also that we use AI in a responsible way and manner. Because very often we just simply apply it — it's like, you know, you've got a hammer and a nail, and you just want to fix it. But it can have bias in the data, which we now find out these large language models have, depending on the data that has gone into the training. And then you have the explainability issue, and there's a lot of stuff. So that's why I say: raising awareness, and making sure that people use the technology in a responsible manner. And since there's no generally available, you know, driver's license for using AI, I think everybody who's working on it should be a little bit more cautious about it — not just an enthusiast that applies it, you know, without thinking about the consequences it might have.

 

Rob Stevenson  26:01  

Would that be possible, a driver's license for AI?

 

Alex Dogariu  26:05  

I don't know, maybe. But then, there's no driver's license for the internet either, and people are doing all sorts of crazy stuff on the internet. So, as I said, it's always a balancing act. It's the same with certification of AI models — you don't want to stifle innovation with legislation and laws and compliance rules but, to a certain extent, it's necessary, it makes sense. Like a netiquette on Twitter, which people adhere to, with somebody cleaning up the space of too many bad actors — it's not bad to have that. I think that's a good thing. So making sure that people don't develop applications that are harmful, or use them in a not very responsible manner, is a good thing.

 

Rob Stevenson  26:45  

Yep, of course. For more on that, check out the TEDx talk. But right now, we are creeping up on optimal podcast length, Alex. So at this point, I'll just say thank you so much for being here with me and sharing some of the work you're doing over there at Mercedes-Benz. I've loved chatting with you today.

 

Alex Dogariu  26:58  

Same here — thank you very much. And to all the AI enthusiasts out there, keep going, and let's make it a better world with this new technology.

 

Rob Stevenson  27:10  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.