How AI Happens

Alberta Machine Intelligence Institute Product Owner Mara Cairo

Episode Summary

Mara explains how the team at Amii decides when ML is and isn't appropriate to use. We discuss Amii’s process and its ultimate goal, along with common challenges their partners face when implementing AI solutions.

Episode Notes

Key Points From This Episode:

Quotes:

“Amii is all about capacity building, so we’re not a traditional agent in that sense. We are trying to educate and inform industry on how to do this work, with Amii at first, but then without Amii at the end.” — Mara Cairo [0:06:20]

“We need to ask the right questions. That’s one of the first things we need to do, is to explore where the problems are.” — Mara Cairo [0:07:46]

“We certainly are comfortable turning certain business problems away if we don’t feel it’s an ethical match or if we truly feel it isn’t a problem that will benefit much from machine learning.” — Mara Cairo [0:11:52]

Links Mentioned in Today’s Episode:

Mara Cairo

Mara Cairo on LinkedIn

Alberta Machine Intelligence Institute

How AI Happens

Sama

Episode Transcription

Rob Stevenson  0:00  

For the first time I'll call Jurassic Park on this podcast: you asked if you could; you never stopped to ask whether you should.

 

Mara Cairo  0:06  

That is on one of our slides, in our initial workshops. Absolutely.

 

Rob Stevenson  0:15  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Joining me today on How AI Happens is the Product Owner of the Advanced Technology Team over at the Alberta Machine Intelligence Institute, and also a recent adopter of Spotify, Mara Cairo. Mara, welcome to the podcast.

 

Mara Cairo  0:57  

Thanks for having me, Rob. Happy to be here.

 

Rob Stevenson  1:00  

Really, really pleased you are on. You, like so many of my guests, have had a pretty interesting path to your current role. So I like to start there, in the interest of getting to know you, but also just kind of documenting all the cool ways people are carving a path through this space. So would you mind sharing about your background and how you wound up at Amii, as it is called to those in the know?

 

Mara Cairo  1:19  

Yes, certainly. So I'm actually an electrical engineer by trade. I went to the U of A and I specialized in the nanoengineering option back in about 2010, when nanotechnology was all the rage; the hype was really interesting to me. And so I spent about 10 years at a fairly traditional engineering firm, working on miniaturization of things: working with companies who had a proof of concept and wanted to take it to commercialization, and we were helping them to scale things down. So I spent about 10 years in a cleanroom, working with some pretty cool equipment, having to gown up and protect the things that we were working on from ourselves, because we're dirty, and the things we were building are really, really small and can be destroyed by a little speck of dust. And honestly, I was just getting a little bit bored with the hands-on lab work. I realized that I actually preferred the work outside of the lab, in meeting rooms with teammates and clients, and just kind of the bigger-picture stuff. So that's when I started pursuing a project management designation. While still at that company, I was moved into a more PM role, and then made the transition to Amii as a machine learning project manager about three and a half years ago now. So I transitioned from a very hardware kind of background to more AI and software, and specifically machine learning, which just by nature is managed differently than more traditional industries. So I learned a lot along the way. And then somewhere during my time at Amii, I was moved into this product owner position. So essentially, I am now leading a team of machine learning scientists and project managers who are working with our various clients from industry, all hoping to adopt machine learning and apply it to their unique business problems.

 

Rob Stevenson  3:28  

What do you mean when you say ML is managed differently?

 

Mara Cairo  3:31  

So I think the way that Amii approaches it, definitely, is: we generally know the problem we want to solve, we know kind of the end goal, but we don't necessarily know how we're going to get there. So it's harder for us, at the start of a project, to do a really detailed project plan or roadmap. So Amii has developed a machine learning project management tool called the MLPL, and really it just allows for a lot more flexibility, iteration, and experimentation. You start with the data and the business problem, and then you move on to the machine learning. But you might actually realize that the results you're getting don't really make sense; the ML model isn't super accurate, maybe because of the data, or maybe because you don't have the right business problem scoped out. So at any point, you can kind of go back one or two steps. And it may seem like you're starting over, but you're really just making sure that you get to the result that makes the most sense for the problem that you're trying to solve. So we just allow a lot more room for iteration, experimentation, and exploration, as opposed to a more traditional project management framework, which is really like, we're going to spend a week doing this, a week doing that, and then we're going to get to this end result. We allow for more flexibility in the whole development process.  

 

Rob Stevenson  5:00  

Okay, that makes sense. Does that change depending on whether the particular project has much higher stakes, for example? Well, I probably don't need to give an example. Like, there are certain things where it's like, oh, if it's X percent more efficient, that's great for the end user. And there are other things where it's like, if it's an autonomous vehicle, it needs to know whether something is a brick or a plastic bag.  

 

Mara Cairo  5:20  

For sure, yeah. And something that we really need to understand at the beginning is, what is success here? What is the metric we're trying to measure? And we really do need our partners' input on that, because that looks different. Sometimes it is just improving a process's efficiency by 10%, but you know, sometimes the stakes are a lot higher. So the other way that we approach projects is really, really collaboratively with the domain experts, who are able to sort of take these results and translate them back into the domain and to the end user, and validate that what we've built is actually going to make an impact and is actually better than maybe a more traditional rule-based way of doing things. So, lots of conversations and collaborations to make sure that the product we're delivering is actually of value and makes sense for the different business cases.

 

Rob Stevenson  6:16  

Is it sort of like an agency model, except that you focus on ML projects? As opposed to, I'm used to knowing about, like, marketing agencies, ad agencies, this kind of thing. Is it a similar approach?

 

Mara Cairo  6:27  

Like our business model? Yeah. So Amii is all about capacity building, so we're not, I would say, a traditional agent in that sense. We really are trying to educate and inform industry on how to do this work, with Amii at first, but then without Amii at the end. We want to turn our industry partners into AI companies who are capable of deploying and continuing to develop these products in the future. Because we don't really see AI adoption as something that's done in a project, a 12-month-period type thing, and then you just carry on. It's more of something that you're adopting; it's a tool you're adopting, and something that needs to be maintained and explored and carried forward. So Amii has different products and services designed to meet different clients' needs, depending on where their maturity level is at. But all of our products and services are really designed with that capacity building in mind. So it's enabling our partners to see how we're doing things, learn from us, and then go on and do that work themselves in the future.

 

Rob Stevenson  7:47  

In the earliest stages there, perhaps the discovery process, where do you begin to understand what folks need? What kind of questions do you ask, and what are you looking for?

 

Mara Cairo  7:56  

Yeah, so we need to ask the right questions. That's one of the first things that we need to do, is to explore where the problems are. We don't want to just be applying AI and machine learning for the sake of it; we want to make sure there's actually a viable problem that can be solved with this technology. So there are teams at Amii who really focus on those initial conversations. We have some brainstorming workshops, we call them AI planning and initiation workshops, where we really start with, okay, tell us your business problems, and bring a cross-section of your organization to us. Let's have those conversations with people from different teams, different business units: where are you struggling right now? And then let's talk about maybe the types of data that you're collecting, or you've collected over the years, and maybe we can start linking those business problems to some of the data that you have to solve those problems. So we have a whole workshop series designed around those very early brainstorming, initiating conversations. And then after that it's, okay, we think we've identified one business problem that is ripe for the application of machine learning. Let's dive into that even more. Before we do any modeling, let's make sure we really understand this, we understand the data availability limitations, let's just make sure the problem is scoped properly. And then we would go into a more hands-on project with the advanced technology team, where we're actually building out these ML models, but it's on a business problem that we really understand and have a good grasp on, because of all of these initial steps we've done to make sure we've selected the right business problem to work on.

 

Rob Stevenson  9:54  

Selecting the right business problem is interesting, because machine learning is this magic wand that particularly non-technical business units want to wave over something, right? And venture capitalists, et cetera. So it's kind of the opposite of this podcast to discuss when you would not use machine learning, but it could be valuable. Could you give an example of maybe where you're like, hey, this is not an appropriate use for ML? And then we can change the title of this episode to How AI Doesn't Happen.  

 

Mara Cairo  10:27  

I love that. Honestly, that's why we put so much effort into these beginning stages: because we don't want to work on the wrong type of project. So I think there are lots of cases of that. Sometimes a problem, or a solution, is performing just fine with a rule-based approach, and we don't feel like the application of machine learning will be able to outperform that. The other thing with machine learning is it's not always super explainable. So if we're dealing with more sensitive subject matter, if we're dealing with machine learning solutions that really have an impact on people and, say, their access to funds or mortgages or financing or anything like that, we do have our ethics built into all of our products as well. So if we're not comfortable with machine learning making a decision without a human being in the loop, in that decision-making process, we would also turn that type of use case away. There are lots of examples of bias, inherent bias, in datasets and whatnot, and if we know that the result of our machine learning models is going to be biased because of the data that we have available, that might also be a conversation. So again, this is why we put so much effort into those initial conversations, to make sure we've narrowed down to the right problem. But we certainly are comfortable turning certain business problems away if we don't feel like it's an ethical match, or if we just truly don't feel like it is a problem that will benefit much from machine learning.

 

Rob Stevenson  12:16  

So it's less a matter of, oh, we can't use ML on this, and more a matter of, we won't. For the first time I'll call Jurassic Park on this podcast: you asked if you could; you never stopped to ask whether you should.  

 

Mara Cairo  12:29  

That is on one of our slides, in our initial workshops. Absolutely. Yeah. And sometimes, you know, to your point, we can't. Like, we might try, because we love to experiment and iterate, and like I said, we don't always know what is going to come at the end of these projects. So if it's something that we're curious enough about, we'll certainly try. But we always like to have that baseline to measure against, so at the end we can say, actually, maybe the rule-based method is the right approach here. But again, the reason we really encourage experimentation and exploration is because we don't always know what we'll get at the end. So if it passes what we are willing to do and what we want to do, we will experiment and see if we can outperform some more traditional approaches to problems.

 

Rob Stevenson  13:24  

Because you work with a bunch of different companies, you get to look across a wider swath of the industry. And so I'm curious if you see common challenges popping up for folks. What is something typical that people are coming up against when they're adopting AI?

 

Mara Cairo  13:41  

Well, I think you kind of already mentioned it: this magic wand approach. So the first thing that we want to tackle is that education and literacy piece, making sure that industry is fully aware of what AI and machine learning actually are and what they aren't, and the types of problems you should apply them to, and maybe the types of problems you shouldn't. So we do a lot of work in that initial literacy piece. And then from there, you know, there's a very common problem around data. I think different people have different understandings of what good, clean data is. Obviously, not having data available can often be a roadblock with some of the machine learning approaches that we use. So we often spend lots of time, again, in those initial conversations, ensuring that the data is in a state that will be easy to build models off of. And in that data piece, again, if you're trying to predict something but you don't actually have any examples of it happening in the past, that gets really tricky as well, because, you know, machine learning is really just using historical data to make predictions in the future, and if you don't have that historical track of what you're trying to predict, that can be complicated as well. So there are lots of questions around that data readiness piece. I think another challenge we see is just industry not always understanding the skill sets required and the types of people that you need to do this work. Bigger machine learning projects do require different skill sets coming to the table, especially when we're looking to build something more state-of-the-art and custom. I think, oftentimes, companies have maybe one data scientist who is tasked with doing the whole machine learning pipeline, and that might work for fairly simple, more off-the-shelf models that they just want to implement. 
But when you're looking at building off of those and customizing your models, and making something a little bit more state-of-the-art, there are different skill sets that are required. Data scientist is certainly one of them, but then, who is building the machine learning models? In our case, we have our machine learning scientists doing that work. And then when we talk about deployment, that's another skill set; that's maybe a machine learning engineer or software developer. So there are lots of people that need to be involved, as well as domain experts, too, because ultimately we need that domain perspective to make sure that the models we're building actually make sense for, again, the problem that we're trying to solve. Other challenges, I guess, are around strategy. If there isn't an understanding, or buy-in, across an organization that's looking to adopt this technology, there can be lots of roadblocks there, because you may need to ask for additional resources, additional tools, and whatnot. So you do really need that buy-in. We see that being a barrier in a lot of cases: when it comes down to getting the right person to sign off on the project, but then just not really understanding the importance of this work or the implications of it. And again, you know, with that similar mentality of exploration, it's sometimes harder to get that buy-in, because we do want to be able to explore and experiment, and we don't necessarily know the accuracy of the model that we're going to deliver for anyone. So having them understand our approach is important as well. So that's just a couple of the challenges, but we do have, again, in our products and services, ways to mitigate those throughout the adoption cycle.

 

Rob Stevenson  17:39  

Just a few of the challenges folks are in for, right? They're in for a world of hurt, it sounds like.

 

Mara Cairo  17:44  

I know, it might sound bad. But obviously, there are a ton of benefits and a lot of fun in this type of work as well.  

 

Rob Stevenson  17:53  

What's interesting is that a lot of it is not specific to the domain. Like you mentioned, just hiring is a challenge. Like, you don't even know who are the types of folks you need to be out there interviewing and hiring, what talent even looks like, or what are the roles you need to bring this project into reality. That's an organizational, HR, business-leadership problem more than it is a technical problem.

 

Mara Cairo  18:14  

Absolutely. And especially if you're a company who's making their first hire in the machine learning space, and they don't even necessarily have the internal capacity to do that technical assessment. They hire someone, they bring this machine learning scientist, engineer, whatever, into the organization, and then that person could potentially end up a bit on an island. They don't have a lot of support, or peers, or that mentorship opportunity, which is obviously really important no matter what industry you're in, and this field is no exception. One of the ways that Amii mitigates that is we'll actually help companies with their first hire, and we keep them as Amii employees for a time frame, working on our client's problem. But we have a lot of that mentorship and support built into our day-to-day work, so that hire is receiving a lot of support throughout the way. And then at the end of a project with Amii, the client is empowered to permanently hire this person, who's been able to pick up a lot of that support and mentorship throughout the project with Amii.  

 

Rob Stevenson  19:29  

Interesting. So they're there kind of getting the talent on a trial basis. They're seeing them execute on this project as an employee of Amii, and then they have this opportunity to maybe bring them on full-time afterward. It's like an agency-to-hire model.

 

Mara Cairo  19:40  

For sure. Yeah, and it de-risks that first hire, right? So we don't do standalone recruitment services; it's really all around a project, and executing on that project, and ultimately delivering something of value beyond the talent. But the work doesn't get done without the right talent, so that's obviously a really, really important piece of it. And more often than not, at the end of our projects, our clients are fully bought in. They've built those relationships with the talent, the talent has great context and understanding of their business and the problem, and it's just an obvious next step, then, hiring these people.  

 

Rob Stevenson  20:20  

Where are you finding those folks?  

 

Mara Cairo  20:22  

Yeah, so we're pretty lucky. We have a pretty deep connection to the University of Alberta, which is sort of where Amii was born, about 20-plus years ago. So we're still really connected to the research community at the U of A and the Department of Computer Science, and oftentimes the talent that we find are students of some of our Amii Fellows. These are the folks who have been paving the way for decades in the areas of machine learning and reinforcement learning, and we want to bring the students that they're supervising into our projects with industry. So oftentimes we don't have to look too far for the talent. But we certainly do, when we're recruiting for these opportunities, cast a really wide net; we get hundreds of applications. And, you know, we're looking, ultimately, for the right person with the right skill set for whatever business problem we've defined with our industry partners. So we're really lucky, because we do have this talent pipeline coming out of the U of A, and they're receiving world-class education from some of the founders of this technology. So that's where we go as a starting point.  

 

Rob Stevenson  21:45  

Yeah, it is an excellent pipeline, for sure. Do you think that the traditional university experience is still necessary for this career? Or, in this case, did it just work out really well?

 

Mara Cairo  21:55  

Great question. So I come from an engineering background, which, at least in Canada, is fairly heavily regulated: you need to have that undergrad degree, you need to register with a professional association and, you know, log a certain number of hours, and it can be fairly restrictive. And what was really interesting for me, when I made the transition from a more engineering, hardware background to the world of computer science and AI, is I realized that not everyone comes from that really traditional background. I think just the open nature of computer science is much different from the world of engineering that I was used to. So oftentimes the talent that we're finding don't come from that traditional background. A lot of them actually come from engineering, or other fields, and they were introduced to machine learning because they had a problem that they were working on in their master's or their PhD, and machine learning actually ended up being the right tool for them to solve that problem. And so that kind of gets them interested, and then they just continue to pursue that more as a passion; they take up the many open courses that are available to upskill in this area. So I certainly don't think that a traditional degree and master's is always the most important thing. More importantly, it's understanding how to apply that fundamental knowledge to real-world problems. That's what we're looking for when we're recruiting for these positions: that translation. And it doesn't always come from academia, certainly. So that's also what really excites me about this field: it's just more open and welcoming than the field I was in beforehand.

 

Rob Stevenson  23:51  

Well said, Mara. This has been really great chatting with you. We are creeping up on optimal podcast length, so all there is left to do is say thank you. I'm really glad you got to be here. It sounds like you're doing fascinating work over there at Amii. So thanks for being here and sharing your expertise with me; I've loved chatting with you today.

 

Mara Cairo  24:06  

Thanks, Rob. It was great chatting with you as well. Thanks for having me.

 

Rob Stevenson  24:12  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, ecommerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.

 

Transcribed by https://otter.ai