How AI Happens

RoviSys Director of Industrial AI Bryan DeBois

Episode Summary

The application of AI technologies within industry is growing in popularity. It has the capacity to transform traditional industries while paving the way for innovation in a rapidly evolving landscape. In this episode, we are joined by Bryan DeBois, Director of Industrial AI at RoviSys, to discuss the realm of Industrial AI and its range of applications. RoviSys is a company that specializes in providing automation and information solutions to various industries. It focuses on developing solutions where technical expertise and vendor independence intersect. In our conversation, we discuss the concept of industrial AI, its applications, and how it differs from standard AI processes.

Episode Notes

Bryan discusses what constitutes industrial AI, its applications, and how it differs from standard AI processes. We explore the innovative process of deep reinforcement learning (DRL), replicating human expertise with machines, and the types of AI approaches available. Gain insights into the current trends and the future of generative AI, the existing gaps and opportunities, why DRL is a game-changer, and much more! Join us as we unpack the nuances of industrial AI, its vast potential, and how it is shaping the industries of tomorrow. Tune in now!

Key Points From This Episode:

Quotes:

“We typically look at industrial [AI] as you are either making something or you are moving something.” — Bryan DeBois [0:04:36]

“One of the key distinctions with deep reinforcement learning is that it learns by doing and not by data.” — Bryan DeBois [0:10:22]

“Autonomous AI is more of a technique than a technology.” — Bryan DeBois [0:16:00]

“We have to have [AI] systems that we can count on, that work within constraints, and give right answers every time.” — Bryan DeBois [0:29:04]

Links Mentioned in Today’s Episode:

Bryan DeBois on LinkedIn

Bryan DeBois Email

RoviSys

RoviSys AI

Designing Autonomous AI

How AI Happens

Sama

Episode Transcription

Bryan DeBois  00:00

Even with compute coming down, you still could get into a space where it just never converges on the right solution. You know, we've seen this during training, where it gets really obsessed with this minutia over here and it's missing the elephant in the room, the most important problems it's trying to solve, because it got into kind of a local minimum and it's over here screwing around. So I don't think so, not in my whole career anyway.

 

Rob Stevenson  00:25

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Joining me today on How AI Happens is the Director of Industrial AI over at RoviSys, Bryan DeBois. Bryan, welcome to the show. How the heck are you today?

 

Bryan DeBois  01:02

Thanks, Rob. Doing really well. It's a Friday.

 

Rob Stevenson  01:04

It is, we are winding down the week here. Who knows what day it'll be at time of release, but in any case, I appreciate you bringing the energy here at the end of the week and meeting with me. This is great. I'm really looking forward to chatting with you, so much to go into, man. First of all, you've been at RoviSys quite a while, yeah? Which in the AI industry, any tech industry, I think it's rare to have someone with long tenure. Is it like over 20 years, right?

 

Bryan DeBois  01:26

Yeah, 23.

 

Rob Stevenson  01:27

Good for you. And what I was curious about is that in that time, I'm sure when you joined, they weren't doing so much in the AI field over at RoviSys. Okay, so I would love to know if you could share a little bit about your journey, and when did AI, I guess, enter the picture?

 

Bryan DeBois  01:44

Yeah, for sure. So I graduated with a computer science degree from the University of Akron in 2002, and I had actually co-oped with RoviSys the two years prior. And so I came in knowing software, obviously, but really knowing nothing about the manufacturing, industrial space. So I learned all of that, you know, here at RoviSys, and came in building software specifically for industrial and manufacturing customers. And oftentimes that was taking the large volumes of data coming up from the plant floor and doing something interesting with them. So maybe we were doing some kind of analysis, or maybe we were doing dashboarding and things like that, and OEE, which is a key performance indicator that's been around for a long time in our industry. And then also, we were integrating things called time series databases that are called historians, which are used pretty predominantly on the plant floor. And so it was, you know, obviously a very data-rich environment. But there was also a lot of variety, which was really cool. So RoviSys works now in 14 different industries; I think when I started it was probably closer to four or five, but we work across all these different industries. And so I got to do that kind of work with all these different types of customers and getting to go into these plants, which I kind of geek out on. I don't know if you've ever seen the show How It's Made, but I really liked that growing up, and so now I actually get to go into these plants and see how things are made. And so it was a wild ride. So then, you know, I moved up through the ranks. Pretty soon I'm a technical lead, and so now I'm building the software architecture for the software systems, and then I moved into software project management, and then moved into management. And now I'm managing, you know, a team of developers, and, you know, guiding their careers and all that kind of stuff. And then in 2019, RoviSys decided that we were well positioned to go after this industrial AI market. We felt like, with our 30-plus years of pedigree in the plant floor space, RoviSys was started in '89, but then also with all of our expertise in kind of software and data around that kind of space, we felt like we were really well positioned. You know, there were AI vendors in that space, but many of those AI vendors had come from the IT world, and so didn't really have a great grasp on, you know, kind of the specific challenges, the unique challenges, of the plant floor. And so RoviSys decided to create this industrial AI division, and I was fortunate enough to be named the director of that. And so that was, what, four years ago now, over four years ago. And so it's been great, it's been wild to see. You know, certainly manufacturing tends to be a little bit behind the curve on adoption of technology, and there's good reason for that, they're very risk averse and things like that. But it's also been really interesting to see how forward-thinking some of these companies are in some industries that you may not have thought of. Like, one of the most forward-thinking and early adopters of industrial AI is actually steel. Believe it or not, we don't think of that as a real, like, forward-thinking kind of industry, but they have been, and they're doing some really cool things with that.
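Bryan mentions OEE, a long-standing plant-floor KPI computed from the same historian data he describes. For readers unfamiliar with it, here is a minimal sketch of the conventional OEE calculation (availability times performance times quality); the function name and the sample numbers are illustrative only, not RoviSys code.

```python
# Minimal sketch of the conventional OEE calculation referenced above.
# The formula (availability x performance x quality) is the standard
# definition; the variable names and sample numbers are illustrative only.

def oee(planned_time_min, run_time_min, ideal_cycle_time_s,
        total_count, good_count):
    """Return (availability, performance, quality, oee) as fractions."""
    availability = run_time_min / planned_time_min
    performance = (ideal_cycle_time_s * total_count) / (run_time_min * 60)
    quality = good_count / total_count
    return availability, performance, quality, availability * performance * quality

if __name__ == "__main__":
    a, p, q, o = oee(planned_time_min=480, run_time_min=430,
                     ideal_cycle_time_s=1.5, total_count=15000,
                     good_count=14400)
    print(f"Availability {a:.1%}, Performance {p:.1%}, "
          f"Quality {q:.1%}, OEE {o:.1%}")
```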

 

Rob Stevenson  04:39

So maybe it would be useful to just define what we mean when we talk about industrial AI. Is this just like automation kind of leveled at any business that has an assembly line or is that a reductive way to look at it? 

 

Bryan DeBois  04:49

No, from the outside looking in, you know, if you're not living and breathing this every day, it's hard to kind of understand where the bounds are of industrial. We typically look at industrial as you're either making something or you're moving something. So in the case of making something, there's a subset called the process industries, where you're making stuff, not things, you're making gallons of stuff. Think chemical, oil and gas, you know, refineries, things like that. And then there's more of what we call discrete manufacturing. So that's where you're making things, not stuff. That's where you're making widgets, countable things, and that's everything from, you know, metal types of items to consumer packaged goods. You know, one of our customers makes the bowls for Chipotle, you know what I mean? So those types of processes. But then we also have customers that just move stuff. We have several pipeline customers, so they're moving product through the pipeline, they're not actually making the product themselves, they're just responsible for moving it. And then we have folks who, we have a whole power and energy group, and so they just focus on, you know, power generation and transmission and distribution and things like that. So it's all of those things. And, you know, to distinguish it from industries we don't work in, for instance, we don't do anything in retail. So if you think of brick-and-mortar retail, that's not us. If you think of healthcare, while we do work with all the big life science customers that produce pharmaceuticals, we don't work in actual healthcare facilities, think hospitals and things like that, that's not us. You know, so it's a little fuzzy, but that's generally speaking kind of where we operate.

 

Rob Stevenson  06:18

Okay, yeah, that's helpful. And, you know, Bryan, I set out to make a podcast that was sufficiently technical, so I'm just thrilled that we were able to define the difference between stuff and things. Yeah. Right, exactly. So what is the state of AI in the industrial space? How is this technology being used?

 

Bryan DeBois  06:34

Yeah. So, I mean, I would say that it is still kind of nascent, but it is absolutely growing. Like I said, there's certainly some forward-thinking customers that are adopting it. It actually has been around for a while. So while we think of AI as kind of relatively new, there's aspects of it, in particular if you think of, like, vibration analysis to predict the failure of a piece of equipment, like a pump or something like that, that they've been doing for over 20 years in the industrial space. So that's been around. And there's been other aspects of, say, optimization. Like, I did a project, probably almost 10 years ago now, with a hydropower generation customer that wanted to optimize how the water flowed through the dam, and that was a linear optimization, we used linear optimization to solve that. So these concepts, and some of these techniques, have been around. However, AI in terms of the way we think of it now, particularly around leveraging things like neural networks and training large ML models based on data from the plant floor, that is relatively recent. But there's been some great strides. We did a project for a customer a couple of years ago that's a drywall manufacturer. And so it turns out, you can look at a piece of drywall as it's drying, and the distribution and the shape of the bubbles that are shown actually have a correlation to the final quality that drywall sheet is going to have. And so we actually built an ML model that does that and can effectively give instantaneous feedback to the operator so that they can make adjustments and get more dialed in on quality sheets. The state of the art prior to that was that you made the product, you took a sample that went off to the quality department, and it could take an hour to two hours for the quality department to come back and say, hey guys, you've been making scrap this entire time. And so that ability to effectively get near-instantaneous feedback and make those changes and get back to good, I mean, that's an orders-of-magnitude type of improvement. I mean, it's huge, big, big dollars for that. So that's kind of the state of the art now, where we're looking at things like predictive quality, building those types of ML models. There's still predictive maintenance, there's still ROI to be had there. But then we are also, and hopefully we can get into this today, but we're also attacking kind of a new type of AI called autonomous AI, and that's based on deep reinforcement learning. And autonomous AI really can do, frankly, what a lot of customers think AI can do, in that it can look at the state of the system and say, here's your next best move. Here's the next best thing that you could do, the most optimal thing that you can do.
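The drywall example is a predictive-quality pattern: train a model offline on in-process measurements paired with lab-confirmed quality, then use it for near-instant feedback on the line. Below is a hedged sketch of that pattern; the feature names, model choice, tiny training set, and alert threshold are all assumptions for illustration, not the actual RoviSys implementation.

```python
# A minimal sketch of the "predictive quality" idea described above: an ML
# model trained offline on historical in-process measurements and lab-confirmed
# quality results, then queried for near-real-time operator feedback.
# Feature names, the model choice, and the alert threshold are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Tiny illustrative training set: features extracted from images of drying
# board (bubble count, mean bubble size, size variance, a line setting),
# paired with the lab's final quality score for that batch.
X_train = np.array([
    [120, 0.8, 0.10, 3.2],
    [340, 1.9, 0.55, 3.0],
    [ 90, 0.6, 0.05, 3.4],
    [410, 2.3, 0.70, 2.9],
])
y_train = np.array([0.97, 0.78, 0.99, 0.71])  # fraction of sheet meeting spec

model = GradientBoostingRegressor().fit(X_train, y_train)

def quality_feedback(features, threshold=0.90):
    """Predict quality for the board on the line right now and flag drift."""
    predicted = float(model.predict([features])[0])
    return predicted, ("adjust line" if predicted < threshold else "on target")

print(quality_feedback([300, 1.7, 0.48, 3.1]))
```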

 

Rob Stevenson  09:05

And so this is like the natural progression of AI. Like, if you think of it in terms of, okay, here's Deep Blue beating Kasparov, and then now chess computers absolutely just smoke the best human players. In like 2014, I want to say, we saw AI take on the best Go players, which is orders of magnitude more complicated than chess, don't tell that to a chess player. And so now it's like, this is a similar kind of tech, right? Are we talking about reinforcement learning here? And we're pointing this at more advanced sort of planning-based problems than a game, for example, has been previously?

 

Bryan DeBois  09:39

That's exactly it. It's deep reinforcement learning, it's that same thing. So AlphaGo was what you were referencing previously. That came out of DeepMind, the Google subsidiary that created AlphaGo, and yeah, deep reinforcement learning was a big deal when it came out. It was a pivotal change in operational AI. And so it was, you know, beating our best Go grandmasters, then AlphaZero ended up beating our chess grandmasters, and then, like you said, it went on to beat some of our best chess software, right, software that had had decades of optimizations built into it. And here comes AlphaZero with pretty much no a priori knowledge, just the rules of chess built into it, it plays against itself for the equivalent of, like, hundreds of years, and it comes out and it becomes a chess champion. Then they go on and they create AlphaStar, which beat our best StarCraft players. And so there was something here. So that was, yeah, like you said, around 2014 to 2016, and then over the years some very smart people started to apply deep reinforcement learning to other problems, in particular, in my case, to industrial types of problems. And so one of the key distinctions, though, with deep reinforcement learning is that it learns by doing, not by data. So we have to have some sort of simulation of at least a portion of the process, so that the DRL can play, so that the DRL can get in there and try a bunch of different things. Frankly, it may end up blowing up the plant, but it's doing it in a virtual environment. And it will do this for the equivalent of, again, like hundreds of years, and by the time it's done, it will be like a, you know, a 30-year operator, it's going to be an expert operator. And one of the really interesting things about it, and one of the things that makes this approach really novel, is that it can solve problems that it's never seen before, it can still find optimal solutions to novel challenges it's never seen before. And, you know, as proof of that, if you think about what it would take to beat a chess grandmaster, right, despite the fact that it played against itself for a long, long time, it's still going to see strategies from that human grandmaster that it's never seen before. And yet it still was able to build optimal long-term play, a long-term strategy, to beat that human player, even in the face of novel challenges. And so, like, there's no other AI that can do that. And so that's what's really exciting about my job, is being able to take this thing that kind of seems like magic to a lot of my customers and bring it to bear in this industrial space and solve some really cool problems with it.
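The key distinction Bryan draws is that DRL learns by interacting with a simulation rather than by fitting a fixed dataset. Below is a minimal, self-contained sketch of that loop, using a toy setpoint-holding simulator and tabular Q-learning; a real project would use a faithful plant simulation and a deep RL algorithm, so treat the process model, states, and reward here as invented placeholders.

```python
# A minimal, self-contained sketch of "learns by doing, not by data":
# a toy temperature-control simulator and a tabular Q-learning agent that
# improves purely by trial and error against it. The process model, states,
# and reward are invented for illustration; a real project would use a
# faithful plant simulation and a deep RL algorithm (hence "deep" RL).
import random

N_STATES = 11          # discretized temperature buckets, setpoint at bucket 5
ACTIONS = [-1, 0, +1]  # nudge heat down, hold, nudge heat up

def step(state, action):
    drift = random.choice([-1, 0, 1])               # process noise / drift
    nxt = max(0, min(N_STATES - 1, state + action + drift))
    reward = -abs(nxt - 5)                          # closer to setpoint is better
    return nxt, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(20_000):                       # virtual practice, no dataset
    s = random.randrange(N_STATES)
    for _ in range(50):
        # epsilon-greedy exploration: mostly exploit, sometimes try something new
        a = random.choice(ACTIONS) if random.random() < 0.1 else \
            max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += 0.1 * (r + 0.9 * best_next - Q[(s, a)])
        s = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)   # learned behavior: push toward the setpoint from either side
```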

 

Rob Stevenson  12:05

Can you kind of connect the dots on why it is useful to talk about AI beating chess players, Go players, and StarCraft players as a foundation, and why this is the same kind of tech that we can now point at industrial sorts of problems?

 

Bryan DeBois  12:20

Yeah, and in fact, I think it's actually really well positioned to solve industrial problems because of that. The parallels are that when you're using it to solve a game, a game by its nature has kind of a constrained scope, right? A closed system, right? You've got certain rules that you can follow, but it's artificially constrained, right? I mean, you know, you could pick up a knight and you could just place it on any other spot on the board, but that's not the rules of chess, right? It's artificially constrained, you know, the movement of the pieces and things like that. Well, in the same way, in the industrial space, we have relatively constrained processes. In fact, one of the things that RoviSys has done for over 30 years is what's called control system integration. So this is where we actually program kind of the lowest-level intelligence on the plant floor. They're called controllers, they're mini computers. And one of the big things that we do is actually constrain them. We say, if this valve is open, you can never, it's called an interlock, you can never open this other valve, because if you were to combine those two chemicals, that could blow up the plant, right? So we're all about constraints, like, that's what we're building into the system. We're actually taking, you know, the world of everything that these machines could possibly do and constraining them down to the very small set of actions that we want them to do. And then also constraining the operators: you can move this dial up to 10, but we can't let you go to 11, because you could potentially risk the harm of life and limb and property and things like that. So it actually lends itself really well to the type of problem solving that deep reinforcement learning is good at. So it actually is kind of a really cool parallel there.
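The interlock idea translates directly into code: whatever an operator or an AI proposes gets filtered through the constraints before it ever reaches the controller. A minimal sketch follows, under invented rules; the valve names, the feed-rate limit, and the functions are illustrative only.

```python
# A hedged sketch of the interlock idea described above: the control layer
# constrains the space of allowed actions, and anything an operator or an AI
# proposes is checked against those constraints first. The specific rules
# are invented for illustration.
def interlocks_ok(state: dict, proposed: dict) -> bool:
    """Return True only if the proposed actuator changes respect interlocks."""
    # Example interlock: never open valve B while valve A is open.
    if proposed.get("valve_B") == "open" and state.get("valve_A") == "open":
        return False
    # Example operating limit: the setpoint can go to 10, never to 11.
    if proposed.get("feed_rate", 0) > 10:
        return False
    return True

def safe_apply(state, proposed, send_to_controller):
    if interlocks_ok(state, proposed):
        send_to_controller(proposed)
    else:
        print("Interlock violation, action rejected:", proposed)

# Usage: the proposed move is rejected because valve A is already open.
safe_apply({"valve_A": "open"}, {"valve_B": "open"}, print)
```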

 

Rob Stevenson  13:45

So what's important is that there is a huge but finite number of variables, and that you can account for all of them in this closed system. Like, for example, the chess one is interesting, because it's like, yeah, you could just knock all the pieces off the board, right? If you've ever played chess with a four-year-old, you know that can happen. But that's not chess, right? So, like, in a factory, for example, it's just more variables, there's more things to consider, but it is still a finite number. Isn't that just, like, once you say, okay, it's a large number, but it's a finite one, more than a human brain can manage, but still finite, right? Isn't that the case for any problem? Isn't any problem just a sufficiently large amount of data, even if you have to factor in chaos and free will?

 

Bryan DeBois  14:27

Yeah, I mean, to a certain extent. I mean, again, like, it can't handle an infinite or near-infinite number of variables, you know what I mean? So while it can handle a lot, it can handle, you know, a huge number. And again, that's why it works really well. So to give you an idea of kind of the types of problems that we solve with this on the plant floor, we've got a customer right now that is using it to shape glass bottles. Here's one of the distinctions between what we do on the plant floor and games: you can't be on half a space in chess or Go, right? You're either on the space or you're not on the space. So it's a very discrete type of problem. In the manufacturing world, there's a lot of continuous types of variables. So shaping a bottle to get it to the right shape, when it's molten glass, you've got pressure, you've got heat, you've got a lot of different variables, continuous types of variables. And this particular customer only has two experts in the world who can really dial in all those different knobs and get it to where it's making good product. And it's also a process that drifts over time, though. So even if that operator has it dialed in, they go to lunch, they come back an hour later, and now it's off and you're making garbage. So that's why they're looking at autonomous AI, because it never takes breaks and never goes to sleep. And it's able to get dialed in on a drifty type of process, it can get dialed in and then keep it in a steady state better than a human can. Plus, if you have this, we call it a brain, if you have this autonomous AI brain that's been trained in this way, you now don't have to have just two expert operators in the world, you can bring in a one-to-two-year operator. And frankly, that's the kind of labor market that we're seeing right now in the manufacturing space. We're seeing one-to-two-year people, we're seeing heavy turnover in those operator roles. And so, you know, these companies need a way to actually augment those operators and bring that expertise, bake that expertise, into those systems. One of the ways that we do that, so we talked about autonomous AI. Autonomous AI is really more of a technique than a technology, and it encompasses a couple of different things. One of those is deep reinforcement learning, which we talked about. But a second key piece of that is what's called machine teaching, and we can talk about that in a minute. But that's really a key part of this building in of expertise.

 

Rob Stevenson  16:36

Yeah, I would love to talk about the machine teaching part and the building in of expertise, because it seems to me that when you are talking about replicating or replacing the domain expertise of someone who's been working in that plant for 30 years, for example, is that documented? Have they been writing down, here's exactly how I do my job? Or do they just kind of show up and do it, right? You're shaking your head and laughing, for folks out there listening. So then does that individual need to personally train it? Or how are you managing to replicate their expertise?

 

Bryan DeBois  17:06

Yeah, it's almost never written down, right? Because a lot of it is kind of rule-of-thumb type of thing. So we're sitting down with those subject matter experts and we're saying, okay, when the line is in this particular state, what do you do? And oftentimes those heuristics, we're getting those from them, too. So, like, they may come in and they'll say, okay, the line will run hot. And when it runs hot, you know, what does that mean? First off, well, it means that these particular indicators are high. When it's running that way, here's the things we do: I turn this valve up, I turn this dial up, I always have to watch, though, and counteract that by turning this knob down. And then when the line runs cold, or whatever, then these are the things that we do. Those are the types of strategies that we're extracting from their head, and then that machine teaching approach really builds, like, a workflow of all those little decision points. So with this approach, you're actually not training a monolithic DRL brain. In this particular machine teaching approach, you're actually training a network of agents, a network of smaller DRL brains, arranged in that workflow. And one of the other bonuses that we get with this machine teaching approach is that if DRL is maybe not appropriate for one of those decision points, or it's overkill, we can replace that decision point with traditional ML. That's called perception when you use it in this way. So we can have a traditional ML model that maybe takes in a real complex process and then simplifies it down into just two or three categories, which makes the DRL's life easier, since you're not giving it a ton of continuous variables. So that decision point of, are we running hot or cold, we can make that decision with a more traditional ML model and then pass that decision on into the network of DRL agents. The other thing that you get for free with that is explainable AI, which people really like. That's been the goal, a really elusive goal, of a lot of these AI projects for a long time, explainable AI. But with this, because we have this network, we can actually follow all the decisions that each of these individual brains made to get to that point, and we can say, okay, now I see: it thought we were running cold, so that's why it did such and such. Which, I mean, is just a really cool bonus to this whole thing.
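What Bryan describes can be pictured as a small pipeline: a perception step collapses many continuous signals into a simple category, a specialist brain handles each regime, and the chain of decisions is recorded so the final action can be explained. Here is a hedged sketch of that structure, with invented thresholds, regimes, and stand-in policies in place of trained DRL agents.

```python
# A minimal sketch of the machine-teaching structure described above:
# a perception step that collapses many continuous readings into a simple
# category, specialist "brains" per regime, and a recorded decision trace
# that provides the explainability Bryan mentions. Thresholds, regimes, and
# the stand-in policies are invented for illustration.
def perceive(signals: dict) -> str:
    """Perception: reduce a complex set of readings to hot / cold / normal."""
    t = signals["zone_temp"]
    return "hot" if t > 190 else "cold" if t < 160 else "normal"

# Stand-ins for trained specialist DRL policies, one per regime.
SPECIALISTS = {
    "hot":    lambda s: {"cooling_valve": +5, "feed_rate": -1},
    "cold":   lambda s: {"burner": +3},
    "normal": lambda s: {},  # hold steady
}

def decide(signals: dict):
    trace = []                                   # the explainable part
    regime = perceive(signals)
    trace.append(f"perception: line is running {regime}")
    action = SPECIALISTS[regime](signals)
    trace.append(f"{regime} specialist chose {action}")
    return action, trace

action, trace = decide({"zone_temp": 197, "pressure": 2.4})
print(action)
for step in trace:
    print(" -", step)
```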

 

Rob Stevenson  19:19

Yeah, I mean, that explainable AI is the goal, and the way you're describing it, of seeing kind of a progression of decisions, it's like in algebra: your teacher would be like, you got the right answer, but you didn't show your work, so zero credit, right? And so it's the same thing. It's like, all right, AI, show your work. And it's also interesting that with this training, it doesn't sound like you need to even capture that domain knowledge from the expert. It's more like you can simulate, you know, just like the chess engine, or just like the AI playing chess against itself for 100 years, you're able to do the same thing. It's like it's simulating all these things, it comes out with the best, whatever best means, outcome, and then you implement that, right? As opposed to, it's like the old joke: anytime you've worked in the food service industry, you're being trained, and it's like, so you're not supposed to do this, but here's what I do, right? It sounds like you don't have to find those elements and train the AI on it. Is it just generating enough and simulating enough that it can move past that without you instructing it?

 

Bryan DeBois  20:16

To a certain extent. So, a couple things there. One point there is, why do we bother with machine teaching at all? Like, why not just hook it up to the simulator with no a priori knowledge and let it go crazy? There's a couple challenges with that. One is that it may never really converge on the most optimal solution, it may take thousands of human years to ever converge on the most optimal solution. And you're paying for it: it takes a lot of CPU, a lot of compute, to train these brains, more so even than a typical ML model. So you're paying a lot for cloud computation while you're on this process. So machine teaching, oh, and then the other one, though, is that that's not even how you teach a human operator. You don't take a human operator off the street, hook them up to a simulator, and say, hey, you'll figure it out, right? We sit them down with one of those experts and we say, okay, here's what we do, you know, when you see this number go high, this is what you do. We give them all that knowledge ahead of time. So it's all of that, there's a lot of advantages to going ahead and giving it that knowledge ahead of time. However, I think the other aspect of your point is that the advantage to this approach is you don't have to bake in every single rule. As a little background, I took an AI class many moons ago when I was in college, and at the time, this was in the late 90s, early 2000s, rule-based systems were in vogue, they were the big AI thing, and there was a ton of research being done. So most of the class was spent on teaching rule-based systems, and rule-based systems have their place and they're great. But kind of to the point you're getting at, you have to think of everything, and you've got to bake in all of those little micro-decisions, and it gets overwhelming. And it's frankly why, you know, that type of AI kind of went out of vogue. So with this, you don't have to bake in every single tiny decision, it will discover those strategies on its own, which is really cool.

 

Rob Stevenson  22:04

Do you think that as compute costs come down, which I feel like is inevitable, that just seems to be the trend, tech gets smaller, more affordable, et cetera, et cetera, do you think there will be more reason to not bother with a priori knowledge? If the compute is such that, oh, we can just simulate a kajillion times and it doesn't cost that much?

 

Bryan DeBois  22:21

I mean, I don't think so, only because, again, even with compute coming down, you still could get into a space where it just never converges on the right solution, or it ends up getting really obsessed. You know, we've seen this during training, where it gets really obsessed with this minutia over here and it's missing the elephant in the room, the most important problems it's trying to solve, because it got into kind of a local minimum and it's over here screwing around. So I don't think so, not in my whole career anyway. Again, the explainable AI is just gold, so we don't want to lose that, and machine teaching gives us that for free.

 

Rob Stevenson  22:59

So there's no explainable AI if you were to brute-force solutions?

 

Bryan DeBois  23:03

No. And in fact, what we found is that we've actually trained monolithic DRL, without the machine teaching aspect of it, and we've done the network of DRL agents, we've done both. And we found that the approach with machine teaching, where we have a bunch of small brains, actually outperforms a monolithic DRL. The other thing that can happen with a monolithic DRL, not to get too into the weeds, is that while it's trying to figure out strategy, it can sometimes get into a situation where it learns how to do one aspect really well, and then you try to train it on another aspect and it starts to forget the strategies it learned in the first aspect. So with a monolithic DRL, you run into situations where there's just too much for it to try to balance and optimize, whereas with the approach where you have a network of DRL agents, each one can become really, really good at one small aspect of the process.

 

Rob Stevenson  23:54

So monolithic DRL and agents of DRL, which I'm guessing are not new miniseries on Disney Plus. Could you maybe just differentiate monolithic and agents of DRL from your good old-fashioned, run-of-the-mill DRL?

 

Bryan DeBois  24:09

Oh, I mean, monolithic DRL is what, so that would be like what AlphaGo was, or...

 

Rob Stevenson  24:14

AlphaZero or something like that? Just meaning, like, specifically trained for a single kind of function?

 

Bryan DeBois  24:18

Yes. And you're using just basically one brain, you're training one brain to solve this one problem. With the machine teaching, like I said, you're actually creating, like, a workflow, and at each of those points in the workflow, each of those blocks, if you can imagine, you know, lines and blocks of that workflow, each one of those blocks can be its own DRL brain. So again, I keep coming back to this example of, okay, if the line was running hot, you can actually train a different DRL brain, effectively, to step in and run the line in that case, versus when it's running cold, then you would train a different DRL brain. So it's a network of these DRL brains rather than just one big one.

 

Rob Stevenson  24:57

I got it. Okay, thanks for clearing that up for me. Bryan, I want to ask you to kind of look around a corner here. The space is moving incredibly quickly, we are at all times beset by the hype around generative AI, for example, computer vision, machine teaching, et cetera, et cetera, lots of buzzwords. But you are in the space, you are in the weeds every day, so I would love to hear from you: what kind of useful trends do you see coming down the pipeline? I guess we could keep it to industrial, or maybe just speak to the AI industry writ large, if you care to.

 

Bryan DeBois  25:25

Yeah, well, I think that there's kind of two big winners, at least right now, that I'm seeing and that I'm the most excited about. One of them is, as you mentioned, generative AI, right? So that's ChatGPT, largely speaking, LLMs, or large language models. And what I see those solving are knowledge types of problems, and I think they're really good at solving a lot of knowledge types of problems. So that's been really interesting, and I think that there's even a place for that in manufacturing companies, kind of broadly. However, in my world on the plant floor, we tend to need solutions for operational types of problems rather than knowledge-based problems. And that's where I think autonomous AI, with, you know, deep reinforcement learning under the hood, I really feel like that's solving a lot of operational types of problems, and those are the types of issues that we tend to see. But I think both of them, in parallel, are really interesting. I think both are going to have a huge impact on their respective fields, and they're not mutually exclusive. There's some things we're already starting to think about and look at where the two could maybe overlap, and things like that. I will say one kind of caveat. With my role as director of industrial AI, I get calls from customers all the time who heard about something, they read an article, and now they're excited and they want to talk to me about it, and that's great. But I've obviously seen a lot of interest in this generative AI, and one of the big, big challenges that I see with that on the plant floor is this concept of hallucinations. It really makes it difficult, you can't have anything that's going to hallucinate when it's on the plant floor. To put a real concrete example around this, one of the places that we're looking at, that other companies have looked at, for applying generative AI to plant floor problems is maintenance, right? So I've got a newer maintenance person, and they could ask the manufacturing ChatGPT, hey, when this particular piece of equipment is running hot, and I see visible smoke, and here's the RPMs or whatever, what do I do to fix that? Okay, so you can imagine a ChatGPT type of thing trained on a corpus of all the maintenance records they have and all the SOPs and everything they have around that. The problem is that it's going to give an answer. It will never say, I don't know, which is actually a big problem with the current generation of LLMs, they won't tell you, I don't know. What it will do is give you an answer regardless of the accuracy. So it may say, oh, here's what you do, you take this wrench and you torque this thing as far as it'll go, and then you blow up the plant. So there's some real risk there that I don't think has been fully captured or appreciated by some of these current efforts to bring generative AI onto the plant floor. So I'm excited about it, but I'm very cautious, because I need to see these LLMs get to a point where they can say, you know what, I'm not 100% sure on that, and I don't want to guess. They won't do that today. They'll just make something up.
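One common mitigation for the hallucination risk Bryan describes is to ground the assistant in retrieved maintenance documents and force an explicit "I don't know" when retrieval comes back empty or weak. Below is a hedged sketch of that guardrail; retrieve() and call_llm() are placeholders rather than any specific vendor API, and the score threshold and document snippets are assumptions.

```python
# A hedged sketch of one common guardrail for the hallucination problem raised
# above: only let the model answer from retrieved, cited maintenance documents,
# and return an explicit "I don't know" when retrieval comes back empty or weak.
# retrieve() and call_llm() are placeholders, not a specific vendor API.
def answer_maintenance_question(question, retrieve, call_llm, min_score=0.6):
    passages = retrieve(question, top_k=3)          # (score, text, source) tuples
    passages = [p for p in passages if p[0] >= min_score]
    if not passages:
        return ("I don't know. No matching SOP or maintenance record was found. "
                "Please escalate to a senior technician.")
    context = "\n\n".join(f"[{src}] {text}" for _, text, src in passages)
    prompt = ("Answer ONLY from the excerpts below. If they do not contain the "
              "answer, reply exactly: I don't know.\n\n"
              f"Excerpts:\n{context}\n\nQuestion: {question}")
    return call_llm(prompt)

# Tiny stub demo with invented document text and a stand-in model call.
demo_docs = [(0.82, "If the bearing runs hot with visible smoke, stop the line "
                    "and follow SOP-12 lockout before inspection.", "SOP-12")]
print(answer_maintenance_question(
    "Pump is running hot with smoke, what do I do?",
    retrieve=lambda q, top_k: demo_docs,
    call_llm=lambda prompt: "(LLM answer constrained to the cited SOP excerpts)"))
```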

 

Rob Stevenson  28:21

Yeah, yeah, that is really important to call out. And it feels as though this is, in a larger sense, part of how AI gets adopted and becomes more useful and widespread: can we apply it to higher- and higher-stakes sorts of problems? Right now, I can use ChatGPT to help me write an email, and if it sucks, who cares, right? Or even if it doesn't suck, even if it's okay, if it's like 60%, then great, that's a starting point, it worked for me. But there's plenty of industries, and you're in one of them, where 60% is unacceptable. It's either 100% or nothing, right?

 

Bryan DeBois  28:54

Right, right. And, I mean, not to put too fine a point on it, but we work with, for instance, polymer customers, right? Well, the polymer reaction is so volatile, we've got a customer that in the 90s had what's called a runaway polymer reaction where literally the plant exploded, and there was just a crater where the plant was, and loss of life, obviously, massive loss of property. And people heard the explosion, like, 30 miles away. I mean, it was like a bomb going off. That's just unacceptable. We have to have systems that we can count on to work within constraints, to give right answers every time. It's one of the reasons why, one of the questions I get asked in the industrial AI space is, do these algorithms learn while they're on the job? Do they update while they're on the job? Well, we explicitly don't do that. It doesn't do any kind of online learning. All the learning and all the training happens ahead of time, and then we deploy it. I mean, we can retrain and build a new brain and then replace the one that's on the plant floor, but that's a very manual process, a very intentional and deliberate process. The brain that's on the plant floor will always make the same decisions every time, and that's really, really important. On the plant floor, we cannot have a brain that's supposedly learning on the fly, because we don't know what conclusions it's going to come to. So we don't have any kind of online learning on the plant floor.
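The no-online-learning discipline Bryan describes amounts to deploying a frozen, versioned policy and treating replacement as a deliberate, audited manual step. A minimal sketch follows, with the payload format, names, and example values assumed for illustration.

```python
# A minimal sketch of the deployment discipline described above: the brain is
# trained and validated offline, then deployed frozen (no online learning),
# so the same state always yields the same decision. Replacing it is an
# explicit, versioned step. The payload format and names are assumptions.
class FrozenBrain:
    """A trained policy deployed as a fixed lookup/inference artifact."""
    def __init__(self, payload: dict):
        self.version = payload["version"]
        self.policy = dict(payload["policy"])   # e.g., {state_key: action}

    def act(self, state_key):
        # Pure lookup/inference; no weight updates happen on the plant floor.
        return self.policy[state_key]

def replace_brain(current, candidate_payload, approved_by):
    """Deliberate, audited swap of the deployed brain; never automatic."""
    new = FrozenBrain(candidate_payload)
    print(f"Replacing brain {current.version} with {new.version}, "
          f"approved by {approved_by}")
    return new

brain = FrozenBrain({"version": "v7",
                     "policy": {"running_hot": "open_cooling_valve"}})
print(brain.act("running_hot"))   # same state, same decision, every time
```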

 

Rob Stevenson  30:12

That feels like a really important line to draw, because other sorts of tech, I know, for example, in autonomous vehicles, they are sending all of the data on the real driving situation back to the mothership, and that is helping to train it. I haven't spoken to someone at one of those companies in a while, so I don't know how much babysitting they're doing to see what goes back into the model, but it sounds like you're doing quite a bit.

 

Bryan DeBois  30:34

Yeah, you've got to have it predictable. So out of 100 cases, if it made this decision, you want it to make that same decision, even if it was wrong. You can fix it if it's wrong, but you can't fix it if you can't predict what the decision is going to be at any given time.

 

Rob Stevenson  30:46

Yeah, makes sense to me. Bryan, we are creeping up on optimal podcast length here, and this has been a blast. I definitely could stand to hear more, but, you know, we have podcasts to make and factories to automate and make more efficient and whatnot, so I should probably let you go. So at this point I'll just say: hey, man, thanks so much for doing this. This was really fun. I loved having you on today.

 

Bryan DeBois  31:03

Yeah, this was great. I appreciate it, Rob. If people want to find out more about RoviSys and what we're doing and that kind of thing, if you just go to rovisys.com/ai, that's R-O-V-I-S-Y-S dot com slash AI, that takes you to the AI portion of our website, and my contact information is there, so they can get a hold of me that way. The other thing I would call out is that a lot of this whole autonomous AI approach was outlined in a book by a brilliant guy, a former Microsoft employee named Kence Anderson. The book is an O'Reilly book called Designing Autonomous AI, and that's where a lot of our approach kind of came from. So that's maybe something of interest to your listeners as well.

 

Rob Stevenson  31:37

Definitely it'll be in the show notes.

 

Bryan DeBois  31:40

Sounds great. Thanks, Rob, I appreciate it.

 

Rob Stevenson  31:43

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.