How AI Happens

Leveraging Technology to Preserve Creativity with Justin Kilb

Episode Summary

As generative AI continues to scale, the likelihood of computer-generated content outpacing human creation increases. Today, we explore how evolutionary generative algorithms in music production can preserve and enhance human creativity rather than overshadow it. Joining us is Justin Kilb, a data scientist and PhD student who serves as an AI technical advisor to Fortune 500 companies.

Episode Notes

In this episode of How AI Happens, Justin explains how his project, Wondr Search, injects creativity into AI in a way that doesn’t alienate creators. You’ll learn how this new form of AI uses evolutionary algorithms (EAs) and differential evolution (DE) to generate music without learning from or imitating existing creative work. We also touch on the success of the six songs created by Wondr Search, why AI will never fully replace artists, and so much more. For a fascinating conversation at the intersection of art and AI, be sure to tune in today!

Key Points From This Episode:


“[Wondr Search] is definitely not an effort to stand up against generative AI that uses traditional ML methods. I use those a lot and there’s going to be a lot of good that comes from those – but I also think there’s going to be a market for more human-centric generative methods.” — Justin Kilb [0:06:12]

“The definition of intelligence continues to change as [humans and artificial systems] progress.” — Justin Kilb [0:24:29]

“As we make progress, people can access [AI] everywhere as long as they have an internet connection. That's exciting because you see a lot of people doing a lot of great things.” — Justin Kilb [0:26:06]

Links Mentioned in Today’s Episode:

Justin Kilb on LinkedIn

Wondr Search

‘Conserving Human Creativity with Evolutionary Generative Algorithms: A Case Study in Music Generation’

How AI Happens


Episode Transcription

Justin Kilb  0:00  

I think in some situations, right, the level of feature abstraction that I discussed is just so large that potentially you do need, you know, tons of features, tons of sample points to really capture the variation you care about. But if you think about what's really causal, when you get the right features, sample size doesn't matter; you'll see immediately what the relationships are.


Rob Stevenson  0:21  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens is a man who serves as an AI technical adviser to Fortune 500 companies. He also has a history with NASA. He's a PhD candidate right now. His name is Justin Kilb. Justin, welcome to the podcast. How are you today?


Justin Kilb  1:03  

Good. Good. Thanks for having me.


Rob Stevenson  1:04  

I'm really pleased to have you. And I want to start with the least capitalistic thing that you do, which is this awesome project you're working on called Wondr Search, which is trying to inject creativity into AI in a way that doesn't alienate creators. Did I get that right? I should probably have asked you what it is. Yeah,


Justin Kilb  1:22  

Yeah, no, you bet. It's a little bit of a weedy argument about generative AI and how it differs across use cases, let's say factual retrieval versus generative content that intentionally pushes the bounds of creativity. My wife and I actually just put out a paper last week where we describe a model we created that generates music, but it only replicates the creative bias of a single user. Like I said, it doesn't use any pre-existing training data. So in traditional ML, with predictive models, in a sense we try to generate predictions that sort of plagiarize historic relationships. It's like: these are the relationships that historically predict the price of a home, we think the future will behave like the past, so generate predictions with these relationships as closely as you can. It's not truly plagiarism, because there's no explicit ownership of the relationships. Or, you know, large language models and physical laws: what is the formula for momentum? I want an overfit response, to make sure it prints the exact formula, not a formula with some deviation. Whereas for generative art, the process is something like: learn to create images that look exactly like this German Shepherd. Okay, that's pretty good, but that's maybe plagiarism, so slightly modify the output, but not so much that a human no longer recognizes that German Shepherd. And then an artist who wants to, let's say, maximize novelty and create a creative interpretation of the German Shepherd could dial up the random functions that feed the noise vectors that ultimately create diversity in the output. So there's an important distinction between reproducibility for commercial use and artistic novelty. Maybe another example is that a new business owner might generate the images and code for a website; their task likely isn't to push the creative bounds of web development, but rather to cost-effectively develop a website of sufficient quality.
There's obvious utility for these types of tasks. And I think this space is really exciting and generative models will do a lot of good. But this is very different than creating a song with enough novelty to create a new genre.  


Rob Stevenson  3:14  

Yeah, so it's not that there's no training data. It's just very specific to the user, I guess, or to a single creator.


Justin Kilb  3:20  

Yeah, great question. So it uses evolutionary algorithms, and it produces a melody with some constraints from music theory, not from existing work. When it starts, the melodies it generates are not very good; you score them, and that's where the concepts of evolution come in: melodies that have scored higher retain the properties that likely led to the higher score. And this is a lot different, right? Because with machine learning, where you've learned from existing content, you can dial up the randomness introduced to the network, and it will probably find deviated content that can start new genres, but the direction of deviation is random. In the far future, let's say at the limit where generative ML methods are deployed maximally, it's hard to imagine human content ever again outpacing computer content. And so the human training dataset would diminish, which means we'd have to increasingly rely on the random functions for novel content, rather than this feedback model that only learns from you. And then once you put it in this, quote unquote, learned state, you can modulate it once again only as a function of your input and no one else's.
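The episode notes mention evolutionary algorithms and differential evolution. As a rough illustration only, and not the actual Wondr Search code, the loop Justin describes might look like this: candidate melodies are generated under a simple music-theory constraint (stay in one scale), a score stands in for the single user's feedback, and higher-scoring melodies pass their properties to the next generation.

```python
import random

SCALE = [0, 2, 4, 5, 7, 9, 11]  # C-major pitch classes: the music-theory constraint

def random_melody(length=8, rng=random):
    # Candidates are constrained to in-scale notes, not learned from existing songs.
    return [rng.choice(SCALE) for _ in range(length)]

def mutate(melody, rng=random, rate=0.3):
    # Random variation: each note may be swapped for another in-scale note.
    return [rng.choice(SCALE) if rng.random() < rate else note for note in melody]

def crossover(a, b, rng=random):
    # Recombine two surviving melodies at a random cut point.
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(score, generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [random_melody(rng=rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)  # the user's scores rank candidates
        survivors = population[: pop_size // 2]   # higher scores retain their properties
        children = [
            mutate(crossover(rng.choice(survivors), rng.choice(survivors), rng), rng)
            for _ in range(pop_size - len(survivors))
        ]
        population = survivors + children
    return max(population, key=score)

# Hypothetical stand-in for a single user's taste; in Wondr Search the scores
# would come from the one human whose creative bias the model reflects.
def example_score(melody):
    return -sum(abs(note - 7) for note in melody)

best = evolve(example_score)
```

Because the scoring function is the only thing the search "learns" from, swapping in a different listener's ratings steers the whole system toward that person's bias, with no outside training data involved.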


Rob Stevenson  4:23  

So it deviates randomly, which is evolution, which is why I imagine you use that parlance, right? Organisms mutate randomly, and the ones that are more apt to their environment survive. It's not like they naturally get better; it's a culling of the things that are more disposed. The culling happens by nature, and you, the creator, are nature. You get all of these mutations and you say: this one's good, this one's good, this one's good. You score it, and then it learns from that. Is that the idea?


Justin Kilb  4:46  

Yeah, I love that interpretation. Exactly. And we developed six songs with it, and we submitted them anonymously to international record labels, and all of them received signing contracts. That really was intended to show it's commercially viable. And there are some really cool possibilities: if you were a touring producer, you could imagine a tour where you've trained this thing only on your creative bias, and you use it to generate melodies at scale. Of course, that's the beauty of computers. And then you could have unique albums for each touring area that are only shared with that audience. So you get the generative scale of computers, but once again, it's not learning from other people's work. It's only a reflection of a single user.


Rob Stevenson

Why do you think they were so successful with these record companies?


Justin Kilb

Yeah, I mean, it's a good question. Music is so interesting: you need enough novelty so that it's interesting, but if it's too out there, it's not going to be accepted. I guess, theoretically, what I find interesting was also interesting to the record labels, because the melodies that are generated, of course, are only responsive to my feedback. It's hard to quantify what makes music good, actually, and that's a really important point, and why we submitted it to the record labels. There have been brilliant papers on music generation, but it's really hard at the end to say, you know, how good is this? Does this music sound good? With an ML prediction, we can say we're 90% accurate, but with music it's really hard to just say. So I guess we'd have to ask the record labels that question.


Rob Stevenson  6:19  

It's very cool, Justin, that it works, and that this process you've designed works. Can I ask you an elementary question? Why did you make it?


Justin Kilb  6:29  

It's definitely not, like, an effort to stand up against generative AI that uses traditional ML methods. I use those a lot, and I think there's going to be a lot of good that comes from those. So I think there's going to be a market, right, for generated content that is of sufficient quality. But I also think there's going to be a market for more human-centric generative methods. When I think of music, one of the most fun and rewarding things is getting to know the producer, and I think that's why people go and see them live. Technically, the music's very different, but you're excited to see the artist. And so I'm really excited for what generative methods that use traditional machine learning will do. But I think there's also going to be a big market for creative capacity that comes more from humans than just replication.


Rob Stevenson  7:15  

So do you think we'll see, with creative works, that it'll be like organic food or something? Like, oh, this was made with no AI. We will distinguish: okay, this is what the machine can do over here, and then there's also this thirst or desire for a purely human-made thing. But then I guess, how do you differentiate? Because even minus generative AI, we're already using so much technology in music production today.


Justin Kilb  7:39  

Yeah, determining what's been generated by computers, of course, is challenging. It's a billion-dollar industry now; there are a lot of really smart people working on that problem. I do think that there will be a label, once again really to focus in on the difference between "I need a picture of a German Shepherd because I run a dog training business, and my intent isn't to creatively deviate away from a German Shepherd" versus creative writing or painting or music. Like I mentioned, it's really hard to imagine humans producing more content than computers in the future. If that happens, the reservoir of human training data for generative models diminishes, and then the only novelty comes from the random functions. And you certainly can get a lot of diversity from random functions. But if you think of all of the incredibly complex ways that humans build artistic bias, through the environment and over time, it seems like those are more directional than just aimlessly looking through a random function. And so, something we even say in the paper, I think it's also more computationally efficient. I use GPT too: maybe I have an idea, and I say, hey, create me 10 more ideas that are similar to this. So in that sense, it's creative. Of course, what it offers as mutation from what it's learned is bound by those random functions, as opposed to incorporating that feedback loop, where if my musical biases are changing over time, then I'm also giving different feedback to the melody generator, so the melody generator is generating new content that moves with time, as opposed to being static. I mean, that's a far future; I don't think that's going to be an issue for a while. But it's fun to think about.


Rob Stevenson  9:13  

I like the German Shepherd example: oh, I need a photo of a German Shepherd for this training purpose, like I run a dog kennel or something. Does it need to be generated by a human for your purpose? In that case, it's like, oh, it's not important to me that a human being took this photo of a German Shepherd. Maybe not, in the same way as, think of all the places that imagery exists: is it important to me that a human designer made this "do not enter" sign at a construction site? No, not at all. But then there are places, like paintings or music, where maybe I do want there to be a human involved. I think of it a little bit like chess computers: at what point is it important to us to delineate what is human-made and what is machine-made? Chess computers have been better than the best human players for a long time, right? And the best player in the world, Magnus Carlsen, will get beat by the best chess computer probably nine times out of 10, maybe more. And yet, we are more interested in his games, Magnus's games against other human beings, than just watching two chess computers. Okay, yeah, we know the computer can beat you. Who cares, right? And so in the same way, it's like, yeah, AI can make music. Sure, it's great, it's fun, it's a novelty. I still want to hear what the human being can do. I still want to watch the competition, I still want to go to the concert. That doesn't feel threatened by AI to me.


Justin Kilb  10:28  

Right. Yeah, I think it's because it's fun to do those things. There are a lot of tasks, and I actually think it's a really good question when companies are trying to adopt AI, or let's just say math and computer science. Understandably, there's this tension of, you know, are we going to automate certain people out of certain roles? I think a really good question is just to say: what tasks do you hate doing, or where do you feel the most unsafe? Start there, and I don't think there's going to be much friction in removing those. A lot of that happened during the Industrial Revolution. Even though the large language models have, I think, surprised us with how creative they can be, we as humans enjoy those hobbies, and we're going to enjoy doing them. Even if computers are better than us, it's still going to be fun for us to participate.


Rob Stevenson  11:14  

Yeah, yeah, of course. I'm pleased by that. I think the fear of AI replacing artists is just that: I think it's just fear. I don't think it's realistic. Would you agree?


Justin Kilb  11:23  

Something else I've observed with AI replacing people: it's really hard to know. You can't really answer it properly unless you know the rate at which it also produces new jobs. In the companies I've worked for, it's been very rare that there's not a laundry list of tasks that we're trying to get to but can't. As we get more efficient, you can do more. So unless you know the rate of that replacement, I think it's really hard to answer properly. And I think we've shown thus far, I mean, the Industrial Revolution created so many roles, right? It's like, when you reduce the price of steel by a factor of 10, look how many industries opened up. But certainly in the short term, it's changing things, at least in my work. I mean, I write code a lot faster. Even setting aside domain understanding and problem formulation, I can see quantitatively how many more lines of code I write per hour, and that's definitely changed. It just allows me to work on other things,


Rob Stevenson  12:20  

such as your PhD, which is unrelated to Wondr Search. Could you tell me a little bit about your PhD? What are you working on?


Justin Kilb  12:26  

Yeah, so I'm doing a PhD in operations research. And what's great about a PhD in operations research, which is really a form of applied math, is that it's agnostic to technology and industry. So whereas many PhDs focus on a specific technology, potentially for a certain industry, my PhD is about the furthering, or the application, of math and computer science as a tool. So while many PhDs certainly are a specialization, my topic feels like the opposite: it feels like it's greatly diversifying, not narrowing, my skill set. Regarding my decision to leave my full-time role, I was just really interested in learning the topic that my current academic advisor is an expert in. It doesn't seem to be a skill that's abundantly distributed in industry, which really encouraged me to enroll. It might sound cliche, but I'm certainly not pursuing this for a PhD title. There's a lot of commentary on graduate school, and I think most of it is fairly low-resolution. A lot of people ask: what percent increase in salary, if any, will I get if I complete graduate school? And I don't think that's a great question. Questions like this really turn graduate degrees into discrete certificates that don't reflect the content-specific opportunities that might open as a function of what you actually learn or develop during your program. Good questions maybe look something like: what type of knowledge am I interested in? Is the knowledge scarce? Is it generalizable beyond an academic environment? Is it useful in multiple industries? Is it likely to be in demand when I graduate and in the future? And then you can ask yourself if that type of knowledge is accessible from your immediate surroundings. And if it's not, and if you're willing to pay the financial opportunity cost to leave industry and return to school, maybe it's the right choice. And so I guess when I asked myself some version of those questions, I liked the answers, and so—


Rob Stevenson  14:12  

How about the question: do you like it? Is it enriching to your human experience? Is it interesting? Is it fascinating? Right, those are all upstream of "can you use it to make more money?"


Justin Kilb  14:21  

Absolutely. It would be really hard to get through a PhD program, I think, if you weren't super passionate. What's cool about mathematical optimization, and we can get into this if you want, is that because operations research is prescriptive, it's prescribing a strategy, not just predicting a result. You have to think really hard in many domains, not just math; there can be ethical implications when you're autonomously making decisions. And so, to your point, it's been personally interesting. I'm not only reading math papers; I'm reading a lot of history, a lot about how people thought about incorporating these types of models. Especially that distinction between "am I using AI, as it relates to machine learning, to make a prediction" versus "am I actually using a prescriptive strategy to help us make an optimal decision." It's quite a bit different.


Rob Stevenson  15:10  

It's different, but it's just sort of the next step, right? Rather than saying it's prescriptive instead of predictive, it's more apt to say it's prescriptive in addition to being predictive. Surely it must be one before the other,


Justin Kilb  15:21  

Right, it's the classic: there's descriptive, predictive, and prescriptive. You're absolutely right, it can sit at the end, which is, I think, what makes it really exciting. Because whether you approach a predictive relationship with physics, like you do in engineering, or with statistics, like you do in ML, everything is a system, and very few systems run optimally. And so it can always sit at the end.


Rob Stevenson

How would you differentiate operations research, then, from data science?


Justin Kilb

Yeah. So, like I mentioned, data science isn't the only method we use to make predictions. In physics, force equals mass times acceleration is just a prediction. But whether you use statistics or physics, the distinction is that in both cases, for a number of inputs, the predictive method generally gives a single output. So, for example, all the features for predicting the price of your house go in, and out comes a single output: the predicted price of your house. Operations research, in contrast, which like we said is prescriptive, iterates through the inputs until a better output is found, hopefully the best output. So this wouldn't necessarily be used if you were selling your house. But if you were building houses, you could try to do this with data science or machine learning models: you could look at the coefficients of a regression equation, let's say, and just try to iteratively change the inputs to maximize the price of a house, assuming your coefficients are truly causal. But this gets a lot more complicated when you have constraints, like certain features can't exceed certain magnitudes, or nonlinearities that make the search complicated, or discrete decisions, like how many non-fractional bedrooms should I build.
So instead, we use operations research optimization methods, and as you can see, the result is prescriptive because it prescribes a strategy rather than just offering a single prediction. That's where it's really exciting, especially in the context of AI as it relates to machine learning, because in a hybrid approach like this, you don't only see, like we said, a prediction about the world, but rather predictions and actionable steps to move in the direction of optimality.
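As a toy sketch of that prescriptive step, with made-up coefficients and constraints rather than any real housing model: instead of predicting the price of one fixed house, you enumerate the discrete decision space and keep the design with the best predicted price.

```python
from itertools import product

# Hypothetical learned price model (illustration only):
# price = 50k + 30k*bedrooms + 15k*bathrooms + 100*sqft
def predicted_price(bedrooms, bathrooms, sqft):
    return 50_000 + 30_000 * bedrooms + 15_000 * bathrooms + 100 * sqft

def best_design(total_sqft=2500, max_rooms=6):
    best, best_price = None, float("-inf")
    # Prescriptive step: search the discrete decisions, not a single prediction.
    for bedrooms, bathrooms in product(range(1, 6), range(1, 4)):
        if bedrooms + bathrooms > max_rooms:   # constraint: total room count
            continue
        sqft = total_sqft - 150 * bathrooms    # constraint: plumbing eats floor area
        price = predicted_price(bedrooms, bathrooms, sqft)
        if price > best_price:
            best, best_price = (bedrooms, bathrooms, sqft), price
    return best, best_price

design, price = best_design()
```

Here brute force works because the decision space is tiny; real operations research problems swap this loop for linear, integer, or nonlinear programming solvers, but the prescriptive shape, search the inputs under constraints for the best output, is the same.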


Rob Stevenson  17:18  

The building a house example, is that just one off the top of your head, or is that something that is actually showing up in your work right now?


Justin Kilb  17:23  

No, no, it's not from my work; it was the first example that was shown to me, and it made the most sense to me. Recently, though, my work has diversified quite a bit: physical systems like geology, or big machinery, some with computer vision. I did some work with NASA, like I mentioned. That's the beauty of math: like I mentioned, it's agnostic to industry. The models are the same; the context is different.


Rob Stevenson  17:41  

Well, it's verifiable, right? It's like a closed system, insofar as numbers are precise and there's a right answer and a wrong answer, as opposed to trying to be predictive of, like, human behavior. So it feels like in your case, dealing with physics and math, it's a little bit more of a closed system.


Justin Kilb  17:56  

Yeah, I saw the difference when I transferred from kinesiology to engineering, and then again between my career in engineering, working with physical systems, and my master's in data science, which worked with social systems. Just like you said, in the physical sciences, like building a bridge or a road or designing a mechanical valve, there are many predictive formulas that come from physics that are really accurate. Many systems in engineering can be predicted with extremely high accuracy, like predicting the force on an object, since we have an explicit formula with no learned coefficients: force equals mass times acceleration. And then you can stack a bunch of these physics-type formulas together and predict multi-component systems with great accuracy. However, just because we have accurate physics formulas doesn't mean we have the measurement or data necessary to make the predictions. So as the physical systems we're working with become more complex, or maybe it's just our appetite to model systems in more complex ways, we inevitably face uncertainty that opens the door to empirical methods like machine learning.


Rob Stevenson  18:52  

So at what point do you need to jump into machine learning? Where would you draw that line?


Justin Kilb  18:56  

The physics isn't wrong, but the data we feed in is potentially wrong. I don't mean that in the conventional sense of data quality, like incorrect values in your spreadsheet or measurement error, although that's important too. I mean more that the data is too abstracted to explain variation in the results. Or, in other words, the features you record in columns as potentially predictive variables might be too far removed from the actual causal mechanisms that lead to the result. One example I use in my research: consider a game where you have to predict how far a ball will roll after a force is applied. If you know the force of the push and something about the materials in the game, you'll do really well. If you just have the sound of the push, you'll do worse, and you'll likely do terribly if you only have the change in temperature of the room after the push. In each scenario, the predictive feature is abstracted away from the causal mechanism. You can see the same thing in the real world. Perhaps someone is trying to predict your life expectancy with your resting heart rate, or perhaps you're trying to predict the mineralogy on Mars with only the color from a satellite image. In each prediction, the predictive features only potentially correlate with, or emerge from, the true causal mechanisms. And in these cases, it doesn't matter how fancy your statistics are; there's an unmeasured distribution underneath that can only lead to accuracy, potentially, on average. It's like: predict the shape of the ocean floor only with a video of the waves. So to come back to the concept of switching between physics and machine learning, like you asked, I believe it mostly depends on how much unmeasured heterogeneity exists in your system, which is a consequence of measurement or feature abstraction. Or, put another way: how do the features that predict the outcome vary in ways I don't record?
When companies think like this, you can actually start to ask: what is the level of unmeasured heterogeneity? How much does it affect the outcome? And what is the trade-off between measuring that variation and the cost to attain it?
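To make the ball-rolling example concrete, here's a hypothetical simulation with invented numbers: the same 10 pushes are predicted once from the causal feature (the force itself) and once from an abstracted proxy (a noisy "sound" reading). The fit through the causal feature is essentially perfect; the proxy leaves residual error that no amount of statistics removes.

```python
import random

def fit_slope(xs, ys):
    # One-feature least squares through the origin: slope = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mean_abs_error(xs, ys):
    # Fit the slope, then measure how far predictions land from the truth.
    k = fit_slope(xs, ys)
    return sum(abs(y - k * x) for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(42)
force = [rng.uniform(1.0, 10.0) for _ in range(10)]  # the causal feature
distance = [2.0 * f for f in force]                   # the game's true mechanism
sound = [f + rng.gauss(0.0, 3.0) for f in force]      # abstracted proxy of the push

err_causal = mean_abs_error(force, distance)  # ~0: the right feature explains it all
err_proxy = mean_abs_error(sound, distance)   # larger: abstraction hides the mechanism
```

The proxy's error isn't a modeling failure; it's the unmeasured distribution between "sound" and "force" showing up as irreducible residual, exactly the heterogeneity Justin describes.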


Rob Stevenson  20:45  

How would you be able to measure what you don't know?  


Justin Kilb  20:48  

Yeah, it's the classic problem: I don't know how to measure the unknown unknowns. But in some systems, I think it's easier, especially in the physical sciences. If you're working with earth systems, you could say: we have a measurement that we took 100 meters, or 1,000 feet, whatever, into the earth, and then we have another one a kilometer away. You know the rock properties are going to change and vary between those two sample points. You maybe don't know the degree to which they vary, but you could, right, pay to drill additional holes, or you could say: to the best of our ability, with our domain knowledge, we think these properties can deviate 20 to 30%. It's certainly a lot harder, I think, with social systems, where even if there were rules that dictate how we behave and what we buy tomorrow online, the level of resolution of measurement would have to be quite extreme. I think that would be a lot harder to know.


Rob Stevenson  21:37  

Yeah, free will is the ultimate variable, right? So it feels like, going back 10 years, maybe there was this data arms race, right? Once we got the term "big data," the belief was: more data is always better; future models with more data will be more accurate. I feel like I'm seeing a little bit of a contraction. I feel like huge amounts of data and input are now sort of not impressive to someone in your position, where it's like: okay, here's this massive corpus of data. Can we use any of it? How relevant is this to our problem? Or is the belief that when you get more data, you will inevitably get more of the data that you need? Have you noticed this too? Do you feel like people are a little less impressed by huge corpuses of data?


Justin Kilb  22:21  

I think I'm noticing the same thing. And I might offer a little bit of an unconventional perspective that definitely was instilled during my engineering career, and it's a bit of a paradox: sample size isn't the important feature; only the causal strength of your features is. As an example, in the game I described earlier, if I push a ball only 10 times, I generate 10 rows in my dataset, which have three columns, hopefully: force, mass, and acceleration. With just 10 samples, you'll have a perfect model, minus measurement noise, since you found all of the features that are truly causal. So, you know, there's so much great work going into making models more efficient. But I think in some situations, the level of feature abstraction that I discussed is just so large that potentially you do need tons of features, tons of sample points, to really capture the variation you care about. But you're absolutely right. And it's hard to do, but if you think about what's really causal, when you get the right features, sample size doesn't matter; you'll see immediately what the relationships are. Obviously hard to do in practice, but I think we're seeing more of that.
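A sketch of that 10-row dataset, with synthetic numbers: because force, mass, and acceleration are the truly causal columns (the mechanism is a = F/m), a coefficient recovered from just 10 samples already generalizes to inputs far outside the sampled range.

```python
import random

rng = random.Random(7)
rows = []
for _ in range(10):               # only 10 pushes of the ball
    f = rng.uniform(1.0, 20.0)    # force
    m = rng.uniform(1.0, 5.0)     # mass
    rows.append((f, m, f / m))    # acceleration: the true mechanism is a = F/m

# "Learn" the single coefficient relating acceleration to F/m from the 10 samples.
coef = sum(a / (f / m) for f, m, a in rows) / len(rows)

def predict_acceleration(f, m):
    # Generalizes well outside the sampled range, because the features are causal.
    return coef * f / m
```

Contrast this with the proxy-feature case: no amount of extra rows of "sound" or "room temperature" columns would match what 10 rows of the causal columns give you.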


Rob Stevenson  23:28  

I started out by saying, let's not talk about your work quite yet, and here we are, all this time later; we haven't talked about it at all. So maybe we should spend a little bit of time there. When you were brought in as an AI technical adviser, what sort of problems did you end up focusing on?


Justin Kilb  23:41  

I think some of it is that there's just so much hype around AI. And understandably, industries, or companies, maybe, is a better word, are not at the forefront; completely understandably so, it's not their business, but they also don't want to miss out. One thing that I focus on is this: although there are extreme societal implications to understanding what artificial intelligence truly is, and when something is artificially intelligent, although that's incredibly important, for organizations who are, let's say, trying to maximize profit, maximize safety, minimize risk, to execute on those objectives in a safe way, I don't know that it matters so much whether the model is what leading AI researchers would consider artificially intelligent. It's so hard to define. I mean, our definition of intelligence just continues to change as humans progress and as artificial systems progress. It's really hard to say. Does the method come from a computer? Yes, it's artificial. Is it intelligent in the sense that it adds utility, even if it doesn't parallel human intelligence? There are a lot of ways to monetize that, for whatever the organization's goal is. So, long story short, to come back to your question: if I'm working with companies, it's more like, AI is super cool; here are, I think, the leading papers on what we think it means and where this is going, as best as I can tell. But more importantly, I've spent a lot of time just looking at different math methods, whatever they are: what types of problems do you have? Are they descriptive, predictive, prescriptive? What does it take in compute to do something like this? What kinds of skills do you need? Which is more of an applied math discussion than "here's what actually makes a predictive model parallel human intelligence."
Because I don't know that we have that figured out yet.


Rob Stevenson  25:27  

Not quite. So Justin, here you are sitting in academia, you are building this music tool or have built this music tool, you are advising companies. So it feels like you have a pretty broad view of the industry, what is uniquely exciting to you, when you are reading through all of these papers, and cutting through the hype? What stands out?


Justin Kilb  25:46  

I think there are some really exciting things, and then some things I'd be naive not to talk about that are maybe a little more concerning. It seems like AI, even just large language models, is one of the first really powerful step changes in technology that was distributed fairly uniformly. It took many years for many countries to have an industrial revolution. One thing that's cool is that as we make progress, people can access it everywhere, as long as they have an internet connection. I think that's really exciting, because you see a lot of people doing a lot of great things. Of course, that also means the quantity of good things people do will drastically go up, regardless of the magnitude of the good. But with more powerful technology, the magnitude of the potential to do bad also goes up. So even if the frequency of doing something bad goes down, if the severity goes up, the risk value is the same. I know that's a double-edged sword, and distributing technology like this can have severe ramifications. But it's exciting just to see how many people are using it to learn things and to help automate the tasks they hate doing.


Rob Stevenson  26:56  

Yeah, it's a crazy time. I liked that answer: just the fact that it's distributed, and smartphones are capable; there are tiny models and algorithms running right there locally. So yeah, Justin, that's a great answer at the end of an episode full of great answers and explanations. At this point, I'll just say thanks so much for being here, man. This has been really, really fun. I could definitely stand to hear more, but as we creep up on optimal podcast length, we'll wind down and let you go back to your many projects. So thanks for being with me here today, Justin. I loved chatting with you.


Justin Kilb  27:23  

Yeah, I love the podcast. Thanks for having me on really enjoyed it.


Rob Stevenson  27:28  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to