How AI Happens

Teaching Machines to Smell with Theta Diagnostics CTO Kordel France

Episode Summary

Out of all five human senses, our ability to smell is considered to be the most strongly linked to memory, learning, and emotion, and is arguably the most elusive. Existing AI development has largely been focused on more concrete senses like sight and hearing. But until we’ve incorporated all five senses into an artificial being, it’s unlikely we’ll ever be able to achieve human-level intelligence. That is why today’s guest, Kordel France, has chosen to take on the complex and challenging task of developing machines’ ability to smell.

Episode Notes

Kordel is the CTO and Founder of Theta Diagnostics, and today he joins us to discuss the work he is doing to develop a sense of smell in AI. We discuss the current and future use cases they’ve been working on, the advancements they’ve made, and how to answer the question “What is smell?” in the context of AI. Kordel also provides a breakdown of their software program Alchemy, their approach to collecting and interpreting data on scents, and how he plans to help machines recognize the context for different smells. To learn all about the fascinating work that Kordel is doing in AI and the science of smell, be sure to tune in!

Key Points From This Episode:


“I became interested in machine smell because I didn't see a lot of work being done on that.” — @kordelkfrance [0:08:25]

“There's a lot of people that argue we can't actually achieve human-level intelligence until we've incorporated all five senses into an artificial being.” — @kordelkfrance [0:08:36]

“To me, a smell is a collection of compounds that represent something that we can recognize. A pattern that we can recognize.” — @kordelkfrance [0:17:28]

“Right now we have about three dozen to four dozen compounds that we can with confidence detect.” — @kordelkfrance [0:19:04]

“[Our autonomous gas system] is really this interesting system that's hooked up to a bunch of machine learning, that helps calibrate and detect and determine what a smell looks like for a specific use case and breaking that down into its constituent compounds.” — @kordelkfrance [0:23:20]

“The success of our device is not just the sensing technology, but also the ability of Alchemy [our software program] to go in and make sense of all of these noise patterns and just make sense of the signals themselves.” — @kordelkfrance [0:25:41]

Links Mentioned in Today’s Episode:

Kordel France

Kordel France on LinkedIn

Kordel France on X

Theta Diagnostics

Alchemy by Theta Diagnostics

How AI Happens


Episode Transcription

Kordel France  0:00  

So we went out trying to also define the data standard for smell. And in order to do that, we also had to invent the sensors along the way to solve the particular problem that we were looking at.


Rob Stevenson  0:13  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens is a really interesting fellow. He's making machines smell. You're gonna hear all about that and more from our guest today. He is the co-founder and CTO of Theta Diagnostics, Kordel France. Kordel, welcome to the podcast. How are you today?


Kordel France  0:57  

I'm doing well, Rob, thank you very much for having me.


Rob Stevenson  0:59  

Really pleased to have you on, and I'm just really tickled by that logline. A lot of the guests that I have are working on fantastically complicated things, and I know you are as well, but it can rarely be so neatly summed up that way.


Kordel France  1:12  

Yeah, the one-liner is we're trying to give machines a sense of smell. And that's the simplest way of putting it.


Rob Stevenson  1:18  

There's much more to this episode than that. We're gonna get into exactly what that means, and how, and what they are smelling, and how well they're smelling them, and all of those questions. But first, I'd love to get to know you a little bit, Kordel. So would you mind sharing a little bit about your background and what led you to found this company?


Kordel France  1:33  

Yeah, absolutely. So yeah, I'm CTO of Theta Diagnostics, and primarily I'm an AI engineer and a scientist by schooling and by trade. I started my journey with AI way before it became cool a couple of years ago. I actually grew up on a farm, and when I was a kid, there were a lot of new technologies coming out, and my father liked to take advantage of some of these new technologies. So he actually bought a self-steering system for one of these large tractors and put it on some of the equipment on our farm. And as a kid, you see these giant machines moving by themselves, and that was like something straight out of a sci-fi movie for me. And from there, I asked my dad, how do I work on this? How do I make these things drive themselves? What do I need to do to contribute to this field? He said, you need to be good at math, you need to do this, and this, and this. And so that really set the stage for the rest of my career, the farm really. And so I continued on to robotics and AI and mathematics in school, and worked at a couple of different companies building autonomous systems. And that eventually transmogrified down the road into my co-founder and I taking the leap to leave our comfortable nine-to-five jobs and start a company called Secret Technologies, which was acquired by what is now Theta Diagnostics. And we've been doing a lot of incredible things since.


Rob Stevenson  2:55  

I love this origin story for you, because you'd think the furthest thing away from AI and ML and generative models and sensor fusion and all this is farming, but that is absolutely not the case. And this maybe betrays my own ignorance, but I was really interested to learn how technologically advanced farming is these days, like optical sorting and drone tech and all this stuff being used commonly in agriculture. That was really new to me, but you got to see that kind of firsthand, in this early proto-self-driving vehicle, basically.


Kordel France  3:27  

Yeah, it's interesting, because agriculture had self-steering, or self-driving if you will, decades before it actually became mainstream for public transportation. Now, granted, driving through a field is a much easier problem to solve than driving on a highway where there are pedestrians or other vehicles, etc. But it was interesting to see that that was kind of a pioneering field in that effort, that captured that technology and made it mainstream. And there's still a lot of opportunity left in agriculture, in my opinion, that could leverage a lot of the existing AI techniques. You don't really need to be super forward-looking. We just need to adopt some of these newer technologies to agriculture. There's a lot of ripe field for innovation there.


Rob Stevenson  4:06  

Pardon the pun. I love, too, that your farmer father was just like, learn math, learn STEM, basically, if you actually want to take this to the next level. So what was your path like? What did you study? And how did you get to the point where you were founding AI companies?


Kordel France  4:20  

So I started studying math a lot. I excelled in math as a child, and eventually went on to college, graduated early, and did my bachelor's in applied mathematics, statistics, and physics. So I double majored and then went into industry for a while. Throughout that entire process I was programming, so I took a few computer science courses throughout my bachelor's degree, but I didn't actually major in computer science. But I was using CS, using programming, throughout my career just in general to accomplish the tasks I needed to, so I became very fluent in programming. And then I actually did my master's in AI and CS, and then went on to do the PhD in CS as well. So my process was very unconventional; I didn't start with CS at the beginning. But if you think about where AI is, and what the definition of AI in general is, and how it actually became the field it is today, it's all built on mathematics and statistics, fundamentally, and computer science principles as well. But a lot of what enabled AI to take hold was faster computers that can solve these complex mathematical equations. And so that really laid the bedrock for me to enter the field of AI, and then learning how to program allowed me to apply it to the actual field. The industries I went to were highly involved with autonomous systems. So I was in aerospace for a while, working on sensor systems and drones, things like that. And that gave me a really good sense of the field of robotics and multimodal AI, sensor fusion, etc. It's interesting, because one camp of folks will call merging multiple data streams together multimodal AI, and the other camp will call it sensor fusion. But it's really kind of similar, and it's kind of all coalescing.
So all of that experience kind of culminated into what is really an interesting point in artificial intelligence right now, where we're actually manifesting a lot of these different modalities of AI and trying to make sense of them through a general brain, if you will.


Rob Stevenson  6:20  

That's interesting. No one has ever called out the difference between multimodal AI and sensor fusion. But sensor fusion is sort of like the nirvana of multimodal AI, right? People are like, oh, you can achieve some sort of data-synthesis bliss with all of your different data inputs, and then you have sensor fusion. Whereas everyone else is just like, we have lots of different modalities and we're doing the best with what we got. So multimodal feels more realistic; it feels like the reality of where we are with it right now, anyway.


Kordel France  6:50  

I agree. And I've encountered that a lot with different things between computer science, mathematics, statistics, and other fields. For example, principal component analysis is one thing in computer science; in mathematics, it's singular value decomposition; and in statistics, they call it eigenvalue decomposition. But it all really means the same thing. And so with a lot of these different terms, it's kind of interesting: you come from different fields where these terms that are seemingly different solve a similar problem and have the same definition. It's just interesting to see that across fields. We need to talk more to one another between math, science, statistics, biology, computer science, etc. I always found that interesting.
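The equivalence Kordel describes is easy to check numerically: the principal components obtained from the SVD of centered data coincide with the eigenvectors of the sample covariance matrix, and the squared singular values (scaled by n-1) are its eigenvalues. A minimal sketch using NumPy, with random data standing in for any real dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with three very different variance scales
X = rng.normal(size=(100, 3)) @ np.array([[3, 0, 0], [0, 1, 0], [0, 0, 0.1]])
Xc = X - X.mean(axis=0)  # PCA requires centered data

# Route 1: PCA via SVD of the centered data matrix
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Route 2: PCA via eigendecomposition of the sample covariance matrix
cov = Xc.T @ Xc / (len(Xc) - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]          # eigh returns ascending order
evals, evecs = evals[order], evecs[:, order]

# Squared singular values / (n - 1) are exactly the covariance eigenvalues
print(np.allclose(S**2 / (len(Xc) - 1), evals))   # True
# Principal directions agree up to sign
print(np.allclose(np.abs(Vt), np.abs(evecs.T)))   # True
```

Same decomposition, two vocabularies, which is exactly the cross-field terminology point being made.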


Rob Stevenson  7:29  

That's the real sensor fusion: getting people from different disciplines on the same page. That is funny. It's sort of a tomato-tomahto problem with a lot of this vocab, anyway.



Rob Stevenson  7:38  

Given your background, it feels like you could have probably gone a lot of different directions. So I am curious why you chose the one you did, at least for the moment. First, I guess we should establish what it is you do. So we have the logline: we're teaching machines how to smell. Can you just jump into that a little bit? Tell us what that means.


Kordel France  7:54  

Yeah, so right now we see a lot of focus from researchers and practitioners and companies on vision and audio and speech, right? Those modalities, no one can argue, are things humans are very good at. But we could also argue that AI is getting very good, or better than humans, at spotting patterns in images and being able to detect speech patterns. And now ChatGPT is really doing well with formulating language and constructing really formal, concise documents. I'm not gonna say that's a solved problem, but there is a disproportionate amount of work being done on those senses versus others, and those are the highest-bandwidth senses for humans. I became interested in machine smell because I didn't see a lot of work being done on that. And as we approach what a lot of folks give as the definition of artificial general intelligence, there's a lot of people that argue we can't actually achieve human-level intelligence until we've incorporated all five senses into an artificial being. So vision has a lot of work in it, and speech and hearing; taste and smell are largely chemical-based, and there's not a whole lot of work on those. And one can argue, from a human perspective: if you're in a French café eating a crepe, and you're on vacation, right, you can't really experience that without the smell; you lose something. We all experienced this probably recently with the pandemic; some of us lost our sense of smell. I know I did, and I found that I did lose some experience, some memory detail, with certain things that I've gone through. And so to be able to smell the crepe, smell the baguettes being baked in the back, see the Eiffel Tower in the background, hear the passers-by walking and talking, that all contributes to an experience that is integrated into intelligence. And looking around, my co-founder and I just didn't see a lot of work being done in the field of smell.
I mean, if you look at dogs, they have the incredible ability of being able to track scents from miles away, and do it at such a fine level that we can't even sympathize with it, because we just don't have that capability. And I was looking around, trying to figure out why we're not replicating this in machines yet, or at least I didn't see that we were. Now, there are a lot of technologies out there that are electronic noses, if you will, and that can sense at a very high concentration. But we wanted to look at how we detect at a very small concentration, and detect generally any compound out there. And so we really started to drill into this problem, looked at the different technologies out there that were being attributed to that and trying to solve similar problems, and really tried to figure out what's being done and what we can do. The other interesting thing is, if you look at vision, there's a standard modality, or a data standard, for vision and hearing, right? Vision is in MP4 files, PNG files, JPEG files; that's what's fed into machine learning models. Sound is WAV files, MP3, whatever; there's a data standard for that, and that's how machines train. How does a machine train on smell? What's the data standard for that? Is it a spectrogram? Is it a time series? What is it? How do we define that? And so we went out trying to also define the data standard for smell. And in order to do that, we also had to invent the sensors along the way to solve the particular problems we were looking at. So that all kind of culminated into what we were originally trying to do, which was basically track scents through navigation. So navigation by scent. And that transmogrified into several other applications, which I can speak about later on. But our founding mantra was: how do we build a machine that can track scent just as a dog can?


Rob Stevenson  11:28  

So a less specific version of this would be like a carbon monoxide detector or a smoke detector? Is that kind of the version 0.001? And you're looking at, okay, what if we had a device that could detect smoke and carbon monoxide and a million other chemicals, basically? Is that the idea?


Kordel France  11:46  

Yeah, so there's a lot of devices out there that can smell different chemicals. But one of the problems is they're either very specific to one chemical, or they can only detect at a very high concentration. So smoke detectors are a good example: they pretty much detect only carbon monoxide, and they have to have, all things considered, a pretty significant concentration in order to go off. We were looking at a much finer resolution, a much better sensitivity. Smoke detectors, depending on which one you get, can be on the order of part per thousand, or maybe slightly better. We're looking at a part-per-trillion resolution and trying to get to a part-per-quadrillion resolution. So that means if there's a quadrillion particles within a cloud of gas, right, or within the air, we should be sensitive enough to detect one of those particles, and that should be enough to get our sensor to go off. That's on par with dogs; canines can detect one particle per quadrillion. And so we're not there yet, but we're trying to get there. And being able to be that sensitive and that precise opens up several different applications. One of those, which we can talk about in a moment, is a breathalyzer use case: being able to actually monitor chemical compounds on human breath to detect different medical conditions. The other aspect of that is that we not only had to be very sensitive, but we also had to look at every compound available, or a much bigger battery of compounds than what's currently on the market, right? If we have a device that's very specific but only detects one compound, well, how are we differentiating? And how are we actually contributing to the sense of machine smell? So we built our device to be tunable, so that it can detect several different compounds with just a software reprogramming, and this happens over the course of milliseconds. So we can detect 16 compounds simultaneously.
And that constitutes a very good representation of what's in the air: you have multiple compounds detected at once, plus the very, very small resolution that we're looking at. And that opens up several different applications that you can enable through just clever engineering.
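To make the parts-per scales Kordel quotes concrete, here is a back-of-the-envelope sketch (the helper name and the ideal-gas simplification are mine, not Theta's): even at part-per-quadrillion mole fraction, a litre of air still contains tens of millions of target molecules, which is why such sensitivity is physically plausible yet extremely hard.

```python
# Back-of-the-envelope: how many target molecules does a given
# parts-per sensitivity correspond to in a volume of air?
AVOGADRO = 6.022e23      # molecules per mole
MOLAR_VOLUME_L = 22.4    # litres per mole of an ideal gas at STP

def molecules_at_sensitivity(parts_per: float, volume_l: float = 1.0) -> float:
    """Approximate count of target molecules in `volume_l` litres of air
    when the target is present at a mole fraction of 1 / parts_per."""
    total_molecules = (volume_l / MOLAR_VOLUME_L) * AVOGADRO
    return total_molecules / parts_per

# Part per thousand (smoke-detector territory) vs part per quadrillion (dogs)
print(f"ppt(1e3):  {molecules_at_sensitivity(1e3):.2e}")
print(f"ppq(1e15): {molecules_at_sensitivity(1e15):.2e}")
```

The signal is there in absolute terms; the challenge is separating it from the other ~2.7 x 10^22 molecules in that same litre.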


Rob Stevenson  13:46  

I think it's worth clarifying the part-per-thousand to part-per-quadrillion scale. So I'm assuming this is how we measure the sensitivity of smell, right? Would part per quadrillion be like one particle amongst a quadrillion particles? That's what the dog can detect?


Kordel France  14:02  

Correct, yes.


Rob Stevenson  14:03  

Okay. And your average smoke detector, you said is one part per 1000?


Kordel France  14:07  

It depends. I think some of them are a little bit better now, but in general they're about part per thousand. Maybe part per million if you have a really good one, but it has to take recurring samples, it's not very fast, and it's between a part per thousand and part per million.


Rob Stevenson  14:19  

Okay, and then what is the human nose?


Kordel France  14:21  

It depends. Some folks have part-per-million resolution; some only have part per thousand. It also depends on the compound, too. Like, my sense of smell is more or less gone still, and I'm probably at part per thousand. I haven't actually calibrated that in the lab, but that's what I'm guessing. Most humans can do part per thousand to part per million.


Rob Stevenson  14:41  

Okay, got it. And that's trainable? Like, you think of a sommelier, for example; these people just learn how to smell really well, right? They're training their nose like a muscle. Is that possible for humans, to improve one's scent, or is it just you got what you got?


Kordel France  14:54  

Yes, it is possible. I was reading an article the other day about a woman who was able to smell Parkinson's at a very high resolution. They didn't give a concentration level, but I imagine it's a fairly small resolution, otherwise it would have made headlines. But yeah, she's able to smell Parkinson's, and various other medical conditions, by just smelling patients' breath and their general well-being. So I thought that was interesting. She said she actually trained herself in order to detect that; I was thinking she was gifted with a very high-resolution nose. But it is interesting, because smelling is largely a chemical process, an electrochemical process. And just like some of us have different sight sharpness and different hearing sharpness, we can also have different smell sharpness. Some of us can smell different compounds, just like we can see with different sharpness and hear with different acuity. So for example, some folks can smell aldehydes better; some folks can smell alcohols better. It just depends. It kind of comes down to, I won't say genetics, because I don't know enough to say whether that's true or not, but I think it's largely influenced by the person and just how our brains are wired or constructed. Dogs, for example, have 40% of their brain attributed to the sense of smell, whereas ours is much less. So we're just never going to be on par with dogs. But you can definitely train to a certain degree; a human can train themselves to detect certain compounds. But we have an engineering limit there on what we can detect; we'll never get to part per billion or per trillion on certain things.


Rob Stevenson  16:22  

Right. So it maxes out around a part per million is probably the idea. So, you mentioned something interesting a moment ago I want to spend some time on, which is you said smell is an electrochemical process. And this happens a lot on this podcast, where I ask what sounds like a very elementary, kindergarten-level question, but I think it's important. In this case, the question is: what is smell? People have been asking that for computer vision. It's like, oh well, it's context; it's being able to identify this as a bookcase, this is a bicycle, this is a stoplight, this is a color, this is a person. And so then we got bounding boxes, and then we started having people annotate the bounding boxes, and that was the approximation of vision. It's like, okay, what is vision, and what does it mean for a computer to be good at vision? You kind of have the same question, right? What is smell? What is the data we get from smell? What is the judgment we make with that data? That's kind of what you're working on, I would imagine, or at the very beginning you had to ask that. So with that long prelude, I want to ask you: what is smell?


Kordel France  17:19  

I draw an analogy with smell the same way that I do with vision. So basically, an image is a collection of pixels arranged in a certain pattern that represents something we can recognize. And there are different channels: you can have a grayscale picture, which is one channel, or you can have an RGB picture, which is three channels. To me, a smell is a collection of compounds that represent something that we can recognize, a pattern that we can recognize. So for example, it's easy for us to smell bread being baked. But that's a general pattern, right? What are the chemicals that are actually being emitted as bread is being baked? At least, my nose is not good enough to go and say, oh, there's toluene, there's a little bit of benzene in there, it's 15% this. I can't do that, and I don't know anyone else that can. But a machine can do that, and we're proving you can do that with data. So being able to say there's a collection of particles here in a gas or liquid, and being able to break it down to its fundamental parts, to say it's this percent this, this percent this, this percent this, there's a lot of power just in being able to say: what's around me, what do I need to be aware of, how can I make sense of this information?
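The image analogy suggests a simple data model: a smell as a vector of compound concentrations, matched against known patterns. A toy sketch of that idea (the compound names and proportions are invented for illustration, not Theta's actual signatures), using cosine similarity against a small library of reference "smells":

```python
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse compound-concentration vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Hypothetical reference patterns: smell = proportions of constituent compounds
KNOWN = {
    "baking bread": {"ethanol": 0.40, "acetaldehyde": 0.25,
                     "diacetyl": 0.20, "furfural": 0.15},
    "gasoline":     {"toluene": 0.50, "benzene": 0.30, "xylene": 0.20},
}

def classify(reading: dict) -> str:
    """Return the known smell whose compound pattern best matches the reading."""
    return max(KNOWN, key=lambda name: cosine(reading, KNOWN[name]))

# A noisy sensor reading that should still match the bread pattern
reading = {"ethanol": 0.38, "acetaldehyde": 0.30,
           "diacetyl": 0.18, "furfural": 0.14}
print(classify(reading))  # baking bread
```

The point is the representation: "bread" is not one signal but a recognizable ratio of many compounds, exactly as a recognizable image is a pattern of many pixels.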


Rob Stevenson  18:28  

That is a great example, the bread-baking one, because you can smell, okay, I know that's bread baking, and not pull out the individual compounds and individual chemicals. But bread is more than the sum of its parts, right? Bread is a different thing than all those chemicals in a petri dish, maybe, I don't know. And so then the idea of smell for a machine would be: okay, it can find every one of those parts, and then it can make a judgment about the scent, the same judgment that humans make subconsciously, which is, ergo, it's bread. So I assume that's an illustrative example, but maybe not the use case you are optimizing for. I'd love to know, what are the use cases that you are currently trying to operate on? And then what are some examples of other ones down the road?


Kordel France  19:13  

So right now we have about three dozen to four dozen compounds that we can with confidence detect, and they're very common. Well, I won't say common, but they address a lot of use cases, just knowing those four dozen compounds. Those compounds, in different permutations, represent different things at various significance. But that's kind of where we started. And originally, as I said, we were trying to do navigation by scent. And then as we started getting more into it, you know, the world shut down in 2020, if everyone didn't know, and there was a lot of demand that had to be addressed there. So, in order to help out, we took our devices and said, okay, well, people emit breath very frequently, and we know that breath is a good indicator of what's going on in your body. There are a lot of studies to show this; there are thousands of chemicals that are emitted, and those chemicals, depending on their severity, indicate different things going on in your body. So we took our sensors and decided to see if we could detect COVID and just general respiratory conditions: coronavirus, pneumonia, influenza A, influenza B. And we actually trained it to detect this through our lab; we hooked it up to what we call our autonomous gas system, which runs simulations of several different gases to figure out which gases we need to look at and what their intensities need to be. We built several devices to try to help the overall effort to relieve the pandemic. And then with the success of that, we looked into more significant use cases, so, lung cancer. Right now we are going through a clinical trial. I can't discuss the results specifically yet, but in general I can discuss what's going on: we're actually detecting lung cancer on patients' breath. And the goal is, we'll be able to take our device, have the patient just give a 30-second breath sample, and hopefully relieve the need for a biopsy.
So if lung cancer is highly probable, the doctor will advise you to go in for a biopsy. You have to go under the knife; they actually have to take a piece of tissue from your lung. There are complications that can arise, it's an incredibly tedious process, and it's scary, right? With our system, we can just take a 30-second breath sample. So there's a high focus right now on trying to apply our tech to the medical sector and try to relieve some of the demand there. This whole field is called breathomics, where you're monitoring breath and diagnosing which compounds are present and what's going on. And our investors are heavily focused on trying to master this realm. But there are several other applications that we could look at down the road. The compounds that we've mastered give us good indication that we can look at fuel leaks, we can look at detecting explosives, we can look at several other things, drug detection, etc. So there are several other issues that we can address with our sensors as we move down the road. And the interesting thing is, once we've built one device, it can be easily applied to other domains. So it's not like we're locked into the medical field; it's very easy to just reprogram the device and say, you're looking for these compounds that are attributed to drugs now, and start gathering data and addressing that industry.


Rob Stevenson  22:18  

Right, right, same sensor, but just tuned to different input, right? To detect different things. So how do you train these things?


Kordel France  22:26  

So that was actually the biggest contributor to our success: building the builder, right? Building the thing that simulates all of these compounds together. We built an autonomous gas system called AGS, and it's this giant machine with pipes and tubes and pressure regulators and all these different environmental controls that takes several compounds, mixes them up together, simulates different environmental conditions such as humidity, temperature, and pressure, and then runs several hundred or several thousand simulations of these tests over our sensors. And our sensors learn from that, to say: okay, this is what a positive signal should look like; this is what a negative signal should look like; I can't quite diagnose what's going on when these two chemicals are mixed together, but I can figure out what's going on when I see them individually. So we start to naturally not only identify patterns for specific compounds, but also identify the limits of our sensors, which ones can detect interactions between other compounds. And that actually informs better sensor design, informs better software design, etc. So it's really this interesting system that's hooked up to a bunch of machine learning, that helps calibrate and detect and determine what a smell looks like for a specific use case and break that down into its constituent compounds.
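The loop Kordel describes, simulate many labeled gas runs, then learn what positive and negative signals look like, can be sketched as a toy calibration routine. Everything here is invented for illustration (the signatures, the noise model, the nearest-centroid classifier); it shows the shape of the idea, not Theta's actual pipeline:

```python
import random

random.seed(0)

# Hypothetical pure-condition sensor signatures (3-channel responses)
SIGNATURES = {"positive": [0.8, 0.1, 0.6], "negative": [0.1, 0.7, 0.2]}

def simulate(label: str, noise: float = 0.05) -> list:
    """One simulated gas-system run: the signature plus Gaussian sensor noise."""
    return [x + random.gauss(0, noise) for x in SIGNATURES[label]]

def centroid(runs: list) -> list:
    """Average many simulated runs into one calibration pattern per label."""
    n = len(runs)
    return [sum(r[i] for r in runs) / n for i in range(len(runs[0]))]

# "Calibration": run hundreds of simulations per label, as the AGS does
centroids = {label: centroid([simulate(label) for _ in range(500)])
             for label in SIGNATURES}

def classify(sample: list) -> str:
    """Assign a new reading to the nearest learned pattern."""
    def dist(c):  # squared Euclidean distance
        return sum((a - b) ** 2 for a, b in zip(sample, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(classify(simulate("positive")))  # positive
```

A real system would also sweep humidity, temperature, and pressure, and learn where mixtures become inseparable, which is the "limits of our sensors" part of the loop.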


Rob Stevenson  23:44  

So was there never a time where you were like, alright, here's a spritz of carbon monoxide into the sensor, and then log that piece of data? Or, like, how did you even go about collecting all of the various chemicals that would need to be detected?


Kordel France  23:55  

So we do get to that point, right? Because that's the real use case; we're never going to have a pure compound directly fed over the sensor. But before we get to the more real-world use cases, where we put something in a room and see how long it takes for the device to actually pick it up, we take finite gas samples with known concentrations, very specifically known concentrations, and we run several permutations of these concentrations and merge them together. A lot of this, to start, is based on literature. So for example, for the medical device development, it's not like we just took a bunch of compounds and were like, yeah, I think this is what lung cancer looks like. We use a lot of the existing literature to inform us about which, say, three dozen compounds we should be aware of. And from there, we'll start doing our own studies to assess how our sensors detect, and whether we can actually manifest the interactions between them. So a lot of it's literature-driven, and then we use our own lab analysis and our own data to drive whether or not the literature is actually correct, because a lot of times the literature is not the exact use case, or it might not necessarily be correct for lung cancer. So we have to adjust. And then we have to validate it in humans. Something may look really good on the gas system, but it doesn't actually turn into those exact same results on human breath. So there's this feedback loop, where we're doing gas system tests, human tests, environmental tests, usability tests, and they all kind of recursively feed into each other to make the sensors continuously learn.
And that's also part of the beauty of what we built with Theta: our software platform, called Alchemy, makes sense of all of this data together, and allows us to detect sensor drift and compensate for that, and to see how interactions happen over time, temporal differences, environmental differences. It's always on, it's always learning, it's always taking in data, and it's been really powerful for us to be able to use that. We use several different algorithms in order to do that. But the success of our device is not just the sensing technology; it's also the ability of Alchemy to go in and make sense of all of these noise patterns, and just make sense of the signals themselves.
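Sensor drift compensation, one of the jobs Kordel assigns to Alchemy, has a classic minimal form worth sketching: track a slowly adapting baseline and report deviations from it, so a genuine smell event stands out even as the raw signal wanders. This is a generic textbook technique (exponential moving average), not a claim about Alchemy's actual algorithms:

```python
class DriftCompensator:
    """Subtract a slowly adapting exponential-moving-average baseline,
    so fast events pass through while slow sensor drift is removed."""

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha      # small alpha -> baseline adapts slowly
        self.baseline = None

    def update(self, raw: float) -> float:
        if self.baseline is None:
            self.baseline = raw
        corrected = raw - self.baseline
        # Update the baseline AFTER correcting, so sudden spikes survive
        self.baseline += self.alpha * (raw - self.baseline)
        return corrected

comp = DriftCompensator(alpha=0.05)
# A slowly drifting signal with one sharp "smell event" at t = 50
readings = [0.001 * t + (1.0 if t == 50 else 0.0) for t in range(100)]
corrected = [comp.update(r) for r in readings]

print(max(corrected) > 0.9)        # the event survives correction
print(abs(corrected[-1]) < 0.05)   # the drift is mostly removed
```

Production systems layer far more on top (temperature compensation, cross-sensitivity models, recalibration against references), but the structure, model the slow component, keep the fast one, is the same.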


Rob Stevenson  26:03  

What is the output? You mentioned a moment ago that, okay, we know what a .wav file or a .mp4 file is. Are you having to create a new file extension? What does the output of these sensors even look like?


Kordel France  26:15  

Honestly, I can't get into too much detail for proprietary reasons. But you can think of them like spectrograms. We actually use sensor fusion, or multimodal learning, however you want to look at it. We take all the compounds that we can, we fuse them all together, and we try to represent them in an image format, or an image-like format, that can be interpreted by a human. So if you were to look at, for example, what toluene looks like on a spectrogram versus what benzene looks like on a spectrogram, they're visually different. Now, we put them into images just so we can visually check and make sure that our model is learning, but in reality, these data sources live in 38-dimensional space; there's no way we can make sense of that directly. So each data pattern is 38 dimensions, and it's really just represented by these giant tensors. So let's call it a .scent file, I don't know, for now. But a .scent file is a 38-dimensional tensor that represents a smell. And that's what we've been using to train our sensors and train our AI on. We've found that we get a lot of lift from merging other scent detection mechanisms together with this. So for example, GC-MS, gas chromatography-mass spectrometry, is a machine that's more or less an optical device, and it's considered the gold standard for detecting compounds in air. So in order for us to check whether or not we're actually detecting the right level of intensity for a certain compound, for example, toluene, we run those tests over our sensors. And then we run identical tests over the same gas sample in the GC-MS machine, and we make sure that we see parity between the intensities there. When we originally started, we were fusing those modalities together, and the GC-MS was acting like a supervised trainer over our sensors.
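The image-like representation Kordel describes could be sketched like this: take a scent tensor with 38 compound channels over time and render it as a grayscale "spectrogram" a human can eyeball. The function name, shapes, and min-max normalization are illustrative assumptions, not Theta's format.

```python
import numpy as np

def scent_to_image(scent, lo=None, hi=None):
    """Map a (time, 38) scent tensor to a uint8 grayscale image:
    one row per compound channel, one column per time step."""
    scent = np.asarray(scent, dtype=float)
    lo = scent.min() if lo is None else lo
    hi = scent.max() if hi is None else hi
    norm = (scent - lo) / max(hi - lo, 1e-9)   # scale values into [0, 1]
    return (norm.T * 255).astype(np.uint8)     # transpose to (38, time) pixels

# Fake reading: 120 time steps across 38 compound channels.
reading = np.random.default_rng(1).random((120, 38))
img = scent_to_image(reading)  # (38, 120) image, ready for visual inspection
```

Two compounds with different channel signatures produce visibly different stripe patterns, which is the point of the visualization: a sanity check for humans while the model itself trains on the raw tensor.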
So the GC-MS will come back and say the sensors don't look like they're actually detecting toluene at this specific degree, and it will advise us to go back and adjust the settings of the neural network for toluene. And then we have this back-and-forth, closed-loop exchange between the GC-MS and the sensors, until the sensors can stand on their own. The way I can liken this: I know that some self-driving car companies use LIDAR to train monocular color cameras to detect depth. You can get depth with a monocular camera if you overlay LIDAR on top of it and train a neural network to infer depth, and then eventually you just take the LIDAR away, and you have a neural network that infers depth on its own. So we're kind of doing the same thing, where the GC-MS was the supervisor over our sensors. And now we don't need the GC-MS anymore, but all of our sensor data was backed and founded on ground-truth GC-MS data, which is considered the gold standard for machine smell, if you will.
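The teacher-student pattern Kordel likens to LIDAR-supervised depth can be sketched in a few lines: a simple model learns to map raw e-nose readings onto the compound intensities reported by the GC-MS, after which the teacher is no longer needed at inference time. The tiny linear "student", the synthetic data, and all names here are assumptions for illustration, not Theta's system.

```python
import numpy as np

def train_student(raw_sensor, teacher_labels, lr=0.1, epochs=500):
    """Fit a linear student mapping raw e-nose readings to the compound
    intensities reported by the GC-MS (the teacher), via gradient descent."""
    n_in, n_out = raw_sensor.shape[1], teacher_labels.shape[1]
    W = np.zeros((n_in, n_out))
    for _ in range(epochs):
        pred = raw_sensor @ W
        grad = raw_sensor.T @ (pred - teacher_labels) / len(raw_sensor)
        W -= lr * grad  # shrink the gap to the teacher's ground truth
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 38))        # 38-channel sensor snapshots
true_map = rng.normal(size=(38, 3))   # hidden relation to 3 compound intensities
y_teacher = X @ true_map              # pretend GC-MS measurements

W = train_student(X, y_teacher)

# Once trained, the student runs without the teacher (the "LIDAR removed" step):
new_reading = rng.normal(size=(1, 38))
estimated_intensities = new_reading @ W
```

The key property is the last two lines: after the closed-loop training phase, the bulky reference instrument drops out of the deployment path entirely.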


Rob Stevenson  28:58  

Okay, so you would not wish to keep the GC-MS in the sensor; that is merely for training purposes.


Kordel France  29:05  

Correct. The GC-MS is huge. I mean, it's like the size of a fridge, so it's ginormous, and it's not practical to give as a consumer device or put in a physician's clinic, really. We wanted to make sure that it didn't look like we were drinking our own Kool-Aid, or basically inventing vaporware. So we looked at the state of the art, and we asked: what's the gold standard for how chemicals are generally detected? And GC-MS was the general consensus among our science team and all of our advisors. And so we built our sensors to basically be trained in parity with GC-MS. Now, our sensors are a couple of grams in weight, and the device itself is less than a pound. So you can imagine something the size and weight of a fridge versus something that sits in the palm of your hand and that you can move around. It's pretty compelling, right? The GC-MS can look at a larger array of compounds, but that's just because it's a much more mature technology that's been around for decades. As an example, they send these GC-MS machines up to the space station periodically. And they're very mechanical in nature; they need to be refurbished. And they do a lot for the exploration of life, because they're looking for organic compounds that indicate life. So one interesting application to me, as we start to grow as a company, is to use our devices instead of a GC-MS. I mean, if you can send something that's half a pound up on a rocket, versus something that's the size and weight of a fridge; every kilogram is tens of millions of dollars. So we save a lot of money, and we can use these things in a lot more applications. So there are a lot of compelling use cases there that are enabled just by a simpler form factor.


Rob Stevenson  30:39  

When you go back to the example you gave of smelling the crepes, right, or smelling the croissants, and you see the Eiffel Tower, and you hear "La Vie en Rose" playing, you know you're in Paris. That is the context that is given by multiple modalities; that is sensor fusion. Do you have any desire to add modalities into your sensor to create context?


Kordel France  31:04  

Yes, absolutely. So one thing that excites me, and we have a lot of data behind vision for this, is that our sensors are much more informed when they have context. The best way I can think of this is likening it to a dog. I have a dog that I just got recently, and I wish I would have gotten it two years ago, because I've learned so much about machine smell just from watching it. I watch my dog, and it tracks a scent, and it doesn't even look at anything. It's only using its nose; it's looking directly at the ground, and it periodically looks up and checks with its eyes. So it's primarily using one modality, and then using vision as a secondary modality. We found a lot of lift with that, in that we're able to use vision to kind of supervise scent tracking. So we're literally building a very similar application to the dog: we use cameras that can perform better object detection and better tracking, but the primary tracking mechanism is our machine olfaction sensors, our electronic noses. That's one way of fusing those modalities together. We have a lot of experience in sensor fusion and multimodal learning, because again, we had to fuse the GC-MS together with our sensors in order to get our sensors to behave the right way, and to actually look at the same resolution as the GC-MS did. So it's really easy for us to just add in more modalities now, because we're set up for that. But I'm really excited to start merging that together with more robotics and scent applications. Because you see a lot of investment right now in these humanoid robots, between Figure AI, Tesla, Apptronik, Sanctuary, all these other big humanoid robotics companies. They're all lacking the sense of smell; none of them have the sense of smell built into them.
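The dog-like fusion Kordel describes, olfaction as the primary tracking signal with vision consulted as a secondary check, could be sketched as a simple confidence-weighted combination of direction estimates. The weighting scheme, function name, and toy numbers below are all assumptions for illustration, not Theta's tracking algorithm.

```python
import numpy as np

def fuse_headings(scent_heading, vision_heading, scent_conf, vision_conf):
    """Combine direction estimates (unit 2D vectors) from the e-nose and
    the camera, weighted by each modality's confidence."""
    w_scent = scent_conf / (scent_conf + vision_conf)
    fused = w_scent * scent_heading + (1 - w_scent) * vision_heading
    return fused / np.linalg.norm(fused)  # renormalize to a unit vector

scent = np.array([1.0, 0.0])   # nose says: the scent trail runs straight ahead
vision = np.array([0.0, 1.0])  # camera says: the tracked object is to the left
heading = fuse_headings(scent, vision, scent_conf=0.9, vision_conf=0.1)
```

With olfaction weighted heavily, the fused heading stays close to the nose's estimate and only nudges toward what the camera reports, mirroring a dog that mostly follows its nose and glances up occasionally.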
And the goals of a lot of these companies are to put them on the manufacturing floor, or to put them in homes, or to put them in civil service scenarios. We need to have safety baked into that, right? If you can detect drugs as they come in, if you can detect potentially volatile substances that might be harmful to humans, that really increases the value of that robot and the service it can provide. So that's one thing that I'm really excited about. It's kind of hitting the nail on the head, right? Well, of course we want to put a nose on a robot; we're putting everything else on them. But it's not something I see being actively worked on, and so I think we have an opportunity there to be a front-runner.


Rob Stevenson  33:21  

Yeah, it sounds like you already are. And the use cases just kind of flood into my mind, so this is really, really exciting. I can't wait to follow this company and see where these sensors go. Kordel, this has really been fascinating. Thanks for getting into the weeds and getting technical; I really loved hearing about all that stuff. So as we approach optimal podcast length, I will just say thank you so much for being on the podcast today and for sharing all this information. I've loved chatting with you.


Kordel France  33:45  

Of course, thank you very much, and thank you to your listeners. It's a privilege to be on the show.


Rob Stevenson  33:50  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, med tech, robotics, and agriculture. For more information, head to