How AI Happens

Brilliant Labs CEO Bobak Tavangar

Episode Summary

Bobak shares the process of developing Brilliant Labs' AR glasses, including the function calling they developed so the device can decide which LLM to query.

Episode Notes

Bobak further weighs the pros and cons of Perplexity and GPT-4o, how the two models differ, and why the device uses both. Finally, our guest tells us why Brilliant Labs is open-source and reminds us why public participation is so important. 

Key Points From This Episode:

Quotes:

“To have a second pair of eyes that can connect everything we see with all the information on the web and everything we’ve seen previously – is an incredible thing.” — @btavangar [0:13:12]

“For live web search, Perplexity – is the most precise [and] it gives the most meaningful answers from the live web.” — @btavangar [0:26:40]

“The [AI] space is changing so fast. It’s exciting [and] it’s good for all of us but we don’t believe you should ever be locked to one model or another.” — @btavangar [0:28:45]

Links Mentioned in Today’s Episode:

Bobak Tavangar on LinkedIn

Bobak Tavangar on X

Bobak Tavangar on Instagram

Brilliant Labs

Perplexity AI

GPT-4o

How AI Happens

Sama

Episode Transcription

Bobak Tavangar  0:00  

The ability to kind of pick and choose the model and have the complexity layer running on top is a really powerful one. Because there's just going to be so much innovation in that sort of foundation model world. At this stage, you don't want to be locked to one model or another.

 

Rob Stevenson  0:16  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Joining me today on How AI Happens is the CEO and co-founder over at Brilliant Labs, which is a really cool company doing amazing things in embodied AI and egocentric perception, Bobak Tavangar. Bobak, welcome to the podcast. How are you today?

 

Bobak Tavangar  0:59  

Doing okay, Rob. Great to be here.

 

Rob Stevenson  1:00  

Thanks for figuring this out. You are in Hong Kong, so it is first thing in the morning there, and it's the end of my day. But here we are, podcasting, making it work. It's a planetary, global podcast. What about that?

 

Bobak Tavangar  1:11  

That's right. It's one world.

 

Rob Stevenson  1:13  

You're working on some really cool tech, and we're gonna get into all of it. The logline is: this is wearable AI, embodied AI. And you have succeeded already at what I think is the largest challenge in wearable tech, designing something that doesn't make someone look like a complete nerd. You absolutely succeeded. Your glasses look awesome. They are really cool. We'll get into that in a minute. But first, I would love to get to know you a little bit, Bobak. Would you mind sharing a bit about who you are, your background, and what led you to start this company?

 

Bobak Tavangar  1:42  

I was born in Philadelphia, on the playgrounds where I spent most of my days, and went to DC for school, and then very quickly came to Asia and just immersed myself in the beauty and the dynamism of this part of the world. Learned Chinese, learned how things were made. And that sounds kind of cliche or kind of trite, but I think there's a lot to that, and being here and being able to speak the language helped me dive into that world. And back where I came from, that's the boring stuff. You know, people don't want to know how things are made. They want to know the strategy around it, or, you know, the design, and all of that's cool. But sort of the roots of the tree very much are how you make something, and they've mastered it here. And so I got a real education on the ground in that. I started a couple of companies out of grad school, all in the software space, went to go work at Apple, and it was at Apple that I once again had this sort of front-row seat, you know, a masterclass in terms of how a great company makes some of the greatest products in the world, and learned a lot. And then I left Apple to start Brilliant Labs. So it's been kind of a whirlwind, but full of learning.  

 

Rob Stevenson  2:57  

When you started Brilliant Labs, was wearable tech always the plan, or what has the journey been like so far?

 

Bobak Tavangar  3:04  

Yeah, well, we started Brilliant Labs. It was off the heels of Magic Leap kind of struggling to capture the mass imagination with their first device. A couple of other AR companies had just gone bust. There was one called Meta, before Facebook rebranded as Meta, there was one called ODG, and there were a bunch of others. So when I first started at Apple, I was very excited about the space, and I just assumed, okay, there's all of these really talented people who understand all the optics and the power electronics and the things that make this hard. They get it, and they're very well funded, like we're talking billions of dollars, and they're gonna figure it out, one of them or a few of them. You know, Microsoft had the HoloLens. And so to me, I felt, okay, it's an eventuality, and I just have to focus on software. I just have to think through what are the use cases and what brings this to life as an experience. Lo and behold, no one figured it out. And the big goggles, the big headsets, the $3,000 thing that you didn't want to use, who knew, but no one bought them. And so companies started folding. So I saw that and I started to think, this is really interesting. I thought it was just a matter of time, people, and money. It's none of those things. It's a matter of a different idea, a different path up the mountain. And I keep using this analogy, you know, in The Lord of the Rings, we all remember it, let's not deny it, we all love The Lord of the Rings, there were armies that marched through and just, you know, went to battle. But it was a humble Hobbit that found a different path. That was the one entrusted to bring the ring to the mountain. And so I kind of use this as an analogy, that someone with a really different idea, a different vision, they can be small, and really, that actually might be their strength, because they can go relatively undetected and get where they need to go without drawing too much attention at the wrong time. So I saw that and I said, there's an opportunity here. We could turn the focus away from console-grade graphics and sort of fantastical images in front of the eye, and we could turn it toward artificial intelligence. You know, back then we were really focused on computer vision, so a lot of bringing computer vision in its various incarnations in front of the eye to make sense of the objects, the faces, you know, everything that you saw. And then very quickly, we broadened that to generative AI. So now we're looking at a whole range of exciting, mostly multimodal models, AI search. We can talk more about that. But yeah, that's the genesis.

 

Rob Stevenson  5:37  

So I believe the idea for an unassuming small creature to destroy the One Ring was Gandalf's. So I have to ask, in this metaphorical instance, are you Frodo Baggins? Are you Gandalf? Who are you, Bobak?

 

Bobak Tavangar  5:55  

Yeah, we're really digging into the analogy.

 

Rob Stevenson  5:57  

I've got to hit you with the hard-hitting questions.

 

Bobak Tavangar  6:00  

Yeah, I mean, I'd say I'm Frodo, me and my co-founders. Collectively, we compose Frodo and Samwise. Who's Gollum? That's the question.

 

Rob Stevenson  6:09  

That's probably me. I'm not going to do the voice. But that was apropos. I'm just trying to extract some value and maybe get some free glasses.

 

Bobak Tavangar  6:22  

Gandalf is probably one of our angel investors, who kind of set us on the path, encouraged us, but then checks in every once in a while, disappears when you sort of didn't think that he would, but then reappears at some point down the line, and it's like, oh, great, you're still with us on this in some way. That's probably the Gandalf role.

 

Rob Stevenson  6:42  

Yeah, the Series C, when they drop 100 million, is when they become Gandalf the White. We have to get away from the Lord of the Rings metaphor, people are turning off. But I'm glad you started out with this belief that wearables were just a matter of time, that these companies were investing in it, someone's going to figure it out. And yet, you think back to all these examples like Google Glass and the Snapchat glasses, or the wearable exercise bands, and they haven't permeated. For whatever reason, it just has not hit this critical mass. So I would love to just know your opinion on why that is, why wearable tech is not as ubiquitous as people planned on it being.  

 

Bobak Tavangar  7:20  

I kind of see it a little differently. I mean, I think that wearable technology is sort of creeping into our lives in different shapes and forms. Now, wherever I go, I see an Apple Watch, I see AirPods. So I guess all that's to say, we have a lot to thank Apple for, you know, really being a flag bearer for doing wearables well, at a price point, and with sort of really lovely, delightful functionality that more and more of us can get behind. I think before Apple did the Watch and AirPods, it was challenging. You know, there were early pioneers like Pebble, the Pebble watch, and that was super cool, open source. But I think the big hurdle is glasses. You know, with a watch or with AirPods, you're opting in at a moment in time, in terms of wearing it, being perceived wearing it, and using it. Glasses are just on you, and you're not opting in or out, you know, they're part of the wardrobe, at least the way that many of us wear them today. And so you're really buying into it as this pervasive fashion product in your life, maybe the most visible part of your wardrobe. And so I think the hurdle is a lot higher there. So that's why we knew that as a first step, we have to zig where others have zagged, so to speak. We need to do something that is not only functional and priced right, we need to do something that looks beautiful, that is thin, that is lightweight. And so, you know, Frame weighs less than 40 grams, very light, and it's super thin, both around the lenses and down the arms. There's no awkward bulk. Some of these glasses have this wire-thin frame, but then all of the chaos of the household has been sort of swept under the rug, and it's in the back, so there's this massive battery that kind of hugs your temples, but they don't want you to see that. So I think the bar is high in that regard. If you can cross that, or if you can kind of overcome that hurdle, which I think we are starting to, I think there's so much more that we can do, but I think that we're starting to get there, then you start to think about, okay, there's necessary versus sufficient, and I think that's where generative AI comes in. I think being able to deliver momentary, near-real-time generative AI experiences gives you this sort of superhuman feeling. They can look up something around you, they can give you directions where you need to go, they can recognize the object in front of you to illuminate some context or helpful information, they can help you pick up on the mood in the room. And I'm not being hyperbolic, with GPT-4o on Frame, I literally asked, like, how is he feeling over there? I see a terse expression, what's going on? And these are interesting sort of cues that AI can help us pick up about our world and our fellow human beings. And so, especially if you're someone who is somewhere on the autism spectrum and has trouble picking up some of these cues, especially if you're someone who's hard of seeing or hard of hearing, having some transcription, or having some helpful cues to indicate what's around you, who's in front of you, signs on the wall, these are particularly acute problems, and glasses like this with generative AI can start to solve them. But for the rest of us, it's an exciting time.  

 

Rob Stevenson  10:40  

Yeah, the whole part of it being on your face is a hurdle, as you call out, right? It's that you can't hide it. It's not like a watch, where you can look at it or decide not to wear it. With glasses, you're using it or you're not, which is to say you're wearing it or you're not. At the same time, though, there's plenty of people who don't care about fashion. I see it constantly everywhere I go, here in Denver, Colorado. But for those people, if the tool was good enough, if the use case was good enough, they would wear it. So has that been the problem? Do you think a lot of this tech just hasn't really, frankly, been that good?  

 

Bobak Tavangar  11:14  

I think so. I think that's definitely been part of the problem. Maybe this comes across as harsh, but, you know, I think it hasn't been good because no one serious has really tried to do it, and do it in a way that isn't just, like, putting a game constantly in front of your face. There's plenty of people trying to do that, and I wish them good luck. But doing something in a thin form factor that's genuinely useful, which we believe should be generative-AI oriented, no one serious has tried to do that yet. I think the first real player that I've seen that got their device out before we did is Facebook, you know, the Meta Ray-Bans. And so you can definitely see in their roadmap, they're gonna pick up speed on that. I'm sure they'll get display optics in there, so they can start to do some heads-up display stuff to complement a text-to-speech AI experience. So they've got their Snoop Dogg personality in there. They love that stuff, their avatars and their Snoop Dogg personalities. That's Facebook. And so they're a serious player, and to the tune of bleeding, what is it, $15 billion a year on their Reality Labs division. So that's serious. That's a lot of money. And I would never doubt the seriousness of their intention. And they're founder-led, you know, Mark still runs the show, and that means so much. So I think for the first time, we're going to start to see this space really pick up speed, between startups like us, and we're not the only one, we're not going to be the only one, but also large companies like Facebook, and you can only imagine there's a bunch of others working on it.  

 

Rob Stevenson  12:44  

Yeah, of course. So when it comes down to being useful, the utility, where did you begin? Because you started out with not merely, okay, it needs to look cool, but it also has to do something that's really useful. You rattled off a bunch of awesome use cases that I think make the case well. But where did you start? What did you think was the most important thing to deliver?  

 

Bobak Tavangar  13:03  

Where I started, honestly, was with this feeling of how it should feel, this notion that there should be trust. Like, it's kind of looking out for you, like literally looking out for you. Our memory is fallible, and we think we see what we see, but we don't always know what we see or remember what we see. To have a second pair of eyes that can connect everything we see with all the information on the web and everything that we've seen previously, that is an incredibly powerful thing. So that's very much where we're going with this. And so I knew that trust had to be really important. But I also knew it needed to feel like that sort of nerdy friend that phrases things in a way that always brings a smile to your face, like, you know, when you're walking down the street with them. Maybe not all of us have had this experience, but, and we're gonna bring Lord of the Rings back here, it needs to feel like Samwise Gamgee. Sam was a little bit sort of quirky and nerdy, or at least he was sort of idiosyncratic in the things that he cared about, and he was always bringing them up. But he was always able to bring a smile to Frodo's face, and Frodo knew that Sam was always looking out for him. Even if he wasn't the most graceful, he was always looking out for him. And so I think it kind of needs to feel like that relationship, hopefully with a little more grace and tact, but, you know, it's always got to be informative, interesting. You know, eventually these things are going to become agentic. They're gonna be able to execute on things for you and be able to do things for you, not just tell you about things, but actually do them. You can give them commands. And so it'll be actually more like Sam in that regard as well, you know, go and get this done for me, go and help me with this. But it'll also just bring a smile to your face. I think the relationship needs to be characterized by friendliness and a bit of wit, and so we're trying to design that. And so I think in terms of crafting the experience around this whole thing, that's where we started. Beyond that, I think we knew there was a lot around, well, we just observed what companies like Perplexity have been doing in AI search. Really, really cool company, really, really cool product, and many of us use it now. So to be able to help us evolve beyond ten blue links, to get a precise answer to a really complex, compound question, and yet to be in the moment, in the middle of a workflow. Maybe you're walking through a city, or maybe you're just sitting in a classroom, or you're working through something, repair, maintenance, on-the-job training, whatever it might be, sort of in the enterprise context, whatever it is you might be doing, or you're just sitting on the grass on a beautiful day, and a question pops into your head, you know, like, hey, when is that spring festival happening in my city? And are there still tickets available? Where can I buy them? And how much are they going to cost? Questions bubble up to the human mind all the time. That's why Google has been so successful. And so we felt that needed to be embedded into your everyday moment, without you needing to pull out your phone. So that was one, I think our web search is really compelling. But more so if you can combine it with what you see. So being able to walk into a store and see a beautiful jacket on display, and be like, hey, this thing's awesome. How much does it cost?
And can I get it 30% off somewhere else on the web? Like, that's powerful. So there's a lot there that I think we're eager for people to kind of tinker with. And then of course, you know, we have GPT-4o, we have the Claude 3 models, we have a bunch of these sort of general-knowledge and search models integrated into Frame. And I think that's helping people just understand the nature of reality in all of its different forms. And so beyond that, it's very much at the stage where we believe it has to be open source, because there's a principle there, but there's also a practical sort of element of, we've got to all be playing with this technology, we've got to all be experiencing what this is, not just as a static kind of thing sitting on a laptop at a desk, but as something woven into our daily lives, as we're just out and about doing what we do. And so that's where we felt really strongly it needs to be embodied, it needs to be open, because we need to let developers sink their teeth into it and really unpack what this can be for all of us.  

 

Rob Stevenson  17:05  

So that is really helpful to understand your decision making, like, how should this feel, right? What should the experience be? Trust, of course, super important. Seamlessness, like, what are people going to expect this thing to do? So yeah, those guiding principles are definitely helpful. If I were to put it on right now, though, what can I do?  

 

Bobak Tavangar  17:23  

Out of the box, you've got a couple of things. You've got GPT-4o, that's baked in out of the box. It's all a free tier, and so we have some pretty generous daily limits of usage. And so people can just get running with GPT-4o, querying what they see, what they hear. That means GPT-4o is incredibly useful to analyze what you see, you know, whatever you see, you're just asking questions about what's around you, what you hear, and you can do live translation, or translation of what you see on a sign in front of you, or on a menu. And then you've got Perplexity, so the web search stuff. You know, just last night, I'm here in a hotel, and so I want to know, are there any great restaurants near me that meet the following criteria and aren't going to break the bank? And Perplexity was able to do a phenomenal search within my specified radius and let me know the precise restaurant, what people say about the top dish there, how much it's going to cost. And then I was able to follow up and say, great, now tell me how to get there from the hotel. So all of that was a hands-free workflow as I was walking out of my hotel room. So it can do that out of the box as well. And then we've got image generation in there. It's probably the most experimental. We're big fans of image gen and what that can do. I think having image generation in front of your eyes, we kind of did it because we thought it would be like a pseudo-trippy experience, like, hey, I see this really beautiful scene, what if I could turn the sky purple? And what if I could put the dog on that roof? And then I can iterate on this, and then I can share it with my friends for a really interesting, sort of goofy riff on reality. So we wanted to have that in there, because we want people to have all these tools available to them as they paint on this canvas of embodied AI. And so that's what's out of the box today. And then again, it's open, so people can riff on that, if they're comfortable getting a little technical.  

 

Rob Stevenson  19:18  

Sure. Yeah, the image generation is fun. The obvious use case would be like, oh, I want to redesign my room, and how would the couch look over there? Right. But then you could also go, what if the couch was tie-dye? What if the couch was upside down on the ceiling? You know, that's right, the Minecraft, Roblox effect of, what happens if you gave that to a bunch of 14-year-olds? Then you can start to really see what something can do, you know? Absolutely. So then the device would pair to your phone, and your phone needs to connect to the internet. Is it all cloud-based, or is anything happening on the edge?  

 

Bobak Tavangar  19:48  

Yeah, so we've got some of what they call tinyML, so really simple machine learning workflows that happen on the edge. Today they mostly just clean up what comes in from the sensors on Frame. And then, of course, all of that gets offloaded. Right now, you know, Frame kind of acts as the sous chef. It pulls in the source ingredients, washes everything, chops and cuts everything, prepares it very nicely, and then passes that on to the phone, which at this point really acts as a relay to some of these powerful foundation models. And so our assistant, Noa, its job is to understand the intention and the nature of your query against the context of where you are, your previous queries, so kind of the context there, what you're looking at, and then of course surmising the intention of your question. So all of these things are kind of baked into how it tries to understand your intention, and then it selects the right tool. So, okay, I think Bobak is looking for the right restaurant around him. That's a live query, something that involves live web search, which needs to be synthesized, and so Perplexity is the right tool for the job. But then it might be, you know, I asked the other day about something, my daughter burned her hand, and, you know, hey, is it a second-degree burn? What's the best way to treat this with an at-home remedy? Should I run it under water? Should I directly apply aloe vera? All of these questions, and it was able to query a mix of GPT-4o, but then also Perplexity for something live on the web as well. So Noa's job is to route to the right model or the right AI service and bring you the answer. And so that all happens up in the cloud right now. Over time, we are keeping our eye on what I think is an exciting trend, and I think Microsoft Build this past week, or over the past few days, is accelerating this: more and more is going to be able to happen on your local device silicon, whether that's your laptop, I think Microsoft showed some really cool stuff like ARM-based processors sitting in the new laptops that are able to run a powerful model at the edge, or, increasingly, your phone. So, you know, we've got all eyes on WWDC next month, because hopefully Apple makes an announcement in that regard. If Apple does it, you know, Qualcomm is going to follow, so all the Android handsets are going to follow. Every one of us has a phone in our pocket. And so that means that we potentially could have a large language model running locally in our pocket. That is great for low latency, that's great for lowering cost, you don't have to pay for it to be managed up in the cloud, and you can just have near-real-time dialogue with your increasingly personal AI system. So we're keeping an eye on that, because that bodes really well for us. I think some of the other AI hardware companies that have launched recently, they're trying to run counter to, or oppose, the phone in some way. And we were never about that. We were always very pragmatic: the phone is useful, it's not going anywhere, it's incredible hardware that we all use for very real daily workflows. And so we should seek to pair with it, we should seek to work with it, maybe nudge your behavior a little bit so you're not always pulling out your phone, but pair with it and make use of it, even if it's just locked in your pocket. So having a model running on your phone, it couldn't come sooner in our book. And in fact, it's coming sooner than we thought it would. 
But yeah, that's, I think, the picture of how this is going to start to shape up in terms of that edge and sort of centralized cloud story. I think more and more is going to be pushed to the edge. I am curious, though, as some of these larger companies like Apple, like Microsoft, begin to partner with other folks like OpenAI, or Google, or others like Anthropic, how many of these models which run locally on hardware that we carry with us will be sort of locked to the device, because of the nature of that partnership or the nature of the platform provider? And how much leeway will each of us users have to swap that in and out? Obviously, we would favor the latter. But I think with browsers, that's how it works today. You know, on my iPhone, Google is rumored to pay Apple something above $20 billion a year to make Google's search engine the default search engine on the iPhone, but you can swap that for Bing or DuckDuckGo or any number of others. I wonder if we'll be able to do that as well with a model provider and how they integrate locally on your hardware. So I am curious to see how that shakes out.
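To make the "sous chef" pipeline Bobak describes a little more concrete, here is a minimal sketch of how a glasses-to-phone-to-cloud relay might be structured in Python. It is illustrative only, not Brilliant Labs' actual code: the class names, the on-device cleanup step, and the relay_to_cloud stub are all assumptions.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QueryContext:
    """Everything the phone packages up before relaying to a cloud model."""
    transcript: str                      # what the user asked (speech-to-text)
    image_jpeg: Optional[bytes] = None   # cleaned-up frame from the camera, if relevant
    location: Optional[str] = None       # coarse location, e.g. "Hong Kong"
    history: List[str] = field(default_factory=list)  # previous queries for context

def clean_sensor_frame(raw_frame: bytes) -> bytes:
    """Stand-in for the tinyML step on the glasses: denoise, crop, and compress
    the raw sensor data before it ever leaves the device."""
    return raw_frame  # real device code would run lightweight ML here

def relay_to_cloud(ctx: QueryContext) -> str:
    """Stand-in for the phone's relay role: forward the packaged context to
    whichever foundation model the assistant selects."""
    return f"[cloud answer for {ctx.transcript!r} with {len(ctx.history)} prior turns]"

if __name__ == "__main__":
    frame = clean_sensor_frame(b"raw-camera-bytes")
    ctx = QueryContext(
        transcript="Are there any great restaurants near me that won't break the bank?",
        image_jpeg=frame,
        location="Hong Kong",
        history=["What's the weather like today?"],
    )
    print(relay_to_cloud(ctx))

In a real system the relay step would call out to a hosted model; the point of the sketch is only the division of labor: lightweight cleanup on the glasses, context packaging and routing on the phone, heavy inference in the cloud.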

 

Rob Stevenson  24:16  

I wanted to ask you, too, you mentioned a minute ago that the model can decide whether to query GPT-4o or Perplexity. How does it know?  

 

Bobak Tavangar  24:25  

There's something called function calling. And so it takes in all of the context you can provide it. And so we give it elements of, like, vision, you know, what the device sees and what appears, but also a history of your previous questions. It also analyzes the nature of the question, and then it takes in your location as well. So over time, we're going to be able to broaden that even more, but it takes in all of these inputs, and the model surmises, okay, he's asking for restaurants nearby, I think that's a live-web type of query, so I'm going to route it over to Perplexity. Now, the reason it's able to do that, the reason it's able to surmise that way, is because we have written all of these descriptors around the various tools that the model can select. So for example, we'll describe the tool, and we'll describe the nature of the questions which should be routed to that tool. And so then the model will be able to understand the incoming query and be able to make the decision, or surmise, oh, hey, this kind of matches the description, like my understanding, it's kind of like a prompt, you know, my understanding of web search, and okay, I think, therefore, I ought to route this in that direction. So we've needed to kind of go through and identify these tools and all the associated prompts so that it does that effectively. But that's an ongoing Sisyphean task.
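For readers who want to see what this kind of tool routing looks like in practice, here is a minimal sketch using the OpenAI-style function-calling (tools) interface in Python. The tool names and descriptions are invented for illustration, not Brilliant Labs' actual descriptors, and a production assistant would fold vision, location, and query history into the prompt as described above.

# pip install openai  (requires OPENAI_API_KEY in the environment)
from openai import OpenAI

client = OpenAI()

# Each tool gets a natural-language description; the model matches the
# incoming query against these descriptions to decide where to route it.
tools = [
    {
        "type": "function",
        "function": {
            "name": "live_web_search",   # hypothetical: e.g. backed by Perplexity
            "description": (
                "Use for questions that need fresh information from the live web: "
                "nearby restaurants, prices, events, current news."
            ),
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "general_knowledge",  # hypothetical: answered by the model itself
            "description": (
                "Use for general knowledge, reasoning, or questions about what the "
                "user currently sees, where a live web lookup is not required."
            ),
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Route the user's question to the best tool."},
        {"role": "user", "content": "Are there any good noodle shops within a 10-minute walk?"},
    ],
    tools=tools,
    tool_choice="auto",  # let the model pick a tool based on the descriptors
)

# The model's choice comes back as a tool call the app can dispatch on.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)

The application would then dispatch the chosen tool call to the matching backend (for instance, a live web search service for the first tool) and read the answer back to the wearer.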

 

Rob Stevenson  25:52  

So when you think about GPT-4o versus Perplexity, how do you compare and contrast? And why do you need both?

 

Bobak Tavangar  25:57  

That's a good question, and we've sort of debated it internally on the team over the last period of time, as we've been building out the Noa assistant side of things, which integrates with both. Each obviously has their strengths. GPT-4o is a relatively new model, and it's phenomenal. For live web search, Perplexity, like any tool, it's got its hiccups and challenges, but it is the most sort of precise, it gives the most meaningful answers from the live web. GPT-4o cannot query what's live on the web. And so that's a powerful distinction. I think, personally, it's a matter of time before OpenAI decides to kind of extend a tentacle in that direction. It's going to be interesting to see how competition shakes out in that corner of the market, because I think both OpenAI and Perplexity fashion themselves as sort of taking on Google, and I think Google is very quickly trying to respond with their own compelling models and riffs on search, and they're trying to evolve things so that they stay relevant. But I think it's a matter of time. Perplexity is interesting because, under the hood, they're kind of model-agnostic, like us. So they've got this omni-model, or model-agnostic, approach where you can pick, you know, I want Perplexity running on Llama, or I want it running on Mistral, or, you know, they're even doing an integration with one of Google's models. And so I think the ability to kind of pick and choose the model and have the Perplexity layer running on top is a really powerful one, because there's just going to be so much innovation in that sort of foundation model world. At this stage, you don't want to be locked to one model or another. Because in April, or whenever Anthropic released the Claude 3 models, they were hot, and they were awesome, and they were fast, and they were cheap. And then all of a sudden GPT-4o came out and said, whoa, like, OpenAI is back in the fight. And it's only a matter of time before, oh, of course, and then I'm missing, you know, Google, they released this large model that has an insane context window, so suddenly you can throw a whole video in there and you can query against that. And I'm sure it's just a matter of time before the next left hook comes out of nowhere and all of us are scrambling again to figure out the landscape. So this space is changing so fast. It's exciting, it's good for all of us, but we don't believe you should ever be locked to one model or another. So we like that Perplexity mimics our approach in that regard. Today, we use OpenAI, and OpenAI is insanely capable, but we've got to hope that they stay the most capable, because the competition is pretty stiff in that world right now.

 

Rob Stevenson  28:41  

Yeah, I tend to agree with you that you should not limit yourself to one model, because then you're just beholden to them, right? And we've all seen the various misfires, like, what is this thing trained on? What is the output going to be? We've seen those swings and misses. And in your case, a swing and a miss is now two inches from someone's face. And your average user is not going to be like, oh, GPT-4o, you rascal, that's not right. They're gonna blame you. They're gonna be like, these glasses suck, you know. So you have to be agile. I also liked the way you put it, that maybe OpenAI would put a tentacle in the direction of, like, indexing on live web searches. That got me thinking that, especially as you think of all of the different sources that these LLMs are trained with, I could foresee, as a user, being able to toggle where you wanted your results to come from. Because there's a trend now of, like, oh, this LLM is trained on my own proprietary company data, and so that's really valuable and very specific and niche and contextual, and you get a better answer than you would just asking GPT, for example. But as we become more nuanced users, in the same way as it's like, oh, I want to do a Google Image Search, or I want to turn off my location, I want it to be a location-agnostic search, maybe those are filters and preferences and levers we can pull within these LLMs, to be like, don't show me anything that was trained using a live web search, because I don't want to know what some random blogger thinks, for example.  

 

Bobak Tavangar  30:05  

Absolutely, yeah, absolutely. The other interesting part of this is just gauging the sort of collective public reaction, sort of the sociological elements of how comfortable people are with this and what knobs they want to be able to turn. And so, yeah, I just think we're so early with all of this stuff, and that kind of ties back into why we're open source, and why we are really seeking to approach this with more of a humble posture, where we're learning together with our community, rather than deluding ourselves into thinking that, oh, we figured it out, like, you know, I give you iPhone. This is not the time for that. It's the time to be open and have a real humble posture of learning, because it's all moving very, very fast.  

 

Rob Stevenson  30:52  

Very, very fast, very exciting, Bobak. This has been a ton of fun talking to you about this stuff, all about wearables, but also the tech inside. I wish you all the best, and I'm going to be following this company very excitedly, because I hope this works. I can see how fun they would be to use. So I might be sneaking over to brilliantlabs.com right now and entering offer code "how AI happens" for 80% off. It doesn't exist, unfortunately. I'm not, like, I don't have that Oprah look-under-your-seats swag. In any case, Bobak, thanks for being here, man. This has been a ton of fun chatting with you today.

 

Bobak Tavangar  31:22  

Thanks, Rob. And if people want to learn more about us, they can go to brilliant.xyz, so brilliant, spelled the usual way, dot XYZ. And I'm on LinkedIn, I'm always open for DMs. So thanks for having me.

 

Rob Stevenson  31:37  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.