Sean explains the ins and outs of Algolia's disruptive vector search technology, and why vector search provides much more relevant results than keyword-based search.
Algolia is an AI-powered search and discovery platform that helps businesses deliver fast, personalized search experiences. In our conversation, Sean shares what ignited his passion for AI and how Algolia is using AI to deliver lightning-fast custom search results to each user. He explains how Algolia's AI algorithms learn from user behavior and talks about the challenges and opportunities of implementing AI in search and discovery processes. We discuss improving the user experience through AI, why technologies like ChatGPT are disrupting the market, and how Algolia is providing innovative solutions. Learn about “hashing,” the difference between keyword and vector searches, the company’s approach to ranking, and much more.
Key Points From This Episode:
Tweetables:
“Well, the great thing is that every 10 years the entire technology industry changes, so there is never a shortage of new technology to learn and new things to build.” — Sean Mullaney [0:05:08]
“It is not just the way that you ask the search engine the question, it is also the way the search engine responds.” — Sean Mullaney [0:08:04]
Links Mentioned in Today’s Episode:
Sean Mullaney 0:00
You need to have AI end to end, you need to have AI in the understanding, the retrieval, the ranking. And this middle piece has been the missing piece. We've been so excited because we can now have AI in retrieval at scale at cost with low latency.
Rob Stevenson 0:17
Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens is the Chief Technology Officer over at Algolia, Sean Mullaney. Sean, welcome to the podcast. How the heck are you today?
Sean Mullaney 0:54
I'm doing great, Rob. Thank you so much for having me on.
Rob Stevenson 0:56
Thrilled to have you, and I love that you are broadcasting in from your home office slash fitness studio slash wood-burning fireplace behind you. It's a unique setup you have.
Sean Mullaney 1:08
Yeah, it's a rare sunny day in Ireland here. So I'm based in Dublin.
Rob Stevenson 1:12
Well, don't hold your breath. I'm sure you'll get a cascade of rain at any moment. That was my experience in Dublin: beautiful sunshine, torrential rain, beautiful sunshine, just kind of on-off, on-off.
Sean Mullaney 1:22
They say if you don't like the weather, wait five minutes.
Rob Stevenson 1:24
Yeah, just wait around, it'll change. Exactly that. I love Dublin. Loads of live music, and the people there were so kind. You're lucky to get to live there full time. Yeah, it's a fantastic city to live in, and a fantastic city to build artificial intelligence in, so it happens. Seamless segue, how do you like how I did that? Yeah, we have so much to get into here, Sean. There's this new technology that you are debuting, and you have all these great takes on keyword versus vector search. We're going to get into all that. First, though, let's learn a little bit more about you. Would you mind sharing with us about your background and how you wound up at Algolia?
Sean Mullaney 1:58
Sure, yeah. So I joined Algolia as the Chief Technology Officer about four and a half months ago. But before that, I've had a journey through the world of e-commerce and AI. I was at Stripe, where I helped build up their European payments engineering team and got to see the late stage of the conversion funnel in e-commerce, where people are actually going to pay, and helped to optimize that. Before that, I worked at a company called Zalando, which is Europe's largest fashion marketplace with about 50 million active customers, and I got to build a lot of incredible AI around search and discovery: bringing AI into the search engine, personalization, recommendations, the browsing experience, and all the data infrastructure around that. So really the middle bit of the commerce experience. And then before that, I was at Google for eight years, and really got to understand how advertising works, a very data-driven business, and how retailers could acquire customers and optimize that part of the e-commerce journey. It was really at Google, actually, that I got exposed to AI for the first time in production systems at scale. When I joined Google, it was a mobile-first company, really focused on the Android operating system during that big mobile explosion. But about halfway through, Google actually decided that they were going to become an AI-first company, and every engineer in the company got to go through AI boot camps and learn how to code in TensorFlow. That really started my passion for AI: seeing how AI could operate at Google's scale, but also how it could affect e-commerce. Prior to that, I was a startup founder for about 10 years; I had three startups just after the dot-com boom. And I'm very much an engineer. I've been programming since I was about 10 years old, and I have absolutely loved being in technology every minute since.
Rob Stevenson 3:45
What was the first coding language you messed around with?
Sean Mullaney 3:48
Oh, it was BASIC. I remember when our first computer appeared, when I was about 10 years old, and the very first thing I wanted to do was figure out how to write a text adventure game. I got the book out, wrote out all the code, and then tried to figure out how to change it. That kind of started everything.
Rob Stevenson 4:05
So your print "Hello, World" was like, you enter into a foyer, obvious exits are north, south, and east.
Sean Mullaney 4:11
Yeah. I've always been fascinated by video games as an on-ramp for kids into coding and software development.
Rob Stevenson 4:18
Have you been able to keep that passion alive? Do you game dev on the side, you know, kind of a quick Tetris or something? I mean, the kind of thing that you could put together in Unity or a similar tool is a lot more advanced now, but do you get to scratch that itch at all?
Sean Mullaney 4:32
No, but I've been trying to work with my kids. I've got a five-, seven-, and ten-year-old at home, and I'm trying to get them excited about programming, and video games are definitely the way they get excited about computers.
Rob Stevenson 4:42
Yeah, is that like Roblox? Or how do you facilitate that?
Sean Mullaney 4:46
Yeah, there are some online courses that we use that really make it fun and interesting for them. I've tried to hack around with a Super Mario game in JavaScript with my oldest one, but we're still working on it.
Rob Stevenson 4:58
Hey, it's never too soon to expose kids to that kind of technology, and I love that they can kind of play along with video games. My parents were concerned that video games were rotting my brain, and now this generation is like, no, video games might give you a lucrative career someday, let's make sure we steer into this.
Sean Mullaney 5:13
Yeah, absolutely. Anything that gets them excited about computers and coding, I'm all for.
Rob Stevenson 5:18
That's fantastic. So you're an engineer by trade; that is sort of your identity. I've spoken to other CTOs, and often leading the technology department is like a stop on the path to becoming a CEO. But you're born and bred, so to speak, as a technologist, which is good to see.
Sean Mullaney 5:40
The great thing is that every 10 years the entire technology industry changes, so there's never a shortage of new technology to learn and new things to build. I love building things, but I also love the commercial aspects. I love the product design piece of it, and I love working with customers and solving problems. So I think you have to be excited about the full product development process, not just the engineering piece, in order to be successful as a CTO.
Rob Stevenson 6:05
Yeah, of course, being able to see the full pipeline, I guess, or a full cradle-to-grave approach. So starting with Google, and then onwards in your career, up until now at Algolia, it sounds like search has been a common thread for you. When did you start to notice that there were some serious limitations to the way consumers experience search?
Sean Mullaney 6:29
Sure. I mean, the original search engines back in the '90s were all based on the same keyword search technologies that a lot of companies, particularly in the e-commerce sector, are still using today. So it's been, you know, 20 years. E-commerce has really taken off, search engines have grown enormously, but a lot of the base technology hasn't changed very much. When you shop online at the moment, you still have to think twice about how to tell the computer what you want. Humans have to adapt to the fact that a lot of these websites are just databases with a nice user experience on top. As a human, you've got to adapt to the computer: you've got to figure out exactly which keywords to put in, which filters to play with, and which categories to browse. So there's a lot that you have to do as a human to adapt to Google, or to adapt to your favorite website, and the technology itself really hasn't moved on a huge amount. There's been a little bit of AI added here and there, but it hasn't really been transformed in a way that feels natural, feels very human, until very recently, with these breakthroughs in large language models and ChatGPT.
Rob Stevenson 7:41
Such a good callout. I think a lot of people don't recognize that we sort of speak searchese when we're entering a search query. For example, if I wanted to know where to go out in Dublin, I might say, hey, Sean, what's a good restaurant near you in Dublin that I should check out? But I wouldn't march up to you and go, restaurants Dublin, which is what I would do if I were trying to make a search query. Similarly, there's a well-worn joke, a screenshot of someone's grandparent searching Google with something like, do you have any recommendations for restaurants I can go to? That's actual language, which is sweet, and it's funny, like, haha, Grandma, come on, that's not how you talk to Google. But the contention is, well, maybe it should be. It would be a lot easier for people. Surely people would get better search results if they could speak to a search engine the way they might speak to a person, right?
Sean Mullaney 8:34
Yeah, absolutely. But it's not just the way that you ask the search engine the question, it's also the way the search engine responds. So even today, in 2023, we still get like 100,000 blue links from Google, where it's like, hey, here's some stuff that you might want to go and read. You still have to do all the work to go through all these pages. Similarly, big e-commerce sites present you with like 100 pages' worth of products that you still have to go through to figure out which ones are relevant for you or not. So although we've had these great advancements in AI, the experience for customers hasn't really changed a lot in the last 20 years.
Rob Stevenson 9:09
Yeah, and the whole 100,000 blue links thing is a great way to put it, because who needs 99,950 of those links, right? When's the last time anyone got past page three of Google results? I never get that far. But point taken: usually you make a search, the first few results are ads, and then it's usually someone who's a savvy marketer who has figured out how to rank well. And that's not necessarily going to be super relevant to your need; it's just taking advantage of a system that's there to be taken advantage of.
Sean Mullaney 9:45
This is why ChatGPT has been such a huge consumer phenomenon. Something like 100 million users have adopted it in the last few months. We've never seen a product take the zeitgeist of the world by storm so quickly, and it's because it's the first time that users have really experienced that they can talk to a computer in a human way, using their own natural language, and that the computer will respond in a very human way, with natural language as well. It's not just giving you the 100,000 blue links and telling you to do the work; it's actually giving you the answers, and it's explaining stuff to you. I think that's really elevated expectations. Consumers are now going to expect this kind of technology to be available everywhere they go, whether that's Google or a local e-commerce store that they want to buy a product from.
Rob Stevenson 10:29
Right. And that difference is not merely in the way it's presented to the consumer, right? We're speaking about keyword search versus other kinds of search.
Sean Mullaney 10:38
Yeah, absolutely. So the first thing that you have to think about in this kind of quantum leap is what we're doing with the language. In a keyword search engine, you're taking the actual words and using them to match other words in the product, or other words in the webpage, so it's a very simple kind of algorithm. But what ChatGPT does is use these large language models, and these large language models are able to understand the concepts behind the words. They're able to encode them in such a way that similar concepts are actually near each other in vector space. What this unlocks is a very powerful way of matching things, one that really understands the context and the concept that the user is looking for, regardless of whether the words match. And so this is going to unlock a fundamentally different experience in the way we search.
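To make the vector space idea concrete, here is a minimal sketch with made-up three-dimensional embeddings (real models emit hundreds of dimensions learned from text): conceptually similar items score high on cosine similarity even when the surface words differ.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy, hand-picked "embeddings" purely for illustration.
sneakers = np.array([0.9, 0.1, 0.2])
trainers = np.array([0.8, 0.2, 0.1])  # different word, same concept
blender  = np.array([0.1, 0.9, 0.7])  # unrelated concept

print(cosine_similarity(sneakers, trainers))  # high: near each other in vector space
print(cosine_similarity(sneakers, blender))   # low: far apart in vector space
```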
Rob Stevenson 11:32
How can you train results on context?
Sean Mullaney 11:36
Yeah, so the first thing is that these large language models are trained on a huge corpus of human language: trillions of webpages and books and articles and tweets, all sorts of stuff from all around the internet. What they've been able to do is map all of these words, sentences, and phrases into a vector space where we're able to really understand the nuance behind these concepts. We've been able to do this for a while; the idea of embedding human language into vectors has been around for a decade. The problem is, first of all, we've never had models that are this large. The size of these models has increased by something like 100-fold, and with the increased size of the training data, they've become extraordinarily sophisticated. They can now mimic understanding of human language in a way that has reached a quality threshold where humans actually feel it can provide value for them. So the hard bit has not necessarily been understanding that vectors are the way forward. The hard bit is, once you actually encode something with these vectors, how do you find, out of all the web pages on the internet, or all the products on a huge e-commerce site, the similar vectors? This similar-vector search problem has been like the Holy Grail for the industry. And I'll tell you the hard bits about vector search. Firstly, with the explosion in the size of models, these vectors have gotten huge, right? They're like 500 floating-point numbers, 1,000 floating-point numbers. They're really big to store, and they're also really expensive to compare against each other. Computationally, you need a lot of GPUs to run these comparisons across big indexes of web pages or products. So the industry has really been searching for a way to both store and search very large indexes of vectors at scale, at speed, and at a reasonable cost. And we've gotten very close. There are techniques like nearest neighbor and approximate nearest neighbor, which is considered the leading way to solve this problem at the moment, and there are companies like Pinecone and Weaviate and others who've been able to optimize this methodology, but it's still very expensive and slow, and this approach hasn't scaled to the level of a lot of production systems.
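A back-of-envelope sketch of the scale problem Sean describes, using the vector sizes he mentions (the catalog sizes here are invented for illustration):

```python
import numpy as np

# 1,000-dimensional float32 vectors: storage alone is substantial.
dim = 1000
print(1_000_000 * dim * 4 / 1e9, "GB to store one million raw vectors")  # ~4 GB

# Brute-force search is one dot product per stored vector, per query.
# A smaller catalog here so the sketch runs quickly on a laptop.
catalog = np.random.rand(100_000, dim).astype(np.float32)
query = np.random.rand(dim).astype(np.float32)
scores = catalog @ query               # roughly 200 million multiply-adds
top_10 = np.argsort(-scores)[:10]      # ten best-scoring items
```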
Rob Stevenson 14:04
Can you speak a little bit more about the problem of matching vectors to one another?
Sean Mullaney 14:10
Yeah. So when you match words together, you just match the characters. When you match two vectors, what you're trying to figure out is how far apart in high-dimensional space they are from each other. If you remember Euclidean geometry, you can use the Pythagorean theorem to figure out the distance between two points. Doing that in two-dimensional space is relatively easy; doing it in 500- or 1,000-dimensional space requires a lot of computation. And so when you have to search through a very large index of vectors, which is your webpages or your products, trying to match them with a query, which is also a vector, it ends up taking a lot of time, a lot of computation, and a lot of memory.
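In symbols, that is the Pythagorean theorem generalized to n dimensions: the distance is the square root of the sum of the squared per-dimension differences. A minimal sketch showing the same formula covering both the familiar 2-D case and a 1,000-D one:

```python
import math

def euclidean_distance(p, q):
    """Pythagoras generalized to n dimensions."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

print(euclidean_distance([0, 0], [3, 4]))              # 2-D: the familiar 5.0
print(euclidean_distance([0.1] * 1000, [0.3] * 1000))  # 1,000-D: same formula, 1,000 terms
```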
Rob Stevenson 14:52
What solutions are there to the problem of it being expensive and taking lots of time? Are there compression techniques? How would you go about making this more affordable, accessible, et cetera?
Sean Mullaney 15:03
Yeah, so at Algolia we've been developing keyword search for the last 10 years. We're like the leader in this; we're actually the second biggest search engine in the world. We serve about 1.5 trillion queries for about 17,000 customers who run e-commerce and other websites around the world. And we've been searching for a way to solve this problem for many years now. We've known, just like everyone else in the industry, that vector search is the future in terms of human-language understanding. But because we operate at such high scale with customers in the enterprise space, we knew that we couldn't really introduce a product until we solved this. So we're very excited: we've just launched a beta program with a technique we call hashing. The idea is that you can compress vectors using another AI algorithm, called a hashing algorithm. You have to compress them in a very specific way, so that we're able to retain pretty much 100% of the relevance and information of the original vector in 10 times less space. So it's extremely compressed in terms of size. But the other incredibly important thing that we trained the algorithm to do is to make sure that we can compare the similarity between these hashes we've created in extremely fast computational time. So the way that we're actually able to solve this vector problem is through these new hash data structures. We're able to scale up to the trillion-query level, and to customers doing hundreds and thousands of queries per second on their websites, whilst using this very powerful vector technology, because of the compression mechanism, and the similarity is built into that compression mechanism.
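Algolia's neural hashing algorithm is proprietary, so as a rough illustration of the family of ideas, compressing vectors into short binary codes whose bitwise distance approximates vector similarity, here is a classic random-hyperplane locality-sensitive hashing sketch (a stand-in for the concept, not Algolia's method):

```python
import numpy as np

rng = np.random.default_rng(42)
dim, n_bits = 1000, 256  # 1,000 float32s (~4 KB) down to 256 bits (32 bytes)
hyperplanes = rng.normal(size=(n_bits, dim))

def hash_vector(v: np.ndarray) -> int:
    # Each random hyperplane contributes one bit: which side is v on?
    # Nearby vectors land on the same side of most hyperplanes, so
    # similar vectors get similar bit patterns.
    bits = (hyperplanes @ v) > 0
    return int.from_bytes(np.packbits(bits).tobytes(), "big")

def hamming_distance(h1: int, h2: int) -> int:
    # The cheap "binary comparison": XOR, then count differing bits.
    return (h1 ^ h2).bit_count()  # Python 3.10+

v = rng.normal(size=dim)
near = v + rng.normal(scale=0.1, size=dim)  # slightly perturbed copy of v
far = rng.normal(size=dim)                  # unrelated vector

print(hamming_distance(hash_vector(v), hash_vector(near)))  # few bits differ
print(hamming_distance(hash_vector(v), hash_vector(far)))   # about n_bits / 2
```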
Rob Stevenson 16:44
Got it. So the searches that are being made by Algolia users at the enterprise level, what exactly are they searching for? They're searching through their own company's data; what is the nature of what they're looking for?
Sean Mullaney 16:56
Yeah, so we have about 70,000 companies whose search and discovery experience we power on their websites, including browse and recommendations. Companies like Under Armour, Stripe, PetSmart, Walgreens, Sony, a lot of big customers, and about 6,000 of those are e-commerce retailers. What we've found, let's take e-commerce for example, is that there's a head of small queries that represents most of the volume, right? This might be people going to a fashion site searching for red shoes, or something relatively straightforward, and keyword matching works pretty well for that, particularly when you have a good product catalog. But about 70% of the queries on these websites are somewhere in the middle, or are long-tail queries, where the user's query is very unique, maybe a one-off query that's never existed before, or it's customers who want a lot more detail in their answer, or who are trying to solve a problem. So I'll give you a couple of examples where keyword search doesn't work as well, but vector search, or neural search as our product is called, works extremely well. A lot of customers will turn up to, let's say, a pharmacy or healthcare website like Walgreens, and they want to solve a problem. Maybe their baby's crying, and they'll type in baby crying, and what they really want to see is colic medicine. Or they'll describe symptoms, and they really want to understand what medicine is available for those symptoms. Or they'll say something like, I want to get moisture out of the air, and what they really want is a dehumidifier, right? But they don't know the exact term for the item. They just know the problem they're trying to solve.
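To see this behavior in miniature, here is a hypothetical sketch using the open-source sentence-transformers library and a public model (not Algolia's customer-specific models): problem statements retrieve the right product even though the shopper never names it.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small public embedding model

products = [
    "Infant colic relief drops, soothes crying babies",
    "Compact dehumidifier, removes excess moisture from damp rooms",
    "Red running shoes with a lightweight mesh upper",
]
product_vecs = model.encode(products, convert_to_tensor=True)

# Problem-statement queries rather than product names.
for query in ["baby crying", "I want to get moisture out of the air"]:
    query_vec = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, product_vecs)[0]
    print(query, "->", products[int(scores.argmax())])
```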
Rob Stevenson 18:41
Yeah, and if you Google baby crying right now, I bet you the first result is a link to, like, a 90-second YouTube video of a baby crying, right?
Sean Mullaney 18:50
Yeah, absolutely. And then there are the ambiguous keywords. A good example I always use is the difference between chocolate milk and milk chocolate, right? Exact same words, but two very different products. Another one that I thought was great was hair growth cream. Someone's obviously looking for some cream to help their hair grow, and the products that get returned on most websites are actually creams for someone with a lot of hair, so they get the opposite of what they're looking for. The other incredible thing we've seen is that these language models have been trained on such a large corpus of information from around the internet that they've learned a lot of things, like brands, for example. A lot of people would go to, let's say, a fashion website, and they would put in a brand name. Let's say you're looking for, I don't know, UGG boots. They would search for UGG, and maybe the retailer doesn't sell UGG boots, so most of the time they'd get a no-results page. But these language models understand that UGG is known for its furry boots, and so it will actually return other furry boots from other brands for you, because it really understands the concepts behind the products that these brands are selling.
Rob Stevenson 20:07
So it's a little bit like Netflix: if you search for a movie they don't have, they don't just say no results found, they give you, like, 90 other movies that are kind of like it.
Sean Mullaney 20:17
Yeah, absolutely. So if you search for North Face and there's none of that brand, it'll return other outdoor jackets for you. And this is really, really important for a lot of e-commerce sites that have a limited brand selection. The other one that I find interesting is expert domains, where people don't know the terms. When I worked at Zalando, I learned that fashion has a lot of very specific fashionista-type terms, and if you don't know the term, you can't actually get any good search results. The one that I always thought was very, very interesting was Chelsea boots. I had no idea what a Chelsea boot was, but it turns out lots of people don't know what a Chelsea boot is, and they search for something like brown or black shoes with no laces and a little pocket on the side. That's basically what a Chelsea boot is, but they don't know the word, so they just describe it in a way that they're hoping will find it. You can imagine that in electronics, for example, people might not know all the technical terminology, or in fashion they don't know the fashionista-type terms. And it turns out all of this adds up to a lot, right? About 70% of the searches that happen on these websites are folks entering something that's not just a standard keyword.
Rob Stevenson 21:31
Okay, that makes sense. In the example that we used, the search for baby crying, you want actual colic medicine, but on Google you might get a video of a baby crying. In that example, is vector search more useful in the colic medicine case because you can assume buying intent on the part of the searcher?
Sean Mullaney 21:51
Yeah, it's a good question. I think the context matters a lot. So what we do with a lot of the websites is we take the click and conversion events that happen on the website. We observe every query, we see where users click, and then we observe where users are checking out and buying things. And what we can do is use all of that event data, that analytics data, to fine-tune and train these large language models. So we take a base model and then fine-tune it to a specific website using their specific customer interactions, and it really, really increases the relevance and recall of the search results. So we have a model for every single one of our customers, and those models are fine-tuned and updated all the time using live data, so that we can get that context and see the feedback that's coming from customers.
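Algolia's per-customer fine-tuning pipeline isn't public, but a common open-source pattern for the same idea is to mine query and clicked-product pairs from the event stream and use them as positives for contrastive fine-tuning. A sketch with sentence-transformers, where the click-log rows are invented:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Invented click events: each query paired with the product the user
# clicked or bought after searching.
clicked_pairs = [
    ("baby crying", "Infant colic relief drops"),
    ("get moisture out of the air", "Compact dehumidifier"),
    ("black shoes no laces pocket on side", "Leather Chelsea boots"),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # base model to fine-tune
train_examples = [InputExample(texts=[q, p]) for q, p in clicked_pairs]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Pulls each query toward its clicked product in vector space; the other
# products in the batch act as implicit negatives.
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)
```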
Rob Stevenson 22:46
Thanks for sharing that about the searches going on in Algolia products. I thought that was important to clear up before we got too deep in the weeds with the compression technology. I would love to spend a little bit more time there. Can you share how that works, how that technology was wrought?
Sean Mullaney 23:01
Yeah, one of the ways people have been trying to solve this for a while is using this nearest-neighbors algorithm, and you can see that the major vector search engines in the market, the Pinecones and Weaviates, even Google's vector search engine, use this nearest-neighbors approach. Folks have been trying to optimize this nearest-neighbors algorithm for a long time, and it requires a very specific data structure, where these vectors have to be organized into a tree or graph or hierarchy so that you can search through them more efficiently. So a lot of the research has been on optimizing this tree data structure, and on optimizing how the vectors are stored, searched, and rebalanced within it. The industry has kind of gotten stuck over there, on this data structure of trees and nearest neighbors. So we were so excited to say, let's actually change the entire way in which we solve this problem. Instead of trying to optimize around nearest neighbors and trees, let's look at compression. And I'm pretty excited. I think we're the only company in the world that has been able to get a breakthrough in the scale, the speed, and the cost of vector search like this. We're very shortly going to be announcing both the commercial product being made available, as well as some interesting benchmarks and data around the breakthrough, so people can see just how much better it is than the other existing vector search approaches.
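To ground what "organized into a tree or graph" means in practice, here is a hypothetical usage sketch of hnswlib, a popular open-source implementation of the HNSW graph index. This is the approximate-nearest-neighbor status quo Sean is contrasting against, not Algolia's hashing approach:

```python
# pip install hnswlib
import hnswlib
import numpy as np

dim, num_items = 1000, 100_000
vectors = np.random.rand(num_items, dim).astype(np.float32)

# Build the navigable graph; M and ef_construction trade memory and
# build time for recall.
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=num_items, ef_construction=200, M=16)
index.add_items(vectors, np.arange(num_items))

index.set_ef(50)  # search-time breadth: higher means better recall, slower queries
query = np.random.rand(dim).astype(np.float32)
labels, distances = index.knn_query(query, k=10)  # approximate top ten
```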
Rob Stevenson 24:29
So if the existing paradigm is conceptualized as a tree, how would you conceptualize the compressed version? Is it like a tree where the leaves are all much closer together, or is there a better way to imagine it in my brain?
Sean Mullaney 24:44
Yeah, I mean, really, what you're trying to do is take this huge vector space and compress it down into a much smaller space, where all the vectors that are conceptually close to each other in the high dimensions are also numerically close to each other in the hash, which means that you can use a kind of binary comparison as you're going up and down the hash. And it took, to be honest, several years' worth of experimentation to figure out how to do this hashing, how to train the algorithms, and how to use data to make the hashes work. It's been a lot of research, a lot of experimentation. But in essence, it is a compression technology, and it is one that requires rethinking both the data structures and the representation of the vectors. What we're excited about is that these hashes can work in a traditional database environment. The nearest-neighbor structures really require a different type of database, a different type of data structure to search through, which means that if you've got your data in a database right now, you have to replicate it and hand it over to someone like a Pinecone, and it's going to be stored separately in their very specific vector database. So there are all sorts of really interesting and important implications of this hashing technology, and I'm pretty excited that we're going to be sharing a lot more for developers, and about the impact on e-commerce specifically, over the coming weeks.
Rob Stevenson 26:11
I'll definitely put some links in the show notes so people can sink their teeth into some of the technical stuff if they care to. Quickly, before we move on, I do want to ask about the binary comparison you mentioned. Is that like comparing the hash, relevant versus not relevant? What is the binary there?
Sean Mullaney 26:27
Yeah, so just like nearest neighbor is trying to find similarity via the distance in high-dimensional space between two vectors, we've figured out a different way to do similarity in vector space, just in a compressed format that we've figured out how to search through much faster.
Rob Stevenson 26:43
Okay. And when you say high-dimensional space, do you mean the distance between two possibly relevant search terms or search ideas? Or is the high dimension more of an indicator of relevance?
Sean Mullaney 26:54
Yeah, so you can think about a vector, let's say it's a 1,000-dimensional vector, okay? What you do is take all your products, or all your webpages, and reduce each of them down into a 1,000-dimensional vector. Then you take a query coming from the user and reduce it down to a 1,000-dimensional vector too. And then your algorithm is really just trying to find the distance between your query vector and all of the webpage and product vectors. These tree structures help make that search process a bit faster, but you still have to compare one 1,000-floating-point-number vector with another to figure out the distance between them in 1,000-dimensional space.
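A literal sketch of that comparison: one query vector measured against every product vector in 1,000-dimensional space, which is exactly the work the tree structures and hashes try to avoid (catalog contents invented):

```python
import numpy as np

dim = 1000
products = np.random.rand(50_000, dim).astype(np.float32)  # one vector per product
query = np.random.rand(dim).astype(np.float32)             # one vector per query

# Euclidean distance from the query to every product vector, across
# all 1,000 dimensions; the smallest distances are the best matches.
distances = np.linalg.norm(products - query, axis=1)
top_10 = np.argsort(distances)[:10]
```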
Rob Stevenson 27:34
Okay, got it. Thanks for clearing that up for me. I'm no expert, you know, so I wanted to stop you to do some quick vocabulary there. Okay, so that kind of outlines how the search goes about finding possible matches. However, it's not going to find just one possible match. In Google's case, it might find 100,000, and presumably in analogous cases it's finding fewer, but more than one. And now the goal becomes, all right, how do we serve these up? How do we rank these, perhaps in order of relevance to the searcher? How are you deploying AI on the retrieved set when it comes to ranking results?
Sean Mullaney 28:06
Yeah, as I talked about earlier, using vectors in the retrieval phase has been like the Holy Grail. But what it also unlocks is end-to-end AI when it comes to processing search queries. A query goes through three phases when it comes to producing search results. The first phase is understanding the query itself. That's the text-to-vector transformation, but it's also trying to identify, are there words in here that indicate this is part of a category of products? Is this a TV product? You know, chocolate milk is a milk product, milk chocolate is a confectionery product, right? So there's actually some clever AI that you can do at the start. Traditionally, people used synonyms as a way to make up for the lack of specificity of keywords. Some people said AI search when what they really meant was that they were just pulling synonyms out of a dictionary using some AI and stuffing them into the keyword retrieval phase. So that's the first thing, the query understanding, and there's a lot of AI there. The second phase is retrieval, and we talked about how we use vectors in retrieval. The third phase is actually where probably most of the AI in search has been focused over the past few years. Once you've retrieved the top 1,000 results out of a catalog that could contain hundreds of thousands of different products, you can do a lot of powerful re-ranking with those 1,000 candidates that you couldn't do on the whole result set of 100,000 or a million; you couldn't rank all of those. So it turns out that we now have a keyword score from the keyword engine, and we've got a neural score as well, a vector score. And in this ranking phase, we can get a lot more power and relevance by using AI. We use a learning-to-rank algorithm, a machine learning algorithm whose objective is to place the items in the correct order; it's a very specific type of machine learning algorithm. We can feed this learning-to-rank algorithm all sorts of inputs. We take keyword scores, we take the vector search score, we can take personalization signals, what we know about the customer. We can take popularity scores, because we understand what people have clicked on in the past, so we can understand how successful an item has been. We can even take things like business metrics, how much margin you'll make on the product, that kind of thing. And these learning-to-rank algorithms figure out the precise order of the items, so that you can increase the relevancy and hopefully get a customer engaging with a product or webpage far higher in the result set. Because we know that the first row of items, or the first set of links on Google, always gets the most clicks. So if you can really make sure that the most relevant thing that's going to engage the user is up in that top row, or those first few links, you're going to get way higher performance. So what I'm trying to say is that this all builds together. You need to have AI end to end: you need to have AI in the understanding, the retrieval, and the ranking. And this middle piece has been the missing piece. We've been so excited because we can now have AI in retrieval at scale, at cost, with low latency.
And it means that the whole end-to-end AI pipeline is seeing a huge increase in performance, a huge increase in relevancy, for shoppers and for all sorts of users across our customer base.
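Algolia's learning-to-rank model is their own, but as a generic illustration of the idea, blending keyword, vector, popularity, and margin signals into one learned ordering, here is a sketch using LightGBM's LambdaRank objective, with toy invented feature values:

```python
# pip install lightgbm
import lightgbm as lgb
import numpy as np

# One row per (query, candidate) pair, with the kinds of signals Sean
# lists: keyword score, vector score, popularity, product margin.
X = np.array([
    [0.9, 0.2, 0.5, 0.10],  # candidates retrieved for query 1
    [0.4, 0.8, 0.9, 0.30],
    [0.1, 0.1, 0.2, 0.05],
    [0.7, 0.7, 0.4, 0.20],  # candidates retrieved for query 2
    [0.2, 0.9, 0.8, 0.15],
])
y = np.array([1, 2, 0, 1, 2])  # relevance labels derived from clicks/conversions
group = [3, 2]                 # 3 candidates belong to query 1, 2 to query 2

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=50,
                        min_child_samples=1)
ranker.fit(X, y, group=group)

# Re-rank freshly retrieved candidates for a new query by predicted score.
candidates = np.array([[0.5, 0.6, 0.7, 0.12], [0.9, 0.1, 0.1, 0.40]])
order = np.argsort(-ranker.predict(candidates))
```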