How AI Happens

Intel VP & GM of Strategy & Execution Melissa Evers

Episode Summary

Melissa shares the factors that determine whether an open-source community is healthy and speaks to Intel’s philosophy on innovation and open AI.

Episode Notes

Melissa explains the importance of giving developers the choice of working with open source or proprietary options, experimenting with flexible application models, and choosing the size of your model according to the use case you have in mind. Discussing the democratization of technology, we explore common challenges in the context of AI including the potential of generative AI versus the challenge of its implementation, where true innovation lies, and what Melissa is most excited about seeing in the future.

Key Points From This Episode:

Quotes:

“One of the things that is true about software in general is that the role that open source plays within the ecosystem has dramatically shifted and accelerated technology development at large.” — @melisevers [0:03:02]

“It’s important for all citizens of the open source community, corporate or not, to understand and own their responsibilities with regard to the hard work of driving the technology forward.” — @melisevers [0:05:18]

“We believe that innovation is best served when folks have the tools at their disposal on which to innovate.” — @melisevers [0:09:38]

“I think the focus for open source broadly should be on the elements that are going to be commoditized.” — @melisevers [0:25:04]

Links Mentioned in Today’s Episode:

Melissa Evers on LinkedIn

Melissa Evers on X

Intel Corporation

 

Episode Transcription

Melissa Evers  0:00  

The innovation is in the way it's applied. The innovation is not in the core functionality. And so since that is not differentiating, I think the focus for open source broadly should be on the elements that are going to be commoditized, that are not going to be the things that will drive the future of a differentiated service.

 

Rob Stevenson  0:19  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Hello everyone, and welcome back to your favorite AI podcast. It's me, your host, Rob Stevenson, and I have an amazing guest for you lined up today. She serves as an Advisory Committee board member at the Cockrell School of Engineering at the University of Texas at Austin, the same university where she also serves as a Corporate Champion Council board member. Other boards, just boards all over the shop here, include the Technology Association of Oregon and the Future Ready Oregon association, plus a series of increasingly exciting roles at her current place of employment, leading up to where she is now in her role as VP of the Software Engineering Group and General Manager of Strategy to Execution at Intel. Melissa Evers, welcome to the podcast. How are you today?

 

Melissa Evers  1:27  

Thanks for having me. I'm doing very well.

 

Rob Stevenson  1:29  

Did I do justice to your curriculum vitae there? I feel like I kind of maybe stumbled through that a little bit. But you have so many roles. You're so active right now.  

 

Melissa Evers  1:37  

Yeah, I have the privilege of serving a number of different communities, outside of Intel, where I work today.

 

Rob Stevenson  1:44  

So did I, did I sum those up? Are there any that we can provide color commentary on? Or...

 

Melissa Evers  1:49  

I think the one that you probably didn't mention, and this is probably the area that I concentrate the most in, is the Linux Foundation governing board and the work that I do in the open source community.

 

Rob Stevenson  1:59  

Gotcha, where you're most active, and the thing that I didn't mention, because I'm a naughty interviewer. What is the extent of your role there? I feel like serving and working with Linux is instant street cred with the technological folks out there listening. Could you share a bit about that?

 

Melissa Evers  2:13  

Well, I've had the privilege of working in the open source community for a really long time. Probably, I actually don't know how many years. It's been a long time, over a decade, and I have served on the Linux Foundation governing board on behalf of Intel for the last six years. And so my activity is both at the board level, but then at some of the sub-project levels. So I've been active in the LF AI & Data community, the LF Edge community, and the CNCF at different points in time. So depending upon where Intel is leaning in and what we are doing in the open source community, I've had the privilege of playing a part in some of those initiatives and strategies from a leadership perspective over the years.

 

Rob Stevenson  2:59  

You know, this is perhaps part of the bigger conversation we're going to have, but I would love for you to explain why open source governance is not an oxymoron.

 

Melissa Evers  3:10  

Actually, as boring as it sounds, I think open source governance is super, super critical. So one of the things that is true about software in general is that the role that open source plays within the ecosystem has dramatically shifted and accelerated technology development writ large. And there are kind of two types of open source. Broadly speaking, there's open source that's maintained and governed by a company, you know, so it's github.com/intel, you know, whatever. And so it is governed by the company, and those companies do have different practices with regard to the ways in which they both do their development as well as embrace community contributions. Neutrally governed open source under a foundation means that the community is responsible for the way that project is governed. The community is responsible for the strategic direction. The community is responsible for what code gets accepted, by whom, and the technical merit on which the project is stewarded, both from a governance perspective, but also from a technical steering committee. Those types of community-led agency and self-determination with regard to the future of the evolution of the technology are super, super, super critical in my mind. And we do, as Intel, have open source projects that we govern. So for example, the OpenVINO project is an open source project we govern, but then we also have projects that we have contributed to the open source community, and we participate in just a tremendous amount of open source development in the context of neutrally governed open source.

 

Rob Stevenson  4:55  

So I guess the answer to the question of why it's not an oxymoron is because open source doesn't mean anarchy; it's important for there to be this governance organization. And when you were kind of explaining that the community is responsible for the direction and the input, it feels like you have a microcosm of democracy, right? This sort of merit-based, representative opinion sharing. That's how it's meant to work. Is that fair?

 

Melissa Evers  5:19  

Yeah, it's absolutely merit-based. There's a saying of chop wood and carry water in the community. Those who are doing the work of the community are those who the community then elects to help drive the project forward and who get to set the direction. And so it's important for all citizens of the open source community, corporate or not, to understand and own their responsibilities with regard to the hard work of driving the technology forward: reviewing the PRs, fixing the bugs, closing the security gaps, etc. There's a lot of hard work that goes on behind the scenes in order to enable what folks consume with regard to downstream productization.

 

Rob Stevenson  5:57  

When you consider the importance of the governance, it sounds like it's in place to sort of assure that there is a self-determinant quality to these communities. What about the community makes you believe, or proves to you, that yes, this is truly self-determined, that we do have the platonic ideal of an open source environment here?

 

Melissa Evers  6:19  

So there's a couple of things that we strive for in terms of looking at what is a healthy community, and a healthy community means that there's a diversity of contributions from an array of companies: large companies, medium-sized companies, small companies, independent developers, and that there are a lot of different types of contributions. It's not just a couple contributors. You don't really want to be involved in a project that only has a couple players working upstream, for example, because that means that that project has a risk of dying if those couple players move on, right? So the number of contributors, the diversity of the contributors, the geographic diversity of the contributors, are all signals that you'll see with regard to the overall health of a project. And there are things like the Open Source Security Foundation scorecard and other things that'll give you a sense for the overall health of the project itself, with regard to security maintenance and other things. And so there are a lot of tools at your disposal as you are assessing various projects. But for me, that's one of the key critical signatures: the diversity of contributors upstream, the diversity of the TSC, the diversity of the maintainers, et cetera.

 

Rob Stevenson  7:33  

Yeah, thanks for explaining that, because participation in the open source community is so common in this space. In any sufficiently technical software engineering space, you get individuals doing this sort of thing on the side. And I just want, for the folks out there as they are picking these kinds of projects, for them to be able to assess them and to be able to say, okay, is this a healthy environment for me to exercise this skill set?

 

Melissa Evers  7:55  

Yeah, absolutely. Dig into the contributor list. Take a look at who's contributing and how much. Where is the volume of the PRs coming from? Is there a specific company? Is it a diversity of companies? Is it just a couple contributors and a long tail of minor contributors? Those are all signals with regard to the overall health of the project.
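As a quick illustration, here is a minimal sketch of the kind of contributor-concentration check Melissa describes, assuming the public GitHub REST API and the requests library. The repository name is a hypothetical placeholder, and dedicated tooling such as the OpenSSF Scorecard looks at many more signals than this.

```python
# Rough contributor-concentration check for assessing project health.
# The repo below is a hypothetical placeholder; point it at the project you're vetting.
import requests

def contributor_concentration(owner: str, repo: str, top_n: int = 3) -> None:
    url = f"https://api.github.com/repos/{owner}/{repo}/contributors"
    contributors = requests.get(url, params={"per_page": 100}, timeout=30).json()
    total = sum(c["contributions"] for c in contributors)
    top = sorted(contributors, key=lambda c: c["contributions"], reverse=True)[:top_n]
    top_share = sum(c["contributions"] for c in top) / total
    # A short contributor list plus a very high top-N share suggests the project
    # depends on just a couple of players, one of the risk signals discussed above.
    print(f"{len(contributors)} listed contributors; "
          f"top {top_n} account for {top_share:.0%} of recorded contributions")

contributor_concentration("example-org", "example-project")  # hypothetical repository
```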

 

Rob Stevenson  8:12  

Yeah, it makes sense. Intel has a bunch of different open AI initiatives, and it strikes me that they are not merely for the sake of, like, an employer branding exercise or for, like, goodness in the community. When you look at, like, the reference kits and oneAPI, these are pretty valuable tools for folks that they can go out there and use. So I was hoping you could just kind of explain a little bit, you were hinting at it before, but just Intel's perspective when it comes to the import of open source and open AI.

 

Melissa Evers  8:45  

Absolutely. So as a foundational context, Intel is a horizontal silicon provider, right? We provide a diversity of types of processors and ASICs and FPGAs, et cetera, to enable broad-scale deployment and innovation of technology. And in order for that to be achieved, in order for that innovation and scale to be achieved, we really need to have the foundational layers that provide the kind of neutral, horizontal context for that innovation to occur. And so we work really, really hard in the open source community, broadly across the diversity of our hardware, across the diversity of software applications, from client to edge to network to data center, to ensure that that platform, with regard to the technology stack, is available for folks to consume and innovate on top of. And so that's the kind of context that we believe the ecosystem should compete on, right? We believe that innovation is best served when folks have the tools at their disposal on which to innovate and take advantage of the prior innovations of the past. And so it's in that context that we apply the same philosophy to AI, really seeking open, secure platforms that enable developers to choose: what is the best software, what is the best hardware for my particular configuration, my particular use case, my particular application? Being able to enable modularity and choice is very key to the ways in which we approach the market writ large, but particularly in the context of AI. So you'll see the work that we're doing with regard to SYCL and the standards to enable heterogeneous hardware, from CPU choice to GPU choice, all the way up through oneAPI with full middleware framework stacks, to the work in PyTorch and TensorFlow and Hugging Face, all the way up to some of the work that you'll see in the context of the Open Platform for Enterprise AI, which is a RAG reference pipeline solution that is now under the Linux Foundation AI & Data umbrella.

 

Rob Stevenson  11:05  

Gotcha. Now it sounds like part of the philosophy is to provide this foundation, right, upon which these exciting new businesses and technologies can be built. At what point do we run into this proprietary issue where, okay, you've had this foundational support, you've had these tools that are widely available, that maybe you don't need massive resources to engage with, but now we're building something super secret and proprietary on top of it? Like, where do you draw the line?

 

Melissa Evers  11:34  

Well, I think so. Proprietary models versus open models, we believe the developer should have choice. There are value props associated with some proprietary solutions, and there are value props associated with open solutions. And so from a model perspective, it's about enabling developer choice and that developer experimentation. The pace by which AI technologies are changing is so rapid that nobody wants to be in a situation where they have lock-in. They want to be able to swap out this model for that model and see how their application now works. You know, do they have a higher degree of safety? Do they have a higher degree of trust? Are they getting more efficient compute, etc.? We really want to enable choice, and certainly, downstream, proprietary models have a role to play and open models have a role to play. I've been really pleased to see the innovation happening with regard to the open model frontier and the Model Openness Framework from the Linux Foundation, to really understand how to define open AI, for example, or open source AI. There's a lot of really good work that's happening across the ecosystem in that context. But in the same regard, in the end, there are real choices to be made with regard to implementations and the success of the application, the stability, the performance, the efficiency of the model, et cetera. And so developers need to have the choice. Now, what comes from that is really up to the developer, and our job as Intel is to accelerate you. So whether you're using PyTorch or something else, everything that you need, if you choose to use Intel hardware, is upstream and it's optimized for you. If you choose to use a competitor solution, I'm sure they do their own enabling of their hardware too. In the end, we are really trying to harness the core values of openness, choice, and trust on behalf of the developer community, putting them in the driver's seat of how to move technology forward.

 

Rob Stevenson  13:32  

Can I ask you about lock-in? Yeah, model lock-in in particular. So this is more and more common, just this warning of, like, don't get locked into one model. Models are, like, coming out so frequently. They're updating, they're changing, they're getting better. I guess my first question is, like, do you still see people falling for that? Like, has this become obvious advice, to say don't get locked into a model? Or are you still seeing that people are getting locked in and painting themselves into a corner with the models they choose?

 

Melissa Evers  14:00  

I think people are getting smarter about how to architect their applications so they aren't getting locked in. So, for example, in the Open Platform for Enterprise AI, the OPEA project, we have containerized all of the models from a microservice perspective. So you could go, okay, I want to use Claude, and then I'm going to try this other implementation, or this other implementation. You can swap them out with the ease of a function in terms of your particular implementation and whatever your chatbot or code gen, or whatever the case may be, whatever it is that you're building as a service. And so I think really thinking about the architecture to enable that modularity and freedom of choice is essential. And you mentioned earlier a concern about proprietary data and model development or application development, and I think that's where you see RAG innovation and RAG solutions really becoming much more prevalent, such that folks can have their own data, their customer data, or their healthcare system data, or whatever the case may be, their proprietary data that they don't want to leave the firewall, et cetera. They can build that and then use a RAG solution to be able to complement it with the power of LLMs.
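A minimal sketch of that swap-a-model-like-a-function idea, not OPEA's actual interfaces: each model sits behind an interchangeable backend, so moving from, say, a 70-billion-parameter model to a 7-billion-parameter one is just pointing at a different service. The endpoint URLs and model names are hypothetical placeholders, and the request shape assumes an OpenAI-style chat completions API.

```python
# Each backend is one containerized model service; the application code only
# ever calls generate(), so models can be swapped without touching the app.
# URLs and model names below are hypothetical placeholders.
from dataclasses import dataclass
import requests

@dataclass
class ModelBackend:
    name: str
    url: str    # assumed OpenAI-style /v1/chat/completions endpoint
    model: str

    def generate(self, prompt: str) -> str:
        resp = requests.post(
            self.url,
            json={"model": self.model,
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

BACKENDS = {
    "large": ModelBackend("large", "http://models.internal/70b/v1/chat/completions", "llama-70b-instruct"),
    "small": ModelBackend("small", "http://models.internal/7b/v1/chat/completions", "llama-7b-instruct"),
}

def answer(prompt: str, which: str = "small") -> str:
    # Swapping the model is a one-word change (or a config value), not a rewrite.
    return BACKENDS[which].generate(prompt)
```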

 

Rob Stevenson  15:14  

Put yourself in the shoes of someone who has maybe not been particularly thoughtful about their architecture, right? What would be, like, the hint that, uh oh, we really are stuck with this model and it's not optimal for us right now?

 

Melissa Evers  15:28  

What would be the hint that they're stuck with it, or that it was a bad choice?

 

Rob Stevenson  15:32  

I guess either, yeah.

 

Melissa Evers  15:33  

I think the hint that you made a bad choice is probably with regard to the accuracy you're seeing out of the model, the reliability and the trust that you're seeing with regard to the answers to the questions, or whatever it is that the application you've created produces. That being said, you know, there is tremendous innovation happening with smaller models and the accuracy of those models. So I think being able to experiment, and deciding on an application structure such that you can look at a 70 billion parameter model and then swap it down to a 7 billion parameter model, is important. The ways in which your use cases can explode at those smaller sizes, with smaller form factor devices, whether you're at the edge or at the client, are really quite mind-blowing in terms of the opportunities, the new use cases and new revenue streams, etc., that are possible as you get into smaller form factors and smaller models. And so I think really being able to think critically about future-proofing yourself with regard to model choice is a structural question that everybody should be asking themselves as they start out on these various initiatives.

 

Rob Stevenson  16:43  

I'm really glad you brought up smaller parameter models, because it felt like having an LLM with a trillion parameters, or whatever it might be, was sort of like an arms race. It was like more is better, right, always, or more is sufficient to train it to do XYZ. I'm noticing, I feel like, a little bit of a contraction in terms of, like, the size of models. People are thinking maybe you need less. Are you seeing that? Do you have a take on whether more is less or less is more?

 

Melissa Evers  17:11  

Well, I think that there will always be opportunities for those really large models, particularly in the HPC space, or if you're thinking about something very, very large, or where you need really very fine levels of accuracy and specificity. But I think, for a vast majority of use cases, complementing a smaller model with your own data sources, and being able to look at those different types of RAG applications or different types of use cases that are enabled at smaller form factors, if you think about being able to put one on every light pole, you know, it just becomes really quite expansive with regard to the potential at those smaller sizes that'll provide you with comparable levels of precision to the larger models. And so it really depends upon what you're trying to achieve, and what is that minimum model size or maximum model size that enables you to be successful. You can run them more easily, you can train them more easily. There's just a bunch of benefits of smaller models if it meets your application's needs.

 

Rob Stevenson  18:21  

Yeah, that's an important callout, that it's not merely that with a smaller model it's less compute, it's cheaper, right? But also, you know, you could run it on a toaster at the edge. The thing that, to me, feels like an important domino that falls in, like, true democratization of AI is the reliability and power of a smaller model, just because then anyone can do it. Needing to run a model with a billion parameters and needing to rent server space and compute is still fantastically expensive. So do we truly have open AI until you can run these things on your phone, right?

 

Melissa Evers  18:54  

And I think the other thing that you bring up is cost, and I feel like we're kind of going to see, well, I'm a big fan of tech history, right? And so, you know, one of the things that we've seen in the past is kind of this huge push towards public cloud, and then folks started getting the bills as their services scaled and their needs expanded, and they get these bills and they're like, holy cow, that's really expensive. I can stand up my own infrastructure for this cost, right? And so there's this move to pull some things back on-prem, because it just gets enormously expensive. I think we're going to see the same thing with regard to models. Folks are going to experiment, they're going to innovate, they're going to do some proofs of concept, and then they're going to be like, oh, that's a very big bill. I'm going to do something different. You know, I can get good enough with these other implementations, and that'll serve my needs just fine. And of course, there will always be things that need to be running on those bleeding-edge, as-accurate-as-possible, huge models. For sure, there will always be those cases, but I don't think it's going to be the vast majority.

 

Rob Stevenson  19:55  

So do you think that's a factor of, it's like an economic thing, that the huge models and the compute necessary for them are fantastically expensive? While that's one factor, another factor is, do we even need it? Like, we can do it on-prem, we can do it ourselves, we can be a little more lean, wink wink, or the VC money's drying up and we can't just throw a ton of money at compute, this sort of thing.

 

Melissa Evers  20:16  

Economics, in the end, is the name of the game, right? So we need to figure out where that perfect intercept is between the value delivery, the value creation, and the cost of that value creation. And in the end, it has to be sustainable. And so I think, you know, part of the innovation curve that we're seeing with AI, broadly speaking, and the role of open source, is this deep, inherent belief across the ecosystem that there has to be that democratization of technology. It can't be held by just a few who have the billions of dollars to build the infrastructure to drive that innovation curve with regard to the next greatest LLM.

 

Rob Stevenson  20:58  

Yeah, of course. Melissa, one of the reasons I was excited to speak with you is that, you know, I speak a lot with folks who are sort of internal, and they're working on their company's technology, which comes with a unique point of view, but also they don't see the market, maybe, in the way that you might, because you, Intel, are seeking to enable technologists of every ilk and persuasion, right? You see a lot of different use cases. You see a lot of different sorts of people and the ways they're thinking about this technology. So I was hoping you might be able to sum up for us, when you look across some of Intel's customers and some of the exciting things going on, what are some of the common challenges people are facing?

 

Melissa Evers  21:34  

What are the common challenges in the context of AI specifically? What we see folks talking about and really puzzling over is this very strong belief that generative AI has the potential to be enormously transformative; however, getting things out of POC land, proof-of-concept land, and into a production implementation for enterprises writ large is proving a substantive challenge. And that frustration is coming for a couple reasons. One is some of the things that we've talked about with regard to fear of obsolescence. There are also concerns with regard to the complexity of some of these solutions: if I design it and build it, I need the right talent, and I don't have that talent, and that talent's really expensive, and then I need to maintain that talent; and if I hire somebody else to build it, then I'll be getting into a maintenance burden forever, you know. So there's just a lot of institutional angst with regard to moving from proofs of concept, which are pretty easy to spin up, to production implementations, either for internal consumption or through commercialization to external services. That was actually one of the reasons why we created the Open Platform for Enterprise AI. We surveyed all of our customers and we said, what are the most critical use cases that you have with regard to generative AI, and we culled the list of all of the different use cases. And it turns out there's a lot of consistency, whether it be in financial services or healthcare or, you know, you pick the domain of your company. There's a lot of consistency with regard to, okay, chatbots or code generation or code translation. Like, there's a ton of consistency with regard to the ways in which people want to be able to use generative AI. And so that's what we built. We have, I think, 11 reference implementations of fully open source RAG pipelines ready for consumption that are heterogeneous with regard to hardware support, and those are available in the open source community.
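A toy illustration of the retrieve-then-generate shape those chatbot-style RAG pipelines share. This is not one of the OPEA reference implementations: the documents, the bag-of-words embedding stand-in, and the prompt template are illustrative assumptions, and a real pipeline would use an embedding model, a vector database, and an actual LLM call.

```python
# Minimal retrieve-then-generate flow: find the most relevant document for a
# query, then build a grounded prompt from it. Everything here is a toy stand-in.
import re
import numpy as np

DOCS = [
    "Return policy: items can be returned within 30 days with a receipt.",
    "Shipping: standard delivery takes 3 to 5 business days.",
    "Warranty: electronics carry a one-year limited warranty.",
]

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

VOCAB = sorted({w for d in DOCS for w in tokenize(d)})

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words vector; a real pipeline would call an embedding model."""
    words = tokenize(text)
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    scores = []
    for doc in DOCS:
        d = embed(doc)
        denom = (np.linalg.norm(q) * np.linalg.norm(d)) or 1.0
        scores.append(float(q @ d) / denom)   # cosine similarity
    best = np.argsort(scores)[::-1][:k]
    return [DOCS[i] for i in best]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # A real pipeline would send this grounded prompt to an LLM service.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many days can I return an item?"))
```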

 

Rob Stevenson  23:39  

That's interesting, that you saw so much consistency in terms of, like, the generative use cases. I'm curious why you think that is. Like, the idea is that generative is going to upend so many things and really, really change the way we interact with the technology around us, and yet, do you see that commonality of use cases as, like, a dearth of creativity? I would love to hear you opine on that.

 

Melissa Evers  24:02  

It's such a great question. Sorry, I get excited and passionate about this stuff. So those pipelines are not differentiating, right? If you're talking about a healthcare chatbot versus a customer service capability for an automaker, like, those are still chatbots, right? It's all still the same thing. It's just fed with different types of data with regard to your vector database. That's not where the innovation is. The innovation is what you do with it. The innovation is what you add to it. The innovation is how you wrap it. The innovation is how you differentiate that offering versus your competition. The innovation is what the retail experience looks like when you have an assistant that helps you with your purchases and can understand your size and inventory and all this kind of stuff, and serves you a curated experience. The innovation is in the way it's applied. The innovation is not in the core functionality. And so since that is not differentiating, that's not where the value will be created for enterprise customers. We were like, here, let's just provide that for you, in a way that enables your choice and modularity. You choose your model, choose your vector database, and let's work on creating the platform by which you can deliver that differentiated service, deliver that value that sets you apart from your competition. And so I think the focus for open source broadly should be on the elements that are going to be commoditized, that are not going to be the things that will drive the future of a differentiated service. You know, there are fundamental capabilities and building blocks on which that next level of abstraction will be achieved, on which that next level of innovation will be built. And that's where I really get excited about it, because it's in the super unique implementations, like, okay, I'm a tire manufacturer on a manufacturing line with defect data, right? There's so much opportunity in the application. Let's focus on the innovation there, and not on the stuff where, you know, there's no differentiating value for you.

 

Rob Stevenson  26:00  

Yes, okay, that makes sense to me. To feel like there wasn't that much innovation happening, to feel like, oh, generative is going to change everything and here's another chatbot, would be to focus on the wrong piece of it, which is just the core functionality stage, which is just: does this thing work? Can you put it in front of a customer, and can it not, like, crash or just give them a stupid response? If you're focusing on that, then sure, it probably doesn't seem terribly interesting or creative. It's about moving to, like, what is the part that is more specific, right? What is the part unique to your business? And so I would love to know, for the folks who are doing it well, you don't have to name names, or you can if you want to, I don't know, what are some of those creative uses of it? What are some of the ways you think are differentiating?

 

Melissa Evers  26:43  

Well, I mentioned a few. You know, we see some really interesting work happening in the retail space with regard to the purchase experience, combining multiple types of models. So, you know, the notion of compound AI systems, where you're taking vision data and pulling in a RAG pipeline and being able to curate experiences with interaction based upon multiple types of data sources. There are some really interesting things happening there. Interestingly, those same technologies can be applied in manufacturing for defect recognition and remediation and factory inventory management and things like that. Those same technologies are the core building blocks of that type of application as well. And then you've got some really interesting things happening with regard to financial services and curated support based upon your portfolio, integrating what's happening in the market. And, you know, there are just so many opportunities for integration of these various data stores that are happening in real time. We're doing this in our brains today. We're looking at the market, and we're making these decisions about, you know, this is my portfolio and things like that. This is work that a human has to do today that can all be served to you in an integrated capability as part of a value-added service, as folks continue to innovate and move things forward.

 

Rob Stevenson  28:03  

That's a sigh of immense relief, because I feel like you're giving kind of a healthy dose of perspective here. It sounds like you're saying we're still kind of in the early days with these use cases. There's so much exciting opportunity and possibility, but, you know, we're still getting there. Whereas I've had other guests on the show who refer to October last year as, like, vintage, traditional, old-school generative, and so I am getting whiplash from people telling me how fast this space is moving, and it sounds like you're kind of telling me, look, here are all these awesome possibilities that we're going to start to see.

 

Melissa Evers  28:37  

Yeah, no, absolutely. We are very much in the early days. We are very, very much in the early days. I mean, if you look at any of these, if you think about, like, the innovator's dilemma, right, we are at the early adopter part of the curve, versus, you know, the role of the late majority, or, I can't remember all the phases anymore, but we have a long way to go until the real value of many of these capabilities is realized. And I think, you know, to your point, one of the privileges of working at a company like Intel is that you get to see the vast array of folks who are taking these technologies and going, like, ooh, could we do this? And tinkering a bit with the notion of really integrating a lot of these different systems, being able to take different types of data feeds, and being able to do multiple stages: what needs to go to the cloud, what doesn't need to go to the cloud. You know, I'm getting all these terabytes of data from these sensors, or this car, or this whatever. How do I filter, filter, filter, filter, to be able to get to a place where this one piece of data is aggregated, sent to the cloud, and then, you know, the police department is notified that a tree is down on Fourth Street, right? There's so much future opportunity, and we are very, very, very much at the forefront.
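A toy sketch of that filter-at-the-edge pattern: keep the raw sensor stream local, and only forward one small aggregated event upstream when a window of readings crosses a threshold. The sensor name, the thresholds, and the send_to_cloud() stub are illustrative assumptions rather than any real deployment.

```python
# Filter locally, aggregate, and only ship a compact event to the cloud.
from statistics import mean

def send_to_cloud(event: dict) -> None:
    """Stand-in for an upload to a cloud ingestion endpoint."""
    print("forwarding:", event)

def process_window(sensor_id: str, readings: list[float], threshold: float = 30.0) -> None:
    # First filter: drop obviously invalid samples (sensor glitches).
    clean = [r for r in readings if 0.0 <= r <= 200.0]
    if not clean:
        return
    # Second filter: only escalate when the window average crosses the threshold.
    avg = mean(clean)
    if avg > threshold:
        # One small aggregated event goes upstream instead of the raw stream.
        send_to_cloud({"sensor": sensor_id, "avg": round(avg, 1), "samples": len(clean)})

# Hypothetical tilt readings from a pole-mounted sensor; only the alert is forwarded.
process_window("tilt-sensor-4th-st", [3.1, 2.9, 55.0, 61.2, 58.7])
```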

 

Rob Stevenson  29:57  

Yes, it feels good to hear you say that. It's weird to say, but I've been doing this show long enough that I feel like I've kind of lived through a couple hype cycles myself. And for a long time, it was just all these headlines, endless VC money, stocks for particular AI companies flying. And now you're seeing, like, the sentiment sort of pull back, and it's like, oh, are we fulfilling the promise of generative? As if it was a swing and a miss, and it just happened a little too slowly for the people who, I guess, want the future to be yesterday, or want their stock to explode now. But the actual innovators, the people who are building this, have their heads down and are working on cool things, and they're not reading Business Insider.

 

Melissa Evers  30:42  

Well, I think that there is certainly impatience, and understandably so. It's expensive; we're investing a lot of money in various capabilities right now. But one of the things that I've appreciated over time is the notion of Wardley mapping. Simon Wardley, out of the UK, has studied a variety of different technologies and the role that open source plays in driving commoditization and scale. And, you know, I think we're going to get to a place where this next level of abstraction just kind of becomes the de facto norm, and then things are built on that, and things are built on that. And I feel like right now, whether you're looking at what is available in Hugging Face or PyTorch or, you know, various communities, etc., there are a lot of blocks. We've got a lot of blocks, but the integration of those blocks, the integration of all of this choice, the distillation into a couple de facto ways of doing things that are best in class for their safety, best in class for their efficiency, best in class, you know, all of that work still needs to be done. I was reading this morning a little bit about some of the work that's happening within MLCommons around AI safety, with regard to definitions and specifications for what that even means. Like, we're just doing it, it's just being done, right? So there's so much opportunity, but also so much work that needs to be done to really drive to a place where we have commoditized, de facto implementations that the ecosystem can trust.

 

Rob Stevenson  32:11  

Yeah, of course. Melissa, we are rapidly approaching optimal podcast length, which pains me to say, because I'm enjoying this conversation immensely. Before I let you go, though, I kind of want to ask you to take off your Intel hat a little bit, just, like, lift it above your head. You don't have to take it all the way off. But when you are just kind of taking stock of the space and just sort of being your curious self, what is really exciting to you? Maybe it's not even related to your role or your job. What kind of paper might you see where you're like, wow, that's uniquely cool and special, that excites me?

 

Melissa Evers  32:43  

The thing that excites me, and I can't help but notice we're in childhood cancer prevention month, or awareness month, or whatever the case may be, is the opportunities that are within healthcare, both from a therapeutic perspective, but also from the perspective of enabling rapid learning. So for example, I have two children who have two very rare conditions, very, very rare, and it took us a really long time to get those diagnosed because they were so rare. And in the context of health data sharing, you know, hey, this is the list of really strange symptoms, that symptomatology is at the pediatrician's hands, and in the context of curated gene therapies or cancer therapies, etc., you know, there's just so much opportunity for high levels of value creation that make a difference for humans, make a difference for human life. And so for me, that's where I get really excited and have great hope for what the future will hold.

 

Rob Stevenson  33:42  

Have you even seen some of that progress in the last few years in terms of the level of care you're getting?

 

Melissa Evers  33:49  

Not me personally, but, you know, from the work that we're doing in the startup community and with various incubators, I absolutely see the work that folks are doing to lean into this opportunity, and I'm very, very excited about it.

 

Rob Stevenson  33:59  

Yeah, it can't be more personal than that, and that feels like the most impactful thing, to be able to provide some sort of relief for something that otherwise you couldn't, or didn't even have a name for. Now there's progress, now there's a treatment. So yeah, it's a very exciting time. I hope for you too, and for your children too, that there's progress there. So, Melissa, this has been a delight. Thank you for being here, for your candor, and for sharing all of your experience and perspective on the space. It's a breath of fresh air, I'll say that.

 

Melissa Evers  34:27  

Thank you again for having me.

 

Rob Stevenson  34:30  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.