How AI Happens

Gradient Ventures Founder Anna Patterson

Episode Summary

Anna Patterson is the Founder and Managing Partner of Gradient Ventures, a full-service seed and series A fund. We discuss the contrast between an AI winter and the standard hype cycles that exist, her thoughts on sectors that were initially under-hyped, how she navigates hype cycles, and why English is the next programming language.

Episode Notes

 

Tweetables:

“When that hype cycle happens, where it is overhyped and falls out of favor, then generally that is – what is called a winter.” — @AnnapPatterson [0:03:28]

“No matter how hyped you think AI is now, I think we are underestimating its change.” — @AnnapPatterson [0:04:06]

“When there is a lot of hype and then not as many breakthroughs or not as many applications that people think are transformational, then it starts to go through a winter.” — @AnnapPatterson [0:04:47]

“Stay curious.” — @AnnapPatterson [0:25:17]

Links Mentioned in Today’s Episode:

Anna Patterson on LinkedIn

‘Eight critical approaches to LLMs’

‘The next programming language is English’

‘The Advice Taker’

Gradient

How AI Happens

Sama

Episode Transcription

Anna Patterson  0:00  

If we're going to believe that we're going to eventually get to general AI or towards it, I think that this is actually one of the breakthroughs that we needed.

 

Rob Stevenson  0:12  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. My guest today on How AI Happens has some serious tech chops, primarily, although not exclusively, in AI. Her work in search and search engines is in no small part responsible, I don't think it's too hyperbolic to say, for the way Google is working today. She founded a clustering-based search engine called Cuil, and another, history-based one that was the first approach to searching the Internet Archive's Wayback Machine. She's a former VP of Engineering over at Google, and now she is the Founder and Managing Partner at Gradient Ventures, which she will tell you all about here today. Anna Patterson, welcome to the podcast. How are you today?

 

Anna Patterson  1:15  

Thank you, we're very well. How are you doing?

 

Rob Stevenson  1:17  

I am great. No one ever asks. Thank you for asking. I'm doing great and excited to be talking to you. There's so much to go into in your background. I guess maybe I'll kick it over to you to add some color commentary. Would you mind just sharing a little bit about your background? And we can get to the point where you decided, you know what, I want to found Gradient Ventures, I want to start working with companies who are going to be bringing AI tech to market.

 

Anna Patterson  1:40  

Yeah, I founded a number of companies. And we started an AI division in Google in about 2014. And I found it really interesting and felt it was an up-and-coming space. And when we started doing research in the area, I thought, you know what, why don't we start to see what startups are in the area? And so in 2015, I did not see a lot of startups in AI. And so, in 2016, I pitched that we start a seed fund to try to kind of make AI happen and bring more AI companies to bear. Now it looks prescient, but at the time it was subject to ridicule.

 

Rob Stevenson  2:29  

Another injustice! Was that because it was viewed as some sort of pie-in-the-sky, sci-fi, futuristic mumbo jumbo? Or what was the attitude at the time?

 

Anna Patterson  2:37  

The attitude was that we were in an AI winter. Was AI going to happen? Was the next wave going to happen? Or was it going to stay unpopular?

 

Rob Stevenson  2:47  

Well, it's a good job you didn't listen to the haters. What gave you the certainty that they were all wrong?

 

Anna Patterson  2:53  

Actually, it was more of a timing thing of, you know, how long AI winters last, and how popular and unpopular technologies ebb and flow. And also, I saw the commitment of my peers to advancing the state of the art in research, and felt that a bunch of breakthroughs were right around the corner. And I guess I turned out to be right; I could have turned out to be wrong.

 

Rob Stevenson  3:20  

Yeah, you certainly did turn out to be right. When you say AI winter, how would you compare and contrast that to the standard hype cycles that exist?

 

Anna Patterson  3:30  

Yeah, usually they overpromise and underdeliver, and then they fall out of favor. And so when that hype cycle happens, where it's overhyped and falls out of favor, then generally that's, you know, kind of what's called a winter, because people think, you know, that didn't really pan out. But I've actually lived through two other hype cycles. I think that we really underestimated the amount the web was going to change society, and so, looking back on it, I think the web was under-hyped. And I think we underestimated mobile, and how much mobile and cell phones were going to change society. So looking back on it, I thought it was under-hyped. So no matter how hyped you think AI is now, I think that we're underestimating its change.

 

Rob Stevenson  4:19  

When you say falls out of favor, do you mean with other venture capital firms, with investors writ large? Who are the people that are sort of anointing, or not anointing, the next paradigm in this case?

 

Anna Patterson  4:31  

I think it often happens with researchers. It happens in industry: did it deliver on industry-changing technology? Did researchers have breakthroughs that were interesting? Were they able to continue to get funding in the area? Were the grad students excited, were researchers excited, were entrepreneurs excited? And so when there's a lot of hype, and then, you know, not as many breakthroughs or not as many applications that people think are transformational, then it starts to go through a winter.

 

Rob Stevenson  5:07  

You, for example, I imagine would be just as bullish in an AI winter as in an AI spring or summer. So when that happens, do you just kind of roll your eyes and get back to work? Do you ignore the hype cycles? How do you view them?

 

Anna Patterson  5:21  

I mean, I think that it's: don't waste opportunities, right? So if it's a hype cycle, enjoy it. If it's a winter, yeah, get to work.

 

Rob Stevenson  5:32  

There is a little bit of a freeing component to a winter, where it's like, oh, I can work now without the attention of the people who may need to understand the space. Yeah, I agree. Well, part of the reason I was excited to bring you on is because a lot of my guests work internally at AI companies, and so they have a very clear view of how their business is operating and bringing some of this tech to market. But you're in a unique position, because you get to speak with lots of companies, you get to speak with lots of founders who are bringing AI tech to market. And so I feel like that probably gives you a more wide-ranging view of the space than most. So I'm curious to learn a little bit about how you evaluate early-stage companies. I'm sure you're pitched a lot, and even when you're not pitched, you are tuning in to know who is out there and what they're making. So what are the kinds of things you listen for when you're evaluating, or just taking an interest in, an early-stage AI company?

 

Anna Patterson  6:22  

I listen for some of the very basic things that other investors would look at in any company. Are the founders knowledgeable about their area? Are they excited about the problem they're trying to solve? Do they have personal knowledge of the problem they're trying to solve? And, you know, how committed are they to solving the problem? Because every company is going to go through ups and downs. And so if it's something that you're a bit of a tourist about, rather than really committed to, then I worry that people will not have the perseverance to get through those downtimes.

 

Rob Stevenson  7:06  

How would you assess whether someone was a tourist or if they were really committed?

 

Anna Patterson  7:10  

Actually, when you ask them stories of, you know, how they came to solve this problem, when they first noticed this problem, a lot of people will say, "Well, at work, I had to solve this problem over and over again. And so I am now writing an AI engine, plus integrating with a workflow, in order to solve this problem for everyone." I think that's kind of a typical example where you know that someone really understands the issue. They've already had to solve it a few times at a few different companies, so they're really excited to bring their ideas to the world.

 

Rob Stevenson  7:51  

In that case, is it less important that they have a background in AI or machine learning, and more that they've really struggled with a problem? Because I fear that sometimes AI is sort of a magic wand that people wave over something. Like, is this really an appropriate application of the technology? That's my question: how do you tell, even if someone has intimately struggled with a problem, that the right answer is AI, and that they're the person to do it?

 

Anna Patterson  8:15  

Yeah, I agree. I think that we're okay if it is something that solves a problem and doesn't use AI. But in general, our investments are in problems where, you know, AI is the right answer, and we actually feel that the people have a good handle on how to approach the problem.

 

Rob Stevenson  8:40  

Right, makes sense. So I'm curious, given that it can be a bit of a magic wand, do you have thoughts on where the best applications of AI and ML are, where maybe there's the most opportunity to apply this tech?

 

Anna Patterson  8:55  

I have spent a lot of time thinking about that. I think, one, the large language models that we're seeing today are moving incredibly fast. I think that it's like the new compute substrate. You know, you had computers, then you had networked computers, then you had kind of cloud operating systems, so you could run across many computers. And now you have models that can really understand how to do a lot of things, and need more training to do something specific.

 

Rob Stevenson  9:31  

So, short answer: large language models.

 

Anna Patterson  9:34  

Yeah. I'm very excited about large language models.

 

Rob Stevenson  9:37  

Could you tell me more? What specifically are you excited about?

 

Anna Patterson  9:39  

I think that if we're going to believe that we're going to eventually get to general AI, or towards it, I think that this is actually one of the breakthroughs that we needed. And I'm not sure if you read my "Eight Critical Approaches to LLMs" blog, but I've actually written four different blogs, that's how excited I am about them, that talk about various techniques, like how to train models with chain-of-thought reasoning. And so one of them is titled "The Next Programming Language is English." And, of course, by that I mean any natural language. I don't actually mean English, but I mean describing to the computer what you want done, in a clear fashion, is one of the ways to train these large language models. And I think that is going to allow an uplift for anyone to be able to take this new compute fabric and do something with it.
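
To make that idea concrete, here is a minimal sketch, in Python, of "programming in English": the program is a clear natural-language instruction plus worked examples, steered with a chain-of-thought cue. The call_llm function is a hypothetical stand-in for whatever model API you use, not any specific vendor's SDK; only the shape of the prompt is the point.

```python
# A minimal sketch of "the next programming language is English":
# the program is a natural-language instruction plus worked examples,
# not conventional code. `call_llm` is a hypothetical stand-in for
# any LLM completion API.

def call_llm(prompt: str) -> str:
    """Placeholder: route the prompt to your model provider of choice."""
    raise NotImplementedError("wire this up to a real model API")

def solve_in_english(task: str, examples: list[tuple[str, str]]) -> str:
    """Build a few-shot, chain-of-thought prompt that 'programs' the model."""
    lines = ["Answer the question. Think step by step before answering.", ""]
    for question, worked_answer in examples:
        lines += [f"Q: {question}", f"A: {worked_answer}", ""]
    lines += [f"Q: {task}", "A: Let's think step by step."]
    return call_llm("\n".join(lines))
```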

 

Rob Stevenson  10:40  

The reason that's so exciting is because it democratizes access; it will cease to be solely the domain of the software engineer? Yes, correct.

 

Anna Patterson  10:47  

I even make a small joke that we'll try not to have too much schadenfreude when the English major says their essay doesn't compile.

 

Rob Stevenson  10:55  

Well, you're welcome to our world, English majors. Fantastic. I'm curious, when you say if we should approach AGI, I guess, I mean, big question, but should we? Can we? And assuming that LLMs are part of it, where do we stand there?

 

Anna Patterson  11:14  

I mean, you know, LLMs, when you think about them, in one case it's like an essay reformatting of search results. You know, when you ask it a question, it does give you an essay, and you can think of it as an essay that reads and reformats search results. So, in that way, I think some people say, well, it's not that much closer to AGI, if you view it that way. But when you view it as a building block that you can instruct on how to do tasks, I think we're a little bit closer. So, you know, I was in John McCarthy and Carolyn Talcott's research group at Stanford, and one of his really, really early papers was called "The Advice Taker." And in the advice taker, you teach a machine how to do things. And so I find that the chain-of-thought reasoning approach really harkens back to that idea.

 

Rob Stevenson  12:22  

I want to ask you a kind of nuanced question, and it's okay if it's too squishy or nebulous to get into. We're in the case of, like, English as the next programming language, and we now know what you mean by that: not specifically English, but communication, really, human communication. Does that limit machines to speaking the way we do? If you think of the way communication evolved, through our brains, with speech as an example, or just using symbols for real-world things we agree upon in the physical universe around us: do we limit machines by teaching them to speak the way we speak?

 

Anna Patterson  13:01  

Hey, that's an interesting idea. Because another thing that machines are doing is they're learning APIs all the time. So, you know, they're learning how to call databases. And, you know, you can think of your Safeway app that says, "Are your prescriptions ready?" as really a database call. And so I think that these models are learning about database calls. And so you can imagine that pretty soon, models will know how to call each other and be able to do more complicated tasks, without the speech in the middle.
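
The Safeway example is worth making concrete: the conversational surface sits on top of an ordinary query. Below is a rough sketch of that pattern; the table, tool names, and routing are hypothetical illustrations, not a real pharmacy API.

```python
# A rough sketch of "it's really a database call": a natural-language
# request gets mapped onto a plain SQL query. All names here are
# hypothetical illustrations, not a real pharmacy system.

import sqlite3

def prescriptions_ready(db: sqlite3.Connection, customer_id: int) -> bool:
    """'Are your prescriptions ready?' reduced to the query underneath."""
    row = db.execute(
        "SELECT COUNT(*) FROM prescriptions"
        " WHERE customer_id = ? AND status = 'ready'",
        (customer_id,),
    ).fetchone()
    return row[0] > 0

# In the future Anna describes, a model would pick which call to make;
# here a plain dictionary stands in for that routing step.
TOOLS = {"prescriptions_ready": prescriptions_ready}

def handle_request(db: sqlite3.Connection, intent: str, customer_id: int) -> str:
    tool = TOOLS.get(intent)
    if tool is None:
        return "Sorry, I can't help with that yet."
    return "Ready for pickup." if tool(db, customer_id) else "Not ready yet."
```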

 

Rob Stevenson  13:42  

Could you give an example? The Safeway pharmacy one is a great one. What would be an example of a more complicated task?

 

Anna Patterson  13:47  

I'd say one of the tasks that I talk about is: imagine you've been in a car accident. And you have your car accident report, and you have maybe a witness report; you have the report of the other person who was involved. And maybe there's a police report, and you have to collate all that information into, let's say, an insurance adjuster's report. So all of that text goes in. But other things go in as well, other modalities. Perhaps there was a traffic camera. Perhaps the insurance company gave you an app, and you had to take pictures of your car. So as you're taking pictures of your car, there's another AI model that says, here's the stuff that's fresh from the accident; you know, that ding on your passenger door, that's not from this accident. You were hoping to get that door repaired, you know. And so it's really kind of synthesizing, putting all that text together, the visual models together, and then coming up with a report. But at the end of the day, I think it is also going to have to call the database and say, you know, how much is a bumper in Santa Clara County, California? And learning how to synthesize the information, the visual information, and call the database, in order to generate that report, I think
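
Here is a sketch of how the adjuster-report pipeline she describes might hang together: collate the text reports, keep only image-model detections attributed to this accident, price the parts via a lookup, and emit a draft for a human to review. Every name, field, and price below is a hypothetical illustration.

```python
# A sketch of the multimodal adjuster-report pipeline described above.
# All field names, prices, and helpers are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Damage:
    part: str                 # e.g. "front bumper"
    from_this_accident: bool  # set by an image model comparing fresh vs. old damage

# Stand-in for the "how much is a bumper in Santa Clara County?" database call.
PART_PRICES = {"front bumper": 850.00, "passenger door": 1200.00}

def draft_report(reports: list[str], damages: list[Damage], county: str) -> str:
    fresh = [d for d in damages if d.from_this_accident]  # drop the old door ding
    estimate = sum(PART_PRICES.get(d.part, 0.0) for d in fresh)
    lines = [f"Draft adjuster report ({county})", "Source documents:"]
    lines += [f"  - {r}" for r in reports]
    lines.append(f"Damage attributed to this accident: {[d.part for d in fresh]}")
    lines.append(f"Estimated parts cost: ${estimate:.2f}")
    lines.append("STATUS: pending review by a human adjuster")
    return "\n".join(lines)
```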

 

Rob Stevenson  15:14  

That's possible. The insurance example is a really good one, because when you start considering all the modalities, all the various data sources at play, you then remember that it's often passed through the filter of human judgment, probably less and less, but it is ripe for bias in that way. And a lot of insurance claims in particular may be made based on other things that had nothing to do with the specifics of a collision, right? Like, for example, your age, your gender, where you live in the world, those kinds of things. So I'm curious where you draw a circle around quality AI, putting up some guardrails to make sure that while we're training machines to do really complex tasks, they're doing it in a way that is fair.

 

Anna Patterson  16:03  

Yeah, I think most of the things that I've contemplated definitely have a human in the loop. I see this as helping enable work: helping people to focus on, like, the most complicated cases, helping to review cases more easily. You know, there are real issues with getting enough people to be insurance adjusters, or getting enough people to be bookkeepers. And so I think this kind of AI, that can help do a task and have humans kind of up-level what's happening with those tasks, is where we're going.

 

Rob Stevenson  16:43  

So human in the loop is, like, essential. As much hay as is made over automating away human labor, you still see this need for human involvement? I do, yeah. What else, in addition to human in the loop, would you prescribe for folks who are out there developing their own AI technologies, and who want to make sure that their organizations are prioritizing a meaningful, bias-free approach? What are some things they can think about to ensure that's the case?

 

Anna Patterson  17:12  

I mean, obviously, it starts with the dataset. If your dataset does not cover all the cases that you're going to see in real life, then you're going to have some holes in your reasoning, and that's going to wind up putting some bias into the final model. So really pay attention to what the application is. It's sort of like writing a program, so people think, oh, it's going to be so much easier. Well, you know, in writing a program, you have all the if statements for all the things that can happen; well, here you need, like, all the data for the things that are going to happen.
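
One way to read the if-statements analogy: where a program enumerates its branches explicitly, a training set has to enumerate, with enough examples, every condition the model will meet. Here is a toy coverage check along those lines; the condition names and threshold are made-up assumptions for illustration.

```python
# A toy check of the "data instead of if statements" idea: does the
# dataset cover every case the application must handle?

from collections import Counter

REQUIRED_CASES = {"daylight", "night", "rain", "snow"}  # hypothetical conditions
MIN_EXAMPLES = 500  # assumed floor; tune per application

def coverage_gaps(labels: list[str]) -> dict[str, int]:
    """Return how many more examples each required case still needs."""
    counts = Counter(labels)
    return {case: MIN_EXAMPLES - counts[case]
            for case in REQUIRED_CASES if counts[case] < MIN_EXAMPLES}

# coverage_gaps(["rain"] * 520 + ["night"] * 40)
# -> {"night": 460, "daylight": 500, "snow": 500}  (order may vary)
```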

 

Rob Stevenson  17:54  

This is maybe an elementary question, but how do you know you've collected enough edge cases? How do you know what the number of cases out there is, so that you can get enough data to cover them?

 

Anna Patterson  18:11  

I think if you have a human in the loop, what you'll see is, if the system has a recommendation, it might have very low confidence in its recommendation. And that points to the fact that you don't have the right data, and it can point you to collecting data in the right area.
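
Here is a minimal sketch of the signal she describes: predictions below a confidence floor get routed to a human reviewer, and the same items are logged as pointers to where the dataset is thin. The threshold and all names are illustrative assumptions.

```python
# A minimal human-in-the-loop triage sketch: low-confidence predictions
# go to a person, and the logged items show where data is missing.

CONFIDENCE_FLOOR = 0.7  # assumed threshold; tune per application

review_queue: list[dict] = []   # items a human should look at
data_wishlist: list[dict] = []  # clusters here reveal holes in the dataset

def triage(example_id: str, label: str, confidence: float) -> str:
    """Accept confident predictions; route the rest to a human and log the gap."""
    if confidence >= CONFIDENCE_FLOOR:
        return label
    item = {"id": example_id, "model_guess": label, "confidence": confidence}
    review_queue.append(item)
    data_wishlist.append(item)
    return "NEEDS_HUMAN_REVIEW"
```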

 

Rob Stevenson  18:31  

Is that always what you think if you see a low confidence number there? That it's a data issue?

 

Anna Patterson  18:37  

Usually, yeah. That's what I think at first.

 

Rob Stevenson  18:42  

Whether it's the amount of data, or whether it's, you know, been annotated or cleaned properly, those sorts of things.

 

Anna Patterson  18:48  

I think that preparing the data, having the data, those are boring tasks that have to be done.

 

Rob Stevenson  18:55  

Boring, but essential, yes. Whose job is that?

 

Anna Patterson  19:02  

Yeah, I think that's interesting, because sometimes, you know, if an AI company is selling into an enterprise, there's this, what I call the big AI myth, which is that the data is just sitting around, and it's waiting for someone to do ML on top of it, or it's waiting around for you to use that data to fine-tune an LLM. I mean, sometimes maybe you haven't saved the right data, or you haven't cleaned the data, or, like we just talked about, you have holes in your data. So really, the idea that all the data is there, and it's just ready for your needs, is kind of a myth. So you have to really say: what is it that I want to predict, and do I have the right data to predict it? Rather than: I'm sure I have the right data; what could go wrong?

 

Rob Stevenson  19:56  

What could go wrong? Oh, there are only about a million angles, surely.

 

Anna Patterson  20:01  

Yeah, a countable number though.

 

Rob Stevenson  20:04  

Right, right. When I asked whose job it is, the answer is anyone who cares about this being done properly, right? It's not like, oh, well, you'd better hire a machine learning engineer, you'd better hire out a data science team, or, oh, I don't know about that, I'm not the data guy. It's like, we're all data guys at this point. Is that kind of what you're saying?

 

Anna Patterson  20:21  

I am saying that we're all data guys. But also: what is the product you're trying to make, and what problem are you trying to solve? If you know that, then you can go back and see, do you have the right data to solve that problem? Do you have the right data to make those predictions for the tool you're trying to build?

 

Rob Stevenson  20:43  

Right, right, that makes sense. And I'm curious where you come down on some of the doom-and-gloom headlines. We know how you feel about hype cycles and AI winters, but when you hear some of the sensational stuff, that, you know, we're automating away human jobs, or that an AI is going to 3D print the singularity in someone's backyard, I don't know how silly it gets. What do you think of some of that sensationalism? And is it reasonable to be fearful, I guess, is my question.

 

Anna Patterson  21:13  

I think when you think about the number of lines of code in your editor, pick your favorite word processor of choice, there are a lot of lines of code; there were a lot of decisions made. Nobody thinks their editor is going to all of a sudden become self-aware. But, you know, we are entering, we have already entered, a place where a search engine seems to know a lot of things, because it has indexed a lot of the web. Large language models have used a lot of the web in order to generate sentences which explain things, and all of a sudden it seems sentient. I think if you think of it as a reformatting of search results, it's a lot less scary.

 

Rob Stevenson  22:03  

Yeah. When you crack the hood and it's still just lines of code, for example, I think, okay, maybe it's not time for the rise of the machines yet.

 

Anna Patterson  22:11  

Yeah, I have a friend who jokes about how to stop a robot army: you shut the door, because a lot of robots can't open doorknobs.

 

Rob Stevenson  22:23  

Yeah, it's that simple. Or unplug it, or get a water gun, I don't know. Now, I do want to ask a little bit about Gradient and your approach over there. I want to know what gets you excited when it's time to support new companies. And I guess I want to know, half in a "does this align with Gradient's strategy for investment" sense, and half just, like, for you personally. You are this curious person, you have been around the block a couple of times when it comes to AI, and you have a love and excitement for this space. So weighing those two factors, what are the kinds of things that get you excited, that get you fired up about being a part of a company's early story?

 

Anna Patterson  23:09  

I think, you know, we do fund people from their earliest stages. You know, we've funded companies when they were three people. And I think it's people's attachment to the problems, their passion for the space, me feeling like they're solving a problem that is both important and timely. Is this the right time? You know, why wasn't this done earlier? Why shouldn't it be done later? And, you know, "are they the right people to do it" goes more to their excitement, passion, and commitment, rather than, do they know everything they need to know? Because I will say that we've funded entrepreneurs of all ages, and no matter what age people are, we kind of know they're winging it. You know, entrepreneurs are like, "I got this, I really feel that I know everything." No, I mean, we know you're winging it, you know, and that's good. Yeah.

 

Rob Stevenson  24:09  

That would be freeing, if I were in a position to be raising money. It's like, oh, they know that I'm kind of making some of this up as I go along, because that's what entrepreneurship is, right? Like, if you're solving a well-defined problem, then how interesting can that space be, right? Exactly. Well, now, before I let you go here, I want to ask you to bestow some wisdom upon the folks out there in podcast land. Having worked in academia and participated in lots of fantastic research, having been at large companies, and having founded companies yourself, what advice would you give to the folks out there forging a career in the AI and ML space?

 

Anna Patterson  24:43  

I'd say that the space is really evolving rapidly. I used to see people graduate from graduate school or undergrad and feel like, that's it. You know, I know everything, I'm ready to go apply what I know and take on the world, or whatever. What I see right now is that a really important thing to do is to stay abreast of modern technology, and I think that's a kind of big change. Stay curious. Stay curious, yeah. Lifelong learners, yeah.

 

Rob Stevenson  25:20  

I love it. This has been fascinating, chatting with you, and we are creeping up on optimal podcast length here. So at this point, I would just say thank you so much for joining me. I have absolutely loved learning from you today.

 

Anna Patterson  25:30  

All right. Thank you so much. Take care.

 

Rob Stevenson  25:34  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.