How AI Happens

Theory Ventures General Partner Tom Tunguz

Episode Summary

AI has become the single largest driver of net new infrastructure spend and the single largest category of venture capital investment. Today's guest has sat on the boards of many companies and currently serves as a General Partner at Theory Ventures. Tom Tunguz joins us to share his predictions for the future of software, along with many other insights from his research into AI, including the importance of being at the forefront of AI developments as a leader, changing metrics for predicting future success, and whether or not generative AI is gearing up to replace Google search.

Episode Notes

Tom shares further thoughts on venture financing for AI technology and whether or not data centers pose a threat to the relevance of the cloud, as well as his predictions for the future of GPUs, and much more.

Key Points From This Episode:


“Innovation is happening at such a deep technological level and that is at the core of machine learning models.” — @tomastungusz [0:03:37]

“Right now, we’re looking at where [is] there rote work or human toil that can be repeated with AI? That’s one big question where there’s not a really big incumbent.” — @tomastungusz [0:05:51]

“If you are the leader of a team or a department or a business unit or a company, you cannot be in a position where you are caught off guard by AI. You need to be on the forefront.” — @tomastungusz [0:08:30]

“The dominant dynamic within consumer products is the least friction in a user experience always wins.” — @tomastungusz [0:14:05]

Links Mentioned in Today’s Episode:

Tomasz Tunguz

Tomasz Tunguz on LinkedIn

Tomasz Tunguz on X

Theory Ventures

How AI Happens


Episode Transcription

Tomasz Tunguz  0:00

People don't want to be outmoded. You don't want to be the leader who didn't pay attention to AI and didn't know about it or didn't know about that particular piece of software. So their willingness to try is very high. The question is, are they staying?


Rob Stevenson  0:14  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field, and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Okay, hello out there to all of you wonderful machine learning engineers, data scientists, whatever titles and roles you have in the space. I'm so glad you're tuning into the podcast, because I have a special guest for you today. He has been on the boards of a ton of companies that you would recognize, companies bringing really exciting technology to market, specializing in lots of different AI applications. Currently, he is a General Partner at Theory Ventures: Tomasz Tunguz. Tom, welcome to the podcast. How are you today?


Tomasz Tunguz  1:08

I'm great, Rob, thanks for having me on. It's a pleasure to be here.


Rob Stevenson  1:11  

I've only had a handful of venture capitalists on, people who are focusing on the AI space within venture. And it's exciting, because I just love to get into your brains a little bit, and into your day, because you're busy people. Obviously, we're all very busy. But I'm just curious: what are you working on, and how do you fill up your time right now? Like, how did you slot this podcast in?


Tomasz Tunguz  1:29

Yeah, there's a lot that's happening. I mean, we research pretty aggressively what's happening in the AI ecosystem; that's a big part. We've been growing the team at Theory, so we're nine people now. And we're working with some of our investments; we have seven portfolio companies, soon to be more. So there's definitely a lot happening. But I always love chatting about AI and the future of software, so thanks again for having me on.


Rob Stevenson  1:52  

Yeah, of course. The future of software is something that I think people in your position are tasked with understanding, right? You kind of need to see around corners a little bit, obviously, because you're making investments in companies that you hope become big things. So how do you stay up to date? What are your sources for information? I'm curious what signals you tune into to try and see around those corners.


Tomasz Tunguz  2:13

I mean, this is the core of our business. I think a big part of it is understanding the problems that buyers have, and having many conversations with potential buyers of software. Another major source of information, particularly when you have very technical products, and this is just as true in crypto as it is in AI, is reading some of the academic papers that are being produced, just to understand. The transformer paper itself, "Attention Is All You Need," is so foundational, so fundamental. And the pace of innovation at the infrastructure layer within AI is breakneck, where every day you wake up and there could be some other significant event, either written up in a paper or published by a big corporate. So reading more academic literature than I had been accustomed to in 10 years has been really important. Twitter, X, is also a pretty interesting place to be, because people are publishing all kinds of stuff. And then even individual researchers' blogs have been really helpful. So it's all over the place, conferences from time to time as well.


Rob Stevenson  3:15  

Gotcha, yeah. The involvement of academia in this space, like, I, like you, as you know, have been in software and SaaS for a long time, and academia is a very separate institution, right, if you're not directly serving that market. But a lot of the folks I speak to have one foot in academia and one foot in the private sector; a lot of times people are lecturers or professors, and then they are also working with a company at the same time. So it's interesting. Do you have a theory on why that is, why people are participating in both at the same time in this space? Because it feels unique.


Tomasz Tunguz  3:45

It does. Well, I think the innovation is happening at such a deep technological level, really at the core of machine learning models, that the people who spend their time focused on that tend to be in academia or at a really large technology company. So if you think about PhDs, where do you find the concentration of PhDs who really understand the differences between, like, RNNs and transformer architectures, or any of the next generation? Like, how many tokens you need to train a platform, what the right algorithms are to do cost-benefit analysis on [inaudible], for example? They tend to hang out as postdocs or professors, or they're at Facebook and Google, which I think have something like 40 to 60% of machine learning PhDs in the US.


Rob Stevenson  4:27  

Yeah, it makes sense. It's just, where does research take place? Like, where can experimentation and research take place without the expectation of it immediately turning into profit, right? And that's huge companies who can afford for that to be long term, and academia, obviously. Exactly, yeah. Now, there's so much hype in this space. We try and cut through it on this podcast; hope we succeed. And AI for some folks has been like a magic, sparkly flag you can wave that will get investors' attention. However, you are tasked, I'm sure, with cutting through that hype and trying to understand: okay, what is the actual technology? Do you have something novel here at play? So I'm curious to learn a little bit more about your strategy. When you are looking for companies to make investments in, in the AI sector, what stands out to you?


Tomasz Tunguz  5:12

This is the question we spend a lot of our days trying to answer. First, we'll get a lot of these wrong, right? I think it's really important to note that we're trying to understand what the future looks like, but we won't be right all that often; we'll try to be as right as often as we can. So, what we're looking for: we spend most of our time in the developer tools and software applications of AI rather than the large language models, as investors, just because of the capital intensity that exists at the LLM layer, and also at the inference layer. You need to raise hundreds of millions, if not billions, of dollars to buy NVIDIA chipsets, and so it's a place for others to play, not us. So when we're looking at the application layer, we come up with thematic areas. Right now we're looking at: where is the rote work, the human toil, that can be repeated with AI? That's one big question, where there's not a really big incumbent. We just announced an investment in a security company called Dropzone that is automating the rote work of security operations analysis. Five to 7,000 alerts hit a security operations center in an enterprise every day, but fewer than 1% are reviewed. Now, an LLM is a phenomenally capable tool to be able to understand which of these alerts actually matter, correlated to other signals within the company's event management system. So we're looking for companies that look like that, where there's no clear incumbent and there's a lot of rote work. Another meaningful attribute that we talk about is that machine learning systems, particularly when they start out, are only about 80% accurate, let's say. You can look at the MMLU scores of many of the current leading small, medium, and large models, and they're all roughly around 70 to 80%.
Well, there are certain applications where an 80% accurate answer is really good. It's like a head start on writing a blog post; it's the stem cells, so to speak, of a brand design, and it saves a creative person a lot of time. But there are other applications where an 80% solution is worthless. If you're wanting to analyze somebody's credit score or make a lending decision, an 80% answer doesn't help anybody. And so we're trying to understand, when we invest: is this a place where an 80% answer is sufficient, or really valuable? And then the last one, and this is probably one of the hardest: we talk about quality of revenue. If I take a software company five years ago, say it was growing from one to five to 15 million, you would have a very high degree of confidence that business was worth probably 500 million or more, because the odds that the revenue would meaningfully plateau or even contract were very small. But within AI, because of the very fast-paced changes in underlying infrastructure, and also the fast-paced changes in buyer preference, that's not necessarily the case. You can have an extremely fast-growing business, and it's unclear whether or not that revenue is sustainable. I'm not saying that that's broadly the case, but it's a question that we ask ourselves, and we're trying to understand: okay, well, how do we measure the sustainability of that revenue? Is it net dollar retention? Is it other key metrics? And we don't have one yet.
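Since net dollar retention (NDR) comes up here as a candidate sustainability metric, here's a minimal sketch of how it's computed. The cohort figures are invented for illustration, not real company data.

```python
# Hypothetical sketch of net dollar retention (NDR), one candidate
# metric for revenue sustainability mentioned above. Figures are
# invented for illustration.

def net_dollar_retention(cohort_revenue_then, cohort_revenue_now):
    """NDR (%) = revenue today from a year-ago customer cohort,
    divided by that cohort's revenue a year ago. Expansion pushes
    it above 100; churn and contraction pull it below."""
    return 100.0 * cohort_revenue_now / cohort_revenue_then

# A cohort that spent $10M a year ago: some accounts expanded to
# $11M, some held flat at $2M, and $1M of spend churned entirely.
then_revenue = 10.0             # $M, a year ago
now_revenue = 11.0 + 2.0 + 0.0  # $M, today: expansion + flat + churned
print(f"NDR: {net_dollar_retention(then_revenue, now_revenue):.0f}%")  # NDR: 130%
```

Because NDR tracks only last year's customers, it shows contraction or churn directly even while new logos keep top-line revenue growing, which is why it surfaces here as a candidate measure of revenue quality.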


Rob Stevenson  8:25  

Yeah, it's interesting that there seems to be less predictability in terms of revenue. Is that an economic happenstance, or is that just because of the tech?


Tomasz Tunguz  8:33

I think it's like a social phenomenon, where if you're the leader of a team, or a department, or a business unit, or even a company, or you're on the board of a company, you cannot be in a position where you are caught off guard by AI. You need to be on the forefront. And AI has become the single largest driver of net new revenue spend within infrastructure. If you look at the last few years, the dominant dynamic within software sales has been customers contracting their spend; Snowflake's net dollar retention, for example, went from about 177 to 130 in three or four quarters. And it's very, very clear the dominant narrative within Amazon's (AWS) and Microsoft's public company earnings calls has been cost reduction. The countervailing force to that has been AI, where the budgets are basically unlimited, or it seems that way, and that's what's driving a reacceleration in overall hyperscaler cloud growth. I think all three of those businesses started to reaccelerate by one or two percentage points on something like a $150 billion market. It's just staggering. So people don't want to be outmoded. You don't want to be the leader who didn't pay attention to AI and didn't know about it or didn't know about that particular piece of software. So their willingness to try is very high. The question is, are they staying a year from now, after they've tried some LLM-enabled application? Are they staying? And so this is the difference, where all of a sudden a different architecture, a different approach, a different model, where you take a large language model versus a small language model, or you build it in house versus buying off the shelf, or a new vendor comes about, or all of a sudden a particular layer in the stack is no longer as valuable as it once was because it's been commoditized by five or six different meaningful competitors: those dynamics are all at play in almost every category of AI.
And so I think that the account churn risk is significantly higher. You can't look at the benchmarks that we've been using over the last 10 years, where if you're selling to enterprises, a 5% annual churn rate is top quartile, and in the SMB it's 15%. I think you will start to see some significantly higher churn within AI software companies for some of these reasons.


Rob Stevenson  10:44  

And so the idea is that the investment is there; everyone's kind of having FOMO, like, okay, I don't want to get caught not investing in AI. But the question then is: okay, you've put the money where your mouth is, but can you actually make something work? You're going to spend for now, but on the other end of it, will you have something that people want?


Tomasz Tunguz  11:02

I think there are places where there's clear value, right? Code completion, there's no doubt; I mean, every software engineer under the sun will have code completion in some form. Content writers will very likely be using AI. For customer support teams, there's no doubt that the performance benefits and the cost efficiencies there are staggering. And so there are these clear places where the AI will exist. I think there are lots of applications, and this is the beauty of startups as an ecosystem: we try to figure out, okay, we have this new hammer, which nail does it actually work on? I read an analysis that something like 25% of all venture dollars invested in 2024 have been invested in AI, the single largest category by 2x. And so everybody's trying, because this exists: all the potential customers are trying on the buy side, and many startup founders are trying on the sell side. Some of these will achieve product-market fit, and some of them won't, because the technology is not accurate enough, or the application isn't the right one, or there's lots of competition. So we're figuring all that out. If you remember, in like 2008 or 2009, whenever it was that Apple launched the App Store, there was a panoply of different applications that were built, and some of them worked and some of them didn't. Same dynamic here, except it's happening in software, where the contract sizes are bigger.


Rob Stevenson  12:22  

Yeah. So this prevailing question of, like, you know, there will be plenty of competition, some of these things will just not work, some of these use cases will not find a home, users will not adapt to them, will not trust them: these questions are all at play. I made a note, actually, Tom, to ask you if you thought generative would replace search, right, if you thought it would replace the traditional Google search. This is a clunky way to tee up a question, but you'll see why I'm doing it in a minute. It's because this morning I ran a normal Google search, and there was generative right there in my search results. It took an extra second to load, it had a little sparkle icon over it, and then it was just trying to intelligently pull something contextual from one of the highest-ranking search results, right? My first reaction was to turn it off. My first reaction was to mistrust it and try to figure out where they got it from, and to actually look for context. That's just me, though. Bearing this in mind, do you think user preferences will just shift, and we will come to accept things like generative in our search? Will generative replace search?


Tomasz Tunguz  13:25

I think the answer is yes. We made a prediction at the beginning of 2024 that half of all searches by the end of this year would be generative. I don't know if we'll get there, but The Verge ran a study: two-thirds of 18-to-24-year-olds prefer generative search already. Google, in their most recent announcement, said that people who use generative search tend to have far longer dwell times and actually execute more queries than standard search. And so there are lots of questions about what this means for the monetization model of Google, and the monetization model of the internet more broadly, because if Google is summarizing the content of a New York Times or a CNN article, are you going there and viewing the ads? No. But I do think the dominant dynamic within consumer products is the least friction in a user experience always wins. And even if the answer is right 80 to 85% of the time, that's probably good enough. And the reality is, even if you were to search on Google and find a search result, the odds that it's right are maybe, like, 80 to 85%. That's a really good point. Yeah, yeah. And so it's not that the results themselves are a gold standard, and the summarization is real. So I think this is where it's all going. And we saw in the demo, or actually, I'm sure you watched the announcements, Rob, like OpenAI speaking to the computer back and forth. And, you know, if you're an expert, like the way you are in machine learning, and I call you and ask you, hey, what do you think about this small language model, the new Orca 3 or whatever, the new Llama 3 at 8 billion, and you say, I think it's a phenomenal model. I just trust you, because you're an expert. I don't need to go and look at the citations or anything like that. I think there'll be a very similar dynamic in search.


Rob Stevenson  15:08  

Yeah, it's worthwhile to just remember what a Google search actually is, which is just 80,000 links, and a lot of them are up there because someone was really good at SEO and they ranked in the search. Is it because it's true, or is it because they were playing this game really well? And so, is it that good anyway, you know, is a worthwhile question to remember. But yeah, and also, with all of these copilot tools, right: will generative replace search? Sure, but what if, instead of going to Google with your question, which is going to take you to GitHub, you could just ask your copilot app, right? Like, that search will take place in a contextual tool that is trained on maybe your own company data, so that's way more relevant than just taking it to the open web anyway.


Tomasz Tunguz  15:51

That's right. Yeah. That's why you see Stack Overflow licensing their data; I think Google bought it, or licensed it. I think, you know, the overall business model of the internet may change pretty significantly here, where it's much more lucrative for Reddit and Stack Overflow to sell their data to some of these large language model vendors than it is to run ads. And, you know, even on a multiples basis, licensing data is a far more valuable form of revenue than running ads, because one is a subscription business and the other one is transactional. And the first one is probably much higher margin.


Rob Stevenson  16:23  

Yeah. And it does depend on how much you can trust the user-generated data, though, right? Or user-generated content. What happens when that user-generated content is itself generative?


Tomasz Tunguz  16:31

That's a really good question. The other thing that's happening is some of the Stack Overflow users are angry about this licensing, and so they are revolting.


Rob Stevenson  16:38  

Yeah, they're like, I'm taking my code down. I didn't sign up for this.


Tomasz Tunguz  16:41

And so they're putting up incorrect answers. There are all these new dynamics that we'll have to figure out in this world. Do we actually need to remunerate the people who produce user-generated content? There's a power law, or even more than a power law, in social media, where you have 1% of people producing content, 9% of people engaging with it, and 90% just reading it, or passively consuming it. So if that 1% is producing $60 million a year for Reddit, that's a lot of value, right? So should they be remunerated? And I think the other big question is: does Reddit start building product experiences because Google needs certain kinds of training data that the Reddit platform doesn't offer? As a way of capturing more licensing revenue, do they start to compete with other kinds of user-generated content sites? Does that business model ultimately change the way UGC platforms are built and managed and the way the communities engage? That's a fascinating question. We have 10 or 15 years to answer here.


Rob Stevenson  17:40  

Yeah. And do you start compensating users, right? Like, are they incentivized to create their content for more than upvotes or likes, which is the currency they're playing for right now, or some sort of social clout? So you have a lot of followers, and maybe you can find a way to parlay that into real-world value, but right now it's basically a fake currency of the site. And I do wonder, is that the next step? Like, will these companies start to be like, okay, you're generating content for us, that content and that data is how we create value, so, ergo, you should be able to participate in that value too, or else people will leave?


Tomasz Tunguz  18:14

Yeah, imagine if you had a loyalty program like United's, right? Where a certain number of likes is equal to a certain amount of dollars, and you go and spend that within the overall ecosystem. I mean, it sounds a little far-fetched, but I think, particularly for some of these, I mean, you can see it within, like, YouTube, right? YouTube versus Twitch, and the dynamic there of the platforms paying for the right content creators. There's no reason to believe that this won't exist and broaden out.


Rob Stevenson  18:37  

That's right. The precedent is already there. YouTube and Twitch are good examples. Patreon, right? So will other companies who rely that much on their users eventually do the same? I think they'll have to. I think they've skated by being like, oh, people will generate this because they like being here, and eventually, if they don't like being here, they'll just leave. So it feels inevitable. All right, we kind of got off on a tangent there, but I do love when that happens. You mentioned earlier, when I was asking you about investment strategy, that you weren't so interested in playing in the LLM space, just because the investment amount is so astronomical. So I would love to hear you speak a little bit more about that. Is it just a matter of the compute expense, just what it takes to train these models? Is that astronomical investment strictly necessary?


Tomasz Tunguz  19:23

Different people will have different perspectives, but Amazon is starting to talk about a single training run costing a billion dollars, and then 10 billion in the future. So these are huge sums of money. There are startups that raise hundreds of millions of dollars just to invest in GPUs. I learned about a financing a couple of days ago that was 400 million, give or take, and 80% of those dollars were going into buying GPUs. This recalls the days, and I wasn't in the industry back then, of the early 2000s, when the majority of venture capital dollars going into technology companies was to buy servers; people would rack them, and that's the way the early search engines were built. So there's a big parallel here. Then the question as an investor is, okay, what's my return on equity? Because if the company needs to continue to spend hundreds of millions or billions of dollars buying GPUs, it needs a pretty significant business to blossom. Otherwise, the dilution associated with raising those huge equity rounds will make your position not worth that much. Maybe there's a role here for debt to play. I imagine, for some of the later-stage companies, borrowing to buy some of these GPUs is a good way to do it; you need some pretty sophisticated financial engineering. So I think there's that dominant dynamic, and you can see it within the LLM parameter counts, right? The very largest models are somewhere around 275 billion parameters, but Meta is now training a 440 billion parameter Llama model. So that's definitely happening, where the models themselves are becoming bigger. You also have this dynamic where even the smaller models, like Llama 3 at 8 billion, were trained on 15 trillion tokens, which is about three times more than the math would have suggested Meta train it on. And so that costs a lot of money.
So that's even the smaller models, and then there's this push to really small models that are built for purpose. There's a legal SLM, a small language model; there's a finance SLM, like Bloomberg's, where the training data is much narrower, the input possibilities are much narrower, and the output possibilities are much narrower, which gives you much better performance: lower latency, higher accuracy, fewer hallucinations. So you have this huge gamut of ways the ecosystem is evolving. And ultimately, it's probably a big company game, right? You look at the Web 2.0 ecosystem: the top three clouds control something like 80 to 85% of the overall hyperscaler market, and the reason that's the case is that the capital intensity of those businesses is equally large. To build out those data centers, each of the hyperscalers is spending 12 to 15 billion dollars in Q2 on additional data centers. So for a startup to be able to compete, both in terms of performance, availability, and leverage in negotiations with the GPU providers to get discounts, it's a hard place to be. But that's not to say that all is lost. I think there will be specialization, right? Every time you have these big players that are broadly general purpose, you get interesting specialization. Look at DigitalOcean: that's a publicly traded three or four billion dollar company competing with the hyperscalers, focused primarily on an incredible developer experience, through content. We've seen companies that are building vertically integrated, from the data center, networking, memory, and GPU all the way to the algorithm and SDK level, for video, where the video files themselves are sufficiently different from a text inference problem that it makes sense, because the files are really large and the cost savings from that performance are material. So I think you'll start to see some specialization there. But you still need, I mean, tens, if not hundreds, of millions, if not billions, of dollars.


Rob Stevenson  23:03  

Yeah, the number of parameters the model is trained with feels like a little bit of an arms race, right? It's like, oh, we can say it's trained with more, so that means it's necessarily better and more accurate. But then you get the example of small language models and more niche use cases, where it's necessary for those to exist, right? There's less of that data that is relevant for them to be trained on to begin with, so they couldn't be a large language model even if they wanted to be. And if you're going to do something with very sensitive, high-stakes data for a company, then it's only trained on their own data, which is a much smaller, more finite amount as well. So it feels like it could be both. And, you know, there is this use case for not needing hundreds of millions or billions of parameters, because we're just going to build it for this one single use case. The server example from the 2000s is a good one, because then, with AWS, you had, okay, this is now rented out to everyone. The same way, surely these models will be rented out too. And so maybe you can be someone like me, in your basement with a computer, and you can use these models for your own use case, right? Do you foresee that being a big part of the ecosystem?


Tomasz Tunguz  24:10

I think that's right. I mean, there are some people who want to run their own data centers and manage those data centers, and then there are some people who want the main benefit of the cloud, which is elasticity. Startups typically want a lot more elasticity, because they cannot predict demand. But if you're a late-stage company, and Dropbox famously moved from the cloud to on-prem, to their own managed infrastructure, they saved $75 million a year from that. They were in a place where it was very clear what the demands on the infrastructure would be: exactly how many users, how much data they would move. They could predict it, because they had five or 10 years of historical patterns, and the growth rates had tapered just because of the maturity of the business. But if you're a startup and you're building a chatbot, or some copilot for accounting, you have no idea what the demand will look like. So to go and invest a couple million dollars into GPUs, and either be massively over-provisioned because you didn't find product-market fit, or massively under-provisioned because you did find product-market fit, you're sort of at a disadvantage. And so you'd rather pay the extra fee for the option of having that scalability. So on the whole, I think we'll probably see the vast majority of the market prefer cloud, even if there are challenges in getting access to some of these GPUs, at least for the foreseeable future, until companies are in such a place where they can actually accurately predict demand, and then provision and reduce costs.
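The over/under-provisioning trade-off described above can be sketched with a toy cost model. All prices and demand figures here are invented for illustration; the point is only the shape of the trade-off, not real GPU economics.

```python
# Toy model of the elasticity trade-off described above: owning a
# fixed GPU fleet (cheap per unit, but capacity is a guess) versus
# renting elastic cloud capacity (pricier per unit, but scales with
# actual demand). All numbers are invented for illustration.

OWNED_CAPACITY = 100      # GPUs bought up front
CAPEX_PER_GPU = 1.0       # cost per owned GPU (arbitrary units)
CLOUD_RATE_PER_GPU = 2.5  # cloud premium: 2.5x the owned unit cost

def owned(demand):
    # Capex is sunk regardless of demand; demand beyond capacity is lost.
    served = min(demand, OWNED_CAPACITY)
    return OWNED_CAPACITY * CAPEX_PER_GPU, served

def cloud(demand):
    # Elastic: pay only for what you actually use.
    return demand * CLOUD_RATE_PER_GPU, demand

for demand in (10, 100, 1000):  # demand scenarios a startup can't predict
    owned_cost, owned_served = owned(demand)
    cloud_cost, cloud_served = cloud(demand)
    print(f"demand={demand:4d}: owned={owned_cost:6.1f} (served {owned_served}), "
          f"cloud={cloud_cost:6.1f} (served {cloud_served})")
```

Owning only wins when demand lands near the guessed capacity: at demand 10 the fleet sits mostly idle, and at 1,000 it serves a tenth of the load, which is the case for paying the cloud premium until demand becomes predictable.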


Rob Stevenson  25:33  

Yeah, that makes sense. The expense of GPUs is kind of driving the amount needed for investment; it's a bottleneck. But the trend is for technology to become smaller and more affordable, right? I remember, growing up, there was one house on our block that had a big-screen TV, and they were the rich kids, right? Now everyone's got a big TV. Granted, a GPU is a lot more complicated than a TV, but do you suspect there will be a similar trend, in this case a GPU getting smaller and more affordable?


Tomasz Tunguz  26:01

Yeah, I mean, you have a huge range of different GPUs today, you have consumer GPUs that are used for gaming all the way to the most sophisticated video cards. And there is inevitably a commoditization, I mean, you can kind of see it already happening. AMD is pushing really hard into this category. Google has Facebook matter. And then Amazon all have their own chips. They're not necessarily broadly available, like the Google GPUs are only available through the cloud. But it will happen. There's a data point we were I was looking at the EBITA multiple of VMware over the last 10 years. And it's gone from about 4x to 77x. And it's had three pretty big surges, either the first one was 25x. The next one was 50x. And then the last one was 77x. And the first wave was gaming, the second wave was crypto. And then obviously, the third wave is AI. And the challenge was selling GPUs is all of a sudden, people need much more computing power for whatever application one of the three applications that just mentioned. And then over time, the market saturates and then the growth rate slows down again. And so both because the total count of GPU productions will massively increase, because the profits are so massive, you can look at invidious margins, growth quarter over quarter, just staggering, I think their profits doubled or tripled, that will bring more manufacturing capacity online, then you have the hyperscalers, who really want to compete for this inference jobs that will bring more capacity online. And so actually, there's this brilliant paper studied in grad school called the bullwhip effect, which has to do with supply chains, you play this game, it's called the beer game. 
You imagine a hypothetical beer supply chain with four stages: there's the brewer who makes the beer, then the distributor, then the wholesaler, and then the retailer. And what we learn in that game is that very, very small changes in demand at the retail end create these absolutely massive swings in inventory planning at the other end. And if you are not aware of this effect, it's very easy to bankrupt yourself. I bankrupted my team in grad school and learned this lesson firsthand. The same thing will happen in the GPU market, where there's a massive undersupply today, and there will be a massive oversupply in, you know, three, four, or five years, and then the costs will go down.  
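[Editor's note: The bullwhip dynamic Tom describes can be sketched in a few lines of code. The following is an illustrative simulation, not anything from the episode; the four-stage chain, the inventory target, and the naive ordering rule are all simplifying assumptions chosen to make the amplification visible.]

```python
# A minimal sketch of the "beer game" bullwhip effect: four stages
# (retailer -> wholesaler -> distributor -> brewer), each ordering from
# the stage upstream to cover observed demand plus close the gap to a
# fixed inventory target. Lead times are assumed to be zero.

def simulate_bullwhip(weeks=40, target_inventory=12):
    stages = 4  # 0 = retailer ... 3 = brewer
    inventory = [target_inventory] * stages
    orders_log = [[] for _ in range(stages)]

    for week in range(weeks):
        # Consumer demand: steady at 4 cases, then a small bump to 8.
        demand = 4 if week < 10 else 8
        for s in range(stages):
            # Ship what the stage downstream asked for.
            inventory[s] -= demand
            # Naive rule: replace demand and replenish toward the target.
            order = max(0, demand + (target_inventory - inventory[s]))
            orders_log[s].append(order)
            inventory[s] += order  # upstream delivers immediately
            demand = order         # this order is upstream's demand

    return orders_log

log = simulate_bullwhip()
# A 4-case bump in consumer demand produces a far larger spike in the
# brewer's orders than in the retailer's: variance amplifies upstream.
print("retailer peak order:", max(log[0]))
print("brewer peak order:  ", max(log[3]))
```

Even with this toy ordering rule, the brewer's peak order is several times the retailer's, which is the mechanism behind the undersupply-then-oversupply swing Tom predicts for GPUs.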


Rob Stevenson  28:06  

Yep, that makes sense. Well, Tom, I have a million more questions for you, but we are creeping up on optimal podcast length here. So before I let you go, I want to put you in the shoes of an AI entrepreneur, someone who's developing a new company and bringing some exciting tech to market. If I wanted to get your attention, what would I write in the subject line when I cold email you?  


Tomasz Tunguz  28:26  

Oh, my gosh, "Hello." I always like the ones that are just "Hello."


Rob Stevenson  28:33  

You're easy! Okay. Well, it's just so personal.


Tomasz Tunguz  28:35  

Right? I think a lot of the machine-generated SDR outbound stuff tries to hit on something like that. But "Hello" is just so human.


Rob Stevenson  28:43  

Yeah, you don't start a conversation with a random person by getting right to an ask or something. You don't, like, hit them with data; you try and have an actual moment.  


Tomasz Tunguz  28:53  

Yeah, you know, it just makes me think there's a human on the other side. And many of the emails we receive today are not. So I think that instantly kind of gets you into the top 10% of emails.


Rob Stevenson  29:04  

Man, you really just throw a javelin through a lot of email marketing.


Tomasz Tunguz  29:11  

Very different preferences than the rest of the market.


Rob Stevenson  29:14  

Now, that's great. I think you're right; it's important to have these human moments as we talk about recreating human cognition in machines here. So, Tom, this has been really fun talking to you. Thanks so much for sharing all of your experience and this really fascinating look at the ecosystem and the business case here. Thanks for your expertise and for sharing with me today. I really loved learning from you.


Tomasz Tunguz  29:32  

It's been a privilege. Thanks for having me on, Rob.


Rob Stevenson  29:36  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, med tech, robotics, and agriculture. For more information, head to