How AI Happens

Synopsys VP of AI Thomas Andersen

Episode Summary

Thomas tells us all about reinforcement learning, Synopsys's constant process of learning and reinvention, and the power of generative AI (with good data). Finally, Thomas shares some words of wisdom for anyone looking to forge a career in the world of AI.

Episode Notes

VP of AI and ML at Synopsys, Thomas Andersen joins us to discuss designing AI chips. Tuning in, you'll hear all about our guest's illustrious career, how he became interested in technology, what it was like growing up in East Germany, and so much more! We delve into his company, Synopsys, and the chip design software they build, before discussing his role in building algorithms.

Key Points From This Episode:

Quotes:

“It’s not really the technology that makes life great, it’s how you use it, and what you make of it.” — Thomas Andersen [0:07:31]

“There are, of course, a lot of opportunities to use AI in chip design.” — Thomas Andersen [0:25:39]

“Be bold, try as many new things [as you can, and] make sure you use the right approach for the right tasks.” — Thomas Andersen [0:40:09]

Links Mentioned in Today’s Episode:

Thomas Andersen on LinkedIn

Synopsys

How AI Happens

Sama

Episode Transcription

Thomas Andersen  0:00

That's the next level of automation that we sort of see, where generative AI combined with agent systems can really, I think, push the envelope to the next level.

 

Rob Stevenson  0:12  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Okay, hello out there to all of you wonderful machine learning engineers, data scientists, VPs, directors, heads of AI, practitioners of every ilk and persuasion. Welcome back to How AI Happens. I'm your host, Rob Stevenson, and would you believe it if I told you I have an amazing guest for you today? You better believe it, because I'm really excited about the conversation we're going to have here. My guest earned his PhD in computer engineering at the University of Kaiserslautern in the Pfalz, Germany. He also served as a doctoral researcher at IBM's T.J. Watson Research Center. Currently, he is the VP of AI and machine learning over at Synopsys. Thomas Andersen, welcome to the podcast. How are you today?

 

Thomas Andersen  1:19

Thanks, Rob, pleasure being here. Very excited to have a chat today with you about ML and AI.

 

Rob Stevenson  1:24  

Yeah, me as well. You made a little bit of a smirk, a little bit of a face when I mentioned your PhD. Were you surprised that I referenced it?

 

Thomas Andersen  1:31

I think you must have been training on how to pronounce this complicated name of the city where I went to school. That's what I was smirking at.

Rob Stevenson  1:31

How did I do?

Thomas Andersen  1:31

Very well, actually, very well.

 

Rob Stevenson  1:31  

Okay, wow. I do speak for a living, so I have to get that right. I will confess to you, since you called it out, that I did search the Pfalz pronunciation right before this call, and I found a YouTube video where they said it like 10 times, and I was just like, I hope this is right. This is, you know, hopefully closer than whatever I would just say if I had to guess.

 

Thomas Andersen  2:02

Yes, you did a perfect job. Really, really well done.

 

Rob Stevenson  2:05  

Thanks very much. Wow, I'm killing it. What a great episode this is already. You know, I wanted to mention it just because it helps to kind of understand where you came from. But it's common in this space for folks to have a background in academia and in research, and sometimes they even maintain that, having one foot in either camp. So when you think back to yourself as a researcher, and now you are working more in, like, the private sector, what was sort of responsible for the shift? What made you move from one to the other?

 

Thomas Andersen  2:33

That's an interesting question. So I remember going to school, I essentially was really interested in computers, and I should probably mention, as an interesting tidbit, when I grew up as a kid, I was born on the East German side, so I essentially had very little exposure to modern technology. But I did, at the time, get a computer as a present. It was an Atari. I don't know if anybody remembers this from the 1980s. So that was my crown jewel. I had no software. I had to write the software myself. I wrote games myself because, you know, we weren't able to buy them. So that got me interested in computers. And then later I went into a computer club, even when I was in high school. And then I essentially pursued this career in computer engineering, and I was always fascinated, essentially, by how computers operate, how they're being built. And that sort of made me go in the direction of, initially, research. I must admit that after doing a few years of research, I was more interested maybe in the practical things, which is why I went into the corporate world. The company I work at currently is Synopsys, but I started at IBM and at Magma Design Automation. We're essentially what we call an EDA company, electronic design automation. So we are building the software for the chips that sit inside computers, inside your iPhones, inside pretty much all your electronic devices nowadays, and I always found that fascinating. But I noticed after the early years of my career, I was more interested in the practical aspects of how to make things work and then build real products.

 

Rob Stevenson  4:14  

You know, I never realized what a privilege it was that the one other kid in my elementary school who liked computers was able to just give me a floppy disk with Doom on it. I had access to these apps, and there you were, writing them yourself. What were those early games you built?

 

Thomas Andersen  4:30

Oh, that was way before Doom. I remember Doom. And I'll have to admit, I'm not a gamer, really. What I was doing was relatively simple stuff, like Tetris or something like that, you know? I mean, I'm not really a big gamer guy, so I'm not into Doom and other things as much, but I was more interested in how to make things happen and making practical things.

 

Rob Stevenson  4:51  

Yeah, it's interesting. Even if you're not terribly into games, early, like, proto-engineering, it feels like, once you get past printing Hello World, it's like, well, what can I do with it? Let's make some sort of closed system. Let's make a finite game. Do you think there's something in that, in, like, this compulsion to make a game, even if you're not a gamer?

 

Thomas Andersen  5:11

There is definitely a compulsion in building something. I personally like to build things for a purpose. So I know this sounds very grand when I say this, but I personally like to do things for good. Like, let's say, if I were to build something for healthcare. In our world where we build chips, we have AI accelerators, for example. They are, like, specialized AI architectures, and they are being used for cancer research, or they were used to decode the strands of the COVID-19 virus and essentially be able to build vaccines faster. So these are the kinds of things I'm more interested in than, say, you know, maybe social media or things like that. I mean, there's value there too, but I'm more drawn towards making something useful that actually brings us forward, right?

 

Rob Stevenson  5:59  

Yeah, yeah. Of course. I kind of want to do a quick history lesson here on the podcast before we get to some of the cool stuff you're working on right now. Had you left Europe by the time the Soviet Union collapsed? Because you were just sharing that you were on the east side of things and had less access to technology in your childhood.

 

Thomas Andersen  6:16

Oh, I was really just a kid and young teenager, and then the Wall came down at that time. So when I went to study, I was already in the unified Germany. And essentially, for me, I was lucky enough that it happened early enough, and I was able to essentially go study and pursue my career. So I came to the US after, essentially, I finished studying. So yeah, by that time, the Wall had long vanished.

 

Rob Stevenson  6:40  

Got it, okay. I wanted to ask because I was curious what it was like in terms of a technological change once the Wall came down. Like, was there an influx of Western tech and educational and academic practices? But it sounds like you had kind of already begun your education at that point.

 

Thomas Andersen  6:55

Correct, I already had. But of course, there was an influx of tech. The funny thing I sometimes think of is, people ask me, they say, well, you grew up on the eastern side and you didn't have, for example, let's say the Atari computers or electronic calculators or fancy cars. And what I tell them is, well, if you look at the technology that existed in the 80s or 90s, if you gave this technology to people today, they would say, oh my God, how horrible was your life back then? You didn't have an iPhone, you didn't have Facebook, you couldn't do all these things. And that's, of course, not true, right? I mean, life 30, 40 years ago was fun too, because it's not necessarily the technology that makes life great. It's how you use it, right, and what you make of it.

 

Rob Stevenson  7:44  

Yeah, that's a beautiful sentiment. To someone who had never seen or used a computer before, an Atari is a magic wand that can do anything, you know, even if, at the time, it's nothing compared to the tech that people are growing up with today.

 

Thomas Andersen  7:57

Absolutely, at the time, it was definitely an amazing thing. Nowadays, people would laugh at it and say, you know, my Texas Instruments calculator has more computing power than a computer had back then. But back then, of course, yes, absolutely, it was amazing. And this was really just the beginning of it. And, you know, since the topic we have here is machine learning and AI, I wanted to mention that I think the beginnings of machine learning started in the 1960s, and at the time, it wasn't really hot, and one of the reasons was simply because the compute power wasn't there. You couldn't build large models. You couldn't have ChatGPT and build big training models. It just wasn't possible. But many of the ideas, at least in principle, already existed. So that's kind of interesting, how things have evolved since then.

 

Rob Stevenson  7:57

Yeah, of course. And there's this really common debate, I guess challenge, problem, about the cost of compute. Even though it is available in a way that it wasn't, certainly, in the 60s, is it really? It's still expensive. It still requires a lot of resources. So do you see that coming down? Do you see compute and access to compute becoming more commoditized, in the way that other sorts of tech become smaller and cheaper and faster over time?

 

Thomas Andersen  9:12

Definitely, the cost of compute is coming down. Unfortunately, at the same time, the demand for more compute is going to continue to grow, and oftentimes at a faster rate than we can deliver new compute. If you look at, for example, training transformer models, like the latest LLM-type models, you can see there's a big exponential growth in training them. So the demand, I think, is outpacing to some degree the delivery of it. That will continue to be, I think, an issue for some time. But the good news about this is that these challenges essentially create opportunities for us. So there's things like coming up with new, specialized AI accelerator architectures that are very good at doing specific tasks, right? You can say that NVIDIA got lucky multiple times. They came up with the GPUs. Originally, they were meant just for graphics cards and the gamers, like Doom, for example. Then there was the Bitcoin mining, and everybody needed GPUs. And now it's large language models, where people are using GPUs to train them. So specialized architectures can help. And on top of that, it's, of course, driving the demand for larger, more powerful chips, and at the same time, reducing the power of those chips. Making sure you don't need a power plant to run these data centers is a challenge, and that keeps us in the driver's seat. That essentially pushes us forward to building new chip architectures. And from my perspective, since we're working on the software that is used to design those chips, that's, of course, putting new challenges on us, because we now have to support much larger chips, and we want to get them to market much quicker. So I personally see it as a good thing, and that's how you drive technology forward.

 

Rob Stevenson  11:01  

Certainly. Yeah. So we're dancing around it a little bit, but maybe it'll be worthwhile to speak about Synopsys at a high level, just to set some context for the rest of the conversation. So where exactly does Synopsys live in this hardware pipeline, from, like, chip manufacturing all the way to getting it in the hands of the people who can use it to run, you know, trillion-parameter models? Where does Synopsys live?

 

Thomas Andersen  11:22

Yeah, that's a very good question, because we're essentially a business-to-business company. So we build the software that's used to design all the electronic chips in pretty much every piece of equipment, everything from high-performance chips, ARM cores, CPUs, NVIDIA GPUs, to the processor in your iPhone, to little things, microcontrollers that maybe sit in your car or in your washing machine. So any type of, I would say, integrated chip has to first be specified. You come up with a description of what functionality you want, and you describe this in a language kind of similar to writing code, like C++ or Java; it's more hardware-oriented languages that are being used for that. And then you essentially translate the behavioral description of what it should do all the way down to something that you can manufacture onto a chip. You can essentially take it to a foundry like TSMC or Samsung or Intel Foundry, and they put it into this little microchip, and then they ship it to you, and then you use your iPhone, and it just works. So our motto is "Silicon to Software." And this is a very, very exciting space, because it has essentially lots of complex problems that we need to solve throughout this whole process of chip design.
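
To make that lowering step concrete, here is a toy Python sketch of the idea, emphatically not Synopsys's actual tools or formats: the Gate type, the expression encoding, and the net names are all invented for illustration. It turns a tiny boolean "intent" into a gate-level netlist, the kind of representation that is eventually mapped to something a foundry can manufacture.

from dataclasses import dataclass

@dataclass
class Gate:
    kind: str      # "AND", "OR", "NOT"
    inputs: tuple  # names of the nets feeding this gate
    output: str    # name of the net this gate drives

def synthesize(expr, netlist, counter=[0]):
    """Lower a nested ("op", args...) behavioral tree into gates."""
    if expr[0] == "var":                 # a primary input: nothing to lower
        return expr[1]
    ins = tuple(synthesize(arg, netlist, counter) for arg in expr[1:])
    counter[0] += 1                      # simple fresh-name counter for internal nets
    out = f"n{counter[0]}"
    netlist.append(Gate(expr[0], ins, out))
    return out

# Behavioral intent: out = (a AND b) OR (NOT c)
behavior = ("OR", ("AND", ("var", "a"), ("var", "b")), ("NOT", ("var", "c")))
netlist = []
top = synthesize(behavior, netlist)
for g in netlist:
    print(g.output, "=", g.kind, g.inputs)
print("top-level output net:", top)

Real synthesis also does technology mapping, optimization, and checking against the target process rules; this sketch only shows the lowering idea.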

 

Rob Stevenson  12:38  

So the chip manufacturers themselves, like NVIDIA, for example, they're not competition. They're not also trying to write the software for their own chips.

 

Thomas Andersen  12:46

No, they're our customers. And I think in the early days of our industry, of what we call the electronic design automation industry, many of the chip makers had their own software, but at some point they realized it is just too complicated to write their own software. I mean, as an example, you probably use Microsoft software for, I don't know, Outlook, PowerPoint. Everybody could, of course, say, I'm going to write my own software, I'm going to have my own word processor. It doesn't really make sense. And in this case, it's particularly specialized. It's, I would say, highly computational algorithms. Many of the new technology nodes that chips are being manufactured in, like, now we're down to two-nanometer chips, for example, have very, very complex manufacturing rules. So writing the software is extremely complex. Just one of our many tools may require up to 1,000 people to maintain it, and, of course, a software license itself is also very expensive. It's not, like, $99; it has a list price of several million dollars, as an example. And yeah, our customers are essentially pretty much all the chip makers that you can think of.

 

Rob Stevenson  13:58  

Gotcha. It's an interesting position in the manufacturing pipeline, or the production pipeline, that you sit in. And I'm curious, when you kind of think about the challenges your customers are facing, what is the feedback they give you that sort of helps you in your own chip design software?

 

Thomas Andersen  14:14

Yeah, so that goes back to what we discussed a little bit earlier. Essentially, chips are getting larger and bigger because of all the compute needs that are coming from the different requirements, especially from AI and training models. Because these chips are getting larger and more complex, the runtimes, for example, of our tools to build these huge, multi-million-gate chips would explode unless we, of course, continue to advance our technology. Similarly, the number of people required to get an electronic chip out the door is growing. So all of our customers are asking for the same thing: give me something so I can get my chip out the door faster and with fewer people. And if you compare this to, say, software development, software development is, relatively speaking, in comparison, a straightforward process. You write something, you compile it, and it works. And, you know, you spend maybe several months working on this, and then your program works, and you keep updating it. To build a chip, let's say, I'll just pick an example, the Qualcomm Snapdragon chip that sits in many mobile phones, this is an undertaking that takes more than a year, and there are hundreds, possibly thousands, of people involved in the design process. There is no room for error whatsoever. So it's a highly, highly complex process, and it can cost upward of hundreds of millions to essentially design such a chip, and for that reason, you want it to be as efficient as possible. You don't want to spend two years to build a chip. You want to hopefully do it in less time, and you want to do it with fewer people, especially as demands and sizes of chips are growing. For example, some of the AI accelerators that sit in data centers, some notable examples maybe being companies like Cerebras or Graphcore, they have these humongous wafers. They're, like, the size of my head, essentially; they're that big. They're not tiny, so they wouldn't fit in your phone, and you also couldn't ever operate them on batteries. They use a lot of power, and they have so many replicated AI cores that essentially do these multiply-accumulate operations. So these chips are incredibly complex to build, and there is just no room for error.

 

Rob Stevenson  16:28  

Yeah, yeah, of course not. You know, there's all this increasing demand for these increasingly advanced chips, so you'd think that there would be more manufacturers, but the barrier to entry is so high. Do you think we're going to kind of be stuck with the same dozen, two dozen chip manufacturers, or will we see more competition on that side of things?

 

Thomas Andersen  16:49

That is a very interesting topic that you brought up. So the interesting thing is, I think maybe 10 years ago, there were maybe more chip makers, and they would all make their own chips, so they would have their own fabs. The part that has gotten really expensive is the manufacturing of the chip itself, which is why there are very few remaining fabs that actually manufacture the chips. TSMC is an example, Intel's an example, GlobalFoundries, Samsung. These are the big guys, and most of the companies now that build chips are so-called fabless, meaning they design the chip, but then they send it off to manufacturing at, let's say, TSMC or Samsung. So we've seen a shift where our chip companies have become fabless, with a few exceptions, like Intel, who has their own fabs. But at the same time, in recent years, we've actually seen more chip companies. In fact, what we have seen is traditional software companies, to name a few examples, this would be the likes of Google, Meta, or Amazon, all building their own chips. They were traditional software companies. They never built their own chips. They would buy chips from somebody and just put them into whatever hardware they were building, but now they all have their own specialized hardware. So as an example, Google is building their own chips for the consumer division, of course, because they have Pixel devices, and there are Tensor chips that sit in phones or tablets. At Amazon, there's also, of course, a consumer division, with Echo devices and so on. But on top of that, one additional reason why all these companies build their own specialized chips is because they build special AI accelerators that sit in their data centers. So Google has their Tensor chips, Amazon has their own chips, Meta is building their own chips. So that itself has sparked, I would say, almost a resurgence in the chip world, because there are so many more players. They don't manufacture the chips, but traditional software companies that didn't use to build chips all now have designers in-house who essentially design their own chip with their own secret sauce. So that's been quite interesting for us, and that has also made the entire chip market so much more interesting. And you've probably also heard about the CHIPS Act, which was essentially the investment of the US government into the chips market. That, in addition, has fueled this overall market, with a goal of independence from foreign suppliers: you want to have your own fabs in the US. And that has also helped fuel the whole chip design industry. And on top of the big names that I just mentioned, the traditional software companies, I forgot Microsoft, apologies. Of course, Microsoft is also a company that builds their own specialized custom chips and used to be just a traditional software company. But on top of that, you also have lots of little players. You have lots of startups. They build specialized AI chips. Of course, as always with startups, you don't know how many of them will remain. Some will be absorbed, maybe some will make it big, and many, of course, will disappear. But that has really helped fuel the whole chip design market.

 

Rob Stevenson  20:04  

So the question is not, are we stuck with the same few chip designers or chip manufacturers? It's about fabrication. So even some of these big companies, they are still beholden to the fewer companies who do fab. But those companies you listed, the traditional software companies, are they doing their own fabrication?

 

Thomas Andersen  20:20

No, they don't do their own fabrication, but they design the chip. So essentially, you take a language description of what the chip should do, like a specification of the behavior, and then you come up with essentially a format for how this will be manufactured. Then you send it to the fab, and then you get your samples back, make sure your yield is high so that you can manufacture those chips at a reasonable cost, and then fabs like TSMC, for example, will supply the chips to them. That's how it works.

 

Rob Stevenson  20:49  

I was kind of under the impression that these companies were going to fire up their own chip manufacturing because using vendors like NVIDIA got to be so expensive, and their GPUs are just fantastically priced. Like, you know, you see Jensen Huang holding up this postage-stamp-sized thing, and he's like, this is $40,000. But is it more that their needs are so specialized, it's better to do it themselves? It's less about the supply and demand?

 

Thomas Andersen  21:12

So companies, including NVIDIA, will design the chip, but they won't manufacture the chip, right? They all go to fabs. One of the reasons why they do this is because, like I said earlier, there are just very few fabs, or manufacturing plants, so to speak, that remain, simply because the cost of entry in this field is so, so high. You can compare this to the early days, where I said companies used to do their own software to design the chips, and then they said, ah, this is so complex, we're going to buy it from a vendor like Synopsys. And the same thing has happened on the manufacturing side. It's so complex, there's so much investment, there are other folks who can specialize in that, and we essentially do the parts in between. So NVIDIA will, say, design their GPUs, and all the secret sauce comes in how you design it, but then manufacturing is outsourced to a fab.

 

Rob Stevenson  22:02  

Got it. Okay, that makes sense. Thanks for walking me through that process. I'm curious to know just a little bit more about your role, Thomas. So, when things start to bifurcate from traditional software development, and you and your role are tasked with injecting some AI and ML into the proceedings, what does that look like? How would you kind of characterize your role?

 

Thomas Andersen  22:19

Yeah, that's a very interesting point. So since we are a company that essentially builds business software, when we want to build machine learning or AI algorithms, we essentially don't have access to huge amounts of public data. As an example, in the early days, when people were building algorithms for image recognition, so you can see there's a cat or there's a dog in this picture, right? You probably remember those days, several years ago. That requires a lot of training data, but most of this training data is readily available on the internet. You can find millions of pictures of cats and dogs, and then I can train something, and with the help of humans, I can weed out errors, and ultimately, I can train a system so it is probably better than a human at identifying certain objects. And similar techniques can be applied to things like healthcare, where you maybe find tumors or something that a human may overlook. But the reason why this works so well is because I essentially have access to literally unlimited amounts of data. Similarly, let's say you look at Meta, or Facebook and Instagram: everybody gives their information away for free, so to speak, right? In our world, we obviously don't have access to this. If we want to train our software on data, pretty much all of this data is very compartmentalized. We have many different customers, and each of those customers is very protective of their IP, and it's pretty much impossible to take their data, especially from multiple companies, and build training models across those. So that's one of the big challenges we have in terms of data access. It doesn't make it impossible, but it makes it harder compared to most other AI or ML applications. The second thing that makes my life more difficult in terms of developing applications that help accelerate chip design through AI and ML is the fact that things are changing very rapidly. So I was talking about advanced nodes and the fabs and so on. All these things, like, say, you move from a seven-nanometer process to a five-nanometer to a two-nanometer process, all of these have very complex manufacturing and design rules, and these design rules essentially change every 12 to 18 months. It's sort of the equivalent of, let's say, you are training self-driving cars on all the scenarios of the road, and the car recognizes the road signs, and after a while, it has seen enough possible scenarios where maybe it needs to take an action. Like, say, somebody walks in front of the car and it needs to quickly brake, or it needs to make an emergency maneuver to avoid something. So you can train all these things. But imagine every year these things were different. Suddenly, the road signs are different, the rules are different, and there are different scenarios popping up. In the natural world that we live in, things don't change that rapidly, right? Like, with your cat and dog pictures, I don't suddenly have a new species appear every year where I need to say, oh wait, I have this new species now, I need to train again to recognize that. In our world, because we're dealing with industrial data that is being generated, I have to, so to speak, retrain, and the learning that I may have done a year ago might not apply anymore, and I have to retrain again.

So that's another thing that makes our world a little more challenging. Having said that, of course, there are a lot of opportunities to use AI and ML in chip design, simply because chip design today, as I mentioned earlier, is a very long and labor-intensive process. While there are software tools that, say, take a description of functionality, then create a circuit graph, and ultimately come up with the transistors that will be implemented on a chip, there are a lot of design choices that a human still has to make, and our focus is on automating those human tasks and helping the humans who operate our tools and design those chips to operate at a higher level. And similar to self-driving cars, you can say, take away or automate some of the initially simple tasks, while, of course, I would argue, the creative tasks that humans are doing in the design process, we're very, very far from automating any of those. So that's sort of how we approach things.
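
As a toy illustration of that retraining loop, here is a minimal Python sketch in which everything is invented: the "model" is just a memorized average, the 7nm and 2nm data are synthetic, and the retrain threshold is arbitrary. The only point is the shape of the problem: a model fitted under one node's design rules degrades when the node changes, which triggers retraining on the new node's data.

import random

def train(samples):
    # "Training" here just memorizes the mean label: a stand-in for a real model.
    return sum(y for _, y in samples) / len(samples)

def error(model, samples):
    return sum(abs(model - y) for _, y in samples) / len(samples)

random.seed(0)
# Synthetic data: the quantity the model must predict shifts with the process node.
node_7nm = [(x, 1.0 + random.gauss(0, 0.1)) for x in range(100)]
node_2nm = [(x, 3.0 + random.gauss(0, 0.1)) for x in range(100)]

model = train(node_7nm)
print("error on 7nm data:", round(error(model, node_7nm), 3))  # small: rules unchanged
drift = error(model, node_2nm)
print("error on 2nm data:", round(drift, 3))                   # large: new design rules
if drift > 0.5:               # arbitrary trigger: retrain once the old model degrades
    model = train(node_2nm)
    print("after retraining on 2nm:", round(error(model, node_2nm), 3))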

 

Rob Stevenson  26:45  

To extend the metaphor of the road signs changing, the landscape changing on you: there are no new species being introduced in image recognition. What are the new cats and dogs that you're faced with staying abreast of? Like, when you say that the landscape is changing, what do you mean?

 

Thomas Andersen  26:59

I mean design rules. So, for example, you have a description of, hey, I would like to build an integrated chip that has this functionality. And last year, I was able to implement this on a technology node that was using a certain type of transistor, and there are many, many complex rules that I need to conform to in order to implement this and then send it to the manufacturer, to the fab, to make it happen. Now suddenly they say, well, we have a new process, and now everything that we told you last year doesn't apply anymore. Now there are new rules, there are new types of transistors, and there are these new types of devices. And because of that, in order to get, for example, good yield when I'm actually manufacturing them, you need to implement different rules. So it's essentially the equivalent of the rules of the road changing all the time, and that's why I said it requires retraining. And for that reason, a lot of what we have developed is trained on the customer side. It may come with some pre-training that we do, which sort of covers the most common or generic cases, but then you deploy these AI tools, and at Company A, they will be trained differently than at Company B, because they all maybe have slightly different chips, they have different design processes. And actually, to some degree, the companies like that, because it ensures that they still have differentiation. Because if everybody had the exact same thing, then everybody would come up with the exact same chips, and there would be essentially no difference in their competitiveness.

 

Rob Stevenson  28:34  

The example of training data being used to identify something finite, like a natural phenomenon, you know, that's observable, that is maybe large, but finite, for example, living species on the planet. Human ingenuity and human design processes are less finite, right? And that's why the approaches that human beings are taking to these problems are always changing and shifting. Is that what you mean?

 

Thomas Andersen  28:56

Exactly. That's exactly right. So just to give you an example: when I started the AI and ML work at Synopsys, something like six, seven years ago, I remember there was a lot of skepticism. There was, of course, a lot of interest: oh, there's machine learning everywhere, and we can do these things. But a lot of these attempts didn't work, for the aforementioned reasons: I don't have enough data to train, and the training takes too long because things are constantly changing. So one of the first approaches that we employed was reinforcement learning. Reinforcement learning, I think, became popular through AlphaGo. I don't know if all the listeners know the story behind it, but maybe I'll just quickly tell it, because it's kind of interesting. I do remember, in the 1990s, back when people played Doom, there were these contests where the world champion of chess was playing against IBM's Deep Blue supercomputer. Yeah, Kasparov. Yeah, exactly, Kasparov. I think it was '96, '97, something around that time frame. Anyway, the way Deep Blue essentially handled playing chess is it would just calculate a certain number of steps ahead and evaluate all possible moves, right? I mean, you can't calculate all possibilities, that's not possible, right? The solution space for chess, if I remember correctly, is something on the order of 10 to the power of 123, so imagine 123 zeros. That's the possible solution space. You cannot calculate all possible moves, but you can calculate enough moves ahead so that at some point you can do better than a human. Now, the interesting thing is, of course, a human doesn't work that way. I mean, no good chess player works that way. As a bad chess player, you will think two moves ahead: if I move this, ah, that guy is going to move that. But that's not how the human brain works, right? We're not computing all possible scenarios. A human essentially has certain scenarios in his head, and he makes trade-offs, and he has experience, and that's why I think he operates this way. Anyway, in the late 90s, at some point, Deep Blue was able to beat Garry Kasparov at chess. Now, there is another game called Go, and the game of Go essentially just has more states. You can argue, well, if I made the chessboard larger and I had more fields to move on, it would increase my compute requirements to beat a human exponentially, and compute power is not growing exponentially. So for the game of Go, even today, with just sheer brute-force compute power, you would not be able to beat human experts. And this is where AlphaGo from DeepMind comes into play. DeepMind is now part of Google. They essentially revolutionized this idea of reinforcement learning, where, instead of computing all possible combinations to a certain depth, you essentially just make a decision about which move is better than the other one, and you prune certain branches of your decision tree, so to speak. That's the idea of having an agent who goes through a complex system, and essentially, if a move is rewarding, then you accept that move. And it has the notion of what's called delayed reward. Delayed reward means I may make a move that, at the moment, appears like I'm actually losing. Chess would be a good example: you want to win the end game, you want to get to checkmate.

It doesn't matter how many of your pawns or pieces, your knights or whatever, you lose along the way, right? You're not optimizing for that. So that's what I mean by delayed reward. You may give up everything, but in the end, you win the game. And that's what reinforcement learning is really good at. And we applied this technique to some of our chip design processes, because in our world, humans also have many choices to make, design choices, like, if I build my chip, how do I implement it? What kind of connectivity do I have? What technology parameters do I use? Without going into technical details that are too deep in chip design, essentially, these are things that are done by humans, and humans, of course, are limited in doing this type of optimization. We have found that reinforcement learning optimization can do it much better and much faster. And for us, this became the first few products we introduced that added this layer of AI-based optimization on top of our design tools, and that really helped improve the productivity of humans, and essentially helps automate some of the tedious tasks that a human has to do.
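
To ground the delayed-reward idea, here is a minimal tabular Q-learning sketch in Python. The environment is a made-up six-state chain, not chess, Go, or a chip design task, and the hyperparameters are arbitrary. The point is that the only reward arrives at the terminal state, and the update rule propagates that value backward, so earlier moves that earn zero immediate reward still become preferred.

import random

N = 6                    # states 0..5; reaching state 5 ends the "game" with a win
ACTIONS = (0, 1)         # 0 = stay, 1 = move forward
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

random.seed(1)
for episode in range(500):
    s, steps = 0, 0
    while s < N - 1 and steps < 200:     # cap steps so early episodes terminate
        if random.random() < eps:
            a = random.choice(ACTIONS)                     # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
        s2 = min(s + a, N - 1)
        r = 1.0 if s2 == N - 1 else 0.0  # the only reward is at the very end
        best_next = 0.0 if s2 == N - 1 else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s, steps = s2, steps + 1

# The learned value of "move forward" grows toward the goal, even though every
# intermediate move earned nothing at the time it was made.
print([round(Q[(s, 1)], 2) for s in range(N - 1)])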

 

Rob Stevenson  33:52  

Well, I'm amazed that you spoke about automating human tasks without saying the word generative, Thomas.

 

Thomas Andersen  33:58

Yes.

 

Rob Stevenson  34:01  

I would be shocked to hear you're not deploying some kind of generative use case, but the fact that reinforcement learning was sufficient in these cases, to make as good or sufficiently good a decision as a human being, is encouraging here.

 

Thomas Andersen  34:14

Definitely. So, I was going to joke, "What's generative?" But okay, that's too lame.

 

Rob Stevenson  34:21  

It is 2024, though; if we don't speak about generative, the AI podcast police will shut down my show. We are bound to speak about it.

 

Thomas Andersen  34:29

I'll make sure that doesn't happen. So generative AI, of course, is a very, very promising technology, but there are different applications for different techniques. What I described to you in terms of AI optimization using reinforcement learning essentially solves a different problem space. Generative AI is extremely exciting in the sense that it can actually generate content, and it is particularly powerful at summarizing large amounts of data. Say I take ChatGPT as an example. I know there are many examples of where it can go wrong, for various reasons; we can talk about that. However, if I train a large language model with a lot of good data, meaning not everything that's out on the internet, but specially selected good data, where I feel comfortable that the information that's in this book, yes, I trust that information, it can give you very, very good answers. And no doubt it would outperform any human, because I don't think any human has read all these books and can memorize all these things. So in terms of taking all this data in and summarizing content, I think it is far superior to what a human could do. There are, of course, challenges, right? I do remember one of the examples. It wasn't ChatGPT, I think it was Gemini from Google, but it doesn't matter which one it was. One of the examples where people were laughing at it was when somebody was looking up a recipe for pizza dough, and it essentially suggested using glue in it. And you would think, ah, there you go, see, the thing is wrong. But of course, this comes from the source data. In this particular case, I believe there was some Reddit post where somebody was essentially doing satire. It was a joke, but it was upvoted heavily. The system, of course, doesn't understand that. So it somehow got in there that, hey, this is a good idea to add this to the dough. Now, every human being would laugh at this and say, how can that be? But this comes down again to having good input data and reliable input data. So if you have that, I think generative AI is extremely powerful, and we are, of course, using it in many areas, be it things like having a chatbot that supports our tools and gives you answers and essentially summarizes all the information, all the way to, and now it gets interesting, remember, I talked about chip design starting with essentially specifying, in sort of a programming language, kind of like C or C++, but more hardware-related, essentially a language that you write the way you would write a program. So generating that code based on your intent is another extremely powerful capability that we are, of course, working on.
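
As a toy sketch of that "good data" point: answer only from a small, curated corpus, and refuse when nothing relevant exists, rather than guessing. Everything here is hypothetical; the two trusted documents and the word-overlap scoring stand in for the vetted corpus, embeddings, retrieval, and LLM a production system would use.

import re

TRUSTED_DOCS = [
    "Pizza dough needs flour, water, yeast, salt, and olive oil.",
    "Proof the dough for at least an hour before baking.",
]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question, docs):
    q = tokens(question)
    score, best = max((len(q & tokens(d)), d) for d in docs)
    return best if score > 0 else None   # refuse rather than guess

print(answer("what goes into pizza dough?", TRUSTED_DOCS))
print(answer("how do I change a tire?", TRUSTED_DOCS))  # None: outside the corpus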

 

Rob Stevenson  37:16  

So the generative use case for you is code generation.

 

Thomas Andersen  37:19

Code generation, and actually many more things. So, code generation for essentially specifying chips, the chip intent, the language, so to speak. But there is a lot of other content that you need during chip design. Again, without going into technical details where I'd lose the listeners, there are essentially technology parameters that need to be described. There are, for example, verification constraints and things like that that need to be generated. So there are all kinds of data sources that today are written in some kind of human-language spec. Somebody gives you a document, here are 100 pages, and you read through the spec of what the thing should do. Now, of course, having generative AI is much better, because it can summarize things, it can auto-generate the constraints so that a tool can directly absorb them, rather than a human writing them. So essentially, content generation in all kinds of forms is powerful. And of course, there's more. As I'm sure many of the listeners are familiar with, there's this whole context of agentic systems, so combining LLM agents to do something, to automate certain tasks. As an example, let's say I have one of our tools, a compiler, so to speak, and I can ask the tool in a chatbot, I can say, hey, how do I solve this problem? How do I fix this? And it would give me a paragraph back. It would say, oh, you should do this. And, you know, you can use this command, or you can use this approach. The next step, of course, is to automate this more, to have an agent system that actually generates tasks and executes tasks, and, like a human, it would learn which task works and which one doesn't work; it tries multiple approaches. So that's the next level of automation that we sort of see, where generative AI combined with agent systems can really, I think, push the envelope to the next level.
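
Here is a minimal sketch of that agent loop in Python. The ask_llm and execute functions and the command names (relax_timing, resize_buffers, reroute_nets) are all invented stand-ins; no real model or EDA tool is called. It only shows the propose-execute-check-retry shape Thomas describes, where the agent remembers which attempts failed.

def ask_llm(goal, history):
    """Hypothetical LLM call: propose the next command given past attempts."""
    untried = [c for c in ("relax_timing", "resize_buffers", "reroute_nets")
               if c not in history]
    return untried[0] if untried else "give_up"

def execute(command):
    """Stand-in for running an EDA tool; only one command 'works' here."""
    return command == "resize_buffers"

def agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        cmd = ask_llm(goal, history)
        history.append(cmd)
        if execute(cmd):                 # check the result, stop on success
            return f"solved {goal!r} via {cmd} after trying {history}"
    return f"failed: tried {history}"

print(agent("fix setup-time violation on clk domain"))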

 

Rob Stevenson  39:12  

I'm really pleased, Thomas, to hear you have what feels like a sense of, maybe not skepticism, but just, hey, the right tool for the right purpose. You know, like, reinforcement learning is good in this case, generative is exciting, but don't treat it like this thing that ought to be shoehorned into everything, which I feel like has been kind of a compulsion the last year and a half. It feels like there's a general pullback in terms of the hype, and people are being a little more delicate about where to deploy this kind of tech. But it sounds like you have that delicacy at Synopsys, which is refreshing. Thomas, we are creeping up on optimal podcast length here, so as we kind of delicately thread the needle here at the end of the episode, before I let you go, I would just love for you to share some wit and wisdom, perhaps, for the folks out there listening who are forging their own career in this space, in AI and ML. Maybe they want to wind up at a cool company like Synopsys, in a role like yours. What advice would you give them?

 

Thomas Andersen  40:04

My advice is: always be bold. Try out many new things. Make sure you use the right approach for the right tasks, because, as you just mentioned, there are different tools for different applications. Generally, I'm not the guy that buys into the hype, where people say, oh, yesterday it was this, and today it's that. I think there are many different tools and applications that have a specific purpose, and you should really think about that when you approach a problem and how you want to solve it. So I think that's one of the general pieces of advice I would give. It's not necessarily specific to AI; I think this applies to everything in life, but I think that's the best advice I can give.

 

Rob Stevenson  40:45  

It's great advice, Thomas, in an episode full of great advice and technical expertise. So at this point, I would just say thank you so much for being on the show. This has really been a wild ride with you today. So thank you for being here and for sharing all of your experience. I've loved chatting with you.

 

Thomas Andersen  40:57

Absolutely. It's been a pleasure. Thank you.

 

Rob Stevenson  41:02  

How AI Happens is brought to you by Sama. Sama's agile data labeling and model evaluation solutions help enterprise companies maximize the return on investment for generative AI, LLM, and computer vision models across retail, finance, automotive, and many other industries. For more information, head to sama.com.