How AI Happens

x.ai Founder Dennis Mortensen

Episode Summary

Today’s guest, Dennis Mortensen, is a serial entrepreneur whose most recent venture (x.ai) focuses on teaching machines to schedule meetings. He shares with us what this challenging (and rewarding) journey looked like, and the valuable (and sometimes surprising) lessons he and his team learned. We also discuss giving empathy to machines, expectations, and how we can get more comfortable with imperfect progress.

Episode Notes

Whether you’re building AI for self-driving cars or for scheduling meetings, it’s all about prediction! In this episode, we’re going to explore the complexity of teaching the human power of prediction to machines.

Tweetables:

“The whole umbrella of AI is really just one big prediction engine.” — @DennisMortensen [0:03:38]

“Language is not a solved science.” — @DennisMortensen [0:06:32]

“The expectation of a machine response is different to that of a human response to the same question.” — @DennisMortensen [0:11:36]

Links Mentioned in Today’s Episode:

Dennis Mortensen on LinkedIn

Bizzabo [Formerly x.ai]

Episode Transcription

Rob Stevenson  0:04  

Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. Here with me today on How AI Happens is a multi-time entrepreneur whose last venture, x.ai, was recently acquired by Bizzabo. Dennis Mortensen, welcome to the podcast. How are you today?

 

Dennis Mortensen  0:42  

Good, good. Thanks so much for having me.

 

Rob Stevenson  0:44  

Really pleased to have you. You have lent your abilities to a variety of different use cases and companies, and I'm really interested to hear about your journey, how you wound up in AI, and all of your takes on the space. Let's start at the beginning, though: would you mind sharing a little bit about your background and how you wound up deciding to play in the artificial intelligence space?

 

Dennis Mortensen  1:04  

Sure, I can give you the phone number for my mom, then you can get the four-hour version. And it is spectacular, I guarantee you.

 

Rob Stevenson  1:15  

Dennis was born at a young age.

 

Dennis Mortensen  1:17  

The shorter version is that I've spent the last 26-some-odd years working on five distinct ventures, all of them really rooted in the idea that I might be able to extract some value from a distinct data set. And each one of them, obviously, given time, became ever more sophisticated, not necessarily because we became smarter, just because technology advances. The last one we worked on was this idea that there might be an opening to create an intelligent agent that can schedule meetings on your behalf. We spent about half a decade trying to solve that problem and got acquired last year.

 

Rob Stevenson  1:57  

Congratulations! And I want to hear more about the intelligent agent, because I feel like there's a lot wrapped up in those two words. What do you mean when you say "intelligent agent"?

 

Dennis Mortensen  2:06  

I'm trying to paint a picture of the difference between you and me making a prediction and having some sort of prediction engine, where you give it an input and it gives you a prediction: say, what next song to listen to on Spotify, what next movie to watch on Netflix, what product you might be willing or able to buy on Amazon, and so on and so forth. All those recommenders are suggesting a next step. When I try to paint a different picture of the intelligent agent, it is that of distinguishing between that which falls into the single-task bucket and the agent, which falls into the job bucket, meaning there's a job that very likely requires multiple turns and/or multiple steps. It requires an idea of when you've reached a conclusion, and, upon conclusion, being able to execute on what you've assembled over those turns, getting it to the point where I would otherwise have been asked to do it myself, but the machine did it for me. That is, perhaps, in a Venn diagram, the overlapping set of ideas between the task or the prediction, and the job or the agent.
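As an editorial aside, Dennis's task-versus-job distinction can be sketched in a few lines of Python. Everything below is hypothetical and illustrative; the names and the meeting-scheduling logic are invented for this sketch, not x.ai's implementation:

```python
from dataclasses import dataclass, field

def recommend_next_song(history: list) -> str:
    """Single-task prediction: one input in, one suggestion out, job done."""
    # A real recommender would call a trained model; we fake the prediction.
    return f"song-{len(history) + 1}"

@dataclass
class MeetingState:
    """State an agent must carry across turns before it can act."""
    participants: set = field(default_factory=set)
    proposed_time: str = ""
    confirmed: set = field(default_factory=set)

    def concluded(self) -> bool:
        # The "job" is done only when every participant has confirmed.
        return bool(self.participants) and self.confirmed == self.participants

def agent_step(state: MeetingState, turn: dict) -> MeetingState:
    """Fold one dialogue turn into the running state; a real agent would make
    several predictions (people, time, intent) for each of these fields."""
    state.participants.update(turn.get("participants", []))
    state.proposed_time = turn.get("time", state.proposed_time)
    state.confirmed.update(turn.get("confirms", []))
    return state

# The agent loops over turns and only executes once the job is concluded.
state = MeetingState()
turns = [
    {"participants": ["rob", "dennis"], "time": "Tue 10:00"},
    {"confirms": ["rob"]},
    {"confirms": ["dennis"]},
]
for turn in turns:
    state = agent_step(state, turn)
if state.concluded():
    print(f"Booking {state.proposed_time} for {sorted(state.participants)}")
```

The point of the sketch is the loop: a recommender returns after one call, while the agent must carry state across turns and decide for itself when the job is concluded.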

 

Rob Stevenson  3:26  

So that predictive and recommending piece, do you believe it makes up a significantly large swath of the AI landscape at this moment?

 

Dennis Mortensen  3:34  

Yes, almost all of it. And there's nothing negative in that; the whole umbrella of AI is really just one big prediction engine. And it's fantastic, because it's been doing very well over the last, short of a decade, where it's accelerated dramatically. There's a lot of pushback, positive pushback, in the industry in general on whether we are on the right path, whether we're on some sort of path where we hope to create this AGI come the end of it. On this path we can almost decipher any image into a set of objects, and we can almost decipher any text into an idea of what comes as the next word, the next sentence, the next paragraph. So we've certainly moved forward aggressively. Whether we're on the right path towards some all-knowing machine at the end, that's a different question. As a computer scientist, I've been very positively surprised by what we've been able to do over the last decade, certainly if I look back on what we were able to extract when I took my degree 23-some-odd years ago.

 

Rob Stevenson  4:52  

Is the reality of a lot of AI as a predictive engine based on market needs? Or is it based on this notion that humans are also predictive engines, and we're trying to replicate human thought?

 

Dennis Mortensen  5:08  

Good question. I do think many of the jobs you and I have as humans are jobs where we're given a set of unstructured or semi-structured inputs, and then the firm hopes we're clever enough to take those inputs and turn them into an answer and a set of actions. A lot of those inputs are not constant or deterministic, and you have to apply a little bit of, sometimes, common-sense judgment. This is really a prediction you make; it's just that we're very good at making predictions on very vague data. As humans, we're much better than machines many times. For some of those predictions, though, we can increase the accuracy, increase the speed, and decrease the cost by turning them into a machine prediction instead.

 

Rob Stevenson  6:04  

Going back to the idea of this Venn diagram, with the scheduling service at x.ai overlapping with the predictive nature of previous tech: why is that not a circle, I guess, is my question. Where does the overlap stop? And where does prediction as an approach stop being useful?

 

Dennis Mortensen  6:22  

So let me tell you about the scary parts of what we did, which is that we made predictions on all sorts of things. First of all, we can all just agree on the fact that language is not a solved science. So why would anybody start a venture that hinges on the ability to predict meaning from a set of unstructured characters, words, and paragraphs? That seems like crazy talk. Now, we thought and assumed that we might be able to carve out a corner of the language universe where we surely would be able to understand language. As a footnote, for anybody who's seen anything of late: we've done some really interesting work as an industry, where you see GPT-3 and similar come out with text so close to something a human could have written that, on a sample of a dozen or more, you might not really be able to say which one Rob wrote and which one the machine wrote. That doesn't mean we've reached the inflection point of knowing exactly what's in the text. It means we can now mimic a response that is to your satisfaction and in line with what was expected; it doesn't mean we understand what was in it. In our particular vertical, where you need to schedule a meeting, it is not enough that I can mimic a response that feels good enough, because I need to actually do the job, not mimic a dialogue. I need to do the job of getting you and me on the calendar, some specific week, for half an hour, on Zoom. That particular job and space required full understanding. To use the self-driving car as an analogy: it needs to take some sort of input and understand the world it exists in. It can't understand half of it. How should it navigate if there's an object in front of you and you don't know whether that's a baby or a plastic bag? I'm not so sure it's safe to continue. So we need to understand all of the objects placed in that universe; obviously not a replica of the real world, but some sort of simplified version of it, so you can navigate it. For us, it was the same. We just thought it was a very small universe. It turned out to be, like many things in startups, dramatically bigger than we had imagined, along with the noise that comes attached, and the fact that people are outright crazy. They seem normal when you look at them, but they are untrustworthy: lying, not that they even know they're lying, but they are, all the time. I'll give you a little example. Rob will shoot Dennis an email at 1 a.m. tonight saying, "Dennis, I've come up with this idea, can we speak first thing tomorrow morning?" Which seems sane, honest, correct. But it's a lie, because tomorrow already started; it's 1 a.m. Most humans will use the word "tomorrow" all the way up to the point where they go to bed. Now take 10,000 instances where we say something as humans which is technically incorrect. I'm not really in the business of correcting you; I'm in the business of getting this on the calendar. So it was just dramatically harder than we had imagined. That's the one part. The other part was scary as well.
And again, I'm not suggesting we had a challenge on par with the self-driving car, but it certainly had very many similarities. The car derives many of its next predictions from some of its prior predictions, meaning that you rely on everything you just did being true, or true-ish, and if not, it can very quickly spiral out of control. We saw the same, which is that if I'm in a conversation with five participants all at once, and we're on turn four in the dialogue (and remember, each turn is a whole host of predictions: time predictions, location predictions, people predictions, and intent predictions, on five people), if one of those has spiraled out of control and I believe it's true, then the whole conversation can very quickly go sour. That was something where, perhaps, if I could do it over again, I would immediately have cut out what we defined as multi-participant meetings, because they get almost exponentially harder with every person added into the dialogue, and just started with: let's see if we can get Rob and Dennis together. I thought I had already compressed the problem, but that was not the case. Sorry, I'm turning this into a little therapy session for Dennis.
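To make the two failure modes above concrete, here is a minimal, hypothetical sketch. It is not x.ai's actual code; the 4 a.m. cutoff and the 98% per-turn accuracy are assumptions chosen purely for illustration:

```python
from datetime import date, datetime, timedelta

def resolve_tomorrow(sent_at: datetime, bedtime_cutoff_hour: int = 4) -> date:
    """Humans say 'tomorrow' until they go to bed, so an email sent at 1 a.m.
    usually means the day that has already started, not the day after.
    The 4 a.m. cutoff is an assumed heuristic, not a known x.ai rule."""
    if sent_at.hour < bedtime_cutoff_hour:
        return sent_at.date()                        # "tomorrow" is today, calendar-wise
    return (sent_at + timedelta(days=1)).date()

print(resolve_tomorrow(datetime(2022, 3, 1, 1, 0)))  # 2022-03-01, not 2022-03-02

# Why multi-participant meetings get "almost exponentially" harder: if the
# predictions behind each per-person turn are individually 98% reliable, the
# chance that an entire conversation stays on the rails shrinks fast.
per_turn_accuracy = 0.98
for people in (2, 5):
    for turns in (1, 4):
        p_ok = per_turn_accuracy ** (people * turns)
        print(f"{people} people, {turns} turns: {p_ok:.0%} chance every prediction held")
```

The numbers are invented, but the shape of the curve matches Dennis's point: adding participants multiplies the exponent, not the base.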

 

Rob Stevenson  11:06  

No, that's okay. We can go right into your relationship with your parents if you want to continue down that thread. But it's interesting when you mention the notion of a human knowing what "tomorrow" means, even if they send you an email at 1 a.m., but a machine not. So it's not enough to teach a machine a language; it needs to understand the ways in which humans use it incorrectly.

 

Dennis Mortensen  11:29

Yes. And here's another bend to that challenge: the expectation of a machine response to the same question is different to that of a human response. Here's just a quick example. We spent perhaps nine months on a specific feature that allowed for some common sense, or really elasticity, in your scheduling hours, as any human EA would apply. Say Rob is in downtown Manhattan and wants to meet up with Dennis, and you're only in New York for one day. I set my scheduling hours to end at five, but obviously, since we only meet every three years, I will stretch those hours to have you come by the office at 5:30. No biggie. I would just trust my human EA to have that in mind, and whenever she or he executed on it, it would be all good and kosher. So we spent a lot of time trying to figure out how to make a prediction that suggests it is now allowed for me to stretch your scheduling hours. And it took forever. I can't do it every time; that means you have no scheduling hours. I can't never do it; that means we have no elasticity. I can do it when it seems acceptable; that's a very vague term. So we put it into production, and we actually thought it did something very similar to a human EA, which is, once every blue moon, I would stretch the hours ever so slightly. And it worked. In my words: fantastic. However, none of the users really understood it. As in, they understood the concept, but they would immediately go, "What the hell? How much money did you need to raise to do an if-then statement? Because 5:30 is later than five, and I set it to five." They just went all in. So we went back, in hundreds if not thousands of support tickets, explaining: your friend Suzanne is in from Dallas for one day, and we assumed, given the delta between this and the last meeting, or some elements of the dialogue, that you would be willing to stretch it, so we've done so. "Ah," they would tell support, "of course I want to do the meeting." So it was something we solved from a science point of view but then had to apply a lot of design thinking to, almost all of it. In the end, we ended up doing something which I was personally almost against for the longest period of time, which was to start communicating not as a human but as a machine. That means we would totally over-communicate; we would actually attach justification to some of our decisions, so we were not a black box. When you work with humans, they're mostly black boxes, as in, they won't provide much justification for their actions unless those are out of the norm. So we started to inject that, and it did wonders. But that was not really a science challenge; it was a machine-agent-to-human-agent dialogue challenge. I'll give you another one, and sorry, then I'll leave you here. We also found, and again I naively assumed early on, that the inflection point for our success would be us being as good as, or just on par with, the human assistant. Human assistants existed in our universe: I set up a meeting with Rob, Rob has an EA, and that EA communicates with the system. So we could see their level of accuracy, their response speed, their understanding of time zones, and all sorts of things. And the interesting thing is, we knew exactly how well they did, and we knew exactly how well the machine did. Two things came out of this. One, there are actually a couple of studies.
For the geeks out there, if you want to have a look: I believe it was on stock trades, where they had a machine do a set of trades with a set number of built-in mistakes, and had a human do the same. The willingness to forgive just didn't exist at the same level for the machine. Which is interesting, because people form almost immediate expectations of superhuman performance once you attach a machine to something. Where yesterday, with humans, I was willing to accept this level of mistakes, I'm not anymore, not since you introduced the machine that now carries superhuman expectations. That was the first surprise, which made it harder. The second one, which is more difficult to explain and design around, is that humans make, obviously, human mistakes. As in, you forget that over the summer there are a few weeks where there's only four hours between London and New York; dammit, yes, daylight saving works a little differently in England versus the US. We all make that mistake once a year, and that's fine, because it's a human mistake that I both understand and can empathize with. Machines will never make that mistake; they'll have all the time zones memorized for the next century into the future. That is the easy part. But they will make machine learning mistakes that you don't understand and can't empathize with. That was a forever struggle to explain to non-technical people, or to people not even interested; they just noticed the fact that you did something they didn't like and didn't understand and couldn't empathize with, because in their mind this was the simplest of requests. And we would try to explain. Well, we operated with two types of temporal expressions, what we called negative time and positive time. Negative time is you distinctly telling me you can't do Tuesday; positive time is you either saying nothing or giving me a set of windows I can use outside of your preferences. We had a lot of challenges on negative time, which is where you come up with this long sobbing story about having to do something, and we have to decipher that little essay of yours into, in effect, "You can't do Tuesday." Because you wouldn't write, "Rob can't do Tuesday, can we do X, Y, and Z?" No, it was always one of, "Hey, my kid has a thing at school, and I can't really miss the play," some long story which I, as a human, know and can see means you can't do Tuesday. But you do understand it's hard for a machine to decipher that this means "not Tuesday," right? So it's just a very interesting space to be in.
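Dennis's negative-time versus positive-time distinction is essentially a data-modeling decision, and a toy version makes it easier to see. The keyword matching below is a stand-in (the real problem was a hard ML task, as he says); only the shape of the output is the point:

```python
from dataclasses import dataclass

WEEKDAYS = ("monday", "tuesday", "wednesday", "thursday", "friday")
NEGATIVE_CUES = ("can't", "cannot", "won't work", "have to")

@dataclass
class TimeConstraint:
    day: str
    polarity: str   # "negative" = told you can't; "positive" = offered window

def decipher(essay: str) -> list:
    """Collapse a free-text scheduling message into machine-usable constraints."""
    text = essay.lower()
    constraints = []
    for day in WEEKDAYS:
        if day in text:
            # A long sobbing story about a school play still has to collapse
            # into the fact the scheduler can act on: "not Tuesday".
            polarity = "negative" if any(c in text for c in NEGATIVE_CUES) else "positive"
            constraints.append(TimeConstraint(day, polarity))
    return constraints

print(decipher("My kid has a thing at school Tuesday and I can't miss the play."))
# [TimeConstraint(day='tuesday', polarity='negative')]
```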

 

Rob Stevenson  18:24  

It all boils down to: humans have empathy for other humans, but not for machines. And what's interesting is people want disruption, but only if it works perfectly. I had an experience like this where I was in a Starbucks in San Francisco, probably five or six years ago, right around the time Apple Pay was coming out. Someone at the front of the line was trying to use their Apple Watch to do the Apple Pay, and it was taking a minute, not a long time, but longer than it would take to swipe a credit card or pay in cash. And the Starbucks crowd turned on this woman and started booing. You'd think if there was anywhere on the planet where people would understand, it would be San Francisco. But no, people were miffed. They're like, "No, just use your credit card, and let's keep this moving." So there is this expectation that it has to work perfectly. Related, though, back to the empathy thing: is it reasonable to ever expect humans to extend the same empathy to a machine that they would to another human being?

 

Dennis Mortensen  19:20

Perhaps the answer is yes. Let me just quickly poke at what you said before, which is perhaps subtle but important. Sometimes we'll be exposed to technology that doesn't work perfectly just yet, and we'll struggle a little bit and be willing to overcome it. But as soon as you introduce things like embarrassment, pride, and what have you into the equation, meaning that you're actually at Starbucks in front of other people and could be slightly embarrassed if you're not a geek, it becomes exponentially harder to have people accept your mistakes, because it hurts so much more. We were in a similar space when we made mistakes. It's okay if it's just you having to crop that image in Photoshop again: ah, dammit, I should have known it doesn't do the border, or whatever. No, we made the mistake in front of your guests; you now look foolish. And that was just the worst place to be. We tried to figure out what the curve looked like: how many good interactions do I need to build up enough of a relationship that I can survive a negative experience without it turning into a churn event? And the tolerance dropped dramatically as soon as it was not just Rob who was affected, but people outside of that. Now, going back to your question: is it reasonable to expect some form of empathy for the machines? There's a study, and I actually don't remember the study itself, just the write-up in The New York Times, that suggested it is not just the machine which is affected by your rudeness, or whatever it might be; perhaps who is changing in all this is you. So it's not that we need to treat machines nicely because they care; they don't. We should treat machines nicely because, if we don't, we might change along the way. And that was interesting. As in: Rob used to be nice, but after he bought that Alexa, he's just a little less nice, a little more crude, bottom line, an asshole. He changed. It's not that dramatic, but they did do a test, again in this particular study, and people should look it up, where they did it on kids. And that part was actually just a little sad when you read it. Of course, in the very young version of you, you don't yet know what is right or wrong or how to behave. The real world does a very good job of telling you: another little kid will punch you in the face, or tell you to your face exactly how out of line you were, and you learn, I probably shouldn't have said that, because when he told me, it hurt. Machines, not so much. That means you could actually end up morphing kids into not-so-healthy versions of themselves in the future if you don't have this in mind. And I remember, and I don't know whether they kept it or whether it was just a test, that Alexa had a mode where it wouldn't execute the command if you made it as a rude ask. I think it was some sort of kids mode, and I'm not sure whether they kept it or not, but I certainly took a mental note of it, thinking that is probably the right thing to do. As in, I don't want my kids barking 50 requests at Alexa all day long and just becoming ever more aggressive. There needs to be some sort of response where: well, that didn't work, so if you want to play that video or hear that song, ask me nicely.
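The "ask me nicely" mode Dennis recalls (Amazon reportedly shipped something similar for kids under the name "Magic Word," though the details below are assumed rather than documented) reduces to a small gate in front of command execution. A minimal sketch:

```python
POLITE_MARKERS = ("please", "could you", "would you")

def handle_request(utterance: str, kids_mode: bool = True) -> str:
    """Gate execution on polite phrasing when kids mode is on."""
    polite = any(marker in utterance.lower() for marker in POLITE_MARKERS)
    if kids_mode and not polite:
        # Refuse to reinforce the rude ask; prompt for a nicer retry instead.
        return "That didn't work. If you want that song, ask me nicely."
    return f"Playing: {utterance!r}"

print(handle_request("play that song"))          # nudges toward politeness
print(handle_request("please play that song"))   # executes
```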

 

Rob Stevenson  23:07  

The reason that I wouldn't be rude in a request to you, Dennis, is because I want you to do what I want, right? I want you to comply with the request, which an Alexa will do no matter what, except in the case of that setting. But it's also because I know you have feelings and thoughts and emotions, and if I'm rude to you, I don't want to ruin your day. I don't want us to get off this call and then Dennis goes and complains, "I was with this guy who was a total jerk, now I'm all worked up," blah, blah. I don't want to do that to you; I don't want to inflict harm on your day and your feelings and what have you. A machine doesn't have that going on. So I'm polite to machines because they're learning from us, and I want them to be polite. But I can understand why someone else would be like, "Look, I'll give a person space and forgiveness and empathy that I won't give to machines." Do you think that's a part of it?

 

Dennis Mortensen  23:54  

Most certainly. And somehow, in many of the models we apply to the real world, we make the assumption that people are somewhat rational, but they're not; I think we figured that out many moons ago. I just wish, though, in our implementation of these agents, that we'd be a little bit more rational, which is: I want a job done at a level of accuracy which I find acceptable for the price I'm paying for the job. Rob hires Dennis to do a job; you have some expectation of my ability to do that job, I do it to the best of my ability, and hopefully it fits within what you imagined. And if not, well, then you go find another one who can actually do the job. I would just hope, if we are to implement all these intelligent agents in the future, which seems very likely, that we understand they don't necessarily all need to be superhuman. They need to do the job which I've hired them to do, at the level of accuracy which I expect, for the price that I'm paying, versus the kind of superhuman expectation where, well, you might actually be worse off because of it if you're not willing to accept that it will make these mistakes. This is not a justification for my prior venture here, which it might sound like. But I do like the fact that, pick a number, 18 out of 20 requests I make to Alexa to play a song from the Spotify app go as I expected, and there's two where, that's not what I asked. I still use it, though, because I think that is a fair price to pay given the cost of error. Of course, there are other domains where the cost of error is much, much higher.
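Dennis's "18 out of 20" calculus is just expected-cost arithmetic, and it may help to see it written out. All the numbers below are made up for illustration:

```python
# Observed hit rate: 18 of 20 requests go as expected.
accuracy = 18 / 20

# Invented costs, in arbitrary "annoyance units": a wrong song is cheap,
# while walking over and queuing the song yourself costs you every time.
cost_per_error = 1.0
cost_of_doing_it_manually = 0.4

expected_machine_cost = (1 - accuracy) * cost_per_error   # 0.1 * 1.0 = 0.1
print(f"machine: {expected_machine_cost}, manual: {cost_of_doing_it_manually}")
# The machine wins despite being visibly imperfect. In a high-stakes domain,
# cost_per_error explodes and the same arithmetic says: don't deploy yet.
```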

 

Rob Stevenson  25:38  

Right, right. The higher the stakes are, the more important accuracy is. Accuracy in a recommendation engine on Netflix is not so important as the example earlier, where LIDAR has to decide if this is a baby or a bag of garbage.

 

Dennis Mortensen  25:51  

I don't want to push back, but just sit on that for a second. You and me, if we were completely utilitarian, would assume that any number of deaths less than 40,000, or whatever the number is for this year in traffic deaths, is a good one, and we should immediately flip over to self-driving cars. But I think both you and me know that's not going to happen. It needs to be a number much, much lower. There will be an unwillingness to accept, "Well, they kill about 21k a year, are we okay with it?" Probably not, even though Jane and John Doe kill 40k. I can't see this playing out in a way where society finds that acceptable, even though we probably should.

 

Rob Stevenson  26:35  

So is this just a misalignment of expectations between the consumer and the ability of the technology? Meaning, we expect things to be superhuman, as you put it, really much more advanced than a human, and if they're not much more advanced, then we say we shouldn't bother? Or is this a market problem? What is the issue here, do you think?

 

Dennis Mortensen  26:54

The version where we have a piece of technology that ends up killing somebody is the most dramatic version we can come up with, but it's one that is being tested as we speak, so it's not completely made up. I'm not sure what the issue is. But I do see it play out in many places, where I wish some part of our education, wherever that happens, would have us better understand how software fits into our lives. A generation before us didn't really have to bother with that, because most of their life, private and work life, didn't have much of an attachment to software. Today, though, and especially for our kids, those things are completely intertwined, and you must start to have some sort of attitude towards: when do I deploy my software? When do I terminate it? When is this a job for a human, and a human only? When is it a job for a machine, or a machine only? When can they share it? We don't have a good raw intuition about when we should choose not to use technology and when we should be accepting of the errors. That intuition just seems off to me.

 

Rob Stevenson  28:09  

Yeah, that makes sense. I think the difference is between expectation and what people will pay. When you gave the example of 18 out of 20 songs correct: that feels valuable for the cost of using it, versus walking over and selecting the song yourself. And I think that sort of approach can be applied to any other use case, where the accuracy and the performance have to be commensurate with the price. Right now, for self-driving cars, for example, it's a very high price. Is it significantly better than a human driving, in your average driver's mind? Maybe not. And even if that's not the case, is it worth me paying an extra $40,000, $50,000, $60,000 for a car? Probably not, I think, is the calculus people are running right now. In the same way as the Apple Watch in the Starbucks: okay, there's social capital at stake here, and it's not much better; it's even a little bit worse. Or even if it's only a little better, I'm still irritated, because it represents a change that I'm not comfortable with, for it having been disruptive, I guess. I don't know. It just feels like the way we approach these calculations is completely unreasonable, frankly.

 

Dennis Mortensen  29:15  

It does. My hope, though, is that there's a generation arriving as we speak for whom that raw intuition is just better. Today, I think I can go back and do a little bit of calculus to come to the same conclusion, so my intuition is a little more fine-tuned. But I do hope there's a whole generation in front of us for whom it just comes very naturally, like many other things in life where we have good intuitions about what the next step is. This just seems a little off-kilter.

 

Rob Stevenson  29:48  

I think you're right. I think we must remember how new this technology is, right? It's not new to the people working in this space, who have been doing it for perhaps 10 or more years and think about it every day, but to the average consumer, it's quite new. And also, I think it's less about AI and machine learning than it is about the way technology is generally accepted. If you look back, sending an email via dial-up was probably not better than sending a fax, and I'm sure there were plenty of people who were like, "Why would I spend six minutes getting my internet on and then typing something out when a fax takes 30 seconds?" And it's like, well, look, an email is better, right? Because you can send it to more people, and there's a record of it, and all the other reasons. But people are resistant to change, and I think that's probably what we're dealing with here, more than, like, fear of some sort of AI being a different sort of technological life form.

 

Dennis Mortensen  30:39

I also think it's because multiple changes are happening at the same time. Many times you'll see a single change, and you can wrap your head around that single change and even force yourself into a new setting, an acceptable kind of setting. This, though, is many things at once. One of the things that we assumed would be true by 2022, certainly when we started, was that the conversational UI, if you think of it as a UI paradigm, would be on equal footing with some of the prior UI paradigms. As in, I took my CS degree on the command line, my mom kind of grew up on the graphical user interface, and my kids grew up on the mobile UI, if that is a distinct UI compared to the graphical one. And then we have this conversational UI that I assumed, when we started in 2014, and many others assumed as well, would by 2022 be a large part of that pie chart of the UIs we use on a daily basis. Not that it would overtake any one of the others; just like today, you probably touch two or three of these UI paradigms. But voice would have been not a single digit, which I think is where it exists today, but a double digit. We would often speak to our computers. But not really: if you sit on a Mac, you probably didn't use Siri on your Mac today or yesterday, even though many things are faster if you use voice. This is not scientific, but I did a little test, because it annoyed me, and I shouldn't have been annoyed, that one of my kids used voice for their calculator. I still, to this day, and you're probably the same, use my fingers for the calculator. It is not faster, not even close; it is just way faster to use voice to do basic calculations when you're doing some math homework and need some of the parts. We did the little test with X number of calculations: I typed them out, they voiced them out. Not even close, and I type fast, right? This was a competition I wanted to win, but it was not even close. Even upon learning that, when I used my calculator two hours ago, I typed it in. Something in my mind had me not being willing to just ask it.

 

Rob Stevenson  33:04  

We're creatures of habit, even a technological man like you. And I'm sure your daughter, after the competition ended, looked up at you and said, "The future is now, old man," or something like that. Dennis, this has been a fantastic chat. We're creeping up on optimal podcast length here, so before I let you go, I want to ask you to maybe prognosticate a little, or just share with me: what in this space of AI is most exciting to you when you look at some of the work being done, whatever the use cases? Are there any research papers or reports that have come out recently that make you truly excited about the way this technology is being deployed?

 

Dennis Mortensen  33:37  

There are so many good things going on as we speak. But perhaps as a general theme, I am very excited about our current ability as an industry to generate text. We are surely closer, though still far from truly understanding text. We can do tons of positive predictions on unstructured text and create all sorts of products around them, but our current ability to generate text that holds real meaning is very impressive. We've all been exposed to it, and I don't think we've really seen yet all the wonderful products that are about to come out. Even the little ones that arrived, what, a decade and a half ago, two decades perhaps: autocomplete. It's almost difficult if you don't have autocomplete today; as in, what is this, something is wrong. But autocomplete is just the short version of where this needs to be, and we're close. Now you can even see it in your email: you get half-sentence completion now, versus word-level autocomplete. Why don't we see that turn into full-sentence or full-paragraph completion, where I just describe the parameters? And it's not just for raw text. In code, if you've checked it out yourself, or somebody listening in here has: Copilot, this, call it a plugin or engine, from GitHub, allows you to just describe what you want, either as a one-line comment or a function header, and it writes the function in full, including sub-functions. And it is so good that it is kind of scary. Sometimes I've imagined something, like, I should also make sure I have this edge case in mind, and I write something, and it doesn't just autocomplete on syntax; no, it writes out a whole function where that little edge case I had imagined is rolled out as well. Damn, where did that come from? It just excites me. And again, anybody can try it: just go in and write a little comment, "find first 200 prime numbers," and then tab. Ah, here's a function for that. Oh, it requires a sub-function. Ah, then it writes that sub-function as well. It's just very good. You see the same with some of the copywriting, where if not full blog posts, then certainly paragraphs and headers and what have you. This is very good. That excites me, as you can hear.
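For readers who want to try Dennis's exact experiment, a prompt like "find first 200 prime numbers" typically yields something shaped like the following. This is a hand-written illustration of the function-plus-sub-function output he describes, not actual Copilot output:

```python
def is_prime(n: int) -> bool:
    """Sub-function a code generator would typically factor out."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def first_n_primes(n: int) -> list:
    """Find the first n prime numbers."""
    primes, candidate = [], 2
    while len(primes) < n:
        if is_prime(candidate):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_n_primes(200)[:10])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(len(first_n_primes(200)))   # 200
```

Note the structure: the generator writes `first_n_primes`, notices it needs a primality test, and emits `is_prime` as well, which is precisely the behavior Dennis calls "kind of scary."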

 

Rob Stevenson  36:34  

I love it. Dennis, this has been a fantastic conversation. Thank you for being with me here and sharing your views and your experience. I've loved chatting with you today.

 

Dennis Mortensen  36:41

Time well spent. Cheers, mate.

 

Rob Stevenson  36:44  

How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, ecommerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.