How AI Happens

Laurence Moroney: Google's Lead AI Advocate

Episode Summary

Laurence Moroney is an industry veteran who has authored several books on AI development, taught a series of AI/ML MOOCs, and even advises the British Parliament on its AI approach. His mission at Google is to evangelize the opportunity of AI and work towards democratizing access to the development of this technology. Laurence joined the podcast to discuss the nature of AI hype cycles, how AI practitioners can navigate them within their own organizations, and some of the amazing opportunities coming into play when access to AI & ML is made global.

Episode Notes


Pre-Order Laurence's new book, AI and Machine Learning for On-Device Development: A Programmer's Guide

Study with Laurence on Coursera

Subscribe to the TensorFlow YouTube Channel

Episode Transcription

0:00:00.0 Laurence Moroney: And then when they fed this back through an attention mechanism, they realized they didn't build a camouflage detector, they built a cloudy sky detector.

0:00:12.2 Rob Stevenson: Welcome to 'How AI Happens', a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists and machine learning engineers, as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we are about to learn how AI happens.

0:00:42.0 RS: You don't have to be an AI expert to be skeptical about all the hype surrounding artificial intelligence and machine learning. Every company claims they have it, every sales deck mentions it, and worse, the media act as if the rise of the machines is happening some time in late 2022. But amidst this hype cycle, those in the know understand the opportunity has never been greater. Enter Laurence Moroney. He's an industry veteran who has authored several books on AI development and even advises British Parliament on their AI approach. His mission in his role at Google is to evangelize the opportunity of AI and work towards democratizing access to the development of this technology.

0:01:25.3 LM: I'm an AI lead at Google, so I lead the developer advocacy team, and our job is really to help inform and inspire the world around machine learning, artificial intelligence, deep learning and all that good stuff. So working with developers, working with communities, universities, all of those kind of folks to really help scale out the message and the opportunity that's there with AI.

0:01:47.4 RS: Laurence joined the podcast to discuss the nature of AI hype cycles, how AI practitioners can navigate those cycles within their own organizations, and some of the amazing opportunities coming into play when access to AI and ML is made global.

0:02:01.8 LM: As for my background, I was doing developer advocacy for a few years prior to Google, at places like Microsoft, a wonderful start-up in Israel called Mainsoft, and at Reuters, the news agency, doing an internal advocacy role there, and prior to that, the typical software engineer path, all of those kinds of things. Although my background at school was actually physics, my degree was in physics, but I came to the realization that nobody hires physicists, or very few people hire them, and I guess I wasn't good enough a physicist to be hired, so I ended up in this wonderful field instead.

0:02:37.9 RS: It's interesting, you're the second individual I've spoken to who got their start in physics and now has a career in AI. Is that a natural progression? What do you think is the link there?

0:02:48.2 LM: I honestly... I don't think there is a natural progression, it's probably just a happy coincidence, and maybe you're over-fitting on your audience. Sorry, AI joke there. For me, my path to AI actually came... It was really interesting, 'cause when I first graduated college as a physicist, it was in the UK, and it was in the middle of the worst recession that they had had since World War II. The current one, obviously, 'cause of COVID, is even worse. But back then, this was a pretty bad one, there were no jobs or anything. And the UK government launched an initiative in 1992: they were gonna put together a cohort of 20 people to become AI specialists who could maybe form the backbone of trying to help industry through AI and all of that kind of thing. And they needed people who were smart but unemployed, and I at least fit one of those criteria: I was unemployed. We did this battery of tests, it was like one of those kind of strange movies, and I was accepted into the cohort, which was really, really cool. And then I guess I got bitten by the AI bug.

0:03:53.7 LM: But in 1992, trying to do any kind of AI program was intensely difficult, it really didn't have any practical use. We were learning languages like Prolog and LISP, and there was no industrial use for them, but there was some really fun academic stuff that you could do. In the end, the program failed completely, but the potential was there, and I gotta give credit to the UK government for figuring this out back in 1992. They were a little bit early, but it was really cool that they did it. It's funny that recently, in the last couple of years, I've been doing briefs to the UK government around AI, and I was like, "Hey, do you know about that program?" And of course, the MPs who did it are all long gone. The current ones were like, "Did we really do that? That's awesome." I guess that's what got me bitten by the bug, and that led me down a career of programming and software engineering, to get me where I am today.

0:04:45.9 RS: I'm interested to hear that you are spreading this message of the opportunity of AI, but then you also see all of these companies who are sort of saying they have AI or using AI in their messaging. Is there a gap of actual technology there? What is the difference between the reality of the technology and maybe the hype surrounding it?

0:05:04.7 LM: Yeah, it's a great question. And the fact that there are so many people doing this and waving around the AI magic pixie dust, hoping for customers or VC funding... If nothing else, that is a signal that this technology does have legs. The question is, does it get lost in the hype cycle or do we bust out of the hype cycle and start doing something really interesting? I always like to talk about Gartner's hype cycle curve, where you start with the peak of inflated expectations, and then you drop into something called the trough of disillusionment, and once you're in the trough of disillusionment, that's when you can really understand what the technology is, and then you start climbing up through the plateau of productivity. And the kind of behavior that you're talking about just means that we're on the wrong side of this peak of inflated expectations at the moment. Part of how I'd like to describe my job is doing some quantum tunneling through that peak to end up in the trough of disillusionment. So I'm a professional disillusioner.

0:06:02.9 LM: And then once you get into that and you understand what the technology actually is and what it does, then you can start being really useful with it. Obviously, you can look to the past to be able to predict the future. And in my career, there have been two massive tectonic shifts in computing. The first was really the widespread advent of the web and internet technology. The second was the smartphone. And if you think about it, exactly the same thing happened in both of those cases. I'll talk about the smartphone, which is the more recent one. So the hype cycle at the time was like, "Throw away your desktop PC, throw away your laptop. You'll be able to do everything on your phone." And it's like, "Forget about office suites, forget about programming environments, all of those kinds of things. You're gonna get your phone, you're gonna plug it into a station on your desk and a big monitor will magically appear and it'll change how work is done." Well, that didn't happen. That to me was a great example of the hype around the smartphone. But the smartphone was still a massive revolutionary technology that created a tectonic shift in the industry. I saw a stat that the largest creator of jobs in Western Europe during COVID was the smartphone ecosystem. So not just people building smartphone applications, but people using them, and all of the stuff around that, like delivery services and all that type of stuff.

0:07:23.8 LM: So we could see, that revolution started in 2007, and even now, 14 years later, the economy is benefiting greatly from it. The web revolution, the same thing: there was a whole ton of hype around the web, every shop in existence would go out of business, libraries would close. There was disruption and there were changes as a result of the web, but of course there was, I believe, an overall net gain. So when you start seeing these kinds of things, when the hype first came in, the people who were able to see through the hype and do something reasonable and productive when they fell into the trough of disillusionment created whole new industries. Google came out as a result of the web, as did Amazon and Facebook. Apple is the highest market cap company in the world right now, and that came as a result of the smartphone revolution. So there's so much that can happen when you can understand the actual limitations, start building to them and then rise up through that plateau of productivity, as it's called in the Gartner Hype Cycle.

0:08:28.1 LM: And that's really what I'm here to do, that's my role at Google, is to help people who are technically savvy to understand, "Here's the possibility of things that you can do. Here's what you need to communicate within your business." So when your product managers or your CEO want to wave that AI magic pixie dust around, you can be the person who's got the expertise, who's able to say, "I know this domain, and here's where AI can be used in this domain for real." And it might be nice to attract attention through marketing or through VC, but when you build a real product and you start building a real market around that, that's when the business can take off.

0:09:09.7 RS: So if I'm an AI practitioner and I am contending with the hype around AI, or the example you gave of the CEO who's whiteboarding, "Can we do this with AI?", how can I level-set expectations? There seems to be a little bit of education necessary to make sure that people are steeped in reality when it comes to, "What can this technology do? And what can you reasonably expect within your organization?"

0:09:34.5 LM: Yeah, I think effective communication is the number one tool, managing upwards like that is the number one tool. I've had a number of those conversations with folks who just thought that they could wave their arms and say, "AI," and then find a programmer who could build the AI for them as they envisage it. But then you kinda just talk them through, "Well, this is how it actually works, this is what it actually is. And if you wanna reach these goals, here's the kind of work that you would need to do to be able to reach them." And often, it's setting lower goals and having a plan to be able to reach those lower goals, and then using that as a plateau to go further and further and further. And I find in general, in CEO speak or CXO speak, they like that. Instead of a yes person going, "Yes, we can do whatever your vision is," that kind of thing, to actually be able to say, "Well, here's a plan for how we can get to a very profitable future. It may not be the vision that you have, but it's concrete." And often, the folks at that level see themselves as the inspirational folks who get the plan moving in that direction by setting the goal and setting the long-term vision.

0:10:45.8 LM: And when somebody can communicate up like that to say, "Well, we can't reach the exact nirvana that you're specifying, but we can build great products to do A, B, C and D, not all the way A through Z, and we can do it in this time frame," having that level of expertise to be able to speak to that comfortably and realistically ends up being, I think, a great gift for everybody. If we go back to the conversation of what AI is and what AI isn't, I always like to draw this diagram where I say, "Okay, here's traditional programming," and I draw traditional programming as a box, and that box is saying, "You're putting rules in, you're putting data in and you're getting answers out." This is what programmers and the software department in your company have been doing since the dawn of software time, and what a programmer does is figure out how to express those rules in the programming language, so the computer can do the work.

0:11:36.2 LM: So for example, a very simple thing, like in financial services, there's a ratio called the price over earnings ratio, that's often a good one to determine the value of a company or one of the signals to determine the value of a company. And that's a very simple rule. Get the data of the price, get the data of the earnings, divide one by the other, and then you get an answer. There's obviously far more complex ones than that, but I like to use that one as a simple example. And you hire programmers because they know how to express those rules in a programming language and run them in an infrastructure. In the machine learning and AI world, I flip the axis around on that box. So instead of you trying to figure out the rules, you give the machine the answers and the data, and you have it figure out the rules. So for something like price over earnings, it's overkill, you don't need to do it. But what if there are patterns in your data that you don't see?

0:12:26.5 LM: There are things about this company, and you can get a wealth of data around a company that you're doing an analysis on, and you can see that this company has done extremely well in the stock market, but you have no idea why, and this company has done extremely well, and you have no idea why, and then this bunch has done badly and you've no idea why. So then you have the answers: they've done well, they've done badly. You have the data, and the idea behind machine learning and AI is that you can build a system that can do that pattern matching of the answers to the data and figure out what the rules are to be able to do that.

0:13:01.1 LM: So for you to do that effectively, you need good data scientists. It's not just that you get a shovel and you throw the data into the machine and something magic happens. You have your data scientists try to make this as efficient as possible by picking the columns in the database or maybe doing feature crosses on those columns, where you multiply this one by that one, that kind of thing. And in the same way, your coders today are not just typing on a keyboard so that stuff magically appears, they are figuring out the rules, they're figuring out how to scale them. And that's really where the magic of a good data science department applies. You've got skilled people who know the domain data, who know how to build models, so that the data is being used efficiently, so you can train a model in a couple of hours instead of a couple of decades, that kind of thing. So that's where those folks, beyond being trendy, really, really can show massive value for the company. And I'd draw the same analogy as getting a programmer who can build an effective program that runs your business in a day or a week, as opposed to an ineffective programmer who takes years to do the same task.

0:14:07.6 LM: The same kind of thing can be applied with data scientists, that they can figure out which parts of the data to shovel, which parts not to shovel, they can figure out how to label those answers and all of those kind of things, so that the machine learning engineer can do their job effectively.
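To make the "flipped box" picture above concrete, here is a minimal sketch in Python with TensorFlow. It is illustrative only, not from the episode: the toy data and its hidden rule are invented for the example. The first function is the traditional-programming path, where the programmer writes the rule; the second block hands the machine data and answers and lets it infer the rule.

import numpy as np
import tensorflow as tf

# Traditional programming: the programmer expresses the rule, feeds in data, gets answers.
def price_over_earnings(price, earnings):
    return price / earnings

# Machine learning: feed in data and answers, and let the machine infer the rule.
# Toy data whose hidden rule is y = 2x - 1 (purely illustrative).
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]).reshape(-1, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=500, verbose=0)

print(price_over_earnings(150.0, 6.0))       # rule we wrote: 25.0
print(model.predict(np.array([[10.0]])))     # rule it learned: close to 19.0

The single-neuron model is deliberately tiny; the point is only the reversal of rules and answers, not the architecture.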

0:14:21.7 LM: The way I generally like to define AI itself is, when you make a machine that responds the same way that an intelligent being would respond. Computer vision is a good example of that: if I show you a picture of a cat... If I show you a picture of my pet, you would say, "That is a dog." Show a computer a picture of my pets, prior to AI, machine learning, and deep learning, and it would see a whole bunch of pixels, with no parsing of the content other than white pixels, blue pixels, those kinds of things. When you start using machine learning and deep learning to train a computer to understand the difference between a cat and a dog, and then I show it this picture of my pets, the computer will say, "That's a dog." Now the computer is responding in the same way as an intelligent being would respond, and that to me is what artificial intelligence is all about. So you play it a sound, and instead of it saying, "Here's a number of audio levels," it's actually able to determine your speech and to determine what you're saying, the same way as an intelligent being would. That to me is artificial intelligence.

0:15:31.6 LM: There are lots of ways that you can get there. Machine learning and deep learning are probably the most efficient ways for things like computer vision, audio processing, text processing and those types of things. So in terms of what it is and what it's not, it's not this magic thing where, like in a Dilbert cartoon, you can just say, "Let's put machine learning and artificial intelligence into our product and we get an upgrade." It doesn't really work like that.
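As a small illustration of "responding the way an intelligent being would," here is a sketch using a stock pretrained ImageNet classifier from tf.keras.applications. The file name my_pet.jpg is hypothetical, and the exact label and score will depend on the photo.

import numpy as np
import tensorflow as tf

# A pretrained classifier: show it a photo and it answers with a label such as
# 'golden_retriever' rather than just reporting pixel values.
model = tf.keras.applications.MobileNetV2(weights='imagenet')

def label_photo(path):
    img = tf.keras.utils.load_img(path, target_size=(224, 224))   # MobileNetV2's input size
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x)
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0]

print(label_photo('my_pet.jpg'))   # hypothetical file; e.g. ('n02099601', 'golden_retriever', 0.87)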

0:15:58.7 RS: The training is in the interest of an inference. When your technology can make an inference, an accurate inference, it has mimicked human cognition, right?

0:16:06.5 LM: Yeah, exactly. And the nice thing is that it can even go beyond human cognition, and let me give an example of this that blows my mind. We worked on a project for diabetic retinopathy at Google, where diabetic retinopathy is the world's leading cause of blindness. And the thing about it is that it's easy to diagnose and it's easy to cure with early diagnosis. India has the world's second largest population, but it has a shortage of over 100,000 qualified ophthalmologists. So we worked with doctors and hospitals in India to gather lots and lots of retina scans... We'd label these retina scans, data plus answers. We'd label these based on five different buckets, from no diabetic retinopathy all the way up to severe, and trained a machine learning model on this, to be an artificial intelligence that responds the way an ophthalmologist would. And the publications that we did in various journals showed that it was at least equivalent to a qualified ophthalmologist and often better. And so, that's the first mind-blowing part.

0:17:13.5 LM: But then the second mind-blowing part, and the one that really hooked my interest in this, was when a scientist within Google was looking at the data and realized, we don't just have labels of the diagnosis, we also have labels of the person's birth gender, or the person's age, or the person's blood pressure.

0:17:33.6 LM: Now, an ophthalmologist can look at the scan of an eye and see blood clots and determine whether they have diabetic retinopathy or not, but an ophthalmologist can't look at that scan and pick their age, or pick their gender. So what if you have all of this data, you have your answers, you have your data, what if we could feed this into a model and do it? And it ended up, they trained a model that was 98% accurate in picking the assigned gender at birth, which is as good as, if not better than, the average human, but obviously much better than a human looking at a retina, that kind of thing. That would be 50-50, a coin flip, but this was 90% accurate, and it was also able to predict their age, with a mean average error of about three years.

0:18:19.1 LM: And a few times in the past, I've told this story to an audience and I've asked the audience to guess my age, and on average... The mean average error from the audience was way more than that, and they're looking at me, they're looking at my gray hair, they're looking at my mannerisms, they're not looking at my retina, and they're still getting it even more wrong than this model was, again, looking at the retina. So we talk about human cognition and that kind of stuff, but in some ways, doing this kind of pattern recognition, we can go beyond human cognition, with examples like that one. If you have the data and you have the labels, it's possible now for a machine to do the matching of that data to that label and spot patterns that you as a human wouldn't previously spot, and there are massive, untold opportunities in that. So again, we get down into that trough of disillusionment, and part of that is me saying machine learning is fancy pattern matching.

[chuckle]

0:19:13.8 LM: And that kind of thing. There's nothing magical about it. And then when you understand that, you say, "Well, I have this wealth of data in my business, can I find new business opportunities with this?" And the answer to that is potentially yes, in the same way as that scientist at Google was able to build a system to predict somebody's age from a retina scan, when nobody knows how you can look at a retina and determine an age. From the model that they built, you can now do an audit of that, and there's something called an attention mechanism, so you can see what the computer is paying attention to, to derive what it is in a retina that lets you pick somebody's age. Those are the kinds of things where the brute-force aspect of sheer compute power doing that kind of pattern matching allows you to come up with these new scenarios that will take you up through that plateau of productivity.

0:20:07.3 RS: Yes, so you said it was an attention mechanism? And this allows you to clue in on, these are the variables that it was taking into account to arrive at this insight?

0:20:18.1 LM: Yeah. Exactly, exactly. I teach it in one of my Coursera courses, I do advanced computer vision. And there's one really fun example that we go through in that one. It's not the retina one, that one is a little bit too complex, but there's a very famous machine learning exercise, which is the pictures of cats and dogs that I was talking about earlier on, and how you train a computer to recognize the difference between a cat and a dog. And you build a machine learning model in the course that can quite accurately tell the difference between a cat and a dog. But then you also do the attention mechanism stuff on that. And it turns out the primary feature that, in this case, the computer was looking at to pick the difference between a cat and a dog was the eyes. Sometimes you think, "Oh, the cat has pointy ears, the dog has floppy ears, for the most part," or "Their noses look different," but for the most part, when this model was actually working to pick the difference, those were the features that it had zeroed in on. And so then I was able to learn from that and go, "Aha, so now when I build a model, maybe I should focus on the eyes to be more efficient," that type of thing.

0:21:16.7 RS: Yes, it strikes me as a crucial mechanism in removing harmful biases, for example, from a black box AI: being able to look under the hood and say, "Okay, this is what it was looking at to get this insight." That can help remove a lot of this fear and a lot of these potentially harmful biases or incorrect assumptions that the technology would make.

0:21:41.9 LM: Yeah, yeah, exactly. And there's a technique... There's a thing that you can build, called a class activation map, and the idea with the class activation map is, you're seeing what the computer was paying attention to. A funny story about them: the US Army realized that maybe computer vision could be used to see things in images that humans couldn't see. Take, for example, the battlefield: what about being able to see a camouflaged tank? A human could look at it, and camouflage is designed to fool the human eye, but what if you could have a machine be able to detect a camouflaged tank? So they did an experiment where they got a bunch of data scientists and a bunch of machine learning engineers and they gave them a tank, and they said, "Hey, go out into the woods one day and take a whole bunch of pictures of this tank un-camouflaged," and then the following day, they got the camouflage nets, put them on the tank and took a whole bunch of pictures of the tank camouflaged. And so, build a model off of this to see if you can pick a camouflaged tank or a non-camouflaged one. And they did what all good data scientists do: they had a training set of data, they had a test set of data, they had a validation set.

0:22:54.3 LM: They built their model, they ran it and it was like 99% plus accurate. And they were like, "Oh my gosh, we have built something that can really, really change the course of the battlefield". They presented that to the Army, the Army loved it, and then they took it out and tested it and it failed completely.

0:23:10.8 RS: Oh no. [chuckle]

0:23:12.5 LM: And the reason why it failed completely was that they took the pictures of the un-camouflaged tank one day, and they took the pictures of the camouflaged tank on another day. And on the day that they took the camouflaged tank, the sky was cloudy, and on the other day they had a blue sky, and when they fed this back through an attention mechanism, they realized they didn't build a camouflage detector. They built a cloudy sky detector. [chuckle] So with the black box element of this kind of thing, it's easy to think that these are hard to debug and that kind of stuff, but they're not necessarily that difficult to debug if you understand how they're architected. And if I gave the elevator pitch for how you do this: when you train an AI system or a machine learning system, you're flowing data one way and doing backprop the other way, but when you wanna do these attention mechanisms, those kinds of things, it's just a way of flowing data in the other direction and effectively de-compiling it. If you're going through convolutions, you're de-convoluting it, that kind of stuff, and you can get a pretty good estimate for how the computer is looking at your data.

0:24:12.1 RS: What is the difference between a classification map and an attention mechanism?

0:24:15.6 LM: A class activation map is a...

0:24:17.1 RS: Yeah. Thank you.

0:24:18.3 LM: So it's a case of, when you build a Convolutional Neural Network in particular, you're learning filters that can isolate features in a map, and a class activation map is where you figure out where those features are and light them up on a diagram with a heat map or something like that. And that is a type of attention mechanism. There are also other ways of picking out attention within a machine learning model. Class activation maps are a very common one used in computer vision, anywhere you use Convolutional Neural Networks to identify features. Convolutions are sometimes also used for sequences, so if you wanna predict weather in the future or something like that, you may use a one-dimensional convolution, and you can potentially have a class activation map there where it spots, "Hey, when you get a spike followed by a dip, that's usually followed by something else." But typically it's in image-based models where it's most commonly used.
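For readers who want to see what a class activation map looks like in code, here is a minimal sketch, an illustration under assumptions rather than the course notebook. It relies on the classic CAM setup of a convolutional backbone ending in global average pooling plus a single dense layer, which the stock ResNet50 in tf.keras.applications happens to have; the layer names below are specific to that model, and the input batch is assumed to be preprocessed with resnet50.preprocess_input at shape (1, 224, 224, 3).

import numpy as np
import tensorflow as tf

# Classic class activation maps need a head of global average pooling followed by
# one dense layer; ResNet50 from keras.applications has exactly that structure.
model = tf.keras.applications.ResNet50(weights='imagenet')
last_conv = model.get_layer('conv5_block3_out')                   # 7x7x2048 feature maps
class_weights = model.get_layer('predictions').get_weights()[0]   # shape (2048, 1000)

# A second model that returns the feature maps alongside the final prediction.
cam_model = tf.keras.Model(model.inputs, [last_conv.output, model.output])

def class_activation_map(image_batch):
    """Coarse heat map of where the network 'looked' for its top predicted class."""
    features, preds = cam_model.predict(image_batch)
    class_idx = int(np.argmax(preds[0]))
    # Weight each feature map by how strongly it feeds the winning class, then sum.
    cam = np.einsum('hwc,c->hw', features[0], class_weights[:, class_idx])
    cam = np.maximum(cam, 0)                  # keep only positive evidence
    return cam / (cam.max() + 1e-8)           # normalized 7x7 map; upsample to overlay on the image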

0:25:17.6 RS: I like how you mentioned the examples of the smartphone and understanding hype cycles. I'm curious if there are any lessons you think we can learn from the way that sort of technology was deployed and iterated upon, things we can correct or do better as AI spreads to the world.

0:25:35.7 LM: The first part is letting people realize that they are in a hype cycle; we've been in hype cycles before. The people who were successful, or initially successful at least, were the ones who saw through that, and this is what they did to see through it: exposure to the platform, exposure to the technology, trying out new and exciting and different things. There's a whole graveyard of failed apps on Android and iOS, which laid the framework and the pathway for those apps that were successful. So really being those kinds of early adopters, having that mindset of try what you can, learn, iterate, continue. That's what's led to success, and I think that's the same kind of thing that can lead to success in the AI space.

0:26:19.2 LM: One of the advantages of the AI space is that the amount of investment that you have to make to be successful is a lot less than the amount of investment you might have previously had to make to become the big mobile app developer or to become the big website, and as a result, you don't necessarily have to be housed in the traditional areas or centers of excellence and success. So if we can try to democratize AI as much as possible, by making it as available to as many people as possible so that they can seize opportunities that the rest of us may not actually think about, that could pave the way to success for them and for everybody else also. For every success, there's probably going to be a hundred failures, and it's really understanding that, realizing that. But I would rather have 101,000 people do something so that there's a thousand successes and 100,000 failures, than have 1,001 people do it where there's only one success, if my math adds up. I told you I'm not very good at math.

[chuckle]

0:27:23.4 RS: Yeah, the one... Yeah, I think that adds up. Of course, the YouTube channel and the MOOCs and a lot of the content that you produce is in that interest... It's accessible anywhere. Someone who has internet access can learn from an expert such as yourself. I do worry, though, at what point is there a breakdown in terms of the hardware and the ability to actually design this technology? Does one need access to cloud computing and a workhorse of a laptop to be able to play in this field?

0:27:53.3 LM: To be able to get started and play in this field, absolutely not. To be able to go huge in this field, you do need access to high-end hardware like GPUs and TPUs and that kind of thing. So to split those two audiences: for the getting-started one, that's where we've been very carefully focused on easy, high-level APIs that will run in Python, which is easy to install and use, that you can run on any laptop with a CPU, so that you can get up and running and kick the tires with these kinds of things, and to make that as quick and easy as possible for anybody to do. When you go beyond that, though, and you start trying to train bigger models, not everybody has access to GPUs, not everybody has access to TPUs. So part of our strategy there was, we have this thing called Google Colab, and Google Colab is an in-browser notebook that runs with a Google Cloud back-end that can provide you free access to a GPU or a TPU. Obviously it's limited, but it's pretty generous. It's many hours of training that you can get for that, and all you need is a browser and a web connection to be able to do that, if you don't already have the hardware.
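To give a rough sense of what "kick the tires on any laptop with a CPU" means in practice, here is a minimal sketch (illustrative only, not a prescribed curriculum) that trains a small classifier on the Fashion MNIST dataset bundled with Keras. It runs in a few minutes on a CPU, and the same code runs unchanged in a Google Colab notebook if you want the free, time-limited GPU or TPU mentioned above.

import tensorflow as tf

# Load a small dataset that ships with Keras, so nothing extra needs installing.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to 0..1

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)     # a few minutes on a laptop CPU
print(model.evaluate(x_test, y_test))     # typically somewhere around 0.87 accuracy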

0:29:00.3 LM: So that's the first part of the offering. The other part of the offering, though, is when we start thinking about where your models execute. Okay, so many models are gonna be built to execute in data centers, the likes of Google or Amazon or Microsoft or on... But that's not the only area of opportunity. We can see that the area of opportunity on mobile handheld devices, embedded systems and all those kinds of things is possibly even larger. And with prices dropping sharply, the hardware to build a phone is getting cheaper, the hardware to build embedded systems is getting cheaper. Then as long as we have an ecosystem of tools that will allow you to build for them with as low a dollar cost of entry as possible, as low an intellectual cost of entry as possible, those kinds of things, that's when those markets can be seeded and those markets can grow. And like I said, I think we can all benefit. Let me share one example, 'cause there's a great project from a couple of years ago that was built by a bunch of high school and college students in India. And it's called Air Cognizer, I think that's the right phrase.

0:30:09.6 LM: And it's on the YouTube web... The TensorFlow YouTube channel. And what they did was, they realized that in their city in India, there was extensive air pollution. And you know what it's like, we're all probably encountering that nowadays with fires nearby. I live in Washington, so every year we have to start looking at air quality because of forest fires. But what they realized was that when you look at air quality and you see it on the news, or you see it on a website, that's the air quality at a sensor, which is being operated by somebody. Now that might be 20-30 miles from where you live, and the air quality where you live might be severely worse. Their elderly parents and grandparents were afraid to go out, because they didn't know the air quality and they could get sick. So these students got together and they realized that if they get a sensor to measure the air quality, and they get a phone, a cheap Android phone with a camera on it, and they take a picture of the sky, they have data. If they measure the air quality on the sensor, they have a label. And if they go all over their city and take lots of these pictures and lots of these sensor readings, you do that basic pattern matching to build a model where you're saying, "Well, when the sky looks like this, the pollution is like this."

0:31:27.8 LM: And they turned that into an app, and now lots of folks in India can use that app where they can just take a photo of the sky and see a good prediction of the air quality near them, instead of looking at the news and seeing an air quality indicator that could be 20, 30, 40 miles away. And it's little things like that, little innovations like that. Because these were high school and college students, they didn't have a lot of money, and they weren't forming a startup and hiring developers to do this kind of thing. The equipment for them to do it was the basic laptops they had. The data? They generated the data themselves, because they had the sensor and they had a cell phone where they could take a picture of the sky. They were able to build a model for this using the open source ecosystem, and they were able to deploy it for free to Android phones.

0:32:12.5 LM: These kinds of things are what I mean when I talk about really lowering that bar so we can raise the floor. And now the rest of the world can benefit from what they learned, because we have the same problem in the West because of forest fires. I could potentially go out and do the same thing to build an Air Cognizer for Washington State without needing to invest millions of dollars in a start-up to do so. So when you bust through that hype cycle and you understand how this works, then you can think like that, and that's what they did. They thought like that, and boom, they came up with this really cool solution.

0:32:42.8 RS: This is the focus of your about-to-be-published new book. Is that correct?

0:32:47.1 LM: Yep, so AI and Machine Learning for On-Device Development is my upcoming book. I originally was gonna create this mega-book for O'Reilly called AI and Machine Learning for Coders, and we realized it weighed too much for one book.

[chuckle]

0:33:02.4 LM: So last year, I released AI and Machine Learning for Coders, and now this year, it's kind of like the complementary book/sequel, which is AI and Machine Learning for On-Device Development. So it's really showing you how, as a mobile developer, you can start using models on Android, on iOS, with a little sprinkling of doing it on a server with remote access, or doing it on things like a Raspberry Pi. It's packed with lots of examples of things like: you take a picture, here's how you can detect a face in the picture. Or here's how you can count the number of objects in the picture, like maybe you're building an app that's counting traffic driving past your house. How do you count the number of cars? Those kinds of examples... So I try to get very hands-on with them: here's basically how this stuff works on your device. As of today, you don't train models on the device, you use models on the device. So the concept of my first book in the series, AI and Machine Learning for Coders, was really, "Here's how you build the models," and then the second book is, "Okay, when you have models, or there are off-the-shelf models available, here's how you use them, or here's how you can customize them to actually use them on your device."

0:34:16.9 RS: Okay, so the model is not constructed locally? The model is accessed?

0:34:20.9 LM: Yeah. As of today, trying to train a model on a mobile device, it's just going to be very hostile towards your battery because model training is very intensive. We are doing a lot of work on making that better, but as of today... Yeah, as a developer, you're better off training a model in the cloud with something like Colab, or on your developer workstation and then deploying it to your device.

0:34:46.4 RS: Yes.

0:34:47.3 LM: But that's changing. That is changing.

[chuckle]
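To sketch the workflow Laurence is describing (train in the cloud or on a workstation, then use the model on a device), here is a minimal, assumption-laden example: a tiny Keras model is converted with the TensorFlow Lite converter and then run with the TFLite interpreter in Python, standing in for the Android, iOS, or Raspberry Pi runtimes the book actually targets. The model and data are placeholders, not examples from the book.

import numpy as np
import tensorflow as tf

# 1. Train (or load) a model in the cloud or on a workstation. A trivial model here,
#    fit to toy data whose hidden rule is y = 2x + 1.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='sgd', loss='mse')
model.fit(np.array([[0.0], [1.0], [2.0]]), np.array([[1.0], [3.0], [5.0]]),
          epochs=200, verbose=0)

# 2. Convert it to TensorFlow Lite, the format used on phones and embedded boards.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open('model.tflite', 'wb').write(tflite_model)

# 3. Run inference with the TFLite interpreter (on a device you'd use the platform runtime).
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.array([[10.0]], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out['index']))   # approximately 21.0 for this toy rule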

0:34:48.1 RS: Yeah. Well, fans of this podcast will remember our episode with Sama CEO Wendy Gonzalez, who was speaking about a similar kind of problem: "How do we democratize access to AI?" And I can envision an approach to that, which is to just drop 100 copies of your book and 100 Android devices anywhere in the world and let it rip, right?

[chuckle]

0:35:10.2 LM: Yeah, yeah, please do. I'd love to see the results.

[chuckle]

[music]

0:35:16.8 RS: Laurence has published all manner of content about the realities and opportunities of AI, both philosophical and technical. In the episode description, you'll find links to his MOOCs, books, and the TensorFlow YouTube channel where he frequently contributes. You can also find Laurence's resources on the new How AI Happens LinkedIn group. There, we'll post all the research and resources mentioned by our guests and give you the opportunity to rub shoulders with and ask follow-up questions of the experts you hear featured on the show. Just search How AI Happens on LinkedIn and say hello. How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, MedTech, robotics, and agriculture. For more information, head to sama.com.