PhD, Vice-Rector at Kozminski University, Harvard research associate, and Polish futurist Aleksandra Przegalinska joins to discuss the future of NLP, the true value in improving transformers, and the current state of automation with regard to creative vs. mundane tasks.
Today's guest is Aleksandra Przegalinska, PhD, Vice-Rector at Kozminski University, Harvard research associate, and Polish futurist. From studying pure philosophy, Aleksandra moved into AI when she started researching natural language processing in virtual spaces. We kickstart our discussion with her account of how she ended up where she is now, and how she transferred her skills from philosophy to AI. We hear how masked public personas in Japan centuries ago foreshadowed Second Life, why we are seeing a return to anonymization online, and why Aleksandra feels NLP should be called ‘natural language understanding’. We also discover what the real-world applications of NLP are, and why text processing is under-utilized. Moving on to more philosophical questions around AI and labor, Aleksandra explains how AI should be used to help people and why what is sometimes simple for a human can be immensely complex for AI. We wrap up with Aleksandra’s thoughts on transformers and why their applications are more important than their capabilities, as well as why she is so excited about the idea of xenobots.
Key Points From This Episode:
Tweetables:
“My major discovery [during my PhD] was that people are capable of building robust identities online and can live two lives. They can have their first life and then they can have their second life online, which can be very different from the one they pursue on-site, in the real world.” — @Przegaa [0:06:42]
“We can all observe that there is a great boom in NLP. I’m not even sure we should call it NLP anymore. Maybe NLP is an improper phrase. Maybe it’s NLU: natural language understanding.” — @Przegaa [0:14:51]
“Transformers seem to be a really big game-changer in the AI space.” — @Przegaa [0:16:40]
“I think that using text as a resource for data analytics for businesses in the future is something that we will see happen in the coming two or three years.” — @Przegaa [0:19:46]
“AI should not replace you, AI should help you at your work and make your work more effective but also more satisfying for you.” — @Przegaa [0:25:31]
Links Mentioned in Today’s Episode:
Aleksandra Przegalinska on LinkedIn
Aleksandra Przegalinska on Twitter
EPISODE 33
[INTRODUCTION]
“AP: I have no idea what safety and ethical protocols are implemented for the Xenobots. But this is going to be a big breakthrough. And this is no longer AI or just AI. This is AI with something else. This is wetware, a way of processing information that is done on proteins.”
[00:00:21] RS: Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens.
[INTERVIEW]
[00:00:49] RS: I'm truly excited about today's guest on How AI Happens. She is Vice-Rector at Kozminski University. For our Western folks, rector equals president. So, Vice President of Kozminski University, ambassador for the world-leading AI network SwissCognitive, as well as Senior Research Associate for the Harvard Labor & Worklife Program, not to mention Polish futurist, Aleksandra Przegalińska.
Aleksandra, welcome to the podcast. How are you?
[00:01:15] AP: I'm very good. Thank you so much for the invitation. Happy to be here.
[00:01:19] RS: Happy to have you. And I neglected to add doctor and professor to your litany of titles there. I want to make sure people know exactly what they're getting into with this conversation. So I wanted to make sure I don't leave that out.
[00:01:29] AP: Thank you. Thank you. I know that the surname together with the titles is sort of a lot. But thank you for including all of that, and making the effort.
[00:01:39] RS: Yeah. It is. But I talk for a living. So I ought to be able to at least get that right. But I'm so pleased you're here. You're working on so many fascinating things. I can't wait to get into some of it. Just for the folks at home, would you mind setting a little context about you and your current work? Maybe sharing a little bit about your background and how you wound up in your current role or roles, I should say?
[00:01:57] AP: Sure. I’m happy to do that. So I think you introduced me so well in terms of my current functions. But let me just add that my work is focused on natural language processing. So as a researcher, this is my main interest: text generators, systems that can speak to humans in a fluent way. So chatbots, virtual assistants. Optimizing them, making sure that they work, and that interaction with them is successful, is something that I focus on with my research team at Kozminski University.
I actually did my PhD in philosophy. Already at that time, I was very much interested in artificial intelligence. And I was, I think, lacking a bit of a practical component in my work. But I was also sort of obsessed with the Turing test. That's something that I devoted my dissertation to. And afterwards, after finishing it, when I decided to really pursue a career in academia, I said to myself, “Okay, now you're going to have to combine all these metaphysical questions and all those big philosophical questions with something truly practical. And you're going to start to create systems that actually do interact, and check to what extent their interaction can be compared to a real interaction.” So that's something that really, up until now, I think, makes me very excited about artificial intelligence. And that's a big part of the decision why I decided to join academia.
And currently, since Kozminski University, which you also mentioned, is a business school, I have been trying to bridge my interest in AI and human-machine interaction with business practices and managerial practices. And therefore, I'm pursuing a project focused on the automation of work and the future of work, and how artificial intelligence can support human work. And this is also the reason why I joined Harvard recently to work on a project entitled, let's say, Collaborative AI. So that's, I think, a bit more context to the introduction.
[00:03:54] RS: Yes, that's very helpful. Thank you for sharing. And I do so enjoy your story, Aleksandra, because you winding up in AI feels like it was the logical stepping stone for your own curiosity, as opposed to you seeing it as a marketable skill set. You were simply interested in the technology. You had other interests that led you here, starting perhaps with your PhD in philosophy, which, correct me if I'm wrong, was in virtual entities, like the phenomenology of virtual entities?
[00:04:22] AP: Yeah, yeah.
[00:04:23] RS: Does that mean metaverse? Does that mean VR? What did virtual entities mean to you at the time?
[00:04:28] AP: Well, that's an interesting question, because at that time, I was really focused on a project that was actually sponsored by the National Science Foundation here in the US, and it was embedded in Second Life. And it was about researching virtual relations between the virtual identities that people take on in a game like Second Life. I'm not sure if everybody remembers what that game was, but it was definitely a prototype of the metaverse. I guess it was the first step toward the metaverse. Surely not immersive and not fully virtual. It was just a non-quest MMO game. But in that game, my research team and I were building bots that could interact with humans and facilitate interaction between other players.
And so, I think that was really something that allowed me to think about the future of human-human and human-machine relations in different digital spaces in a broader context. So indeed, I think when you asked about the metaverse, you were really correct. But it was just a very, I would say, sparse and very narrow version of something that we can expect to call the metaverse in the future.
You also mentioned those marketable skills. I think when I started being interested in AI, I did not even know that it was a marketable skill, frankly speaking. My interest was triggered around 2007, 2008. It was most certainly before the boom in artificial intelligence. I do remember my advisor telling me, “Well, you want to design those bots? You want to write about them? Is that really something that you want to do? Those bots are just doing such a poor job talking to humans. And they really generate so much nonsense. Are you sure that this is something that you want to do?” And I was like, “Yes, I'm really interested. Even if the project is failing, I just want to know why.”
But then, a few years later, new bots, new virtual assistants started to appear. Some of them were pre-installed on our smartphones. And I think the interaction with them became increasingly smooth and easy. And that's something that pushed me even further. And I think at that time, around 2012, I started realizing that understanding how to program using machine learning algorithms is something that can be considered a really good marketable skill. But I just didn't know that at the point of starting this whole project. So now I do know, obviously, that you can build a big career as a data scientist, machine learning architect and whatnot. But I think my whole interest in that started in an era when that was not the case.
[00:06:59] RS: Right, right. You were indulging your own curiosity at the time. And I can see how that early work, programming bots for the sake of seeing how they would interact with people, naturally led you to work in natural language processing. I do want to get into your current work in NLP. First, though, what did you learn from that experiment when you were working with Second Life? Particularly about human nature, and how humans interacted with each other, or with robots, in that sort of metaversal capacity?
[00:07:30] AP: Oh, that's an awesome question. I think my major discovery was that people are capable of building really robust identities online and can really live two lives, right? So they can have their first life. And then they can have their second life online, which can be very different from the one that they pursue on site in the real-world.
And I have met people who were, let's say, men in real life and decided to become female on Second Life, and pursue a completely different career. For instance, in real life, they were, let's say, professors. And on Second Life, they were visual artists. And they had dual careers and dual identities. I remember interviewing some of them, and they said, “Oh, actually, my character on Second Life has completely different features than I do. It has a completely different way of interacting with people, because I'm a very shy person. But as this digital character, I'm actually very open to interacting with people. This allows me to expand certain features of my character that I can't seem to expand in real life.” So I think that was the major discovery, that people are capable of building those versatile identities. And the right ecosystem allows them to do that, right? A digital ecosystem that is properly built can allow them to do that.
And I do remember, because my supervisor on that project was Professor [inaudible 00:08:58], who is Japanese. And she told us at the beginning of the project that she had a theory about how and why VR and gaming, for instance, were so popular and still are very popular in Japan. And she said, “Listen, a few hundred years ago, during the rule of the Tokugawa dynasty in Japan, people would put on masks and speak in public. But they would take different names. And that was disconnected. So that persona was disconnected from their real life.” And she said, “Well, that very experience laid a foundation for the further expansion of immersive technologies in Asia and in Japan. People there are more capable of actually building something this robust online that does not resemble their real life in any way.”
And I do think that it was very important for me to understand that. And we would see how, for instance, players from Asia quite often would have characters that did not resemble humans at all. They were pandas, or everyday objects. And it was so interesting to talk to them and to understand where they come from and what triggers their interest in pursuing a digital life, really. So that was something that for us was very interesting. And we needed to build those bots to start those interactions, right? Because you obviously had real characters in that game. But you also needed to secure a space where, even if there's nobody there, people can still talk to someone. So we would have to build even those generic bots to have at least a minimal interaction, greet the person entering a given space and whatnot. So all these micro-interactions were so interesting. And some of them led to really long dialogues, conversations, and even friendships, I would say. So that was a big takeaway for me from that project.
[00:10:47] RS: I love the connection to the individuals who would put on a mask to speak in public. And it's so easy to understand why they would do that. You risk no social capital when you put the mask on, right? You have this identity that is your human face, your physical person, that can be recognized and moves from place to place. And if you say something people don't agree with, then you risk ostracization, you risk ideological cancellation, or just a loss of any credibility, social capital, etc.
And that desire for anonymization was very characteristic of the early Internet. It's a recent, probably post-2010 thing, where your Internet identity is mapped one-to-one with who you are, where it's, “Oh, here's Aleksandra. Here's her LinkedIn. Here's her face. Here's her job. Here's her email address.” And we are starting to see that flip back around to the early anonymous nature with Web3 and decentralization, where people's faces are an NFT profile picture, and their name is something else. And they're starting businesses together. And they've never seen each other. And they have avatars.
So this desire to have a separate identity feels, as you experienced, like human nature. Particularly since Second Life was not an advanced technological game, right? It was not particularly pretty to look at. It didn't have all this processing power. It didn't really have much to it. Nor, by the way, does Decentraland, which, if you've ever run around it, looks just like Second Life. It is nothing compared to the power of current video games. But it's not about that, right? It's about this ability to represent yourself in a way other than as the physical person walking around. So it's so fascinating that you saw that desire early on, in Second Life anyway, of people to characterize themselves in this other way. What was the NLP process of that for you? What was the development of the bot that could allow you to research people in a meaningful way?
[00:12:41] AP: Well, it was, again, generic, because there was an interface and a whole inventory on Second Life that would allow you to build bots for Second Life. Sort of in-game bots. And there were certain pre-built or predefined things that you could do with that bot, because it was very common to set up a few bots here and there so that a place didn't feel empty. And please remember that in comparison with current social media, Second Life was not a very occupied space. You had perhaps 3 million people worldwide using it, perhaps reaching 8 million at some point. But when we were there, I think in 2009, the verified number of registered users was around 1 million. So actually, many open but unoccupied spaces. And you really needed those bots. So it was fairly easy to build them.
I think, at some point, I started noticing that I would want or expect a bit more from these interactions. Something that would escape those rule-based systems, those old decision trees that can have, let's say, five interactions with you, and then just end it, and then loop back to repeat the very same thing they said at the beginning when they were greeting you.
So I think, in a way, building those early bots increased my NLP appetite, you could say. But it also allowed me to, in a way, become very realistic about what NLP was at that time, right? Because obviously, somewhere in the back of our heads, we had those ideas that perhaps there are architectures out there, or types of algorithms, that would allow for more. But it felt safer to use just rule-based systems and very simple decision trees, as I mentioned, to build something that is at least predictable.
And the rise of deep learning, fueled by data, is something that started just a year later, 2010, 2011, and then increasingly became mainstream. Obviously, today, we have completely different architectures that can cater to NLP needs or conversational needs. And I do think that this is a tremendous shift and a very promising pathway for NLP as a whole subdiscipline of artificial intelligence. So that's very, very interesting.
But most certainly at those beginnings, we were very much focused on a script that can predict what the user says, or at least approximate what the user says, and spit out some sort of generic phrase that would maintain the interaction. So yeah, very different from the generative AI that we have today.
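The rule-based, decision-tree bots Aleksandra describes can be sketched in a few lines. This is a hypothetical toy, not code from her Second Life project; the keywords, canned replies, and five-turn reset are assumptions made purely to illustrate why such bots felt so limited:

```python
# A toy rule-based bot: fixed keyword rules, a fallback reply, and a
# hard turn limit after which it loops back and repeats its script,
# much like the early in-game bots described above.

class RuleBasedBot:
    RULES = {
        "hello": "Welcome! What brings you here today?",
        "art": "There is a gallery just north of the plaza.",
        "bye": "Goodbye! Come back soon.",
    }
    FALLBACK = "Sorry, I didn't understand. Could you rephrase?"
    MAX_TURNS = 5  # after five exchanges the script simply starts over

    def __init__(self):
        self.turns = 0

    def reply(self, message: str) -> str:
        if self.turns >= self.MAX_TURNS:
            self.turns = 0  # the "loop back" behavior of old decision trees
        self.turns += 1
        # Crude keyword matching: the first rule whose keyword appears wins.
        for keyword, answer in self.RULES.items():
            if keyword in message.lower():
                return answer
        return self.FALLBACK
```

Every reply here is fully predictable, which is exactly the trade-off she mentions: safe and simple, but incapable of the open-ended dialogue that later deep learning systems made possible.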
[00:15:15] RS: Aleksandra, I could speak with you about your philosophy dissertation and its application in Second Life all day, selfishly. But I think, at some point, we should discuss more AI sorts of things. Let's just fast forward in time a little bit. You characterized where you were in the early stages back in ’07, ’08, ’09 with natural language processing. What role does it take in your work now?
[00:15:37] AP: Oh, well, I guess it's the core, I would say, of my work as of today. Obviously, I think we can all observe that there is a great boom in NLP. I'm not even sure if we should call it NLP anymore. Maybe NLP is just an improper phrase, right? Maybe it's more NLU, natural language understanding, than natural language processing per se. Because when you think about generative AI, when you think about transformers, so new types of text generators, new types of very robust algorithms, really bundles of algorithms, huge language models with billions of parameters, because that's what it is today, we're talking about systems that are really capable of generating very abstract text and also understanding a command that is expressed in natural language in a very abstract way, too.
So this is, I think, a big shift, right? Normally, your classical NLP system would not be able to do the kind of dialogue management that bots enabled by transformers can do today. In fact, those bots can talk to you in a very nuanced way, understand abstract commands, and reach for sources, usually verified sources online, to have a conversation with you. They rephrase what they find, so they don't just copy-paste whatever content they find suitable for the interaction, but really, really rephrase it.
And I think this is a major shift that we're observing, that those new architectures like transformers turned out to be so fruitful not only for bots, but also for text generation, specialized text generation. And not only that, but also image processing and numerical processing. They are architectures for doing so many different things. So to me, that expands beyond NLP or NLU. That is actually the core of what AI can promise, also in terms of the predictions it can make and the accuracy of those predictions. Transformers seem to be a really big game-changer in the AI space.
[00:17:39] RS: I like your redefinition, perhaps, as NLU, natural language understanding, and your pointing out the nuanced ability of bots to interact with people. You mentioned the Turing test earlier. And I'm sure that's still top of mind for you. It strikes me, though, that it's not the only thing you're thinking about when you're working on this technology, not just how can we better pass the Turing test. What are some of the business applications, the real-world practicality, that you foresee with advancing NLP or NLU?
[00:18:07] AP: Oh, plenty! Absolutely. Many of them. I would say, and maybe that's a risky statement, that text is still very untapped potential, right? Whereas we were able, I think, to deploy artificial intelligence in different business processes when it comes to healthcare or logistics, and really work in a predictive manner with numerical data, when it comes to text, I've noticed the internet is just full of it, right? And I don't think companies, whether big companies or small and medium-sized companies, were able to really tap that potential, for instance, to understand what their customers want, or what their competitors are doing. And text is there for them to really understand that better. If you have a good system that can give you a pretty robust sentiment analysis of your customers, you're probably going to build better services and products for your customers. If you want to understand where the market is moving, if you want to forecast the market in a way, you can obviously use, let's say, algorithms like regression and classification, or more advanced algorithms that have that predictive nature and can give you some scenarios about the future. But I think you can also use text for that, right? And text can be another type of data that you use by default.
And for that, you obviously need NLP. You need that data to be structured in such a way that other algorithms can also do something with it. And I do think that we're still learning that. I think we're at the very early phase of understanding the true potential of sentiment analysis, which does not have to be something very crude and very simplistic, like, “Did they like it? Or did they not like it?” right? It can be much more sophisticated than that and give you some hints about how to target your audience, how to understand your audience's needs better, how to segment your audience, but also how to build products that would really match your audience's interests. And also something that shows you that your audience is most certainly not homogenous.
So I do think that everything related to text processing is really promising for business. It's just that it's this untapped potential. I think many people, many businesses, also feared unstructured text online, messy text online. But currently, with those new algorithms, you don't have to be afraid of unstructured text anymore, because NLP can handle that type of text. Transformers can handle that type of text much better. They can even structure it for you. So I do think that using text as a resource for data analytics for businesses is something that we will see happening in the coming two or three years. And it's not necessarily linked with the Turing test, obviously. The Turing test is a very interesting, intriguing research question. It can be linked, right?
So if you really want to build a bot that will talk to your consumers and establish long-term relationships with them, maybe something like the Turing test does matter, or any step forward in understanding how to build a system that understands the intentions people have when they say something, right? And not only what they said. This is something that can be important, right? If you want to use artificial intelligence as a resource that is establishing and maintaining relations with your customers, in B2B, or B2C, wherever. However, for all those other things, text analytics, and also the generation of text for marketing purposes, etc., you don't need the Turing test. You just need a good algorithm and access to data. And that's it.
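The more-than-crude sentiment analysis Aleksandra describes, going beyond a single "liked it / didn't like it" score, can be illustrated with a small aspect-level sketch. The word lists and aspect keywords below are invented for illustration; a production system would use a trained model rather than hand-written lexicons:

```python
# A toy aspect-level sentiment analyzer: instead of one overall score,
# score each product aspect (shipping, support, price) separately.
# All lexicons and keywords here are illustrative assumptions.

POSITIVE = {"great", "love", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "confusing", "terrible"}
ASPECTS = {
    "shipping": {"shipping", "delivery"},
    "support": {"support", "service"},
    "price": {"price", "cost"},
}

def aspect_sentiment(reviews):
    """Return {aspect: score}, where score = positive minus negative mentions
    across all reviews that touch that aspect."""
    scores = {aspect: 0 for aspect in ASPECTS}
    for review in reviews:
        words = set(review.lower().split())
        # Polarity of this one review: positive hits minus negative hits.
        polarity = len(words & POSITIVE) - len(words & NEGATIVE)
        # Credit the polarity to every aspect the review mentions.
        for aspect, keywords in ASPECTS.items():
            if words & keywords:
                scores[aspect] += polarity
    return scores
```

Scoring each aspect separately can reveal, for example, that customers praise delivery while panning support, a detail a single overall sentiment score would hide, which is the kind of segmentation insight described above.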
[00:21:44] RS: Yes, that's an important call-out. The Turing test is a fun experiment, perhaps. But as people get more comfortable interacting with machines, do they need to believe it's a human? I don't think they do, as long as they get what they want, right? As in, “I know that the virtual assistant in my phone, whose name I will not say, because then she'll begin speaking, is not a real person. I don't care, as long as she correctly reports what the weather is outside.” So perhaps passing the test is not as important as just fulfilling the ask.
I'm glad you brought transformers into the conversation, and their relation to the fear of unstructured data. Because I hear this all the time from folks. They worry about getting clean data, properly annotated data, so that they can train their learners. And this process of annotating data and making it actionable is often very manual as it stands right now. Do you predict that advances in NLP and more powerful transformers will remove the need for manual labor to annotate data, that we can automate the annotation of data itself?
[00:22:43] AP: Well, that's certainly the hope in the NLP community, and also more broadly in the AI and data science communities, because this is very cumbersome work. So I think it will be a very natural step to really move in that direction. And by the way, I'm obviously a researcher of artificial intelligence. But I'm not really detached from the real-life problems that AI should and could solve. And I do think that before we jump to this Turing test and all those big things where we really need even more robust architectures than we have today, we should try and focus on the things that are doable and are needed today.
For instance, something like you mentioned, annotation. This is a very clear and important need that should be addressed. So I think we should be very realistic about artificial intelligence and start with challenges like that, only then moving to those big, lofty challenges in AI that we like to discuss that are triggering our imagination and whatnot, but perhaps are not really contributing to solving real problems and challenges that we face today.
So obviously, I was always a sci-fi fan. And I think that's what brought me to artificial intelligence, watching movies like Blade Runner and whatnot. But on the other hand, I am very practical. And I do think that annotation matters more today, as does solving the issue of tackling unstructured data. The potential of unstructured data is something that will make the world a slightly better place, more so than just thinking about how to create an effective system that will talk to you while feeling your emotions, not only understanding them, but feeling them. Because that is a long way off. So let's just focus on what we can solve and do today, step by step.
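The move toward automating annotation that both speakers want can be sketched as model-assisted labeling: a model pre-labels everything, and only its low-confidence cases are routed to human annotators. The stand-in scoring function and the 0.8 threshold below are assumptions for the sketch; in practice the confidence would come from a trained classifier's probability output:

```python
# A toy model-assisted annotation pipeline: auto-accept confident model
# labels, route uncertain examples to human annotators.

def confidence_and_label(text):
    """Stand-in for a trained classifier returning (label, confidence).
    The keyword rules here are purely illustrative."""
    if "refund" in text.lower():
        return "complaint", 0.95
    if "thanks" in text.lower():
        return "praise", 0.90
    return "unknown", 0.30

def triage(texts, threshold=0.8):
    """Split texts into auto-labeled pairs and a human-review queue."""
    auto_labeled, needs_human = [], []
    for text in texts:
        label, conf = confidence_and_label(text)
        if conf >= threshold:
            auto_labeled.append((text, label))
        else:
            needs_human.append(text)
    return auto_labeled, needs_human
```

Humans then label only the routed items, and those corrections can be fed back to retrain the model, shrinking the manual queue over time, which is the practical, doable step Aleksandra argues for here.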
[00:24:29] RS: Yes, definitely. You mentioned the cumbersome nature of data annotation, right? The work involved there. And that's, to me, a short rhetorical hop to just automating work, period, right? What work should be automated? What is the relationship between automation and the need for human labor? These are big questions. Where do you foresee automation going? How much work ought we to automate? And how should we be thinking about AI's role in your average laborer's life?
[00:25:01] AP: Well, I think these are great questions. And we don't have answers to them yet. But we should definitely focus on uncovering these answers, right? So my project, the one that I'm working on, the one on Collaborative AI that I mentioned to you, I guess gives you just one answer. And that answer is that we should support humans, right? Support humans where and when they need it, right?
If humans consider some type of task or work to be very mundane, something they would gladly get rid of, perhaps AI can step in. Or if people need help because certain tasks require a different skill set than their own, right? So, let's say, some sort of computational intelligence that is very easy for AI but sometimes very hard for humans. If people need to draw conclusions from huge chunks of data, and clearly they won't see the correlations, the hidden correlations, in that data just by themselves, they need a system that can do it very well. Then we, I think, should establish AI assistance for them in that type of work as well.
So I do think that the best people to ask are the workers themselves. And this is the philosophy that we have for this Collaborative AI project. So we're just saying one thing: AI should not replace you. AI should help you at your work and make your work more productive, more effective, but also more satisfying for you. And it's more of a bottom-up approach than a top-down approach where we say, “Oh, these are the tasks that AI can and should automate. Period,” right? That is not something that I would generally agree with. My approach is very different here. It's, “Let's follow the people who are supposed to use these systems. Let's also understand their jobs, which are usually very complex, as sets of tasks.” And some of these tasks can be delegated to an AI system. Some of them can't. The question is why, right? Is it because people don't want it, or because they're doing it better, or because they are afraid to delegate it to artificial intelligence? These are nuanced responses.
So I think that in this project, we're just trying to understand people better. Currently, we're focusing on marketers. But there are many more professions that we would like to examine as well. And we want to really run experiments that tell us a bit more about people's preferences and the effectiveness of an assistant that we build together with them, not only for them, but actually in a participatory manner. Using low-code and no-code opportunities, we can really expose people to projects that they can co-build together with us. They don't need a specialist skill set to do it. So we can really invite them into the research.
[00:27:44] RS: Yes. I love this idea that it's not just about what we can take away from people, that we think about what we can automate across the board. Because I think there's this false conception, when it comes to indulging the idea of automation, that there's a hierarchy of tasks, and that everyone wants to automate all of the same tasks, which is not the case, right? Some people want to do only the creative work. Some people aren't creative and would rather outsource that to something else.
So a holistic approach to automating tasks would create that world you're talking about, where AI supports you rather than replaces you. Is it realistic to be able to develop technology that can automate all sorts of different kinds of tasks? Or will it be simpler tasks first, and then we have to wait for the technology to become more advanced before it can automate copywriting, or graphic design, or something like that?
[00:28:38] AP: Well, the funny thing is, I think, that with transformers now, we discovered that those creative tasks are sometimes easier to automate than certain mundane and, we would think, simplistic tasks, like, for instance, calendar management, right? If I think about an assistant that would manage my calendar, I think of that as a very simple task to do. Maybe boring, but simple. Well, when you look at systems like GPT-3, Gopher, or others, they're very good at generating text for marketing purposes. You can tell them to write an ad for Facebook that will be targeting parents or another group, right? And the text that they generate for you can actually be really good, right? Maybe you will need to play with it a bit and redraft it slightly, but it's definitely less work for you. So I do think that we're in a very interesting moment where certain ideas or concepts that we had about automation are starting to fall apart.
We thought that creative work was something that is very hard to automate. Actually, it looks like some parts, at least, of creative work can be automated very easily, whereas some other tasks that we considered mundane are turning out to be very complex for artificial intelligence, or require a completely different system, right? Different privacy setups, different rules. And they also need you to feel safe, right? If you want to allow an AI system to really manage your calendar, what are the preconditions for it? Are you okay with that system really seeing everything? What is happening to that data? Where is that data going? So all these ethical questions that I know you discuss a lot with your guests on the podcast.
So I do think that we are starting to see a bit better. It's still a very blurry sort of horizon of what can, or should, be automated. But there have been some surprises along the way. And I do think that transformers, being so capable of generating text and images and sounds, like we said, but also creating all those bridges between formal language and natural language, machine translation and whatnot, are challenging us to think about automation in a different way.
[00:30:45] RS: Yes, that's very well put. And I do want to get your take on transformers, too, because we keep mentioning them. But I find transformers to be kind of an arms race, right? Like, it's hard to keep up with the latest and greatest. It feels like every week, there's some newer, better, shinier example. Do you find that to be the case? And how do you evaluate them? What do you make of the whole transformer arms race right now?
[00:31:09] AP: Oh, well, there's a transformer hype. That's for sure. And it feels like all big tech companies feel they need to have their own transformer. And to me, it's a bit funny how accelerated this race is at this point. Because, as you said, let's say you have the newest release, or the newest version of GPT-3. Three days later, the Chinese are saying that they've just built a bigger language model, Wu Dao, with more parameters. And then you have yet another company, like Google or any other, saying, “Oh, and we have an even bigger number of parameters than that network or that other model.” So it feels like there is a race of parameters, with the hope that the more parameters you have, the better your language transformer will be. And I sort of understand that. I'm not sure where exactly that stops, or when that stops. But I do think that it's more important now to focus on applications. To me, that's important.
So for me, the sole fact that you have more parameters does not really create a better environment for people to use those transformers in the best possible way. And I do think that the friendlier your interface is, and the more open your policy is regarding AI mainstreaming and letting different people, also non-specialists, use your transformer for their own goals, well, the better it is for your transformer. So I'm not sure if it's still about parameters at this point. It's more about creating opportunities to build applications on top of transformers. That can really change something in sales, in marketing, in text generation, in prediction and market forecasting. So for me, that's the bigger race, the one not so often mentioned, but the more important one. So I'm really looking at this whole space with great curiosity. And I'm wondering when this application-oriented race, rather than the one focused on parameters, will take over.
[00:33:03] RS: That’s a great way to put it. Perhaps less exciting is, “Hey, this is what's technically possible with unlimited horsepower.” What's more exciting to me is what you can run locally on an Android device that you can ship across the world, so someone who has no other access to the global economy can now run whatever AI technology they like on that mobile device, right? Like, they can run a transformer locally. They can start building applications there without a server farm and a bunch of professionals and experts around them. So that's an interesting point of view. That ease of use, and perhaps lightness, is better in this case.
[00:33:39] AP: Yeah, I fully agree. That's my assumption. The next few months will show more of these applications. I think the community is really ready for that at this point. Everybody is already accustomed to transformers. Everybody sort of understands their power, at least people who use them, and those who are invited to use them and just start their adventure with them. I think many of our study participants are pretty shocked by the possibilities of transformers and their own potential for using them for anything they can think of.
So it's just, I think, a matter of a couple of months to redefine this race a bit and really focus on applications, building big, friendly ecosystems, and gathering large groups of people, both experts and non-experts, to really contribute to those projects. Because, ultimately, it's not only about building a technology that is just used for some purposes and seems to work there. It's also about opening yourself up to the ideas of external communities and what they would like to do with it. And just creating a platform of dialogue between all those different people, to really come up with applications that we have not thought of yet.
[00:34:52] RS: Yes, very well put. And speaking of applications we haven't thought of yet, I want to ask you to go out on a limb a little bit, Aleksandra, just to cap the show off here. I would love to hear your take on an application or technology you think will prove to be truly disruptive, revolutionary, that people aren't really speaking about right now. Do you have a hunch, or maybe just sort of an idea that you can't necessarily prove, but your gut tells you, “Look, this is going to be a big deal. And we're only at the beginning stages.”
[00:35:22] AP: How scary can it be?
[00:35:24] RS: We'll go full tinfoil hat. Whatever you like, Aleksandra. Let's go full Blade Runner.
[00:35:27] AP: All right. Okay. All right. So my first instinct was to say a bit more about, obviously, generative AI or democratizing AI, but these are trends that we can already observe. They're very palpable today. And I think, also, decision intelligence, which is very interesting. And [inaudible 00:35:45] is also something that can be a real breakthrough in terms of AI algorithms, and also their practical potential and application in many different domains, including the battlefield, unfortunately. So these are the trends that I see.
But if we want to go somewhere beyond that, I would say, well, how about the combination of synthetic biology and artificial intelligence? We have heard about those Xenobots, which are a very primordial form of artificial life. Systems that are modeled by artificial intelligence from the skin and heart cells of an African frog, that then get 3D printed and start to move, and at some point, start to replicate as well. This is something we heard from a press release three months ago: they also started replicating. What do we call that, right? And what's the potential of this?
I mean, I have no idea what safety and ethical protocols are implemented for the Xenobots. But some people are already saying that this is going to be a big breakthrough, for instance, in the medical field. And this is no longer AI, or just AI. This is AI with something else. This is wetware. A promise of wetware in the future, a way of processing information that is done on proteins. So I think it is in a way very scary. On the other hand, I'm very tempted to see more in this space and understand it a bit better. That's my hunch, right? I mean, this combination of biology and AI, we haven't seen much in that space, except for those Xenobots. But this is definitely a domain of interest for many people already today. And they see great potential in it.
[00:37:24] RS: Aleksandra, this has been a fascinating conversation. Thank you so much for being here today and sharing all of your expertise with me. I've loved learning from you.
[00:37:30] AP: Thank you so much for inviting me once again.
[OUTRO]
[00:37:40] RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI. Specializing in image, video and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics and agriculture. For more information, head to sama.com.
[END]