Hello, I'm Chris Anderson, welcome to The TED Interview, the podcast series where I get to sit down with a TED speaker and dive much deeper into their ideas than was possible during their short TED Talk. Now, today on the show, Kai-Fu Lee and the epic race to develop artificial intelligence, AI. Kai-Fu was 11 years old when he moved to America from Taiwan to be immersed in the US education system. And from that point, I guess you could say his life has been positioned to straddle two worlds: America and Asia. After graduating in computer science, Kai-Fu developed a reputation as a star AI innovator, and spent time at Apple, Microsoft and Google. But 10 years ago, he decided to invest all of his time and energy into China's high-tech companies. His motivation? He saw something truly extraordinary in the new generation of Chinese tech entrepreneurs.
(TED Talk) Kai-Fu Lee: Chinese entrepreneurs, whom I fund as a venture capitalist, are incredible workers, amazing work ethic. As an example, one start-up tried to claim work-life balance, "Come work for us, because we are 996." And what does that mean? It means the work hours of 9 am to 9 pm, six days a week. That's contrasted with other start-ups that do 997.
CA: Kai-Fu and I are going to discuss how China's gladiatorial, fight-to-the-death startup culture has made it better positioned to lead the world in the AI race. Also, the amazing ways AI is already being incorporated into China's everyday life, the massive risk of disruption to our economies, and a beautiful way of thinking about how AI could actually enhance the future of human work. I found this to be such a fascinating conversation, alarming and thrilling in equal measure. Let's do it.
Kai-Fu Lee, welcome.
KFL: Thank you, great to be here.
CA: So I'm so excited for this conversation. We get to dive into what you consider, and I think what I consider, to be really one of the most important topics, perhaps the most important topic facing humanity's future: artificial intelligence, AI, and where it will take us, how excited we should be about it, how worried we should be about it, and what we might do with that excitement and those worries. A good place to start, Kai-Fu, would be this: you have said that we should think of artificial intelligence as a sort of epoch-changing invention, along the lines of the steam engine and electricity. Why do you make that case?
KFL: I think we've only had a very small number of truly groundbreaking technologies that impact every part of our lives, and artificial intelligence is a technology that can learn to make very accurate decisions, predictions, classifications purely based on seeing a lot of data. So that kind of capability will bring us amazing efficiency, greater profits, cost savings, and also liberate us from a lot of routine jobs. And it impacts every imaginable profession. So it is as big as electricity, and maybe even bigger.
CA: So, explain that to me, and perhaps in the context of the recent breakthroughs that have been made in deep learning. That seems to be the fundamental shift that has changed AI from being a sort of curiosity that some people tried to work on in university departments, to a business-changing and life-changing innovation.
KFL: The field of AI actually began in the '50s, and it really didn't make much progress until deep learning came about. Deep learning is learning purely on correlation, on huge amounts of data, and it draws the features and conclusions pretty much by itself. So if you think about recognizing pictures of cats, it just sees millions and even billions of pictures, and learns what it is that makes a cat a cat. It doesn't really use human abstractions or features; it's just looking at a thousand-dimensional space and drawing a thousand-dimensional curve that separates what is a cat from what isn't. And that curve is really hard for humans to comprehend, just like our brainwaves are hard for deep learning to understand. So this is not a replication of the human thinking process, but an entirely mathematical classification engine that learns from a lot of data and from what humans tell it: "This is a cat, this is not, this is Chris, that's Kai-Fu." And it learns to recognize them better than people can, once it has seen millions or even billions of pieces of data. So these systems need a lot of data to do well; the more data, the better they do. But you don't really have to teach them specific decision-making rules.
CA: Right, it's able to use that data because someone put a caption of a cat on a picture at some point, so that it knows, basically, cat or no cat. Is part of it, then, that any sphere you apply that power to can show this incredible pace, outpacing us? Talk about some of the fields where there are just vast amounts of data and where AI is already showing remarkable promise.
KFL: Sure, an example is machine translation. There's a lot of data on the web, because we've got all these web pages with translated versions: United Nations proceedings and so on. And what the machine-learning algorithm can do is just take these parallel texts of French going into English, Chinese going into Arabic, and all these pairs of languages that have previously been translated, and feed all of this to an engine, and out comes a machine translator. Now, the current machine translation isn't yet as good as professionals, but it's very good, and that's purely based on text that's been translated on the internet. And similarly, faces and speech, turning speech into text, learning to talk like you or me, even generating natural language, making up a story, AI is getting better and better at it. And there are further advanced applications where it's not yet beating people, but autonomous vehicles are now driving millions of miles, robots are able to manipulate and pick up objects and move around, so we're going to see many waves and new applications coming up over time.
CA: Douglas Adams, who wrote the hilarious sci-fi book "The Hitchhiker’s Guide to the Galaxy," had in that book a Babel fish that you put in your ear and it automatically translated any language coming in. There are already devices on the market that are starting to do this in still, I guess, fairly primitive form. But how long do you think it is before any tourist can go to any major city on the planet with a little earbud or something in their ear and just have a conversation in real time with anyone, in any of the world's major languages?
KFL: Yeah, you picked a good one. We're almost there with this one, I think in about three years. We're basically just cleaning up the engineering hard edges to make the product flow more smoothly. We already have text-to-text translation, speech recognition is getting more and more accurate, and speech synthesis is getting more and more humanlike. So this little device will have a latency of maybe one to three seconds. It can't quite work like humans, which are instantaneous, but other than this one-to-three-second delay, I think the tourists can already buy devices that are usable. In three years, these will be very good devices. Now, one caveat is, when you said everywhere in the world, so you've got to have enough data to train the translation pairs of languages. So if you go to a country in which the language is spoken only by a few thousand people, then you probably don't have the data. So you've still got to go to one of the more popular languages or more populous countries for this to work well.
CA: It seems to me that this is just one area where people haven't even begun to really follow through on the implications. People hear about this, maybe, but clearly don't believe it yet. Because if you believed it was coming, say, in three years' time, a device with a two- or three-second delay, that means that in eight years' time, there's a device with less than a one-second delay. The implications of that are so astonishing. First of all, lots of jobs that are there right now get taken out, but perhaps more interestingly, there's just hugely more connection, I imagine. So many people right now maybe don't want to spend too much time in a foreign place, or if they do, they don't really connect with people because of the language issues. This could be an incredibly powerful device for building bridges between cultures. And I daresay there are lots of unexpected downsides that we haven't even fully thought through. Like in education right now: if you knew that this was coming, does that mean that it's crazy to send your 13-year-old into Spanish or English or Chinese lessons at this point?
KFL: That's a really good question. On the first part of your question, I think people do hold this disbelief about technology, but once a product comes out, people embrace and accept it quickly. Speech recognition through Alexa and Siri is a great example. Five years ago, nobody was using speech recognition; now everybody uses it. The same will happen with machine translation. And there are implications. I think it's interesting that in the short term, there are actually more translators getting paid, because people are using machine translation, it's not good enough, and they pay humans to improve it. So translators are actually making more money. But as the technology improves, there will probably be a lot fewer translators, especially casual ones. Translation will become something that's only for novels and poetry and for representing a head of state, where no mistake is tolerable. And as you said, language learning will become something more cultural. The reason you'd want to learn another language, let's say Spanish or Chinese, would be closer to the reason you'd want to learn Latin now: to appreciate the classics, and to be able to directly take in the foreign language and understand it. It wouldn't be for use as a tourist, or for trade or contract negotiation. So the casual uses will no longer be that useful. One could assume maybe fewer people would or should learn foreign languages, because these magical translators will be around. But on the other hand, if we believe AI will take care of a lot of boring, routine jobs, we'll have more time on our hands, so maybe someone should learn Chinese in order to read Confucius, just as people learn Latin in order to read the classics. It's the same thing.
CA: It's so interesting. You could use your earbud to go into the basics of living in another country and learn the language to dive deep into its culture and history. You can imagine it playing out lots of different ways, actually. And I've always wanted to try this: hey, Alexa, turn up the volume on this podcast, because Kai-Fu Lee is really interesting. Just in case anyone's listening on ...
KFL: You're messing up people's home environments with that.
CA: (Laughs) Sorry.
Now, look, 10 years ago, you started this venture firm, Sinovation, in Beijing, with strong local support there, and have seen this hub, the sort of Sand Hill Road — Sand Hill Road is the innovation hub at the heart of Silicon Valley — flourishing really rapidly there in Beijing, and you are one of the people right at the heart of it. What have you seen recently that has taken your breath away in terms of a new trend or a new type of AI-powered innovation that's happening right there?
KFL: China's venture-capital ecosystem in the last 15 years went from almost nothing to a maturity similar to the US's. More capital is flowing in, more venture capitalists are helping entrepreneurs, and entrepreneurs are working on amazing applications. So we could answer your question in two ways. One is, how are the things people consider AI applications taking off; the other is, how is AI being used to assist brand-new applications? And there are a lot of both. In terms of the AI applications, the instant translator that you talked about — I think the Chinese companies are doing a much better job building very high-quality translators between Chinese and English and so on. I also think if we look at face recognition as a technology, it's being deployed in many more places in China. For example, it's being deployed in the classroom.
CA: Tell me a bit more about that, because that sounds both amazing and a little scary to some people, I suspect. So you've got a camera at the front of the classroom that's looking at the class. And taking note of which kids are paying attention?
KFL: There are multiple uses. First, there is the within-class use. There is a physical teacher in the classroom, and there are students, and in that case, the camera is used to make sure the students are in the room: who's sitting in which seat, who might be missing. Attendance no longer needs to be taken; people who want to go to the bathroom just go, because the system knows. And certainly, it can also be used to check the progress of each student, so the teacher knows this student seems to have been sleeping, or lost or inattentive, and might need to be caught up. But another use is remote teaching, which is something really amazing. In China, education depends on the quality of the teachers. In the cities, there are great teachers, because they were educated by the top schools and may have been Math Olympiad participants, while in the countryside, there are teachers who are not nearly as good or experienced. So a system has been created by a number of companies we invested in, so that the super teacher can, through video conference, teach maybe 30 classrooms with 1,000 students at the same time. In this case, video conferencing broadcasts the teacher to the students, and coming back are 1,000 faces on a giant screen that the teacher can see. Now, when a student raises his or her hand, unlike in a normal classroom, the teacher would not know who the student is. So something pops up on top of the student's head and says the name of the student and how they've been doing in the class.
CA: How recently they went to the bathroom.
KFL: If you want to know, yeah. So this helps the teacher teach a much bigger classroom and bring education to the countryside and the poorer villages, even. On top of all this, the Chinese education system now incorporates automatic homework grading, automatic test and exam grading, so the teacher hardly has to grade anything anymore. Not only for multiple choice, but also fill in the blank, make a sentence, mathematical proof or a chemical equation. All of those are still written on a paper exam, but graded automatically. So the teacher's time is further saved. Because the routine parts of giving homework and exams and grading them are done by AI. And furthermore, we now have records of students' exam and test scores, so the parents or the students who want more drills can do more homework, and focus on areas in which they're weak. So all of this takes away the routine work from the teachers, so the teachers can focus on the mentoring and the relationship and the communications.
CA: That system you described, I'm trying to imagine someone proposing that in the US or in the West. Saying, we've got this great new system that's going to monitor your kids, notice whether they're paying attention and whether they're going to the bathroom and how they're doing. The initial reaction would be uproar, I think. People would say, "That's way too creepy, how dare you, you're invading my kid's privacy, they'll have this record their whole life that they didn't pay attention in geography class, go away." So there's a difference in culture. In China, there's just a fundamentally different stance on how much permission we should give the government, say, to intrude on our lives for "the greater good." You've obviously lived a lot of time in the US, you know both countries super well. Talk about that cultural difference.
KFL: Actually, I totally understand and appreciate the American values, and the concerns these parents would have. But most Chinese parents wouldn't have this particular concern, because excellence for the children, in terms of education, is so overwhelmingly important. I think some parents may also feel it's a bit unusual, and there might be ways to opt out, some schools might not have it, but overall, I think most Chinese parents would say giving up some of this privacy for the students is worth it, if their scores can go up and they can do better in school. So what we're talking about here is not purely privacy in isolation, but privacy in exchange for something else. And I think in China, privacy is something people do value, but not as much as in the US and Europe. And the reason, I think, historically, is that China, and actually, most of Asia, haven't been founded on believing that there are individual, inalienable rights that trump everything. The belief is more "We want the society to be great, the country to be great, and our village to be great. Each of us is just part of the whole. And it is worth it to make the whole better and each person's individual rights are not unimportant, but the collective success is more important."
CA: Well, there's going to be this extraordinary test of what Chinese culture will accept and how far it goes with this social-credit system that's being built. If I understand it right, the intention is to use every piece of data that's available, whether it's personal, academic, health or general behavior in the public sphere, to give people social-credit scores that can be used in all kinds of ways. On the face of it, you could imagine lots of cases where people who otherwise had no access to, say, a government job or other opportunities would get access, because they've built up a good social-credit score. And then on the other hand, you can imagine all kinds of creepier forms of government control. Ultimately, it feels like this is the ultimate case of Big Brother watching you. What's your prediction about how this is received by Chinese citizens, as it's rolled out?
KFL: We're still in the very early stages, so it's kind of hard to tell. Imagine that Equifax, as a credit-scoring system in the US, were run by the government, not by a private company. Which, by the way, is a good thing, because in China, people trust the government more than a private company. And on top of the credit score, not only would it be part of the determination of whether you get a credit card or a mortgage, there are some new elements, like whether you might be considered unsafe on a flight, and whether your criminal record might also factor into some of these financial transactions. I don't think people generally have much issue with it. Obviously, you're making an extrapolation; I don't know if it's going to happen or not. But let's say we did extrapolate a bit. If you believe the government is benign, that people want the society to be better, that there is a feedback mechanism, and that the system is transparent and fair, then people would know that if they did certain things, they would get more conveniences, more security and more upside, and if they did bad things, it would be punitive and cost them chances. If it were a transparent and fair system, I think people here in China would not have a lot of objections. But of course, that presumption may not be believed by people in other countries.
CA: It's such a big issue. Because if you put together what's being built with possibilities of the future, like facial recognition, you could imagine a government that maybe started out benign, but had been seduced by power and really wanted complete control. It feels like in China, it's even harder to have that debate, because if you have that debate too rigorously or seriously, you get your own social-credit score marked down.
KFL: (Laughs) No, I don't think so. Actually, the Chinese government cares a lot about what people think, and the feedback loop does exist. While it's not the same feedback loop as in the American government system, I think any government's right to govern is measured by people's acceptance of, and happiness with, its rules and regulations. As an example, the Chinese government came out with a possible increase in taxes for VCs like us. So VCs spoke up and said, "Wait a minute, let's look at the implications of this tax. On the one hand, you may get some more money; on the other hand, there will be fewer people doing VC investing, and that's not good for the entrepreneurial ecosystem." The government listened and decided, OK, let's pull back on the taxes. So if there were significant numbers of people complaining about aspects of the system, I do think the government wants to please the people and is likely to listen to that feedback.
CA: Well, certainly to please the people who have power. What you're enabling in China is pretty incredible. Switching gears in this conversation slightly: as a child, I was told that God watched every move I made, and that when I die, there would be this final judgment where the good and bad we'd done would be weighed up and we'd go with the sheep or with the goats. And in different ways, parents all over the world tell their children that Santa Claus is watching, knows whether you've been naughty or nice. It's possible that humans need a little bit of fear in their lives to behave. This is probably a terribly unpopular thing to say, but it's possible that there are really bad tendencies in most people, and you actually need a level of knowing that you are being seen. That's always what's happened, you know — human social systems have been this trade-off between ... There have to be systems for people to punish freeloaders, or human systems break down. With the gradual decline of religion, part of me wonders whether the future we all have is just inevitably, in some way, going to involve a lot more exposure of our lives to someone, whether it's a corporation or society as a whole or the government. And that it will feel uncomfortable, and there may be some bad consequences from it, but there may also be very good consequences, and we may have to get used to embracing some of that discomfort. Perhaps adopting more of the Chinese way of thinking: "This is for the good of society."
KFL: Well, let's take an American example. I've worked in many companies that have gotten into lawsuits, antitrust cases, so people became very careful about what they put in their email. In the companies I worked in, American companies, we were told, "Don't say anything in an email that you don't want the whole world to see." And I think that's a similar, self-correcting behavior. Similarly, we now see Jeff Bezos' text messages, which were private, being exposed. When people see examples like that and are concerned that maybe they don't have that much privacy, and that things can get exposed one way or another, then they are more self-disciplined and don't say the things they shouldn't or wouldn't say. Now, the debate is whether that's good for society or not. One could say, hey, that's less freedom, that's not the way it should be; on the other hand, people are better behaved. I agree with you, we really don't know the implications of how this will play out in the East and West. I would say that using large amounts of data, having some data be accessible by companies or governments, and having some data be exposed to the world is a better fit for societies that are more collectivist in nature, and is viewed as a big problem and looked at with paranoia in societies that are more about individualism.
CA: So, I'd love to turn, Kai-Fu, to your analysis of the competitive situation between the US and China, and what each country is bringing to the development of AI. In your book, you talk about how the original deep-science, deep-computing innovations were developed in the West and currently sit inside some of the big US companies, especially Google. But that fundamental research in AI may not be the key driver of what happens going forward; that there are three other, absolutely key determinants of how this plays out, and in each of those three, China has a meaningful advantage. They are, basically, entrepreneurial culture, data and the attitude of the government. And I'd love to talk through them. Both in your TED Talk and in your book, you give really amazing examples of the entrepreneurial, gladiatorial culture that you talk about. Give us a story of a Chinese gladiatorial entrepreneur at work.
KFL: Well, the gladiatorial competition refers to how, in a particular field, companies fight to the death and only one is left. So what are the key elements? One is, how hard does an entrepreneur work to get something running very quickly? Take some non-AI examples. In the US, in the area of food delivery, there are Groupon, Yelp, Grubhub, OpenTable, all working in a similar area. In China, that's Meituan, a 30-billion-dollar company that does amazing things with food, and it now has AI backing up its capabilities. In China, if you want to eat anything, anytime, just open your phone app, and there are a thousand places that will deliver to you, at around 70 cents per delivery. That's because companies competed on lower delivery cost in order to win more customers. And at the same time, they deployed AI algorithms for routing food to your home, and they competed on finding lower-cost people who could deliver on lower-cost mopeds, using high-tech and low-tech solutions, whatever it takes. An American professor was in China about four months ago. She got stuck in Beijing traffic, which is a big problem, stuck for an hour, and she was going to miss dinner and have to go straight to the evening meeting. So her driver said, "Let me order you some takeout." He managed to order takeout, delivered to the limousine she was in, by estimating how fast they would be moving, and she rolled down the window and the food was passed in by the person on a moped. So these kinds of almost science-fiction-like capabilities are now real in China. And people will work in these companies, in Meituan, for example, for 100 hours a week. It's a combination of winner-takes-all, the hunger of the entrepreneurs, the incredible hard work and the massive amount of money being put in. And it's very outcome-driven. No one thinks they have to beat the competitor on how good an AI algorithm is.
They just want the final result: getting the food to your table for 70 cents in 30 minutes. And that kind of tenacity just doesn't exist in the US. That's the real advantage for the Chinese entrepreneurs.
CA: Is part of it, Kai-Fu, that many of these entrepreneurs grew up poor, they literally grew up hungry, and so the motivation is more deeply grounded than in their competitors in the West?
KFL: Imagine someone who might be 30 or 40 now, whose family has been poor for 20 generations. It was only when Deng Xiaoping said to let some people get rich first that people got the opportunity. So the gate opened 40 years ago, and the child was born, into a single-child family, so this child is expected to provide for his or her two parents and four grandparents. There are no other grandchildren to provide for them. And if their families were ever to be lifted out of poverty, it would be up to them. So people are extremely hardworking, hungry, and they really treasure this: the first time in thousands of years that anyone can have the opportunity to be an entrepreneur and make it. That's what makes them work 100 hours a week. And also, role models. They see people like them, like Jack Ma, someone who never went to a great school, yet became one of China's most successful and richest people.
CA: Let's touch briefly on the other two areas where you think China really has an edge. One obvious one, I guess, is the data, just the scale of China, and the fact that this huge lead on cashless payments gives a vast treasure trove of data that is an accelerant to building so many companies. The third one is the support of the government, the fact that the government has specifically said: we get it, we see that AI is going to determine so much in the future, and we want to have leadership in that area. That's had, in your view, a very real impact on how cities across China have embraced AI start-ups and really seeded success in a way that is not happening at the same scale in the US, right?
KFL: Yes, and between these two factors, I think data is actually more important. China is a large market. Not only does it have more breadth in its users, but also more depth in the usage per user. Each user is using the internet more and contributing more data: mobile payment is an example, delivery of food, shared bicycles; there are many, many more. So you've got more users, and each user contributing more data, as well as an environment where more people are willing to give up a little more data for convenience, security or advancement. On the government's support, China recognized the importance of AI about two years ago, and started coming out with various policies. The most important one was probably the State Council plan, and that plan basically set the tone of how important AI is. But all the local governments started to execute. Some local governments would reward AI companies, others would help create VC funds to invest in AI companies. But most importantly, some of them are building infrastructure that entrepreneurs can't afford to build. Take autonomous vehicles as an example. The US is ahead of China in the technology, but it's all about who launches fast, gets data and gets the training loop going. So what the Chinese government is doing is paving new highways that have sensors that help make autonomous vehicles safer. One example is the city of Xiongan, the size of Chicago, a brand-new city being built, with autonomous vehicles as part of the design of the whole city.
CA: So it's been noticeable, in the US, there's a growing sense of concern about possible loss of competitive position vis-à-vis China. Maybe this is partly because of your book, Kai-Fu — (Laughs) I don't know.
How does this not end badly? I mean, you're talking about technology that is incredibly strategic for the future. If we end up in a kind of arms race, where two countries feel like their future ability to influence the world is at stake, that's a recipe for breakneck development without paying attention to things that some people really need to be paying attention to when it comes to AI, such as trying to embed the right kind of human values into it. Do you see any scenario where the US and China really feel that they're getting benefit from each other's AI development, and some of the edge and fear comes out of that competition, or is this going to be, essentially, a kind of bloodbath?
KFL: I'm still an optimist, because the companies are working together, the researchers are working together, and ideas about new technologies are being shared. The AI academic community has been very, very open to sharing, because it's one of the very few sciences where the same data and the same algorithm give you the same result, so people believe each other's results, and work is always built on the shoulders of giants. And there are a number of efforts, including at the World Economic Forum, where I am a co-chair of the AI Council. There is the Partnership on AI, where companies willingly self-organize, get together and share best practices. These include things like how to protect privacy and security, how to deal with data biases, and also job displacement. So there are various groups working together on these, and I would hope the US and China, as two governments, will view the opportunity to collaborate as equally important. I think the two countries wanting to be technically stronger is understandable, but that doesn't preclude them from discussing how to deal with issues related to job displacement, security and so on. Also, there are a few application areas that countries naturally view as a common need for humanity. Health care would be a really good one for countries to collaborate on, because it's definitely not a zero-sum game. Education might be another.
CA: Health care is definitely a case where I've heard the argument made that massive success of AI in China can help everyone else around the world, if it is a data game. At the moment, for people with obscure diseases, there's just not that much data out there for AI to play a role in better treatments, but in a country as large as China, that may be where the key insights come from, that here is a promising cure, and that we will learn from those numbers, and indeed, that it will become in the interest of the world to merge databases, etc. But if it's the case that China is just much faster and better at implementing, that's an argument ... I would have thought someone in the US and in the West might view that as an argument for: why are we maintaining an open environment for sharing all these advances if we aren't able to benefit from them as much as others? And I wonder whether there will be pressure on that open stance.
KFL: One could speculate — I think, on the one hand, the research community really wants to share. They recognize it was sharing that got us as far as we've gotten. And I think AI researchers are sharing people by nature. On the other hand, there are new rules coming up in the US, such as export control, and AI falls completely under the new export-control rules. So how these two factors will play out, I don't know. But I personally really don't think there are that many huge breakthroughs that can be made and protected, given the open-sharing community that exists. And even just the existing known algorithms can make a huge difference already, in China, the US and elsewhere. So whether the next professor is open to sharing with the world, or forbidden from sharing due to export control, I don't view that as a big short-term issue.
CA: OK, so let's talk now about the possible downsides of AI to humanity generally. In the West, that argument is often framed in terms of, "What happens when general intelligence arrives, when super intelligence arrives, suddenly we discover we've got these machines who may have goals that we don't understand." This is not what you worry about, in terms of AI. Could you explain why you don't worry about this?
KFL: If we look at what it is going to take to have all the human capabilities, many, many more breakthroughs are going to be needed. Today's AIs are basically pattern-recognition engines that are tools under our control. And for these tools to grow into something as powerful as people, or even more powerful, we'll have to figure out all kinds of problems, such as common sense, planning, strategy, creativity, the ability to feel emotion and communicate with people effectively. In all of these things, many more breakthroughs are needed — we're nowhere close. So if we take a look at where we are with deep learning, that's one breakthrough — we'll probably need 10 more breakthroughs like it. So to assume we'll have a bunch more breakthroughs and suddenly, AI just becomes smarter than people — there is no engineering path to get us from here to there.
CA: Just to push back on that for a minute. Some of the argument from the people who are so excited about deep learning is around the fact that when you look at the brain, the different cognitive functions that cover most of the things we care about happen in the same physical infrastructure. Isn't it possible that what a child does as they develop is a massive series of essentially deep-learning-type algorithms going on there? And that as we continue to connect the dots here, it may be possible to assemble some kind of general intelligence from that?
KFL: I think anything is possible, but I think it's quite unlikely. Because if you just say that theoretically, everything can be built from parts, we could say that when we have transistors, that's enough to build everything. Or when Alan Turing invented the Turing machine, everything can be built from that. I think it's too theoretical. Until we see the abilities I talked about beginning to be implemented, and real intelligence being shown, I think we're just way too far from where the humans are. And I think I can say with confidence, in 20 years, we will not see general intelligence.
CA: So talk about what you are concerned about.
KFL: I think a big concern I have is about job displacement, because AI is able to do routine tasks better than people in a single domain. And a lot of the jobs that exist today are routine jobs, both white-collar as well as blue-collar jobs. If you add up all these jobs that have large percentages of routine components, we're going to end up with a large number, like 40 percent of our jobs. And it doesn't mean there will be 40 percent unemployment, but AI will be able to do most of the jobs of 40 percent of the people at a much lower cost and higher accuracy and efficiency. So will that lead to a big unemployment problem? We don't know, because we don't know how technical feasibility will translate into actual employers deciding to displace people with automation. But I think what it will definitely lead to is a large number of people who have been doing routine work for decades now faced with a world with very little routine work left. I certainly believe AI will create more jobs, but this requires a massive amount of planning and resources and cost and a mentality shift from society. So I worry: are we going to be able to shift fast enough and prepare for this future world in which the routine jobs are displaced?
CA: I'd love to just dig into this a bit deeper, because in your talk and in your book, I think you have a very distinctive way of thinking about the opportunity here, that I personally found quite inspiring and super insightful. It was brought on by your own brush with death: you had a stage-four cancer diagnosis, and you were forced to rethink your own values. And one particular insight came out of that that I think has driven your thinking.
KFL: Yes, when I faced death, what I realized was that in my whole career, I basically was a workaholic, like all the Chinese entrepreneurs. I was their funder, but I worked just as hard. I worked 80 hours a week, and work became the meaning of my life. When I faced possible death, what I realized was, work isn't the most important thing for me. If I had a few hundred days or 100 days of life left, I'd want to spend them with the people I love. I'd want to give back the love they've given me, and I've been too selfish, only doing work. So that's what woke me up. And I think that revelation, translated to the AI-induced job displacement, is: have we, as humanity, been brainwashed by the importance of work? What makes us happy in the world is our family and loved ones, and those are things that AI cannot do. So on the AI-induced job displacement, we should think about, "Well, maybe it isn't that AI is taking the jobs away and we're in trouble. Maybe it is AI being sent here by God or our collective will to remove the routine jobs so we can focus on, and have our children and grandchildren focus on, the creative jobs, the empathetic jobs, the compassionate jobs, the jobs with complexity and depth and creativity." And we can start to finally realize that we are not on Earth just to repetitively do work, and that we should think about the things that we love to do, our passion, and what it means to be human.
CA: Someone might listen to that, Kai-Fu, and say that sounds beautiful and inspiring and idealistic. But you've turned it into, really, quite an insightful way to think about how jobs of the future might yet scale in harmony with AI. I guess the two key things are, you're saying, pay attention to two axes. On the one hand, a creative axis: how creative is a job? Some people will say that AI has the ability itself to be creative, and I think there's a debate about that. But you would, I think, argue that only to an extent, and that humans, for the foreseeable future, maybe until we have superintelligence in 60 or 100 or 500 years' time, are just more creative. That's something that we're really good at, and we care about human forms of creativity. So that is one axis: creative jobs versus less creative jobs. And the other axis is linked to compassion or empathy or human values, which you describe as social or asocial jobs. And so if you put those two axes opposite each other, you get this quadrant, and that has allowed you to categorize jobs. And the bottom left, which is the asocial and noncreative jobs, that is the real danger zone, where — do not prepare your kids for those jobs. And some examples of those jobs, that you do not recommend as a parent you prepare your kid for, would include what?
KFL: Well, it would include telemarketing, telesales, customer service, cashier, back-office processing, data entry, truck driver, chauffeur. It would include assembly line work, fruit picking and all kinds of routine, repetitive work. Because routine, repetitive work generally doesn't require a lot of creativity, and it also doesn't involve human connection, empathy and compassion.
CA: And then you have another category of jobs, which are not necessarily more social or empathetic than those, but are more creative, that you describe as "Slow Creep": jobs that may have a solid future for now, but that AI will gradually intrude on. Things like being certain kinds of artists or scientists or graphic designers. What else would you put in that category of "maybe OK for a bit, but look out and be ready to retrain?"
KFL: Yeah, I think graphic artist is a good example. AI is beginning to do some of that. Photographer is another example. But beyond that category are the most creative jobs. Those jobs are going to be safe for a long time to come, and also, there is an opportunity for human-AI symbiosis. That is, a creative scientist trying to invent a new drug, now assisted by AI to do the filtering and the ideation, may be able to create twice as many drugs in his or her lifetime, thereby bringing a lot of benefits to humanity. So that's an example of human-AI symbiosis.
CA: What I think was unique, almost, in your analysis, or certainly was surprising to me, was that you have a strong story to say to people who may not be creative, but who you still think have a rich future. Yuval Harari talks about the useless class: that essentially, unless you're super creative or you can code machines or whatever, you will end up almost in a useless class, where your work is not needed. But you have a category that you call "Human Veneer," which are jobs that aren't that creative, but they are supersocial. And you picture AI actually empowering people to enhance their human values and contribute to society in a way that will make us all feel better cared for. Give an example of some of those types of jobs.
KFL: These jobs would have a high degree of human interaction, where the AI performs the role of the analytical engine, and then the human provides the warmth around it. So think about the education example we discussed. AI can grade the exams, and help do drills and teach English pronunciation, while the teacher does the mentoring, helps the kids figure out their career, their future, helps them connect with other students, and pays attention to their growth and their weak areas and helps them advance, at maybe a much lower student-to-teacher ratio. Another example is doctors. The human memory and the human analytical engine cannot be matched against AI. AI is already beating people in specific types of disease diagnosis. And over time, that AI diagnosis will get better and better, so that the human doctor will not need to memorize all the new drugs and treatments and personalized treatments — AI can do that. A human doctor can connect to the patient, listen to his or her troubles, tease out all the symptoms and family history, and use the AI engine to come up with diagnoses and recommendations. And connecting with the patient, getting the patient to trust his or her recommendations, visiting the patient at home, spending a lot more time with the patient, thereby giving the patient the feeling that he or she is being looked after: all of that will add a lot to the health-care experience as well as the patient experience.
CA: One thing I find super hopeful about that is that the scale of jobs in that space can actually massively increase. Sometimes, the discussion about future jobs implies that there's just an ever-dwindling number. But look at the sector: how many lonely people there are in old-age homes, how many areas where, in principle, society could pay someone to combine human values, service values and empathy values with assistive intelligence. That was actually quite hopeful.
KFL: Yes, if we look at US health care, the health care services sector will grow in the next five years. That includes nurses, orderlies, at-home medical care as well as elderly caretakers. So the total number of needs is there. And the people who might lose their jobs to AI displacement from the routine jobs can be trained for these jobs. And in fact, Amazon is providing some training for its employees to become nurses. So I think that, directionally, is very positive. But at the same time, there's a challenge, which is that the pay for the care professions is really too low. Compared to, say, a truck driver, it's about half. So how do we help effect that transition, when the new jobs that are being created in the compassionate category are not paid very well, and therefore also don't have very high social status? We need to work on changing that.
CA: And this connects directly with one other recommendation that you have, which I think is really interesting, which is an alternative to the much talked-about universal basic income. In Silicon Valley and elsewhere, it's fashionable to say the only response, as technology takes more and more jobs, is that we're going to just have to pay everyone a flat wage and let them then figure out their own meaning in their lives, and move away from work as defining our meaning, et cetera. You're really suggesting a much better idea is for societies to invest much more in these human, compassion-fueled, tech-empowered jobs. Do you see a route by which we could get there?
KFL: Well, first, I think universal basic income is potentially very dangerous, because if you provide everybody with universal pay without their having to contribute anything or be retrained or take on something new, then people can fall into depression and addictions and just use the money to do nothing. So I think we really need encouragement, because the main issue is displaced workers not being able to take on the new jobs that are being created. So applying the money not to give everybody the same peanut-butter amount per month, but to encourage retraining, is important. The other thing, I think, is to encourage people to go into the compassionate professions, and to increase their social status. An example of that could be elderly care: we're going to live longer, so the need for elderly care will increase and employ more people, but people are reluctant to go into it. So how do we encourage people to take on that profession? One possible way is to offer some kind of special assistance to people over 80, so that they are entitled to five or 10 hours of care per week. That's supported as part of the overall health package, and those five or 10 hours a week will be paid at twice the minimum wage. So that will provide better pay for a large number of people, with incentives to move in. So I think the money should be selectively spent. I do think a pool of money needs to be created to help assist the displaced workers in moving on. But it's not to give the same amount to everyone; it's to give more to those who engage in retraining and become gainfully employed, and also to those who go into the professions that are not going to be displaced by AI, such as elderly care or other types of compassionate professions.
CA: Your book and your talk have framed AI as this — there are really two main players; the US and China are holding most of the cards right now. That leaves five billion people in the world who aren't in those countries. What would you say to them: how worried should they be about their future, and is there something their governments should do to mitigate the fact that they're not currently an AI leader?
KFL: I think the inequality among countries is as big a problem as the inequality within countries. Within countries, we're talking about the AI tycoons who make a lot of money because of AI versus the people who are displaced. Among countries, we're talking about the US and China, who have almost all the wealth generated from AI, and who can deal with a problem such as displacement because they've generated all this wealth. But all the other countries are going to be significantly challenged, especially countries with a large population and no high-tech companies. They might have been looking to the China model of outsourced manufacturing, or the India model of outsourced IT or services, but those jobs are going to be displaced by AI, because the very fact that they can be outsourced means they can be displaced. So the China model and the India model don't work, the number of workers being displaced is significant, and the large population, which used to be viewed as an asset, becomes a liability. So with all these problems, I think each country needs to figure out its own plan. I think the more populous countries can see if they can use their own population and data to create some kind of an AI industry, because they've got the population and the data. The countries with the technologies should get stronger and stronger and train more AI people, as Israel has done. And the many other countries, who can't really do either, have to think about a couple of things. One is how to bootstrap their technology workforce, because ultimately, those are the people who will drive the economy forward. Second is, what are some tech businesses that require a local presence? For example, an Uber or Meituan kind of company might be hard to run from the US or China into Africa; it might need to be local. Those are the ones that won't be taken by the US and China. And lastly, I think countries do have to look at what the next set of compassionate services is that the country can move into.
Maybe it is about becoming a tourist destination, maybe it is about doing outsourcing for nanny and elderly care. So a whole new set of outsourcing will happen, moving away from supply-chain, manufacturing and IT outsourcing and into this service outsourcing. Those are the types of solutions countries have to look at. I think it's not realistic for every country to say, "We're going to compete with China and the US and become the third giant." I've heard so many countries say that, but it's really not realistic. I think they have to look realistically at the examples I gave and see what they can do.
CA: And at any rate, there probably isn't a more urgent public debate in those countries than that, because the train is going to move pretty fast. I think people listening to this just have so many different thoughts of excitement, interest, horror, fear, concern. What idea would you want to plant in people's minds as you think about AI in the future that you just want people to hold on to, if there was one they could focus on?
KFL: I want people to think about 30 years from now. When all this is said and done and things settle, AI will have contributed a lot to us as humanity. AI will contribute a huge amount of economic value by being able to do routine work more cheaply. That could be used to reduce poverty and hunger. AI will liberate us from routine jobs. So ultimately, when all the transitions are done, people will no longer have to do routine jobs. And that is a huge gift to humanity, so that people can think more about what they love to do, what they're good at doing, and why it is that people exist, the meaning of life, and pursue their hobbies and spend time with their loved ones. So it is really a liberation at the end of the day, but we just have to get through the challenges that come in the next 15 or 20 years.
CA: I have to say, I love that vision. And I very, very, very much hope that that is what happens. Kai-Fu Lee, thank you so much for spending all this time — it's really so fascinating. Thank you so much for your time.
KFL: Thank you, thanks for having me.
CA: That was Kai-Fu Lee, a computer scientist and one of the leading tech investors in China. To listen to Kai-Fu's TED Talk, visit TED.com.
This week's show was produced by Megan Tan. Our production manager is Roxanne Hai Lash. Our mix engineer — David Herman, our theme music is by Allison Layton-Brown. Special thanks to my colleague Michelle Quint. If this discussion about the future of AI gets you buzzing, tell us about it. Rate and review the show on iTunes or wherever you get your podcasts. I'm Chris Anderson, thanks for listening. On our next episode, one of the world's most famous introverts, Susan Cain. We talk about the science of introversion, and how the world could be so much better for introverts, and extroverts, for that matter — at work, at school, and in our personal relationships. Susan Cain: One place that the difference shows up for us in a way that you might not think about is how you express enthusiasm. I'll often hear from people who will think that their introverted colleagues don't care about something that just happened. And then you talk to that person, and they actually care really deeply.