The TED Interview
DeepMind's Demis Hassabis on the future of AI
July 28, 2022
[00:00:00] Steven Johnson:
Welcome to the TED Interview. I'm your host, Steven Johnson. When future tech historians look back at the first few decades of the 21st century, I suspect they will point to a day in late 2017 as one of the enduring milestones from that period: the day the deep learning program AlphaZero played 44 million games of chess against a duplicate version of itself.
The software had begun the day preloaded with only the basic rules of chess. Pawns can only move straight ahead unless they're capturing a piece. Bishops move diagonally. You win by checkmating the king, and so on. But by the end of those 44 million games, which unfolded in less than a day, AlphaZero had become arguably the most dominant chess player the world had ever seen.
AlphaZero is one of a number of pioneering AI projects created by the UK company, DeepMind, founded in 2010 by one of the most fascinating minds in the digital world, Demis Hassabis. Now, if you wanna feel good about your own CV, I suggest you cover your ears right now because Hassabis has had a very productive career for a guy who is just in his mid-forties.
As a child, he was one of the top-ranked junior chess players in the world. In his mid-teens, he talked his way into a job as one of the lead designers of a best-selling video game. After a computer science degree at Cambridge and a PhD in neuroscience at University College London, he founded DeepMind in his early thirties, selling the company to Google only four years after its founding.
Now, you can probably imagine that when we first started sketching out ideas for a series of interviews about the future of intelligence, Demis Hassabis was very high on the list of people we wanted to talk to. But the strange thing about DeepMind, like a lot of the AI labs at big tech companies right now, is that while the organization is working on some of the most revolutionary and controversial new technology out there, almost none of it is available yet for ordinary consumers to interact with.
DeepMind is working on neural nets that can predict the shape of proteins, which may someday help design a drug that you might take to cure cancer or reverse Parkinson's. They're working on an AI that might be able to control nuclear fusion reactors, which could one day give us a source of clean energy at a much lower cost.
But most of these projects are still behind the curtain or accessible to a small number of outside researchers. So for the next hour, we're going to ask Demis to give us a bit of a peek behind that curtain and talk about where he thinks AI is going to take us in the coming years. One of the smartest minds in the world talking about the future of intelligence.
That's this week's TED interview.
[00:03:11] Steven Johnson:
Demis Hassabis, welcome to the TED Interview.
[00:03:14] Demis Hassabis:
Thanks for having me.
[00:03:15] Steven Johnson:
We’re really excited to have you on the program, and we're gonna get into some very profound questions about intelligence and machine learning and the future of health and creativity. But I wanted to start with, well, with video games, which is appropriate for DeepMind's history, but also, in a way that perhaps some listeners don't know, very appropriate for your history, in that really one of the first jobs you had as a teenager was being one of the key designers of a classic nineties simulation game called Theme Park, which I played back in the day. And I also played Black and White, which I think you maybe had a hand in as well, which is a fascinating game.
And I, I guess I wanted to start there just on a kind of biographical note, like how did you get the gig at Theme Park, and how did, how did that lead into the work you're doing with AI?
[00:04:08] Demis Hassabis:
Sure. I mean, uh, yeah, games is a good place to start with me. Um, I've been playing games and fascinated by games since I can remember, um, starting with chess, which I learned to play when I was, uh, four years old. And then, you know, I captained many England, uh, junior chess teams. And actually, for a while, that was what I was going to potentially do, be a professional chess player.
Um, but actually the thing that it left on me, the imprint it left on me was thinking about thinking. So you know, as you try and improve, especially as a junior chess player, you're trying to improve your decision-making and your planning and all the things that make you good at chess, and that chess teaches you, including things like visualization and imagination.
And for me at least, it made me start thinking about what it was about the brain that was coming up with these ideas, sometimes mistakes, and got me fascinated about the brain and neuroscience and intelligence. Um, and then I discovered computers a bit later and learned how to program, and, uh, those different loves of computers and programming and games naturally came together in designing and programming video games.
And I was lucky enough to, you know, come second in a national programming competition when I was around 13, 14. And the winner got a job at what was then the premier software house in Europe, called Bullfrog Productions. They made amazing games, some of my favorite games, like Populous. So I, I rang the, the CEO up and said, “Can I come for work experience?”
[00:05:35] Steven Johnson:
Um, like most 14-year-olds do. Most 14-year-olds are just like, “I'll just call up the CEO.”
[00:05:38] Demis Hassabis:
I think, yeah. Yeah. So, so he was, you know, fascinated by what I was doing. And then, uh, rapidly I ended up taking, you know, some time off between school and university. And I used that time to program Theme Park, like you said. And actually, at the time, in the mid-nineties, it was the golden era of games design, uh, with fantastic creativity going on, but also a lot of the best technology was being developed as part of games: graphics technology, but also AI.
And, um, all the games I've written, including the two you mentioned, Theme Park and Black and White, have all had AI as the core gameplay component so that the game actually sort of reacts to you and the way that you play as an individual.
[00:06:18] Steven Johnson:
It's such an interesting history, the, those simulation games. I think when, when you're dealing with, you know, managing resources, trying to set goals for yourself, trying to deal with, you know, multiple layers of the, of the simulation…
You know, kind of starting with Sim City and then going through games like Theme Park and, and Black and White, to me, it's an argument I've been making for many, many years that, that those should be taught in schools. I mean, it's an incredibly rich way of thinking and it's very different from the kind of thinking you do when you read a novel or the kind of thinking you do when you solve a math problem.
Uh, but it's actually, it aligns with a lot of the kind of thinking that one has to do in life. Probably more, more than some of those other fields.
[00:06:56] Demis Hassabis:
I agree. I agree. Yeah, I totally agree. And, and, and actually, I mean, I think, first of all, chess should be taught as part of the school curriculum, I think ‘cause it teaches you phenomenal skills you don't learn, I think, that are generalizable and transferable to other parts of life, like planning and visualization and, um, but also I agree with you with these types of simulation games.
You can call them sandboxes, even. So the idea is, you know, it's almost like a playpen for your creativity as a gamer. So very different from normal games, where the game leads you by the hand through it. Um, and Theme Park, you know, the idea behind that was you designed your own Disney World. And, uh, thousands of little people, AI people, came into your theme park and played on the rides, and, depending on how well you designed the theme park, they were happy or less happy.
And then of course, if they were happy, you could charge them more at the burger stands and for the cokes and balloons and other things. So the whole economic model was in there. So yeah, it was a really interesting and formative experience for me, I would say, not only professionally, but also in demonstrating to me the power of AI.
Uh, and in those days this was just fairly traditional AI, right? Obviously, uh, deployed within a game: finite state machines and other things. Not like the kind of AI we build today, but it was still amazing to me how much enjoyment people got from interacting with a game like that, that, um, had AI at its core.
[00:08:15] Steven Johnson:
We're gonna get into this in more detail, but DeepMind has a long history involving algorithms that it has developed to play games. Um, but as far as I know, none of them have been simulation games, right? I mean, it's kind of Space Invaders and Q*bert and StarCraft and things like that. But there aren't any simulated Black and White players, uh, in the canon over there at DeepMind.
Is there a reason for that? I mean, in a way it's, it's, it's kind of the, the archetypal vision of a future AI that we have in our heads that, you know, we would have some artificial intelligence that will manage the city for us very effectively. So that presumably is where we want to go. But, but you haven't done that yet, right?
[00:08:58] Demis Hassabis:
No. No, you're right. That's an interesting observation. And actually, there's almost three chapters in my life of games being important to my life and career. One is the chess, uh, in my youth. Then there was designing and writing professional video games. And then finally this third chapter of using games at DeepMind, from the beginning, as part of the thesis of DeepMind, with simulations as a training ground for AI systems.
A very convenient training ground for many reasons. Obviously, you can run millions of simulations at once in the cloud. You don't have to deal with things like real robotics, where, you know, often you end up worrying about the hardware, breaking the motors and other things. So it was something that, uh, I thought was the perfect training ground for AI systems to make quick progress.
And of course, the other nice thing about games is, um, you know, game designers and games companies have spent thousands of person-years making these things, and they're challenging for human players, right? They're obviously challenging and fun for human players to play. And you can kind of go up the stack of difficulty even in computer games.
So we started, kind of famously now, with Atari games, you know, probably the earliest computer games that sort of, you know, came into the mainstream from the seventies and eighties. Space Invaders, Pong, these classic games. And that was difficult enough already for us back in 2012, 2013. I remember we couldn't win a point at Pong.
And I remember, for six months or something like that, us thinking, “We're never gonna…” You know, it was moving the bat around, and occasionally it would get the ball back, but we couldn't work out for ages: was it random? It couldn't win a point against the obviously inbuilt AI.
And it was like, “This is impossible.” ‘Cause obviously it was learning just from the pixels on the screen. And then finally, you know, it got a point, we should have recorded that moment, actually the first point it ever got at Pong. And then pretty soon after that, it won a game, you know, to 21 points. And then very soon after that, it was winning 21-Nil, and it couldn't be beaten anymore. And that was the first time we saw that kind of, you know, exponential improvement. And we would see that many times again.
So, of course, we did that with all the Atari games. That was our famous first result, I would say, and really the birth of deep reinforcement learning, uh, our new technique that we, you know, largely pioneered. And, um, then we went to more complex games, you know, like Go, uh, the most complex board game out there, and then things like StarCraft, which is the most complex real-time strategy game. And so, you can sort of pick games that are in the sweet spot of being not too easy, so it's trivial to solve them, but not so hard you can't detect any progress.
And I think the reason we've chosen games that are, um, more competitive to begin with, rather than these sandbox games, is, uh, it's more convenient to have a metric that you can hill-climb against.
So winning a game gets the system a reward, ‘cause we use reinforcement learning, and, uh, maximizing the point score, you know, in something like Space Invaders. So, you know, very quickly you can benchmark if you are making improvements, and actually you use that reward, uh, those metrics, to improve your algorithms.
But having said that, I think we are moving now. We've basically won at all the games there are, so Go and StarCraft. So we're actually moving more towards these free-form sandbox simulations now, where the difficulty is that the AI in a way has to come up with its own goals.
Right. Like in a Minecraft or, you know, like a Theme Park style game. But that is actually where we are moving into now, including building our, uh, simulations internally.
[00:12:28] Steven Johnson:
Yeah, precisely. What makes those games interesting intellectually as a human player is that you set your own goals, and you decide what kind—do you wanna build a, you know, giant, dense urban metropolis? Or do you wanna build a, you know, suburban paradise?
All those kinds of questions you ask make them harder as a measure of progress, um, when you're in that training mode. A couple more things I wanna ask about games, but I think it's probably useful for our listeners who may not have spent as much time in this space. Let's just define deep reinforcement learning, and maybe start with Pong.
I mean, I think that's a great example of, you know, starting with a very simple task: win a game of Pong, which, you know, a six-year-old can do.
It was hard initially because you started from scratch—
—that the computer knew nothing about other than the pixels and just walk us through how that works.
[00:13:20] Demis Hassabis:
Yes, exactly. So the reason that was hard for these Atari games is all we gave the system was the pixels on the screen, the raw pixel values. Not the rules, or what it was controlling, or how to get points, or what was on the screen. It had to kind of figure that out for itself.
And there were two main technologies that we combined. Firstly, there's deep learning, which is all the rage right now. Um, and it was very nascent when we started DeepMind back in 2010. And the idea there is a sort of hierarchical neural network, loosely inspired by the architecture of the brain.
And um, and the job of that part of it is to create a model of the environment or the data stream that it finds itself in. So in the case of Atari, you know, the Atari screen, what are the things, what are these pixel numbers, you know? And obviously there's correlations and structure in those pixels, so it has to figure that out.
Then there's the second part, which is reinforcement learning, which we do a lot of work on. And, um, that part is the, uh, reward-maximizing or goal-satisfying part of the system. So you've got a model, you know, of the environment. What do you do with that model? Well, often, if the agent or the system finds itself in some environment, it has some goal it's trying to achieve, you know, win a game, maximize the points, specified by the designers of the system.
And it, so it now has this model, and it has to figure out what are the right actions to take at any moment in time that will best get it towards its overall goal. Um, and, and that part is reinforcement learning. And in fact, we know that that's how the brain works too, like in humans and primates. It's the dopamine system in the brain that implements a form of reinforcement learning called TD learning. Very famous result discovered in the nineties.
Um, And so we combine these two technologies together. The, the, the deep learning for the modeling and then the decision-making with the reinforcement learning. And, you know, we, we call that deep reinforcement learning, or deep RL, uh, for short as the, as the combined technology. And it turns out to be extremely powerful. And, um, it's also what we used in AlphaGo, uh, which I'm sure we're gonna talk about, which was our Go program.
Um, and it's, you know, it's very effective, ‘cause effectively what you can think about is that the reinforcement learning is like the planning algorithm. You know, it's like doing a search through all possibilities, whether it's a Go game or an Atari game or whatever that is. But the problem is, if you just do a naive brute-force search and you look at everything, you're normally in spaces where that's not tractable. It's too big a space. The combinatorial explosion's too big.
So what you do is you use your model to sort of imagine different paths, and then the model tells you what the environment will potentially look like if you were to do that action. And that helps, uh, narrow down the search space so that, in the end, the system only looks at useful things, much like a human chess grandmaster would do. They don't look at all possibilities. They just look at the few that are likely to be good ideas.
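The two-part loop Hassabis describes, a model of the environment plus reward-driven decision-making, can be sketched in a deliberately tiny form. The sketch below is not DeepMind's setup: it drops the deep network entirely and uses a plain lookup table of action values (tabular Q-learning) on a toy six-cell track, so it shows only the reinforcement-learning half of deep RL. All names and numbers here are illustrative.

```python
import random

# Toy sketch of the reinforcement-learning half of deep RL: an agent on a
# 1-D track of 6 cells, with a reward only at the rightmost cell. In the
# Atari work a deep network estimated values from raw pixels; here a plain
# lookup table stands in for that learned model.

N_STATES = 6                 # cells 0..5; the reward sits at cell 5
ACTIONS = [+1, -1]           # move right or left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic environment: move, clamp to the track, reward at the end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current value estimates, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        target = r + (0.0 if done else GAMMA * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# After training, the greedy policy marches straight toward the reward.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)   # expected: [1, 1, 1, 1, 1], i.e. always move right
```

In the Atari agents and AlphaGo, the same reward-driven correction shapes a deep network's parameters rather than table entries, but the structure of the loop, act, observe reward, correct the value estimate, is the same.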
[00:16:17] Steven Johnson:
One of the things that I think has been so interesting about the convergence of some neuroscience and AI over the last 20 or 30 years is, is our understanding of that reward mechanism, the dopamine mechanism that we talk about in the brain.
It, you know, I think the popular explanation of it is that dopamine responds to reward in the external world, but in fact, it responds to expectations about reward, right? You're imagining that you're gonna get, you know, $5, and then you get $10, and so there's a dopamine surge because you exceeded expectations, and vice versa. And that turned out to be relevant in the world of AI as well.
There’s a, there's a kind of expected reward mechanism there as well, right?
[00:16:58] Demis Hassabis:
That's right, that's right. So it turned out that it's not important actually so much that, that you're gonna get the reward. It's actually your expectation of whether you're gonna get that reward.
So in a way, what these reinforcement learning systems do is train your predictive capability. So what's important is, you know, I'm predicting I'm gonna get a reward, uh, and then I get one, that's okay, right? That means my model's good, but if I'm not predicting a reward and then I get a reward, that's really surprising in the good direction.
So then, I need to update my model to, to figure out, so that next time I come across that situation, it's more likely to predict the correct thing, which is there's gonna be a reward here. And, and in the end a lot of intelligence is about predictive capability. Can I predict what is gonna happen next and then use that to inform my planning?
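The prediction-error idea Hassabis describes can be shown in a few lines. This is a deliberately minimal sketch of a temporal-difference-style update for a single situation (the learning rate and reward values are arbitrary): the learning signal is not the reward itself but the gap between the reward received and the reward expected, and it fades toward zero once the reward is fully predicted.

```python
# Minimal sketch of a reward-prediction-error update: the change to the
# prediction is a fraction of the "surprise", reward minus expectation.

def td_update(value, reward, alpha=0.2):
    delta = reward - value          # prediction error: positive = pleasant surprise
    return value + alpha * delta    # nudge the prediction toward reality

V = 0.0                             # initially, no reward is expected
surprises = []
for _ in range(20):                 # the same reward keeps arriving...
    surprises.append(1.0 - V)       # ...and the surprise shrinks as V catches up
    V = td_update(V, reward=1.0)

print(round(surprises[0], 2), round(surprises[-1], 2))   # 1.0 0.01
```

Once V sits near 1.0, a delivered reward produces almost no error, while an omitted one would produce a large negative error, the "worse than expected" dip Hassabis alludes to.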
[00:17:42] Steven Johnson:
You alluded to AlphaGo. I wanted to turn a little bit to AlphaZero, that platform that, um, you developed. I would say it's probably the most celebrated achievement in terms of press, um, the AlphaZero success playing Go and playing chess. I mean, I remember reading about AlphaZero playing.
It played 44 million games of chess against itself and went from knowing nothing other than the rules of chess to being the greatest chess player that had ever lived. And what's key to that approach is this adversarial model, where you have two versions of the software playing against each other, um, and having this kind of competition where they ratchet up to this grandmaster-plus status.
Um, I, I guess my question is how, how applicable is that adversarial model in, in non-game situations? Are you seeing that as a strategy that you can use outside of the, the game world?
[00:18:36] Demis Hassabis:
Yes. So, so you know, with AlphaZero, and I mean it might be worth talking a little bit about the lineage from AlphaGo to AlphaZero.
So the way we, you know, with AlphaGo, what we did is set up two reinforcement learning systems to challenge each other and sort of ratchet themselves up by trying to beat each other. Uh, and we did that with Go first, and Go only, in AlphaGo. And then what we did with AlphaZero is remove all the Go-specific things and made it a general games-playing system that could play any, uh, perfect information game, you know, to better than world-champion level.
And, um, you know, it's interesting actually to try and, uh, couch other, more general things that are not games into this type of self-play mechanism. And sometimes it can be not just two opponents; it can also be the system and the environment being the opponents, in some sense. And actually, we extended it in other ways with our StarCraft program, which, uh, played this complex real-time strategy game, StarCraft. And actually, there we had a league of agents, so it wasn't just one versus one. We actually had, you know, 20 or 30 in an AlphaStar league, and, um, they would all be seeded with different strategies. And then you'd have to take kind of like a Nash equilibrium to find out which agent was the best out of that pack.
You're almost setting up a market dynamic, in a way, right? And then you're allowing that to shape the agent development. So we've taken that in a lot of directions. We sometimes call this open-ended learning, where we have environments that are procedurally generated in simulation, and then, um, games are almost invented algorithmically, little mini-games of, you know, tag and hide-and-seek and these kinds of things.
And the agents have to figure it out for themselves, um, in that, in that game. And generalize from other, other mazes and other, other situations they've seen before. Actually, one thing it's worth mentioning is, although we started with games of course as a convenient testing ground, the ultimate aim for DeepMind was, and our algorithms was to build general-purpose algorithms.
So it was always a means to an end, you know, to win at these games. It was never an end in itself, however fascinating those games are, especially to a games player like me, in chess and Go. And it found, you know, all these fabulous new ideas in these games and changed those game worlds.
But, uh, you know, ultimately we wanted to build powerful general-purpose algorithms that could be transferred to real-world problems and real-world domains, including things like science.
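The self-play ratchet can be caricatured in a few lines. In this toy sketch (everything here is illustrative, not AlphaZero's actual machinery), a "policy" is just a single skill number standing in for a whole network: the champion repeatedly plays a slightly mutated copy of itself in a noisy comparison game, the winner carries forward, and skill climbs generation after generation.

```python
import random

random.seed(1)

def play(a, b):
    """A noisy game: the higher-skill player usually wins, but upsets happen."""
    return a + random.gauss(0, 0.1) > b + random.gauss(0, 0.1)

champion = 0.0                      # starting "policy": a scalar skill rating
for generation in range(500):
    challenger = champion + random.gauss(0, 0.1)   # mutated copy of itself
    if play(challenger, champion):
        champion = challenger                      # the ratchet clicks upward

print(champion)   # ends well above the starting skill of 0.0
```

In the real systems, the "mutation" is gradient learning on the games themselves, and in AlphaStar a whole league of such agents plays each other, with something like a Nash equilibrium over the league identifying the strongest.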
[00:20:57] Steven Johnson:
Yeah. It always occurred to me that the adversarial gaming model, one place where it would have an obvious parallel would be the immune system, right? Evolving in response to new unanticipated pathogens that appear. Is, is that something that you've done work on?
[00:21:14] Demis Hassabis:
We haven't done work on that, but it's something on our, uh, to-do list. So I agree with you. You know, immune systems, microbiomes, this kind of thing. Obviously, we've been thinking a lot about that in the biology space.
I agree with you, that could be pretty interesting. In the adversarial space also, uh, I think there are applications—we’re not really doing these ourselves—probably in finance and FinTech, you know, where actually you can think of the stock market as a huge game in some way, right? So almost certainly there will be applications there.
I'm sure other people are using our work in that domain. So I think there are quite a few natural places, but there are a lot of things that can actually be re-couched, even scientific things, into this kind of, um, to-and-fro setup where this ratcheting happens. I could imagine a situation where one AI is the environment itself, learnt from real data, and then the other AI is the agent trying to achieve something in that environment.
And they're almost playing a game with each other. So one agent is, you know, actually the one trying to achieve a goal, and the other one is the adversary, which, uh, you could argue is the environment. But they're both AI systems. So I think it's actually pretty general how it can extend.
[00:22:37] Steven Johnson:
When you look at the general landscape right now, I mean, I think, as I alluded to, I think a lot of us saw AlphaGo and AlphaZero as a major milestone, but there have been a couple, I think, in the last few years, both inside of DeepMind, and, and maybe in the, in the broader field. Are there any other kind of landmarks of the last three or four years where you said, “Oh, this is big, this is, this is something I didn't know we were gonna be able to do so quickly”?
[00:23:01] Demis Hassabis:
Yeah, we were lucky. You know, we were very lucky to be obviously responsible for quite a few of those big moments. As you say, the Atari one first, with DQN, then AlphaGo, AlphaZero. And then more recently, AlphaFold. But I think the one externally that really, uh, was significant was GPT-3 from OpenAI.
Not so much because they invented any new technology behind it, but, um, they were the first ones to really try and go for natural language understanding at scale, in a sort of brute-force way, really, from the ground up. No syntactical knowledge, you know, basically not using any of the normal ways one would do natural language understanding.
And what was surprising is, and you know, I saw this develop with GPT-2, which was the earlier version of that, and that was not very impressive. Yeah, it was doing exactly what I expected that kind of system to do, which was just be a sort of poor memorization of its training data.
Right, and basically, when you asked it a new question, it wouldn't do a very good job of giving you back a relevant answer. You could sort of see it was just memorizing things and then trying to pick, like, the nearest word and stuff like that. So it was not very convincing, and I thought for a long time that the two problems with doing language in this way would be that it's not grounded in the real world, in real experience.
So even in simulation, that's still real grounding and sensory-motor experience, right? You're getting sensory input and, and uh, and not just linguistic input. And you're, you then can form real concepts about things, you know, grounded concepts, let's call them all abstractions about how the world works and real models about the world, physics models and other things.
So, you know, we used to debate this a lot within DeepMind, but also within the AI community: what would happen if you just read Wikipedia, um, and nothing else? What would you know? This is a classic problem in traditional AI, good old-fashioned AI as it's sometimes called, where there were huge projects in the eighties and nineties, the first time people tried to, you know, solve AI, at places like MIT. I don't know if you remember this, but there was this huge project called CYC, C-Y-C, um, from Doug Lenat, a very famous AI pioneer.
And what it was was literally hand-inputting into a database, uh, I dunno how many PhDs were done on this, but, um, rules of the world. Logical rules of, like, how the world works. And I think there were a million rules typed into this database. And I think the dream behind it was that at some point you would ask it a question and somehow, you know, maybe once you had 10 million rules in there, it would be able to give you back answers, you know, common-sense answers.
And it never really worked, because it's very hard, for various reasons, to encapsulate all of our knowledge in terms of rules. But one of the really big problems is it wasn't grounded. It was just living in the world of symbols. So when you asked it about a dog, you know, it didn't really know a dog's got four legs and barks and chases cats, sort of all of this, uh, stuff that we intuitively understand because we've interacted with dogs. That system doesn't really know what a dog is, even though it had all these logic rules about it.
Um, but what happened with GPT-3 is, it turned out that, I guess, going bigger, um, didn't just incrementally improve it—which I don't think would've been very interesting—but sort of crossed a threshold somehow. Uh, and suddenly it was doing some impressive things: not just regurgitating back exactly texts that it's seen, but actually merging and averaging, in a semi-smart way, different things it learnt about. And of course now, obviously, we have our own very advanced models, Google, Meta, as well as OpenAI. So we've all been, you know, pushing these systems to the maximum, and it's very interesting that, at this scale, some of those original assumptions one might have about intelligence, and doing it in a brain-like manner and so on, may not hold.
[00:26:46] Steven Johnson:
Yeah, it is such an interesting time. I mean, you know, you probably saw that paper that Google did a while ago, maybe four months ago, where they had, I think it's either PaLM or LaMDA, explaining jokes that it had never seen before.
So they give it a joke that someone made up, so that it had never existed before as a joke. And they gave it a series of these jokes and asked the algorithm to explain why the joke was funny.
And, you know, this hasn't been duplicated, it's been accused of cherry-picking, and there are all these questions about whether it holds up, but the answers that they supplied in this paper are very sophisticated, and it's hard not to feel… It's very important here, I think, for the listeners to understand this. When we talk about the AI being capable of understanding a joke, we are not implying that the AI is sentient or conscious or having an internal experience in any way, but rather that it seems to be able to represent the concepts behind the joke, and what makes it funny, in a way that is intelligible and that can be condensed down or shared or translated into a different metaphor or a different kind of explanatory model. And that is something that, I think, to most people in the field, was not at all clear large language models were gonna be capable of.
[00:27:58] Demis Hassabis:
Yeah, no, I totally agree. And, and, and also agree with the point on consciousness in which we can come back to later. But, uh, you know, that's a really interesting question.
But these systems are nowhere near that. You know, in my view, there's no, uh, even semblance or hint of sentience or consciousness yet, right? So I think we can put that to one side for now. But, um, certainly even understanding, I don't get the feeling these systems really understand, in the sense we usually mean it, what they're saying, you know.
But despite that, what is really interesting is that it can still say intelligible, somewhat useful things, including potentially explaining jokes, which has always been thought of as quite a high-order intelligence thing to do. You know, understanding irony or sarcasm or something like this, sort of like a meta-level of understanding, right? It's a pretty high-level function. And, of course, there are still questions about, you know, how well does it generalize? Was it really in the training data somewhere?
Because I think one reason we did not necessarily realize these systems would be able to do this type of thing once they got to the right scale is that we are way beyond the scale now where, as humans, we can usefully do thought experiments about it, right? So a few years ago I used to sit there and dream about, “Oh, what if I read all of Wikipedia as a naive AI system? What would I know?” Right? We've all spent hours on Wikipedia, following links through and just enjoying reading random articles and stuff like that. There is a ton of information on there.
But I don't think any of us, with our sort of limited minds, can possibly comprehend what it would be like to read the entire Internet. Right? It's that—
[00:29:38] Steven Johnson:
On so many levels. On so many levels.
[00:29:40] Demis Hassabis:
Oh yeah. Firstly, would we want to?
[00:29:41] Steven Johnson:
Yeah, that's a good question.
[00:29:42] Demis Hassabis:
And you know, who knows what's on there? But what would it contain? We've had, what, 30 years now of human beings, billions of us, putting things on this shared knowledge resource, the internet. And just think about the number of videos that have been recorded now across all devices. I mean, it's just mind-boggling if you actually think about it. And perhaps, you know, it could be that we've actually recorded every corner of the world somehow, almost everything that can be done.
I mean, it's possible. That sounds like it must be incalculably big, but perhaps it isn't quite as big as one might imagine. And so, therefore, if a large model sort of ingests all that information somehow in a useful way, and of course these models are very data-inefficient currently, right?
Especially compared to something like the brain. But that can probably be improved too. Um, then, you know, what actual information is out there, in fact? And it might turn out that, uh, explaining jokes is possible, right?
[00:30:41] Steven Johnson:
In my experience with large language models, which has predominantly been through GPT-3, the issue that I feel is the hardest nut to crack, um, that is still very evident there, is what's sometimes called the tendency of the model to hallucinate. So I once asked GPT-3 to write an essay about the Belgian chemist and political philosopher Antoine DuMachelet, and it delivered this beautiful Wikipedia-like entry, you know, five paragraphs long, filled with all these details, quotes from his books, his biography, whatever.
I made up this guy, he doesn't exist.
And the software just doesn't seem to be able to say, “I don't know the answer to that.” It will just riff if it doesn't have something to build on. And so my question is, you know, is there a way to solve that problem? ‘Cause that's a major reliability problem going forward.
[00:31:27] Demis Hassabis:
I think that's gonna be… I mean, it's a hard problem, but I can see how it would be solved. You know, I think the model probably needs an estimation of its own confidence in an answer. And if that's below some threshold, it should say, “I don't know.” And at the moment, I think we're not really allowing the system to do that.
I mean, few modern systems are doing that now, where “I don't know”, or, you know, “What is that?”, or asking a follow-up question, you know, “Who is he?”, would actually be the reasonable response. What they remind me of currently is, you know, in neuroscience I studied people with hippocampal amnesia and things like that, and they have a tendency to confabulate because they don't really have memory.
And these systems are also deficient in memory. And I think if you have an estimate of your own answers, and whether they're likely to be good or not, then if you're not confident, you know, you just don't answer.
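[A note for technical readers: the thresholding idea Hassabis sketches here can be illustrated in a few lines of code. This is a toy sketch, not any real DeepMind or GPT-3 mechanism; the confidence proxy, the per-token probabilities, and the 0.5 threshold are all invented for the example.]

```python
import math

def sequence_confidence(token_probs):
    """Geometric mean of per-token probabilities: a crude proxy
    for a model's confidence in a generated answer."""
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(log_sum / len(token_probs))

def answer_or_abstain(answer, token_probs, threshold=0.5):
    """Return the answer only if confidence clears the threshold;
    otherwise say "I don't know" instead of confabulating."""
    if sequence_confidence(token_probs) >= threshold:
        return answer
    return "I don't know."

# A confident answer passes through...
print(answer_or_abstain("Paris", [0.95, 0.9, 0.92]))  # prints: Paris
# ...while a low-confidence riff is suppressed.
print(answer_or_abstain("Antoine DuMachelet was a Belgian chemist...",
                        [0.3, 0.2, 0.4]))  # prints: I don't know.
```

[Real systems would derive the confidence score from the model's own token log-probabilities or a learned calibration head; the principle of abstaining below a threshold is the same.]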
[00:32:17] Steven Johnson:
One thing that has been interesting to me, as someone who's written about this a little bit, is just how heated the public opinions are about artificial intelligence and language models.
It feels to me like people have almost, like, strongly felt, kind of political feelings about these tools, and we don't have to get into those particular arguments, but I'm just curious if that surprises you, looking out at the broader discussion about these things. Is that something you anticipated when you first started thinking about, you know, starting DeepMind, or has it surprised you in some way?
[00:32:53] Demis Hassabis:
So it doesn't surprise me in some sense. I mean, the exact manifestation of it, obviously, you know, one maybe couldn't have predicted, but we planned from… Even back in 2010, when we started DeepMind, we planned for success from the beginning. So we always had ethics and safety as key components of what we were doing, and, um, of what we thought about, and of the actual, you know, eventual impacts of these technologies.
And, um, because we believed in what we were doing, we believed AI would be one of the most important, if not the most important, inventions humanity ever makes. And it could be massively, you know, broadly applicable. And so it's natural for scrutiny to happen, and a lot of, you know, arguments, partly because it's a very nascent technology, so, you know, it's still being figured out.
And also, you know, there's a lot of potential both for good and for bad in these technologies, like most, you know, powerful new technologies. And so we have to steward this correctly and be very thoughtful about it. Uh, in my view, I think we should be using the scientific method to do that: be thoughtful, generate hypotheses, and try to get a better understanding of these things, rather than just the sort of Silicon Valley trope of “Move fast and break things.”
I think we should not do that with these kinds of technologies, right? Because breaking things in the real world could be very, very damaging if the technology's very powerful, potentially, right? It's not like a, you know, a game app or a, you know, photo app or something, right?
And I think language specifically has been a lightning rod because, unlike maybe games or even science, which are the two things we at DeepMind are best known for, and my personal interests, those are relatively niche domains, in the sense that there are people who are obsessed with those things, and I think they're hugely important and enjoyable, and science, I think, is probably, in my view, the most important thing we can do with AI.
But they're kind of niche as far as the mainstream public are concerned, whereas with language, you don't have to be an AI researcher to interact with one of these systems and go, “Wow, you know, what's going on here? What does this mean?” And of course, it's already interacting with some of the difficulties we're seeing with social media in general, and deepfakes, and all these worries that we have already, that are possible without AI, but that AI may end up helping with or being part of the solution to, if used correctly. Right?
But, um, that's also up for debate. So I think the language space has been caught up in all of the wider political and cultural dynamics that we see.
[00:35:22] Steven Johnson:
So you alluded to AlphaFold, uh, earlier. Let's turn to that, because it has really interesting implications for science and for health. AlphaFold was on the cover of Science magazine. Tell us first what AlphaFold is and where you see it going.
[00:35:37] Demis Hassabis:
So AlphaFold is our system to solve what's been called the protein folding problem. Let me explain a little bit about the problem first: proteins are essential to life.
Your genome codes for proteins; each gene codes for a protein, more or less. And proteins are sometimes called the workhorses of biology. Basically, all biological functions in your body are governed by proteins. And, um, the protein folding problem is basically this: from the genetic sequence, the amino acid sequence, can you predict the 3D shape that that protein will fold up into when it's in the body?
And the reason the 3D shape is important is that the shape of a protein is often what governs its function. So if you wanna understand the function of the protein, what it's doing, how it goes wrong in disease, what drugs to target, and so on, you sort of need to understand the 3D shape. So for 50 years, people have been working on this problem.
It was first articulated by, uh, a Nobel Prize winner called Christian Anfinsen, as part of his Nobel acceptance speech in 1972. And he said it should, in theory, be possible to go from the one-dimensional sequence to the three-dimensional shape, and, um, that you should be able to predict that computationally.
And, um, the normal way it's done is painstakingly, with experimental work, using massive machines: cryo-EM and X-ray crystallography machines. And the rule of thumb is that it takes one PhD student basically their entire PhD, four or five years, to do one protein. And so in the whole history of experimental biology, only about 150,000 proteins have had their structures identified.
[00:37:25] Steven Johnson:
And what's the total range of proteins?
[00:37:26] Demis Hassabis:
And there's more than a hundred million known to science. Right? And, you know, millions are added every year, because our genetic sequencing is very fast now, but this protein structure prediction is very slow experimentally.
So we used that initial data, the 150,000, to train AlphaFold, which is, you know, a bespoke, innovative deep learning system, with some special-case things in it, related to biology and physics, that we put into the system. And, uh, it's able to take an amino acid sequence and give you back the 3D structure in a matter of seconds, on average to atomic accuracy.
So within one angstrom of error. And that is the threshold at which it becomes useful for biologists and chemists; they need that. That was always the magic threshold that had to be reached, so that chemists and biologists and life scientists could rely on it for downstream tasks like drug discovery and other things, without necessarily having to do the painstaking experiments.
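[A note for technical readers: the one-angstrom accuracy threshold Hassabis mentions is usually assessed by comparing predicted atomic coordinates against experimentally determined ones. Below is a minimal sketch of such a comparison using a plain root-mean-square deviation; the coordinates are invented for illustration, and real evaluations, such as AlphaFold's reported backbone r.m.s.d. or GDT scores, involve superposition and alignment steps omitted here.]

```python
import math

def rmsd(pred, expt):
    """Root-mean-square deviation (in angstroms) between two
    equal-length lists of (x, y, z) atomic coordinates.
    No superposition step: assumes the structures are pre-aligned."""
    assert len(pred) == len(expt)
    sq = sum((px - ex) ** 2 + (py - ey) ** 2 + (pz - ez) ** 2
             for (px, py, pz), (ex, ey, ez) in zip(pred, expt))
    return math.sqrt(sq / len(pred))

# Invented coordinates for three backbone atoms.
predicted    = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.2, 0.0)]
experimental = [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (3.0, 0.0, 0.1)]

error = rmsd(predicted, experimental)
print(f"{error:.2f} angstroms")   # well under the ~1 angstrom threshold
print(error < 1.0)
```

[A prediction whose deviation from experiment stays under roughly one angstrom, about the width of an atom, is what makes it usable for downstream work like drug targeting.]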
[00:38:28] Steven Johnson:
So it's like there's kind of a codebook, where you have these sequences of amino acids, and a small subset of the entire field has been translated into a three-dimensional shape. And so you've given that to AlphaFold, and it's able to detect some kind of underlying pattern in all of those translated codes that it can then apply to novel codes, to some of the other amino acid sequences.
[00:38:54] Demis Hassabis:
That’s right. That’s right. So the system has somehow sort of understood something about the way protein physics works and how proteins fold together. And it can almost do a translation between the one-dimensional sequence and then, eventually, the three-dimensional structure. So it's a pretty, uh, amazing system.
And the first thing we did with it was actually fold almost every protein in the human body. So to start: 20,000 proteins. The structures of only 17% of them were known to science. And overnight, we more than doubled that with high-accuracy structures. That was 20,000 proteins, and now we've released over a million.
And over the next year we plan to release all hundred million proteins known to science, uh, and then continually update the database. Uh, and we, you know, teamed up with the European Bioinformatics Institute at, uh, Cambridge, who host a lot of the biggest databases in biology in the world. And we have fantastic partners to openly release all these 3D predictions, for the benefit of the scientific community, and actually as a sort of gift to humanity.
Uh, and we allowed it for any use, so drug discovery companies, you know, pharma, are using it already, within less than a year. We released this all last summer and, uh, you know, it's been cited around 3,000 times already, which is, you know, an enormous number for less than a year.
So we think pretty much every biologist in the world has looked up their proteins on this database.
[00:40:21] Steven Johnson:
For a non-scientist, just an ordinary person walking around the world, where will this matter first, in terms of the downstream consequences for their health, say? Where do you see the most immediate application of this?
[00:40:36] Demis Hassabis:
So I think the most immediate application is something we are following up on actually is drug discovery.
So, um, when you try to design a new molecule, a new compound, a new drug, basically what you're trying to do is figure out where on the surface of the protein that molecule needs to bind to, you know, fix the problem, inhibit it, or block it.
And so, um, if you now know the 3D structure and the surface, then you know much better where you should be targeting your drug or your molecule. It's just one part of the drug discovery process, but it's an important part, and, um, you know, it should speed up all of those processes.
The other thing, you know, what I hope, is that a lot of diseases are currently thought to be to do with proteins that misfold, that sometimes fold in the wrong way instead of the normal, healthy way. And, uh, there's speculation that Alzheimer's might be because of that, for example with the amyloid beta protein. So again, a lot of these regions of proteins are actually unstructured until they interact with something.
And AlphaFold turns out to be a very good predictor of those types of disordered regions; it's, you know, the best predictor of them. So not only can it give you back the 3D structure, it can also tell you which bits are going to be unfolded unless they interact with something, and some of those regions are implicated in disease. So I think those are the two, uh, most obvious near-term things.
[00:42:05] Steven Johnson:
I love thinking about the long-term story here with drug discovery, which is, you know, a hundred years ago, the state of the art was Fleming leaving the petri dish out on his desk and, you know, a random mold spore happening to fall in through the window while he was on vacation.
And now we've got AlphaFold, which hopefully is accelerating the process, making it a little faster, a little more reliable. We have a question that we ask all of our guests, which is also kind of a prediction question, and really, I think there are few people in the world I'd rather hear this answer from than you: in your field, what is the unsolved problem that you are most fascinated to see the results of, or to see the mystery solved for? If you could fast-forward 10 years, what would the problem you'd most like to see solved be?
[00:42:49] Demis Hassabis:
Well, the one I spend most of my time thinking about and I still think is the most fascinating, outstanding problem is the notion of abstract concepts or conceptual knowledge.
So, you know, there is some evidence that these large models today have some kind of compositionality capability, but it's still quite rudimentary, I feel. And I don't think they do this yet, but this is part of understanding, I would say: actually being able to abstract things and then apply those abstractions in a new situation, seamlessly.
It's called transfer learning, or analogical reasoning in psychology. And, uh, of course, we humans do this effortlessly with our brains, right? Sort of learn something in one domain, find the underlying structure, and then apply it in a new domain. And so far, AI systems don't really do that in a satisfactory way, I would say.
And I think, if one was to crack that, then we would bridge the chasm that's still there at the moment of “How do we get these learning systems which can deal with messy vision, and, and pixels on screens and other things, and find structure in that back up to symbolic manipulation?” So things like mathematics, um, and maybe do mathematical discovery and things like that.
And I think we're still quite far from that. And no one quite knows how to bridge that chasm. We have our ideas, you know, at least half a dozen prototype projects in the works on this problem. But, um, so far I would say we don't know yet how to solve it. It's a bit of a mystery what these conceptual representations should even look like.
[00:44:24] Steven Johnson:
I could talk to you for, for an entire day. But one last question. There's a famous moment in early computing history when Charles Babbage was creating the analytic engine in the 1830s and working with Ada Lovelace, um, arguably the world's first programmer. And she wrote this extraordinary passage as part of a footnote where she predicted that in the future, computers would not just be useful for math but would one day be capable of composing music and doing other creative work.
And so I was curious where you felt we were now and where we will be in the, you know, in the coming years in terms of creativity.
[00:45:02] Demis Hassabis:
The way I see it right now is that I would put creativity into three buckets. If we define creativity as coming up with something novel or new for a purpose, then, you know, I think what AI systems are quite good at doing at the moment is interpolation and extrapolation, I would say.
So interpolation is sort of averaging from examples, right? So you give it lots of images of cats, you know, can you generate me a new cat? Yes. Right? Some kind of, weird, you know, kind of sophisticated averaging.
Extrapolation is more like what AlphaGo did, which is play 10 million games of Go, uh, look at human games, and come up with a new Go strategy, or chess strategy, that's never been seen before. Right? And move 37 in game two of the big match we played against the world champion was lauded as a move that no human would ever have thought of, even though we've been playing Go for 3,000 years. It was played in what looked like the wrong position; all the professionals laughed at it. And now Go books are being written about, you know, that move. Right? It's sort of gone down in Go history now. Um, but what's missing is, I would say, true invention. And you can see that because our systems like AlphaZero and AlphaGo can invent new strategies in chess and Go, but they can't invent Go.
Right? So that would then be the highest level of creativity. Can you invent a game as great as Go, or as great as chess? Uh, and that they can't do. And it's a little bit of a mystery, that out-of-the-box thinking. But I think it's related to these concepts and abstractions I mentioned earlier. Uh, and I think if we solve that, one could then have systems that do what we would regard as true creativity, or out-of-the-box thinking.
Because imagine: what sort of instruction would you want to give a large model to invent Go? What you would say is something like, “Can you invent me a game that I can learn in five minutes, but that could not be mastered in many lifetimes, that only takes four hours to play, so it fits in my day, um, and that is beautiful aesthetically?” Right? Something like that.
But all of those words are super-high-level concepts. I mean, what would our current AI say? Maybe I should type that into one of my language models and see what it'd do. But I'm pretty sure it wouldn't come up with Go. Right. And so that's the kind of instruction I think we'd like to give our systems.
And you could imagine the same in science. I think about, well, okay, we've done AlphaFold, an amazing big advance in the life sciences. But what would it take for a system to come up with general relativity, like Einstein did? Okay? And then really advance our knowledge of the world and physics, which is ultimately what I wanna do with AI, actually: understand the universe around us.
That's the whole reason I've worked on AI my entire life. Um, and, uh, that question, I think, is going to require true creativity, which, you know, we're not there on yet.
[00:47:54] Steven Johnson:
Well, Demis Hassabis, you mentioned earlier that you were adding some things to your to-do list. I can't imagine what your to-do list looks like. Mine is like, “Pick up the groceries and the laundry.”
Yours is a little more ambitious, so we don't wanna take any more of your time. You should go off and check more of those boxes. But thank you so much for this conversation. It's been a real treat.
[00:48:11] Demis Hassabis:
Thank you so much for having me.
[00:48:15] Steven Johnson:
The TED interview is part of the TED Audio Collective. The show is brought to you by TED and Transmitter Media. Sammy Case is our story editor. Fact-Checking by Meerie Jesuthasan. Farrah Degranges is our project manager. Gretta Cohn is our executive producer. Special thanks to Michelle Quint and Anna Phelan. I'm your host, Steven Johnson.
For more information on my other projects, including my latest book, Extra Life, you can follow me on Twitter at @stevenbjohnson or sign up for my Substack newsletter, Adjacent Possible.