How to use your time and money for good – as effectively as possible – with Will MacAskill (Transcript)

The TED Interview
How to use your time and money for good – as effectively as possible – with Will MacAskill
April 25, 2024

[00:00:00] Chris Anderson:
Hello there. I'm Chris Anderson. This is The TED Interview. Now, this season we're expanding on an idea that I believe can offer a response to many of the issues we're currently facing as a society. It's an idea I wrote a book about, called Infectious Generosity, and in the spirit of generosity, we're offering free copies of both the eBook and the audiobook to TED Interview listeners. 


You can go to ted.com/generosity. Fill out the short form there to claim yours. Now this podcast series is designed to amplify the themes of the book by bringing some of its main characters to life before your very ears. Today we're going to focus on how we make our generosity more thoughtful and more effective. 


And, the guest I'm about to introduce you to has probably spent as much time thinking about this topic as anyone on the planet. He is Will MacAskill, a leading moral philosopher, co-founder of the Effective Altruism movement, EA for short, and uh, he's the author of some amazingly influential books like Doing Good Better and What We Owe the Future. 


Now EA, you may have noticed, has been the subject of a lot of discussion in the last couple of years, some of it quite heated. I can't wait to get Will's perspective on recent controversies, but even more important than that, to get a true understanding of his thinking about generosity. I mean, I really think the questions we'll be asking each other today are right up there among the most important questions anyone can ask of their own life.

Like, what's the wise way to be kind? How can we maximize our impact on the world for good? How much money should we give away and how can we do that effectively? And, what are the risks if we get this wrong? Okay, let's dig in. 


Will MacAskill, welcome to The TED interview. 


[00:02:24] Will MacAskill:
Thanks for having me on. 


[00:02:26] Chris Anderson:
Will, let's start with a bit about you. I mean, I'm just curious what it was about you that took you on this journey to some pretty exciting, pretty radical, pretty provocative ideas. 

[00:02:37] Will MacAskill:
Sure, so I grew up in Glasgow in Scotland, and even from a pretty early age, I wanted to try and make a difference in the world. 


So, I worked in an old folks' home, uh, yeah, with elderly people who had severe disabilities. I helped to run a scout group for disabled children. I volunteered at a local school. I donated some amount of money to charity and so on, but this was all kind of a bit haphazard, a bit ad hoc. And, it was once I started learning about global poverty and the extremity of poverty that people live in, that this started to change. So, I remember first when I was, I think, 17, learning that 46 million people at the time had died of AIDS. And, honestly, I just thought, how are we not talking about that more? How is that not on the front page of every newspaper? 


[00:03:29] Chris Anderson:
This is not that typical behavior for a kid. 
I mean, did your friends think you were weird? Did they admire you? How do you think this actually happened? Like, why did you care about these things? Not that many kids spend their time volunteering like that. 


[00:03:45] Will MacAskill:
Uh, yeah. So, I did have friends who were also, um, you know, just concerned to make a difference. 
So, actually all my close friends, literally a hundred percent of my close friends, became doctors and, you know, we were doing some of these volunteering activities together. So, I certainly had some social encouragement from some really kind people who I'm still friends with today. But, I do think part of this was just, you know, quite innate, really a kind of drive that I think I was born with. 


[00:04:17] Chris Anderson:
Hmm. So, you think of that almost as good fortune, not as something to boast about necessarily, but just, this is who you were. 


[00:04:27] Will MacAskill:
Yeah. I mean, there's certainly a huge amount of good fortune in the sense that I have been born with so many privileges, born into a rich country, middle class, I got to go to a private school, and then from there to Cambridge for my undergraduate, I really just had all the benefits you could hope for in life. And, so it was very salient to me then that I should be thinking about, well, given that I've got all this privilege, how can I take that privilege and turn it into a way of making the world better? 


[00:04:59] Chris Anderson:
Hmm. It feels like there's always been a bit of a radical side to you, someone willing to go against the grain. I mean, I, I think MacAskill wasn't the surname you were born with. 

[00:05:08] Will MacAskill:
Yeah. That's right. I definitely think that many of the decisions I feel happiest with in my life are ones where I've gone against societal norms. 


Sometimes that's to do with effective altruism. So, I'm sure we'll talk about my giving and so on. Uh, but in other cases it's not. So, when I got married, I took my now ex-wife's, um, we're still good friends, uh, yeah, took my ex-wife's grandmother's maiden name, as did she, where the underlying thought was just: what is with this tradition where, when a man and a woman get married, they by default take the man's name? 


And, so instead I thought, well, why not just choose a name that we have some connection with that we really like? And, I think that's one of the best decisions I ever made. 


[00:05:52] Chris Anderson:
So, I love this. So, you're someone who's just willing to look at the world and say, huh, that makes no sense. The fact that everyone else is doing it isn't a reason necessarily to do it, I'm going to do what I think makes sense. That is who you are at your core, I think. 


[00:06:08] Will MacAskill:
Uh, yeah, and I think that has been something that I've found consistently, and in fact, over time, have learned more and more: very often, if there are really good arguments for something, whether that's arguments for the value and importance of charitable giving, for the importance of focusing on effectiveness, for changing your name, or for catastrophic risks from AI, even if society hasn't yet caught up to those arguments, that doesn't mean the arguments are wrong. And, in fact, you have the potential to have an outsized impact in the world by focusing on things where the arguments do make sense, but the world has not yet caught up.
[00:06:52] Chris Anderson:
Yeah. So, you studied philosophy at university, I think, as did I. Most people I know who studied philosophy at university didn't do anything with it. In some ways, it's the least practical topic you can study. 


You did. You ended up becoming a co-founder of this, what do we call it, a field, a, a form of thought, a movement, effective altruism. What is effective altruism? 


[00:07:18] Will MacAskill:
Uh, so effective altruism is, uh, about using your time and money to try to make the world better, but using those things as effectively as possible. 
So, rather than just donating to whatever charity approaches you in the street and asks you to donate, instead thinking really carefully and really going with whatever the best evidence and arguments are to donate to the organization where you think it'll have the biggest positive social impact, or when you're thinking about your time in particular, your career, where you have a huge amount of time that can possibly be used for good. 


Really thinking carefully about, okay, where, with my scarce hours on this planet, can I have the biggest positive impact I can? And, then it's about not only thinking about it, but really going and putting it into practice as well. So, this all began with an organization called Giving What We Can, which encourages people to give at least 10% of their income to whatever charities they believe will do the most good. 
And, from there, this idea kind of broadened out and became what's now known as the Effective Altruism movement. 


[00:08:28] Chris Anderson:
Hmm. When you first came to TED, you encouraged us to ask three questions to guide us towards more effective giving. Can you remember what those were? 


[00:08:38] Will MacAskill:
I think I asked about: what problems are biggest in scale? What are, uh, most tractable? So that when you're actually putting in effort, you can actually make a difference. And, then finally, what are most neglected? If a problem is unusually neglected, then that suggests that additional resources going into that problem, whether that's your time or money, can really have an outsized impact.
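These three questions are often condensed, in effective altruism writing, into the scale, tractability, and neglectedness framework. As a loose sketch of the underlying heuristic (the formula itself isn't stated in this conversation):

$$\text{good done per extra unit of resources} \;\propto\; \text{scale} \times \text{tractability} \times \text{neglectedness}$$

The neglectedness factor is what captures diminishing returns: an extra dollar or hour tends to go further on a problem that few resources have reached so far.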
[00:09:02] Chris Anderson:
So, I quote those three questions in the book. I mean, they do strike me as a very good combination, encouraging someone to move beyond the way that we normally get involved with picking a cause, which is just, oh, I know someone who suffered from that, to trying to imagine what it would take to be most effective. 


And, those are certainly, uh, each of those in a different way helps guide you to that. For you, asking those three questions, what has that encouraged you to focus on? 


[00:09:31] Will MacAskill:
So, for me, I think that's led me to a particular focus on global catastrophic risks and in particular risks from new technologies, emerging technologies. 


So, in that talk in 2017, I talked about risks from pandemics and also risks from artificial intelligence, and those were very neglected. And, that was in part because they occur so rarely; really large-scale pandemics, you know, occur every 30 years or every hundred years. And, so society isn't prepared in the way that it should be. 


When we're looking to the future, the possibility of manmade pandemics due to advances in biotechnology means that we might get much larger pandemics and much scarier pandemics than we've ever had in the past. And, then similarly with artificial intelligence, this is something where, at least up until a couple of years ago, the technology was not quite yet there. 


You could see the early development within labs being enormously impressive, but it hadn't yet hit the mass consumer, and so because we were concerned about these things and because they were getting so little attention, it meant that if you were to then work on these areas, you could be one of just a handful of people taking seriously a concern that was not on other people's radar at the time. 


[00:10:49] Chris Anderson:
Part of your focus on this, I think, comes because in your philosophy, you've made a specific decision that not everyone makes. I think most people, when they think about their moral obligations, think about their family, their community. You, in a lot of the early work of effective altruism, encouraged people to think more globally and just think about suffering at a global scale. 


You argue we have a moral obligation to future generations; that we could unintentionally end human civilization, and that that would be abandoning our moral obligation to the countless millions of potential future humans or sentient beings descended from us. Can you make that case more clearly than I just did there? 


[00:11:36] Will MacAskill:
Yeah, I'm happy to. So, you know, effective altruism is about trying to do as much good as you can with your time and money, but what does good mean? Well, we think about that in terms of how much benefit you're providing to people, but in particular, taking everyone's interests equally, thinking everyone has an equal moral claim upon us, and at least for this part of morality, not claiming it's the whole of morality, but for this part, we should take everyone's interests. 


We should treat everybody equally. And, now that naturally means that you start looking at trying to help the people who are the very poorest in the world, because national boundaries don't seem morally important from this point of view, but it also means you should start looking to the future as well, where I think the fact that someone is born next week rather than tomorrow makes no moral difference to the claims they have upon me, nor, in fact, does it if someone is born in a hundred years' time or even a thousand years' time. 


The fact that someone can experience joy or suffer, and that we can make a difference to their life, is still morally important. And, that becomes so crucial because I think we're at a really pivotal moment in time, when there are new technologies in particular that have the potential to completely derail civilization, send us back to the Stone Age, or lead to some sort of dystopian future. I'm very worried at the moment about the capability for AI to dramatically empower dictators and would-be dictators, and that could be a future that we have indefinitely.

Democracy is by no means an inevitable part of a technologically advanced society, and so I really do think that the things we do today can have an impact, not just on the present generation, though the impact there is very great, but an impact for hundreds of years to come, or thousands of years to come, or even much longer. 


[00:13:35] Chris Anderson:
Oh my. So much to unpack here. I'd like to start actually by just challenging, or at least asking a question about, the idea that all lives have equal value no matter how far away they are, either in distance or time. I mean, part of me totally resonates with that. That seems like a pure philosophical position. 


And, certainly if you were a god who could step back from the present day and look at the whole of the universe and imagine different versions of it, you would want one where humans lived for countless generations into the future with as much joy and thriving and so forth as possible. You would say that that was a better outcome. And yet, a lot of people might nod their heads when you say every human life is worth the same.

But, then if you said to them, are you willing to sacrifice your children's interests for those of children on the other side of the world, they would say, oh, now that you mention it, no. Or, if they didn't say that, that is almost certainly how they would act. It's how I would act. So, is this a Darwinian bug that we care more for our children than for children elsewhere? You know, that if the house is burning, you go in and you would rescue your child ahead of someone else's. Is that a bug or a feature? 


[00:14:55] Will MacAskill:
So, I think it's totally reasonable to care more about your near and dear, your family and friends, than distant strangers, and I think the world would be a much darker place, to be honest, if you had no special affinity to your loved ones. And, so in What We Owe the Future, I do talk about special moral reasons we have, so reasons of partiality, also reasons of reciprocity too. So, people, um, can benefit us and that gives us a reason to repay them. 


However, the world as it is today is very well attuned to those special relationships. You know, people are just very generous to their friends and family and very caring towards them, which is a wonderful thing. But, I'm saying that at least part of a good life should involve taking that impartial perspective and actually thinking at least if you are lucky, like I am, to be, you know, in the middle class of a rich country with the ability to choose a career or to donate some amount of your resources. 
At least part of your life should be about trying to make the world better from this impartial perspective. 


[00:16:04] Chris Anderson:
Mm-Hmm. I think that's, that's powerful. I mean, definitely, you know, we don't want our children or grandchildren to go through a horrific, dystopian future, but I wonder whether there's almost more power in saying: look at what would be lost if this all went. We've been many, many millions of years in the making, and over the last few thousand years, it's absolutely extraordinary what humanity has built. We cannot let all of this go for naught. Like, part of me thinks that that's a more viscerally felt argument to a lot of people than the possibility that, oh, if all this goes, in 200 years' time there's some person who I can't fully picture who won't ever enjoy the beauty of life. 


Even if it doesn't pass the philosophical test, just as a sort of, um, persuasive human argument, is there a case for that? That it's almost more powerful to focus on the loss of what is than the loss of what might be? 


[00:17:05] Will MacAskill:
Yeah, I think there is a powerful argument here, and it's actually something that my colleague Toby Ord discusses in his book, The Precipice, which is that you can see human history as like a relay race, every generation passing the baton on to the next generation. And, in particular, when I think about my life and all the good things in my life, so many of them are owed to the efforts of previous generations, whether that's fruits and plants that have been selectively bred over hundreds of years, or whether that's technology like medicine, again, the product of hundreds of years of slow and often faltering technological progress, or whether that's the kind of moral and political landscape I live in as well. I think my life would be worse if I didn't live in an egalitarian society that takes the interests of women and minorities and people of all different races seriously. 


And, so I have a kind of responsibility to the past as well as to the future to ensure that we continue that relay race, and to ensure that there is a next generation that we can pass the baton on to, and that they have as happy and flourishing lives as possible.

[00:18:24] Chris Anderson:
Part of this discussion, I think, leads to some of the key criticisms of effective altruism that have been made in the last couple of years. I think it's along these lines: the more distant you get in terms of the moral obligations or the moral calculations that you encourage people to make, the more risk there is of things going horribly wrong. So, in theory, if you believe that there's a possibility of a future in which, say, a trillion humans live and flourish, and you think there is some act you could take today that would have a tiny percentage impact on reducing the risk of that future not happening, say 1%, then anything that made the tiniest difference to that prospect you could justify in a sort of utilitarian calculation, where you are basically just saying we should act to maximize the overall risk-adjusted probability of good for the whole universe, calculated over all time. 


You can end up with some wild and crazy decisions, and arguably a certain person, Sam Bankman-Fried, was guilty of some of this kind of thinking. Do you think part of him was making EA-type calculations? Or was he just confused, or was he always... what's your view on him? 


[00:19:53] Will MacAskill:
So, yeah, Sam committed the most horrific fraud. A million people lost money, some of whom lost their life savings. The prosecution recently released some messages that Sam had received on Twitter during the collapse, and it's really harrowing: there's a man who thinks he's gonna be made homeless and has four children, and another man who was fleeing Ukraine and put his savings onto FTX. 


It's really just quite hard to read. And, so what he did was enormously harmful. Was it the result of some careful calculation, like some gamble that did make sense as a bet? And, I think the answer is quite clearly no. I think there's no perspective on which what happened at FTX made sense. 


[00:20:38] Chris Anderson:
Part of the EA logic has been that there are many ways to make a difference in the world. 
Some of them are to give away money now, but for some people, if they have extraordinary earning potential for the future, it's actually better for them to focus on accumulating wealth and then, at some point in the future, direct that wealth to doing good. And I wonder whether, to the extent that you say there was some good intent in his mind, part of him was saying to himself: I can justify taking any kind of risk here for the prospect of making countless billions of dollars, 'cause I know that one day I'm going to spend that money well, and therefore the usual rules don't apply to me. 


[00:21:26] Will MacAskill:
I do think Sam had this very arrogant attitude, thought he was smarter than other people, and certainly when the collapse happened, I was very worried that, wow, maybe what had happened was exactly this sort of calculation. 
Now that we've gotten the evidence that's come out over the last year and a half, including at the trial and in books that have been written about the topic and so on, that's, at least in my interpretation, not what happened. I think there was a combination of them being criminally and recklessly negligent, and literally, I mean, it comes up, they had a meeting in June of 2022 where they thought that Alameda had borrowed $16 billion from FTX, but it turned out it was a bug in the code, and it was only 8 billion. They did not know where all the money they had was. They had a complete absence of corporate controls, even the most basic sorts of risk management. 


And, in my understanding of what happened, that gross and criminal negligence put them into a hole that they only discovered they were in, in June of 2022. And, it was then, at that point, that they started very seriously engaging in fraud to try and get themselves out.

But, the key thing that happened was not some calculated decision, which would've made no sense, like no sense doing the maths on it. 
Instead, it was just something kind of mindless. And, over time, I've actually learned that that's true of quite a lot of other sorts of white collar crime, in particular from this book, Eugene Soltes' Why They Do It. And, that's something he points to over and over again. White collar crime is not the result of some careful calculation. 
It's a failure of intuition. It's these kind of mindless, reckless mistakes that people make. 


[00:23:05] Chris Anderson:
Give us a sense of what this whole thing was like for you personally, because before the fall of Sam Bankman-Fried, he was regarded by many as the sort of poster child of effective altruism. I think he credited you with changing his philosophy of life, and at least on the surface, he was a very visible proponent of EA. And, then this happened. It must have felt like the most absolute betrayal. I mean, it's hard to imagine, just for you, for all that you've built; you must have felt for a bit that the whole thing was coming tumbling down. 
What was it like on the inside? 


[00:23:42] Will MacAskill:
Yeah, absolutely. I mean, there were a lot of emotions. So, one was just, yeah, absolute horror at the harms that had been caused. Second, you're absolutely right, an utter feeling of betrayal. You know, I admired this person, I respected him. I thought he was gonna do a huge amount of good. 


And, yeah, I felt like an utter fool. And, then the final thing was just confusion as well, honestly. And, that's confusion that persists to this day. So, I mean, it really felt at the time like I'd been punched or stabbed or something. Like, I remember as a kid in Glasgow, eight years old or something, I was playing in a nearby school, and Glasgow has a lot of problems with violence, and a gang of kids just came up to me and beat me up. 


And, I remember at the time not fighting back, and I just asked, why are you doing this? And, that was just the same feeling I had. It was just like, why? Why on earth would you have done this? It makes no sense to me. And, so I think I was maybe, when the collapse was happening, even slow on the uptake compared to the rest of the world to appreciate it for the fraud that it was, 'cause it felt so incongruous and inconsistent with the experiences I'd had with Sam and with the others who were high up at FTX.
[00:25:00] Chris Anderson:
So, his fall gave many people license to pile on and criticize EA from all angles. I mean, some of it, just from my viewpoint, felt like there was a sort of glee in the piling on, possibly, I would argue, because it relieves people of the responsibility to ask any difficult questions of themselves. 
But, nonetheless, there was legitimate criticism at this point. I'm curious, Will, as to what your takeaways are and to what extent you have felt that you, you needed to reframe a bit how EA should be thought of. 


[00:25:41] Will MacAskill:
You know, we've always emphasized that effective altruism does not entail ends-justify-the-means reasoning. 
You know, there are good reasons for that, non-consequentialist reasons, as in it's just intrinsically wrong to do harm for the greater good. But, also, it just doesn't work. This has been known for hundreds of years. There are certain moral rules that have evolved in our culture for a reason, like: don't inflict harm for the purported greater good. 


But, in terms of communication about EA going forward: historically we've talked about what's distinctive about living a good life, which, like I said, is using more of your resources, your time, and your money to help others, and, with that, trying to do as much good as you can. But, now that EA has gotten more successful, and certainly in the light of the FTX scandal, I think we need to emphasize more just what a wholly virtuous life looks like, where that involves all the common sense virtues: being honest, being cooperative, being high integrity, being kind. And, what we're saying is just: take all of that. Don't throw it away, but crank up the dial on these other virtues of benevolence, how much you just care about others for their own sake, wherever they are in the world. 

Crank up the dial on truth-seeking as well, and on rigor in your thinking, especially when applied to that attempt to help others. So, you know, that's something I think I'm gonna be emphasizing much more going forward.

[00:27:17] Chris Anderson:
Will, you said something to me a few months ago that I think a lot of people don't get about EA, which is they think of EA as this sort of set of moral prescriptions: you know, these are the organizations you should support, these are the causes you should care about. And, you described that as a complete misunderstanding; that actually EA was intended as a process. 


It's a way of people thinking, of people asking the right questions. I found that powerful. I mean, just to take the basic question, which I would ask of any pile-on critic of EA: would you rather your altruism was effective or ineffective? I mean, if you start right there, most people, I think, want their altruism to be effective. 


[00:28:13] Will MacAskill:
Yeah. So, this is exactly right. So, the model or inspiration I would take is science. So, what is science? It is not a body of pieces of knowledge that we have, nor even is it a body of widely accepted theories. It is primarily a process. It's the use of experiment and formal reasoning to help us get to the truth. That is the core of what science is. 


What is effective altruism? Again, it's not a set of recommended charities and recommended career paths 'cause I am very confident that we are still in the dark about lots of things. I'm very confident there's an enormous amount we don't know. I've changed my mind on a huge number of topics in the last 15 years. 
In the next 15, I expect to change my mind many times again. Instead, what it is, is a question. The question of, well, with the time and money that I'm willing to put towards doing good, how can I make that as effective as possible? How can I do as much good as possible? I'm gonna really just look at the evidence as best I can.
I'm gonna engage with all the arguments that are relevant, and I'm gonna really try and take this seriously and work it through because it's so important that I really wanna get the right answer. 


[00:29:21] Chris Anderson:
So, let's talk about AI. What worries you most and what could we do about it? 


[00:29:30] Will MacAskill:
Sure. Where I wanna place most of the emphasis with respect to AI is on the idea of explosive growth in capabilities. 
So, the way in which AI is different, I think, from any other technology is that at some level of capability, which we might well get in the next few years or the next decade, we will have AI that can build better AI. That is, AI that's significantly helping with the process of research and development of AI systems themselves, and those better AI systems will be able to build better AI, and so on. 


And, so this is an argument that goes all the way back to I. J. Good, a computer science pioneer who called it the intelligence explosion. And, more recently, much more in-depth work has been done to scrutinize that argument and embed it into formal models of economic growth. And, I really think that the argument is surviving. 


And, that's really quite a dizzying thought, because what it means is that what you would naturally think of as many centuries of technological progress, so, everything that might happen, technologically speaking, between now and the year 2500, let's say, all of that might occur within the course of just a few years, because you get AI that can create better AI, and quite soon you have billions of AI scientists working 24 hours a day to create better and more powerful technology.

And, I think by default that doesn't go very well. It means that we're inventing new weapons of mass destruction, including new kinds of weapons that we haven't even conceived of yet. It means that we're able to automate military power such that a single person, in principle, could control all military force, basically a robot army. 
It means potentially we'd be creating beings that have moral status themselves, that actually we should give some moral consideration to. And, then finally, if we are creating, in such a short period of time, AI systems that are far, far more intelligent than us and more capable than us, think about the risk that we could lose control to those systems too. 


And, so the way I think of AI is like an analogy to the industrial revolution, where the industrial revolution just meant that the pace of technological change increased by a factor of about 30. And, I actually think AI could increase the pace of change by a factor of 30, or even more than that, again. 


And, so it's like this accelerator that brings a whole suite of different concerns that we need to pay attention to and carefully govern. 
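One way to see why "AI that builds better AI" is such a dizzying prospect is with a toy growth model. This is just an illustration of the feedback loop, not a model Will presents here, and the constants are hypothetical:

$$\frac{dC}{dt} = k\,C^{\,1+\varepsilon}, \qquad k > 0,\ \varepsilon \ge 0,$$

where $C$ is AI capability and the exponent $1+\varepsilon$ encodes capability feeding back into its own research. With $\varepsilon = 0$ this is ordinary exponential growth, but for any $\varepsilon > 0$ the solution $C(t) = \bigl(C_0^{-\varepsilon} - \varepsilon k t\bigr)^{-1/\varepsilon}$ diverges in finite time, which is the mathematical signature of an intelligence explosion: better AI makes building better AI faster. The formal economic-growth models Will alludes to are far more careful than this, but they interrogate the same feedback.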


[00:32:05] Chris Anderson:
Mm-Hmm. It really does feel as if all bets are off, and it also feels as if it's gonna be incredibly hard to stop the train now. A lot of people in the space say that there's a 5% chance that things could go horribly wrong, but they'll probably go right, they'll probably be better. 


Um, but horribly wrong can include consigning all of humanity to irrelevance. And, that's definitely the outcome I fear most, that we're not gonna be the main game in town much longer. And, what will come will be amazing and astounding, and who knows what we unlock. But, I also worry about the fact that any good thing that happens in the world, you know, happens as an intentional battling against a chaotic universe.

You know, we build things carefully and slowly, and you put them together and you make gradual progress, and then bad things happen very quickly and sort of blow up part of that progress, and then you try again, try again. But, the more powerful the things we build, the bigger those bangs can be, and the fact that we're creating technological power that could take out 8 billion humans very quickly is shocking. Is there a pathway to… 


[00:33:24] Will MacAskill:
To a good outcome?

[00:33:25] Chris Anderson:
Avoiding this, 
avoiding this without, without sounding like complete Luddites.

[00:33:29] Will MacAskill:
Uh, yeah.

[00:33:30] Chris Anderson:
Perhaps we should become Luddites. 


[00:33:31] Will MacAskill:
So, I think there is a pathway, um, and I think AI has potential to do an enormous amount of good as well. 
And, I think part of the solution, in fact, will be using AI to help us with the problems we face, including the problems of AI alignment. So, one thing I think we can do is try to accelerate the helpful parts of AI and push back the kind of scarier, more dangerous parts. So, there's an idea called tool AI, which is AI where you just ask it to do something and it helps you out. 


It's kind of just like an input-output system, so ChatGPT is like this. Contrast that with agentic AI, or agent AI. And, that's where you can tell it to do something, and it's not just giving you an answer, like a kind of article. Instead, it's actually going out in the world and making a lot of changes. That is a lot more dangerous. 


And, upon the release of GPT-4, one year ago now, many people tried to turn it into an agent, including some people who created, uh, what they called ChaosGPT, which was explicitly instructed to take over the world and achieve digital immortality. Uh, thankfully it wasn't very good. But, what we could do is to say, look, tool AI is amazing. 

Tool AI can really help us out. Agentic AI, we are not prepared for. So, let's maybe even subsidize and accelerate tool AI, but really put intense regulation and the brakes on agentic AI. That, I think, would help an awful lot. 


[00:34:57] Chris Anderson:
Well, look, this is such a huge conversation, and if this was a cause that someone wanted to support or get into, is there any resource that you can point them to? 


[00:35:10] Will MacAskill:
So, for effective altruism in general, the single place I'd most love to point people is Giving What We Can. So, that's the organization that I helped set up 15 years ago now, and it encourages people to take a pledge to give at least 10% of their income. And I believe you're a member of this, is that right?

[00:35:29] Chris Anderson:
I am a member, and this is actually what I want to turn to now. It is a big and bold thing, where you are asking people to do what certainly religions, in a way, have asked people to do for a long time, but which has fallen outta fashion, which is to make a pledge. Pledging anything, I think, shifts you from impulsive charitable giving to thoughtful charitable giving. And, that to me is almost the biggest single thing. If you know that each year you're going to give away a material amount, then it just makes it natural, completely natural, to start to think, well, okay, what should I give it to? 


[00:36:08] Will MacAskill:
Yeah, absolutely. And, it's certainly the case for me that I don't think I would be giving nearly as much as I am if I hadn't made those earlier statements and I didn't have a community of people around me who, uh, are encouraging and think of this as like a cool thing to do rather than something weird or abnormal. 


[00:36:25] Chris Anderson:
I mean, look, in the book, I argue for this pledge, and specifically I think givingwhatwecan.org actually is the best place you can go to join a community and make the pledge public. They've got a lot of great tools there that allow you to potentially step up to 10% if you're not able to do that initially. But, I think for a lot of people, Will, we need more sort of life-hack-type motivations to get there. Maybe you need to see a picture of the child from the organization who you are supporting. Maybe you need to take a trip with them from time to time and actually see the work in person so that you can feel it. 
Maybe you need to hook up with a community of supporters who will help you feel, okay, we're doing this as a team effort. 


[00:37:12] Will MacAskill:
Yeah. So, you know, when I was thinking about, okay, how much am I gonna pledge? Am I gonna stick at 10%? Am I gonna go even more? I actually did just go through photos of children in poor countries suffering from neglected tropical diseases, and what I thought to myself was, can I look these children in the eye? You know, it's photos of them, but looking them in the eye, and say, look, I can justify having this money to myself. And, I remember in particular one child who had, um, an unusual condition, lymphatic filariasis of the face, also known as elephantiasis. 


So, it's a condition that just makes a body part swell up, and so their face was just incredibly swollen, an utterly debilitating disease. And, honestly, I just thought, look, if my donations, if half my income or whatever, can prevent this one child from having a condition like that, it will have been worth it. 
It will have been well worth it. Let alone the fact that, actually, it's hundreds of times more than that. 


[00:38:17] Chris Anderson:
Mm-hmm. So, I think it's completely legitimate to do that. To think of yourself as this complex mix of head and heart, and to say, I need to find the ways that bring my heart along on the journey here, because the head is important, but it's probably not gonna keep me going the whole way. Speaking of which, I worry that for some people, like you, you described yourself as, you know, now giving more than 50% of your income. I mean, that's incredible. Some people will hear that and go, oh my God, if this is the logical outcome of the journey I must go on, I just don't want to hear any more. 


And, so Peter Singer's arguments are so powerful, and yet they have scared a lot of people. So I tried in the book, and I would like to bounce this off your philosophical critique, I tried in the book to put together an argument that said that if you embark on this journey, it's actually not a journey towards unlimited giving; that a pledge can be a ceiling as well as a floor in terms of what we require from each other. And, the argument was this. First of all, it was to expand the pledge so that it wasn't just an income pledge. For the very rich, an income pledge is actually not that challenging. Many billionaires, for example, don't have much income relative to their wealth. 

Um, and for the very rich, I think the traditional Muslim pledge, Zakat, of two and a half percent of your net wealth annually, is a much more challenging and a much more important pledge to make.

So, I then did a calculation, Will, that said: what would happen if a meaningful number of the people who could afford to do so gave the higher of either 10% of their income or two and a half percent of their net worth annually? How much would that actually raise? And, even if only a third of the people who could do that did that, I think on the numbers we came up with, it was 3 or 4 trillion dollars annually. And, working with another colleague of yours, Natalie Cargill, she helped calculate the amount that this could achieve. You could argue, I'm not sure whether she would argue, but I certainly could argue, that it kind of counts as almost maximum philanthropy; that at this level, the bottleneck to making a better future is no longer philanthropy, it's execution of that huge amount of philanthropy.

Therefore, it's reasonable to say that that is enough; that if people commit to pledging the higher of 10% of their income or 2.5% of their net worth, they don't have to feel guilty beyond that. They have fulfilled their obligation. And, we actually want obligations out there that a typical human can reasonably fulfill; if we have moral obligations that are so high, they'll just get ignored. 
So, I'm, in a clumsy way, trying to articulate a moral philosophy that says moral principles need to take into account human limitations. 
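The pledge rule Chris is sketching can be written out explicitly. This is just a restatement of what he says, and the example figures are hypothetical rather than numbers from the book:

$$\text{annual pledge} = \max\bigl(0.10 \times \text{income},\ 0.025 \times \text{net worth}\bigr)$$

For someone earning \$100,000 with \$200,000 in net worth, that's $\max(\$10{,}000,\ \$5{,}000) = \$10{,}000$, so the income term binds. For someone with \$50,000 of income but \$10 million in net worth, it's $\max(\$5{,}000,\ \$250{,}000) = \$250{,}000$, which is why the wealth term is what makes the pledge meaningful for the very rich.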


[00:41:38] Will MacAskill:
Yeah, I mean, I think it's just an excellent point. So, you know, you mentioned Peter Singer, who had the most hardcore view: give until you're at the poverty line. I mean, he doesn't do that. 


He gives a lot. He gives 40% or something of his income, which is a good income. I don't give as much as I could, and I don't give until I feel miserable. I mean, it feels embarrassing to say, but I felt a lot of guilt, 'cause I felt like there are these huge problems in the world, the world is going to hell in a handbasket, I am not doing anything about it, I don't know what to do, and I just felt really bad. And, once you start giving 10%, okay, it's not as much as you could in principle do, but it's more than most people are doing. Most people give 1% or 2%, and it's meaningful. It's really making a meaningful difference in the world. 


And, you know, that feeling of guilt significantly does go away. Instead, you start to think more like, okay, I'm actually embarking on this practical project of making the world better. That feels inspiring rather than demoralizing. 


[00:42:42] Chris Anderson:
Hmm. Okay. So, I think we're both, in our different ways, saying we can do two things at once. 
We can embrace the full power of the Peter Singer arguments, that we have an obligation actually to every person on the planet, maybe every person of the future, every person of the past, a huge, unlimited moral obligation. But, we also can be human. We can live within reasonable human capabilities, and it is not a moral objective to lead a life of permanent guilt. 


It really isn't. And, I love the site you've built there, givingwhatwecan.org. Honestly, I would urge anyone listening to this, if you're persuaded by any of this, to consider heading on over there, um, using some of the tools they have. And, you don't have to start with 10% or two and a half percent of net worth or anything. 


You could just start with something, start with something, and then use it as an excuse to have, you know, a regular discussion with those close to you about how to spend that money effectively, and be amazed at the joy that that actually brings with it. Is there any final thing you'd like to say to people? 


[00:43:54] Will MacAskill:
I just thought that was, yeah, utterly correct and really quite inspiring, Chris. And, I just wanna thank you as well for your support, so, both putting giving into practice, and then, not only that, but going out to bat to try and get people to donate more. We all have this huge, amazing opportunity to do good in the world just by moving a fraction of our money, spending it in a different way. 


And, that's something that should be celebrated and I'm so glad that you are, uh, being part of that celebration.
[00:44:25] Chris Anderson:
Well, we're fellow travelers here, and we certainly welcome anyone else who wants to come and be a fellow traveler. Come on and be part of this journey. It's an amazing journey. Will, you've been an incredible thought leader for a long time, and you've been through a lot and it's been very powerful to hear your vision of the story here over the last couple years. So, thank you so much for your work and for this time, this conversation now.
[00:44:51] Will MacAskill:
Thank you, Chris. 


[00:44:55] Chris Anderson:
Okay, well, that's about all for today. If you liked this conversation, please consider sharing it with others. I mean, you could think of that as your own act of infectious generosity. You never know, one person you mention this to, or someone they might mention it to, might be inspired enough by Will MacAskill to do something big. 


And, if they do, it'll be because of you. Next week we're talking to Hamdi Ulukaya. He's the founder of Chobani, the most popular Greek yogurt brand in the USA. But, despite his success, he has referred to himself as an anti-CEO because of his unique approach to bringing a spirit of generosity into his business strategy. 


I can't wait to introduce you to his work. For more, follow along in my book, Infectious Generosity. You can access a free copy of the audio book at ted.com/generosity. The TED Interview is part of the TED Audio Collective, a collection of podcasts dedicated to sparking curiosity and sharing ideas that matter. 


This episode was produced by Jess Shane. Our team includes Constanza Gallardo, Grace Rubenstein, Banban Cheng, Michelle Quint, Roxanne Hai Lash, and Daniella Balarezo. This show is mixed by Sarah Bruguiere.

Thank you so much for listening. I'll catch you next time.