Sam Altman on the future of AI and humanity (Transcript)
ReThinking with Adam Grant
Sam Altman on the future of AI and humanity
January 7, 2025
Please note the following transcript may not exactly match the final audio, as minor edits or adjustments could be made during production.
[00:00:00] Sam Altman: One of the surprises for me about kind of this trajectory OpenAI has launched onto since the launch of ChatGPT is how many things can go wrong by one o'clock in the afternoon.
[00:00:12] Adam Grant: Hey everyone, it's Adam Grant. Welcome back to ReThinking, my podcast on the science of what makes us tick with the TED Audio Collective.
I'm an organizational psychologist, and I'm taking you inside the minds of fascinating people to explore new thoughts and new ways of thinking. My guest today is Sam Altman, CEO and co-founder of OpenAI. Since Sam and his colleagues first dreamed up ChatGPT, a lot has changed.
[00:00:37] Sam Altman: You and I are living through this once-in-human-history transition where humans go from being the smartest thing on planet Earth to not the smartest thing on planet Earth.
[00:00:49] Adam Grant: The exponential progress of AI has made me rethink many of my assumptions about what's uniquely human, and raised far more questions than answers. Since the source code is a black box, I figured it was time to go to the source himself. Having crossed paths with Sam at a few events, I've appreciated his willingness to think out loud instead of just sticking to scripted talking points, even when his opinions are unpopular.
[00:01:11] Sam Altman: I suspect that in a couple of years on almost any topic, the most interesting, maybe the most empathetic conversation that you could have will be with an AI.
[00:01:25] Adam Grant: Sam Altman does his own tech check. How did that happen?
[00:01:29] Sam Altman: Uh, you know, I don't know. It's fine.
[00:01:31] Adam Grant: There's no handler here. So, I have to start where I'm sure many conversations have kicked off over the past year, which is: what did it feel like to be fired from your own company?
[00:01:40] Sam Altman: This like surreal haze. The confusion was kind of the dominant first emotion. Then there were like, I went through everything, but confusion was the first one.
[00:01:50] Adam Grant: And then?
[00:01:50] Sam Altman: Then like frustration, anger, sadness, gratitude, I mean, it was everything.
[00:01:56] Adam Grant: Wow.
[00:01:57] Sam Altman: That 48 hours was like a full range of human emotion. It was like impressive in the breadth.
[00:02:02] Adam Grant: What did you do with those emotions in that, in that 48 hours?
[00:02:05] Sam Altman: Honestly, there was so much to do, just like tactically, that there was not a lot of time for, like, dealing with any emotions. So in those 48 hours, not much. And then it was like hard after, when the dust was settling and I had to like get back to work in the midst of all of this.
[00:02:20] Adam Grant: I remember Steve Jobs saying years later, after he was forced out of Apple, that it was awful-tasting medicine, but I guess the patient needed it. Is that relatable in any way? Or is this situation just too different?
[00:02:32] Sam Altman: Maybe it hasn't been long enough. I have not reflected deeply on it recently. I think it was so different from the Steve Jobs case in all of these ways, and it was also just so short.
The whole thing was like totally over in five days. This like very strange fever dream and then like back to work picking up the pieces.
[00:02:48] Adam Grant: I guess five days versus a decade is a slightly different learning curve, right?
What, what did you learn, lessons-wise?
[00:02:54] Sam Altman: I actually, maybe I was wrong. Maybe it was only four days.
I think it was four days. I learned a bunch of stuff that I would do differently next time about how we communicated during and after that process and like the need to just sort of be direct and clear about what's happening. I think there was this like cloud of suspicion over OpenAI for a long time that we could have done a better job with.
I knew I worked with great people, but seeing how good the team was in a crisis and in a stressful situation with uncertainty. One of the proudest moments for me was watching the executive team kind of operate without me for a little while, and knowing that any of them would be perfectly capable of running the company.
And I felt a lot of pride both about picking those people, about teaching them to whatever degree I did, and just that the, the company was in a very strong place.
[00:03:37] Adam Grant: I'm surprised to hear you say that. I, I had assumed your proudest moment would've been just the sheer number of employees who stood behind you.
As an organizational psychologist, I, I thought it was staggering to see the outpouring of loyalty and support from inside.
[00:03:49] Sam Altman: It did feel nice, but that's not the thing that, like, sticks in my mind. I remember feeling like very proud of the team, but not for that reason.
[00:03:55] Adam Grant: Well, I guess that's also very Jobsian then. When he was asked what his proudest achievement was, it wasn't the Mac or the iPod or the iPad, it was the team that built those products.
[00:04:05] Sam Altman: I don't do the research. I don't build the products. I make some decisions, but not most of them. The thing I get to build is the company, so that is certainly the thing I have pride of authorship over.
[00:04:15] Adam Grant: So what do you actually do? Like how do you spend your time?
[00:04:19] Sam Altman: It's a great question. On any given day, it's pretty different and fairly chaotic. Somehow the early mornings are never that chaotic, but then it often like all goes off the rails by the afternoon and there's all this stuff that's happening and you're kind of in reaction mode and firefighting mode. So I've learned to get the really important things done early in the day. I spend the majority of my time thinking about research and the products that we build, and then less on everything else. But what that could look like at any given time is very different.
[00:04:46] Adam Grant: So one of the things that, that I've been very curious about as I've watched you turn the world upside down in the last couple years is like, what, what's gonna happen to humans?
I've been tracking what I think is the most interesting research that's been done so far, and humans are losing a lot faster than I hoped, a lot faster. So I think we're already behind on creativity, on empathy, on judgment, on persuasion, and I want to get your reactions to some data points in each of those areas.
But first, like just your commentary on the overall. Are you surprised by how quickly AI has surpassed a lot of human capabilities?
[00:05:21] Sam Altman: Our latest model feels smarter than me in almost every way, and it doesn't really impact my life.
[00:05:27] Adam Grant: Really?
[00:05:27] Sam Altman: I still care about the same kinds of things as before. I can work a lot more effectively.
I assume as society digests this new technology, society will move much faster. Certainly, scientific progress I hope will move much faster. And we are coexisting with this amazing new artifact, tool, whatever you wanna call it, but how different does your day-to-day life feel now from a few years ago?
Kind of not that different? I think that over the very long term, AI really does change everything. But I guess what I would've naively thought a decade ago is that the day we had a model as powerful as our most powerful model now, everything was gonna change. And now I think that was a naive take.
[00:06:05] Adam Grant: I, I think this is the, the standard. Like we overestimate change in the short run and underestimate it in the long run.
[00:06:11] Sam Altman: Right, exactly.
[00:06:12] Adam Grant: So you're living a version of that.
[00:06:13] Sam Altman: Eventually, I think the whole economy transforms. We'll find new things to do. I have no worry about that. We'll, we always find new jobs, even though every time we stare at a new technology, we assume they're all gonna go away.
It's true that some jobs go away, but we find so many new things to do and hopefully so many better things to do. I think what's gonna happen is this is just the next step in a long unfolding exponential curve of technological progress.
[00:06:37] Adam Grant: I think in some ways the AI revolution looks to me like the opposite of the internet. Because back then, people who were running companies didn't believe that the internet was gonna change the world, and their companies died because they, they didn't make the changes they needed to make. But for the people who bought in, it was really clear what the action implications were. Like, I need to have a functioning website. I need to know how to sell my products through that website.
Right? It was not rocket science to adapt to the digital revolution. What I'm hearing right now from a lot of founders and CEOs is the reverse. Everybody believes that AI is game changing and nobody has a clue what it means for leadership, for work, for organizations, for products and services. They're all in the dark.
[00:07:16] Sam Altman: In that sense, it's more like the industrial revolution than the internet revolution. There are huge known unknowns of how this is gonna play out, but I think we can say a lot of things about how it is gonna play out, too.
[00:07:27] Adam Grant: I wanna hear those things. A couple hypotheses that, that I have. One is that we're gonna stop valuing ability and start valuing agility in humans.
[00:07:36] Sam Altman: There will be a kind of ability we still really value, but it will not be raw, intellectual horsepower to the same degree.
[00:07:42] Adam Grant: And what do you think the new ability is that matters?
[00:07:44] Sam Altman: I mean, the like kind of dumb version of this would be figuring out what questions to ask will be more important than figuring out the answer.
[00:07:51] Adam Grant: That's consistent with what I've seen even just in the last couple years, which is we used to put a premium on how much knowledge you had collected in your brain, and if you were a fact collector that made you smart and respected. And now I think it's much more valuable to be a connector of dots than a collector of facts that if you can synthesize and recognize patterns, you have an edge.
[00:08:12] Sam Altman: If you ever watched that TV show Battlestar Galactica, one of the things they say again and again on the show is, all this has happened before, all this will happen again. And when people talk about the AI revolution, it does feel different to me in some super important qualitative ways, but it also reminds me of previous technological panics. When I was a kid, this thing came out, this new thing launched on the internet. I thought it was cool. Other people thought it was cool. It was clearly way better than the stuff that came before. I was not quite old enough yet for this to happen directly to me, but the older kids told me about it.
The teachers started banning the Google because-
[00:08:47] Adam Grant: Did they call it The Google?
[00:08:49] Sam Altman: The Google. If you could just look up every fact, then what was the purpose of going to history class and memorizing facts? We were gonna lose something so critical about how we teach our children and what it means to be a responsible member of society.
And if you could just look up any fact instantly, you didn't even have to, like, fire up the combustion engine, drive to the library, look in the card catalog, find a book. It was just there. It felt unjust, it felt wrong. It felt like we were gonna lose something. We weren't gonna do that. And with all of these, what, what happens is, like, we get better tools, expectations go up.
So does what someone's capable of, and we just learn how to do more difficult, more impactful, more interesting, whatever, things. And I expect AI to be like that too. If you asked someone a few years ago, A, will there be a system as powerful as o1 in 2024? And B, if an oracle told you you were wrong, and there will be, how much would the world change?
How much would your day-to-day life change? How would we face an existential risk or whatever? Almost everybody you asked would've said, definitely not on the first one, but if I'm wrong, and it happens like we're pretty fucked on the second. And yet this amazing thing happened and here we are.
[00:10:00] Adam Grant: So in the realm of innovation, there's a new paper by Aidan Toner-Rodgers, which shows some great news for R&D scientists: when they're AI-assisted, they file 39% more patents, and that leads to 17% more product innovation.
And a lot of that is in radical breakthroughs, novel chemical structures being discovered. And the major gains are for top scientists, not bottom ones. There's very little benefit if you're in the bottom third of scientists, but the productivity of the top ones almost doubles, and that doubling seems to be because AI automates a lot of idea-generation tasks and allows scientists to focus their energy on idea evaluation, where the great scientists are really good at recognizing a promising idea and the bad ones are vulnerable to false positives.
So that's all good news, right? Incredible unlocking of scientific creativity. But it comes with a cost, which is, in the study, 82% of scientists are less satisfied with their work. They feel they get to do less creative work and their skills are underutilized, and it seems like humans, in that case, are being reduced to judges as opposed to creators or inventors.
I would love to know how you think about that evidence and what, what do we do about that?
[00:11:10] Sam Altman: I have two conflicting thoughts here. One of the most gratifying things ever to happen at OpenAI, for me personally, is, as we've released these new reasoning models, giving them to great, legendary scientists, mathematicians, coders, whatever, asking what they think, and hearing their stories about how this is transforming their work and how they can work in new ways.
I have certainly gotten the greatest professional joy from having to really creatively reason through a problem and figure out an answer that no one's figured out before. And when I think about AI taking that over, if it happens that way, I do feel some sadness. What I expect to happen in reality is just there's gonna be a new way we work on the hard problems.
It's being an active participant in solving the hardest problems that brings the joy. And if we do that with new tools that augment us in a different way, I kind of think we'll adapt, but I'm uncertain.
[00:12:05] Adam Grant: What does that look like in your job right now? Like how do you, how do you use ChatGPT, for example, in solving problems that you face at work?
[00:12:12] Sam Altman: Honestly, I use it in the boring ways. I use it for, like, help me process all of this email, or help me summarize this document. Just the, the very boring things.
[00:12:20] Adam Grant: It sounds like then you're, you're hopeful that we'll adapt in ways that allow us to still participate in the creative process.
[00:12:27] Sam Altman: I am hopeful, that's so deeply the human spirit and the way I think this all continues kind of no matter what, but it will have to evolve and it will be somewhat different.
[00:12:34] Adam Grant: Another domain where I expected humans to have an edge much longer than, than we've stuck it out so far is empathy. My favorite experiments that I've read so far basically show that if you're having a, a text conversation and you don't know whether it's a human or a ChatGPT, and then afterward you're asked, how seen did you feel?
How heard did you feel? How much empathy and support did you get? You feel that you got more empathy and support from AI than you did from a human, unless we tell you it was AI and then you don't like it anymore.
[00:13:03] Sam Altman: Right.
[00:13:03] Adam Grant: I look at that evidence as a psychologist and I have a couple reactions. One is, I think it's not that AI is that good at empathy.
It's that our default as humans is pretty poor, right? We slip into conversational narcissism way too quickly, where somebody tells us a problem and we start to relate it to our own problem as opposed to showing up for them. So I think maybe that's just a, an indictment of human empathy having a poor baseline. But also I wonder how long
this "I don't want it if I know it's from an AI" is gonna last as we start to humanize and anthropomorphize this tech more and more.
[00:13:36] Sam Altman: Let me first talk about the sort of general concept of people sometimes preferring the actual output of something, if it's AI, until they're told that it's AI and then they don't like it, that you see that over and over again.
I saw a recent study that even among people who claimed that they really hated AI art the most on whatever scale you choose, they still selected more output of AI than of humans for the pieces of art they liked the most, until they were told which was AI and which wasn't. And then of course it was different.
We could pick many other examples. But this trend, that AI has in many ways caught up to us and yet we are hardwired to care about humans and not AI, I think is a very good sign. We're all in speculation here, so I'll say I have a very high uncertainty on all of this. But although you'll probably talk more to an AI than you do today, you will still really care when you're talking to a human. This is something very deep in our biology and our evolutionary history and our social functioning, whatever you wanna call it.
[00:14:38] Adam Grant: Why do you think we will still want human connection? It sounds like a version of the Robert Nozick argument that led to The Matrix, of people preferring real experience over sort of simulated pleasure. Do you think that's what we're craving? We just want the real human connection, even if it's flawed and messy? Which, of course, AI is gonna learn to simulate too.
[00:14:56] Sam Altman: I think you'll find very quickly that talking to a flawless, perfectly empathetic thing all of the time, you miss the drama or the tension or, what, I, I, I, there'll be something there. I think we're just so wired to care about what other people think, feel, how they view us. And I don't think that translates to an AI.
I think you can have a conversation with an AI that is helpful and that you feel validated and it's a good kind of entertainment in a way that playing a video game is a good kind of entertainment. But I don't think it fulfills the sort of social need to be part of a group and a society in a way that is gonna register with us.
Now, I might be wrong about this and maybe AI can so perfectly hack our psychology that it does, and I'll be really sad if that's the case.
[00:15:39] Adam Grant: Yeah, me too. You're right. It's hard for AIs to substitute for belonging. It's also hard to get status from a bot, right, to feel important or cool or respected in ways that like we rely on other human eyeballs and ears for.
[00:15:53] Sam Altman: That was kind of what I was trying to get at. I can imagine a world soon where AIs are just like unbelievably more capable than us and doing these amazing things. And when I imagine that world and I imagine the people in it, I imagine those people still caring about the other people quite a lot, still thinking about relative status and sort of these silly games relative to other people quite a lot. But I don't think many people are gonna be measuring themselves against what the AI is doing and capable of.
[00:16:19] Adam Grant: So one of the things that I've been really curious about is, in a world where information is increasingly contested and, and facts are, are harder and harder to, to persuade people of, we see this, for example, in the data on conspiracy theory beliefs. Like, people believe in conspiracies because it makes them feel special and important, and like they have access to knowledge that other people don't.
That's not the only reason, of course, but it's one of the driving reasons. And what that means is it's really hard for another human to talk them out of those beliefs, because they're kind of admitting that they're wrong. And I was fascinated by a recent paper, this is Costello, Pennycook, and Rand. They showed that if you have a single conversation with an AI chatbot, it can, even months later, basically get people to unwind a bunch of their conspiracy theories.
It starts by essentially just targeting a false claim that you believe in.
[00:17:09] Sam Altman: Yeah.
[00:17:09] Adam Grant: And debunking it. And I think it works in part because it's responsive to the specific reasons that you have attached to your belief, and in part because, like, nobody cares about looking like an idiot in front of a machine like they do a human. And not only do people, I think about 20% of people, let go of their absurd conspiracy beliefs, but also they let go of some other beliefs that the AI didn't even target. And so I think that door opening is very exciting. Obviously this can be used for evil as well as good, but I'm really curious to hear about what, what your take is on this newfound opportunity we have to correct people's misconceptions with these tools.
[00:17:47] Sam Altman: Yeah, there are people in the world that can do this, that can kind of expand our mind in some way or other. It's very powerful. There's just not very many of them, and it's a rare privilege to get to talk to them. If we can make an AI that is like the world's best dinner party guest, super interesting, knows about everything, incredibly interested in you, and takes the time to, like, understand where they could push your thinking in a new direction, that seems like a good thing to me. And I've also had this experience with AI, where I, I had the experience of talking to a real expert in an important area and that changing how I think about the world, which for sure there is some human that could have done that, but I didn't happen to be with him or her, right?
[00:18:27] Adam Grant: Then it also obviously raises a lot of questions about the hallucination problem and accuracy. As an outsider, it's really hard for me to understand why this is such a hard problem. Can you explain this to me in a way that will make sense to somebody who's not a computer scientist?
[00:18:41] Sam Altman: Yeah. I think a lot of people are still stuck back in the GPT-3 days, ancient history back in 2021, when none of this stuff really worked.
It did hallucinate a lot. If you use the current ChatGPT, it still hallucinates some, for sure, but I think it's, like, surprising that it's generally pretty robust. We train these models to make predictions based off of all the words they've seen before. There's a bunch of wrong information in the training set.
There's also sometimes cases where the model fails to generalize like it should, and teaching the model when it should confidently express that it doesn't know versus, you know, like, make its guess is still an area of research. But it's getting a lot better, and with our new reasoning models, there's a big step forward there too.
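[Editor's note: a toy sketch of the calibration problem Altman describes, purely illustrative and not OpenAI's actual training or decoding method. A language model always emits some next token, even when its probability distribution over candidates is nearly flat, which is exactly when a guess becomes a hallucination. The prompt, the candidate probabilities, and the threshold below are all hypothetical.]

```python
import math

# Hypothetical next-token probabilities a model might assign after a prompt
# like "The capital of Australia is". A nearly flat distribution means the
# model is effectively guessing, yet greedy decoding still picks a token.
candidates = {"Canberra": 0.31, "Sydney": 0.28, "Melbourne": 0.22, "Perth": 0.19}

def entropy_bits(dist):
    """Shannon entropy in bits; higher means the model is less certain."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

top_token = max(candidates, key=candidates.get)

# A crude calibration rule: below some confidence threshold, abstain
# ("I don't know") instead of confidently stating the top guess.
CONFIDENCE_THRESHOLD = 0.5  # hypothetical cutoff
if candidates[top_token] >= CONFIDENCE_THRESHOLD:
    print(f"Answer: {top_token}")
else:
    print(f"Low confidence (top p={candidates[top_token]:.2f}, "
          f"entropy={entropy_bits(candidates):.2f} bits) -> abstain")
```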
[00:19:23] Adam Grant: I've prompted ChatGPT in various iterations, like, is this true? Can you please make sure this is an accurate answer? That should be built in as, you know, a required step in the iteration. So is that where we're heading, then? That it just becomes an automatic part of the process.
[00:19:37] Sam Altman: I think that will become part of the process.
I think there will be a lot of other things that make it better too, but that will be part of the process.
[00:19:43] Adam Grant: There's some brand new research, and there have been a bunch of these kinds of studies over the last year or two, but the one that, that sort of blew my mind this past week was: when you compare AI alone to doctors alone, of course AI wins, but AI alone also beats doctor-plus-AI teams.
And my read of that evidence is that like doctors aren't benefiting from AI assistance because they override the AI when they disagree.
[00:20:11] Sam Altman: You see versions of this throughout history, like when AI started playing chess. There was a time when humans were better. Then there was a time when AIs were better. And then, for some period of time, I forget how long, the AI plus humans working together were better than AI alone, because they could sort of bring the different perspectives.
And then there came a time where the AI was again better than an AI plus a human, because the human was overriding and making mistakes where they just didn't see something. If you view your role as to try to override the AI in all cases, then it turns out not to work. On the other hand, and this is the second thing, I think we're just early in figuring out how humans and AI should work together.
The AI is gonna be a better diagnostician than the human doctor, and that's probably not what you wanna fight. But there will be a lot of other things that the human does much better, or at least that the people, the patients want a person to be doing. And I think that'll be really important. I've been thinking about this a lot.
I'm expecting a kid soon. My kid is never gonna grow up being smarter than AI. You know, for kids that are about to be born, the only world they will know is a world with AI in it. And that'll be natural. And of course it's smarter than us. Of course it can do things we can't. But also, who really cares?
I think it's only weird for us in this one transition time.
[00:21:21] Adam Grant: In some ways, that's a force for humility. Which I think is a good thing. On the other hand, we don't know how to work with these tools yet, right? And maybe some people are getting a little too dependent on them too quickly.
[00:21:31] Sam Altman: You know, I can't spell complicated words anymore because I just trust that autocorrect will save me.
I feel fine about that. It's easy to have moral panics about these things. Even if people are more dependent on their AI to, like, help them express thoughts, maybe that is just the way of the future.
[00:21:45] Adam Grant: I've seen students who don't wanna write a paper without having ChatGPT handy, because, like, they've gotten rusty on the task of rough drafting, and they're used to outsourcing a lot of that and then having raw material to work with, as opposed to having to generate something in front of a blank page or a blinking cursor.
And I, I do think there is a little bit of that, of that dependency that's building. Do you have thoughts on, on how we prevent that? Or is that just the future and we ought to get used to it?
[00:22:10] Sam Altman: I'm not sure that is something we should prevent. For me, writing is outsourced thinking and very important, but as long as people replace it with a better way to do their thinking, a new kind of writing, that seems directionally fine.
One of the sad things about getting more well known is, if I don't phrase everything perfectly, then for very little benefit to me or to OpenAI, I just, like, open up a ton of attacks or, or whatever, and that is a bummer.
[00:22:34] Adam Grant: I, I do think that is, that is a privilege you, you lose, the ability to just riff and play with ideas publicly and, and be partially wrong or have incomplete thoughts.
[00:22:44] Sam Altman: Mostly wrong. Mostly wrong with some gems in there.
[00:22:46] Adam Grant: I mean, that being said, like some of us are grateful that you're a little more circumspect than some of your peers who don't exercise any self-reflection or self-control.
[00:22:55] Sam Altman: Well, that's a different thing. Like there's also something about just like being a thoughtful, somewhat careful person, which, yes, I think more people should do.
The thing I think is really silly: a reasonably common workflow is that someone will write the bullet points of what they wanna say to somebody else, have ChatGPT write it into a nice multi-paragraph email, and send it over to somebody else. That person will then put that email in ChatGPT and say, tell me what the three key bullet points are.
And so I think there is some vestigial formality of writing and communication or whatever that probably doesn't still have a lot of value. And I'm fine to get to a world where the social norms evolve so that everybody can just send each other the bullet points.
[00:23:35] Adam Grant: I really want a watermark, or at least some internal memory, where ChatGPT can say back, hey, like, this was already generated by me, and, like, you should go back and tell this person you wanted bullet points so that you all can communicate more clearly in the future. In part, what's going on is, is a lot of people are, are slow to adapt to the tools. We are seeing some really interesting human ingenuity. So the, the evidence that jumps to mind for me is a, a study by Sharon Parker and her colleagues. Um, this is in the, the realm of robotics technology. So they go into a manufacturing company that's essentially starting to replace humans with robots. And instead of getting panicked that people are no longer gonna have jobs, a bunch of employees say, well, we need to find a unique contribution, we need to have meaning at work. And they get that by outsmarting the robots. Like, they study the robots, they figure out what they suck at, and then they're like, okay, we are gonna make that our core competence.
Now, I think the scary thing with o1 and the advances in reasoning is that, like, a lot of the skills that we thought would differentiate us last year are now already obsolete, right? Like the, the prompting tricks that a lot of people were using in 2023 are no longer relevant, and some of them are, are never gonna be necessary again. So what, what are humans gonna be for in 50 or a hundred or a thousand years?
[00:24:47] Sam Altman: No one knows, but I think the more interesting answer is, what is a human useful for today? And I would say being useful to other people, and I think that'll keep being the case. A thing that someone said to me, this was Paul Buchheit many, many years ago, really stuck with me, as he had been thinking, and thinking, and thinking. This was like before OpenAI started. He thought that someday there was just gonna be human money and machine money, and they were gonna be completely separate currencies, and one wouldn't care about the other.
I don't expect that to be literally what happens, but I think it's a very deep insight.
[00:25:18] Adam Grant: Fascinating. I've never thought about machines having their own currency.
[00:25:23] Sam Altman: You will be thrilled that the AI has invented all of the science for you and cured disease and you know, made fusion work and just impossible triumphs we can't imagine. But will you care about what an AI does versus what some friend of yours does or some person running some company does? I don't know. Probably not that much.
[00:25:43] Adam Grant: No.
[00:25:43] Sam Altman: Like, maybe some people do. Maybe there's, like, some really weird cults around particular AIs. And I will bet we'll be surprised at the degree to which we're still very people-focused.
[00:25:55] Adam Grant: Okay. I think it might be time for a lightning round.
[00:25:58] Sam Altman: This is me in, like, GPT-4 mode instead of o1 mode, where I just have to, like, one-shot it, you know, as quickly as I can output the next token.
[00:26:04] Adam Grant: First question is, what's something you've rethought recently on AI or changed your mind about?
[00:26:09] Sam Altman: I think a fast takeoff is more possible than I thought a couple of years ago.
[00:26:13] Adam Grant: How fast?
[00:26:14] Sam Altman: Feels hard to reason about, but something that's in like a small number of years rather than a decade.
[00:26:18] Adam Grant: Wow. What do you think is the worst advice people are given on adapting to AI?
[00:26:22] Sam Altman: "AI is hitting a wall," which I think is the laziest fucking way to try to not think about it and just, you know, put it out of sight, out of mind.
[00:26:29] Adam Grant: What's your favorite advice on how to adjust, or what advice would you give on how to adapt and succeed in an AI world?
[00:26:35] Sam Altman: This is so dumb, but the obvious thing is, like, just use the tools. One thing that OpenAI does that I think is really cool: we put out the most powerful model that we know of that exists in the world today, o1, and anybody can use it if you pay us 20 bucks a month.
If you don't wanna pay us 20 bucks a month, you can still use a very good thing. It's out there at the, the leading edge. The most capable person in the world, and you, can access the exact same frontier, and I think that's awesome. And so go use it and figure out what you like about it, what you don't, what you think is gonna happen with it.
[00:27:03] Adam Grant: What's your hottest hot take or unpopular opinion on AI?
[00:27:06] Sam Altman: That it's not gonna be as big of a deal as people think, at least in the short term. Long term, everything changes. I kind of genuinely believe that we can launch the first AGI and no one cares that much.
[00:27:17] Adam Grant: People in tech care and philosophers care. Those are the, those are the two groups I've, I've heard react consistently.
[00:27:23] Sam Altman: And even then, they care, but, like, 20 minutes later, they're thinking about what they're gonna have for dinner that night.
[00:27:28] Adam Grant: What's a question you have for me as an organizational psychologist?
[00:27:30] Sam Altman: Oh. What advice do you have for OpenAI about how we manage our collective psychology as we kind of go through this crazy superintelligence takeoff? Like, how do we keep the people here sane, for lack of a better word?
We're not really into the, like, superintelligence part of the takeoff yet, but I imagine as we go through that, it'll just feel like this unbelievably high-stakes, immensely stressful thing. I mean, even now, as we're in sort of the AGI ramp, it feels a little bit like that. I think we need much more organizational resilience for what's to come.
[00:27:57] Adam Grant: And when you think about organizational resilience, what does that look like? Does that mean people are not as stressed as they're likely to become? Does that mean they're able to roll more quickly with change than they might naturally?
[00:28:09] Sam Altman: Good decisions in the face of incredible, incredibly high-stakes uncertainty, and also adaptability as the facts on the ground, and thus the actions that we need to consider or to take, change at a very rapid rate.
[00:28:24] Adam Grant: I think for me, the place to start on that is, is to draw a two-by-two and ask everybody at OpenAI to think about how consequential each choice they make is.
How high are those stakes? And then, how reversible is each choice? Are they walking through a, a revolving door, or is it gonna lock behind them? And I think where you really have to slow down and do all of your thinking and rethinking upfront is the highly consequential, irreversible decisions, because they really matter and you can't undo them tomorrow.
In the other three quadrants, it's fine to act quickly, experiment, pilot, stay open to doubting what you think. But that quadrant is where it's really important to get it right, and that's where I want people to put their best thinking and, and probably their best prompts.
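[Editor's note: to make Grant's two-by-two concrete, a minimal sketch in code, purely illustrative and not a tool Grant or OpenAI actually uses; the example decisions are hypothetical.]

```python
# A minimal sketch of the two-by-two Grant describes: triage decisions
# by how consequential they are and how reversible they are.
def triage(consequential: bool, reversible: bool) -> str:
    """Return how much deliberation a decision deserves."""
    if consequential and not reversible:
        # The quadrant Grant flags: think and rethink upfront,
        # because you can't undo these tomorrow.
        return "slow down: do your best thinking and rethinking upfront"
    # The other three quadrants: act quickly, experiment, pilot.
    return "move fast: experiment, pilot, stay open to doubt"

# Hypothetical examples of each kind of decision.
print(triage(consequential=True, reversible=False))  # e.g., releasing a frontier model
print(triage(consequential=True, reversible=True))   # e.g., a pricing change
```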
[00:29:05] Sam Altman: Makes sense.
[00:29:06] Adam Grant: So I wanna ask you about something you wrote.
You did a, a blog about how to be successful.
[00:29:10] Sam Altman: So long ago.
[00:29:11] Adam Grant: It was a long time ago.
[00:29:12] Sam Altman: I don't have that loaded in memory anymore.
[00:29:14] Adam Grant: That's okay, I have it right here. There was one section of it that I thought was particularly fascinating, on self-belief, so I'll quote you to you. Here you wrote: "Self-belief is immensely powerful. The most successful people I know believe in themselves almost to the point of delusion. Cultivate this early. As you get more data points that your judgment is good and you can consistently deliver results, trust yourself more." Do you still agree with that?
[00:29:36] Sam Altman: I think so. It's hard to overstate. When we were starting OpenAI, we believed this thing. That was, like, right about the time of maximum skepticism of OpenAI on the outside, relative to what we believed inside. And I think my most important contribution to the company in that phase was that I, I just kept reminding people, like, look, the external world hates anything new, hates anything that, like, might go in a different direction than established belief. And so people are saying all of these crazy negative things about us, and yet we have this incredible progress. And I know it's early, and I know we have to suspend disbelief to believe it'll keep scaling, but it's been scaling, so let's push it ridiculously far. And now it seems so obvious. But at the time, I truly believe that had we not done that, it might not have happened for a long time, because we were the only people that had enough self-belief to go do what seemed ludicrous, which was to spend a billion dollars scaling up a GPT model. So I think that was important.
[00:30:32] Adam Grant: I think it's all true. I think it's also scary, because those same people, the ones who believe in themselves to the point of delusion or almost delusion, are the ones who make terrible decisions outside of their domains of expertise.
And I think, I think if I were gonna modify what you wrote, I would say: as you get more data points that your judgment is good in a given domain.
[00:30:52] Sam Altman: Yes, yes.
[00:30:53] Adam Grant: Then you should trust yourself in that domain more.
[00:30:55] Sam Altman: That would've been a much better way to phrase it. I don't think it's true that experience and ability don't generalize at all, but many people try to generalize them too much.
I should have said something about, like, in your area of expertise. But there's nuance, because I also think you should be willing to, like, do new things. You know, I was an investor and not an AI lab executive, you know, six or seven years ago.
[00:31:17] Adam Grant: It also really matters whether you're in a stable or dynamic environment because you can trust your judgment that's based on intuition in a stable environment because you have subconsciously internalized patterns of the past that are still gonna hold in the future.
Whereas if you're in a more volatile setting, oftentimes your, your gut feeling is essentially trained on data that don't apply, right?
[00:31:38] Sam Altman: In that world, I think you wanna get even more towards the, like, really core underlying principles that you believe in and that work for you, because, yeah, those are even more valuable.
[00:31:47] Adam Grant: The last topic that I wanted to talk with you about is ethics. I know this is also something you've been thinking a lot about, talking a lot about. This is the domain in which most people are most uncomfortable outsourcing any kind of judgment to AI.
[00:31:59] Sam Altman: Me too.
[00:32:00] Adam Grant: And I think this is where we have to rely on humans at the end of the day. I'm hearing a lot of nuclear deterrence kinds of metaphors, of, like, okay, what we need is to race ahead of bad actors, and then we'll have mutually assured destruction. Like, wait a minute, the arms race metaphor doesn't work here, because a lot of the bad actors are not state actors.
They don't, they don't face the same risks or consequences. And then also, like, now we're gonna trust a private company as opposed to elected officials? This feels very complicated, and like it doesn't map. So talk to me about that and how you're thinking about the, the ethics and safety problems.
[00:32:36] Sam Altman: First of all, I think humans have gotta set the rules. Like, AI can follow them, and we should hold AI to following whatever we collectively decide the rules are, but humans have gotta set those. Second, I think people seem incapable of not thinking in historical analogy, and I understand that, and I don't think it's all bad, but I think it's kind of bad, because the historical examples just are not like the future examples.
So what I would encourage is for people to ground the discussion as much as they can in what makes AI different than anything before, based off what we know right now, not kind of wild speculation, and then to try to design a system that works for that. One thing that I really believe is that deploying AI as a tool that significantly increases individual ability, individual will, whatever you wanna call it, is a very good strategy for our current situation, and better than one company or adversary or person or whatever kind of using all the AI power in the world today. But I will also cheerfully admit that I don't know what happens as the AIs become more agentic in the big way. Not like we can go give them a task where they program for three hours, but where we can have them go off and do something very complicated that would normally require, like, a whole organization over many years. And I suspect we'll have to figure out new models again. I don't think history will serve us that well.
[00:33:53] Adam Grant: Now, it frankly hasn't in the software world. I think that any other technology that's powerful is regulated in the US, and I think, you know, it seems like the EU might be a little bit more competent compared to Congress.
[00:34:05] Sam Altman: I think what the EU is doing with AI regulation is not helpful for another reason. Like, for example, when we finish a new model, even if it's not that powerful, we can launch it in the US well before we can launch it in the EU,
'cause there's a bunch of regulatory process. And what if that means that the EU is always some number of months behind the frontier? I think they're just gonna build less fluency and economic engine and understanding and kind of whatever else you wanna put in that direction. It's really tricky to get the regulatory balance right, and also, we clearly, in my opinion, will need some.
[00:34:37] Adam Grant: What worries you the most when you look ahead in the next decade or so?
[00:34:41] Sam Altman: I think just the rate of change. I, I really believe in the sort of human spirit of solving every problem, but we got a lot to solve pretty quickly here.
[00:34:48] Adam Grant: One of the other things that I've been grappling with when I think about ethics and future impact is I thought so many digital technologies were gonna be democratizing, and we thought they were going to sort of prevent or at least chip away at inequality.
And very often it's been the opposite: the rich have gotten richer because they have had better access to these tools. Now, you pointed out that o1 is, is pretty cheap by American standards. There's still, I think, an access discrepancy. What is it gonna take to change that? What does it look like for AI to be a force for good in the developing world?
[00:35:22] Sam Altman: We've been able to drive the price per unit of intelligence down by roughly a factor of 10 every year. We can't do that for that much longer, no, but we've been doing it for a while, and I think it's amazing how cheap intelligence has gotten.
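[Editor's note: a quick worked computation of what "a factor of 10 every year" compounds to, purely illustrative; the starting price is hypothetical.]

```python
# Compounding a 10x-per-year price decline: after n years, the price per
# unit of intelligence is the starting price divided by 10**n.
start_price = 100.0  # hypothetical dollars per "unit of intelligence"
for years in range(1, 4):
    print(f"after {years} year(s): ${start_price / 10**years:.3f}")
# -> $10.000, $1.000, $0.100: three years of 10x declines is a 1000x drop.
```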
[00:35:35] Adam Grant: I guess in some ways, though, that, that works against the problem of, well, at least right now, like, the only players that can afford to make really powerful models are governments and huge companies that are accountable.
[00:35:47] Sam Altman: For now to train it, yes.
[00:35:49] Adam Grant: Yeah. But to use it is very different. So as you sit back and, like, look at the last, I mean, three years, it's gotta be like you've gone through a lifetime of change.
[00:35:58] Sam Altman: It's been weird.
[00:35:59] Adam Grant: Why are you doing this? I guess is one way to put it.
[00:36:02] Sam Altman: I am a techno-optimist and science nerd, and I think it is the coolest thing I could possibly imagine, and the best possible way I could imagine spending my work time, to get to be part of what I believe is the most interesting, coolest, important scientific revolution of our lifetimes. So, like, what a fucking privilege. Unbelievable. And then on the kind of, like, non-selfish side, I feel a sense of duty to scientific progress as the way that society progresses. And of all of the things that I have a personal capacity to contribute to, or maybe just of all of the things, this is the one that I believe will drive scientific progress, and thus standards of living, the quality of the human experience, whatever you wanna call it, forward the most.
And I feel a sense of duty, but not in a negative sense. Like a, a duty with a lot of gratitude for holding it, that I get to contribute in whatever way.
[00:37:00] Adam Grant: Sounds like responsibility.
[00:37:01] Sam Altman: Sure.
[00:37:02] Adam Grant: With a, with a child on the way, as a soon to be father, what kind of world are you hoping to see for the next generation?
[00:37:10] Sam Altman: Abundance was the first word that came to mind; prosperity was the second. But, you know, generally just a world where people can do more, like, be more fulfilled, live a better life, however we define that for each of ourselves, all the, all those things. Probably the same thing every other soon-to-be dad has ever wanted for his kid.
I've certainly never been so excited for anything. And I think it's also like no one should have a kid that doesn't want to have a kid. So I don't wanna use the word duty here, but society is dependent on some people having some kids.
[00:37:35] Adam Grant: At least for now.
[00:37:36] Sam Altman: At least for now.
[00:37:36] Adam Grant: I don't think I've, I've heard you express as strongly as you did today how much you're also a believer in humans, not just in technology. And I think in some ways that's a risky place to operate, like we've seen that with social media, but I think it's also, like, table stakes when it comes to building technology: you have to care about and believe in people.
[00:37:57] Sam Altman: I skew optimistic, even though I try to just be accurate.
But if there's a risk to being too optimistic about technology, like, whatever. If you're too optimistic about humans, that could be a danger for us: if we put these tools out and we're like, yeah, people will use it for way more good than bad, and we're just somehow really wrong about human nature, that would be a flaw with our strategy. But I don't believe that.
[00:38:15] Adam Grant: Well, fingers are crossed. Sam, thank you for taking the time to do this. I learned a lot and thoroughly enjoyed it.
[00:38:19] Sam Altman: Thanks for having me. This was fun.
[00:38:20] Adam Grant: Was it? You be the judge. My biggest takeaway from this conversation with Sam is that technological advances may be unstoppable, but so is human adaptation.
Machines can replace our skills, but they won't replace our value or our values. ReThinking is hosted by me, Adam Grant. The show is part of the TED Audio Collective, and this episode was produced and mixed by Cosmic Standard. Our producers are Hannah Kingsley-Ma and Aja Simpson. Our editor is Alejandra Salazar.
Our fact-checker is Paul Durbin. Original music by Hansdale Hsu and Allison Leyton-Brown. Our team includes Eliza Smith, Jacob Winik, Samaya Adams, Roxanne Hai Lash, Banban Cheng, Julia Dickerson, and Whitney Pennington Rodgers.
[00:39:14] Sam Altman: I get a surprising number of emails, like cold emails or something, where someone will, like, say, I confess, I wrote this with the help of ChatGPT. And if I reply, I try to always say, like, no need to ever do that again. If you ever email me again, I'll take the bullet points. So that's my one little contribution to the fight.
[00:39:28] Adam Grant: Wow. And then there's a little disclaimer at the bottom saying, this response was also written by ChatGPT.
[00:39:33] Sam Altman: Um, if I do that, I do disclose, but I, I don't, I usually just write my two bullet points back.