Aza Raskin on why technology – and democracy – are in an imagination crisis (Transcript)

ReThinking with Adam Grant
Aza Raskin on why technology – and democracy – are in an imagination crisis
May 14, 2024

[00:00:00] Aza Raskin:
Whatever it is that is the solution to humanity's problems, I'd argue it's probably not in our imagination, because if it was, we'd be doing it. So what we're looking for are things that are outside the sphere of human imagination.

[00:00:17] Adam Grant:
Hey everyone, it's Adam Grant. Welcome back to ReThinking, my podcast on the science of what makes us tick with the TED Audio Collective. I'm an organizational psychologist, and I'm taking you inside the minds of fascinating people to explore new thoughts and new ways of thinking.

My guest today is tech pioneer Aza Raskin. As co-founder of the Center for Humane Technology, he's a leading advocate for the responsible re-imagining of the digital world to prevent polarization and promote wellbeing. Aza’s work focuses on solving some of the biggest collective problems of our age, especially as our tech rapidly evolves.

[00:00:53] Aza Raskin:
AI is like the invention of the telescope, and when we invented the telescope, we learned that Earth was not the center. I've been thinking a lot about the implications of what happens when AI teaches us that humanity is not the center.

[00:01:11] Adam Grant:
If you don't know Aza by name, you know some of his creations. He designed the feature that makes doom scrolling possible, which he now regrets.

And he coined the phrase, “Freedom of speech is not freedom of reach.” Since then, he's expanded his scope by co-founding the Earth Species Project, where he is using tech to decipher animal communication. Between improving social media and talking to whales, we had a lot to discuss, and Aza challenged me to rethink my assumption that these two missions are as different as they might seem.

Hey, Aza.

[00:01:50] Aza Raskin:
Hey Adam.

[00:01:50] Adam Grant:
I'm excited for this. I feel like there is so much ground we could cover. I hardly know where to begin.

[00:01:56] Aza Raskin:
Yeah. It's only caring for, first, all humans, and then after that, all beings.

[00:02:02] Adam Grant:
So you grew up in tech and I understand we have you to blame for infinite scroll.

[00:02:08] Aza Raskin:
Yeah. Everyone's just gonna start pelting me with tomatoes.

I did invent infinite scroll, and I think it's really important to understand my motivations and then what went wrong, because it was a big lesson for me. When I invented infinite scroll, this was before social media had really taken off. This was way back in the days of MapQuest, you know, I don't know if you remember that.

[00:02:28] Adam Grant:
Of course I do.

[00:02:28] Aza Raskin:
Right? Like, you'd have to click, and then the map would move over, and then you'd have to reload the page. And the thought hit me: like, I'm a designer. Every time I ask the user to make a decision they don't care about, I've failed. When you get near the bottom of a page, that means you haven't found what you're looking for.

Just load some more stuff. And I was designing it for blog posts, I was thinking about search results, and honestly, it is a better interface. And then I went around to, like, Google and Twitter, and I was like, “Oh, you should adopt this interface.” And I was blind to the way that my invention, created with positive intent, was going to be picked up by the perverse incentives of what would later become social media, where the point wasn't to help you but essentially to hunt you, right? To extract something from you using an asymmetric knowledge about how the human mind works, which is that your brain doesn't wake up to ask, “Do I want to continue?” unless it gets something like a stopping cue. What does that mean? That means, generally, you don't ask, “Do I wanna stop drinking wine?” until you get to the bottom of the cup of wine. So, my invention got sort of sucked up by a machine and now wastes on the order of a hundred thousand human lifetimes per day. It's horrendous.

And this is what I think people miss all the time in the invention of technology, that it's not about the intent, good or bad of the inventor. When you invent a technology, you uncover a new class of responsibilities. We didn't need the right to be forgotten until the internet could remember us forever.

And then two, if that technology confers power, you're going to start a race for that power. And if there is some resource that we need, that we depend on, that can be exploited for that power, in this case attention and engagement in the attention economy, then that race will end in tragedy unless you can coordinate to protect it.

And so I was completely blind to that structure when I was creating infinite scroll and you can see the results. That thing we call doom scrolling would not exist without infinite scroll.

[00:04:29] Adam Grant:
So, I mean, obviously there's a tension between social media business models and what we think is the humane option here, but a lot of people hate doom scrolling.

Why have we not seen a company yet experiment with a limit on that? What would you do at this point? How would you think about solving this?

[00:04:49] Aza Raskin:
It's a great question. So the way we've often talked about the attention economy is, it's a business model that is fundamentally about getting reactions from the human nervous system.

You get people angry. You show them things that they cannot help but look at, so you addict them. If the incentive is to get reactions and make the human nervous system more reactive, it's sort of obvious that we're gonna get polarization, narcissism, more outrage, eventually democratic backsliding. That's all a predictable outcome of just making the human nervous system more reactive and getting reactions from it.

And that's why we were able to call it in 2013, building all the way up to The Social Dilemma in 2020. And so if we're going to think about solving it, it's not a thing that an individual company can do. We get into that paranoid logic: if we don't do it, we'll lose to the person who does. So you have to do something to the entire space as a whole so everyone can start competing for the thing that is healthy and humane.

[00:05:49] Adam Grant:
Something I've talked with multiple social media companies about over the last few years is just running the A/B test: let people preset how many hours a day, or ideally minutes a day, they actually wanna be scrolling, and then it just flags when their time is up. I think, to your point, that would reduce attention and engagement perhaps, but it would also make people less angry at the platform. And I wonder if there's a net benefit there, and at least I would wanna test that.

[00:06:15] Aza Raskin:
Right.

[00:06:16] Adam Grant:
I think you probably have a better idea, but tell me what's wrong with mine and then where you would go.

[00:06:20] Aza Raskin:
There's a different version of yourself before you've started eating french fries and a different version of yourself after you've started eating them. Before you've eaten one single french fry, you're probably like, “I don't know if I want to have french fries.” After you've eaten one, you're in a hot state; you're just gonna keep eating them until they're gone. And that's sort of the thing I think you're pointing at.

[00:06:37] Adam Grant:
In psychology, we would talk about this as a fundamental want-should conflict. You know, you should stop scrolling. But in that moment you wanna keep doing it and it's hard to override the temptation. And the good news about your should self is that although it's weaker in the moment, it's smarter in the long term.

And so if we can activate the should self in advance and pre-commit, as you're saying, to that target, the probability should go up that you'd be willing to stick to that commitment once you've made it. It's not a perfect solution, but what I like about it as an example is that it's something one company could try that could be differentiating in a positive way and doesn't require congressional intervention.

[00:07:15] Aza Raskin:
Mm-hmm.

[00:07:16] Adam Grant:
Or you know, all of the companies to form a coalition.

[00:07:18] Aza Raskin:
Right.

[00:07:19] Adam Grant:
So tell me what I'm missing there and what your more systemic approach would look like.

[00:07:23] Aza Raskin:
I used to be addicted to both Twitter and Reddit, and I'm like, “How do I get myself off?” So, as a technology maker, I knew what every designer knows, which is that retention is directly correlated with speed.

That is, the faster your website or your app loads, the more people continue to use it. Amazon famously found that for every hundred milliseconds their page loads slower, they lose 1% of revenue.

[00:07:52] Adam Grant:
Wow.

[00:07:53] Aza Raskin:
So using this insight, I actually wrote myself a little tool that said: the longer I scrolled on Twitter or the more I used Reddit, the longer the random delay I would get as things loaded.

Sometimes it'd be sort of fast, sometimes it would get slower. The longer I used it, the slower it would get. And what I discovered is that this let my brain catch up with my impulse, and I would just get bored, like, “Do I really wanna be doing this?” And it wasn't a lot, it was like 250 milliseconds. It's human reaction time.

It gave me just enough time to overcome the dopamine addiction loop and within a couple of days, honestly, my addiction was broken 'cause I'm just like, oh no, I actually don't wanna be doing this.

[00:08:31] Adam Grant:
Wow. Okay. So this is an ingenious invention. Uh, how do I download it? And are you gonna make it widely available?

[00:08:38] Aza Raskin:
For everyone listening to this podcast, this is not a super hard thing to make. I did it with my own personal little VPN and proxy. If anyone wants to come help build this thing, please. It's not hard, and I think there's a big opportunity. I just personally don't have the time. I just made it for myself.
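
[Editor's note: Aza doesn't describe his tool beyond "my own personal little VPN and proxy," so the following is only a rough sketch of the delay logic he outlines, with hypothetical constants and function names; anyone taking up the plea would still need to wire it into an actual proxy or browser extension.]

```python
import random

# Rough sketch of the "the longer you scroll, the slower it loads" idea.
# Only the delay schedule is modeled here; all numbers are hypothetical.

BASE_DELAY_MS = 250        # roughly human reaction time, per the episode
GROWTH_MS_PER_MIN = 50     # assumed: add 50 ms of delay per minute of use
MAX_DELAY_MS = 5000        # cap so pages never stall completely

usage_seconds = {}         # cumulative time spent on each site this session


def record_usage(site: str, seconds: float) -> None:
    """Accumulate how long we've been on a site."""
    usage_seconds[site] = usage_seconds.get(site, 0.0) + seconds


def delay_for(site: str) -> float:
    """Return the artificial delay (in seconds) to inject before a page loads."""
    minutes_used = usage_seconds.get(site, 0.0) / 60.0
    delay_ms = BASE_DELAY_MS + GROWTH_MS_PER_MIN * minutes_used
    # Random jitter: sometimes sort of fast, sometimes slower, as described above.
    delay_ms *= random.uniform(0.5, 1.5)
    return min(delay_ms, MAX_DELAY_MS) / 1000.0


if __name__ == "__main__":
    record_usage("twitter.com", 20 * 60)  # pretend we've scrolled for 20 minutes
    print(f"inject ~{delay_for('twitter.com'):.2f}s of delay before the next load")
```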

So let's put out that plea. Let's then jump up to the global solution. We need to hit these companies essentially in their business model. We have to hit them there: build sort of a scorecard for the effects of social media on teen mental health, depression and suicide, on the health of public discourse, on the backsliding of democracies.

You know, you make the big list of all the different kinds of harms, and then you have some way of evaluating how well each company is doing, and then you institute some kind of latency sanction. Like, alright, looks like Facebook's doing really badly on teen mental health. We're going to, in a democratic way, slow Facebook down after the first n minutes. It'll never get more than, I dunno, 500 milliseconds of delay.

It's not like you're stopping it. There's no censorship. You're just saying, “Hey, you're having negative externalities, you're affecting the whole.” And you can imagine how quickly these companies are gonna innovate their way to solving those metrics, quarter after quarter.

The first thing in people's minds, of course, I'm sure, is that that seems really scary. Like, who would wanna give the government the ability to slow down websites? And so this sort of speaks to the next thing, which is that, especially in the era of exponentially powerful technology as we move into AI, we're going to need forms of trustworthy governance that are hard to capture.

We're going to need the equivalent of citizens' juries or other forms of institutions that are capture- and corruption-resilient. And this, I think, would be an excellent place to start prototyping what that future vision of resilient institutions would look like.

[00:10:41] Adam Grant:
I actually think that would be really compelling.

[00:10:40] Aza Raskin:
Note that this is a solution that never touches content. It never touches content moderation. It never touches censorship. It's a solution born out of seeing the world as incentives leading to outcomes, and trying to shift things at the incentive level so that you can unleash the amazing amount of creativity and ingenuity inside of these companies that are just doing the thing that the incentives tell them to do.

[00:11:07] Adam Grant:
For uninitiated listeners, can you explain what a bridging algorithm does?

[00:11:11] Aza Raskin:
If you've used Twitter, you've seen Community Notes. That's actually a bridging algorithm in practice. Essentially, what bridging algorithms do is look for consensus across groups that normally disagree, and once they find statements that people across multiple different divides agree on, they raise those up.

They sort of promote those. One of my favorite examples for thinking about this is something called the perception gap. The perception gap is how differently I perceive you and your beliefs compared to your actual beliefs. When we are fighting, we are often not actually fighting with the other side. We're fighting with a mirage of the other side, sort of a caricature. And now we end up in a really interesting place, because we could start to measure, at scale, which kinds of content increase the perception gap, that is, fill you with false beliefs about the other side's beliefs, and which kinds of content decrease it and help you see more accurately.

And you could then imagine an algorithm that helps the content that lets us see each other correctly go viral. It doesn't make all disagreements go away, but it says, at the very least, we should be able to accurately see what all the other sides are saying.

And because we are actually closer than we believe, it's sort of like bringing the two sides of a wound closer together so it can start to heal.
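
[Editor's note: as a rough illustration of "consensus across groups that normally disagree," here is a toy bridging score. It is not the actual Community Notes algorithm, which uses a more sophisticated rating model; the idea sketched here is simply that a post is only boosted in proportion to its approval in the group that likes it least, so one-sided content can never rank highly. All names and data are hypothetical.]

```python
from collections import defaultdict

# Hypothetical votes: (user_id, post_id, approved?)
votes = [
    ("a1", "post1", True), ("a2", "post1", True), ("b1", "post1", True),
    ("a1", "post2", True), ("a2", "post2", True), ("b1", "post2", False),
    ("b2", "post2", False),
]

# Hypothetical group labels for users, e.g. two sides of a political divide.
group_of = {"a1": "A", "a2": "A", "b1": "B", "b2": "B"}


def bridging_scores(votes, group_of):
    """For each post, return the minimum per-group approval rate."""
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # post -> group -> [yes, total]
    for user, post, approved in votes:
        counts = tallies[post][group_of[user]]
        counts[0] += int(approved)
        counts[1] += 1
    scores = {}
    for post, groups in tallies.items():
        rates = [yes / total for yes, total in groups.values()]
        # A post must be rated by more than one group before it can bridge at all.
        scores[post] = min(rates) if len(rates) > 1 else 0.0
    return scores


print(bridging_scores(votes, group_of))
# post1 is approved by both groups, so it scores high; post2 only by group A, so it scores 0.
```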

[00:12:48] Adam Grant:
I think you alluded to your skepticism that a social media platform would like this idea, because outrage is, you know, more activating.

[00:12:57] Aza Raskin:
Yeah.

[00:12:57] Adam Grant:
In some ways than connection.

[00:12:59] Aza Raskin:
That's right.

[00:12:59] Adam Grant:
I'm not entirely convinced. I just wonder if we haven't tried the right approaches to bridging yet.

[00:13:05] Aza Raskin:
Mm-hmm.

[00:13:05] Adam Grant:
So, for example, there was some evidence that was published a few years ago showing that people would rather have a conversation with a stranger who shared their political views than a friend who didn't.

And I think people recognize that as a massive problem in their lives. If we think about family members and friends and close colleagues, uh, who are not speaking to each other or having a hard time getting on the same page, um, that's an audience for a bridging algorithm.

[00:13:30] Aza Raskin:
Facebook discovered a very simple thing that they could do for fighting hate speech, disinformation, misinformation, like all the worst stuff.

What was that one simple thing that they could do? It was they could remove the reshare button after two share hops. That is like, I could share something, somebody else could click the reshare button. Somebody else could click the reshare button, but after that, the reshare button would disappear. Now, if you're really motivated, you could copy the text and paste it again.

So again, no censorship. It's just introducing a little more friction, but it comes at the cost of engagement. That's why I think we're not gonna see much traction with like bridging algorithms until those fundamental incentives are fixed.
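
[Editor's note: the two-hop reshare rule Aza describes is simple enough to sketch. This is a hypothetical illustration of the mechanism, not Facebook's actual implementation.]

```python
from dataclasses import dataclass
from typing import Optional

MAX_RESHARE_HOPS = 2  # after two share hops, the one-click reshare disappears


@dataclass
class Post:
    author: str
    text: str
    shared_from: Optional["Post"] = None  # None for an original post

    @property
    def share_depth(self) -> int:
        """How many reshare hops separate this post from the original."""
        return 0 if self.shared_from is None else self.shared_from.share_depth + 1


def show_reshare_button(post: Post) -> bool:
    """Friction, not censorship: hide the reshare button past the hop limit."""
    return post.share_depth < MAX_RESHARE_HOPS


original = Post("alice", "interesting claim")
hop1 = Post("bob", original.text, shared_from=original)
hop2 = Post("carol", original.text, shared_from=hop1)

print(show_reshare_button(original))  # True  (no hops yet)
print(show_reshare_button(hop1))      # True  (one hop)
print(show_reshare_button(hop2))      # False (two hops; copy and paste still possible)
```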

[00:14:15] Adam Grant:
Did you co-coin the phrase, “freedom of speech is not freedom of reach”?

[00:14:19] Aza Raskin:
I coined it. And then Renée DiResta, a brilliant researcher, now at the Internet Observatory, wrote an article in WIRED that popularized it.

[00:14:29] Adam Grant:
So I love that phrase. There's a certain level of reach that concerns me when the content is consequential. So, you know, thinking about health information and misinformation or disinformation during COVID, or thinking about posts that are safety-relevant in an area where there might be danger or a threat of violence.

I've often wondered when health and safety information reaches a certain level of virality, why isn't it flagged to not be reshared unless it's fact-checked? And why isn't there a process for that? Is something like that viable?

[00:15:06] Aza Raskin:
Uh, other countries do this, so it is viable.

[00:15:09] Adam Grant:
I, I'm thinking of Sinan Aral’s work showing that lies spread faster and farther than the truth.

[00:15:14] Aza Raskin:
Falsehoods spread six times faster than truths. This is really important, because one of the things we want to sidestep is the question, “Is this piece of content true or false?” Fact-checking is hard, and the thing that's next to fact checking is frame checking, and now it gets very hard to adjudicate. So we should be looking not at specific pieces of content but at the context surrounding them: how fast they're spreading, the way they're spreading, the incentives for them to spread, so that we can move out of the morass of free speech.

'Cause as soon as we head down that pathway and solutions that require a debate about free speech, we lose.

[00:15:55] Adam Grant:
I'm realizing it's not so simple as just fact checking.

[00:15:59] Aza Raskin:
Yes.

[00:15:59] Adam Grant:
Frame checking is almost impossible.

[00:16:02] Aza Raskin:
Yes. That, that is exactly right. And that is why that whole program of let's get more fact checkers in is just going down the wrong solution branch. And so we need to be thinking about it at a more systems level, incentive level, context level than content.

[00:16:19] Adam Grant:
Uh, Congress realizes they're a bunch of Luddites, so they put you in charge of a committee to make a series of recommendations for what ought to be done societally.

[00:16:30] Aza Raskin:
Yeah.

[00:16:31] Adam Grant:
What are you proposing?

[00:16:33] Aza Raskin:
We are at the cusp of the next era of technology, of AI. Well, which way is it gonna go? Are we gonna get like the incredible promise of AI or are we gonna get the terrifying peril of AI? And our point was the same as it's always been, which is if you want to understand where it's gonna go, look to the incentives.

That's how we were able to predict social media. So what are the incentives for AI? It's to grow your company as fast as possible, to grow your capabilities, to get them into the market as quickly as possible for market dominance, so you can sort of wash, rinse, repeat. And the shortcuts you're going to take are always going to be shortcuts around safety, and we are going to recapitulate all of the problems of social media, just orders of magnitude bigger.

And the way we like to say it is that social media was actually humanity's first contact with AI, and the AI in social media is the thing that sits behind the screen choosing which posts and which videos hit your eyeballs and your eardrums.

[00:17:41] Adam Grant:
It's the algorithms.

[00:17:42] Aza Raskin:
It's the algorithm, and it was a very simple, unsophisticated version of AI, and its small misalignment, optimizing for the wrong thing, sort of broke our world.

[00:17:53] Adam Grant:
So tell us, what might you do?

[00:17:56] Aza Raskin:
If I could wave a magic wand and say, alright, for every one of these major AI companies, there needs to be some way for them to give, I don't know, 25%, 40% of their compute to forecasting all of the harms that they can possibly foresee, using the new sort of cognitive labor that AI affords, so that there's now some kind of appropriate liability for not doing enough to constrain those foreseeable harms.

And then you could imagine we're gonna need some kind of graduated sanctions, and the sanction comes in the form of, I dunno, a compute tax or something like that. This is very much a sketch, 'cause this is hard, but I'm just trying to give a flavor of how we might start to think about it in a way that's at the incentive layer, not at the “what specific thing can one company do or not do” layer.

[00:18:56] Adam Grant:
I think what's tricky about it then is, okay, you're gonna pour all those compute resources into anticipating risks.

[00:19:03] Aza Raskin:
Yeah.

[00:19:04] Adam Grant:
And then you're not just gonna rely on the AI to decide which of those risks are high versus low probability or high versus low severity. We need to bring in human judgment.

And then what do we do when we have a low-probability, high-severity threat?

[00:19:21] Aza Raskin:
This is where it gets very, very challenging, because we as human beings are not very good at, like, emotionally attuning to tail risks, especially when there's something on the other side of the equation, right? 'Cause AI could enable terrible bioweapons and race-based viruses, a whole bunch of terrible things, and you can imagine AI just increasing all of those tail risks.

But on the other side, we get incredible benefits, and the benefits are concrete, like cancer drugs; they happen for you immediately, versus these risks, which are diffuse, probabilistic, amorphous. And our brains just can't deal with that trade very well at all.

I'm actually very curious what you would say about it. How do you weigh those kinds of risks?

[00:20:07] Adam Grant:
It's a really good question. It's a hard one. Frankly, I don't think we've cracked it yet. I want to just delegate the problem to Phil Tetlock and his team of superforecasters and say, okay, we have individuals who have demonstrated a consistent ability to do this, so let's treat them as one of your juries.

They know what they know and they know what they don't know. We know a lot about how to train people to be better forecasters. Um, a second is, I think we can make some of this probabilistic information easier to digest. I think of the work of Gerd Gigerenzer and colleagues, for example, who've shown that natural frequencies are easier to process than statistics.

And so instead of saying that something is, you know, 0.1% odds, say this is one in a thousand, and all of a sudden people are more likely to take it seriously. Like, wow, that could happen.

[00:20:57] Aza Raskin:
Mm-hmm. You have to make them visceral. You have to feel it. What we need is a process, and this is where I think it gets really exciting because the United States was founded on the principle that we could build a more trustworthy form of governance.

No one really, I think, deeply trusts the institutions that we have now, and if we just handed the power to regulate AI to the government, it would probably mess it up in some way. There's probably some kind of deep centralization of power that would happen, and that's super scary in the era of essentially forever dystopias. There is no such thing as privacy in, like, the AI world; everything that can be decoded will be decoded.

Governance needs to scale with AI, 'cause otherwise, as AI increases its intelligence, you're driving a car whose engine is going faster and faster but whose steering wheel isn't keeping up, and that thing is going to break. And we're gonna need a way of having human collective intelligence scale with AI, otherwise AI will overpower human collective intelligence, which is another way of saying we lose control. And obviously this is a complex, hard topic, and there are things like mini-publics. And Audrey Tang's work from Taiwan, I think, is the best living example of how you can put these values into practice, that it isn't just sort of a theoretical framework. You know, she, with a whole community, has built the tools that do these kinds of bridging algorithms we're talking about, so that citizens can set the agenda that government has to listen to.

They did this for, say, how should Uber and other ride-sharing apps integrate into society? And they gave everyone in society the ability to contribute what their values are, what they care about. And it's a lot of these incredible little design philosophies that I find super fascinating.

Like, in her system, there isn't a reply button. You can only say your value. And if you disagree, you don't disagree; you just have to state your value in the positive. Um, and now you have this big map of everyone's values, so that you can then thumbs-up something and be like, oh, yes, I agree with this idea.

They can use a bridging algorithm to say, well, we know what everyone's positive value statements are, so let's find the policy or the agenda that we really care about that sits across those divisions in our society. So we're finding the things that knit and heal versus the things that divide.

[00:23:39] Adam Grant:
Audrey Tang was a software programmer.

[00:23:41] Aza Raskin:
Yeah.

[00:23:41] Adam Grant:
And then they created a new position for her, and now she is Taiwan's first ever Minister of Digital Affairs.

[00:23:47] Aza Raskin:
Yes.

[00:23:48] Adam Grant:
Like what? Why are we not doing that?

[00:23:50] Aza Raskin:
I think, honestly, it's an imagination gap. We cannot imagine a system different than we have now. It turns out there are a huge number of really brilliant people working on these kinds of things.

So, if the US government were to put, you know, I dunno, let's just say $10 billion per year into upgrading democracy itself, not just, like, digital democracy, adding more forms online, but let's do the most American thing, which is to innovate our way to a new form of what democracy looks like. I wanna vote not left, not right, but upgrade.

[00:24:30] Adam Grant:
So if I were to draw a Venn diagram of a techie and a hippie, do you live in the middle?

[00:24:36] Aza Raskin:
I've spent a lot of time in nature, and there is something profound about feeling the smallness of your breath against the largeness of the universe. I don't know if I'd say I'm a hippie, and I don't know if I'd say I'm a techie.

[00:24:49] Adam Grant:
Well, that actually is a great segue to where I wanted to go. I was stunned when you said last summer that you thought it might be possible for us to understand whales and maybe even talk to them one day. Why do you wanna communicate with whales?

[00:25:04] Aza Raskin:
Right. We are trying to talk to whales, and already, that's not why we're trying to do it. We do not change when we speak. We change when we listen. The goal for Earth Species Project is to learn how to listen to whales and orangutans and parrots, the other non-human cultures of Earth, some of which have been communicating for 34 million years, passing down languages and dialects and cultures. Because whatever it is that is the solution to humanity's problems, I'd argue it's probably not in our imagination, because if it was, we'd be doing it.

So what we're looking for are things that are outside the sphere of human imagination. And just to preempt what I think your listeners' question is, we're talking about animal languages, does such a thing even exist? I just wanna give a couple quick examples that I think will help illustrate this.

Many animals have names that they will call each other by, sometimes even in the third person. Parrot parents will spend the first couple of weeks of their chicks' lives leaning over and whispering a unique name into each of their individual children's ears. And the children will sort of babble back until they can get it, and they will use that unique name for the rest of their lives.

[00:26:26] Adam Grant:
Mind blown.

[00:26:27] Aza Raskin:
Um, then just to give another example, there's a 1994 University of Hawaii study where they were teaching dolphins two gestures, and the first gesture was, do something you've never done before. And it takes a lot of patience and a lot of fish to communicate that idea to a dolphin, but they will get it.

[00:26:45] Adam Grant:
Were you a kid who was obsessed with Aquaman? What's the origin story of this?

[00:26:51] Aza Raskin:
I was a kid that was obsessed with everything. I must have been a very annoying kid. This idea really came, actually, from hearing a story on NPR about this incredible animal, the gelada monkey. The researcher said they had one of the largest vocabularies of any primate, humans excepted.

And when you listen to them, they sound like women and children babbling. And they sort of do turn taking and it's this complex vocal thing. And she's like, we don't know what they're saying, but I swear they're talking about me behind my back. They were out there with like a hand recorder, hand transcribing, trying to figure out what they were saying.

And the thought sort of struck me: why aren't we using machine learning to translate? And that changed in 2017, when suddenly AI developed the ability to translate between human languages without the need for any Rosetta Stone or any examples. And that's the moment it was time to start Earth Species Project and actually go out to the field and learn from biologists.

And the why really grew with it. When I look out at the structure of humanity's largest problems, I think there's a connective thread between all of them. Whether it's the opioid epidemic or the loneliness epidemic or climate change or inequality, it always takes the form of a narrow optimization at the expense of the whole.

Some part of the system is optimizing, whether it's for GDP at the expense of the climate or for grabbing people's attention at the expense of mental health and backsliding democracies; it's always a narrow optimization that breaks the whole. And Earth Species is fundamentally about reconnection.

A narrow optimization at the expense of the whole: a different way of saying that is disconnection, from ourselves, from each other, from the natural world, a disconnection of our systems from their large-scale effects. And when you think about many of the Just So stories, indigenous myths, they almost always start out with human beings talking with nature, talking with animals, and that moment of disconnection is symbolized by the moment we can no longer communicate with nature.

This isn't just a question of what we must do. Fundamentally, this is a question of who we must be: to change our identity, to change the stories we tell ourselves in order to live, to change our myths, to reconnect us at the deepest level.

That's the hope of what Earth Species can help bring about. And just to name, in self-awareness, that no one thing can do this. There is no silver bullet, but maybe there is silver buckshot.

[00:29:42] Adam Grant:
Let me suggest now we go to lightning round. What is the worst advice you've ever gotten?

[00:29:47] Aza Raskin:
That feeling in your body that's telling you, be careful or something's up. Don't listen to that. Push through.

[00:29:55] Adam Grant:
Oof. If you could talk to any animal species, which one would you choose?

[00:30:01] Aza Raskin:
If you were to talk to everyone on the Earth Species team, each person has a different animal they're most excited about, but for me it's belugas, because belugas, if you actually listen to them, sound like nothing you're expecting.

They sound like an alien modem. Um, the cultures of belugas and dolphins and whales go back 34 million years. For something to have survived 34 million years of cultural evolution, there has to be some deep wisdom in there, and I am so curious to get the very first glimpses of what that might be.

[00:30:35] Adam Grant:
You're in conversation with a beluga whale. If you could ask one question, what would it be?

[00:30:40] Aza Raskin:
I'd wanna know like what does it feel like to be them?

[00:30:45] Adam Grant:
What is the question you have for me?

[00:30:46] Aza Raskin:
Say you prove, like, okay, animals think, they have language, there's an interiority. What, for you, changes? What do you think the implications are?

[00:30:56] Adam Grant:
I guess my hope is that we start to realize that we need to do a much better job, both avoiding harm to and taking care of species that aren't human.

[00:31:07] Aza Raskin:
Mm-hmm.

[00:31:08] Adam Grant:
And that this is a watershed moment. The skeptical side of me says we've tried this with a lot of human cultures and failed pretty much every time.

[00:31:19] Aza Raskin:
Yeah.

[00:31:20] Adam Grant:
It's so easy to dehumanize people that we already know are sentient and entire groups that we already know feel extreme pain.

[00:31:28] Aza Raskin:
Yeah.

[00:31:29] Adam Grant:
Why would it be any different with animals?

[00:31:31] Aza Raskin:
Whenever I think about that, I'm like, well, it is true that even though we know other humans speak, we still do terrible things to them.

And imagine how much worse it would be if they couldn't speak at all.

[00:31:43] Adam Grant:
You mentioned earlier, whales, orangutans, parrots. How did you go about deciding which animals?

[00:31:51] Aza Raskin:
A lot of which animals we decide to work with is driven by the deep insights of the biologists who have been out there in the field. So, for instance, why start thinking about orangutans?

It's because one of our partners, um, Adriano Lameira was able to show in the last couple of years that orangutans have a kind of past tense. They can refer to events that happened at least up to 20 minutes ago. It's probably longer, but that's as far as he's been able to show so far. And when you think about language, two of the big hallmarks of language are being able to talk about things that are not here and not now.

Parrots, as you were saying, have names they call each other by, and honestly, I think even just a campaign that let the world know that animals have names like that would already start to shift human culture and how we relate.

[00:32:40] Adam Grant:
I think, probably for a long time, I assumed that cognitive capabilities tracked with vocal range.

[00:32:48] Aza Raskin:
Mm mm-hmm.

[00:32:48] Adam Grant:
But we all know that's not true. Like parrots, they can say incredible things. I don't think their thinking capacity is anywhere near what a dolphin's is, for example. How do you weigh those two sets of factors?

[00:33:02] Aza Raskin:
I'll push back a little bit. There was a Nature publication, maybe three, four years ago now, where they were looking at ravens and crows and their cognitive capabilities compared to, say, the great apes, and they're on par.

This is the general thing we find, which is as human beings, our ability to understand is limited by our ability to perceive, and generally speaking, we just haven't been perceiving enough.

[00:33:29] Adam Grant:
It seems like this is long overdue, because I've looked for years at these supposed intelligence rankings of animals.

[00:33:37] Aza Raskin:
Mm-hmm.

[00:33:37] Adam Grant:
And said, well, this is just a function of the tasks that we've given them.

[00:33:40] Aza Raskin:
Yes, exactly.

[00:33:41] Adam Grant:
And the way that we know how to score them. And it's really easy to discover that a pigeon is dumb if you don't give it a navigation task.

[00:33:49] Aza Raskin:
Yes.

[00:33:50] Adam Grant:
And then all of a sudden you do and you realize, wow, it's a lot smarter than us when it comes to finding its way around the world. And I wonder how many species we've underestimated that way.

[00:33:59] Aza Raskin:
One of my favorite examples of this comes from the mirror test. It's when you take an animal and paint a dot on them where they can't see it, and they're unaware of it; they look in a mirror and then they start trying to get that dot off of them or investigate it.

It's a test of self-awareness. They have to look into a mirror and say, “Oh, that image in the mirror, that is me.” So that's a big step to take. It means there's an interiority and a sense of self. It was thought for the longest time that elephants couldn't pass the mirror test, but then it turned out that it's just because scientists were using small mirrors.

[00:34:39] Adam Grant:
No.

[00:34:40] Aza Raskin:
Right? It's just, like, if you measure the thing wrong. All it needed was a bigger mirror, and then suddenly what looked unintelligent becomes very intelligent.

[00:34:49] Adam Grant:
You were really careful to stress that we should just listen.

[00:34:51] Aza Raskin:
Mm-hmm.

[00:34:52] Adam Grant:
Or that listening is the primary goal here.

[00:34:54] Aza Raskin:
It's at the center.

[00:34:55] Adam Grant:
Yes.

[00:34:55] Aza Raskin:
It's the primary goal. Exactly.

[00:34:56] Adam Grant:
As soon as we're capable of deciphering and understanding, someone is gonna want to communicate. What's your answer to the question of whether we should open Pandora's box? Because I feel like the standard Silicon Valley response to this is not satisfying. It's, well, somebody else is gonna do it if we don't, and we're more ethical than they are, so we need to do it first.

[00:35:16] Aza Raskin:
Yes.

[00:34:16] Adam Grant:
Which to me is just dripping with narcissism and arrogance.

[00:35:19] Aza Raskin:
Yeah. It's like, well, I wanna do it, so I'm gonna find the belief that lets me do the thing I want to do.

[00:35:26] Adam Grant:
Exactly. So why do you wanna open the box despite that risk?

[00:35:31] Aza Raskin:
Mmm. We're gonna uncover a whole bunch of new responsibilities about what it means to be able to communicate with the other non-human cultures of Earth.

And of course, if it confers any kind of power, it's gonna start a race, and that race will end in tragedy. So I think to be a sort of humane technologist or responsible technologist, which really should just be what it means to be a technologist, means to pre-think through all the ways you are going to start some kind of race.

What are the ways that your technology is going to be abused or cause harm? We might create, like, a whale QAnon or something. We don't know. Um, so we need to be really careful about going out and starting to just speak. In the same way, you could imagine factory farms using it. Um, you could imagine poachers using it to attract animals.

You could imagine, uh, ecotourism using it to attract animals. So there is no such thing as a technology that doesn't have externalities and doesn't have bad-actor abuses, so what do we do? That means we need to race ahead and start thinking about what the international norms and treaties and laws and other things are that can bind those races.

I think we're going to need whatever the equivalent of a Geneva Convention for cross-species communication is. And to give another example, when we started Earth Species, we were doing everything open source. We're like, it's good to get these models out to as many of the scientists as possible, because as we build the tools to decode animal communication and translate animal language, we're also building the tools that, it turns out, all biologists need just to do their work and their conservation work.

And we've realized, actually, that was a naive value, that we can't just open source everything. We're gonna have to go through a gated release. So as we build these models, we're just not gonna ship them to everyone. There's gonna have to be some kind of application process, and then we're gonna have to start thinking through, and this is not just for us but for the wider space:

What is the right way so that we as one entity can't sort of abuse our centralized power? How do we find these processes that we've been talking about that make it a trustworthy process for who gets access to the models?

[00:37:47] Adam Grant:
How do you think about the problem of privacy violation?

[00:37:52] Aza Raskin:
In general, or for animals?

[00:37:54] Adam Grant:
For animals. You know, I'm thinking that you're trying not to disrupt or disturb whales by listening in.

[00:37:59] Aza Raskin:
Yeah.

[00:38:00] Adam Grant:
But they also didn't give you permission to listen in.

[00:38:02] Aza Raskin:
In the process, if we learn how to ask whether we're violating consent, then we can actually just ask and find out.

[00:38:09] Adam Grant:
If we think about whales, for example, what year do you think it'll be when we can understand everything that they're saying, or, maybe not everything, but when we can decipher, you know, a significant chunk of their communication?

[00:38:22] Aza Raskin:
Of course, we're talking about science here, so I just wanna caveat any prediction of where we're going to be. But we are, this year, heading into our first non-wild, two-way, AI-to-animal communication experiment.

And we're seeing, can we essentially pass the Turing test for a specific kind of songbird, a zebra finch? Can you swap one zebra finch out for the AI zebra finch and see if the actual animal can tell the difference? What sound does an elephant make, or two elephants make, when they come together? We know that means something about greeting, affiliation, but maybe it means “I miss you,” or maybe it means “I'm glad to see you.”

Um, maybe it means their name. But you can see, okay, what happens when one elephant is running really quick and flapping its ears, and we know that that has emotional conjugation to it. And so you can see that as we start to pass the Turing test, we get towards decoding pretty quickly.

[00:39:13] Adam Grant:
This is a real frame shift.

We started out, and I was under the impression that the goal was to learn something that can benefit humanity.

[00:39:21] Aza Raskin:
Hmm.

[00:39:21] Adam Grant:
And also, it would be really nice if it got us to be kinder to animals.

[00:39:27] Aza Raskin:
Yeah.

[00:39:27] Adam Grant:
It did not occur to me that you're in a position to actually help the very animals that you're communicating with.

[00:39:33] Aza Raskin:
Mm-hmm.

[00:39:33] Adam Grant:
So the idea, for example, that you could put a warning device on every major ship that would signal to any underwater creature to get outta the way.

[00:39:41] Aza Raskin:
Mm-hmm.

[00:39:42] Adam Grant:
That could be very meaningful. You could potentially do the same thing in a rainforest with birds, right?

[00:39:47] Aza Raskin:
Mm-hmm.

[00:39:48] Adam Grant:
Is one of your aspirations to be able to use some of what you learn to actually save species from extinction?

[00:39:55] Aza Raskin:
Yes, absolutely. And I just want to paint a picture in everyone's head of what a translation might look like. Because are we just talking about, like, a Google Translate where you say whatever you want and it comes out? It probably won't look like that. I think there are parts of the experience that we share with animals.

We know that whales will carry their dead children for, like, up to three weeks, like pilot whales do this, and it looks like grief is a shared experience. But then there are huge portions of their experience that we might never be able to directly translate. You know, sperm whales spend 80% of their life a kilometer deep in complete darkness, seeing in 3D sound; that's not like anything in the human experience.

And so what might those translations be? I think those translations are likely to be much more poetic. It might be like a snatch of music with a specific kind of color, some kind of multimodal translation. We won't know what it means exactly, but we will get a sense over time, and maybe it'll be our children, who grow up immersed in these sort of odd translations from other beings and other cultures, who are like, oh, I get it. I have a sense for what that thing is.

[00:41:08] Adam Grant:
Things to look forward to and brace ourselves for.

[00:41:11] Aza Raskin:
Yeah, exactly. Awesome.

[00:41:13] Adam Grant:
Thanks Aza.

[00:41:13] Aza Raskin:
Well thank you so much Adam. This is super fun.

[00:41:15] Adam Grant:
To be continued.

[00:41:16] Aza Raskin:
Agreed.

[00:41:21] Adam Grant:
As we're waiting for digital platforms to evolve, I have one thought about a small step we can each take to reduce polarization and misinformation. People often say, “I'm entitled to my opinion.” I wanna rethink that. Yes, you're entitled to your opinion in your head. But if you decide to share that opinion, it's your responsibility to change your mind when you come across better logic or better evidence.

ReThinking is hosted by me, Adam Grant. This show is part of the TED Audio Collective, and this episode was produced and mixed by Cosmic Standard. Our producers are Hannah Kingsley-Ma and Aja Simpson. Our editor is Alejandra Salazar. Our fact checker is Paul Durbin. Original music by Hansdale Hsu and Allison Leyton-Brown.

Our team includes Eliza Smith, Jacob Winik, Samiah Adams, Michelle Quint, Banban Cheng, Julia Dickerson, and Whitney Pennington Rodgers.

[00:42:22] Aza Raskin:
Humpback whales: their song goes viral. And for whatever reason, Australian humpbacks are like the K-pop singers, and their songs will spread, and we don't know why, over the entire world, within a couple of seasons sometimes. And then everyone is singing, like, the Australian pop songs.