Can AI read your mind? The battle for your brain w/ Nita Farahany (Transcript)

The TED AI Show
Can AI read your mind? The battle for your brain w/ Nita Farahany
September 9, 2024

[00:00:00] Bilawal Sidhu: 

Scene one: interior, an office space, fluorescent glare. The year is 2035, and neurotechnology is the new normal. An employee sits at their desk. They're wearing earbuds that their workplace has issued them, but these earbuds are doing much more than playing music, because they have brain sensors embedded inside them.


[00:00:25] Nita Farahany: 

It's also tracking their stress levels while they're interacting with their screen and typing a memo. And then they're sitting back relaxing and they're reviewing their brain data over the past couple of months, and they notice that there's something unusual going on when they're sleeping. And so they pull up a new email, type it through their brain sensing earbuds, and they send off a quick message to their doctor and say like, “Hey, notice that there's something unusual going on here. Could you take a look and let me know what you think?” 


[00:00:53] Bilawal Sidhu: 

There are no physical screens or keyboards here, only desks where the workers sit and stare straight ahead, clicking around office software with their minds. Our worker is sitting with her arms folded at a desk. As she waits for her doctor to respond, her thoughts wander.


[00:01:10] Nita Farahany: 

She starts to fantasize about one of her colleagues and then suddenly starts to worry, realizing that her employer has access to all of her brain data. She notices that a little message pops up on her screen warning about interoffice romances, and, you know, she doesn't wanna get in trouble in that context. She's relieved when, later in the day, she gets an email from her boss that tells her that she's getting a performance bonus, and the reason is her brain metrics show that she's really just been on it.

She leaves work, she's still jamming to the music with her brain sensing earbuds in. Playlists will become increasingly responsive to brain activity. That's kind of a given.

She gets home, has her brain sensing earbuds in while she sleeps at night. That allows her to track brain activity during the night. It also potentially could allow people to do things like market to her while she's sleeping. And then she comes into work the next day. There's a somber cloud that's fallen over the office.

And the reason is that one of her colleagues has been arrested under suspicion of engaging in some kind of fraud, and she's really worried because she's been secretly working with that person on a startup. But it turns out that as you work more closely with someone, you can start to see synchronization of brain data between two people, and she's worried that that synchronization data is gonna be used by the authorities to implicate her in his wrongdoings.


[00:02:39] Bilawal Sidhu: 

A world overrun by neurotechnology. It all sounds very sci-fi, but our guest today, Nita Farahany, says that many of our devices already have these sensing capabilities and this kind of workplace experience is just within reach. 


[00:02:54] Nita Farahany: 

There are already workplaces issuing these earbuds that have brain sensing devices in them, which could really both benefit an individual and be used against them.


[00:03:07] Bilawal Sidhu: 

I am Bilawal Sidhu, and this is The TED AI Show where we figure out how to live and thrive in a world where AI is changing everything.

Today we're gonna dive into AI-enhanced neurotechnology with ethicist and legal scholar Nita Farahany. Nita is the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology, a book I found both fascinating and terrifying. It's about how neurotechnology without regulation has the power to infringe upon our last bastion of privacy: our privacy of thought. But is this kind of technology inevitable?

And if so, how do we preserve our cognitive liberty? It's a term Nita uses that I find myself thinking about a lot lately. I followed up with Nita because I had so many questions.

Nita, welcome to The TED AI Show. 


[00:04:10] Nita Farahany: 

Thanks for having me. 


[00:04:10] Bilawal Sidhu: 

First, I have to ask, can you start by giving us a brief overview of neurotechnology? 


[00:04:16] Nita Farahany: 

Sure. 


[00:04:16] Bilawal Sidhu: 

What it is, how does it work at a high level, and what are some of the dominant use cases that companies are pursuing? 


[00:04:23] Nita Farahany: 

Up until now, if somebody hears neurotechnology, they associate it with maybe something like what Elon Musk is doing with his Neuralink company.

You know, drilling a small, uh, you know, hole into the skull, implanting electrodes deep into the brain, and then enabling somebody who maybe has lost the ability to communicate or has lost the ability to move to regain some of that functionality. That is also neurotechnology, but it is not the neurotechnology that's going to impact most of us.

What I'm focused on is the internet of things and wearable technology: the fact that, you know, people are increasingly wearing devices like a smartwatch or earbuds or XR devices, like virtual reality and augmented reality glasses, that are packed with sensors, and those sensors are picking up different aspects of our bodily function.

Anytime you think, anytime you do anything, neurons are firing in your brain. They give off tiny electrical discharges. Hundreds of thousands of neurons may be firing at any given moment that reflect different mental states. So when you're happy, when you're sad, when your mind is wandering, or when you think up, down, left, right and you wanna navigate around a screen.

Those are electrical signals that can be picked up by these sensors, and then AI enables the decoding of that into commands. Most people are used to, for example, heart rate or the number of steps that they're taking, or even temperature, being something that's tracked and already on the market. But starting to come in a much more widespread, uh, fashion is embedding brain sensors into those devices: mostly electroencephalography sensors, EEG sensors, and those sensors can be put into earbuds or headphones or even small wearable tattoos behind the ear that pick up electrical activity in the brain.

And so really the way to think about neurotechnology is not some separate device, although there are many of those that already exist on the marketplace, like a forehead band or sensors that can be put into a hard hat, but instead to think about our wearable devices we're already wearing with new sensors that have the capability of picking up brain activity. 
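A quick aside on the mechanics she's describing: here is a minimal, purely illustrative Python sketch of that sensor-to-command pipeline. Everything in it is an assumption made for illustration; the signals are synthetic, the frequency-per-command mapping (TONES) and the band-power features are invented, and real EEG decoding involves filtering, artifact rejection, and per-user calibration that this toy skips.

```python
# Purely illustrative: decode a one-second window of EEG-like signal into an
# "up/down/left/right" command. All signals here are synthetic; the feature
# choices and frequency mapping are invented for this sketch.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
FS = 256                       # sampling rate (Hz), plausible for wearable EEG
TONES = {"up": 8.0, "down": 14.0, "left": 20.0, "right": 26.0}  # hypothetical

def band_power(window, low, high):
    """Mean spectral power of the window in the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    power = np.abs(np.fft.rfft(window)) ** 2
    return power[(freqs >= low) & (freqs <= high)].mean()

def features(window):
    # Power near each candidate frequency; real decoders use far richer
    # spatial and spectral features across many electrodes.
    return [band_power(window, f - 2, f + 2) for f in TONES.values()]

def synthetic_window(command):
    # Stand-in for brain activity: a band-limited tone plus noise. Real EEG
    # is nowhere near this clean or this separable.
    t = np.arange(FS) / FS
    tone = np.sin(2 * np.pi * TONES[command] * t)
    return tone + rng.normal(0.0, 0.5, FS)

# "Calibration" phase: gather labeled windows, fit a simple classifier.
labels = list(TONES) * 100
X = np.array([features(synthetic_window(c)) for c in labels])
clf = LinearDiscriminantAnalysis().fit(X, labels)

# "Use" phase: each new window of activity becomes a cursor command.
print(clf.predict([features(synthetic_window("left"))])[0])  # expect "left"
```

The shape is the point: a calibration phase maps labeled brain activity to features, and a trained model then turns each new window of activity into a command.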


[00:06:37] Bilawal Sidhu: 

Even recently, with the release of the Apple Vision Pro or the Meta Quest 3. You know, the way I've been thinking about it is, on one hand it's a VR headset.


[00:06:44] Nita Farahany: 

Yeah. 


[00:06:44] Bilawal Sidhu: 

On the other hand, it's a biometric recorder on your face. 


[00:06:47] Nita Farahany: 

That's right. 


[00:06:48] Bilawal Sidhu: 

What is currently possible with this technology as far as mind reading goes and what is not quite possible yet, but is on the near horizon? 


[00:06:57] Nita Farahany: 

Yeah, it's a good question. You know, XR is in many ways, from the ground up, a new technology, right?

So it's, it's computing on your face, but it doesn't have to be constrained by the same old rules, the same old rules being something like a keyboard and a mouse or a joystick. Like, that doesn't have to be how we navigate through it. And so as you build a new class of technology, it's possible to reimagine what interacting with that technology looks like.

And so that's what these companies have done: they've packed them full of biometric sensors, whether those are cameras or facial recognition or eye gaze or brain sensors, and all of them are being trained on the ability to make inferences about brain and mental states. So, are you happy?

Are you sad? Are you tired? Are you, um, awake? Is your mind wandering or are you paying attention? Some of these kinds of basic emotional states. Also, most of them are pretty good at being trained on brain activity to figure out cursor activity, so up, down, left, right. So a lot of the things that you could do with your mouse, you could do with a brain-computer interface device, and then they're getting better at being able to decode an intention to type. What's not there yet, from a true mind reading capacity for wearable devices, is literally what you're thinking, or continuous language in your brain.

So you can't quite, at this point, think, “Oh, I need to send a quick text message to my husband,” and then have that somehow decoded from your brain, sent to your, um, mobile device, and then send off, uh, a message to him by thinking about doing so. But that's coming.

So that's the kind of thing where the intention to communicate is something that can increasingly be decoded by these devices, and that's where generative AI really can be a game changer, because generative AI trained on natural language and conversations becomes much more powerful at being able to predict the next word that it is that you wanna type.

And so that kind of autocomplete feature of generative AI, paired together with brain activity, and the increasing capability of being able to put large language models on a device mean that the devices and the brain sensors can co-evolve with the individual, so that it becomes increasingly precise at more and more mind reading.
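To make her autocomplete point concrete, here is a small, hedged sketch of the fusion idea: a language model supplies a prior over likely next words, noisy neural evidence supplies a likelihood, and the decoder multiplies the two. The bigram table and the simulated neural signal are invented for this sketch; no real product's method is implied.

```python
# Purely illustrative: why an LM prior plus noisy neural evidence beats the
# neural evidence alone when decoding an intention to type. The bigram table
# and the simulated "brain signal" are both made up for this sketch.
import math
import random

random.seed(1)

# Toy stand-in for a generative language model: P(next word | previous word).
BIGRAM = {
    "send": {"a": 0.5, "the": 0.3, "it": 0.2},
    "a":    {"message": 0.6, "text": 0.3, "memo": 0.1},
}
VOCAB = ["a", "the", "it", "message", "text", "memo"]

def neural_likelihood(intended, candidate):
    """Simulated P(brain signal | user intends `candidate`). A real decoder
    would compute this from EEG features; here the intended word is only
    modestly favored, with heavy noise."""
    base = 2.0 if candidate == intended else 1.0
    return base * random.uniform(0.5, 1.5)

def decode_next_word(prev_word, intended):
    """Fuse the two sources: log-posterior = log(prior) + log(likelihood)."""
    prior = BIGRAM.get(prev_word, {})
    scores = {
        w: math.log(prior.get(w, 1e-3)) + math.log(neural_likelihood(intended, w))
        for w in VOCAB
    }
    return max(scores, key=scores.get)

# The user has "typed" the word "send" and intends "a" next: the LM prior
# pulls the ambiguous neural evidence toward the right answer.
print(decode_next_word("send", intended="a"))
```

As a decoder collects more of one person's data, both the prior and the likelihood can be refit to that person, which is the co-evolution she describes.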


[00:09:16] Bilawal Sidhu: 

It's really powerful. So it sounds like this technology can kind of coarsely infer your brain states, not exactly read your internal monologue. But to your point, as you introduce new modalities, you're looking at: what does the camera feed see?


[00:09:30] Nita Farahany: 

Right. 


[00:09:30] Bilawal Sidhu: 

What is the user's eye gaze focused on? And then you throw in generative AI to add a layer of prediction on top of that.

Even with these very coarse sensing capabilities, you can actually start making very accurate predictions.


[00:09:42] Nita Farahany: 

That's right. 


[00:09:43] Bilawal Sidhu: 

So I wanna understand why is this something so many companies are racing towards and investing in heavily? What are some of the most positive use cases that you're excited about? Like how is this actually gonna help users?


[00:09:55] Nita Farahany: 

Yeah, I think there's a lot of ways it can help users, and I don't think about neurotechnology as just those brain sensors that are put into an earbud. I think about this as an entire class of cognitive biometrics, which allow predictions about what a person is thinking and feeling, and that can have a lot of really positive use cases.

The more we can actually gain really accurate insights about what's happening inside of our brains, the better. So the companies who are racing to this space are all of the major tech companies, right? Apple has all kinds of patents and investments in this space, and you see the same thing at Meta and Google and others, because if XR technology really takes off, which I think eventually it will, it doesn't make sense to have us tethered to a keyboard and a mouse or a joystick. Like, there has to be a more natural and seamless way of interacting with these devices.

The approach has not been, like, let's figure out how to commodify all the brain data; it's, like, how do we build a new class of technology from the ground up and have a new way of thinking about interacting with that technology?

But then, you know, there's also a huge amount of investment that's been happening on the side, which is on mental health, and recognizing that the brain is an untapped potential area for products that could be targeted at mental health. And so, you know, this is things like seeing a huge number of apps that are focused on, whether it's AI-based mental health, or journaling or meditation apps.

There you have, you know, a potentially billions-to-trillions-of-dollars industry around brain health and wellness. And then they start to converge, because suddenly, if you have this new class of devices that have been focusing on neural interfaces and you have this untapped potential of brain health and wellness, suddenly you have access to much better insights and much better ways of being able to actually interact with the brain and to gather data that could be used for much more targeted products.

So I think it's like the biggest untapped market, if you think of it that way. 


[00:11:59] Bilawal Sidhu: 

That's well said. I mean, I, I believe Zuckerberg called neural interfaces the holy grail of VR, right? 


[00:12:05] Nita Farahany: 

That's right. Yeah. The QWERTY keyboard doesn't make any sense when you're thinking about, like, XR, right? It just doesn't make sense, nor does the mouse. Right. I mean, it's...


[00:12:14] Bilawal Sidhu: 

Totally. 


[00:12:14] Nita Farahany: 

Like, like, it's become second nature to us, but the idea that I have to use a mouse to navigate around my screen... like, touchscreen was a good innovation that way, but that's still awkward.


[00:12:25] Bilawal Sidhu: 

Yeah. 


[00:12:25] Nita Farahany: 

Um, and, like, we're just inefficient in our interactions with technology.


[00:12:34] Bilawal Sidhu: 

This world you painted sounds very efficient, if that's the right word for it, but there is something uncanny about it, right? And I wanna get into what makes it uncanny by evaluating some of the risks of this technology. Uh, you brought a lot of awareness to this idea of cognitive liberty. So how do you define cognitive liberty, and why do you find it to be a useful term when you're looking at this new wave of neurotechnology?


[00:12:59] Nita Farahany: 

I came upon the term cognitive liberty around 2010 or something, and it resonates with me well because what it reflects, from my perspective, is this right to self-determination over our brain and mental experiences as a fundamental matter. And, like, what does that even mean, to have self-determination?

There's this huge amount of literature that has developed over the past couple of decades around what self-determination means and why self-determination is fundamental to human self-actualization. You need the basic autonomy, the competence, and the capacity for relatedness to other people and so for me it's about that.

It's about those pillars of self-determination, the ways in which technology can both enable it, but also have increasingly come to interfere with the capacity for self-determination and how cognitive liberty would give us the goalpost to say like, “What is it that we're trying to achieve in technological design or in technological regulation?”

We're trying to preserve this space of self-determination for individuals to enable them to form their identity, to have a space of mental privacy, to have the capacity of freedom of thought, the preconditions to be able to become a fully self-actualized human. So it resonates really well for me as, as a liberty interest, as a kind of fundamental right that individuals have as a precondition to being able to flourish.


[00:14:21] Bilawal Sidhu: 

It's like our minds are this sanctum sanctorum, right? We, we feel like we have complete dominion over it, though already technology is influencing us at a very deep, visceral level without us even knowing, and now we're creating these higher bandwidth forms of, uh, sensing. And I'm kind of curious, is there like an interesting analogy here where, you know, many folks got their DNA sequenced over the past decade or two without really thinking about a scenario where, you know, maybe a data breach would occur.

And it makes me wonder, from, like, a medical data perspective: what are gonna be the effects of increasing brain and neurological health transparency without adequate privacy regulations? And obviously America is very unique in that there is no federal law on, uh, internet privacy. Could this medical data kind of be weaponized against users?


[00:15:12] Nita Farahany: 

Yeah, I mean, I think very much so. So, you know, part of it is people went into direct-to-consumer genetic testing and could never have imagined it would eventually be used to solve cold cases and, uh, you know, for law enforcement agencies to collect all of that data and, like, put us all into, um, warrantless searches from here on out, right?

And, and then some people are like, “Well, that's okay. I don't mind that. Like I didn't commit a crime and so if it helps, you know, find that person, that's great.” But, you know, as you start to imagine every possible use case, there are so many use cases that people just don't even contemplate about the ways in which data can be used or misused against them.

I think with brain data, it's even more fundamental than that. Like, I don't think it's just, let's point to how law enforcement might use it one day, or how others might use it one day. I think it's about the importance of having a space, like, you know, the inner sanctum, the mental privacy, the space we need to even just be us.

And that's almost impossible for us to even grapple with or imagine a world in which we don't have that. But that's where I spend a lot of my mental energy is imagining that world where we don't have that. And, you know, think about like as a kid, all of the thoughts where you're like, “Maybe I'm weird. I, you know, have different gender identity or preferences and sexual orientation or you know, maybe I don't wanna be a doctor and my parents really want me to be a doctor.” 

And all of these little thoughts that you have every day. If you don't have a space where you can do that, where you feel safe to just be that person that figures out who you are...

What does that world look like for humans to become, right? This, like, act of becoming that we're constantly engaged in. That's the world that I think we're entering into without realizing it. It's a world where what we've taken for granted as this most fundamental aspect of being human suddenly may no longer exist, and we're not putting into place the right protections to ensure this linchpin of humanity is still safe.


[00:17:27] Bilawal Sidhu: 

This is what it has come to: we're talking about neuro surveillance. Let's go even further down the rabbit hole and, like, imagine this future a bit more. Do you think we'll get to a place where our brain states, our inner thoughts, are fully transparent to one another? How would this impact our personal relationships?

How's this gonna impact society? It kind of feels like Twitter on steroids, like without the crude thumb typing to express your thoughts. Just knowing what everyone is thinking at all times just feels wild. 


[00:17:58] Nita Farahany: 

So I taught a class at Duke called Let's Talk About Digital You. And it was a class for undergraduates where the goal was to have them think critically about digital technologies and, and how they interact with them.

And then also to think about what it means for them and who they are as a person. And one of the things I learned in that class, when we were talking about privacy, was how almost every kid in the class shared their location data on their phone with every one of their friends, like hundreds of friends who were tracking them at all times.

I was shocked by this. And then I think about how in the writing of the book, I was interviewing this person who leads a meditation class that uses neural devices and how they've created Facebook groups where they're sharing readouts of their, uh, meditation sessions. They're like, “Oh, look at my gamma activity there. Look at what's happening with my alpha activity here.” 

And I think, “Is this gonna be the latest status update, where, you know, you're like, Nita's in a bad mood, don't talk to her right now. Or, like, you know, Nita is thinking about food.” And my friends reach out and say, like, “Hey, I'm thinking about food too. Like, let's go get a bite to eat.”

Like, it just becomes something that, you know, we decide we're gonna share all this data, and it's not hard to imagine that we get to that place where it becomes something that is much more transparent. Is that all bad? I don't know. Right? There are some people who believe, you know, total transparency is better.

I can buy into that argument up until you get to mental privacy. And then I really think this act of becoming requires that we have much greater control over what, if anything, is shared from our brain data. 


[00:19:36] Bilawal Sidhu: 

You're bringing up a point about this technology that is very much a consent issue, and sometimes I feel like utility with technology almost ends up being a Trojan horse.

Like we do get a lot of benefit from this stuff, but by consenting to these things, all these other things happen that we did not expect and we may not even be aware of. 


[00:19:55] Nita Farahany: 

Right. And I, I think that's what I'm trying to really highlight for people. There's been a lot that's happened over the past year, and part of it has been me worrying about normalizing neural surveillance.

Like, the risks become invisible to us, and we accept this new technology without even stopping to recognize all of the implications of what it is that we're adopting. And part of it is interesting; it's, it's how the technology is introduced to us. A lot of times it's with, as you put it, utility, right?

There's some utility that we buy into, or there's some experience where it's like the only way you get this, like, fun experience is by opting into this new technology, without then sensitizing us to all of the risks of doing so and really contemplating that we're crossing a new barrier here. We're crossing some threshold that we've never passed before, and it has profound implications.

It is this space of what it means to be human. Part of how we define vulnerability and intimacy with each other is what we choose to share and what we choose to hold back. And I don't think that's just a consent issue, right? Because like at some point there's enough peer pressure and social pressure that like if you're not sharing your location data, like you're weird.

You know, why aren't you sharing it with everybody else? So there's this coercive pressure toward the norm where we have this collective action problem. If each of us individually decides like, “Oh, I'm okay with sharing my neural data, and I don't care if I don't have mental privacy, I don't have anything to hide.”

Suddenly we have the collective, having not recognized that they've given up this space of what it means to be human, without us ever having really talked about it, thought about it, recognized how profound a thing it is to actually cede to other people or to companies.


[00:21:46] Bilawal Sidhu: 

Yeah, it's like, uh, “We like targeted ads. Oh, it's fine. I like using Instagram.” 

And these algorithms have the coarsest, you know, kind of view into our hopes, wishes, and desires, and are still doing such a great job. It feels like we're going into this future where people are like, “Oh yeah, you know, I'll, I'll have, I'll get that cheap Alexa.” And then suddenly they're having dreams about Bud Light or Coors Light or whatever, but they're like, “Oh, that's cool with me. Like, I'm asleep. I don't really care.”


[00:22:12] Nita Farahany: 

Yeah, I mean, that example is a real one, right?


[00:22:14] Bilawal Sidhu: 

Yeah. 


[00:22:15] Nita Farahany: 

It's like, um, I write about this in my book, where, you know, Coors was really frustrated. They'd been locked out of the halftime show for years for the Super Bowl. So they got together with dream researchers and figured out that it's possible to incubate associations in people's minds when they're in their most suggestible state, because their conscious brain is basically checked out as they're falling asleep.

Associations between Coors and mountains and lakes and streams, so that you have this idea that, like, Coors is crisp and refreshing. But then I imagine this dystopian future where you're wearing your sleep earbuds to track your sleep activity. The entire economic, you know, system of these companies is based on targeted advertisements and real-time bidding of, you know, here's new real estate to be able to access Nita, whether she's on her search engine or the app store. And it's like, “Hey, we've never targeted Nita when she's asleep, but here's her most suggestible sleep state. She's got an Amazon Echo in her bedroom. Like, prime time to advertise and to play a little jingle is right now.”

And so like what stops that? What prevents targeted advertisements to us while we are sleeping? Nothing. Right? I mean, other than like the company shouldn't do that.

Okay, what's gonna stop them from doing that, if that becomes the most effective way to advertise to us and becomes the most expensive ad real estate that they could sell to ad brokers? 


[00:23:43] Bilawal Sidhu: 

Oh, good lord. Oh, good lord. I mean, this study, this targeted dream incubation, basically demonstrates what companies and social media algorithms can do, and also what neurotechnology is capable of already.


[00:23:57] Nita Farahany: 

Right. 


[00:23:58] Bilawal Sidhu: 

And so it's, it's wild because social algorithms not only extract or infer our way of looking at the world, but they are actively shaping how we perceive the world. 


[00:24:08] Nita Farahany: 

That's right. 


[00:24:09] Bilawal Sidhu: 

They predict what we want, but as these platforms become such an intricate part of our lives, they can almost define what we want.

So when it comes to this threat of neuro surveillance, how do you think the rise of this tech could influence what we feel safe to think on a conscious level, or even more subliminally, like what we're capable of thinking?


[00:24:30] Nita Farahany: 

That's the question I struggle with the most these days, you know, as I try to unpack what self-determination even means anymore in a world where we're being steered constantly.

And you know, the easiest way people can really understand that is: if you sit down to watch a single episode of a show and you end up watching four, it's by design. You are less likely to get up and leave if it's just automatic, like if you don't have to make a self-control choice.

And we can see these differences. Like, if you look at the studies, there's a difference between when you are making a choice about what video to watch next versus when an algorithm is deciding for you what video to watch next: the parts of your brain that are responsible for self-control are basically just turned off.

They're silent when you're just being fed content over and over again. 


[00:25:23] Bilawal Sidhu: 

Wow. 


[00:25:24] Nita Farahany:

And when you see that, right, self-control is turned off, you are being fed information, and that's changing how you feel. That's changing what you believe. We are being steered. And is there such a thing as self-determination in a world that's increasingly connected to technology, and a world where that steering will become much, much more precise?

If it's not just the crude interpretation of how many seconds or milliseconds you spent on a video to try to make an interpretation of you, but literally a direct measurement of your reaction to information. And then you're in an immersive environment. You're not even in the real world anymore, where at least there are some things that are static, and that immersive environment can continuously change in response to your brain activity.

It feels like we're approaching The Matrix pretty quickly. Um, and so then I struggle with, like, okay, well, is there such a thing as self-determination in that world? Or self-actualization, in a world where there's true autonomy, even if it's relational autonomy; where there's true competence, where we're exercising self-control and critical thinking skills; where we're fostering relationships with each other and relatedness with ourselves and with other people?

I still believe there is. I think if we have cognitive liberty as a guiding principle, it points us to different ways of designing technology that align with self-determination rather than with human diminishment.


[00:26:47] Bilawal Sidhu: 

I love the, the phrasing that you're using of technology steering us, and really the question is, who is at the steering wheel, right?


[00:26:54] Nita Farahany: 

Yeah. 


[00:26:54] Bilawal Sidhu: 

Like who controls the objective function? And so this future starts sounding rather scary. So I wanna zoom back out just a bit here and ask you, you know, you've been a major figure in both spreading awareness about these breaches of cognitive liberty and trying to prepare us for the threats ahead.

You've also been working to encode cognitive liberty as a legal right. I'm curious, what would that look like in regulation, and how would it be enforced?


[00:27:22] Nita Farahany: 

First I, I'll just emphasize, I think cognitive liberty, to your point, it's systemic change, right? It's in part about encoding it into law, but it's also about changing our incentives, changing, you know, economic alignment with cognitive liberty, commercial design, and redesign.

From a legal perspective, I think it starts with a human rights approach, and this is so that there's a universal global norm around a right to cognitive liberty, recognizing that as an organizing principle for how we interpret existing human rights law. It's recognizing that privacy includes an explicit right to mental privacy, which also safeguards against interference with and interception of the automatic processes in our brain.

And so much of this technology is really about that. It's not about robust thought. It's about these automatic ways that our brain reacts, and interfering with them, hijacking them, manipulating them. And then there's freedom of thought.

As long as it's been a right, it has been recognized as an absolute human right. But it's pretty narrowly constructed, because it's an absolute right. And so that's really around protecting against the interception, manipulation, and punishment of thought. And here we're really in the mind reading realm, right?

Which is protecting this space of, you know, what we would really commonly, in common language, think of as thoughts and images in our mind. That's the human rights perspective. Any, uh, good critic of human rights law will say, “Well, that's only as good as, you know, whether people adhere to it and how it's implemented at a national level.”

So I also think it's important to recognize what that looks like, you know, at nation-state levels. And part of that is doing things as part of privacy laws to, you know, create robust rights around cognitive biometric data: giving employees a right to mental privacy, giving children and students in the educational system a right to not be surveilled for their mental activity, and giving them that space.

Right? So it's, it's taking these high principles and then, context by context, creating robust laws that actually implement those high concepts into law at different, you know, nation and nation-state levels.


[00:29:29] Bilawal Sidhu: 

So where are we currently at there? I'm curious if there's been any notable progress on this subject since your book came out.


[00:29:36] Nita Farahany: 

A lot.

Yeah, so in the US there's finally been some momentum, I'd say. I'm not that excited about what's happened here yet, but I, I'm, I'm appreciative of the fact that there have been conversations. So Colorado passed a new law that provides some protections around neural data when it's used for identification purposes.

That's a very narrow subset, but it's something. California has a broader law that's currently pending that would make it a sensitive category of data. The Uniform Law Commission here in the United States, which has appointed commissioners from every state, has just agreed to create a study committee that might lead to a drafting committee to have model legislation for all of the states about how to protect this category of data around cognitive biometrics, which is exciting.

That launches this summer. Um, UNESCO has had a major process underway: 194 member countries voted to move toward adopting a global standard around the ethics of neurotechnology. The first draft of that was published in April. I'm part of that process; I was, um, appointed by the US to that process, and I'm the co-chair of the expert committee. And the second draft will come out at the end of August.

And then internationally, there's been a lot of conversation happening around this: the concept of cognitive liberty, the concept of, you know, whether or not there need to be special rights, broadening the conversation beyond neurotechnology to understand that this is a new class. It's moving forward, so it's encouraging. But what you find is that trying to claw back rights is much harder than having a set of rights in place from the get-go.

So I think we need to move faster and use that momentum to actually lead to real change before these products are widespread across society.


[00:31:20] Bilawal Sidhu: 

This technology is coming fast and furious and as you've alluded to, and actually as you've outlined, it's, it's being infused into tech that we use every single day.

People are just gonna go buy the next-gen AirPods. One question I have is: for the listeners that are perhaps just being exposed to this idea of neuro surveillance and neurotechnology, what advice would you have for them as they go about, you know, navigating the world and becoming exposed to these technologies, both in their personal life but also in the workplace?


[00:31:48] Nita Farahany: 

We still have time to make choices individually and collectively. We should demand that if Apple launches EEG sensors, they be very clear about what their data privacy policies are with respect to that data. And they've done that with the Apple Vision Pro, right? They've been incredibly specific to say, “Here's what's happening with the eye tracking data. Here's what lives on device. Here are the inferences that leave the device.”

That kind of transparency is great, and I applaud them for doing that. We should only buy products that are transparent with respect to how they're thinking about each stream of sensor data, and not buy products from companies that do not offer those same assurances.

That's a space we have a right to protect, and that we should demand as a set of protections. And so advocate for the rights that, um, you know, are being called for. Be part of the process of advocating for UNESCO to move forward with this process, for states, for countries to move forward with a robust set of rights around cognitive liberty, and be part of the change that makes that happen.


[00:32:50] Bilawal Sidhu: 

That's beautifully put. Thank you so much for joining us. 


[00:32:54] Nita Farahany: 

Thanks for having me.


[00:33:00] Bilawal Sidhu: 

This tech is amazing, and it's not hyperbole to say that we're dealing with mind reading here. One of the central benefits of this technology is gaining visibility into our own minds, quantifying that endless stream of activity that really influences our day-to-day perception. And through this new tech, what seems as opaque as emotional experience can be translated into measurable neurobiological patterns.

This means that we can better understand what's happening in our heads, and it also means that we can more easily change what's happening in our heads. If you understand what's going on in a person's mind, you can create a stimulus to influence it, whether that's Orwellian workplace monitoring or the most persuasive advertisement you've ever seen.

It's giving away read/write access to the core of who we are. So what happens to a society where there are no secrets, where we're all open books? Maybe this is an inevitable transition. We already share so much of ourselves via social media, and do so proactively. That said, we need to place a lot more value on our personal data.

Data that we currently view as innocuous and often sign away without even thinking about it. But when it comes to tech that has both read and write access to our minds, we have to be more proactive than we have been about social media. And we have to value the data before it becomes table stakes to function in society.

After all, the stakes here are high. We're talking about our hearts, minds, and how we view the world.

The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Elah Feder and Sarah McCrea. Our editors are Banban Cheng and Alejandra Salazar. Our showrunner is Ivana Tucker, and our associate producer is Ben Montoya. Our engineer is Aja Pilar Simpson.

Our technical director is Jacob Winik, and our executive producer is Eliza Smith. Our fact checker is Krystian Aparta. And I'm your host, Bilawal Sidhu. See y'all in the next one.