The TED AI Show
What really went down at OpenAI and the future of regulation w/ Helen Toner
May 28, 2024
Please note the following transcript may not exactly match the final audio, as minor edits or adjustments could be made during production.
[00:00:00] Bilawal Sidhu: Hey, Bilawal here. This episode is a bit different. Today I'm interviewing Helen Toner, a researcher who works on AI regulation. She's also a former board member at OpenAI. In my interview with Helen, she reveals for the first time what really went down at OpenAI late last year when the CEO Sam Altman was fired, and she makes some pretty serious criticisms of him.
We've reached out to Sam for comments, and if he responds, we'll include that update at the end of the episode. But first, let's get to the show.
I am Bilawal Sidhu, and this is the TED AI Show where we figure out how to live and thrive in a world where AI is changing everything.
The OpenAI saga is still unfolding. So let's get up to speed. In case you missed it, on a Friday in November 2023, the board of directors at OpenAI fired Sam Altman. [00:01:00] This ouster remained a top news item over that weekend with the board saying that he hadn't been, quote, consistently candid in his communications, unquote.
The Monday after, Microsoft announced that they had hired Sam to head up their AI department. Many OpenAI employees rallied behind Sam and threatened to join him. Meanwhile, OpenAI announced an interim CEO, and then, a day later, plot twist: Sam was rehired at OpenAI. Several of the board members were removed or resigned and were replaced.
Since then, there's been a steady fallout. On May 15th, 2024, just last week as of recording this episode, OpenAI's chief scientist, Ilya Sutskever, formally resigned. Not only was Ilya a member of the board that fired Sam, he was also part of the Superalignment team, which focuses on mitigating the long-term risks of AI. With the departure of another executive, Jan Leike, many of the original safety-conscious folks in leadership positions have either departed OpenAI or moved [00:02:00] on to other teams.
So what's going on here? Well, OpenAI started as a nonprofit in 2015, self-described as an artificial intelligence research company. They had one mission: to create AI for the good of humanity. They wanted to approach AI responsibly, to study the risks up close, and to figure out how to minimize them. This was gonna be the company that showed us AI done right.
Fast forward to November 17th, 2023, the day Sam was fired. OpenAI looked a bit different. They'd released DALL-E, and ChatGPT was taking the world by storm with hefty investments from Microsoft. It now seemed that OpenAI was in something of a tech arms race with Google. The release of ChatGPT prompted Google to scramble and release their own chatbot, Bard.
Over time, OpenAI became closed AI. Starting in 2020, with the release of GPT-3, OpenAI stopped [00:03:00] sharing their code, and I'm not saying that was a mistake. There are good reasons for keeping your code private, but OpenAI somehow changed, drifting away from a mission-minded nonprofit with altruistic goals to a run-of-the-mill tech company shipping new products at an astronomical pace.
This trajectory shows you just how powerful economic incentives can be. There's a lot of money to be made in AI right now, but it's also crucial that profit isn't the only factor driving decision making. Artificial general intelligence, or AGI, has the potential to be very, very disruptive, and that's where Helen Toner comes in.
Less than two weeks after OpenAI fired and rehired Sam Altman, Helen Toner resigned from the board. She was one of the board members who had voted to remove him. And at the time she couldn't say much. There was an internal investigation still ongoing, and she was advised to keep mum. And oh man, she got so much flak for [00:04:00] all of this.
Looking at the news coverage and the tweets, I got the impression she was this techno pessimist who was standing in the way of progress or a kind of maniacal power seeker using safety policy as her cudgel. But then I met Helen at this year's TED Conference and I got to hear her side of the story, and it made me think a lot about the difference between governance and regulation.
To me, the OpenAI Saga is all about AI board governance and incentives being misaligned amongst some really smart people. It also shows us why trusting tech companies to govern themselves may not always go beautifully, which is why we need external rules and regulations. It's a balance. Helen's been thinking and writing about AI policy for about seven years.
She's the director of strategy at CSET, the Center for Security and Emerging Technology at Georgetown, where she works with policymakers in DC on all sorts of AI issues. [00:05:00] Welcome to the show.
[00:05:01] Helen Toner: Hey, good to be here.
[00:05:02] Bilawal Sidhu: So Helen, a few weeks back at TED in Vancouver. I got the short version of what happened at OpenAI last year.
I'm wondering, can you give us the long version?
[00:05:12] Helen Toner: As a quick refresher on sort of the context here, the OpenAI board was not a normal board. It's not a normal company. The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company's, you know, public good mission was primary, was coming first over profits, investor interests, and other things.
But for years, Sam had made it really difficult for the board to actually do that job by, you know, withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board. You know, at this point everyone always says, like, what? Give, give me some examples. And I can't share all the examples, but to give a sense of the kind of thing that I'm talking about, it's things like, you know, when ChatGPT came out in November 2022, the board was not informed in advance about that.
We learned about ChatGPT on Twitter. [00:06:00] Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he, you know, constantly was claiming to be an independent board member with no financial interest in the company. On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was, you know, basically impossible for the board to know how well those safety processes were working or what might need to change. And then, you know, a last example that I can share, 'cause it's been very widely reported, relates to this paper that I wrote, which has been, you know, I think way overplayed in the press.
[00:06:36] Bilawal Sidhu: For listeners who didn't follow this in the press, Helen had co-written a research paper last fall intended for policymakers. I'm not gonna get into the details, but what you need to know is that Sam Altman wasn't happy about it. It seemed like Helen's paper was critical of OpenAI and more positive about one of their competitors, Anthropic. It was also published right when the Federal Trade Commission was investigating [00:07:00] OpenAI about the data used to build its generative AI products. Essentially, OpenAI was getting a lot of heat and scrutiny all at once.
[00:07:09] Helen Toner: The way that played into what happened in November is, is pretty simple. It had nothing to do with the substance of this paper.
The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board. So it was another example that just like really damaged our ability to trust him and, and actually only happened in late October last year when we were already talking pretty seriously about whether we needed to fire him.
And so, you know, there's kind of more individual examples and for any individual case, Sam could always come up with some kind of like innocuous sounding explanation of why it wasn't a big deal or misinterpreted or whatever. But the, you know, the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us.
And that's a completely unworkable place to be in as a board, [00:08:00] especially a board that is supposed to be providing independent oversight over the company, not just, like, you know, helping the CEO to, to raise more money. Um, you know, not, not trusting the word of, of the CEO, who is your main conduit to the company, your main source of information about the company.
It's just like totally, totally impossible. So, um, that was kinda the background, the state of affairs coming into last fall. And we had been, you know, working at the board level as best we could to set up better structures, processes, all that kind of thing, to try and, you know, improve these issues that we had been having at the board level.
But then in, mostly in October of last year, we had this series of conversations with, um, these executives where the two of them suddenly started telling us about their own experiences with Sam, which they hadn't felt comfortable sharing before. But telling us how they couldn't trust him about the, the toxic atmosphere he was creating.
They used the phrase psychological abuse, um, [00:09:00] telling us they didn't think he was the right person to lead the company to AGI, um, telling us they had no belief that he, you know, could or would change, no point in giving him feedback, no point in trying to work through these issues. I mean, you know, they, they've since tried to kinda minimize what, what they told us, but these were not, like, casual conversations. They were, they were really serious, to the point where they actually sent us screenshots and documentation of some of the, the instances they were telling, telling us about, of him lying and being manipulative in different situations. So, you know, this was a huge deal. This was a lot. Um, and we talked it all over very intensively over the course of several weeks, and ultimately just came to the conclusion that the best thing for OpenAI's mission and for OpenAI as an organization would be to bring on a different CEO. And, you know, once, once we reached that conclusion, it was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him, he, you know, would pull out all the [00:10:00] stops, do everything in his power to undermine the board, to prevent us from, you know, even getting to the point of being able to fire him.
So, you know, we were, we were very careful, very deliberate about, um, who we told, which was essentially almost no one in advance other than, you know, obviously our legal team. And so that's kind of what took us to, to November 17th.
[00:10:19] Bilawal Sidhu: Thank you for sharing that. Now, Sam was eventually reinstated as CEO with most of the staff supporting his return.
What exactly happened there? Why was there so much pressure to bring him back?
[00:10:29] Helen Toner: Yeah. This is obviously the, the elephant in the room and uh, unfortunately I think there's been, um, a lot of misreporting on this. I think there were three big things going on that help make sense of kind of what happened here.
The first is that really pretty early on, the way the situation was being portrayed to people inside the company was: you have two options. Either Sam comes back immediately with no accountability, you know, a totally new board of his choosing, or the company will be destroyed. And, you know, those weren't actually the [00:11:00] only two options.
And the, the outcome that we eventually landed on was neither of those two options. But I get why, you know, not wanting the company to be destroyed got a lot of people to, to fall in line, you know, whether because they were, in some cases, about to make, uh, a lot of money from this upcoming tender offer, or just because they loved their team, they didn't wanna lose their job, they cared about the work they were doing.
And of course, a lot of people didn't want the company to fall apart, you know, us, us included. The second thing I think it's really important to know, that has really gone underreported, is how scared people are to go against Sam. Um, they had experienced him retaliating against, against people, retaliating against them, for past instances of, of being critical.
Um, they were really afraid of, you know, what might happen to them. So when some employees started to say, you know, wait, I don't want the company to fall apart, like, let's bring back Sam, it was very hard for those people who had had terrible experiences to actually say that, [00:12:00] for fear that, you know, if, if Sam did stay in power, as he ultimately did, you know, that would make their lives miserable.
I guess the last thing I would say about this is that this actually isn't a new problem for Sam, and if you look at some of the reporting that, that has come out since November, it's come out that he was actually fired from his previous job at Y Combinator, which was hushed up at the time. And then at, you know, his job before that, which was his only other job in Silicon Valley, his startup Loopt.
Um, apparently the management team went to the board there twice and asked the board to fire him for what they called, you know, deceptive and chaotic behavior. If you actually look at his track record, he doesn't, you know, exactly have a glowing trail of references. This wasn't a problem specific to, um, the personalities on the board, as much as he would love to kind of portray it that way.
[00:12:50] Bilawal Sidhu: So I had to ask you about that, but this actually does tie into what we're gonna talk about today. OpenAI is an example of a company that started off trying to do good, uh, but now it's moved on [00:13:00] to a for-profit model, and it's really racing to the front of this AI game, along with all of these like, ethical issues that are raised in the wake of this progress.
And you could argue that the OpenAI saga shows that trying to do good and regulating yourself isn't enough. So let's talk about why we need regulations.
[00:13:17] Helen Toner: Great, let's do it.
[00:13:18] Bilawal Sidhu: So, uh, from my perspective, AI went from the sci-fi thing that seemed far away to something that's pretty much everywhere, and regulators are suddenly trying to catch up.
But I think for some people it might not be obvious why exactly we need regulations at all. Like for the average person, it might seem like, oh, we just have these cool new tools like DALL-E and ChatGPT that do these amazing things. What exactly are we worried about in concrete terms?
[00:13:44] Helen Toner: There's very basic stuff for very basic forms of the technology.
Like if people are using it to decide who gets a loan, to decide who gets parole, um, you know, to decide who gets to buy a house, like, you need that technology to work well. If that technology is gonna be [00:14:00] discriminatory, which AI often is, it turns out, um, you need to make sure that people have recourse.
They can go back and say, hey, why was this decision made? If we're talking AI being used in the military, that's a whole other kettle of fish, um, and is not, I dunno if we would say, like, regulation for that, but you certainly need to have guidance, rules, processes in place. And then kind of looking forward and thinking about more advanced AI systems, I think there, you know, there's a pretty wide range of potential harms that we, we could well see if AI keeps getting increasingly sophisticated. You know, letting every little script kiddie in their parents' basement have the hacking capabilities of, you know, a crack NSA cell, like, that's a problem.
I think something that, that really makes AI hard for regulators to, to think about is that it is so many different things, and plenty of those things don't need regulation. Like, I don't know, how Spotify decides how to make your, your playlist, the AI that they use for that, like, I'm happy for Spotify to just pick whatever songs they want for me, and if they get it wrong, you know, who cares.
Um, but for many, many other use cases, you wanna [00:15:00] have at least some kind of basic common sense guardrails around it.
[00:15:03] Bilawal Sidhu: I wanna talk about a few specific examples that we might wanna worry about. Not in some battle space overseas, but at home in our day-to-day lives. You know, let's talk about surveillance.
AI has gotten really good at perception. Essentially understanding the contents of images, video and audio. Yep. And we've got a growing number of surveillance cameras in public, in private spaces, and now companies are infusing AI into this fleet, essentially breathing intelligence into these otherwise dumb sensors that are almost everywhere.
Yep. Madison Square Garden in New York City, as an example: they've been using facial recognition technology to bar lawyers involved in lawsuits against their parent company, MSG Entertainment, from attending events at their venue. Uh, this controversial practice obviously raised concerns about privacy, due process, and the potential for abuse of this technology.
Can we talk about why this is problematic?
[00:15:54] Helen Toner: Yeah, I mean, I think this is a pretty common thing that comes up in the history of technology: you have some, [00:16:00] you know, some existing thing in society, and then technology makes it much faster and much cheaper and much more widely available. Like surveillance, where it goes from, like, oh, it used to be the case that your neighbor could see you doing something bad and go talk to the police about it.
You know, it's one step up to go to, well, there's a camera, a CCTV camera, and the police can go back and check at any time. And then another step up to, like, oh, actually it's just running all the time, and there's an AI facial recognition detector on there. And maybe, you know, maybe in the future an AI, like, activity detector that's also flagging, you know, this looks suspicious.
Um, I, in some ways there's no, like, qualitative change in what's happened. It's just, like, you could be seen doing something. But I think you do also need to grapple with the fact that if it's much more ubiquitous, much cheaper, then, then the situation is different. I mean, I think with surveillance, people immediately go to the kind of law enforcement use cases, and I think it is really important to figure out what the right trade-offs are between achieving sort of law enforcement objectives and, and being able to catch criminals and, and, you know, prevent bad things from happening, while also recognizing, you know, the, the huge issues that you can [00:17:00] get if this technology is used with overreach. For example, you know, facial recognition works better and worse on different demographic groups, and so if police are, as they have been in some parts of the country, going and arresting people purely on a facial recognition match, and on no other evidence.
There's a, a story about a woman who was eight months pregnant, having contractions in a jail cell after having done absolutely nothing wrong, and being arrested only on the basis of a, you know, a bad facial recognition match. So I personally don't go for, you know, the, this needs to be totally banned and no one should ever use it in any way for anything.
But I think you really need to be looking at how are people using it? What happens when it goes wrong? What recourse do people have? What kind of access to due process do they have? And then when it comes to private use, I, I really think we should probably be, be a bit more, you know, restrictive. Like, I don't know, it just seems pretty clearly against, I don't know, freedom of expression, freedom of movement for somewhere like Madison Square Gardens to be kicking their own lawyers out.
I don't know. I'm not a lawyer myself, so I, I don't know what exactly the, um, the state of the law around that is. But I think, I think the sort [00:18:00] of civil liberties and um, uh, privacy concerns there are pretty clear.
[00:18:05] Bilawal Sidhu: I think, uh, the, the, the problem with sort of an existing set of technology getting infused with more advanced capability, sort of unbeknownst to the common population at large, is certainly a trend.
And one example that shook me up is, uh, a video went viral recently of a security camera from a coffee shop, which showed a view of a cafe full of people and baristas. And basically, over the heads of the customers was, like, the amount of time they'd spent at the cafe, and then over the baristas was, like, how many drinks have they made. And then, you know, so what does this mean? Like, ostensibly the business can, one, track who is staying on their premises for how long and learn a lot about customer behavior without the customers' knowledge or consent. And then, number two, the businesses can track how productive their workers are and could potentially fire, let's say, less productive baristas.
Let's talk about the problems and the risk here and like how is this legal?
[00:18:55] Helen Toner: I mean, the short version is, and this comes up again and again and again if you're doing AI policy, um, [00:19:00] the US has no federal privacy laws. Like, there's no, there are no rules on the books for, you know, how companies can use data.
The US is pretty unique in terms of how few protections there are of what kinds of personal data are protected in what ways. Efforts to make laws have just failed over and over and over again, but there's now this sudden stealthy new effort that people think might actually have a chance. So who knows, maybe this problem is on the way to getting solved, but at the moment it's, it's a big, big hole for sure.
[00:19:23] Bilawal Sidhu: And I think step one is making people aware of this, right? Because people have, to your point, heard about online tracking, but having that same set of analytics in, like, the physical space, in reality, it just feels like the Rubicon has been crossed, and we don't really even know that's what's happening when we walk into whatever grocery store.
[00:19:39] Helen Toner: I mean, again, I, yeah, and again, it's, it's about sort of the, the scale and the ubiquity of, of this, uh, because again, it could be like your favorite, um, barista knows that you always come in and you sit there for a few hours on your laptop because they've seen you do that a few weeks in a row. That's very different to this.
This data is being collected systematically and then sold to, you know, [00:20:00] data vendors all around the country and used for all kinds of other things, or outside the country. Um, so again, I, I think we have these sort of intuitions based on our real-world person-to-person interactions that really just break down when it comes to sort of the size of data that we're talking about here.
[00:20:15] Bilawal Sidhu: I also wanna talk about scams. So folks are being targeted by phone scams. They get a call from their loved ones. It sounds like their family members have been kidnapped and are being held for ransom. In reality, some bad actor just used off-the-shelf AI to scrub their social media feeds for these folks' voices, and scammers can then use this to make these very believable hoax calls, um, where people sound like they're in distress and being held captive somewhere. So we have reporting on this particular hoax now, but what's on the horizon? What's, like, keeping you up at night?
[00:20:46] Helen Toner: I mean, I think the, the, the obvious next step would be with video as well. I mean, definitely if you haven't already gone and talked to, you know, your parents, your grandparents, anyone in your life who is, uh, not super tech savvy and told them like, you need to be on the lookout for this.
You should, you should go do that. [00:21:00] I talk a lot about kind of policy and what kind of government involvement or regulation we might need for AI. I do think a lot of things we can just adapt to, and we don't necessarily need new rules for. So I, I think, you know, we've been through a lot of different waves of online scams, and I, I think this is the newest one, and it, it really sucks for the people who get targeted by it. But I also expect that, you know, five years from now it will be something that people are pretty familiar with, and, and it will be a, a pretty small number of people who are still vulnerable to it. So I think the main thing is, yeah, be super suspicious of any, any voice.
Definitely don't use voice recognition for, like, your bank accounts or things like that. I'm pretty sure some banks still offer that. Ditch that. Um, definitely use something more secure. And yeah, be on the lookout for, for video scamming as well, and for people, you know, um, on video calls who look real. I think there was recently a, just the other day, um, a case of a guy who was on a whole conference call where there were a bunch of different AI-generated people all on the call, and he was the only real person, and got scammed out of a bunch of money.
Um, so that, that's coming.
[00:21:55] Bilawal Sidhu: Totally. Content-based authentication is on its last legs, it seems.
[00:21:59] Helen Toner: Definitely. [00:22:00] It's always worth like checking in with what is the baseline that we're starting with. And I mean, so for instance, a lot of things, um, a lot of things are already public and they don't seem to get misused.
So, like, I think, uh, I think a lot of people's addresses are listed publicly. You know, we used to have literal, you know, white pages where you could look up someone's address. Um, and that mostly didn't result in, you know, in terrible things happening. Or, you know, I even think of silly examples, like, like I think it's really nice that, uh, for delivery drivers, or when you go to a restaurant to pick up food that you ordered, it's just there.
[00:22:27] Bilawal Sidhu: So let's talk about what we can actually do. It's one thing to regulate businesses like cafes and restaurants. It's another thing to rein in all the bad actors that could be using this technology. Can laws and regulations actually protect us?
[00:22:41] Helen Toner: Yeah, they definitely can. I mean, and they already are. Again, AI is so many different things that there's no one set of AI regulations.
There's plenty of laws and, and regulations that already apply to AI. And there's a lot of concern about AI, you know, algorithmic discrimination, um, with good reason, but in a lot of cases there are already laws on the books saying, you know, you can't discriminate on the [00:23:00] basis of race or gender or sexuality or whatever it might be.
Um, and so in those cases, it's not even, you don't even need to pass new laws or make new regulations. You, you just need to make sure that the agencies in question have, you know, the staffing they need. Um, maybe they need to have the exact authorities they have tweaked, in terms of who are they allowed to investigate or who are they allowed to penalize or things like that.
There are already rules for things like self-driving cars; you know, the Department of Transportation is, is handling that, and it makes sense for them to handle that. For AI and banking, there's a bunch of banking regulators that have a bunch of rules. Um, so I think there's a lot of places where, you know, AI actually isn't fundamentally new, and the existing systems that we have in place are, are doing an okay job at, at handling that. They may need, again, more staff or slight changes to, to what they can do.
And I think there are a few different places where there are kind of new challenges emerging at sort of the cutting edge of AI, where you have systems that can really do things that, that computers have never been able to do before, and whether there should be rules around making sure that those systems are being kind of developed [00:24:00] and deployed responsibly.
[00:24:01] Bilawal Sidhu: I'm particularly curious if there's something that you've come across that's really clever or like a model for what good regulation looks like?
[00:24:09] Helen Toner: I think this is mostly still a work in progress, so I don't know that I've seen anything that I think really absolutely nails it. I think a lot of the challenge that we have with AI right now relates to how much uncertainty there is about what the technology can do, what it's gonna be able to do in five years.
You know, experts disagree enormously about those questions, which makes it really hard to make policy. So a lot of the policies that I'm most excited about are about shedding light on those kinds of questions, giving us a better understanding of where the technology is. So some examples, um, of that are things like, uh, President Biden created this big executive order last October that had all kinds of things in there.
One example was a requirement that companies that are training, especially advanced systems, have to report certain information about those systems to the government. And so that's a requirement where you're not saying you can't build that model, can't [00:25:00] train that model. Um, you're not saying the government has to approve something.
You're really just sharing information and creating kind of more awareness and more ability to respond as the technology changes over time, which is, you know, such a challenge for government, keeping up with this fast-moving technology. There's also been a lot of good movement towards funding, like, the science of measuring and evaluating AI.
A huge part of the challenge with figuring out what's happening with AI is that we're really bad at actually just measuring how good is this AI system, how, you know, how do these two AI systems compare to each other, is one of them sort of, quote unquote, smarter. So I think there's been a lot of attention over the last year or two into funding and, and establishing within government, um, better capabilities on that front.
I think that's, that's really productive.
[00:25:45] Bilawal Sidhu: So policymakers are definitely aware of AI, if they weren't before, and plenty of people are worried about it. Uh, they wanna make sure it's safe, right? Uh, but that's not necessarily easy to do. And you've talked about this, how it's hard to regulate [00:26:00] AI. So why is that?
What makes it so hard?
[00:26:03] Helen Toner: Yeah, I think there's, there's at least three things that make it very hard. One thing is AI is so many different things, like we've talked about. Um, it cuts across sectors, you know, it has so many different use cases. It's really hard to get your arms around, you know, what it is, what it can do, what impacts it'll have.
A second thing is it's a moving target. So what the technology can do is different now than it was even two years ago, let alone five years ago, 10 years ago. Um, and you know, policymakers are not good at sort of agile policymaking. Um, they're not like software developers. And then the third thing is no one can agree on how they're changing or how they're gonna change in the future.
If you ask five experts, you know, where the technology is going, you'll get five completely different answers, often five very confident, completely different answers. So that makes it really difficult for policymakers as well, because they need to get scientific consensus and just, like, take that and run with it. So I think maybe this, this kind of third factor is the one that I [00:27:00] think is the biggest challenge for making policy for AI, which is that for policymakers, it's very hard for them to tell who should they listen to, what problems should they be worried about, um, and how is that gonna change over time.
[00:27:10] Bilawal Sidhu: Speaking of who you should listen to, obviously, you know, the very large companies in the space have an incentive, and there's been a lot of talk about regulatory capture. When you ask for transparency, why would companies give a peek under the hood of what they're building? They just cite this as being proprietary.
On the other hand, you know, they might be, uh, these companies might want to set up a policy and institutional framework that is actually beneficial for them and sort of prevents any future competition. How do you get these powerful companies to, like, participate and play nice?
[00:27:42] Helen Toner: Yeah, it's definitely very challenging for policymakers to figure out how to interact with those companies, again, because, you know, in part because they're lacking the expertise and the time to really dig into things in depth themselves. Like, a typical Senate staffer, um, might cover, like, you know, technology issues and trade issues and [00:28:00] veterans affairs and agriculture and education, you know, and that's, like, their portfolio. Wow. Um, so they are scrambling. Like, they have to, they need outside help.
So I think it's very natural that the companies do come in and play a role. And I also think there are plenty of ways that policymakers can really mess things up if they don't, you know, know how the technology works and they're not talking to the companies they are regulating about what's gonna happen.
The challenge, of course, is how do you balance that with external voices who are going to point out the places where the companies are, are actually being self-serving. And so I think that's where it's really important that civil society has resources to also be in these conversations. Certainly what we try to do at CSET, the organization I work at, we're totally independent and, you know, really just trying to work in the best interest of, you know, making good policy.
The big companies obviously do need to have a seat at the table, but you would hope that they have, you know, a seat at the table and not 99 seats out of a hundred in terms of who policymakers are, are talking to and listening to.
[00:28:55] Bilawal Sidhu: There also seems to be a challenge with enforcement, right? Uh, you've got all these AI [00:29:00] models already out there. A lot of them are open source.
You can't really put that genie back in the bottle, nor can you really start, you know, moderating how this technology's used without, I don't know, like, going full 1984 and having a process on every single computer monitoring what they're doing. Uh, so how do we, how do we deal with this landscape where you do have, you know, closed source and open source, like, various ways to access and build upon this technology?
[00:29:25] Helen Toner: Yeah, I mean, I think there are a lot of intermediate things between just total anarchy and full 1984. Um, there's things like, um, you know, Hugging Face, for example, is a very popular platform for open-source AI models. Hugging Face in the past has, has delisted models that are, you know, considered to be offensive or dangerous or, or whatever it might be.
And that actually does meaningfully reduce kind of the usage of those models, because Hugging Face's whole deal is to make them more accessible, easier to use, easier to find. You know, depending on the specific problem we're talking about, there are things that, for example, uh, you know, social media platforms [00:30:00] can do.
So if we're talking about, um, as you said, child pornography or, um, also, you know, political disinformation, things like that, maybe you can't control that at the point of creation. But if you have the, the Facebooks, the Instagrams, um, of the world, uh, you know, working on it, they, they already have methods in place for how to kind of detect that material, suppress it, report it.
And so there, you know, there, there are other mechanisms that you can use. And then of course, specifically on the kind of image and audio generation side, there are some really interesting initiatives underway, mostly being led by industry, around what gets called content provenance or content authentication, which is basically: how do you know where this piece of content came from?
How do you know if it's real? And that's a very rapidly evolving space and a lot of interesting stuff happening there. I think there's, there's a good amount of promise not for perfect solutions where we'll always know, is this real or is it fake? But for making it easier for individuals and platforms to recognize, okay, this is, this is fake.
It was AI generated by this particular model, or this is real, it [00:31:00] was taken on this kind of camera. And we have the cryptographic signature for that. I don't think we'll ever have perfect solutions. And again, I think, you know, societal adaptation is just gonna be a big part of the story. But I do think there's, there's pretty interesting technical and policy, um, options that, that can make a difference.
[00:31:16] Bilawal Sidhu: Definitely. And even if you can't completely control, you know, the generation of this material, there are ways to drastically cap the distribution of it. And, and so like, I, I think that reduces some of the harms there. Yeah. At the same time, labeling content that is synthetically generated, a bunch of platforms have started doing that.
That's exciting, because, like, I don't think the average consumer should be a deepfake detection expert. Right. But really, like, if there could be a technology solution to this, that feels a lot more exciting. Um, which brings me to the future. I'm kind of curious, in your mind, what's, like, the dystopian scenario and the utopian scenario in all of this?
Let's start with a dystopian one. What does a world look like with inadequate or bad regulations? Paint a picture for us.
[00:31:59] Helen Toner: So many [00:32:00] possibilities. Um, I mean, I think, I think there are worlds that are not that different from now where you just have automated systems doing a lot of things, uh, playing a lot of important roles in society, in some cases, doing them badly, and people not having the ability to, to go in and question those decisions.
There's obviously this whole discourse around existential risk from AI, et cetera, et cetera. Kamala Harris had a whole speech about, like, you know, if someone's, I forget the exact examples, but if someone loses access to Medicare because of an algorithmic issue, like, is that not existential for, you know, an elderly person? Um, you know, so, so there are already people who are being directly impacted by algorithmic systems and AI in really serious ways, even, you know, some of the reporting we've seen over the last couple months of how AI is being used in warfare, like, you know, videos of a drone chasing a Russian soldier around a tank and then shooting him.
Like, I, I, I don't think we're in a full dystopia, but there's, there's sort of plenty of, plenty of things we're worried about already. Something I, I think I worry about quite a bit, or that feels intuitively to me to be a particularly plausible way things could go, is sort of what I think of as the, [00:33:00] um, the WALL-E future.
I dunno if you remember that movie.
[00:33:03] Bilawal Sidhu: Oh, absolutely.
[00:33:04] Helen Toner: Um, with the little robot. And the piece that I'm talking about is not the, like, junk Earth and whatever. Yeah. The piece I'm talking about is the people in that movie: they just sit in their soft, roll-around wheelchairs all day and, you know, have content and, um, uh, content and food and whatever to keep them happy.
And I think what worries me about that is, I do think there's a really natural gradient to go towards what people want in the moment, and we'll, you know, we'll go, we'll choose in the moment, which is different from what they, you know, will really find fulfilling or what will build kind of a meaningful life.
And, and I think there's just really natural commercial incentives to build things that people sort of superficially want, but then you end up with this really kind of meaningless, shallow, superficial world, and potentially one where kind of most of the consequential decisions are being made by machines that have no concept of what it means to lead a meaningful [00:34:00] life. And, you know, because how would we program that into them? Because we have no, we, we struggle to kind of put our finger on it ourselves. So I think those kinds of futures, where, not where there's some, you know, dramatic big, uh, event, but just where we kind of gradually hand over more and more control of the future to computers that are more and more sophisticated, but that don't really have any concept of meaning, or beauty, or joy, or fulfillment, or, you know, flourishing, or, or whatever it might be. Um, I, I hope we don't go down those paths, but it, it definitely seems possible that we will.
[00:34:32] Bilawal Sidhu: Um, they can play to our hopes, wishes, anxieties, worries, all of that, just give us, like, the junk food all the time, whether that's, like, in terms of nutrition or in terms of just, like, audiovisual content, and that could certainly end badly.
Uh, let's talk about the opposite of that, the utopian scenario. What does a world look like where we've got this perfect balance of innovation and regulation and societies thriving?
[00:34:54] Helen Toner: I mean, I think a, a very basic place to start is can we solve some of the big problems in the world? And I do think that [00:35:00] AI could help with those.
So can we have a world, um, without climate change, a world with much more abundant energy that is much more, you know, cheaper, and therefore more people can have more access to it? Um, where, you know, we have better agriculture, so there's greater access to food? And beyond that, you know, I think, I think what I'm more interested in is, is setting, you know, our, our kids and our grandkids and our great-grandkids up to be, to be deciding for themselves what, what they want the future to look like from there, um, rather than having kind of some particular vision of, of where it should go. Um, but I, I, I, I absolutely think that AI has the potential to really contribute to solving some of the biggest problems that we kind of face as a civilization.
It's hard to say that sentence without sounding kind of grandiose and, you know, trite, but, um, but I think it's true.
[00:35:45] Bilawal Sidhu: So maybe to close things out, just, like, what can we do? You, you mentioned some examples of being aware of synthetically generated content. What can we as individuals do when we encounter, use, or even discuss AI?
Any recommendations? [00:36:00]
[00:36:00] Helen Toner: I think my biggest suggestion here is just not to be intimidated by the technology and not to be intimidated by technologists. Like, this is really a technology where we don't know what we're doing. The best experts in the world don't understand how it works. And so I think just, you know, if you find it interesting, be interested; if you think of fun ways to use it, use them.
Um, if you're worried about it, feel free to be worried. Like, you know, I think the main thing is just feeling like you have a right to your own take on what you want to happen with the technology. And, and no regulator, no, you know, CEO is ever going to have full visibility into all of the different ways that it's affecting, you know, millions and billions of people around the world.
And so kind of, I dunno, trusting your own experience and, and exploring for yourself and seeing what you think is, is I think the main, main suggestion I would have.
[00:36:47] Bilawal Sidhu: It was a pleasure having you on, Helen. Uh, thank you for coming on the show.
[00:36:50] Helen Toner: Thanks so much. This was fun.
[00:36:54] Bilawal Sidhu: So maybe I bought into the story that played out on the news and on X, but I went into that interview [00:37:00] expecting Helen Toner to be more of an AI policy maximalist, you know, the more laws the better, uh, which wasn't at all the person I found her to be. Helen sees a place for rules, a place for techno-optimism, and a place for society to just roll with adapting to the changes as they come, for balance. Policy doesn't have to mean being heavy-handed and hamstringing innovation.
It can just be a check against perverse economic incentives that are really not good for society. And I think you'll agree, but how do you get good rules? A lot of people in tech are gonna say, you don't know shit. They know the technology best, the pitfalls, not the lawmakers. And Helen talked about the average Washington staffer who isn't an expert, doesn't even have the time to become an expert, and yet it's on them to craft regulations that govern AI for the benefit of all of us.
Technologists have the expertise, but they've also got that profit motive. Their [00:38:00] interests aren't always gonna be the same as the rest of ours. You know, in tech you'll hear a lot of: regulation bad, don't engage with regulators. And I get the distrust. Sometimes regulators do not know what they're doing.
India recently put out an advisory saying every AI model deployed in India first had to be approved by regulators. Totally unrealistic. There was a huge backlash there, and they've since reversed that decision. But not engaging with government is only gonna give us more bad laws, so we gotta start talking, if only to avoid that WALL-E dystopia.
Okay. Before we sign off for today, I want to turn your attention back to the top of our episode. I told you we were gonna reach out to Sam Altman for comments. So a couple of hours ago, we shared a transcript of this recording with Sam and invited him to respond. We've just received a response from Bret Taylor, chair of the OpenAI board, and here's the statement in full.
Quote: We are [00:39:00] disappointed that Ms. Toner continues to revisit these issues. An independent committee of the board worked with the law firm WilmerHale to conduct an extensive review of the events of November. The review concluded that the prior board's decision was not based on concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners. Additionally, over 95% of employees, including senior leadership, asked for Sam's reinstatement as CEO and the resignation of the prior board. Our focus remains on moving forward and pursuing OpenAI's mission to ensure AGI benefits all of humanity.
End quote. We'll keep you posted if anything unfolds.
The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Elah Feder and Sarah McCrae. Our editors are Banban Cheng and Alejandra Salazar. Our show runner is Ivana [00:40:00] Tucker, and our associate producer is Ben Montoya. Our engineer is Aja Pilar Simpson.
Our technical director is Jacob Winik, and our executive producer is Eliza Smith. Our fact checkers are Julia Dickerson and Dan Kechi, and I'm your host, Bilawal Sidhu. See y'all in the next one.