Why we can't fix bias with more AI w/ Patrick Lin (Transcript)

The TED AI Show
Why we can't fix bias with more AI with Patrick Lin
June 11, 2024

[00:00:00] Bilawal Sidhu:

My morning routine goes something like this. I wake up, hit snooze a couple times, finally get outta bed and make myself a cup of tea. The tea is probably the most important part, and then I sit down with my tea and look at Twitter to see what everyone's talking about. And yes, I'm still calling it Twitter.

Now in Feb of this year, I got outta bed, brewed my tea and started scrolling. And that's when I saw that Google had launched an AI image generator inside their chatbot called Gemini. And Twitter was on fire. People were calling Gemini, all kinds of things. Racist, woke, biased, and everything in between. And the thing was, everyone seemed to have a completely different take on what was happening.

So, what actually happened? Well, Google launched their image generator inside Gemini, and folks started using it. One of the initial tweets, from the user @EndWokeness, showed screenshots of Gemini's responses to a bunch of different prompts, ranging from the Founding Fathers of America to Vikings to the Pope.

While you might expect that each of these prompts would yield pictures of mostly men, and mostly white men at that, Gemini would prove you wrong. The Founding Fathers, Vikings, and Popes Gemini generated were people of color, and in the instance of the Pope, some of them were women. Okay, so this is obviously a pretty significant problem of historical inaccuracy.

Other users shared screenshots of Gemini's responses to their own prompts, and some of them were pretty bad, actually. Among the most egregious: pictures of people of color in Nazi uniforms, definitely not the kind of diversity we're looking for. Then Google weighed in. Prabhakar Raghavan, a senior vice president at the company, wrote in a blog post that the Google team tried to get ahead of, quote, “Some of the traps we've seen in the past with image generation technology, such as creating violent or sexually explicit images.”

In other words, they were trying to correct for AI bias, which had become a major topic of conversation in AI ethics circles. But this resulted in pissing everyone off. To some on the left, it was advancing a kind of colorblind identity politics that glossed over the history of oppression.

To some on the right, it was overrepresenting minority groups and advancing some kind of conspiratorial, big tech woke agenda. In fact, Elon Musk, always the provocateur, called Gemini “both woke and racist.” Go figure. Everyone was shouting about the bias baked into the system. But in doing so, they ended up revealing their own biases.

In other words, the lens through which they see the outputs of these AI systems.

I'm Bilawal Sidhu, and this is The TED AI Show. And on this episode, we're tackling one of the thorniest issues out there, bias in AI.

So look, some of these images that Gemini generated are really bad. I'm not excusing those, but I get what they were trying to do. Essentially, they ended up overcorrecting for some of the major misses that AI image generators like DALL-E, Midjourney, and Stable Diffusion have stumbled through in the past.

Like in late 2023, a group of universities put out a joint study about their findings on text-to-image generators. They found that when they prompted for pictures of surgeons and surgical trainees, they got some troubling results. The vast majority of images were white men in surgical gear, which isn't actually reflective of the demographic breakdowns of surgeons today.

Another investigation by The Washington Post on bias in AI-generated images showed similarly troubling trends. The prompt “Muslim people” revealed men in turbans and other head coverings. The prompt “attractive people” yielded pictures of only young, light-skinned folks, and the prompt “productive person” generated pictures of men sitting at desks, most of them white.

Now, there's been so much talk about bias in AI, from flaws in the outputs to flaws in the training data. In cases like the Gemini scandal, we risk historical inaccuracy and miseducation, undeniably real problems. And in other cases, like the study and the investigation I just mentioned, bias in AI can keep perpetuating harmful stereotypes that don't actually reflect reality.

So if everyone agrees bias in AI is a problem, then why isn't anyone fixing it? To help us unpack this, I sat down with Patrick Lin, a professor of philosophy at California Polytechnic State University and the director of the university's Ethics and Emerging Sciences Group, which tackles topics like AI and predictive policing, the future of autonomous vehicles, and cybersecurity in space.

He's been examining the ethics of technology for a long time and is thinking a lot about bias in AI, where it comes from and how it impacts us on a daily basis.

So when we say the ethics of AI, we're talking about a huge, almost never ending topic, right? Can you explain for our audience why AI ethics is such a vast topic?


[00:05:10] Patrick Lin:

So when we're talking about human ethics, you know, uh, your ethics and my ethics, we could do that because we like to think we have free will and we can make choices.

And some of the choices we make are, um, ethical or unethical or neither. But when you talk about machines, some people rightly point out, “Hey, they're not moral agents. They don't know what they're doing. So, how could they be held to any kind of ethical standard?” So quickly, uh, right off the bat, I would say that's a wrong interpretation, a wrong understanding of technology ethics or AI ethics.

Ethics could be about, you know, not so much the technology as an agent, but about how the technology is designed. It could be about the ethics of the technology developers. It could also be the ethics of the technology users. It's about a whole ecosystem of developers, users, stakeholders, unintentional stakeholders, environmental interests, um, you know, and so on.

Now, when it comes to AI, I mean, you know, at a very high level, you could think of AI as, you know, an automation of a decision-making process, right? So AI decides, well, what is this image I'm looking at? What is this text I'm seeing? And it makes a decision on predicting the next words, right? So it's a decision engine of sorts.

And because it's a decision engine, it could be used to replace decision makers. If AI can be, uh, integrated into society in a lot of these decision-making roles, then that already, you know, implicates countless domains, right? AI in agriculture and chemistry and education and warfare. Uh, it's hard to imagine a single domain where AI cannot be applied. This means that, you know, you're really looking at the entire universe of ethical issues, potentially, for AI ethics.


[00:07:00] Bilawal Sidhu:

That's a great point, especially as AI permeates all these verticals and domains, as you say, the surface area for this bias to manifest itself is also very, very broad. Right? And so today you and I are gonna talk about bias in AI and there's a bunch of interesting examples in there, just a few that we've come across.

One is racial bias in facial recognition, right? Some facial recognition systems have been shown to have higher error rates for people with darker skin tones, potentially leading to false identifications, right? Amazon scrapped an AI recruiting tool after discovering that it was penalizing resumes that contained the word “women's” and downgrading graduates of all-women's colleges, reflecting gender bias in the training data.

What are some concrete examples of bias that you've encountered in the AI space?


[00:07:50] Patrick Lin:

If we start out with the, you know, technology du jour, which is LLMs, you know, uh, like Chat GPT and, you know, uh, AI writers or chatbots, um, we could already see bias in their outputs.

I'm thinking about Google Gemini's recent, uh, debacle, where it wanted to diversify the ethnicity of Nazis and the Founding Fathers of the United States, right, who were all white. Um, but, you know, if you think that, wait a minute, anti-bias, anti-discrimination means you gotta mix it up with the ethnicities and the genders, then that's how you get some false negatives.

But I think the big-stakes examples are still related to AI bias in hiring, which you mentioned, but also AI bias in bank lending, in criminal sentencing. And I would even include things like AI policing. So, these are potentially life and death decisions. Even a bank loan decision could be a life and death decision.

If you're denied a loan for a mortgage, then that could mean you lose your house, you could become homeless. And, you know, homeless, unhoused folks tend to have a shorter lifespan than, um, other folks, right? So these are big, serious decisions, and even if the AI doesn't look specifically for gender, ethnicity, age, it can still deduce a lot of this information from, uh, other data.

So, for instance, um, a banking AI, you know, making a loan decision could be programmed or trained to ignore ethnicity, right? Ignore race. Um, but it could still discriminate in its outputs, in its impact. So, for instance, given its training data, it might say, oh, you know what, uh, borrowers from a certain zip code have a high rate of default.

So, we're gonna just not give loans to people in that zip code. But guess what? It turns out that zip code is full of minority neighborhoods. Alright? So, it is a proxy for race or ethnicity, which is discriminatory. For almost any given AI application, you could probably come up with some kind of, you know, weird case of bias.
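
A minimal, hypothetical sketch of the kind of audit that catches the zip-code effect Patrick describes: a toy lending model is trained without ever seeing the protected attribute, yet approval rates still split along group lines. All variable names and numbers here are invented for illustration, not from the episode.

```python
# Hypothetical sketch of proxy bias: the lending model never sees the
# protected attribute, but zip code correlates with it, so outcomes
# still split along group lines. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                                # protected attribute (never a model input)
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)   # proxy: matches group 80% of the time
income = rng.normal(50 + 10 * (1 - group), 15, n)            # historical inequity baked into the data

# Past approvals reflect that inequity; the model learns from them.
approved = (income + 5 * (1 - zip_code) + rng.normal(0, 10, n)) > 55

X = np.column_stack([zip_code, income])                      # deliberately excludes `group`
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# Disparate-impact check: compare predicted approval rates by group anyway.
rates = {g: pred[group == g].mean() for g in (0, 1)}
print("approval rate by group:", rates)
print("four-fifths-rule ratio:", round(min(rates.values()) / max(rates.values()), 2))
```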

I mean, take healthcare AI. If a medical AI is trained primarily on, say, white patients, then it might misdiagnose someone who is, you know, of African descent or Asian descent or Jewish descent. Uh, and by the way, this isn't just a, you know, a white-versus-other thing. I mean, if you look at facial recognition projects in China, for instance, where they train their AI mainly on Chinese faces, they have a hard time recognizing and differentiating white faces, right?

Just because white faces are underrepresented in their dataset. So it's not that AI is inherently racist, uh, in one direction. It depends on the training data.
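
The representation point can be sketched the same way. This is a hypothetical toy example, not a real face-recognition system: a model trained on a 95/5 split simply fits the majority group's pattern, and the gap shows up as unequal error rates.

```python
# Hypothetical sketch of the representation problem: a model trained on a
# 95/5 split fits the majority group's pattern and is less accurate on the
# under-represented group. Synthetic stand-in for a real recognition task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    """Toy data where the true decision boundary differs by group."""
    x = rng.normal(0, 1, (n, 1))
    y = (x[:, 0] > shift).astype(int)
    return x, y

x_a, y_a = make_group(9_500, shift=0.0)   # well-represented group
x_b, y_b = make_group(500, shift=1.0)     # under-represented group
model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.concatenate([y_a, y_b]))

# A balanced test set reveals the accuracy gap the skewed training set created.
for name, shift in [("well-represented group", 0.0), ("under-represented group", 1.0)]:
    x_test, y_test = make_group(2_000, shift)
    print(f"{name}: accuracy = {(model.predict(x_test) == y_test).mean():.2f}")
```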


[00:10:56] Bilawal Sidhu:

Absolutely. It brings up the concept of implicit bias also, and how it might surface in AI in ways that we don't expect. One example that I've been really fascinated by recently is this: if you type in just the word, or token, “thief” into any text-to-image model.

You're gonna get an image that resembles a character from the video game Assassin's Creed, or the 2014 video game Thief, rather than, you know, the stereotypical depiction of a thief wearing that mask with a money bag slung over the shoulder, or worse, a racist caricature. You get this person in a cape, uh, and that's what the model thinks a thief is, right?

And so this seems to reflect biases present in the training data, which in this case overrepresents video game imagery. How can we account for and mitigate these sorts of implicit biases in AI systems, you know, so we ensure that we're not embedding or reinforcing problematic stereotypes from, let's say, media or pop culture?


[00:11:55] Patrick Lin:

Implicit biases by definition are hidden. They're under the surface. I mean, you've lived in it for so long, you don't even realize it's there, you know? I mean, you must have heard the joke where, you know, there's two fish, and one fish asks the other fish, “Hey, how's the water?” And the other fish says, “What's water?”

Right? I mean, it's just so pervasive that you're not even aware it exists. I mean, that is part of the problem, recognizing bias when you see it. And, um, AI bias is a popular problem because people understand bias. They can imagine that, you know, they could be on the wrong end of an AI decision someday.

You know, no matter who you are, no matter how privileged you are, um, that could be you. So, that's why AI bias, out of all the various issues in AI ethics, you know, might be the most, uh, well-known one, the most widespread one. Um, but the big trick is how do you get rid of AI bias? Bias is such a tricky problem because of, you know, I think, how humans are just simply hardwired and constructed. Um, I mean, you know, think about our brains, right?

We're not just flawed machines. We are stereotyping machines. That's what we're built to do. You know, we're built for one-shot learning. We're built to learn very quickly.

Um, and I mean, early on in, uh, you know, humanity's history, this was critical for survival. So imagine, you know, imagine you're the first caveman who's ever come across a carrot. You think, “Ooh, what is this weird orange thing? I wonder if it's poisonous or not?” You nibble it, you eat it, you survive.

Right? It's natural to make a judgment that anything that looks like this is also gonna be edible, right? So that's a form of stereotype. It can also go too far, especially when you're talking about people. Individuals have so much, uh, variation from one person to the next, even inside the same groups, you know, whether you're talking about ethnic groups, religious groups, or, um, whatnot.

Um, but also, another tricky thing about bias and stereotypes is that, you know, it seems that there's some kernel of truth in some of the stereotypes, right? Otherwise they wouldn't be stereotypes. But, you know, to make such a broad judgment and to start making decisions based on stereotypes, that, uh, seems to cross the line.


[00:14:27] Bilawal Sidhu:

I think that's a really good point, right? Like, as you say, bias has existed since time immemorial. It's almost intrinsic to our nature. It's perhaps a simplistic way of, you know, looking at patterns and extrapolating based on that. And so it's gonna be really hard to solve this problem in the AI space, right?

And we can't use more AI to solve it because AI doesn't know right from wrong. Like, what even is right? What is the truth, right? Since it can't detect what is and isn't biased or racist or misogynist, um, you know, the logical fix is to train the AI on less biased data. But where do we find all this less biased data?

Because the data is generated by humans that bring their own biases to the party.


[00:15:11] Patrick Lin:

Mm-hmm.


[00:14:27] Bilawal Sidhu:

So, this is no small task. Right? And I think you've set up the problem appropriately. So, to your mind, how can we even start to fix this bias problem in artificial intelligence?


[00:15:12] Patrick Lin:

We don't understand bias well enough.

We do have an intuitive, superficial understanding of bias. You know, you might think of bias or discrimination as just treating people differently because they're different, because of their gender or ethnicity or religion. And these are generally legally protected categories. That's the usual understanding of what bias is.

But if that's all you have, you're gonna get it wrong. We need a deeper, more nuanced understanding of bias if we're gonna truly, uh, tackle the problem.


[00:15:59] Bilawal Sidhu:

What are problems that arise when our definition of bias in AI isn't sufficiently nuanced?


[00:16:07] Patrick Lin:

One example would be this: if you think it's inappropriate, if you think it's discriminatory and biased to treat people differently because of age or gender, um, you know, if that's all you think bias is, it's gonna give you a lot of false positives, right?

So here's an example that shows it might be okay to discriminate on age and on gender and on ethnicity at the same time. Right? So, imagine I'm a filmmaker and I'm interviewing actors, uh, for the title role of Martin Luther King, Jr. Right? I'm gonna reject every single Asian teenage girl who auditions for that role, and I'm rejecting them for a job precisely on the basis of their age, their gender, and their ethnicity.

But that seems okay, at least if I'm trying to make a historically accurate biopic, right? It seems legitimate, um, that I could filter out applicants based on their profile if they don't match the age, ethnicity, and gender of what I'm aiming for.


[00:17:13] Bilawal Sidhu:

You can't simplistically say, “Thou shalt not discriminate based on protected categories,” and hope for the best, right?

So clearly there are a lot of problems. What role do you think subjects like ethics, philosophy, social sciences, play in the training of these AI researchers and developers that are building these next generation systems?


[00:17:35] Patrick Lin:

Oh, I think it's huge. I have great respect for scientists and technologists. I wanted to be one of them when I was growing up, before I accidentally found philosophy.

They fundamentally are curious. They wanna know how things work, but more than that, they wanna change the world for the better. Here's the problem: uh, you also gotta understand the world in order to make those kinds of interventions. Technology, you know, doesn't really do a great job of solving social, human problems.

Only humans can really solve social problems. Technology and AI, they're tools. They could do some things. They could, uh, you know, alleviate some of the symptoms of these problems, but they have a hard time getting at the root of the problem. Take the human, very human problem of drunk driving.

Right now, it's hard to change culture. It's even harder to change drinking culture in America. But one thing we can do is make cars safer, right? So, we can make cars more survivable if you get in an accident. They're saving more lives, but are they doing anything about the drunk driving problem? You know, are they making any progress in rolling back drinking culture?

I would say no, right? Uh, and in fact it might be worse. They could be encouraging more drinking and more drunk driving. If you know your car is safer and you're more likely to get home, uh, in one piece, and you're less likely to kill random pedestrians or other drivers, then that's an incentive to drink more because, you know, things will be okay.


[00:19:15] Bilawal Sidhu:

So, we've talked about the problems AI creates. I wanna switch gears a bit and talk about solutions.


[00:19:20] Patrick Lin:

Mm-hmm.


[00:19:21] Bilawal Sidhu:

I'm wondering if you see folks or companies or organizations out there doing anything to fix the problem of bias in AI?


[00:19:29] Patrick Lin:

Well, uh, I mean, I do see a lot of organizations say they're working on things to fix AI.

It's not entirely clear how they're doing it. Some of this is proprietary information. Still, you know, I would be skeptical that their solutions are gonna do a whole lot in solving the problem. The one move they're thinking of is to throw more AI at it. And this is exactly the problem of, if all you have is a hammer, everything looks like a nail, right?

If you're an AI, you know, programmer or developer, of course you're gonna think AI is gonna be able to solve that problem, and that's where you're gonna try. But I think with bias, um, it's a different kind of challenge, for one reason: bias is a social construct. So, the challenge facing developers in making an AI that can detect bias is the same kind of challenge with developers who think they can make an AI that can detect pornography or can detect an unethical situation, right?

Pornography, ethics, they're social constructs too. They're very squishy. They're very hard to define. They resist definition. The US Supreme Court famously concluded, “We might not be able to define pornography, but we know it when we see it.”

Right? And I think humans are like that with bias too. Like, we can recognize bias when we see it most of the time, right? In my Martin Luther King example, you know, the film example, you could recognize that I'm not being malicious, I'm not doing anything inappropriate.

Um, a machine might not be able to. So machines are not great with ambiguity. There's no law of nature that says technology will solve all your problems, right? I mean, it's made life easier in a lot of ways. It's made us more secure in a lot of ways, but it still hasn't solved hunger, um, racism.

You know, think about all the societal ills. If it could, why aren't we working on that? I mean, all we have now are apps that make life more convenient. Here's an app that could find me a ride. Here's an app that could find me a place to crash tonight. Here's an app where someone will do a chore for five bucks.

We're putting AI and all our best minds onto these projects to do things that someone else's mom will do for you. They're not like these great world-shaking applications. So, back to bias, I think the temptation here is just to throw more data at it. A couple ways we can go here.

Yes, you could curate your data sets to ensure that AI is being trained on examples where there's no bias. But when you're talking about millions and billions of, you know, examples in a large training set, that's not a very feasible solution. It's definitely not scalable. So, if you want a scalable solution, it seems that you would need to create an AI, train it, program it, to identify bias when it sees it. But to do that, it needs to be crystal clear on what bias is, what discrimination is.

If you try to look up the definition of bias, you're not gonna find a really good one. They say things like, discrimination is the unfair treatment of people. Well, now you have to define what fairness means, or unfairness means, right? That work hasn't been done, and if developers don't understand the nature of bias, then they're gonna have a hard time fixing the problem.
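
A small, hypothetical worked example of why "just define fairness first" is the hard part: with invented counts, the very same set of decisions satisfies one common formal definition of fairness and violates another. Nothing here is from the episode; it only illustrates the definitional gap Patrick is pointing at.

```python
# Hypothetical worked example with invented counts: the same decisions can
# satisfy one formal fairness definition and violate another.
# For each group: qa = qualified & approved, qd = qualified & denied,
#                 ua = unqualified & approved, ud = unqualified & denied.
groups = {
    "group A": dict(qa=40, qd=10, ua=10, ud=40),
    "group B": dict(qa=24, qd=6,  ua=6,  ud=64),
}

for name, c in groups.items():
    total = sum(c.values())
    approval_rate = (c["qa"] + c["ua"]) / total   # what demographic parity compares
    tpr = c["qa"] / (c["qa"] + c["qd"])           # what equal opportunity compares
    print(f"{name}: approval rate = {approval_rate:.2f}, TPR among qualified = {tpr:.2f}")

# Both groups have a TPR of 0.80 (satisfying "equal opportunity"),
# but approval rates are 0.50 vs 0.30 (violating "demographic parity").
# Deciding which definition should govern is exactly the unfinished work.
```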

You know, I'm trying to imagine an AI that can understand all the nuances of every situation here, you know, of whether this factor is relevant or not relevant, and I have a hard time imagining, um, that that could be done.


[00:23:25] Bilawal Sidhu:

It sounds like what you're saying is we're not gonna have one model to rule them all.

Right? It seems like it's a lot about context. I wanna spend a little time on that. There's a lot of discussion about national AI models that are tailored to cultural contexts that vary from region to region, right? The concept being that bias in AI isn't a one-size-fits-all problem, and what constitutes bias can vary significantly, you know, across different cultures, regions, and nations.

What's considered acceptable or unacceptable, offensive or benign can differ based on local norms, values.


[00:24:00] Patrick Lin:

Mm-hmm.


[00:24:00] Bilawal Sidhu:

Histories and sensitivities. So do you think we should have a plurality of models that are tuned to various cultural contexts, and could that be one near-term solution to address this rather thorny issue of bias?


[00:24:16] Patrick Lin:

As a general approach, I think it makes sense, because there's no one set of values to rule them all, right? There's no one ethical theory to rule them all. And there are variations in ethics, uh, from culture to culture, and many of them are reasonable variations. Right? I mean, others are not reasonable.

Others are just plain offensive, right? So if a culture doesn't, uh, want women and children to be educated and thinks it's okay to throw acid on them to prevent them from going to school, that's bad, right? I don't see any reason to respect those kinds of values, but other differences could be reasonable.

So, for instance, uh, you know, in Asian cultures, the elderly tend to be more valued than kids, right? So, for instance, let's imagine an AI app that does triage in a hospital, right?


[00:25:12] Bilawal Sidhu:

Mm-hmm.


[00:25:12] Patrick Lin:

Uh, that does hospital admissions, and these hospitals in these big cities are generally, uh, overworked. So, it's gotta figure out a priority list for, um, you know, for the patients to be seen. In one culture, let's say in Asia, it might give bonus points if you're older. If you're elderly, it might move you up the priority list, right? And that seems okay, neither here nor there. In other cultures, they might have the opposite value.

They might, you know, treat their children as kings and queens and put a premium on them, in which case a hospital AI or triage AI in that culture would move younger people up on the priority list. I think we would want to respect diversity and these variations, especially since there's no one ethics, no one culture to rule them all.

You know, I certainly don't think we have it right here in America. Same with just about every other culture. But if I were an AI company looking to roll out products worldwide, now I'm thinking, wait, I gotta localize my products, you know, my AI and these products. That means I gotta train my AI on data that comes from those geographies and cultures for every market I wanna play in.

Right? And that, um, sounds like a lot of work. I mean, it could be a deal breaker. You know, the diversity model makes sense in theory, but in practice, how do you implement that? I don't know.


[00:26:51] Bilawal Sidhu:

So, I have to say, do you think we're gonna be stuck in the same whack-a-mole loop where the problem grows and multiplies exponentially, especially as all these major labs chase this current paradigm we're on, which is, let's throw more data and more compute at it, do even bigger training runs?

Or is there a chance that we could get this under control?


[00:27:12] Patrick Lin:

I do see potential fixes, but they're hard fixes and people don't want to hear about 'em because they're about human labor. Things that human beings need to do. The work we gotta put in to solve this problem.


[00:27:23] Bilawal Sidhu:

What should individuals do to address bias in AI, right?


[00:27:28] Patrick Lin:

Look, this is a hard problem because it's a social problem. It's a human problem, and I think it would be a mistake to put all your eggs in one basket and hope that technology can solve this. Unfortunately, I think it's gonna take a lot of hard societal-level work. But it's worth trying out.

Even if technology can only paper over the symptoms, you know, that might be okay for now, right? Just as safer cars aren't fixing the problem of drunk driving, but they're saving lives, you know, that might be enough. So, um, you know, I would say good luck to the developers, but I think, you know, if you really are serious about tackling bias, you gotta understand what it is.


[00:28:11] Bilawal Sidhu:

Thank you so much, Patrick. This is clearly a complex problem, and I really appreciate you taking the time to break it down and explain to us why there isn't a very simple, one-size-fits-all solution. So, thank you for your time. We really appreciate it.


[00:28:27] Patrick Lin:

You're welcome. Thanks for having me on.


[00:28:31] Bilawal Sidhu:

So, Patrick said something that I think is really important for us to recognize. In order to solve the bias in AI, we have to solve the bias in ourselves, which is a pretty tall order, right? Especially when, as he says, “Bias is pretty much implicit to human nature.” And I tend to agree with Patrick that throwing more AI at already flawed AI systems isn't necessarily gonna solve the problem for us.

Because here's the thing, AI is a reflection of who we are. After all, it's trained on us. Our art, our memes, movies, jokes, history, music, math, science, philosophy. It's complicated because we are complicated. It's flawed because we are flawed. It's biased because we are biased, but I also don't want to throw up my hands and say, we can't fix this, or there's nothing we can do, because I think there are some things that we are actually in control of when it comes to bias in AI, particularly our responses to this nascent technology.

Now much has been said about large companies creating transparency around their training data, and that's a welcome step. But even training data transparency presents its own challenges. For starters, these data sets are enormous. We're talking billions of images and trillions of words. It will be a massive effort to comb through it all, find all the flaws, and make the necessary changes.

So it's a big, big problem with elusive solutions, but I wanna offer up a couple of solutions that I think could, at the very least, help. The first is something I mentioned in my interview with Patrick: the idea that we could create more nuance in our AI models by keeping them regional. Like, generative AI systems in Singapore probably should not behave identically to generative AI systems in California.

In fact, the Singaporean government has called for AI sovereignty, addressing the fact that, quote, “Singapore and the region's local and regional cultures, values and norms differ from those of Western countries where most large language models originate,” end quote. I believe that AI sovereignty can help preserve our diversity, whether that's state to state or country to country.

The second might seem at first glance, to be a little too obvious. You might not like it at first, but hear me out. What if we gave ourselves a bit more agency in this issue and committed to getting better at using tools like Chat GPT, Midjourney and Gemini? Let me give you an example. I wanna share a process I go through in my mind every time I prompt an image generator or a chat bot or what have you.

First, I remember that I'm using a flawed tool. Just because it's AI and it's supposed to be really smart doesn't mean it's always gonna gimme the most accurate results. Second, once I get a response to my prompt, I scrutinize it in the same way I scrutinize the news I read. I'm skeptical about the source of the results, knowing that these AI systems are trained on imperfect data.

Third, I take a beat before I'm even tempted to jump on Twitter and post a spicy screenshot of this messed up response to my prompt. I pause. And I think about what I need to do to get a better response. And I revise it accordingly. Maybe I'll respond to Chat GPT with something like, “Hey, not all nurses are women. Can you show me some images of nurses that are more reflective of the actual demographics of the nursing field?”

Because if these generative AI tools are only as good as the data we feed them, they're also only as good as the prompts we give them. And yes, it is up to these big tech companies to make better products and give us more and more transparency about how they're trained.

They should also give us more transparency into how and when they're trying to solve for bias within their systems. For example, if Google had told us they were trying to address some of the bias in Gemini, it may not have solved the problems in the images generated by their system, but at least it would've helped, perhaps, to know why the AI system was generating those images in the first place.

But also we need to be educated consumers and users of these tools and know that the better we are at identifying their flaws, the better we will be at prompting them for better responses. That's why developing AI literacy is so crucial. We need to understand how these systems work, how they learn, and how they can go wrong, sometimes horribly wrong, and we need to stop taking their outputs as gospel or a factual reflection of reality.

It's as crucial as being literate about our own biases. If what an AI system generates is not consistent with our values, we can absolutely take control and shape it for the better.

The TED AI Show is a part of The TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Elah Feder and Sarah McCrea. Our editors are Banban Cheng and Alejandra Salazar. Our show runner is Ivana Tucker, and our associate producer is Ben Montoya. Our engineer is Aja Pilar Simpson.

Our technical director is Jacob Winik, and our executive producer is Eliza Smith. Our fact checker is Julia Dickerson, and I'm your host, Bilawal Sidhu. See y'all in the next one.