AI therapy is here. What does it mean for you? w/ Dr. Alison Darcy and Brian Chandler (Transcript)

The TED AI Show
AI therapy is here. What does it mean for you? w/ Dr. Alison Darcy and Brian Chandler
June 25, 2024

[00:00:00] Bilawal Sidhu: 

Like many of us, Brian Chandler was struggling with his mental health during the 2020 lockdown. 

[00:00:06] Brian Chandler: 

Yeah, so the beginning of 2020, I couldn't go anywhere and I knew, you know, in the past that I had some anxiety, but I was always able to find things that I could kind of use to distract myself, you know, get out, do things.

But when I was stuck at home, it was like I couldn't run away from that anymore. And I was forced like so many other people to confront that anxiety. And from there I was just seeking something that could help me. You know, I was trying to read books on mental health. Um, I was trying meditation, I was trying all kinds of different apps, and Woebot just was in the mix of those apps I downloaded.

[00:00:49] Bilawal Sidhu: 

Woebot is a therapy chatbot: users type in how they're feeling, the chatbot deciphers what kind of problem they're talking about, and then offers a prescripted response. So if Woebot asked Brian how he was feeling and Brian said something like frustrated or annoyed, Woebot would chat back, asking Brian to rephrase his feelings in a more concrete way, sort of like the way a therapist might, to promote deeper self-reflection.

[00:01:14] Brian Chandler: 

And I remember, you know, one afternoon and I opened up the app and I just went through the prompts and afterward I found myself feeling a little bit better. 

[00:01:27] Bilawal Sidhu: 

There's something about a chatbot therapist that can make people a little squeamish. You know, it's the idea that the messiness and intricacies of the human mind can be helped by something entirely unhuman.

The power of therapy chatbots is nothing new, as most of the coverage of Woebot has reminded us. The very first chatbot ever was designed to mimic a psychotherapist in the 1960s. MIT computer scientist Joseph Weizenbaum created a bot called ELIZA, largely to make a point. ELIZA was supposed to emulate a kind of talk therapy where the practitioner would often repeat their patient's responses back to them.


If the patient mentioned their father, the bot said, “How do you feel about your father?” When a patient said, “My boyfriend made me come here,” the bot said, “Your boyfriend made you come here.” And if the bot didn't know what to say next, it would just say, “Tell me more.” Weizenbaum's whole point with ELIZA was to show just how bad robots were at understanding the complexity of human emotion.
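For a sense of how little machinery that took, here is a minimal, hypothetical sketch in the spirit of ELIZA's pattern matching, not Weizenbaum's original program: a handful of keyword rules reflect the user's own words back as questions, and an unconditional “Tell me more” catches everything else.

```python
import re

# A few illustrative ELIZA-style rules: a regex that matches the user's input
# and a template that reflects part of it back as a question or statement.
RULES = [
    (re.compile(r"\bmy (father|mother)\b", re.I),
     "How do you feel about your {0}?"),
    (re.compile(r"\bmy (.+?) made me (.+)", re.I),
     "Your {0} made you {1}."),
    (re.compile(r"\bi feel (.+)", re.I),
     "Why do you feel {0}?"),
]

# Simple pronoun reflection so "me" in the input becomes "you" in the reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}


def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())


def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            groups = [reflect(g).rstrip(".!?") for g in match.groups()]
            return template.format(*groups)
    # The canonical ELIZA fallback when nothing matches.
    return "Tell me more."


if __name__ == "__main__":
    print(respond("My boyfriend made me come here."))  # Your boyfriend made you come here.
    print(respond("I feel anxious today."))            # Why do you feel anxious today?
```

Run on the lines above, it reproduces those exchanges almost verbatim, which is the point: the listening is mechanical, not empathic.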


But his experiment totally backfired. Turns out the people who got access to ELIZA were so entranced by the bot's active listening skills that they wanted to keep talking to it. In fact, Weizenbaum's secretary asked him to leave the room so she could be alone with ELIZA. Could it be that humans are just so bad at listening to each other that we're willing to convince ourselves that robots really can understand what we're saying?


Or maybe human emotion really can be broken down into repeatable patterns, data that these chatbots can respond to effectively. Regardless, now that mental health AI is back on the rise, we have to ask the question, are we looking at a future where AI systems replace human therapists altogether?


I am Bilawal Sidhu, and this is The TED AI Show where we figure out how to live and thrive in a world where AI is changing everything.


In this episode, we're digging into therapy chatbots. There are many on the market based on different therapeutic models and with different capabilities. But today we're gonna focus on Woebot, which we should note now requires a unique access code from your healthcare provider, or in some cases from your employer.


It's far from the only therapy chatbot, though. And as these bots become more sophisticated and accessible, it seems more and more likely that they could disrupt the field of mental health care. Later in the episode, we're gonna circle back to Brian Chandler, the Woebot user, about his experience with the bot and why he finds it effective even without additional therapy.

But first, I'm speaking with Dr. Alison Darcy, founder and president of Woebot Health. When Alison was working in a Stanford health innovations lab, she and her colleagues tested a whole bunch of ways to make mental healthcare more accessible and engaging. They tried to gamify it using immersive video experiences and even explored face-to-face models.


But it turned out that the most effective option seemed to look a lot more like ELIZA. Alison, walk us through: what is Woebot, and how did it come to be?


[00:04:44] Alison Darcy: 

Okay. Woebot is an emotional support ally. It's basically just a chatbot that you can talk to during the day, um, that helps you sort of navigate the ups and downs.


So it's explicitly not therapy, um, very, very different, but it is nonetheless based on constructs borrowed from the best approach to mental health that we have today. So how did it come to be? I guess I was, um, a sort of a clinical research psychologist, and at the same time I think I'd had this brief moment in my early twenties of learning to code and sort of being in that dev world.


I mean, a very brief moment. Uh, and there's something called the research-practice gap, which is really a sort of intervention science problem, which means whatever we do in the lab doesn't actually get translated well in the community. And so you have all of these people, and actually a growing number of people, with problems.

And, uh, fewer and fewer resources available, and those that are available are just hard to access and expensive and stigmatized. And actually, you know, in learning about really great cognitive behavioral therapy, it was very clear, like, why are we sort of keeping this in the clinics? Like, should we not be teaching this stuff in schools?


Should this not be preventative? Could we not do it in a way that's scalable before it becomes a very clinical issue? And, um, so we set out to kind of try and look at ways that we might go about that. Like, how can you develop that habit? It was like, how can we make good thinking hygiene as engaging as it possibly can be, so that people will want to interact every day?

[00:06:18] Bilawal Sidhu: 

You said it's based on a therapeutic model of CBT, but not explicitly therapy. Um. What is the user experience of using Woebot today and how is it different from a therapist? 

[00:06:29] Alison Darcy: 

That's a great question, and one we get, I think, not enough actually. In the case of mental health, you know, you almost always hear, you know, like, if you're struggling, reach out to someone.


But the experience, the lived experience of being in a difficult moment, it actually is the hardest moment to reach out to somebody else. And so, you know, really what we're doing is meaningfully designing for that moment. How do you make that as simple as it can be? And it turns out not being a human is so important to that equation.


As was demonstrated by a different research group in Southern California, people are more likely to disclose to an AI than to a human. And while that might sound dystopian to some folks, it's not that they're choosing the AI over a human. Really the choice is how can I do something right now to help myself?


Whereas, um, the architecture of therapy is just obviously completely different, and it is based on fundamentally a relationship, right? The human to human relationship. That's not what this is. We have certain relationship dynamics, but that's really about being able to facilitate the encounter when it occurs, but in much, much smaller, simpler nuggets as you live your life.

In fact, interestingly, our data shows that about 80% of conversations are happening between 5:00 PM and 8:00 AM the next morning. So absolutely when there aren't other folks around, other professional folks around. Um, and so that's actually why it works. This is about a sort of a toolkit that folks can use momentarily when they feel like, okay, I'm in this rotten place.


They can just reach out to this thing and be like, “Hey, I'm not doing great.” And then Woebot will say, “Okay, do you want help right now with this thing?” And then, okay, if you accept that, it's step by step talking somebody through how to use their own resources to maybe feel a little bit better and then just get back to life, right?

[00:08:31] Bilawal Sidhu: 

You know, I really like this notion of sort of meeting the person where they are, in that moment you're perhaps most vulnerable, where it's hardest to reach out to people. And while it's fresh in your mind, you're living that experience. It's playing out in real time. The product person in me is curious: how do you measure success, and what are those success metrics for the user experience right now that you do optimize for, if it's not about engagement?

[00:08:56] Alison Darcy: 

It's about feeling better. That's it. Ultimately, the ground truth to us is like, do you feel better now or not? And then it's a sort of, well, how much better do you feel? And if you feel better, oh, this is great, right? Let's kind of build on that, and if you don't, okay, let's troubleshoot that.

What went wrong? It's a toolset, you know? Um, and I think one of the beautiful things about a robot, and an AI in general, in this role of kind of guide based on CBT as an approach, is that the person is doing the work, right? It's not like I'm bringing some extra wisdom that you don't have about yourself, right?

Like I'm reading your palms, almost. That's not what it is. It's like, this is your skillset. I'm just gonna step you through it. And so they get that experience of having stepped through it and then saying, “Oh wow, yeah, I actually do have the answers. I just need to be asked the right questions to get there.”

And I think that's tremendously powerful, because I think some of the dynamics that we can sometimes see in maybe more clinical settings is a sort of a diffusion of power, to some extent.

[00:10:05] Bilawal Sidhu: 

Interesting. So there's like a different power dynamic. Um, it's almost like, because it's a bot, you're constantly remembering that it's up to you to take the steps to improve how you feel. Like you're in control.

You have the agency. So besides the unconstrained access to this resource, what can a chatbot do that a human therapist cannot do? 

[00:10:27] Alison Darcy: 

The challenge of that kind of line of questioning is that it's almost set up like a replacement. And I think, you know, it's clearly not a replacement, but there are things that an AI can be great at that a human can't, and availability is definitely one of those things.

And perfect memory is another one of those things. Never getting tired, never retiring, never having a bad day, never being hungover, right? Those are all good things that AI can bring to the table. To flip the question, like, what can a human do that an AI can't?

Human connection, you know? And I think that is so clear. AIs can never be human, and they shouldn't pretend to be, 'cause they're best when they're not pretending to be. And I think if AIs can actually just lean into the fact that they're AIs, it's a lot less complicated for our heads to get around.

[00:11:17] Bilawal Sidhu: 

So, you know, it sounds like you're clearly making this design decision, right?

How is that design decision being reflected in the product experience for Woebot?

[00:11:26] Alison Darcy: 

I think we made this design decision very early on that Woebot should be very clear that it's a robot. Like, “I'm an AI, I am an AI.” And we really leaned into it, I think more than most people. Even Woebot as a name and as a visual is there to remind people: this is an AI, this is not a human.

Um, but in the experience itself, I do think there are specific things that one has to pay attention to that are nuanced in AI. For example, if Woebot becomes concerned with something, Woebot should say, you know, “Because you said this thing, I'm concerned about X.” Right? So you're showing people, this is the phrase that I'm worried about.

Woebot can say, you know, “I'm not able to give you medical advice here,” right? Or, “I'm constrained here.” Just being very, very clear about the boundaries, about what it is and what it isn't. And I think that's where good consultation with clinicians and specialists in the field comes in.

[00:12:27] Bilawal Sidhu: 

Can you talk a bit more about how this product will work in concert with, let's say, a traditional therapy experience? Right? And, you know, do you think Woebot could be sort of this gateway, allowing users to start opening up without the pressure of a real person at the other end receiving this stuff, without the fear of judgment, and then eventually transitioning to in-person therapy?

Has that happened? What does that look like? 

[00:12:53] Alison Darcy: 

This was the point of Woebot. It was, how could you be the most gentle, unintimidating on-ramp into the experience of, you know, managing one's own mental health really early on, and hopefully in a way that demystifies what full-blown therapy might look like as well?


And actually, we've had lots of anecdotal feedback from users to say, “Yeah, actually using Woebot made me see what CBT with a therapist might look like.” And then, to the other part of your question, what does Woebot look like in conjunction with, you know, a clinician or a healthcare professional?


Um, and we've had lots of really interesting feedback here as well. When they give Woebot to their patients, the patients come back sort of ready to engage in a therapeutic process, a little bit better informed about the things that the clinicians are sharing with them. Then Woebot is sort of facilitating the practice with those concepts or those skills in between sessions.


'Cause therapy doesn't really happen in a void either. The more opportunity people have to practice certain skills based on CBT, the better their outcomes tend to be. Um, so Woebot can facilitate that practice, um, you know, sort of reinforce what the therapist is sharing and teaching.

[00:14:18] Bilawal Sidhu: 

Right now it feels like it's very much, intentionally, a sort of choose-your-own-adventure-on-rails experience.


Um, for plenty of reasons, right? I'm sure you were pressured enough to be like, “Hey, Alison, we gotta use the latest generative AI model,” et cetera. And so yeah, talk to me a little bit about where you see things going, perhaps in the near term, but then let's start talking about the long term too.

[00:14:41] Alison Darcy: 

I mean, yeah, you've hit the nail on the head. There are so few opportunities now, in our interactions with technologies or elsewhere, to learn how to objectively sort of challenge our thinking. And I kind of worry about that being lost as a skill, you know, because our emotions are almost hijacked by a lot of online platforms, as we know.


And, um, you know, there's an awful lot of very strong opinions and not so much opportunity to say, “Huh, well, is that correct?” I think that's exactly how we'd like Woebot to operate in the future. Our objective function is human wellbeing, right? And I'm sort of agnostic to how we get there.


Within reason. You know, we wanna use the best tools that are available to us to be able to get us there. And I think having that objective function be based on wellness, not attention, is so crucial. And that's why we operate as much as we can, you know, in partnership with healthcare professionals and in those settings and health systems, because then your incentives are aligned in the right way.

But, um, yeah, look, I think when technologies are available to us that enable us to do a better job there, we'll use them. Right? In terms of generative AI, we still have our writers write every line that Woebot says. Um, and we just finished a trial where we looked at a sort of LLM-based version of Woebot versus the rules-based Woebot.

And that was just fascinating. That was really just to explore the user experience, and like, where's the difference here exactly? And like, you know, would there be some glaring limitation, or some huge gap, that could only be filled with LLMs? Um, and what we found was actually interesting and intriguing: in spite of that context of, like, twice the accuracy, our users didn't seem to notice a difference.

So in a context where there's an equal amount of trust and an equal amount of, like, the feeling that this is a safe space and you're not being judged and so on, yeah, maybe the accuracy doesn't matter so much, because honestly we've built the conversation over the last seven years to be so tolerant of imperfection.


You know, I think, for example, if Woebot sort of thinks they're hearing something, they'll say, “Oh, it sounds like you're talking about this thing. Is that true? Like, are we talking about a relationship problem here? Is that what I'm hearing? Am I hearing you correctly?” And that's just a very empathic conversation, right?


So if that's what Woebot says in response to a low-confidence classification, that's fine, right? And even if you have a more accurate reading there, wouldn't you always wanna say, “Am I hearing you correctly?” You know, that's what good empathy looks like, because even humans don't hear properly.
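As a rough illustration of that design pattern, and not Woebot's actual implementation, here is a hypothetical sketch: a classifier (stubbed out here as keyword scoring) returns a concern label and a confidence score; above a threshold the bot offers a prescripted, human-written opener, and below it the bot checks in with an “Am I hearing you correctly?” question instead of guessing.

```python
from dataclasses import dataclass

# Hypothetical keyword lexicon standing in for a trained intent classifier.
# A real system would use an NLP model; the control flow is what matters here.
LEXICON = {
    "relationships": {"partner", "boyfriend", "girlfriend", "argument", "lonely"},
    "work stress": {"deadline", "boss", "overwhelmed", "workload"},
    "anxiety": {"anxious", "panic", "worried", "racing"},
}

# Prescripted, human-written openers for each concern (illustrative text only).
SCRIPTS = {
    "relationships": "It sounds like a relationship is weighing on you. Want to unpack one specific moment together?",
    "work stress": "Work pressure can pile up fast. Want to try breaking it into one next step?",
    "anxiety": "Anxiety can feel huge in the moment. Want to try a quick grounding exercise?",
}

CONFIDENCE_THRESHOLD = 0.5  # invented for the example


@dataclass
class Classification:
    label: str
    confidence: float


def classify(message: str) -> Classification:
    """Score each concern by the share of its keywords present in the message."""
    words = set(message.lower().replace(",", " ").replace(".", " ").split())
    scores = {
        label: len(words & keywords) / len(keywords)
        for label, keywords in LEXICON.items()
    }
    best = max(scores, key=scores.get)
    return Classification(best, scores[best])


def respond(message: str) -> str:
    result = classify(message)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        # Confident enough: offer the prescripted, pre-written opener.
        return SCRIPTS[result.label]
    if result.confidence == 0:
        # Nothing matched at all: ask an open question rather than guess.
        return "I want to make sure I understand. Can you tell me a bit more about what's going on?"
    # Low confidence: check in rather than assume, mirroring the empathic
    # "Am I hearing you correctly?" move described above.
    return f"It sounds like you might be talking about {result.label}. Am I hearing you correctly?"


if __name__ == "__main__":
    print(respond("I had another argument with my partner and I feel so lonely."))
    print(respond("Something about my boyfriend is bothering me."))
    print(respond("Honestly I'm just not doing great today."))
```

The lexicon, scripts, and threshold here are all invented for the example; the takeaway is the control flow, in which low confidence routes to a clarifying question rather than to a wrong answer.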

[00:17:51] Bilawal Sidhu: 

It is interesting that, yeah, humans perhaps are very good at figuring out sort of the implicit rules of what they're engaging in and just working around them. Especially if you set the expectation that this is not a human at the other end. So they're not pretending, yeah, like they're trying to have this thing pass the Turing test or something like that. Right?


[00:18:09] Alison Darcy: 

It's never been about that. Yeah. And people think, well, they might be let down by the fact that it's not a human. And I'm like, actually, no, no, you're missing the point. It works because it's not a human, and not in spite of it.

[00:18:21] Bilawal Sidhu: 

Also, it reminds me: when you just have a hammer, everything looks like a nail, and everyone wants to reimagine everything with, like, you know, Transformers and diffusion models these days.


And it's interesting because, you know, they do use a lot more compute, and you've got a brilliant case study here, perhaps, where good old-fashioned AI is, like, good enough to get the job done. It reminds me of this quote of yours from a recent article where, upon being asked about generative AI, you said, you know, “We couldn't stop the large language model from just butting in and telling someone how they should be thinking instead of facilitating the person's process.”

So I'm kind of curious, like, can you imagine a path to getting there with AI, where AI could do just as good a job as, perhaps, a real-life human embodied therapist?

[00:19:09] Alison Darcy: 

An AI is never gonna deliver what a human therapist does. So recently somebody sort of said to me, “But you know, like, an AI can't pick up on, you know, body language signals.” But you know, the jury's out on how much an AI needs to be able to detect that particular set of nonverbal communication, because...

[00:19:30] Bilawal Sidhu: 

Hmm. 

[00:19:31] Alison Darcy: 

Because the fact that it's a human-to-human relationship is why the therapist needs to be able to read all of that stuff, because people don't feel able to disclose to a human all the time. Do you know what I'm saying? So it's a fundamentally different kind of encounter.


It's just a completely different way of engaging, and I think one that humans are very sensitive to the idiosyncrasies of. Um, and so I think the things that people think of as threats, like, well, someone's gonna get addicted to this and they're never gonna go to a human therapist if they get really complacent with this.


I don't think those things are true, because people don't see them as the same thing. Right? It's not like, because people start eating sandwiches, they never go to a restaurant again, 'cause they're like, “Well, I'm fed, I'm hooked on sandwiches.” You know? That's just not the way it works.

Those things kind of coexist really well. 

[00:20:24] Bilawal Sidhu: 

Yeah, I could see that being super beneficial. But you are drawing this line about, you know, “I think, like, I would never do that.” And I just wanna poke on that claim a little bit, because what you described is, you know, let's say me as a patient: if I go talk to a therapist, there's stuff that I will explicitly say, and then there are these more implicit signals, facial cues, how I react to stuff.

Um, what's to stop AI models from being able to understand all of those nuances, right? Like, already. If that happens, would that not come close to encroaching upon that therapist-patient relationship?

[00:21:00] Alison Darcy: 

Okay. I can say something controversial. Why not? 

[00:21:03] Bilawal Sidhu: 

Let's do it.

[00:21:05] Alison Darcy: 

Uh, I think this idea of like, um, divining people's emotional state is a bit of a red herring, honestly, I think for a lot of, not all, but for a lot of mental health.

Because here's the thing: there's a reason why we have self-report measures for mental health. And people say, well, it's not objective, but the point of mental health problems, the actual way that we conceptualize them today, is that they are fundamentally a subjective experience. There has to be subjective suffering.

So the correct question is, you know, how do you feel? That is the right question to ask, for a start. Now, I'm oversimplifying. I think there are times in which you'll want to look at the discrepancy between, you know, the person's self-report and what they're sharing. But I think for, you know, maybe 80% of mental health problems, um, I think it's preferable to ask somebody and start there.

The other thing is, um, yeah, you're totally right, AIs can pick up on nonverbal communication. It's just not the same set of nonverbal communication that a human therapist would look at. Do you know what I'm saying? Maybe the speed with which somebody is answering a question; you could pick up on nonverbals just in the way that somebody's texting.


There's a whole set of nonverbal communication that might be relevant here, for sure, and AIs are great at picking up on it. In fact, we've seen it in, um, some of the algorithms that have been used to predict first-episode psychosis among high-risk groups. Um, and I believe it was an NLP algorithm that was able to predict with a hundred percent accuracy who would go on to develop first-episode psychosis.

[00:22:47] Bilawal Sidhu: 

Wow. 

[00:22:47] Alison Darcy: 

Compared to, I think, the gold standard measure, which is about 67 or 68% accurate. So that makes a ton of sense, because again, so much of human-to-human communication is nonverbal. But the idea of, like, let's replicate what humans do, I think, is misguided and leaves a lot of value on the table.

[00:23:12] Bilawal Sidhu: 

You know, you talked about the research-practice gap. It's like, hey, all this awesome research is happening, but practitioners aren't actually reading up on it and making the latest, you know, insights available to their patients. And so Woebot is this very cool complement to therapy in that sense, where you can bridge this research-practice gap, make the state-of-the-art findings available, um, you know, to folks.

But I'm curious, you know. It means that people will use Woebot without access to a therapist. Do you think there's any risk to users if they're using Woebot without a human clinician? And this is, in full disclosure, my roundabout way of, like, you know, coming back to the question of: is this really not gonna replace therapists, you know?


Uh. 

[00:24:01] Alison Darcy: 

Thinly veiled. So Woebot is no longer available for people, you know, just to download in the app store; you have to sort of get it through a treatment provider.

We've done a lot of research and a major objective of that research is to look at the user experience in a very controlled manner, to very carefully quantify any risk or any, you know, um, safety issues that might come up and things like that.


This is why Woebot is primarily rules-based right now, and everything Woebot says is written by a therapist. But I think the major risk associated with this technology, actually, is that we don't have the correct conversations about it, and that people get spooked by all the missteps that people inevitably will make, because they really underestimate the complexity of what it takes to deliver, you know, a mental health intervention into the world.


Uh, there isn't adequate data, somebody makes a misstep, it blows up on the launchpad, and then everybody starts to think, “Wow, this is not a good technology to use for this.” Whereas really, any AI that you're using is a tool, and how you use that is the most important thing.

And I think, you know, the risk that we have facing us is that we are systematically gonna undermine public confidence in the ability of technology like this to help. And that is a big potential problem, because I think this is probably the greatest public health opportunity that we've ever had.

[00:25:32] Bilawal Sidhu: 

There's a lot of responsibility on your shoulders to, you know, make sure this is a shining beacon and, like, a great example for how you do, you know, AI-augmented therapy, essentially. In terms of ways this could blow up on the launchpad, right? Um, obviously data privacy is one that comes up, and when people use Woebot, they're sharing some of their most personal and private data.

So, will user data be used to improve the Woebot product experience or the underlying models? 

[00:26:01] Alison Darcy: 

We're HIPAA compliant, obviously. And, you know, all the data are encrypted, and we have consent for each and every use, similar to what you have with GDPR. Privacy and security is a topic that is absolutely front and center the whole time, because I think a breach there, or any kind of semblance of negligence on our side, would be catastrophic.

[00:26:30] Bilawal Sidhu: 

Going back to the future a bit: in an ideal world, Alison, what would mental healthcare look like in five years' time? I'm curious.


[00:26:41] Alison Darcy: 

In an ideal world, we would shut down all our clinics and my profession would become obsolete, because everybody is looking after their own mental health. We'd be doing such a good job, and everybody would be so, you know, happy and healthy. Now, that's not realistic. You know, remember when, during COVID, we were talking about flattening the curve?

I think we need to flatten the curve here as well. We need to try and keep people outta clinics if we can by, you know, providing access to really good preventative tools that they can use. We should absolutely not be waiting, you know, a decade or so from when people first start to struggle a little bit, with maybe a couple of symptoms here and there, to when they actually get in front of a clinician.

People should have, I think, very good evidence-based tools that they can turn to from the first moment of, like, you know, intense emotion that gives rise to distorted thinking, because that's part of the human experience. It's not about being in a clinical realm. It's not about necessarily needing a diagnosis.

It's about sort of being there in a moment of need as early as you possibly can and getting somebody well when you can, um, and then freeing up our precious human resources for when people actually do need more significant help.

[00:28:05] Bilawal Sidhu: 

Hmm. I love that. We've got a bunch of technology that we use every day that, you know, plays to our hopes, wishes, desires, anxieties, worries.

[00:28:14] Alison Darcy: 

Yes. 

[00:28:14] Bilawal Sidhu: 

Literally all the time without us even knowing. 

[00:28:16] Alison Darcy: 

Yes. 


[00:28:17] Bilawal Sidhu: 

And so it strikes me that there should be a countervailing influence to that, you know, a correction measure to that. And, um, you know, starting as early as possible and making it as accessible and frictionless as possible to get access to this type of evidence-based care strikes me as one great way of making a dent toward that goal you have.


And who knows, maybe you will accomplish it in five years. 

[00:28:39] Alison Darcy: 

Well, thank you very much. 

[00:28:41] Bilawal Sidhu: 

As you've heard, Dr. Darcy insists that Woebot is best used in conjunction with a human therapist. But after the break, we're gonna hear more from Brian Chandler, who's actually been using Woebot without additional therapy since 2020.

[00:29:00] Brian Chandler: 

So my name's Brian. I'm uh, about to turn 25 here and I've been seeking mental health and honestly, mental clarity. Uh, I would really say since the pandemic, so it's been about four years now, I've been kind of working on my mindfulness journey. 

[00:29:26] Bilawal Sidhu: 

Do you remember your first interaction with Woebot? 

[00:29:23] Brian Chandler: 

When I first used it, I guess I had such a low success rate with the other apps, I wasn't expecting too much. But when I used it, I did feel better, you know?

And I thought, “Okay, well maybe, you know, maybe this was just a fluke. Uh, let me get back on the app the next day.” And I had a similar feeling, and I was like, “Okay, well, let me get back on the next day.” And, you know, eventually when it's 3, 4, 5 days of feeling better afterwards, you start to realize, “Okay, I think it is the app. I think it is what it's teaching me. It's teaching me coping mechanisms. It's teaching me how to label. It's doing exactly what a therapist would tell you to do.”

Um, but I'm doing it from my phone for free in the comfort of my home, and it was very convenient 'cause at the time I couldn't go anywhere.

[00:30:15] Bilawal Sidhu: 

Can you describe your typical interactions with Woebot? Do you find yourself using it in a certain way? 

[00:30:22] Brian Chandler:

I typically like to use it really twice a day. So I think it's really important to start the morning off right, with the right headspace, reminding myself that we do have some control of how the day can go, or some control of at least being mindful of our thoughts.


And understanding that anxiety doesn't have to have power over you. That kind of helps me get the day on the right foot. And then, especially lately, I do like to close the day off using it; that way, before I go to bed, I am still in the right headspace. So really trying to have those two anchors of the beginning of the day and the end of the day.

But that is the nice thing about the app: it's 24-hour. I mean, you can use it whenever you want, you know. So there were certainly a few times where maybe I was having a panic attack at, you know, 2:00 AM, and I'd open up the app and it would help me. It was kind of like a reset.

[00:31:18] Bilawal Sidhu: 

So you're kind of checking in on, you know, the monkey mind, if you want to call it that.

You know, starting first thing in the morning, which, uh, is an analogy I really, really enjoy.

[00:31:28] Brian Chandler: 

You know, when you wake up in the morning, you're kind of trying to compare how your mental headspace is to weather, you know. So if maybe you're waking up and you're grouchy, or maybe you're depressed, or maybe you're anxious, I mean, you can kind of use the analogy that, “Okay, today it's thunderstorms, you know, and what can I do to better prepare myself?”

Maybe you're the type of person where, “I need a distraction, I need to hang out with a friend.” Or maybe that's the opposite of what you need and you're like, “I just need to be by myself today.”

[00:31:58] Bilawal Sidhu: 

I'm just curious, like, do you have any experience with conventional therapy? 

[00:32:02] Brian Chandler: 

So prior to using Woebot, I didn't have any experience, so I didn't really have a reference for regular therapy. Um, in 2022, I believe it was 2022, I wanted to see really if I felt a difference, and I did try regular therapy for a couple weeks. And while I think regular therapy can be very good and very important for some people, I don't want to discourage that, for me personally, I didn't see really a difference between using the app and regular therapy. And again, I know regular therapy is fantastic. I think if you need it, I definitely recommend it. But sometimes it's inconvenient. You know, sometimes you can't just talk to a therapist whenever you want, and you're doing the same practices that you're doing on this app, which is free.

Not to mention, therapy can be very costly. So I just figured, if I'm doing the same practices, I'm feeling the same relief after I use it, and I can use it anytime without leaving my house, just on my phone, it was a no-brainer to continue using the app.

[00:33:16] Bilawal Sidhu: 

Now I have to follow that up and ask.

In therapy, there are all kinds of boundaries, right? There's time, there's what you can or can't, or perhaps shouldn't, know about your therapist. Do you get a sense of boundaries when you're using Woebot? If so, what are they?

[00:33:34] Brian Chandler: 

In the beginning when I was using it, especially 'cause I didn't really have experience with therapy, it took a moment to get used to. It felt natural, but you did kind of feel like, you know, this kind of feels like I'm talking to a human. But the more you use it, you realize it's not trying to be anything other than what it is. You know, it's AI, and like you said, there are some boundaries, because there are going to be some things the app might not understand about, like, the human experience.

But I think it's programmed in such a wonderful way where it's never pretending to be more than AI, if that makes sense. So as far as the boundary goes, I would just say, “You know, okay, this is not a human. If this is an emergency, a 9-1-1 situation, I need to, you know, reach out to a human.”


But, um, no, the one nice thing about the app is you can use it anytime, you know. So you don't have to deal with that boundary you would have to deal with in traditional therapy.

[00:34:42] Bilawal Sidhu: 

Are there any telltale signs to you that it is AI? Like, how do you know it's never pretending to be something more than it is?

[00:34:50] Brian Chandler: 

Uh, a lot of it is how it frames and words certain things. It does frequently tell you, honestly probably each time you use it, that it's a robot, or it might make a joke of, you know, something to that degree. You never feel like it's trying to be human. It does a good job. It's just, this is research, this is what works.

And you can take that at face value, you know? 

[00:35:18] Bilawal Sidhu: 

If I had to ask you, knowing, you know, what we know now and what you said, how would you define your relationship with Woebot?

[00:35:33] Brian Chandler: 

Hmm. I guess I would describe it as my mental health companion.

[00:35:34] Bilawal Sidhu: 

Do you think you'll continue using Woebot for mental healthcare?

[00:35:38] Brian Chandler: 

Yes. I, I think I will continue to use it. 


[00:35:41] Bilawal Sidhu: 

You talked about mindfulness earlier on. Have you tried some of the meditation apps out there, like Headspace, et cetera?

[00:35:48] Brian Chandler: 

Yeah, so I think meditation's great, but it is a totally different pace. I enjoy the app Calm. I think that's a very good app. I've had a really good experience with that.

Uh, but it's just, I've found Woebot to be a little bit more helpful in situations that are a little bit more urgent.

[00:36:10] Bilawal Sidhu: 

I am kind of curious. Um, like, more of a hypothetical question, right? If you could have an AI model understand more of your life and kind of give you contextual advice, right? Maybe it involves you sharing all your conversations during the day, your emails, your text messages.

Is that something that you'd be interested in if you could get sort of contextual advice through the day really tailored to your situation, or would it be creepy?


[00:36:37] Brian Chandler: 

 It wouldn't bother me if I knew the information being stored was safe. 

[00:36:42] Bilawal Sidhu: 

Hmm. 


[00:36:43] Brian Chandler: 

So I think that needs to be a priority going forward: that the information isn't being sold, that the information is being stored safely.


And then I would feel comfortable with it remembering, because I know it's gonna help with the tools in the future.

[00:36:58] Bilawal Sidhu: 

Taking this technology to the limit: when it does get better, when it can understand your context, when it can be respectful of the data that it collects, would you want Woebot to evolve into something that feels more like talking to a human?

Or would you rather that it stay in this very clean delineation of a tool?

[00:37:19] Brian Chandler: 

In this moment in time, I think I would rather it stay the way it is. I would like it to evolve, but I don't want it to ever get to the point where, you know, maybe they add a voice and you're talking to the voice, and it sounds very human-like. I don't think I would like that. But kind of like what I mentioned to you, all of this technology is so new and we just aren't used to it.

So I don't know if in a few years that's just gonna be the new norm, but right now I do enjoy kind of the boundaries the app creates, where, you know, if you are needing a human connection, go talk to a human. I think it's so important for people to be able to work on their mental health, and especially in this day and age where we're spending more and more time on our phones, we need to have a moment where we can put TikTok down and go to something that's gonna benefit us.

[00:38:17] Bilawal Sidhu: 

Brian, thank you so much for your time. It's amazing to have you on the show. 

[00:38:12] Brian Chandler: 

Yeah, thank you.


[00:38:24] Bilawal Sidhu: 

Like many of you, I've been on my own mental health journey over the last decade. I've really gotten into the work of Alan Watts and Guru Nanak, and I've cultivated a meditation practice. And probably like many of you, I got started by hopping from one app to another with a goal of keeping myself centered. And I know I'm not the only one, far from it.

And you know, it makes me wonder: if so many of us are already using apps to seek mental calm and clarity, it might not take a lot more convincing for us to start using apps like Woebot. As of now, AI therapy is a tool, and like many other therapy tools, its mileage will vary from person to person. But as this tech continues to advance, that gap between the in-person and virtual therapy experience will also continue to close.

You've literally got a supercomputer picking up on every single intonation, nuance, voice change, inflection. Is that necessarily a bad thing? I don't think so. Mental health is such a problem, and if we've got some technology that can help us tame our monkey minds, I think that's a win.


If you want to know even more about the history of the therapy bot ELIZA, 99% Invisible made an incredible episode about her. The link to that episode will be in our show notes. The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Elah Feder and Sarah McCrea.

Our editors are Banban Cheng and Alejandra Salazar. Our showrunner is Ivana Tucker, and our associate producer is Ben Montoya. Our engineer is Aja Pilar Simpson. Our technical director is Jacob Winik, and our executive producer is Eliza Smith. Our fact-checker is Krystian Aparta, and I'm your host, Bilawal Sidhu.

See y'all in the next one.