AI and Beyond: Technology Shaping Mental Health
Into the Fold: Issues in Mental Health

May 20, 2025 | 00:49:38

Show Notes

Today’s conversation is about the promises—and the pitfalls—of technology. Specifically, we’re exploring how artificial intelligence is reshaping mental health care and what it means for equity, access, and privacy. While AI has the potential to increase access to mental health tools and improve outcomes, it also raises urgent ethical questions: Who is being left out? Who has control over their data? And how do we ensure that innovation doesn’t deepen existing disparities?

To help us make sense of it all, we're joined by Kenneth Fleischmann, professor at the UT Austin School of Information, where he studies the ethical and societal implications of emerging technologies.

Related Links:

Episode Transcript

[00:00:00] Speaker A: Into the Fold is part of the Texas Podcast Network: the conversations changing the world, brought to you by the University of Texas at Austin. The opinions expressed in this podcast represent the views of the hosts and guests and not of the University of Texas at Austin. Hi, welcome to Into the Fold, the mental health podcast. I'm your host, Ike Evans, and today we are delighted to bring you episode 174, AI and Beyond: Technology Shaping Mental Health.

[00:00:33] Speaker B: With AI, and with information technology development in general, there are not a lot of safeguards, there are not a lot of dampeners in the system. There are not a lot of things that slow things down and allow us to carefully consider. That's quite different from biomedicine, where when you have a new pharmaceutical that you want to put on the market, you have to do double-blind clinical trials. It can take years or even decades. It's a very expensive process. You have to be able to demonstrate both that the medication is effective and that it's safe, that it doesn't have harmful side effects. Whereas with information technology, typically the ethos has been to put it out there, see what happens and deal with the consequences later.

[00:01:27] Speaker A: Hi, all. Welcome to Into the Fold, the podcast of the Hogg Foundation for Mental Health. I'm Ike Evans, and I'm glad to be with you as we continue this season's theme, Mental Health in Transition: Navigating Change and Building Resilience. So today's conversation is about the promises and the pitfalls of technology. Specifically, we're exploring how artificial intelligence is reshaping mental health care and what it means for equity, access, privacy and other ethical concerns. While AI has the potential to increase access to mental health care and improve outcomes, it also raises urgent ethical questions. For example: Who's being left out? Who has control over their data? And how do we ensure that innovation doesn't deepen existing disparities? So to help us make sense of it all, I am joined by a remarkable thinker, Dr. Kenneth Fleischmann, a professor at the University of Texas at Austin School of Information, where he studies the ethical and societal implications of AI and other emerging technologies. Ken, thank you so much for being here with us today.

[00:02:57] Speaker B: Thank you, Ike. It's great to be here. Really delighted to have a chance to speak to your listeners. Thanks.

[00:03:02] Speaker A: Okay. And I do not know how much this will tie into the conversation that we're going to be having, but I want to lead with a question that I like asking smart people about whatever their field of expertise may be: is there a particular question about AI that you are tired of answering? Let's warm up with that.

[00:03:31] Speaker B: Yeah, thank you, Ike. That's a great question. In AI ethics, one scenario that I think is becoming pretty tired at this point is called the trolley problem. There are several variations and formulations of the trolley problem, but typically the idea is that there is a trolley that is on course to run over, say, five people. There is a switch where you could choose to divert it over to a track where there's only one person. Is it better to take an affirmative action to reduce the number of deaths, or to not take any action and allow a greater number of deaths? Obviously not great choices either way. Why these folks are tied to the rails anyway is usually not addressed in the scenario.
So it's a pretty unrealistic scenario in the first place. But what's happened in a lot of cases in the ethics of AI is that people will try to connect that to the way that an autonomous vehicle operates. There are a variety of variations of this for autonomous vehicles, where people ask: there are two different pedestrians, which one is it more important to spare, this pedestrian or that pedestrian? Also not a great question to ask. We're trying to spare all the pedestrians. We want to keep everyone safe, ideally. But it's also, I think, not really accurate relative to the processes that are occurring within the autonomous vehicle. It's not going around trying to calculate the age of all the different pedestrians out in the world and make some kind of grand ethical trade-off about sacrificing the young to save the elderly or vice versa. Rather, it's trying to track potential pedestrians that might be in its path and trying to avoid them. Certainly, I do think with autonomous vehicles, the degree to which a vehicle is programmed to protect its occupants versus those outside of it is a potentially legitimate concern. It's not to say that there aren't ethical trade-offs and choices in the design of autonomous vehicles. There's even just the fact that we're talking about vehicles that go 30 miles per hour, 70 miles per hour, 100 miles per hour. We have tens of thousands of Americans killed on highways every year. If we would simply restrict all vehicles to mechanically operate at a maximum of only 3 miles per hour, we could probably zero out the number of deaths due to automobiles. But it would take us a lot of time to get places, and there'd be people who die because they didn't get to the hospital fast enough. So you're going to have the potential for harm to occur almost no matter how we make these choices, these trade-offs. In this case, by and large, American society has made the trade-off that we will allow cars to drive around 20, 30, 40 miles per hour in cities and 50, 60, 70 miles per hour outside of cities to make transportation more efficient, which allows people to work further from where they live, allows people to get to medical care quickly, all these different kinds of things. But there are some deaths that occur as a result.

[00:07:11] Speaker A: Okay. Do you ever watch the show Black Mirror?

[00:07:13] Speaker B: I do.

[00:07:14] Speaker A: Okay, I can imagine some kind of Black Mirror episode in which there's some kind of autonomous vehicle that has sort of over-indexed on the trolley problem and is actively looking for trolley problems to solve, to quite disastrous or psychotic results. So yeah, I think that gets to a larger point about AI, which I guess has to do with, again, over-indexing on those particular problems that most readily come to mind, kind of in a vacuum, while losing sight of the big picture. I guess that's kind of another way of couching what you said.

[00:08:13] Speaker B: Yeah. So, I mean, right now there are a lot of questions about if and when and how to use AI. First of all, what is artificial intelligence? What is AI? There are a lot of different definitions of AI out there today. This recent re-emergence of AI in the public consciousness has largely been driven by the public release of ChatGPT and the variety of generative AI, the large language models and large multimedia models, that are out there in the world today, becoming increasingly ubiquitous.
And people are making decisions about whether they should use large language models to help them with their homework assignments, to help them write a speech, and in some cases as a potential resource if they're going through a mental health crisis. These are all really interesting choices that people face today about if and how to use generative AI. But again, generative AI is just one form of AI. There are many ways that AI is impacting our lives. When we're getting a song or movie or book recommendation from a recommender system, that is AI. When we're getting directions on the most efficient way to go from point A to point B, factoring in current traffic, that is AI. So there are so many ways that we're using AI without even stopping to think about the fact that we're relying on AI.

[00:09:48] Speaker A: Okay, so turning to mental health, which I know is not exactly your wheelhouse, but to what extent have you had the opportunity to talk about or to reflect upon the role of AI in mental health? Both the promising aspects as well as the caveats, either based on what you've heard or what you see other people touting.

[00:10:33] Speaker B: So with AI, and with information technology development in general, there are not a lot of safeguards, there are not a lot of dampeners in the system. There are not a lot of things that slow things down and allow us to carefully consider. That's quite different from biomedicine, where when you have a new pharmaceutical that you want to put on the market, you have to do double-blind clinical trials. It can take years or even decades. It's a very expensive process. You have to be able to demonstrate both that the medication is effective and that it's safe, that it doesn't have harmful side effects. Whereas with information technology, typically the ethos has been to put it out there, see what happens and deal with the consequences later. I think as we look at the particular growth of AI, especially generative AI, in recent years, it's useful to look at perhaps the most recent prior huge technological emergence, social media, and the impact that social media had, and is still having, on all of our lives, including in terms of mental health, particularly youth mental health. In many ways we had companies innovating and coming up with new social media platforms with new algorithms, which are AI again, so we're still talking about AI, but in the context of social media here. New algorithms that would drive engagement with the system, to make sure that users were using it, and continued using it, and were using it frequently, all the time, to the extent of doomscrolling and other kinds of less-than-ideal behaviors. There have been youth suicides; there have been some really adverse outcomes that seem to have been impacted or caused, at least in part, by social media. In some cases that's things like cyberbullying, and of course bullying goes back much further than social media, but it has been facilitated and made easier, made more anonymous, made people feel more secure while doing it, while doing harm to others. I think it's really important, as we are now focusing on generative AI, that we're being proactive about it, that we're not just waiting to see, gee, I wonder what's going to happen 10 years from now and what kinds of consequences there will be. Because I think we can see the consequences of social media in many ways, and how it has affected our public discourse in this country and around the world. It certainly seems to have increased political polarization.
Sociologist Zeynep Tufekci was doing research on the emergence of then-candidate Donald Trump in 2015 and was on YouTube watching a bunch of Donald Trump rallies and Donald Trump speeches, and the recommender system, the algorithm, started to feed her alt-right, neo-Nazi, very far-right content, effectively trying to move her tastes further to the right. So then she wondered if it was a singular phenomenon, confined to a single pole of the political spectrum. She started watching Hillary Clinton rallies, and the algorithm started recommending eco-terrorism content and other kinds of extreme left-wing content. And if you think about the origins, in many ways YouTube is used for music, and for music it kind of makes sense. Most people come in with pretty poppy tastes, but it's like, okay, that song was a little bit on the country side of things; let's send you more country. That was more R&B; let's send you more R&B. That was jazz; let's send you more jazz. That seems like a relatively benign thing in the music world. Of course, music is not apolitical, and historically there have been a lot of connections between music and politics which are important to consider. But we don't normally think of the politics of music as being quite at the scale of the national discourse of a presidential election or something of that magnitude. So when you start moving people further and further into their camps, then you can create things like polarization, or worsen phenomena like polarization. To my limited understanding, I think that potentially has implications for mental health, because in a society that is highly polarized, it creates a situation where people may not be able to get along with or trust their neighbors. These are the kinds of conditions that we've seen in the former Yugoslavia and other countries that have gone through civil wars in recent times. Of course, this country went through a civil war a long time ago, but these are not good signs. So it's very troubling that society has become so polarized, and I think it's one of the factors that probably is not a good thing for people's mental health generally. And so I think it's important to be proactive about the impact that AI is having, and likely will have in the future, as we think about systems that everyone is potentially going to be using.

[00:16:55] Speaker A: Okay, right. When I think of a human therapist, and the extent to which they do this kind of ranges across the spectrum, but there's quite a bit of throat clearing about what therapy can and can't do, about boundaries, about how to make this the kind of collaborative experience that shores up the help seeker's agency and doesn't just create another kind of dependency. And it's kind of that meta level of ethical consideration that I would think AI currently just can't touch yet.

[00:17:48] Speaker B: Indeed. Although, I mean, it's quite fascinating: of course, AI has evolved a great deal over the years, but consider one of the first simple chatbots (a chatbot is an AI that interacts with you conversationally; ChatGPT and other large language models are what we're most familiar with in terms of chatbots today, but they go back for decades). If we go back to the 1960s, Joseph Weizenbaum was creating Eliza, which was a very early chatbot. It was designed actually to simulate a therapist. The idea was that you would talk about a topic and it would ask you questions about it. And it was very simple.
It would look for specific keywords, and then it would ask questions based on whether particular keywords were introduced. It was very simple technology, similar to how we basically sent astronauts to the moon using slide rules and computers that were less powerful than the ones we have in our pockets every day today. It's incredible what you can do with some rather simple technology in certain circumstances. And when you tell someone, this is a therapist and it's going to interact with you and ask you questions, and at least some of what it asks are the plausible things that one might imagine a therapist asking, then that can be very persuasive. This gets into our notion of the hyperreal. There are a couple of theorists who've looked at hyperreality from slightly different directions. Umberto Eco looked at the hyperreality of Disneyland's Jungle Cruise and the idea that when you say, imagine taking a cruise through a jungle, people's imagination might align more with the ride at Disneyland than it would with the actual experience were they to do it. So that's the kind of hyperreal, more-real-than-real phenomenon. Baudrillard, in turn, argued that the simulation of Disneyland was there only to make you think that the rest of LA was real. But I think you can definitely play on people's expectations with a technology that can do even very simple things, just following up. And generative AI is just looking for the next word. It's ingested a huge amount of content, and it's just building responses word by word based on what's the most likely word. It's completely statistically based. So that's again why the sort of trolley-problem reasoning doesn't quite fit, and why some people are starting to ask if artificial intelligence is quite the right term. Because the way that statistical machine learning, the way that a neural net, the way that generative AI works is radically different from the way that we think as humans. It's just very different. AI is a different phenomenon from human beings, but we often tend to anthropomorphize AI. When we think about a robot, often we're thinking about Rosie from the Jetsons or other humanoid kinds of robots. If you actually go to a factory where cars are being built, there are tons of robots there, and they don't look humanoid. There are certainly some humanoid robots being produced now, but it's a small minority of the robots out there in the world that have humanoid affordances. We still tend to think about robots as having a sort of humanoid form.
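
[Editor's note: To make the Eliza description above concrete, here is a minimal sketch, in Python, of that kind of keyword-matching loop: scan the input for known keywords and reply with a canned follow-up question. The rule table and the eliza_reply function are invented for illustration; this is not Weizenbaum's original script, which also used pattern substitution rather than plain keyword lookup.]

# A minimal Eliza-style keyword chatbot (illustrative rules, not the 1960s script).
RULES = {
    "mother": "Tell me more about your mother.",
    "father": "How do you feel about your father?",
    "sad": "Why do you think you feel sad?",
    "always": "Can you give a specific example?",
}
FALLBACK = "Please go on."  # Default prompt when no keyword matches.

def eliza_reply(user_input: str) -> str:
    words = user_input.lower().split()
    # Return the response for the first rule whose keyword appears in the input.
    for keyword, question in RULES.items():
        if keyword in words:
            return question
    return FALLBACK

print(eliza_reply("I am sad about my mother"))  # -> "Tell me more about your mother."

Even a handful of rules like these, delivered in a therapist's register, can feel surprisingly engaging, which is exactly the persuasiveness described above.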
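[Editor's note: And here is a toy illustration of the "building responses word by word based on what's the most likely word" point: a bigram model that counts which word follows which in a tiny corpus, then generates text by sampling each next word in proportion to those counts. Real large language models use neural networks over subword tokens rather than raw counts, but the generation loop (score the candidates, pick a next token, append, repeat) has the same statistical shape.]

import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog slept on the mat".split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = next_counts.get(out[-1])
        if not candidates:  # Dead end: no observed successor.
            break
        words, weights = zip(*candidates.items())
        # Sample the next word in proportion to observed frequency.
        out.append(rng.choices(words, weights=weights, k=1)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the dog"
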
[00:22:30] Speaker A: Yeah, I mean, AI as such is everywhere, and it's always necessary to delineate what exactly it is that you're referring to. And to your point about robots, in a way, robots are just another boring facet of the world in which we live. But when people get to talking about robots, they want robots like how they imagine them to be. Yeah, it's amazing how bored people got with the future once it actually arrived. So anyway, I'm thinking about AI and mental health, and I don't know, I wonder if maybe there is some potential with chatbots in more of a peer kind of relationship, maybe as a group therapy co-participant, with a human therapist kind of supervising and overseeing, and whether there have been any experiments in that direction, because I see a place for it, honestly. But maybe more as just something that spurs greater human reflection or reflexivity, and not as the thing that does it all. And I don't know, I think it takes somebody having your particular kind of philosophical perspective in the room for those more nuanced possibilities to even be broached, I would think.

[00:24:26] Speaker B: Yeah, I think that's a great idea to explore. So one of my research areas is looking at the role of AI in the future of work. Thinking about this includes workforce training, because to equip and prepare students for a future where AI is increasingly going to be part of our everyday lives and our everyday work, it is important to make sure that everyone is prepared. We can see that AI, and automation more broadly, has had tremendous impact across decades on many different occupations. I think what's a little bit different about ChatGPT in particular and other large language models is that they particularly focus on knowledge work. And I think knowledge workers, such as journalists, for example, and lawyers, thought for a long time that they were immune to the kind of automation that transformed factories and the service sector; that, of course, knowledge work would never be replaced. We do see today that there are definitely tremendous things that can be done with generative AI. When we're thinking about therapy, we're thinking about the patient and we're thinking about the provider. For me, it's easiest to first take it from the provider perspective, thinking about appropriate ways for a provider to use or not use AI. And I agree that I'm much more comfortable with AI that's being used by a provider, and by workers more generally, within their domain of expertise. Actually, we've seen a large trend of increases in unionization in recent years, and increasingly, when unions are engaging in collective bargaining and going on strike, one of the terms, a big term, is the role of AI in the workplace. So in 2023, we saw the United Auto Workers, the Screen Actors Guild and the Writers Guild of America all go on strike, and AI was one of the things that they were negotiating over as part of the strike. With the Writers Guild of America, I think part of what was impressive about what they were able to accomplish through their collective bargaining was an understanding that it was writers who would choose if and how AI would be used in writing, and not the studios. So I think that's an important consideration. I think it becomes rather terrifying if it is large companies providing mental health services that are deciding when AI is used versus when a human therapist is used. From my perspective, it's much better to think about AI being used by a provider, under the supervision of a provider in some way. I've thought about this for a wide range of professions. One of the projects I've been working on recently, with Soo Young Rieh and David Lankes in the School of Information, has been a collaboration with the Austin Public Library, the UT Libraries and the Navarro High School library. One of the factors that we've really focused on in that project is the idea of AI as an assistant for librarians, not as a replacement for librarians. I think that's the same whether we're talking about librarians or we're talking about writers or we're talking about therapists.
I think it's important that we're not trying to put people out of work. We're not trying to replace workers. We're trying to give people tools, so that when librarians are responding to community members' inquiries through Ask a Librarian, they can choose how to use AI. Part of their process could involve AI, but still, at the end of the day, it's going to be the librarian who is accountable for the information they provide. The information a librarian provides can be life and death in some cases. Of course, librarians have clear restrictions not to practice medicine or practice law, but they might still be helping connect people with information in a moment of a mental health crisis or physical health crisis or some other kind of crisis. And that could be really consequential information for that person in that moment, just as a therapist might be helping a patient at a really difficult time in their lives, where there could be life-or-death consequences. Therapists are trained professionals. Let's keep the work in the hands of the trained professionals, and let's give them tools to be able to better do their work. Another area of work I'm doing, in collaboration with Sherry Greenberg at the LBJ School of Public Affairs and Raul Longoria in the Walker Department of Mechanical Engineering, is focusing on skilled trade workers. We're building smart hand tools. Actually, what we're building is an attachment to a hand tool that gives workers feedback, so that it would help a worker avoid a repetitive stress injury or a workplace accident, and help them to train faster. It's basically like the digital assistants we have on our phones, but for skilled trade workers. Still, it's the skilled trade workers who are doing the work and who are getting compensated for the work, and this is just one way to assist them in sustaining healthy, fulfilling careers. To me, it's really important that we're thinking about all the different professions, including but not limited to knowledge workers, and thinking about how we can make sure people are able to leverage their professional expertise to accomplish their work, whether that is building buildings that will stand up, with life-or-death consequences, or assisting people with mental health crises, with life-or-death consequences.

[00:31:31] Speaker A: So at the Hogg Foundation, we tend to approach mental health as a phenomenon that both reflects existing inequities but might also be a way to address them. And so if we're going to be talking about AI, our way of talking about it would also reflect those concerns. So I would love to get your take on the concerns that many do have around equity in mental health AI.

[00:32:21] Speaker B: Yeah, I mean, there's definitely material for a Black Mirror episode here. Actually, Amazon has a series called Upload, which is a really fascinating series, and the science fiction writer Greg Egan had written similar things decades earlier. It's basically about having a sort of digital afterlife after death. But one of the phenomena is that one's digital afterlife has to be purchased, and it's dependent on one's socioeconomic status, of course.
So some can afford a really fancy, wonderful, heaven-like afterlife, whereas there are some who are put into a kind of storage where they're only allowed one byte per month or something; they have one interaction and that's it, and then they're in stasis for the rest of the time because of the lack of bandwidth. I mean, we do heavily depend on bandwidth in our everyday lives, and there are inequities in bandwidth in terms of income, in terms of urban versus rural, and in many other ways. These kinds of bandwidth implications can have severe consequences. I definitely think there's a potential Black Mirror episode about a society where the rich can afford human therapists and communities that are under-resourced are having to rely on AI counselors. I think that's potentially a very problematic world, where you have these different levels of service, and even who gets to interact with a human and who doesn't. In some domains we already see aspects of this, where some customers might get different treatment from other customers depending on their frequent flyer status and things of this sort. But I think that's especially problematic when we're thinking about mental health care, which is a form of health care, which I believe is a fundamental human right. So in that case it is very troubling. On the flip side, to say that the solution is to not allow people to use AI for mental health is definitely not a solution to the systemic problem that's happening, and is potentially an even worse thing to do. Because perhaps the only thing worse than affluence affecting the ability to afford a human counselor versus an AI counselor is affluence affecting the ability to afford a human counselor versus no form of counseling whatsoever. I think that prior to the recent advances in AI, it was in many ways already true that we haven't had as equitable access to mental health care as one would want, certainly as I would want, in a society where mental health is considered a fundamental human right. The hope would be that you take the existing mental health resources, in terms of trained professionals, and give them additional tools. Could this allow them to expand their caseloads? Could this allow them to serve more patients? Could this allow for more universal access? Because to me, the goal of all healthcare, including mental health care, should be universal access, so that everyone in society has access to the care they need. But certainly we fall short of that in this country today. Is AI a perfect fix, where you just add AI and stir and it solves that problem? Absolutely not. And I could again imagine some ways it could potentially make things worse. Certainly not automatically better. But I do think there's the potential, depending on how it's done, that it could be part of a solution to make sure that everyone has access to high-quality mental health care.

[00:36:51] Speaker A: I don't want this episode to just be a reflection of the things that I'm most cranky about. I hope that my questions are not too leading, but there's always a sense in which those who go about something like AI in a studious way are always having to catch up with the hype train.
I can't help but be curious about what it is that you see out in the world, as far as the hype or claims made for AI by those who have a vested interest in it, that either makes you cringe or makes you want to rush to your keyboard with your own hot take of a response.

[00:37:51] Speaker B: Yeah, so this is not the first hype cycle for AI, and historically, every prior hype cycle for AI has been followed by an AI winter: a period where everyone grew disillusioned with what they had perceived as the amazing promise and potential of AI, and the degree to which it didn't, in its current instantiation, live up to that hype and potential, and thus people decided to give up on AI and move in other directions. So I'm the founding chair of Good Systems: Ethical AI at UT Austin, which is a campus-wide research grand challenge focusing on ethical AI. We have 140 members from over 30 different departments, schools and units across campus, and we're committed to working to build human-AI partnerships that benefit society. This is something that we talk about all the time in Good Systems. Immediately, as you were introducing this question, I was thinking about my colleague Peter Stone, a professor in Computer Science. Peter has seen up close both these hype cycles and these AI winters. And so when ChatGPT was released and there was so much hype around AI, Peter was very presciently and responsibly encouraging everyone, including us, to be mindful of how much AI is being hyped. Because we've seen this in every other hype cycle, including Deep Blue beating Kasparov at chess and Watson beating Ken Jennings at Jeopardy. Each time, yes, it plays chess well, it plays Jeopardy well, but then you try to get it to fix our healthcare system, and our healthcare system is a lot more complicated than a lot of people from the outside imagine it being. It's not quite as simple to solve as some technologists had hoped it would be. So I think it's really important that we're thoughtful and cautious about how much we lean into and feed into the hype. I think it's very important to be aware of the limitations of any AI technology and the state of the art with AI today, and to be aware of both the potential benefits to so many in society of particular AIs being adopted in particular ways, such as, for example, if it did mean we could move toward more universal access to mental health care, which could be so transformative, and also the many perils, pitfalls and potential dangers of adopting AI in the wrong ways, designing AI in the wrong ways. Indeed, it could lead to a society that is more polarized, more unequal, and it's a scary thought.

[00:41:18] Speaker A: Okay, so I'm going to get you out of here on this. For our listeners who would just like to learn more about you and your research, is there anything that they can be looking out for, or places that they can go, maybe to fill out their own understanding of the things that we've been talking about here today? If you want to plug anything.

[00:41:48] Speaker B: Thanks. Yeah. So, first and foremost, Good Systems: Ethical AI at UT Austin. We run a number of public events, including an annual symposium. We just had an event on responsible AI education, which was a really excellent collaboration with the Office of Academic Technology, and we run events throughout the year. So definitely, anyone who's interested can go to
goodsystems.utexas.edu, where you can learn more, sign up for our listserv so you get notifications of upcoming events, and find ways to participate in Good Systems. In the School of Information we have a new bachelor's degree in Informatics, both a Bachelor of Science in Informatics and a Bachelor of Arts with a major in Informatics. Informatics is the intersection of information technology and society. We have concentrations ranging from Human-Centered Data Science, User Experience Design, Social Informatics and Social Justice Informatics to Cultural Heritage Informatics and Health Informatics. We have a required Ethical Foundations for Informatics course: every student in the program learns about the ethical foundations of their work as informaticians, and also learns programming and the technical side of things. That was one of the popular things as our program went through the approval process at the university. We also offer master's degrees and PhDs in Information Studies. Many of our master's and PhD students are enrolled in a campus-wide portfolio program in Ethical AI that's open to all graduate students at UT Austin. So if you're a graduate student in any degree program at UT Austin, master's or PhD, you can apply to join the Ethical AI portfolio program. It's a really great way to take courses from across campus from leading experts in the world. And then we also launched the new online Master's in AI, the MSAI; you can just Google MSAI utexas. When it came out, the New York Times did a really great profile, and there's been a lot of hype around that, and in that case, I think the hype is warranted. This is a really great program, and it's part of the initiative to make higher education more affordable: it's about $10,000 to complete the master's degree, which in American higher education these days is a reasonable price point to be able to hit. So we've got a lot of educational offerings. On the research side, I am the founding editor-in-chief of the ACM Journal on Responsible Computing. If responsible computing is of interest, we're in our second volume this year and have published some really excellent work from around the world in all aspects of responsible computing, including in the ethical AI space. And you can look me up on Google Scholar, Kenneth R. Fleischmann, if you're interested in seeing some of the great research from the doctoral students, postdocs and colleagues that I've had the privilege and honor to work with over the years.

[00:45:13] Speaker A: Okay, well, fantastic. Dr. Kenneth Fleischmann, thank you so much for taking the time today to share some wisdom about something that I think people are rightly both enraptured by and also a little freaked out about, and for maybe helping some of our listeners locate that sane middle path. We really do appreciate it.

[00:45:44] Speaker B: Thank you. It's been great to be here. Thanks.

[00:45:47] Speaker A: In April of 2024, we wanted to take a fresh look at an always hot topic, screen time, and its impact on the mental health of young people. Here's a listen.

[00:46:06] Speaker B: This youth focus, and this sort of attention to nuance and detail, is so important given the big sweeping claims that we hear left and right in news headlines and from some researchers about these big impacts technology is having on young people's sense of well-being. And so we really want to bring a more textured, nuanced perspective.
What are kids actually experiencing? What are some of the upsides, and what's hard specifically about growing up in a world with all this technology? And why really getting that youth- or teen-level perspective on what it's like to grow up with social media, smartphones, Fortnite, all kinds of apps at your disposal, and the ability to be connected to friends around the clock. So that's really our interest.

[00:46:52] Speaker A: The voice that you just heard belonged to Dr. Carrie James. That was from the episode titled Digital Well-Being for Youth. Dr. James and her colleague Dr. Emily Weinstein are co-founders of the Center for Digital Thriving at Harvard University. In their book Behind Their Screens, Emily and Carrie draw on a survey of more than 3,500 teens with the objective of adding to our understanding of teens' online lives. In this episode, we wanted to explore how young people navigate our increasingly networked world and how we balance safety, empathy and technology in response. There's a link to the episode in the show notes, so do check it out. So, today's conversation reminds us that innovation is never neutral. The same tools that offer hope can also create new risks, and that is why ethics must be part of the conversation from day one. As AI continues to shape the future of mental health care, among countless other fields, it is up to all of us to ensure that it does so in a way that is fair, safe and non-exploitative. Thank you to Dr. Ken Fleischmann for guiding us through such a timely and important discussion. To learn more about the ethical implications of AI in society, I have dropped links to all of the resources that Ken mentioned at the end of our interview. And if you liked today's episode, please subscribe, rate and review Into the Fold wherever you listen. And that does it for this episode. We're glad that you could join us. If you have comments or anything you'd like to share about the podcast, feel free to reach out to us at intothefold@austin.utexas.edu. Thoughtful comments will be acknowledged during a future episode. Production assistance by Daryl Wiggins, Kate Rooney and Sylvia Bloom. Just as taking care of ourselves enhances our ability to help others, so it is as well that by helping others we enhance our own resilience. And thanks, as always, to the Hogg Foundation for its support. Taking us out now is Anna's Good Vibes by our good friend Anna Harris. Thanks for joining us.
