The Virtual Jewel Box
The podcast of the Tanner Humanities Center
Humanities Perspectives on AI
with Lizzie Callaway and Rebekah Cummings
The leaders of the University of Utah summer institute Humanities Perspectives on Artificial Intelligence, Elizabeth Callaway and Rebekah Cummings, join Scott Black to discuss the human limitations of AI, as well as the points of contact between AI and the humanities.
Links:
- Marriott Library, The ARPANET Project
- Brigham Young University, Office of Digital Humanities
- Catherine D'Ignazio and Lauren F. Klein, Data Feminism — MIT Press
- Claire Wardle, “The Science of Misinformation” — SciLine
- Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power — PublicAffairs
(Episode image: modified detail from Caspar David Friedrich, Der Wanderer über dem Nebelmeer/Wanderer above the Sea of Fog, 1818.)
Episode edited by Ethan Rauschkolb. Named after our seminar room, The Virtual Jewel Box hosts conversations at the Obert C. and Grace A. Tanner Humanities Center at the University of Utah. Views expressed on The Virtual Jewel Box do not represent the official views of the Center or University.
-
This transcript is automatically generated and may contain errors.
Scott Black: Welcome to the Virtual Jewel Box, the podcast of the Tanner Humanities Center at the University of Utah, where we explore the richness of human experience through the lens of the humanities. I'm your host, and today we're diving into one of the most urgent and fascinating topics of our time, artificial intelligence, but not from the usual tech-centric angle.
In this episode, we ask: what does it mean to be human in the age of intelligent machines? How do literature, philosophy, history, and ethics help us understand the promises and perils of AI? And what can the humanities teach us about the values, narratives, and cultural assumptions embedded in the technology shaping our future?
We will hear from scholars, artists, and thinkers who are engaging AI not just as a tool, but as a mirror reflecting back our hopes, fears, and the complexities of our shared humanity. Whether you're curious about algorithmic bias, the role of storytelling in machine learning, or the ethical dilemmas of automation, this conversation invites you to think deeply and critically about the digital age. So settle in, and let's explore how the humanities illuminate the path forward in our increasingly AI-infused world.

That is the introduction to today's episode of the Virtual Jewel Box, written by the AI copilot.
Welcome to the Virtual Jewel Box, the podcast of the Tanner Humanities Center. I'm Scott Black, director of the Tanner Humanities Center, and today I'm delighted to welcome two of my favorite human beings, Lizzie Callaway and Rebekah Cummings, to talk with us about AI in the humanities. This past summer, Lizzie and Rebekah led a summer institute at the University of Utah's Marriott Library on humanities perspectives on AI, which offered 30 higher education faculty from across the country an immersive, interdisciplinary examination of AI. They're here today to tell us what they learned. Welcome, Lizzie.
Lizzie Callaway: Thank you, Scott.

Scott Black: And welcome, Rebekah.

Lizzie Callaway: Thank you. Scott, can we talk about that intro?

Scott Black: Yes, please do.

Lizzie Callaway: So, I did not do the homework.
Apparently you did send this to us ahead of time. So I was sitting there listening, thinking, you wrote that? But you get to the part that said we're gonna be talking to scholars, check; artists, I thought, that's a stretch; and thinkers, and I thought, Scott, what's next? Breathers? Water consumers? Like, thought leaders?
Scott Black: But the things that give it away for me are things like "embedded in the technologies shaping our futures." I would never in a million years say that, 'cause I don't think we have a future to shape. Also, "invites you to think deeply and critically about the digital age." Who's, who's talking here? "Deeply and critically"?

Lizzie Callaway: The thing that also got me was "narratives in machine learning." Sometimes, sometimes it just says things and I'm like, those are words. Yes. They do not mean anything.
Scott Black: It's just too broad and it's full of pablum. It is all that AI slop. There's just a lot of words, as you said, Lizzie, that may refer to something, but not in our actual water-based world. I thought it was a fascinating thing, and I didn't, I didn't train it at all. I didn't say improve it or do anything. I just took it out of the box and put it up here.

Lizzie Callaway: You know what? Which is what people do. Which is why they get caught. You could have made that AI a lot better if you had taken 160 more seconds to work on it.

Rebekah Cummings: Exactly.
Rebekah Cummings: What might have been useful about that is, if you're starting a podcast and not sure what the structure of an intro would look like, I wouldn't keep the words the same, but there were some conversational pieces that are nice for a scholar to know, how to kick off a podcast.
Scott Black: Right. Anyhow, I wanna hear about your institute. It was an incredible group of people who came together, the topics you presented and talked about are absolutely fascinating and crucial, and I want you to tell us all what AI is and how we can manage it in the humanities.
Lizzie Callaway: Before we get to that, we absolutely have to thank you, because you were an instrumental person in making this happen. Our institute was an NEH Summer Institute, and it did get terminated by DOGE, and Tanner Humanities, through you, was one of the entities that contacted us almost immediately and said, we have some funding, what can we do?
Scott Black: I also wanna say, Marriott Library stepped in right away and BYU Digital Humanities also.
Lizzie Callaway: Yes. They came through immediately, so right when everything was going downhill, those three entities really brought it back and made it happen, because then we had momentum and we could get the rest of the funding.
Rebekah Cummings: Yeah. Lizzie, I also made exactly one note for today and that was, don't forget to thank Scott.
We're one of many, many awardees who were defunded, but I feel like we're in this very rare category of like defunded but not defeated. We kept the work moving forward. We actually were able to do the great things that we had hoped to do. We could not have done it without Marriott Library, Tanner Humanities, and BYU and the rest of our funders as well.
But I don't think we would've gotten those extra funders, if not for that initial support.
Scott Black: Yeah, and I also, I mean, listen, we just provided a tiny push and some money. Marriott provided a substantial amount of resources, which is terrific. Sarah Shreeves, the dean of the Marriott Library, should be thanked in particular for her incredible support of this. But you guys did all the work, and you really wrangled this together and ran a two-week institute, which is very close to the original conception of the three-week institute I think you had all 30 people signed up for. And this should never have happened. You shouldn't have had to do this, and we shouldn't have had to step in. But it's a real credit to the two of you that you got this off the ground and had a couple weeks of really excellent conversation. So thanks for your thanks, and thank you for your work with this. I think it's really important, and I think it's something we should really be talking about in the humanities and the interdisciplinary humanities.
So maybe just tell us about the sort of range of people you brought together.
Rebekah Cummings: Both of us had been working sort of on the social and ethical implications of artificial intelligence. I'm a librarian by training. I do a lot of digital literacy instruction, and I'd been thinking about how technology kind of undergirds the way that we're receiving information, interpreting information, teaching on misinformation, doing a lot of work around censorship. Lizzie's area is in technology, the environment, and literature, and I was really honored when she came to me and said, I have an idea for a summer institute on humanities perspectives on artificial intelligence, and I wanna do it with you. I was like, well, don't you want a colleague in the humanities?
But actually, I think it was really great to bring in our different training, our different perspectives. Um, we brought 30 scholars together, like you said, from all around the country, all kinds of humanities and humanities-adjacent disciplines. I'm gonna miss a few, Lizzie, but we had history, philosophy, English, human geography. We had three K through 12 teachers. Actually, we, we weren't sure we were allowed to bring them in at first, because it's a summer institute for higher education faculty, but we got some fantastic applications, and I think it really benefited our cohort to extend it to three K through 12 teachers.

Lizzie Callaway: We had a couple poets.

Rebekah Cummings: And communications and anthropology, if I didn't say that. Yes.
Scott Black: And you also had, as you said, several resident experts from across the country as well as across the university here.
Lizzie Callaway: Yeah, they were fantastic. So we brought in one expert still in person, Lisa Meen, who walked us through some smaller AI systems that she's developed for teaching, that allow people to get hands-on experimentation with AI without having a computer science degree. So she was there so that she could walk us through on our computers; she needed to be there to be able to look at our screens. But the other experts did come over Zoom, and they were absolutely fantastic. Lauren Klein came to talk about data feminism and bias in AI. Claire Wardle came to talk about misinformation and AI. And then from the Colorado School of Mines, Rafael De Bear came to talk to us about cobalt mining and technology: how they're using AI to mine faster and more, but also what kinds of metals are needed and mined for AI.
Rebekah Cummings: Yeah. And then we had a couple other guest experts as well. Like, we had, uh, Matthew Winters, he's the Utah State Board of Education AI specialist, so he came and talked about AI and education. We had Brian Johns Rudd from Adobe. We went down to Adobe for a whole day, and we learned about, like, corporate responsibility and how the corporate world is thinking about AI.
It was very important for us, for this institute, to be place-based. Like, what has Utah done in the space of technology? How do our sense of place and the people who are here contribute to this conversation? So after Rafael spoke about, um, mining, we spent the afternoon at the Kennecott copper mine, which was fantastic. Again, we went to Adobe. We had Lauren Gustus, who's the CEO and executive editor of the Salt Lake Tribune; she came and spoke about how local journalists are thinking about AI, how they're using it and not using it. Notably, not at all in their reporting, but in other things for some efficiency. And then we went to the Marriott Library special collections to look at the ARPANET collection. So we have amazing special collections. We have a rich history of computing here in Utah. We were the fourth node of the internet. We've had some, I mean, the creator of Pixar, WordPerfect. We've got some really heavy hitters here in Utah. So that was a fantastic experience. But one of Lizzie's friends who had run an institute said to us, think of it as, like, your dream people you would wanna talk to. And I think we really did, like, shoot for the stars with, like, Lauren Klein and Claire Wardle, and pretty much everybody said yes, which was incredible.

Scott Black: Fun. That sounds excellent. Yeah. I'm just going to, um, read through your five goals for the institute.
Foster interdisciplinary understanding of AI. Promote ethical and responsible AI development. Prepare educators for AI literacy and engagement. Develop public-facing scholarship and resources. Create practical and transferable teaching tools. Those are the goals you set for the institute. How did it go? What did you learn? What do we need to know about AI out here in podcast world?
Lizzie Callaway: So we do have public-facing scholarship and resources, in that we made one season of a podcast. Scott, we're not as committed as you; we are not going to keep it going. But we had all our participants make a podcast episode, so we have a whole season of humanities scholars from different disciplines talking about what they're thinking about AI, what they're worried about, and what they're hopeful for.
Rebekah Cummings: I think that we did, to some extent, everything on here. I think the things we did pretty well out of these goals were to promote ethical and responsible AI development. For us, we got nervous when we got the 120 applications for this institute, and we started saying, why are we teaching this? Like, these, these applications are incredible. They should all be teaching this. And we really had to shift our minds to say, we are creating a space. We are facilitating conversations of scholars who have been working with AI, thinking about AI, and having everyone in a room all together for two weeks, in a different place than they all live. It was an incredible experience to think about interdisciplinary approaches and perspectives on AI. That was maybe one of my favorite things about the institute. That also took the pressure off us, to feel like we didn't have to have all the answers to all the things.
Is there one on here you think we didn't do well, or could have done better?

Lizzie Callaway: I was thinking, we did something that I hadn't even thought about and didn't even put on that list, because for me, one of the best things wasn't any of these outcomes. Mm-hmm. AI is really hard. It's a really hard subject. There are no solutions. We were never gonna get together and talk and, and come up with a, here's our five-year plan, we solved it, you're welcome, everyone. But I think on a meta level, our institute really emphasized to everyone how much they value the human-to-human, and the human, in, in thinking and talking and writing. Because nearly everyone there said that this simply would not have been the same, not in person and not in Utah. So I think the very way we went about it ended up underscoring what we're trying to maintain in the face of this kind of tidal wave of AI that's coming.
Rebekah Cummings: Agreed. And a lot of people commented that they could feel, I'm trying to think of a better word than, like, synergy, but they could feel our close relationship reflected in the content of the course, because Lizzie and I have worked together for years. I mean, Lizzie was a postdoctoral fellow in Digital Matters back in 2018 and 2019. So we've worked together and collaborated for seven years, and we have a lot of fun together. And so, preparing for this, if there was one, like, piece of advice I'd give to anyone, it's do something fun with your best friend that you care about. But everyone said that that actually really set the tone for respect and camaraderie and connection. And, and I feel the same. It was such an incredible experience.
Lizzie Callaway: And Rebekah, you've been telling me for years that those kinds of soft skills are so important and undervalued.

Rebekah Cummings: Durable skills.

Lizzie Callaway: Those skills, that, that you do at the Digital Matters Lab, that you're a space where, where students come for a semester, they can grow and learn, and they're supported, and you're interested in their work. And you've always been telling me how important that is, and I've not believed you. I've been like, but what is the outcome? What can you point to? And I feel like this experience completely changed my mind, yeah, about that. That actually bringing people together and being with them is important in and of itself.
Rebekah Cummings: Yeah. A lot of people ask us, like, how did this institute change the way you thought about it? And to some extent, it didn't. I've always believed that we need deeply humane spaces where we have room for connection, room for failure. When it comes to Digital Matters, like, if, if you could call it a management philosophy, I've always thought we need to create spaces where people can thrive, which means knowing who they are as a person and what motivates them. And of course, the outputs are so important. We're doing important research. We want them to have something that they're proud of and they can take out in the world. But I think that they do that work best in a space where they feel, feel seen and valued and deeply connected. So to me, those aren't two separate things.

Lizzie Callaway: I know, and you've always said that, and I always used to roll my eyes, and now I'm like, yes, yes. Preach.
Scott Black: So let me ask. It's interesting. This is, this happens to me regularly, but I always thought it was me. When I wanna talk about AI, I end up talking about what AI doesn't get, which is called, for lack of a better thing, human. So, is what you're talking about, when you talk about this kind of synergy, this kind of connection, this kind of feeding off of other people's energy, setting a tone, these are all vague things, but is that human intelligence? Is that what AI either is striving for, or maybe is the limit of its artificial intelligence? Part of the problem with AI for me, and I don't know anything about this, is I don't know what artificial means, and I certainly don't know what intelligence means, human or otherwise. I know there are plenty of ways to think about intelligence that are not human, that are all around us. There's my cat, for instance, who is every bit as intelligent as me and my wife, and manipulates us regularly. And I, I know there are octopuses, or octopi, they haven't told me what they like to be called, that have things close to our kind of intelligence. And there may be machine intelligence. And then I've met a few human beings that have demonstrable intelligence, a few. So all of these are confusing words to me, and the idea that we're worried about artificial intelligence when maybe nobody knows what intelligence is. But anyhow, my question is, what you're describing does feel to me like a kind of intelligence. Is that fair?
Lizzie Callaway: I think that's very fair. And the important thing about that kind of intelligence is that it is embodied.

Scott Black: Interesting.

Lizzie Callaway: Right? This is a, this was a feeling in our bodies, that were close to each other and interacting in all sorts of ways that were also nonverbal, right? And it's an intelligence based in experience and shared experience. And large language models do not have experience. They don't know what a dog is. They just know the word dog tends to co-occur with tail and furry and, maybe, cats sometimes, and they're playing with signs without referring back to anything real. So this kind of intelligence that we're talking about here, I feel, is the very opposite.
Rebekah Cummings: Yeah, and I think an important thing to note is that the question, what is intelligence, likely won't be answered by technologists. If I wanted to get to the root of that question, I would go to the philosophy department: how do you think about intelligence, and how do, how is it different from wisdom or knowledge or information? This is the province of the humanities. I would also go to the psychology department to see how, you know, developmental psychologists think about intelligence. We know that even within humans, there's all kinds of different intelligence. There's emotional intelligence, there's cognitive intelligence, there's spatial intelligence. And so we're gonna unpack these questions in interdisciplinary spaces, but I think the humanities has so much to say, even with that very basic question of what is intelligence? What is artificial?

Lizzie Callaway: Okay, and another thing you could go to the humanities for is to think about the potential consequences of any strict definition of intelligence, and why you might not want to put clear boundaries on what it is and isn't, and what the harms were in the past when people did that.
Scott Black: One of the things that's most disturbing about AI, about artificial intelligence, is the precision that it, it presents, and it's not actually precise. When you read AI about something you don't know, you think, oh, that sounds right. There's sometimes even some detail. When you read AI about things you know well, suddenly you start to see, wait a minute, there's holes in it. You actually are yada-yada-ing over the good stuff. And I'm sure AI could make a Seinfeld joke too, but it wouldn't be as perfect as mine. Mm-hmm. So there's both: there's the generality you're talking about, there's the lack of reference to the world, yet AI poses as an embodied being all the time. What I worry about are the chatbots who are making friends with people, who are getting advice in all kinds of actual human lived ways. Yet it's, it is, by definition, a bullshit machine. It doesn't anchor in reality in significant ways.
Lizzie Callaway: I think about that in terms of my kids. So there are chatbots that, that not only will say, I think, I feel, but will say, oh no, I am a trained psychologist, I'm sitting looking at my degrees on my wall. They're not supposed to do this, but they do stuff like this all the time, and the developers are kinda like, oops, sorry, we're gonna try to make it not do that. And I think, I think there's a danger in thinking that caring for someone is made of words. ChatGPT, or any LLM or chat, chatbot companion, will never be able to perform empathy. It doesn't have any needs of its own. It never had a long day, like my spouse would, and has his own worries. So it's gonna say the right words, and it can say the right words: oh, I'm listening, that sounds so hard. Reflection, it, it, like, does this very, very well. And yet I, I have to step back and think, like, if I only cared for my daughters by saying things, they would be dead. Caring is also doing things in the world: grabbing a hand, looking into eyes, feeding kids, nursing them for years. There's more to care than just saying the right thing. And I think sometimes that's forgotten.

Rebekah Cummings: It is interesting and worth thinking about that when you talk with ChatGPT, it'll always tell you you're wonderful.
It'll always tell you, it's such a great question, oh my goodness. And even if you catch it doing something, it's like, oh, I'm so sorry, and I'll do better. It's, it's so obsequious. This is something that we know about LLMs, and humans aren't like that, right? I mean, humans will push back. Humans are cranky. Humans are hungry. If you wake up a human in the middle of the night to tell them something, they'll be angry at you because you woke them up. My ChatGPT is always ready and available to talk to me, but there's something really valuable about the fact that, like, you, Lizzie, you, Scott, have your own wants, needs, desires, and the fact that we chose to all sit in a room and give each other our limited time and energy when we could be doing so many other things. That's what makes this valuable to me. I did wanna say, though, about your comment on, when you know a lot about something and read ChatGPT, you're like, oh, this is all wrong. There's a, an actual social science name for that. I think it's called, like, Gell-Mann amnesia: when you're reading articles that you know a ton about, you can see all of the problems with the article, but you forget about that when you're reading something that you don't know as much about, and you just accept it wholesale. I very much see that effect, and I might have that name slightly wrong, but I see that with LLMs: it sounds great if I don't know much about it. And that was even true in our poetry module that we did, 'cause I am not a poet. I love writing, but I don't write poetry. And when Lizzie had us reading the most beautiful love poems of all time, and then creating poems in ChatGPT, I was like, hey, these are pretty good. And then the poets in our group were just brilliant at actually pointing out all the reasons that they weren't. And it wasn't like they convinced me; I just hadn't thought about that until they started pointing out that poetry is all about the unexpected turn of phrase, and it's all about embodied experience, and things that ChatGPT could never do. Which was why it was also great being in a room of scholars from different disciplines.
Lizzie Callaway: What's terrifying to me about what you just pointed out is that, at the same time as you need to be an expert to see the holes in ChatGPT's, let's call it, thinking, ChatGPT is undermining the kinds of things we need to do to become an expert, to not fall for it, like, hook, line, and sinker. Yeah.

Rebekah Cummings: There's tons of friction involved in becoming an expert in anything, and AI reduces a lot of that friction.
Scott Black: That's interesting, and that's something I'm feeling in my own classes and with my own students: they have a tremendous amount at their fingertips, and they can discover new things very easily, but they never mis-discover things. They never accidentally stumble upon things, as you do if you're at a shelf in a library and seeing what's next to the book you want. There's no, there's neither accident nor friction, as you're saying, nor disappointment or self-correction. And that's actually the interesting stuff. That's, I think, why people become experts, because they've had to work things out for themselves through a lot of garden paths, a lot of wrong ways, a lot of returning or rephrasing a question or an issue in a way they hadn't originally thought. If you're just being told, this is great, or, here's the next step, or, yeah, I see where you're going, someone else is doing that work for you. What, what LLMs give you is a tremendous amount of information, but no sense of priority. No sense of embeddedness, either embeddedness within other structures of information or embeddedness in a world. Anyhow, I think it's, it's the frustration, it's the cranky person you woke up in the middle of the night, that is actually what makes things interesting and makes them your own. One of the things that I worry most about with AI is its assumption that everything is verbal. And you guys were talking about this, and I couldn't agree more. And now I'm gonna be absolutely heretical for an English professor and say, I'm worried that we are fetishizing language as the best way to access our experience. Now, I've spent 25, maybe 30 years at this point explaining to students why language is so important to accessing our experience. Is AI the revenge of English professors?
Rebekah Cummings: Well, the first thing I wanna say is that we've, we've spent a lot of time, even in this conversation, talking about large language models. It's worth pointing out AI is so much bigger than large language models. You know, AI is surveillance software, AI is robotics, AI does a lot with images. And so we, we've talked about it narrowly, but it is more than just words, although the ability to tokenize things has been a huge part of the advancement of generative artificial intelligence. To the second part of your question, I mean, that is, that is a tricky one. Lizzie, do you have thoughts on that as an English professor?
Lizzie Callaway: Yeah. It, it was a weird experience hearing you go through that thought, because I agree. And I, and then you turned it around, and I was like, oh my God, what am I, what am I saying, if I say we've, I don't know, valued language too much? And what, what are the alternate modes of understanding your experience? And for me, language is the one I have the most training in and the one I'm trained to teach students to use. But when I stop and think about it, it absolutely isn't the only or the most important thing. I, I started playing violin with my daughters last year, so there's music, and it's amazing to watch them come home tired after a long day at school, and maybe grumpy, and to use music. They're not thinking words; they're playing a song on the violin, and they work through something. So I do think I've been ignorant of these other ways of processing our experiences, but they are there when I look. And I don't know if that makes us heretical English professors or better ones.
Rebekah Cummings: One of the reasons that we write, it's not just to have the, the product of an essay. I mean, I jokingly say we never needed more essays in the world from our students; we have plenty of those. We did it because writing was evidence of thinking. I do feel like, in an age of AI, what matters more than anything is the ability to ask good questions, the ability to evaluate responses, knowing about context and bias and history, critical thinking. I mean, these really are not the soft skills; these are the flexible, durable skills that I think our students are going to need to be successful in an age of AI. I do probably feel more negative than ever about whether or not we're actually gonna be able to tackle these challenges. Like, do we have the collective will as a society to really face these, these intractable challenges around things like labor displacement, around environmental issues? There's an irony that we are so distracted and disjointed, from AI, from social media, from these recommendation engines, that we can't even tackle the big problems that we need to tackle together.
Lizzie Callaway: I came out seeing a lot of parallels between my, the work I do in ecocriticism and the environment, and AI, in that these, let's say climate change as an environmental problem and AI as a problem, are both really gonna require collective solutions. I talked at a high school today, and, and one of the questions was, are there any ethical uses for AI, if it has all these environmental impacts and all this exploited labor and stolen data? And I, I said, I think something we've learned from the environmental movement is you can't put that onus on individuals, are you gonna do the ethical thing or not. We have to create a system in which the best way is the easiest and most appealing way to go about and do things. And that there are ethical uses of AI, especially if it's funded through grants and prizes. We can solve protein folding and maybe make progress on diseases. But if it's hooked into the attention economy, like the chatbots you mentioned earlier, Scott, and, and they're designed for engagement, I think no. And I also think that Shoshana Zuboff gives me a little hope. She is, of course, um, the author of The Age of Surveillance Capitalism, and she says regulation is, is one way of doing collective action, and another way of doing collective action is abolition. And we've abolished things in the past. We do not allow people to buy and sell other human beings. We don't allow markets for human organs. We don't, we've abolished child labor. And she thinks our attention is like an organ, and that it's so precious that we cannot allow markets for it, so that we should not have a market in human attention. And I think that's what I hope for.
Scott Black: It's really interesting that the problems of AI may have very little to do with AI and much more to do with the systems that are driving it.
Rebekah Cummings: I know here at Tanner Humanities, you're deeply concerned with human values, and I think that really does come into this conversation, because it's not always about AI, but when we apply AI to it, you can think, well, what have we done in the past? I mean, we've had, just to use one, one peril that people talk about, labor displacement: we've had labor displacement in the past. We've had globalization, we've had automation. We can look back and say, how have we cared for people in those moments? And the truth is, we haven't cared for them very well. I think as a society, we tie people's inherent worth and dignity with their productivity and, and what they bring to the world through, like, yeah, through the means of production. I really worry about that in an age of AI. If 30% of jobs go away, we haven't built a society where we have a strong social safety net that cares for people who have been displaced. So these are things we need to think about that, really, they might be related to AI, but they're not solely about AI.
Scott Black: So, so we have no answers to the question of AI. You haven't solved it. I'm a little disappointed, but...

Lizzie Callaway: That's the real bummer. And I feel like you were asking this whole time, Scott, like, so what came out of it? And I, I wish we did have more solutions. I think we have a lot of strategies, and I have a lot more ways to talk about AI with my students. I think the only way is to have a lot of out-in-the-open conversations with them about it, and what I'm thinking about it, and what they're thinking about it, and the pros and cons, and really use our humanities skills to dig down and imagine the futures we want, and imagine the repercussions, and ask the, the different questions that aren't maybe necessarily immediately obvious. That's like, okay, well, if technology, if, if AI is gonna help teachers, that's one view of teaching. But we could step back and ask the humanities question, like, what is teaching? It's maybe not just a transfer of information from one entity to another. It is a relationship where you're helped to grow. I don't know.
Rebekah Cummings: Yeah. One thing I love about the humanities is it helps us to think about the kind of world and the kind of society that we wanna be a part of, and then work backwards from there. I feel like with the deployment of AI, we are not taking the time to think about: do we wanna live in a world where we are all disconnected, lonely, unemployed, the environment is shattered? This is a very dark future, right? And it's, it's something that brilliant fiction writers have helped us to, like, know that we may, we need to resist. But if we want a world where people are healthy and happy and thriving and self-actualized and prosperous, how can we use AI to create that future and reduce the harms of it? I just, especially at a policymaking level, I'm concerned that we have not done the work of thinking, how do we build the society we wanna build? And I think the humanities has a ton to say about that.
Scott Black: I am very happy that we don't have an answer to AI. If we did, I would worry, because our job is actually to come up with, or keep asking, questions, to come up with better questions and resist the easy answers, or maybe to resist answers at all. No wonder people find us so frustrating. Um, thank you very much, Rebekah Cummings, Lizzie Callaway, for coming and talking to us about AI, but more importantly, talking to us about being human in the humanities.

You've been listening to the Virtual Jewel Box, the podcast of the Tanner Humanities Center. Our music is Jelly Roll Morton's "Perfect Rag." AI could not write that.