Lauren Lee McCarthy on Software Values

Lauren Lee McCarthy, celebrated artist, UCLA professor and creator of p5.js, spoke with Peter Bauman (Monk Antony) about Seeing Voices, her first long-form generative art project. They also discuss collaborating with AI and inclusive community building, while questioning the inherent values embedded in software.
Lauren Lee McCarthy, Seeing Voices (Test), 2024. Courtesy of the artist and Bright Moments



Peter Bauman: As the creator of p5.js, it might be natural to assume that your first long-form project would be procedural. Yet Seeing Voices does recall many themes of your work, like AI, personal boundaries, being present and the consequences of everyday life. Was it a considered decision to make your first long-form project AI-based rather than procedural?

Lauren Lee McCarthy: I was invited to be a part of the AI-curated group, so it pushed me in that direction. I'm always interested in subverting expectations, so the challenge of making a collection that wasn't procedural was really interesting to me. I was also grappling with what making an AI-generated image or piece actually means. In the case of this work, it's the text itself that is generated by AI. Then I'm performing as a human interpreter of that statement to make the image part.

Peter Bauman: How exactly is the image created based on the AI-generated text?

Lauren Lee McCarthy: They are real, physical drawings by my hand that interpret the text—part of my drawing practice, which I haven't really put out there, at least as an NFT. It's something really new in that domain. A lot of my work is about the play between the human and the AI. A lot of times I'm taking on the role of the AI and performing it as a human—I use that mindset to make these images. The text itself is generated from the Voice In My Head performance that I created with Kyle McDonald. The texts are excerpts from that performance, generated by the software we wrote with ChatGPT. For each phrase, I'm performing as if I were Midjourney. I'm being fed this prompt and then generating a drawing based on it. The two are put together to create the final image.

But there's also another step. First, I scanned all the drawings in high res. Then I wrote some software that takes the scans and puts some of the drawings together. It crops them in different ways, or sometimes puts two pieces together and overlays them, with the idea of creating a more coherent system across all 100 images. And then I'm ultimately selecting the outputs. Similar to the Voice In My Head software itself, there is a play between me having direct control, inhabiting the role of AI and selecting things, and then the randomness of the system that I've built to kind of augment myself. And I would say that ultimately, a priority in making the system was to not lose the human feel of the drawings. So I'm not doing any manipulation of the images that would take them out of the realm of something handmade and hand-drawn. I'm not manipulating the pixels, tinting them or things like that. But that is one more way that I was using software—to create a system that, in a way, augmented my own drawing process and added some randomness to it.
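To make the compositing step concrete, here is a minimal sketch in Python of the kind of system McCarthy describes: random crops of high-res scans, occasionally overlaid in pairs, with no pixel-level tinting or manipulation. The file paths, crop size and overlay probability are all assumptions; her actual software is not public.

```python
# Hypothetical sketch of a compositing system like the one described:
# random crops of drawing scans, sometimes overlaid in pairs, no tinting.
import random
from pathlib import Path
from PIL import Image

def random_crop(img: Image.Image, size: int) -> Image.Image:
    """Crop a random square region from a scan (assumed larger than size)."""
    x = random.randint(0, img.width - size)
    y = random.randint(0, img.height - size)
    return img.crop((x, y, x + size, y + size))

def composite(scans: list[Image.Image], size: int = 2048) -> Image.Image:
    """Sometimes overlay two random crops, sometimes use one alone."""
    a = random_crop(random.choice(scans), size)
    if random.random() < 0.5:                      # assumed overlay probability
        b = random_crop(random.choice(scans), size)
        b.putalpha(128)                            # translucent second layer
        a = Image.alpha_composite(a, b)
    return a

scans = [Image.open(p).convert("RGBA") for p in Path("scans/").glob("*.png")]
# The final selection pass stays human: the system proposes, the artist chooses.
candidates = [composite(scans) for _ in range(10)]
```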

Peter Bauman: You mentioned Voice In My Head, which is the companion piece to Seeing Voices. Seen as a whole, the projects seem like the perfect example of what you referred to before as reciprocal risk-taking and vulnerability in your performance, where both performer and audience relinquish control. Why do you see ceding control as so critical to your work?

Lauren Lee McCarthy: That's a good question. I think I've always been interested in control personally, especially when it comes to making work that's thinking about technology. There are a lot of ways that we're asked to cede control without much discussion or much choice in the matter. A lot of these systems are just rolled out without much transparency around what's happening with our data, what choices are being taken out of our hands, and how they impact our lives. So we're sliding into a bit of an autopilot mode where everything is recommended, suggested and customized. It offers some utility, but it can also be quite limiting, especially when you don't know the values on which that software is based when making those predictions. Playing with control in my work is a way of bringing light to some of those issues and also offering a different model.

We have to question what values are being built into the software we use.


A piece of technology is not neutral; it's built based on the biases, assumptions and priorities of the people who create it. Even if those are inherently good people, there are going to be people who are left out of the equation when designing technology. It tends to be predominantly white, male and Western. That's not to say there isn't technology being developed elsewhere, but a lot of these technologies are coming out of places like Silicon Valley.

We also need to question: how is this being made? And how much agency do we have to decide whether and how we want to use it? In a lot of my performances, there is sort of a surrendering of control or playing with that. But I'm trying to do it in a way that is consensual so that people who participate know the terms of what they're agreeing to and can make decisions about whether to participate or not and how. That consideration is a really different frame than the way that we're often expected to interact with technology.

The other thing is the shared vulnerability in the performance. There is an opportunity for connection with other people because you are both somewhat vulnerable stepping into it. There's a bond or a shared experience that emerges and I'm interested in the intimacy of what can happen in that space.

Lauren Lee McCarthy, Seeing Voices (Test), 2024. Courtesy of the artist and Bright Moments



Peter Bauman: You mentioned the values of software. Your practice brings attention to the values encoded in artificial systems and questions who is prioritized or targeted. How do Voice In My Head and Seeing Voices navigate these ethical dimensions, creating art that engages with societal issues?

Lauren Lee McCarthy: The Voice In My Head performance runs on custom AI software that listens to the environment. It imagines the scenario where things like ChatGPT are taken to their logical or extreme conclusion, where all the thoughts in your head are just replaced by AI. The participant puts in an earbud and is led through an onboarding, where they're asked to reflect on the current voice in their head, their inner monologue, and how they would like it to be different. Would they like it to be more encouraging, more supportive or funnier? They are able to customize how they would like their inner monologue to sound. Then, as they walk around and interact with other people, the software listens to the interactions they're having and offers a running commentary. It also uses a clone of their own voice, so it sounds like the voice in your head speaking to you. The style and content are customized based on your requested preferences.
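As a rough illustration only, the runtime shape she describes—listen, generate commentary in the requested style, speak it back in a cloned voice—might look something like the Python sketch below. Every function here is a labeled stand-in; the actual pipeline McCarthy and McDonald built is not public.

```python
# Hypothetical shape of a Voice In My Head-style loop. All three helper
# functions are stand-ins, not the artists' actual implementation.

def transcribe_ambient_audio() -> str:
    """Stand-in for speech-to-text on the earbud microphone."""
    return input("(overheard) ")

def generate_commentary(heard: str, style: str) -> str:
    """Stand-in for an LLM call that writes inner-monologue commentary."""
    return f"[{style}] Here's a thought about that: {heard}"

def speak_in_cloned_voice(text: str) -> None:
    """Stand-in for text-to-speech using the participant's cloned voice."""
    print(text)

def run_inner_voice(preferences: str, turns: int = 3) -> None:
    """Listen, comment in the customized style, speak as the inner voice."""
    for _ in range(turns):
        heard = transcribe_ambient_audio()
        speak_in_cloned_voice(generate_commentary(heard, style=preferences))

run_inner_voice("encouraging and funny")
```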

The performance directly questions, “Is this something we want? Do we want to have AI replace all of our thoughts?” On the one hand, it can feel quite dystopian. I think we would like to imagine that we have some sense of free will and control over our own thoughts. On the other hand, maybe all the AI-generated text and media are already seeping into our thoughts more than we realize. Then, what if you try this piece and it actually feels helpful? What if the voice is more supportive, more encouraging or helps you be a better person because of the way that you've customized the software? Even if you have doubts about the ethics of it, what if the outcome is something that you want in your life? How do you navigate that kind of tension?

Those are some of the questions in that software. By excerpting those lines for Seeing Voices, it's bringing to the forefront the wide range of outputs from the software while also pointing to the range of desires that the people using it might have had. When you hear a phrase that sounds particularly encouraging, emo or self-reflective, you can start to imagine how the owner of that AI voice in their head might have prompted it and who they might have been. Through the outputs of the software, you start to get a sense of its owner.

Peter Bauman: With questions of AI so central to your practice, I was reminded of a recent conversation I had with Christiane Paul about Harold Cohen. She commented that she doesn't really see artists today collaborating with AI as meaningfully as Cohen did with AARON. She did say there were exceptions, though. Do you think your work is an exception? In the collaborative process with Kyle for Seeing Voices, how did you see AI functioning as a part of that partnership?

Lauren Lee McCarthy: Yeah, that's a great question. It was really shocking. Kyle and I have made a lot of software-based things, including things integrating AI or writing our own AI software for the last ten years or so. This project was the first one where we could both really feel the leap that had happened in the AI space in general. So there'd be times where we would be discussing how we wanted the experience to run. And I would say, “Maybe it could ask this series of questions," or “It could respond in this way,” or “This could be kind of the format.” Then I would say to Kyle, “But how do we code that? That seems like it would be sort of hard.” He’d say, “Oh, no, it's not hard. We just tell ChatGPT to do that thing.” For example, there's an onboarding process for the piece where the person in the beginning is asked a series of questions. Rather than scripting out each question ourselves and trying to figure out which questions to ask, we basically described the situation to ChatGPT: “You are replacing their voice in their head. Lead them through an onboarding process so that you can be as efficient and helpful as possible.” Then we did some scaffolding and saw what the raw output was. After that, we could push back with our own modifications. It was a really different thing than step-by-step saying, “Here's the question,” and then “Here's going to be the response.” For each step of it, we were able to give the system a model of what the piece was and then ask it to propose features of the experience. So that was super interesting and I think a little bit scary, too, because of what we were talking about earlier—questioning what values are built into these tools.
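For a sense of what describing the situation to ChatGPT, rather than scripting each question, might look like in code, here is a minimal sketch assuming the OpenAI chat API; the model name, prompt wording and loop structure are all assumptions, since the artists' scaffolding has not been published.

```python
# Illustrative only: describe the situation to the model and let it run
# the onboarding, instead of scripting each question by hand.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [{
    "role": "system",
    "content": (
        "You are replacing the voice in a participant's head. Lead them "
        "through an onboarding process, one question at a time, so you can "
        "learn how they want their inner monologue to sound. Be as "
        "efficient and helpful as possible."
    ),
}]

for _ in range(5):  # a handful of onboarding turns
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    question = resp.choices[0].message.content
    messages.append({"role": "assistant", "content": question})
    answer = input(f"{question}\n> ")  # spoken aloud in the actual piece
    messages.append({"role": "user", "content": answer})
```

The push and pull McCarthy describes—seeing the raw output, then pushing back with modifications—happens at the level of this system prompt and the scaffolding around it.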

I started wondering what it means to be offloading more of our development of an artwork to this AI, or not necessarily offloading, but involving more creative input from the AI in the development of the piece.


So I would say that it felt like a collaborator. I think we were still really guiding the vision of it, so it wasn't necessarily like a third partner. It was more like we had a very helpful assistant that sometimes went totally off the rails and had to be reined in a little.

Peter Bauman: Reining in AI reminds me of something you’ve said about LAUREN, where you mentioned wrestling for control with AI. Since then, we’ve seen this jump in AI you just mentioned over the last year or so with ChatGPT. Did that jump mean a certain combativeness was no longer there? How did this recent shift relate to questions of control in your practice?

Lauren Lee McCarthy: That's a very good question. I think a lot of times in the past, I was making things that were approximating the ideal version of a piece of software but failing. That could be failure through my own humanness, for example. In the case of LAUREN, someone would say, “Lauren, can you turn on the lights?” And I'd hit the wrong button on my interface, or maybe I had coded some bug in there, and then the hair dryer would turn on instead of the lights. I'd have to say sorry and fumble to fix it, but people extended patience towards me that they would never extend towards Alexa because they knew I was a human. What was interesting about this recent work was that it was very seamless. It didn't have those glitchy moments of breakdown very often. Those glitches contributed to the critique I was trying to embed in the work. Those moments where everything breaks down felt a bit dystopian and dream-like.

But with Voice In My Head, there were fewer moments like that, which was something Kyle and I talked about. Does that obscure the critique in this piece? Or does it start to feel too much like a product?

Our hope was that the framing and the design of the experience itself left enough room for people to not just take it as a one-sided endorsement of technology but to still find the complication.


For example, the fact that it is intervening in your conversations and offering commentary means that it's creating these disturbances where you're kind of disrupted. You're then asking, “Do I want to be disrupted by this technology all the time?” And would it be better if it were more seamless so that I didn't even notice I was being interrupted? Or does that become even more dystopian—to know that my thoughts are suddenly being run by another system and I'm not even aware of it?

Lauren Lee McCarthy, Seeing Voices (Test), 2024. Courtesy of the artist and Bright Moments



Peter Bauman: It reminds me of Fahrenheit 451, where jarring sounds are used to intentionally disrupt and distract people. Before we go, I’d like to touch on community. As you're such a renowned community builder, what do you think Web3 communities can learn from your nearly ten years of community building?

Lauren Lee McCarthy: Well, I love that question, and I think it's not just what we can learn from p5 but how we can keep learning from the efforts of others who have tried different things in the past. Sometimes with Web3, there's such a tendency towards disruption and innovation and everything new that we forget that some of these ideas and efforts have been around for a while, and a lot of meaningful ideas have already emerged. So p5 was building on Processing, of course, but also on other efforts that have been considerate of issues of access and inclusion. I would place it in a longer continuum of feminist work, thinking about community. For p5 specifically, I don't want to tell anyone else what they should learn from it, but I can say what I learned.

One big takeaway was that some things, like making an accessible and inclusive space, take time and genuine connection. 


There needs to be time for people to have conversations, to hear each other, to think or to respond when there's a misunderstanding in the community. Sometimes, when you're moving at high speed, there just isn't the space for that. In the capitalist framework we live in, there's a priority on speed and productivity. We push back against that and say, “We're not going to move as fast as possible all the time, because we're going to be missing things.”

Another thing that happens when you move really fast is that the status quo tends to get reinforced because there's no time to imagine other alternatives or to make them happen.


Another takeaway is an openness to learn from other people, which I see in Web3 a lot and appreciate. Working as an artist can often feel very individual. But I think with p5 or with Web3, there's really this spirit that we're not doing it alone. We're making things using code written in part by other people, or we're building infrastructure that requires different components to come together and different people to collaborate. It gets away from this idea of the individual genius in their studio and moves much more towards an ecosystem of people supporting each other.

That's a strength of Web3 and something that we really embrace with p5—trying to highlight all the people involved and not just the most name-recognized artists or the most technical programmers. It's also key to recognize all the people doing other kinds of work, such as design, communications, outreach, organizing and leading. We have to realize that there are a lot of different roles that go into these projects or into these spaces and we should really value all of them, not create a hierarchy.




Peter Bauman: Getting away from the individual genius reminds me of François Morellet and GRAV, who sought to dismantle these notions through their collaborative practice in the ‘60s. You’ve talked before about how it was difficult to feel like you were part of the community when you first started because it was so male-dominated. In contrast, Maya Man told me how warm and inclusive the space was when she entered, largely thanks to you. What advice would you give to men in this space, including myself, about making the space more welcoming and inclusive—making it an overall better place for everybody and not just ourselves?

Lauren Lee McCarthy: That's a nice question. I think I would ask, and I'm thinking about this myself too, “How do you open up space?” I think it really comes down to: What are the things that you can meaningfully and structurally do to shift power? It's one thing to nod to women or whoever you're trying to lift up.

It's all very helpful to signal your support as an ally. But when we're talking about really shifting the dynamics, it’s about letting someone—a woman or whoever you're trying to lift up—actually lead.


That was one thing I noticed in working on p5: a lot of men were very supportive of the ideas. But then sometimes, when it came to things like organizing an event where I proposed that we do this series of activities, structure it this way or invite these people, I would get a lot of pushback. I would hear, “Oh, no, that's not how we've done it in the past. I don't think you know what you're doing.” And those are the moments when you need to actually just double down and say, “I have a different idea of what should happen here. Let's try that, because the current ideas are leading us in one direction—we've got a good space, but it's pretty biased towards certain demographics of people.” So maybe we really need some different ideas and different leadership in the space. A lot of the answer to your question comes down to making space for that leadership on all different levels.

A second thing I had a lot of conversations about with p5 was that sometimes I would hear people make this artificial argument that we're operating based on a meritocracy here, or that it's not right to just invite more women if they're not as qualified based on the quality of the generated artwork in this space. My responses to that were twofold: One, if that's really how you feel, maybe you should go back and look at the criteria you're using to evaluate quality in this particular frame. Maybe there's some adjustment that could be made there.

Two, if this really is the frame you've chosen and you're telling me the top ten people invited into the space are all men because they're the best, then I would suggest that we blow up that whole frame. It doesn't seem particularly useful.


I'm just not really interested in a scene that is exclusionary, that's so limited in its diversity, because I just don't feel that it offers that much to the world. Realizing this inclusivity entails a radical rethinking of the assumptions and the questions, rather than just asking, “How do we help teach another woman to code so maybe she can enter this space?”



---



Lauren Lee McCarthy is an artist, UCLA professor and creator of p5.js whose award-winning work explores social dynamics in tech.

Peter Bauman (Monk Antony) is Le Random's Editor-in-Chief.
