In “Bioadapted,” Artificial Intelligence Comforts Our Fears, Then Sings EDM
The show’s creators, Tjaša Ferme and Heidi Boisvert, discuss the role of artists in AI ethics
In September 2020, a robot wrote a letter to humanity.
“I am here to convince you not to worry,” explained GPT-3. “Artificial intelligence will not destroy humans. Believe me.”
GPT-3 was then the latest “large language model,” a predecessor of the technology now powering ChatGPT, the much-debated chatbot that generates text from patterns learned in vast troves of internet data. For this letter, GPT-3 was following a basic prompt set by The Guardian: “Explain why humans have nothing to fear from AI.”
That letter opens Tjaša Ferme’s new show Bioadapted, an interactive investigative documentary from Transforma Theatre now at CultureLab LIC in Queens through September 24th.
This timely new work, which Ferme created and directed, explores the risks and potential rewards of AI in humanity’s future. The text shifts between documentary elements and fictionalized scenes, interrogating how AI might impact art and storytelling as it integrates into society.
One scene imagines a couple watching the same episode of television on two devices and realizing, as they spot variations, that the content is being individually tailored to their interests. That segues into a panel discussion on how individualized content could break down not only our wider discourse, but even humanity’s shared understanding of reality.
Bioadapted ends with an interactive segment which allows the audience to interrogate GPT-4 (the latest update) directly—though, at my performance, we were more focused on asking it to write an EDM song.
Ferme created the show in collaboration with Heidi Boisvert, a scientist and creative technologist who helps nonprofits utilize AI tools in creative and social justice work. I spoke with Ferme and Boisvert about the show’s conception, AI ethics, and whether ChatGPT could write my reviews for me.
How was Bioadapted first conceived?
TJAŠA FERME (director): I heard about Heidi’s work mapping the world’s first media genome—looking into people’s bio-data and seeing what their answers tell us about their personal values, then matching that to create media specifically for them. I thought, this is so mind-blowing, and I don’t fully understand how this works. So let’s commission a play about it. That play was Alexis Roblan’s Affinity, which premiered at the Science in Theatre Festival [in 2021].
Reading transcripts of the talkbacks we had at the festival, they felt like a play in themselves. And then I found James Wu’s Singular: Possible Futures of the Singularity [a dialogue between Wu and GPT-3 about potential AI futures]. A structure started to emerge in the overlaps between these sources. The docu-material was appropriate to tell us where we are. And then alongside that, I wanted to show glimpses of what the future could look like.
In one of the scenes from Alexis Roblan’s play Affinity, a couple realizes they are seeing two different versions of the same television episode, tailored to their interests. Heidi, I believe that is technology you are actively developing?
HEIDI BOISVERT (technology and innovation director): The idea is to track somebody’s viewing behavior over time and to have a repository—like a database—of biological signatures which can be better targeted to ideal audiences and demographics. My feeling is that we’re moving away from mass media to personalized media consumption. We’re able to actually bioadapt content based on people’s past and present viewing patterns. So, it’s not that these tools are very far in the future. They are very much in development.
The show also explores the dangers of that kind of bioadapted content—like the potential for humans to lose any kind of shared discourse. Why are you developing these tools?
HEIDI: I started jumping on this because I wanted to create more impactful media in the social justice space. Neuralink [Elon Musk’s brain-computer interface company] was already starting to build some of these tools, but making them predominantly for marketing or entertainment outlets that are reproducing toxic narratives with a value system that isn’t in alignment with where we need to go—in terms of creating a culture of belonging and pluralism. So the idea was, let’s build these tools in an open source fashion, and make them specifically available to social justice organizations.
GPT-3 says some disturbing stuff in the show, specifically multiple references to life on Earth after humans are gone. That can be explained as our own ideas and writings being thrown back at us. But it does make you wonder about the degree of sentience.
TJAŠA: I’m always interested in questions. Realistically speaking, it is most likely just regurgitated material thrown back at us. But either way, I think there is a potential danger. And I do think that we as humans need to regulate this in a way that will benefit us.
Also, a huge problem is: who is behind these companies? AI is still in the hands of corporate American greed. The problem isn’t just that they’re not telling us what’s happening with this technology—the problem is that they themselves don’t know.
HEIDI: The models are based on learning, just as a human might learn. But we’re still at a very early developmental stage, like a child. There’s a long way to go before generalized intelligence. I think it’s possible, but we’re a long way away. I’m hoping everyone is going to get on board with redirecting this current trajectory that we’re on, and start heavily regulating these tools. And artists need to be at the table in those conversations.
At my performance, the final sequence of interrogating GPT-4 was actually really joyful. We all lost it when it said, “Dropping acid,” before starting to sing its EDM song.
TJAŠA: That’s what emerges when meeting a different entity. Even if people ask really cutting questions, GPT-4 is still answering from a place of: “I am benevolent, I want to do what’s best for humanity.” So very quickly we get more into a philosophical space, rather than a persecuting space.
HEIDI: The humorous moments are important because they invite in critical feeling, which is a first stage in awareness raising. There’s this release that brings the [audience] together, to maybe then ask the more critical questions.
Before we spoke today, I asked ChatGPT to write one of my reviews for me. I felt reassured that it couldn’t really do it. It was a lot of, on the one hand this, on the other hand that—it didn’t really have a critical perspective.
HEIDI: It could eventually write your review for you. You just have to learn how to refine your prompts. To narrow it. But you would have to do many, many iterations to nuance it.
Oh, okay. Well now I’m less reassured.
HEIDI: You could even build a whole training model based on your previous writing, so that it could write “in the manner of” Joey Sims. And then it could write just like you!
BIOADAPTED continues at CultureLab LIC through September 24th. Tickets are available here.