“Nothing, Forever,” the hit AI-generated never-ending comedy show that has been streaming on Twitch since December, got pulled offline this week.
The show is an AI-written, never-ending, unofficial “Seinfeld,” following an animated comic named Larry Feinberg and his friends as they chat about, well, nothing. Other than the base animation and laugh track, 100 percent of the content is generated live by OpenAI’s GPT-3 language model.
Everything was going great until last week, when Larry delivered a transphobic comment during one of his stand-up sets, violating Twitch’s terms of service.
The creators of “Nothing, Forever” had included some moderation filters to keep things in check. However, during a switch to a new AI model, the filters failed, leading to Larry’s comments.
Twitch promptly suspended the channel for two weeks after the transphobic material aired.
The creators of the show were quick to distance themselves from the AI-generated content, explaining in the show’s Discord chat that it was “an unfortunate mistake.” They also emphasized that the content did not reflect their beliefs in any way.
After the two-week suspension, “Nothing, Forever” will be back, but the incident has raised some larger questions about the ethics of AI-generated content, which will become more and more ingrained in our lives moving forward.
Essentially, is it the responsibility of content creators to put ethical guardrails on the AI bots they use? (Which could be counter-intuitive, since the whole point of AI is that it can “think” for itself so humans don’t have to be involved in the content output.)
Or is it the responsibility of the AI developers themselves — like OpenAI, Google and others rushing into the space — to put ethical guardrails on their AI? Or should governments be the ones calling the shots?
And then, who decides? How quickly can they decide? And ultimately, whose biases and values will be the ones infused into the AI?
“Nothing, Forever” was just a silly experiment to see if AI can create comedy and plot lines comparable to human writers. (And if you watched it, the answer is a resounding “not yet.”) But this incident does raise a larger question about the responsibility of imposing human ethics on artificial intelligence.
While the show was suspended over a technical error, the misstep highlights the fact that AI is only as ethical as the data it is trained on and the guidelines it is given. “Nothing, Forever” is opening up a much larger conversation. And that’s not nothing.