
With AI, ‘Reality Television’ May Become Even More Unreal

Netflix has its share of salacious, scandalous true-crime documentaries. But one such doc may be sparking a scandal of its own.

The doc: What Jennifer Did. It’s the sordid story of a young woman (Jennifer Pan) who allegedly spearheaded a plot to kill her own parents. (Her mother was killed during an attack on the Pan home; her father survived and says that he saw Jennifer walking around the home, unbound, talking with the attackers.)

Jennifer and her three alleged co-conspirators were convicted of the crimes, though all have appealed the decision. But Netflix’s 90-minute documentary—currently one of the streaming service’s most popular films—unpacks the alleged plot in chilling detail, leaving perhaps little doubt in viewers’ minds about what really happened.

But an interesting thing happened to What Jennifer Did on its way to streaming success: People started taking a closer look at some archival photos of Jennifer and opined they looked a little … off.

“The images that appear around the 28-minute mark of Netflix’s What Jennifer Did, have all the hallmarks of an AI-generated photo,” wrote Futurism’s Victor Tangermann, “down to mangled hands and fingers, misshapen facial features, morphed objects in the background, and a far-too-long front tooth.”

The executive producer of What Jennifer Did, Jeremy Grimaldi, offered a denial of sorts: “Any filmmaker will use different tools, like Photoshop, in films,” he told The Toronto Star. “The photos of Jennifer are real photos of her … The foreground is exactly her. The background has been anonymized to protect the source.”

But that did little to quell the controversy, and as Futurism notes in another article, “Questions abound. Did the film’s producers use existing archival images of Pan to generate new ones? Or were AI tools used to edit an existing image? Or do the images look like AI, but actually have another explanation?”

To those questions, we’d add another: Who, and what, are we to trust?

Listen, Plugged In has discussed the unreality of reality TV for a long time now. Unscripted shows feature very scripted premises and contrived dialogue. People on camera play exaggerated versions of themselves. And we still watch because, hey, many viewers like the end product.

But when we enter into the world of documentary—even one as inherently sensational as a true-crime story—viewers presumably go into it with different expectations. We expect the dialogue we hear to be what people actually said. We expect the facts of a case to be, y’know, factual. And we expect that the photos we see on screen are “real.”

But when plenty of us edit our own family photos—to make the skies look bluer or our cheeks look rosier or our families look happier—“real” takes on different connotations within our very own social media feeds. Even removing power lines from an otherwise clear sky still alters what was actually there. In an age when we’re able to “improve” upon reality with just a click or two, why would we assume that documentary creators would necessarily hold themselves to higher standards?

And as AI and photo-manipulation software get better, these problems will get worse. Already, we’ve seen plenty of instances of photos where celebs and politicians appear to do things they never did, and soundbites where they say things they never said. And while hopefully there will always be ways to detect such deepfakes with enough time and energy, society’s trust level has rarely been lower. We’re prone to distrust such instruments of detection, too—and maybe sometimes because we prefer to believe the fakes.

If there’s any solace to all this uncertainty, maybe it’s this: It’s nothing new. Having such a thing as “photographic evidence” is a relatively recent development in our culture, and people have always found ways to manipulate it. And while we don’t want to encourage overt cynicism in our children—that’s a recipe for stressed-out kids—we do want to encourage them to ask plenty of questions. It’s important that they not blindly accept what they see or hear on their screens, be it in a Netflix doc or their own social media feeds. Doctored photos or conversations can ruin friendships—and ruin lives.

Who can we trust? Jesus, always. But be mindful of what He said in Matthew 10:16. “Behold, I am sending you out as sheep in the midst of wolves, so be wise as serpents and innocent as doves.” Or, to paraphrase a Russian proverb popularized here by Ronald Reagan, “Trust, but verify.”

Paul Asay

Paul Asay has been part of the Plugged In staff since 2007, watching and reviewing roughly 15 quintillion movies and television shows. He’s written for a number of other publications, too, including Time, The Washington Post and Christianity Today. The author of several books, Paul loves to find spirituality in unexpected places, including popular entertainment, and he loves all things superhero. His vices include James Bond films, Mountain Dew and terrible B-grade movies. He’s married, has two children and a neurotic dog, runs marathons on occasion and hopes to someday own his own tuxedo. Feel free to follow him on Twitter @AsayPaul.
