
How An AI-Edited Film Speaks to a Bigger Concern

Imagine this scenario: You’re a movie producer, hoping to get your film in front of as many people as possible. But when the MPA returns with your rating, it has slapped an R on your film for crude language improvised by your actors. That’s bad news: That little R excludes your movie from a significant portion of prospective viewers, whether they’re teens or wary families. Worse, it means you’ll either have to accept the rating or schedule another costly day of reshoots to fix those scenes.

Or …you could use AI to alter what’s being said in the first place.

If you happen to be Scott Mann, the director of 2022’s Fall, this may sound familiar.

It’s exactly what Mann and his team chose to do for the film, which originally had, according to Mann’s interview with Deadline, “about 35 f-words.” Using Mann’s AI filmmaking tool, known as Flawless, they edited those expletives down to two to clinch a PG-13 rating.

As you’ll note in our own review of Fall, we likewise only caught two uses of the f-word—and we didn’t notice any indication that there had once been more.

Dubbed “the world’s first licensable Generative AI film editing software for professionals,” Flawless works using two distinct features: TrueSync and DeepEditor, both of which are showcased in an example video below (with a quick warning regarding a couple of censored f-words).

TrueSync lets an editor alter a subject’s lip movements when a film is dubbed into a different language. A character speaking English could be dubbed into Japanese, for instance, and the software alters the character’s lip movements so his or her mouth moves as if speaking that language.

DeepEditor is designed to spare directors expensive reshoots when a line of dialogue needs to change. According to the website, “DeepEditor allows you to make dialogue changes, synchronizing the on-screen actors’ mouths to the new lines without having to go back to set or compromise in the edit.” In essence, it allows editors to modify the mouth and lip movements of the actor to match whatever dialogue changes are necessary.

So is this a legitimate use for otherwise frightening deepfake technology? Not exactly, Flawless insists, because its software isn’t deepfake technology at all. While deepfakes act like a “face filter,” Flawless uses something called “DeepCapture,” which “takes a detailed 4-D scan of the actors’ existing performance, enabling the DeepCapture system to learn the intricacies and nuances of the actor themselves. The end result is that any new lines are driven by the actors’ original performance, not puppeteered or ‘faked’ by someone else.”

Still, software such as Flawless certainly further muddies the question of what we’re actually seeing on screen—and it could make reviewing movies both more interesting and more challenging.

For instance, Flawless’ software could make it possible for a film to be released with both a PG-13 and an R rating so that audiences could choose which they’d rather attend. Likewise, I immediately wondered how companies like VidAngel, whose streaming service is centered around helping families avoid unsavory dialogue and scenes, might utilize something like this for their own platform. But this software also speaks to a bigger concern: unauthorized content manipulation.

Flawless, to its credit, aims to maintain actor and artist integrity through “legitimately sourced data and a rights management platform, to enable consent, credit, and compensation in the age of AI.” However, it is easy to see the ethical concerns that similar technology might raise.

Of course, the ability to alter photos and videos existed long before 2022, when Mann’s film released. But what was once a time-consuming manual ordeal can now be done automatically in less than a day—and more convincingly, too.

The technology to alter what someone is saying or doing may seem benign when used with the subject’s consent. But with the right software, it’s just as easy to achieve the same effect against someone’s will. In less than a decade, we’ve already seen it used in all sorts of ways—from positive uses such as the film editing above to serious negatives such as bolstering false accusations or faking the words of celebrities and political figures.

And as AI software continues to advance and become easier to use, parents need to start having conversations with their children about the inherent dangers of social media and maintaining a public image online. Because there are those out there who won’t ask for your consent to manipulate your photos, videos or audio files to change how you look or make you appear to say or do things you’ve never said or done.

And we cannot stress vigilance enough. This is happening now, to people of all ages. And, oftentimes, a single altered photo is all it takes. A recent CBS story highlighted a lawsuit against a teenager who allegedly used AI software to digitally remove the clothing in photos of fellow female students. National Public Radio notes how the Federal Trade Commission awarded prizes to organizations that could devise ways to detect whether a voice was real or an AI imitation. And the Associated Press recently published an article explaining how to discern whether an image is real or a deepfake.

The age of AI can be scary. It can be hard to know who or what we can trust, and it can be a bit frightening to see how easily an edit—even one made just to eliminate a swear word—can alter someone’s speech. If anything, AI’s ability to easily manipulate our online presence is merely reflective of a culture that has increasingly struggled with objective truth. But that’s all the more reason for parents to carefully guide their children as beacons of that truth. Because if the online world only serves to stir up doubts and uncertainties, children will be all the more willing to hold onto parents they can trust to act as firm foundations.

Kennedy Unthank

Kennedy Unthank studied journalism at the University of Missouri. He knew he wanted to write for a living when he won a contest for “best fantasy story” while in the 4th grade. What he didn’t know at the time, however, was that he was the only person to submit a story. Regardless, the seed was planted. Kennedy collects and plays board games in his free time, and he loves to talk about biblical apologetics. He thinks the ending of Lost “wasn’t that bad.”

6 Responses

  1. Scary stuff. AI-altered content is becoming inescapable, so educating kids is definitely the right move.

  2. Agreed, good article. I think this sort of manipulation, even if consensual, is too dangerous to be left unregulated, simply because of its potential to portray someone as saying something they didn’t say.

    1. Yes, here it was used relatively benignly, but even though actors and actresses have agreed to be in these movies, this could be used to change performances in ways they would not have approved of.