By Jennifer Walden
In May 2021, Flawless released TrueSync, an AI-powered visual dubbing tool that refines localization results by generating new lip movements mapped to the new-language audio. It has helped bridge the language gap in storytelling: when the visual lip-sync matches the dialogue delivery, the performance feels more natural, allowing the audience to connect with the story and the characters.
The development of TrueSync led the team at Flawless to apply the concept of lip-sync matching in a new, creativity-driven way. Their latest tool, DeepEditor (released in February 2025), utilizes performance-based AI to match lip movements to new dialogue, enabling editors to preserve the actor’s emotional delivery even when lines are altered. While TrueSync is a functional solution for improving localization lip-sync (i.e., changing from the original language to a new one), DeepEditor is a creative tool for changing dialogue content while preserving the actor’s lip-synced performance on screen.
Let’s say a director wants to remove excessive profanity from the actors’ on-set performances. Instead of cutting away from an actor’s face to hide dialogue edits, the director can have the dialogue editor remove the offensive words. Then the picture editor can use DeepEditor to generate lip movements for the edited dialogue. This is precisely what director Scott Mann did for his film “Fall.” Editor Rob Hall was able to use the new profanity-free dialogue scenes in his cut, effectively changing the film’s R rating to PG-13, making it watchable for a broader audience.

According to the Flawless website, DeepEditor uses a patented deep learning process called “DeepCapture” that takes a detailed 4D scan of the actor’s existing performance, learns the actor’s intricacies and nuances, and then creates a visual result for the new dialogue that’s driven by the actor’s original performance. (So, it’s not “puppeteered” or faked by someone else).
Scott Mann, Co-Founder & Co-CEO at Flawless, is a DGA director, WGA writer, and producer. He’s worked with A-list actors such as Robert De Niro, Pierce Brosnan, Dave Bautista, Robert Carlyle, Kate Bosworth, Jeffrey Dean Morgan, and Bruce Willis. In addition to “Fall” (2022), he’s directed “Final Score” (2018), “Heist” (2015), “The Tournament” (2009), and series such as “The Oath” and “Six.” His passion for filmmaking runs parallel to his lifelong passion for science, innovation, and storytelling. Mann and Flawless Co-Founder & Co-CEO Nick Lynes are I-X Fellows at Imperial College London, working with the institution’s cutting-edge AI research center.
Mann said, “Flawless has been about building technologies that transform, augment, and revolutionize the filmmaking workflow. The Flawless technologies we’ve been building aim to make filmmaking better from the filmmakers’ perspective, whether that’s the actors, directors, editors, and so on. Making film productions is very hard, and often there are huge amounts of creative compromise. The tools we’re building will reduce creative compromise and make content better, easier, and faster, ultimately lowering the price point at which you can create it, as well. DeepEditor gets you in this world of elemental editing, where you have more control and more flexibility, and you can do a lot more with the footage you have.”
Editor (and former Local 700 member) Rob Hall, Chief of Film Innovation at Flawless, has over 25 years of experience in film editing, visual effects, and post-production. In addition to editing Mann’s films, Hall edited director Matthew Vaughn’s “The King’s Man” (2021) and was first assistant editor on “Resident Evil: Apocalypse” (2004). Like Mann, Hall has a passion for technology. He earned a bachelor’s degree in electronic engineering and created speech processing software that Schlumberger (now SLB), a global technology company, sought to acquire.
As filmmakers, Mann and Hall are deeply aware of the potential impacts that AI and machine learning tools can have on the filmmaking industry. This is a key consideration when developing tools for Flawless. Hall said, “DeepEditor was designed with ethics, and with consent and ownership of rights, fully in mind and accounted for, while also meeting cinematic quality standards. This is a pretty unique objective within the world of AI. No one else in this sphere has achieved this level of quality and artistic consideration. Having copyrightable outputs is a big part of what we build.”
DEEPEDITOR FOR POST-PRODUCTION
DeepEditor is a web-based technology that works with host NLEs: DaVinci Resolve, Adobe Premiere, Avid Media Composer, or Final Cut Pro X. (Note: Google Chrome is currently the only browser fully compatible with DeepEditor.) Editing is done in the standalone DeepEditor software, and processing is done in the cloud. Hall noted, “The Flawless platform is fully secure, TPN certified, and every project is treated with the trust and confidentiality expected from the highest levels of the industry.” Once processing is complete, the generated visual dub (or “vub”) can be exported/downloaded from DeepEditor and imported back into the host NLE. The exception here is Avid Media Composer.
During the NAB show this past April, Avid announced that DeepEditor will be integrated into its updated Media Composer software, accessible directly from the Tools menu, making it the simplest way to insert DeepEditor into the creative workflow. Clips can be pulled from the Media Composer bin into the DeepEditor tool (inside Media Composer) for processing. Once complete, the generated vub can be dragged and dropped back into the bin.
Hall, who had demonstrated an early version of DeepEditor’s Avid integration during a British film editors panel in February 2025, said, “Bringing it up within Avid produced some audible gasps amongst the editors surrounding the desk. People are naturally wary of new tools, so to see DeepEditor integrated into Avid immediately made everybody start paying attention. Seeing the ease with which we managed to generate vubs was received expectedly well.”
Mann added, “It is a big turning point in Avid’s history. DeepEditor will change how we’re going to edit and create from here on out, just as Avid, as a new technology, had changed editing. DeepEditor is going to be this new tool that’s augmenting and revolutionizing how you edit and how you make a film.”

There are a few essential steps to creating a vub with great results, so reading and following the “Getting Started With DeepEditor” guide (https://kb.flawlessai.com/getting-started-with-deepeditor) is strongly recommended. For brevity’s sake, the general process involves selecting your “source media” (i.e., the original shot you want to change) and your “driving data” (i.e., the new performance or dialogue you’d like to transfer to your source media). Click the “Create Vub” button, enter “Vub Name” in the pop-up box, and select one of the four output quality options (Draft, Standard, Professional, or Theatrical). While the vub is being generated, you can check the status in the “Vubs” tab. Once complete (and a vub is generated), DeepEditor will automatically create an export, which you can download by going to the Exports page.
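For readers who think in code, the steps above can be sketched as a small data model. This is purely my own illustration of the inputs a vub needs — the class and field names below are hypothetical and are not DeepEditor’s actual API, and no pricing figures are implied.

```python
from dataclasses import dataclass

# Quality tiers as named in the DeepEditor UI (per the guide above).
QUALITY_TIERS = ("Draft", "Standard", "Professional", "Theatrical")

@dataclass
class VubRequest:
    """Illustrative model of a vub generation request (hypothetical)."""
    source_media: str   # the original shot you want to change
    driving_data: str   # the new performance/dialogue to transfer onto it
    vub_name: str       # name entered in the "Create Vub" pop-up
    quality: str        # one of QUALITY_TIERS

    def validate(self) -> None:
        if self.quality not in QUALITY_TIERS:
            raise ValueError(f"unknown quality tier: {self.quality}")

# Example: preview a dialogue change cheaply as a Draft before
# committing to a Theatrical render for the final cut.
req = VubRequest(
    source_media="scene12_take3.mov",          # hypothetical filenames
    driving_data="adr_line_profanity_free.wav",
    vub_name="scene12_clean_line",
    quality="Draft",
)
req.validate()
```

The Draft-first pattern mirrors the pricing model described below: iterate on story options at the cheap tier, then re-render only the keeper at a higher quality.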
The output options are priced according to quality — the higher the quality, the higher the price. A draft vub, meant for quickly previewing different storytelling options, costs substantially less than a theatrical vub, which is intended for use in the final cut of a film or series. This type of pricing scale allows filmmakers to experiment without draining the production’s budget.
Interested in trying out DeepEditor for free? Check out the Flawless site (https://www.flawlessai.com/deepeditor), click “Try DeepEditor,” and fill in the request form. The free trial provides access to three Draft Vubs and one Standard-Quality Vub.
CAPABILITIES AND MAIN USES
There are three main use cases of DeepEditor: “In-Vision ADR,” “Performance Transfer,” and “Performance Editing.”
In-Vision ADR is used to change a line of dialogue, which is a typical occurrence in any production. Hall said, “I’ve cut 15 films and I averaged about a hundred ‘change lines’ of dialogue in every single one. When you’re changing dialogue, you have to cut to a reverse shot over someone’s shoulder, or cut away to something else, or reshoot the scene, which is expensive and time-consuming. So, changing a line has always been a very significant problem. The golden rule of editing is that if you want the audience to absorb a line, you need to see them say it.”
Using In-Vision ADR, the actor’s visual performance can be matched to their newly recorded line of dialogue, eliminating the need for a cutaway. “You’re staying on that person’s face. It’s an incredibly freeing mechanism for adjusting your story in the edit creatively,” noted Hall.
Performance Transfer lets editors take an actor’s facial performance from one shot — whether it’s from the same scene or a different one — and apply it to another. For instance, if the most emotionally resonant delivery is in a mid-shot but the preferred framing is a close-up, the editor can blend the two, preserving the best performance in the ideal composition. “So you get the best performance in the frame size that you want, which allows you to structure the scenes exactly how you want and have the best performances,” Hall said.
Another example of using Performance Transfer is to take an actor’s performance (including dialogue) from a deleted scene and transfer it to a shot of the actor being used in the cut. “It doesn’t matter about lighting conditions, or head pose, or anything like that. You’re transferring the actor’s performance, not the image,” added Hall.
Performance Editing allows the editor to control the pace of a dialogue performance, thereby controlling the pace of the scene. Hall explained, “In the editing room, we always have instances where the rhythm of the performance wasn’t quite right. Or you want to hold for a beat at the end of a line of dialogue but the actor goes to the next line. With DeepEditor, just by editing the audio, you can alter the pace of the scene. The newly paced dialogue is used to generate a new visual performance.”
DeepEditor is only changing the area below the eyes, in the mouth region; it’s not changing the overall physicality of the actor. “If you have physical emphasis points and physical efforts that correlate to emphasis points of the dialogue, then you don’t want to go outside of that. If you have a disconnect between the overall physicality and the placement of the dialogue, then it’s difficult to make it sit believably,” noted Hall. “I always have a very simplistic way of assessing whether or not a piece of dialogue or new dialogue placement is going to work. I physically put my hand over the mouth on-screen and watch the eyes and the body movement. If I believe that they could have said that, then it will work.”
With all of DeepEditor’s uses — whether it’s using an actor’s ADR performance, their on-set performance, or an alternative shot of their performance — the key is that the generated vubs are driven by that actor’s performance.
FINE-TUNING RESULTS
Since the goal is to keep as much of the original performance as possible, editor-created refinements can be applied to the “out of the box” vub, and the Interpolation timeline is one of the key controls at the editor’s disposal. In the Interpolation timeline, when the value is set to 0, the editor is only using the original performance. When the value is set to 100, they’re using all of the “selected” performance. The value can be adjusted to any number between 0 and 100. By changing the value at different points in this timeline, the editor can refine the transition between the original performance and the “selected” performance.
Hall said, “It’s not a cross-dissolve or anything like that. It’s a change in the facial expression. You’re adjusting the amount of change, and working with that ethos of only changing what needs to be changed and retaining as much of the actor’s original performance as possible.”
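Conceptually, the Interpolation timeline behaves like a keyframeable blend weight between the original performance and the selected one. The sketch below is my own illustration of that 0-to-100 blending idea, not Flawless’s implementation — how DeepEditor actually mixes facial expressions internally is not public.

```python
def blend_performance(original: float, selected: float, interpolation: int) -> float:
    """Blend one facial parameter between two performances.

    interpolation: 0 keeps only the original performance,
    100 uses all of the selected performance (as in the
    Interpolation timeline described above).
    """
    if not 0 <= interpolation <= 100:
        raise ValueError("interpolation must be between 0 and 100")
    t = interpolation / 100.0
    return (1.0 - t) * original + t * selected

# Keyframing the value across a shot refines the transition:
# hold the original, ramp into the new performance, then ease back out.
keyframes = [0, 0, 40, 100, 100, 60, 0]
```

The point of the keyframe curve is exactly the ethos Hall describes: change only the frames that need changing, and return to the actor’s original performance everywhere else.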
Another feature is the Scale timeline, which adjusts the performance size. Maxing out “scale” would make the performance loud and expressive, more like yelling. Reducing “scale” would make the performance more of a whisper. Hall explained, “You might have an audio-driven adjustment. So in In-Vision ADR, you may want to increase the performance size of the dialogue. We try to retain the original performance in our Driving Data as much as possible, but if you’re changing a line of dialogue, you’re inherently giving that line a slightly adjusted meaning. Otherwise, you wouldn’t be changing that line of dialogue! And if you’re retaining the original ‘performance size,’ then that might not be the right performance for the new line of dialogue. You might want to adjust the performance to become slightly bigger or slightly smaller, depending on the new meaning of the new line of dialogue.”
The editor can also adjust playback speed. “When an actor is talking fast, sometimes it helps to perceive how accurate the lip-sync is by playing the clip at half-speed,” noted Hall.
DEEPEDITOR FOR ON-SET PRODUCTION
While DeepEditor is typically implemented in post-production, Mann noted that it can also benefit a production on set. He said, “Performance Transfer, in particular, heavily changes how you direct on set, how you plan, and how you even set up the project. We’re only just starting to see the reality of that on a few productions going on at the moment that are using this technology.”
Mann explained that as a director on set, he shoots the master and then all the coverage separately, angle by angle, trying to get continuity to match the master shot. “You’ve got this tremendous amount of repetition, which is why it takes half a day to a whole day to shoot one scene. No one enjoys the repetition — actors don’t, and directors don’t. It’s part of our old process. What we’re seeing now, using DeepEditor on set, is that directors can say, ‘Oh, actually, I got the performance in that mid shot, so I only need one take of this angle and it’s fine. I can move on.’”
He continued, “As a director, DeepEditor is this empowering new tool. It makes the directing job — the on-set aspects — so much easier and faster. We have these new tools that affect all stages of the filmmaking process and that expand filmmakers’ capabilities while lowering costs, changing the types of projects filmmakers can choose to create, and how they approach production.”
FLAWLESS A.R.T. (ARTISTIC RIGHTS TREASURY)
As mentioned, Flawless designed its AI tools with consent and ownership of rights as a core principle. The company developed the Artistic Rights Treasury (A.R.T.), a rights management system for generative AI, to protect artists and rightsholders and ensure ethical, fair use of the technology. A.R.T. tracks and links consent, providing a record of a production’s rights clearance process. “We’re very happy with A.R.T. and see this being used as a framework for how AI should be used in workflows. We’ve done a lot of work to make sure all the models are clean so the outputs are copyrightable and consented to,” said Mann.
A.R.T. is fully integrated into the DeepEditor workflow. Every vub generated is watermarked by default. To access a non-watermarked export, the editor must initiate a formal consent request. Selecting “Download No Watermark (Needs Consent)” triggers a clearance workflow, prompting a notification to the production’s rights clearance manager. The vub is then reviewed and approved by all required stakeholders — including the actor(s) involved — before the download becomes available. This ensures that every final export is backed by documented, informed consent.
Mann revealed that substantial work has gone into simplifying the process as much as possible. He said, “We’ve been working to make it super easy for editors to understand what requires consent, who looks after consent, who’s responsible for it, and who needs (if and when) to give it. That’s why A.R.T. and the rights manager are baked into DeepEditor, making that process as straightforward as possible.”
He noted that actor consent is a complex topic, and Flawless has worked diligently with the Screen Actors Guild, which has already clarified (during the Guild agreements last year) what doesn’t require consent and what does. He said, “We worked through all that with the guilds — with particular focus on the Screen Actors Guild. They made it quite clear that if it’s not a substantial change or it’s something the actors already recorded, then it falls under pre-consent according to SAG. For any changes that fall outside that boundary, it’s just checking that the right people are consenting to it at the right point. That’s all baked into the workflow and occurs when you’re moving to the online phase. You’ll be dealing with the DeepEditor consent the same way you would deal with stock footage or licensing music. It’s the same kind of department.”
DEEPEDITOR’S IMPACT ON FILMMAKING
Understandably, new advancements in technology can be unsettling due to their impact on the industry, yet change is inevitable. How will DeepEditor change filmmaking? Who will feel the effects of that change the most, and in what way? An ethical approach to change requires forethought. As Mann stated above, he sees DeepEditor changing the filmmaking process in much the same way Avid changed it years ago with the switch from analog to digital. The Flawless team has approached the development of their tools as filmmakers who care deeply about filmmaking and support the performances of the actors bringing those stories to life. Mann concluded, “In terms of technology as a whole, Rob and I both approach filmmaking from our perspectives of directing, editing, and making movies, and we look at what the best tools are that can help the storytelling process.”
