by Michael Kunkes, photos by Gregory Schwartz
When director James Cameron realized that editor James Cameron was going to need a larger team to help navigate the brave new world he was creating for Avatar, he turned to two editors who, in their own right, have often been at the leading edge of technologically challenging projects: John Refoua, A.C.E., and Stephen Rivkin, A.C.E.
Refoua worked with Cameron from 2000 to 2002, editing or co-editing 16 episodes of Dark Angel, the post-apocalyptic sci-fi series Cameron co-created that aired on Fox during those years. He also co-edited Cameron’s 2003 3-D documentary on the wreck of the Titanic, Ghosts of the Abyss. Early in 2007, Refoua says, the director called and asked for a few weeks of help on Avatar. “Two and a half years later, I’m still here,” he says.
Rivkin joined the editorial team of Avatar a few months after Refoua––shortly after finishing Pirates of the Caribbean: Dead Man’s Chest and Pirates of the Caribbean: At World’s End in rapid succession. After the latter, Rivkin was looking forward to a nice long rest, but it was not to be. “I thought the POTC films were complicated,” says Rivkin (who won a 2003 ACE Eddie Award for his work on the first film in that franchise, Pirates of the Caribbean: The Curse of the Black Pearl), “but I’ve never been on a project as complicated or as detailed as this one.”
CineMontage spoke with Refoua and Rivkin about how the formidable editorial triumvirate worked together to forge this landmark achievement in post-production.
CineMontage: How did the editorial process on Avatar evolve?
John Refoua: The focus from all of us was: How do we do this movie in a way that makes sense? What is the best way to do things? There were no set procedures, so we were discovering things as we went along. In simple terms, a lot of the cinematography came after the editing, because first we had to do capture on the performances, go through the process of selecting the performances and putting a skeleton of the cut of each scene together, and then fine-tune that to the performances for each character. Then, in the Template phase, Jim shot his virtual cameras, which is when we got our first normal dailies to work with.
Stephen Rivkin: Jim wanted to approach performance capture in a way completely unlike what had been done before. Rather than capture actors on a set and cut them in a conventional way, he wanted to do the performance capture, select specific readings and takes from every character, and build a pre-cut scene that he could then shoot from. So by the time we were going to camera, we were already dealing with all the best performances and could focus on refining the images. We had to come up with a whole new language of terms––combos, stitches, FPR, loads and so on––all invented on this production (Editor’s Note: see Sidebar, Page 27).
CM: What was the importance of the reference cameras?
SR: A scene in any film is assembled from many different takes and setups, but in performance capture, the shots do not necessarily reflect the final angles. The reference footage allowed us to see the entire performance of an actor at any given moment and gave us a real-time face to refer to when we got to the Template phase and final renders.
CM: With only the capture and reference footage, how did you determine edit points before getting to the Template phase?
SR: The limitations of capture and the number of characters dictated that we could only build so much into a camera load—combinations of perhaps one character from one take, one from another, a crowd from two or three others—so we would have to pre-determine where a cut point might be. When Jim would open that digital file on the stage, he would know what performances he was looking for and in what part of the scene they were located.
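To make that concrete, here is a minimal sketch, in Python, of how such a camera load might be modeled as data: a few per-character take selections bundled into one load, with the pre-determined cut points attached. Every name and field below is a hypothetical illustration; the production’s actual tools were proprietary and are not described in this article.

```python
# Hypothetical sketch of a "camera load": selected performances from
# different takes, combined into one file to be shot with virtual cameras.
from dataclasses import dataclass, field

@dataclass
class PerformanceSelect:
    character: str   # e.g., "Jake" -- illustrative name only
    take: str        # source take ID, e.g., "Sc42_T07"
    start: float     # in point, seconds into the take
    end: float       # out point, seconds into the take

@dataclass
class CameraLoad:
    scene: str
    selects: list[PerformanceSelect] = field(default_factory=list)
    cut_points: list[float] = field(default_factory=list)  # pre-determined edit points

# One character from one take, another from a second, a crowd from a third,
# as described above.
load = CameraLoad(scene="Sc42")
load.selects.append(PerformanceSelect("Jake", "Sc42_T07", 0.0, 14.2))
load.selects.append(PerformanceSelect("Neytiri", "Sc42_T11", 0.0, 14.2))
load.selects.append(PerformanceSelect("Crowd", "Sc42_T03", 2.0, 14.2))
load.cut_points = [6.5, 11.0]  # where cuts were expected to land on the stage
```

The point of the structure is the one Rivkin makes: by the time a load is opened on the stage, the performance choices are already locked in, and only the camera remains to be decided.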
JR: Basically, we had to cut everything twice. First, we cut from the original performance capture on the Volume stage; this was a “performance edit” to figure out how a scene would work and what the basic structure of the film would be. Once we got that fine-tuned and created the camera loads, Jim, Steve and I would take lines out or change performances. Next, Jim would shoot his virtual cameras; then we would have a whole new set of dailies based around actual possible cameras that would ultimately become the angles you see in the movie—wide shots, tracking shots, dolly shots—all done on the basic camera loads he selected. This second, or Template, phase was a lot more like regular editing.
SR: After the completed shots came back from final processing, we actually did a third edit, in the context of the entire film, to fine-tune and tweak the last changes.
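Taken together, the editors are describing a fixed three-pass pipeline. Here is a tiny sketch of that ordering; the phase names follow the interview, and everything else is purely illustrative:

```python
# Hypothetical sketch of the three editing passes described above.
from enum import Enum

class EditPass(Enum):
    PERFORMANCE = 1  # cut raw performance capture from the Volume stage
    TEMPLATE = 2     # re-cut the dailies produced by Jim's virtual cameras
    FINAL = 3        # tweak finished shots in the context of the whole film

def next_pass(current: EditPass) -> "EditPass | None":
    """Scenes move through the passes strictly in order."""
    passes = list(EditPass)
    i = passes.index(current)
    return passes[i + 1] if i + 1 < len(passes) else None

assert next_pass(EditPass.PERFORMANCE) is EditPass.TEMPLATE
assert next_pass(EditPass.FINAL) is None
```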
CM: What makes performance capture on Avatar different from previous efforts?
JR: Avatar takes things a step further because we used the capture to figure out the rhythm and pace of a scene with the exact performances Jim wanted to use. He might shoot three takes or 18 takes of a shot until he got the performance he wanted––and only then would he shoot the virtual cameras that would become the shots used in the movie.
CM: How did the idea come about to edit on the stage?
SR: Originally, John and I were on the Volume stage to interpret the way the camera loads were built while Jim was shooting his virtual camera. Our assistants would stream what he shot into the on-stage Avid Adrenaline. So instead of just sitting there waiting for camera, I said, “Why not start cutting scenes? We’re here anyway.” We presented the idea to Jim, who just said, “Wow, okay,” and we started cutting while shooting. This became our face time with Jim––the time when we would discuss his intentions for how a scene could be constructed. By the end of a shoot day, he would have a stage cut of a scene, or a portion of one. It was a real breakthrough.
CM: How does facial performance replacement [FPR] work and how does it differ from ADR?
SR: If we needed to change what an actor was saying, we would bring him onto the Volume stage, put a face camera on him, and play back the performance of the body to make sure the timing was right. After capturing the facial performance, we would go through whatever the selected take was, sync it up with the body performance, and send that to final processing, where the facial performance would be put on the body––or on any body, for that matter. Unlike with ADR, you’re not necessarily trying to match a previous line or just change an inflection; we can physically change what the actor says, without having to re-shoot anything.
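A rough way to picture the sync step is as a toy model. Everything below is hypothetical, since the real pipeline tools are proprietary; the sketch only shows the idea of lining a new facial capture up against the existing body performance:

```python
# Hypothetical sketch of the FPR sync step: line a newly captured facial
# performance up with the body performance already in the cut.
from dataclasses import dataclass

@dataclass
class Capture:
    name: str
    slate_time: float  # shared reference mark, in seconds

def face_to_body_offset(body: Capture, face: Capture) -> float:
    """Offset (seconds) to shift the face so it locks to the body."""
    return body.slate_time - face.slate_time

body = Capture("Jake_body_Sc42_T07", slate_time=12.0)
face = Capture("Jake_FPR_line3", slate_time=3.5)
offset = face_to_body_offset(body, face)  # 8.5 seconds
# After shifting, both go to final processing, where -- as Rivkin notes --
# the facial performance can be applied to this body, or to any body.
```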
CM: Can you describe a scene that illustrates the FPR process?
SR: The scene where Jake [Sam Worthington] flies in on the Leonopteryx and becomes the savior of the Na’vi nearly broke the performance-capture system because of the number of crowd extras involved. The Na’vi have their own very distinct language, and Sam was having some problems with his speech in this scene; there was too much space between his lines and the rhythm was off.
Jim is very particular about the accuracy and inflections of the language and wanted the pauses to be in just the right places, so editorially we had to cut Sam up so that he spoke in the right cadence. However, that created jump cuts where he would jump forward spatially, and it just didn’t flow. Sam wasn’t available, so we brought a stunt double onto the Volume stage to walk the scene and give us a smooth, fluid walk up the stairs with the same expression in his hands and body motion that Sam would have had. We then mapped Sam’s original facial performance onto the stunt double’s body, using a stitch FPR to lose the pauses. Later, we had Sam come back for some conventional ADR just to smooth out the pronunciation. But the process allowed us to make this important scene flow smoothly.
CM: How was the work divided up between the three editors?
SR: We each had scenes that we carried through to the final. Whoever’s scene was being prepped was usually the one who would be on the set when that scene was shot. That person, whether it was John, Jim or myself, was the one who best understood the intricacies of whatever was built into the scene loads, and ultimately did the first cut.
CM: How were visual effects reviews conducted?
JR: Jim did a video conference with ILM and the other visual effects vendors every day from his editing room. When we moved the cutting rooms to Fox for the last three months of mixing and finishing, we converted the Robert Wise Mix Stage into a cineSync facility for face-to-face meetings with effects vendors. Jim could review current renders of shots, which appeared simultaneously on our screen and theirs. The room also doubled as a color-correction facility for the da Vinci Resolve system.
CM: With all these reference cameras, virtual cameras and performance-capture rigs going, was timecode a problem?
JR: There was almost no timecode; everything had to be eyeball-matched by the assistants. And with so many reference cameras going, it was really hard to see who was doing what, especially when you were trying to find one individual in a pile of people. The dailies had to reflect all the moves Jim made with the camera, and he was moving so fast that often no real notes were taken––so the assistants had to figure out what was going on. It was an insane amount of work.
CM: How did 3-D figure into the edit?
JR: Actually, for us, that was the simplest part of all. The top two tracks on the Avid Media Composer timeline were for the left and right eyes. We only monitored the left eye when cutting, while making sure the right eye was in sync. When the Template cut was done, it went to a whole separate group who would look at it in stereo, analyze the interocular and convergence, and make adjustments. There was a limited amount you could adjust in the live action, because it was already baked in when it was shot, but in the CG virtual world all of that was a lot more changeable.
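The discipline Refoua describes (cut while monitoring one eye, keep the other locked in sync) amounts to a simple invariant. A minimal sketch with hypothetical clip fields follows; this is not how Media Composer represents a timeline, just an illustration:

```python
# Hypothetical sketch: verify that the right-eye track mirrors the
# left-eye track event for event, as described above.
from dataclasses import dataclass

@dataclass
class Clip:
    shot: str      # shot ID shared by both eyes, e.g., "Sc42_Sh010"
    src_in: int    # source in-point, in frames
    duration: int  # length on the timeline, in frames

def eyes_in_sync(left: list[Clip], right: list[Clip]) -> bool:
    """True when every right-eye event matches its left-eye counterpart."""
    return len(left) == len(right) and all(
        l.shot == r.shot and l.src_in == r.src_in and l.duration == r.duration
        for l, r in zip(left, right)
    )

left_eye = [Clip("Sc42_Sh010", 100, 48), Clip("Sc42_Sh020", 0, 96)]
right_eye = [Clip("Sc42_Sh010", 100, 48), Clip("Sc42_Sh020", 0, 96)]
assert eyes_in_sync(left_eye, right_eye)
```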
CM: How involved was Jim Cameron in the day-to-day editing?
JR: It’s really important that people understand Jim is very much an editor on this project. He took a lot of scenes from performance capture all the way through to turnover, editing them himself, so he did everything that we did.
SR: Because he likes to do his director’s cut himself, we gave him many different options for parts of scenes. He’d integrate those into a master cut of a scene and make his own changes. He has always considered the three of us to be an editorial team. If there was any difference of opinion, he of course won, because he’s the director. But he was always open to suggestions. If he liked a path we were going down, he’d follow it all the way with us.
CM: Has this experience changed your feelings about your craft?
SR: John and I are very excited about this process; it’s unusual, challenging and gives us so much freedom, even though it is very meticulous. On the other hand, while it’s great to talk about how revolutionary Avatar is, we were still making a movie. And when it comes down to it, all this technology is just there to make the images more compelling and to tell the story better. Ultimately, we’re asking the same questions editors always ask: Does this shot work? Does this scene serve the story? It’s all about performance and story.
Things just take a little longer to get done when you’re on the moon Pandora…
Michael Kunkes is a freelance editor and writer specializing in animation, production and post-production. He can be reached at writermk@sbcglobal.net.