Archive for the ‘Workflow’ category

Syncing Dailies

January 12, 2011

In 2011, hand syncing of dailies seems downright anachronistic. Doesn’t timecode make all that trivial? Yes, with digital cameras, automatic syncing is standard practice. But it inevitably involves two clocks, one in the camera and one in the audio recorder, and any two free-running clocks drift. It doesn’t take much drift to put you out of sync by a frame or two. Production is supposed to jam (synchronize) their clocks several times a day, but in the heat of battle that doesn’t always happen. The result is that picture and sound slowly drift out of sync.
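To put numbers on it, here’s a back-of-the-envelope sketch in Python. The 5 ppm oscillator error is an assumption I’ve picked for illustration, not the spec of any particular camera or recorder:

```python
# How long until two free-running clocks put picture and sound
# a full frame out of sync? The drift figure is an assumed example.

FPS = 24          # project frame rate
DRIFT_PPM = 5     # assumed relative clock error, parts per million

frame_sec = 1.0 / FPS
secs_to_one_frame = frame_sec / (DRIFT_PPM * 1e-6)
print(f"At {DRIFT_PPM} ppm, you're a frame out after about "
      f"{secs_to_one_frame / 3600:.1f} hours")  # roughly 2.3 hours
```

At numbers like these, a single jam at call time won’t hold sync through the day.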

In my editing rooms, we always check sync using slates, and resync if necessary. This takes time, but sync starts with dailies. If you’re in sync there, you have a shot at staying in sync further down the food chain.

Media Composer allows us to sync in two ways. First, you can use AutoSync to merge audio and video clips. If your clips are pre-synchronized, load them into the source monitor, select just the video or just the audio, and subclip to separate picture and sound. Then mark the slates and AutoSync to merge them again.

Second, and even better, you can use the Perf Slip feature to sync to the nearest 1/4 frame. Perf Slip is slick and quick but it comes with some limitations. You have to turn on film options when you first create your project — even if you never plan to touch a frame of film. It only works in 24 or 23.976 projects. And it only works on subclips. It comes with a couple of other minor limitations, as well, but I used it successfully on my last Red show, and wouldn’t want to be without it.

Either way, you’ll have to check every slate by eye. That’s trivial, right? You just line up the visual slate closure with the sound clap and you’re all set. True, but many slates are ambiguous. How you handle them is crucial to good sync. When we worked with film there was plenty of debate among assistant editors about this. Today, it’s a lost art. Here’s my interpretation.

First, you can’t sync properly without checking at least three frames — the frame where the slate closes, the frame before it, and the frame after it. Only with that context can you understand what happened at the slate closure. There are three possible cases.

Case 1 — Normal

In the first frame, the slate is clearly open; in the second, it’s clearly closed; and in the third, it’s closed as well. That’s the standard situation — no ambiguity, no blurred images. We make the assumption that the camera is making its exposure in the middle of each frame. In frame one, the slate is open. In frame two, it’s closed. So the slate hit somewhere between those two exposures. Check the images below. The waveform of the clap is lined up at the head of frame two. That’s as close as we can get.

Case 2 — Blurred but Closed

Here we see a blurred frame two. To decide where to put the audio clap, we have to examine that blurred image carefully. Did the slate close while the shutter was open? Notice that within the blurred image you can see both the top and bottom of the closed slate. The shutter was open when the slate closed, and the camera captured an image of the closed slate within the blur. The audio clap goes in the middle of that frame.

Case 3 — Blurred but Open

Here, the second frame is blurred, but if we look closely, the slate remains open. The camera captured the slate in motion, but not in its fully closed position. The first closed frame is frame three. So we sync between frames two and three.

Syncing with this kind of accuracy takes work — blurred slates are always somewhat ambiguous. But if you look carefully, you can generally assign all slates to one of these three cases. If you’re syncing to the nearest frame, you won’t be able to achieve this much precision, but at least you’ll know what you’re aiming for.
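If it helps to see the three cases side by side, here’s a toy sketch of the logic in code. The labels and the function are hypothetical, invented for illustration; in practice this judgment happens by eye:

```python
# A toy encoding of the three slate cases described above.
# frame2 is your by-eye reading of the frame where the slate may have
# closed; the result is where the audio clap goes, in frames,
# measured from the head of frame two.

def clap_offset(frame2):
    if frame2 == "closed":          # Case 1: clean closure
        return 0.0                  # clap at the head of frame two
    if frame2 == "blurred_closed":  # Case 2: closed slate visible in the blur
        return 0.5                  # clap in the middle of frame two
    if frame2 == "blurred_open":    # Case 3: slate still open within the blur
        return 1.0                  # clap between frames two and three
    raise ValueError("ambiguous slate, look again")

print(clap_offset("blurred_closed"))  # -> 0.5
```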

Keep in mind that in a 24-frame environment, the shutter is typically open for about 1/50th of a second, and that the exposure occurs in the middle of a frame that’s displayed for 1/24th of a second. With that idea in mind, you should be able to sync as precisely as anyone ever did in a film editing room.
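The arithmetic behind that rule of thumb is simple enough to spell out. This little sketch just assumes the exposure is centered in the frame, as above:

```python
# Where a ~1/50 s exposure sits inside a 24 fps frame.
frame_ms = 1000 / 24       # each frame lasts about 41.7 ms
exposure_ms = 1000 / 50    # the shutter is open for 20 ms
margin_ms = (frame_ms - exposure_ms) / 2
print(f"A centered exposure starts about {margin_ms:.1f} ms into the frame")
# -> about 10.8 ms in, so what you see was captured near mid-frame
```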

If you’re interested in more Media Composer techniques like this, check out my new book, Avid Agility. You can find out more about it here on the blog, or at Amazon.

CAS Workflow Seminar

January 1, 2011

The Cinema Audio Society will host “The Digital Gameplan,” a comprehensive workflow seminar, next Saturday, January 8, at the Sony lot in Culver City. The day will focus on sound, from production to delivery, but if it’s like a similar event held in ’05 (which I participated in), there will be plenty to chew on for picture folks as well. Members of all Hollywood locals and societies are invited, along with producers, facility people, and film students. And the price is right — it’s free. For more, see this PDF.

When: Saturday, January 8, 10 am – 2 pm

Where: Sony Pictures Studios, Cary Grant Theater
10202 W. Washington Blvd, Culver City, CA
Enter using the Madison Gate.

It’s best to send an RSVP to this email address, but they will admit you regardless.

Conforming Headaches

November 24, 2010

For better or worse, high-end feature films and television still follow an offline/online model, cutting with some kind of lower-res proxy and conforming a higher-res original. The dirty little secret of our new file-based workflows is that despite the many advances we’ve seen, conforming is still a pain in the butt. Why? Because no conforming system can fully conform Avid effects. Sure, cuts and dissolves can be handled easily, but more often than not, effects work has to be painstakingly rebuilt by eye. That seems downright crazy to me — in the wonderful, all-digital, file-based workflow of the future, people are still studying the locked cut, figuring out what the heck was done, and reconstructing it by eye.

Yes, there are exceptions. If you do your offline in Media Composer and finish in Avid Symphony, everything comes across. That’s a wonderful thing and if you work that way, you become dependent on it quickly. But unless you color correct in Symphony, you’re going to have to export, which means baking in a look and accepting a maximum raster size of HD video. On the Final Cut side, the XML export format opens the door to full conforms, but even then, in many DI environments you still don’t get everything.

I had a chat with a product marketing person at one of the DI system manufacturers recently, and I asked him why. His answer surprised me. His view is that we editors don’t care — we expect and have no problem with a by-eye conform. That might have been true once, but not today. Once you start doing complex effects work and see it conformed perfectly with little or no effort, you start wondering why things should work any other way. And you start to chafe at all the behind-the-scenes effort expended by editors and assistants, just trying to get back to something that worked just fine in the offline editing room.

This is a long-standing, Tower-of-Babel problem — there is no standard effects language. And it seems that each manufacturer has their own selfish reasons for not spending the money needed to make really good translations possible. That was tolerable in the days of film and HD, but in the all-digital present, it seems more and more anachronistic to me.

File-Based Basics

November 17, 2010

I recently finished a TV movie that was shot on Red and Canon 5D, cut in Media Composer 5, conformed in Smoke, and timed from the original raw R3D files in Lustre. None of that is particularly unusual these days (though timing from the R3Ds is still rare in television). But there seem to be a whole lot of people who are confused about these processes. If you’re among them, then maybe the following will help you make sense of it.

First, the epiphany. You’re shooting with a file-based camera. Okay, that’s not unusual. You’ve been working with film and/or tape for years, going through all kinds of gyrations — is this really so different? But then it hits you. The camera generates files on disk. And from then on, everything is a file. Everything. All you’re going to do is create files, copy files, move files, archive files. That’s terrific, you think, that simplifies everything. Then the second realization hits — there are way too many file types! And no standards. The list of acronyms is bewildering: R3D, RMD, MXF, OMF, MOV, DPX, log, linear, Log C, AAF, AVB, DNG, PSD, WAV, XML, ProRes, TIFF. Soon you begin talking about these things — and people around you start looking at you funny.

The beauty of a file-based workflow is that you can manage most of it with off-the-shelf computer gear. But that’s a curse, too, because now you have a raft of choices to make. Do you do as much as possible in the ‘offline’ editing room? Or do you get adult supervision from a post house? Or both? There’s a massive decision tree to navigate, and every choice influences every other choice.

So let me start with a couple of caveats: First, leave time to figure this stuff out. Don’t wait till production begins. Start early and go through the various permutations, talk to everybody you can, learn as much as you can. Second, remember that nobody knows everything. This has always been true, but in the wild-west science experiment we’re all now engaged in, where things are changing daily, it’s a certainty.

So what are all these choices you’ll have to make? They break down roughly as follows:

  1. Production
    Which camera(s) are you using? Which audio recorder?
    What kinds of files are you creating?
    What frame rate, sample rate, timecode rate, raster size are you recording?
  2. Dailies
    Who’s doing them? What do you need for editing, review and conforming?
    Who syncs and how will they do it? Who backs up and when?
    How are drives being moved around; where are they stored?
  3. Editing
    What system will you use? What kind of drives/raid?
    How will you output cut material for review?
    What are you turning over to sound and music?
  4. Conforming
    Will you roll your own or have a post house do it?
    How do you handle visual effects created in your editing room?
    And those created by the vfx team?
    What kinds of files will you use for color correction?
    And for television, a crucial question — when do you convert to HD?

There are some simplifications in this list, to be sure, but it should give you a basic overview of the terrain. Yes, it can seem overwhelming. You aren’t going to come up with a perfect solution, just one that satisfies the needs of your particular production. The more questions you can answer before you roll, the happier you’ll be.

Interview on Hollywood Reinvented

November 13, 2010

My friend Larry Jordan, editor and creator of the new blog Hollywood Reinvented, has just posted an extended video interview with me. Topics covered include digital editing in general, Final Cut vs. Media Composer, the need for editors, and the future of post production. It’s all nicely edited into tasty, bite-sized pieces (if you let it play, it’ll move from clip to clip without interruption). The full post is here. I hope you enjoy it.

Conforming Red

October 17, 2010

Red is now Hollywood’s great science experiment, with workflow options proliferating almost every day. How do you do dailies? How do you transcode and sync? Who is archiving your media? We’re finally starting to get our arms around those issues, but there are still too many options. And the bigger question now is how you conform.

“The Social Network” team actually did it in their offline cutting room, moving from Final Cut to Premiere and from there to After Effects, using EDLs (not XMLs) and DPX files (not the native R3D files). They then turned over to a Pablo for timing. (Adobe has posted a video laying this out.) I’m finishing a TV movie that was cut with Media Composer 5, conformed in Smoke and timed in Lustre using the native R3Ds, which gave us all kinds of color control. And those are just two of the dozens of permutations available. Before we started shooting, I spent a full week going over them, and at the end, the conversations were so filled with jargon that a normal mortal listening in would have thought we were nuts.

We do more and more visual effects work in our offline editing rooms. In television, I’ve gotten very spoiled seeing my work conformed perfectly using Symphony. There’s a tremendous sense of freedom in that — if you get something right, it’s finished and you never need to think about it again. But in features we don’t generally experience that particular thrill, because above HD resolution everything has to be rebuilt, too often by eye. Each system has its strengths and weaknesses. Smoke is powerful, interfaces with Lustre for timing and understands many MC4 effects — but MC5 is another story. Baselight understands XML (but not all effects). After Effects is cheap but understands neither Media Composer effects nor XML. And that’s just the tip of the iceberg.

The whole thing is a mess. Conforming complex visual effects by eye is crazy, and somebody is going to make real money straightening it all out. More fundamentally, will we be conforming in our cutting rooms or at a post house? Or will increases in processing power make the whole thing moot?

Meanwhile, be prepared for a new workflow on every show you do, with new options, new gotchas, and new things to learn each time.