Converting 30p to 24p

As the long-awaited 24p firmware update for the Canon 5D Mark II draws near, I joined Mike Seymour on episode 57 of the Red Centre podcast to talk about how excited I am that it marks the end of painful workarounds for the 5D’s no-man’s-land frame rate of 30.0 frames per second.

For as long as I’ve had my 5D Mark II, I’ve avoided using it for any project where I couldn’t shoot 30-for-24, i.e. slow the footage down to 23.976 fps, using every frame. My 5D has been a gentle-overcrank-only camera. There are plenty of occasions to shoot 30 frames for 24-frame playback — we do it all the time in commercials to give things a little “float,” or to “take the edge off” some motion. I still do this often with my 7D. Whatever frame rate I shoot — 24, 30, 50 or 60 — I play it back at 24. Just like film.
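
The conform math is simple, for the curious. Here’s a quick back-of-the-envelope sketch — plain Python arithmetic, not any editing tool’s API — of the speed change you get when conforming common capture rates to 23.976 playback:

```python
# Conforming: keep every captured frame, change only the playback clock.
PLAYBACK = 23.976  # "24p" playback rate

for shot_rate in (24.0, 30.0, 50.0, 60.0):
    speed = PLAYBACK / shot_rate      # resulting playback speed
    overcrank = shot_rate / PLAYBACK  # how much slower the action plays
    print(f"{shot_rate:g} fps -> {speed:.1%} speed ({overcrank:.2f}x slow motion)")
```

Thirty works out to 79.9% speed, a gentle 1.25x overcrank — which is why it’s so useful for that little “float.”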

Folks often ask me about 30p conversions. Twixtor from RE:Vision Effects is a popular tool for this, as is Apple’s Compressor. Adobe After Effects has The Foundry’s well-regarded Kronos retiming technology built in. All of these solutions are variations on optical flow algorithms, which track areas within the frame, try to identify segments of the image that are traveling discretely (you and I would call these “objects”), and interpolate new frames by estimating the motion that happened between the existing ones.
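
To make that concrete, here’s a toy sketch of the idea using OpenCV’s Farneback dense flow. The file names are hypothetical, and this crude single-direction warp is nowhere near what Twixtor or Kronos actually do (no bidirectional flow, no occlusion handling), but it shows the basic move: estimate per-pixel motion between two real frames, then sample along those vectors to manufacture a frame that was never shot:

```python
import cv2
import numpy as np

# Two consecutive 30p frames (hypothetical file names).
a = cv2.imread("frame_n.png")
b = cv2.imread("frame_n_plus_1.png")
t = 0.25  # fractional time where the new 24p frame falls between them

# Dense per-pixel motion estimate from frame A to frame B.
flow = cv2.calcOpticalFlowFarneback(
    cv2.cvtColor(a, cv2.COLOR_BGR2GRAY),
    cv2.cvtColor(b, cv2.COLOR_BGR2GRAY),
    None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Crude backward warp: for each output pixel, step a fraction t back
# along the flow vector and sample frame A there. Real retimers also
# warp B, blend the two, and handle occlusions and flow errors.
h, w = flow.shape[:2]
ys, xs = np.indices((h, w)).astype(np.float32)
interp = cv2.remap(a, xs - flow[..., 0] * t, ys - flow[..., 1] * t,
                   cv2.INTER_LINEAR)
cv2.imwrite("frame_interpolated.png", interp)
```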

This sounds impressive, and it is. Both The Foundry and RE:Vision Effects deservedly won Technical Achievement Academy Awards for their efforts in this area in 2007. And yet, as Mike and I discuss, this science is imperfect.

In August of 2009 I wrote:

I’m not saying that you won’t occasionally see results from 30-to-24p conversions that look good. The technology is amazing. But while it can work often, it will fail often. And that’s not a workflow. It’s finger-crossing.

On a more subtle note, I don’t think it’s acceptable that every frame of a film should be a computer’s best guess. The magic of filmmaking comes in part from capturing and revealing a narrow, selective slice of something resonant that happened in front of the lens. When you use these motion-interpolated frame-rate conversions, you invite a clever computer algorithm to replace your artfully crafted sliver of reality with its best guess. This artificiality accumulates to create a feeling of unphotographic plasticness.
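
And that’s barely hyperbole. The cadence math is easy to check — in a 30-to-24 conversion, only one output frame in four can land exactly on a captured frame; the other three must be synthesized. A quick check, again just arithmetic:

```python
# Output frame k of a 30p-to-24p conversion sits at source time
# k * 30/24 = 1.25k source frames. Only every fourth one lands
# exactly on a captured frame.
for k in range(8):
    src = k * 30 / 24
    status = "real frame" if src.is_integer() else "interpolated"
    print(f"output frame {k} -> source frame {src:.2f} ({status})")
```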

Of course, it’s often much worse than a subtle sense that something’s not right. Plenty happens in between frames that no algorithm could ever guess. Here’s a sequence of consecutive 30p frames:

Nothing fancy, just a guy running up some stairs. But his hand is moving fast enough that it looks quite different from one frame to the next.

Here’s that same motion, converted to 24p using The Foundry’s Kronos:

Blech.

Again, don’t get me wrong — these technologies are great, and can be extremely useful (seriously, how amazing is it that the rest of the frame looks as good as it does?). But they work best with a lot of hand-holding and artistry, rather than as unattended conversion processes.

(And they can take their sweet time to render too.)

I’m so glad we’re getting the real thing.