Monday, September 10, 2007

Don't Panic

So hey, that RED One camera is in the hands of its first customers, and there has been an explosion of traffic, images, enthusiasm and confusion about it.

Mostly confusion.

Shooting raw is a new thing for a lot of people. Even those of us who are used to shooting raw with DSLRs aren't accustomed to seeing our images with little or no post-processing—when you open a raw file in, say, Lightroom or Aperture, the software tries to make the image "look good" for you (non-destructively, of course), and lets you tweak from there. But RED Alert, the beta software that ships with RED One, doesn't do this. Nor should it, but RED Alert has enough controls to daunt many new RED One owners.

In truth, RED Alert is probably hurting matters by offering too much control too prominently in its UI. Most RED One shooters would be better off setting white balance and nothing else (consistently per sequence), and then selecting between a hard-coded preset for either Rec709 gamma (for video post) or log (for film-style DI post). Any more tweaking than that in RED Alert simply muddies the waters and causes downstream confusion.
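If you're curious what those two presets mean mathematically, here's a rough Python sketch. To be clear, these are the standard Rec709 curve and a generic Cineon-style log curve, shown for illustration only; RED Alert's actual curves are its own.

```python
import numpy as np

def rec709_oetf(linear):
    """Standard ITU-R BT.709 transfer function: scene-linear light in, video signal out."""
    linear = np.clip(np.asarray(linear, dtype=np.float64), 0.0, None)
    return np.where(linear < 0.018,
                    4.5 * linear,
                    1.099 * np.power(linear, 0.45) - 0.099)

def cineon_style_log(linear):
    """Generic Cineon-style log encode: 300 10-bit code values per decade
    (about 90 per stop), reference white near code 685. Not RED's actual curve."""
    linear = np.maximum(np.asarray(linear, dtype=np.float64), 1e-6)
    code = 685.0 + 300.0 * np.log10(linear)
    return np.clip(code, 0.0, 1023.0) / 1023.0
```

The Rec709 image is ready for a video monitor; the log image looks flat and dim by design, because it's packing extra dynamic range into the same container for the DI to unpack later.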

Many people, for example, feel the need to correct for underexposure in RED Alert. I've seen people apologizing for underexposed test shots. Don't—underexposing is exactly what you should do, within reason, in order to hold highlight detail. If you look at the "offhollywood" test shots on hdforindies.com, you'll see that the exposure is all over the map. That's fine! That's a very easy thing to correct for in post, and holding onto those troublesome highlights is worth some inconsistencies from shot to shot. Remember, the dynamic range of a digital camera has nothing to do with how much overexposure it can handle (because no digital camera can handle any)—it's all about how much you can underexpose. In other words, as you try to hold onto that highlight detail, how much can you underexpose that car before it reveals nasty noise, or worse, static-pattern artifacting, when you brighten it back up in the DI?
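If it helps to see why, the arithmetic is tiny (a sketch of my own, not anyone's actual pipeline): in linear light, a push is just a multiply, and the shadow noise multiplies right along with the signal.

```python
import numpy as np

def push(linear_img, stops):
    """Push a linear-light image by N stops. Each stop doubles every value,
    noise included; highlights clipped at capture stay clipped."""
    return linear_img * (2.0 ** stops)

# Underexpose 1.5 stops on set to protect a window, push 1.5 stops in the DI:
# the signal comes back, but so does roughly 2.8x the shadow noise.
recovered = push(np.random.rand(4, 4), 1.5)
```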

I only wish that Mike and company had transferred every single one of those shots with the exact same RED Alert settings—it would be so much more illuminating.

Graeme Nattress, author of RED Alert and chief image nerd at RED, started a thread on RedUser.net in an attempt to guide people's initial use of RED Alert. I joined in and added this:

I would appreciate it if people posting example images would differentiate between attempts to simply "develop" the RED One image into a workable video form vs. attempts to make the image "look good."

The reason being that some people will be looking at these images for proof of RED One's empirical qualities, e.g. dynamic range, highlight handling, ability to hold detail in saturated colors, etc. These people will be disappointed to see a clippy, crushy image that has lots of sex appeal and "looks good."

And of course, some people will be looking at the first RED One images off the line and hoping that they "look good." But that should not be the case unless the images have been color corrected. While RED Alert has some color correction controls, it's not a color grading station, and the ideal RED One workflow would most certainly not be to make permanent color decisions early in the process.

Remember that an image that shows a broad dynamic range will look flat and low-contrast. An image that shows good highlight handling will probably appear underexposed. And an image that shows good color fidelity will appear to have very low color saturation! I urge new RED One users to learn to love underexposed, low-con, low-saturation images as they come off your camera, for they contain the broadest range of creative possibilities for you later.

But also maintain your love for the rich, saturated images that you may ultimately create from this raw material—and hope/beg/plead for tools to allow shooting with RED One under a non-destructive LUT that is included with the footage as metadata, so that you can preview your image as it may ultimately appear, record that nice flat raw image, and later have the choice of applying your shooting LUT or some other awesome color correction.
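Nothing like that existed in the RED workflow at the time, but the concept is simple enough to sketch. Here's a Python illustration with a 1D curve standing in for what would realistically be a 3D LUT; every name in it is hypothetical.

```python
import numpy as np

def apply_preview_lut(raw, lut):
    """Map [0,1] raw values through a 1D LUT with linear interpolation.
    Only the monitoring path is transformed; the recorded raw is untouched."""
    positions = np.linspace(0.0, 1.0, len(lut))
    return np.interp(np.clip(raw, 0.0, 1.0), positions, lut)

raw_frame = np.random.rand(1080, 1920)       # stand-in for a flat raw frame
look = np.linspace(0.0, 1.0, 1024) ** 0.6    # hypothetical shooting "look"
monitor_feed = apply_preview_lut(raw_frame, look)
# Record raw_frame, carry `look` alongside as metadata, and decide in the DI
# whether to apply it, refine it, or throw it away.
```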

And then I went and listened to the fxguide podcast about Mike and Jeff's first day with the camera, and felt tangible pain as these incredibly sharp guys verbalized their near-terror at the learning curve that lies before them. Guys, it's so much easier than you think. Don't stress out about what to do with RED Alert—the less you do, the better. And so much more the better if you do exactly the same thing to every shot.

Next time: What to do with all these flat, low-con, underexposed and uneven—but consistently processed—images! The good news? If you've read The Guide, you already have a leg up.

Tuesday, September 4, 2007

The Film Industry is Broken

The film industry has a tremendous need right now for an open standard for communicating color grading information—a Universal Color Metadata format.

There are those who are attempting to standardize a "CDL" (Color Decision List) format, but it would communicate only one primary color correction. There are those trying to standardize 3D LUT formats, but LUTs cannot communicate masked corrections that are the stock in trade of colorists everywhere. There are those tackling color management, but that's a different problem entirely.
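For the curious, the entire vocabulary of a CDL fits in a few lines of Python. This sketch follows the published ASC math (slope, offset, and power per channel; a single saturation value arrived in later revisions of the spec), and it makes the limitation plain: one primary correction, no masks, no layers.

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """ASC CDL: per-channel slope/offset/power, then one global saturation
    pivot around Rec709 luma. That's all it can say about a shot."""
    out = np.clip(rgb * slope + offset, 0.0, None) ** power
    luma = (out * REC709_LUMA).sum(axis=-1, keepdims=True)
    return luma + saturation * (out - luma)
```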

Look at the core color grading features of Autodesk Lustre, Assimilate Scratch, Apple Color, and just about any other color grading system. You'll see that they are nearly identical:

• Primary color correction using lift, gamma, gain, and saturation
• RGB curves
• Hue/Sat/Lum curves
• Some number of layered "secondary" corrections that can be masked using simple shapes, splines, and/or an HSL key

Every movie, TV show, and commercial you've ever seen has been corrected with those simple controls (often even fewer, since the popular Da Vinci systems do not offer spline masks). It's safe to say that the industry has decided that this configuration of color control is ideal for the task at hand. While each manufacturer's system has its nuances, unique features, and UI, they all agree on a basic toolset.
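Here's a rough sketch of that shared primary correction in Python. Note the hedge in the comments: lift in particular has no single agreed-upon formula, which is exactly the sort of detail an open standard would have to pin down.

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def primary(rgb, lift=0.0, gain=1.0, gamma=1.0, saturation=1.0):
    """One common formulation of the universal primary correction.
    Grading systems disagree on the exact lift math; this is one convention."""
    out = rgb * gain + lift * (1.0 - rgb)           # gain scales, lift raises blacks
    out = np.clip(out, 0.0, None) ** (1.0 / gamma)  # gamma bends the midtones
    luma = (out * REC709_LUMA).sum(axis=-1, keepdims=True)
    return luma + saturation * (out - luma)         # saturation pivots around luma
```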

And yet there is no standardized way of communicating color grades between these systems.

This sucks, and we need someone to step in and make it not suck. Autodesk, Apple, Assimilate, Iridas: this means you. One of you needs to step up and publish an open standard for communicating and executing the type of color correction that is now standard on all motion media. This standard needs to come from an industry leader, someone with some weight in the industry and a healthy install base. And the others need to support this standard fully.

Currently the film industry is working in a quite stupid way when it comes to the color grading process, especially with regards to visual effects. An effects artist creates a shot, possibly with some rough idea of what color correction the director has in mind for a scene, but often with none. Then the shot is filmed out and approved. Only once it is approved is it then sent to the DI facility, where a colorist proceeds to grade it, possibly making it look entirely unlike anything the effects artist ever imagined.

Certainly it is the effects artist's job to deliver a robust shot that can withstand a variety of color corrections, but working blind like this doesn't benefit anyone. The artist may labor to create a subtle effect in the shadows only to have the final shot get crushed into such a contrasty look that the entire shadow range is now black.

But imagine if early DI work on the sequence had begun sometime during the effects work on this shot. As the DI progresses, a tiny little file containing the color grade for this shot could be published by the DI house. The effects artist would update to the latest grade and instantly see how the shot might look. As work-in-progress versions of the shot are sent from the effects house to the production for review, they would be reviewed under this current color correction. As the colorist responded to the new shots, the updated grade information would be re-published and immediately available to all parties.

Result? The effects artist is no longer working blind. The director and studio get to approve shots as they will actually look in the movie rather than in a vacuum. Everyone gets their work done faster and the results look better. All of this informed by a direct line of communication between the person who originally created the images (the cinematographer) and the person who masters them (the colorist).

Oh man, it would be so great.

I've worked on movies where the DI so radically altered the look of our effects work that I wound up flying to the DI house days before our deadline to scribble down notes about which aspects of which shots should be tweaked to survive the aggressive new look. I've worked on movies that have been graded three times—once as dailies were transferred for the edit, once in HD for a temp screening, and again for the final DI. Please trust me when I say that the current situation is broken. We need an industry leader to step in and save us from our own stupidity.

And this industry leader should do so with their kimono open wide. Opening up a standard will involve giving away some of your secret sauce. Maybe there's something about your system that you think is special, or proprietary. Some order of operations that you feel gives you an advantage. Well, you could "advantage" yourself right into obscurity if your competition beats you to the punch and creates an open standard that everyone else adopts. The company that creates the standard that gets adopted will have a huge commercial advantage. You can learn about the business advantages of "radical transparency" from much more qualified people than myself.

Of course, there will be challenges. Although each grading system has nearly identical features, they probably all implement them differently. It's not obvious how much information should be bundled with a grading metadata file. Should an input LUT be included? A preview LUT? Should transformations be included? Animated parameters? It will take some effort to figure all that out.
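Just to make those open questions concrete, here's one hypothetical shape such a file could take, sketched as Python dataclasses. Every field name is invented; this is a strawman for discussion, not anything any vendor has published.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Mask:
    """How a secondary is isolated: a simple shape, a spline, or an HSL key."""
    kind: str                                  # "shape" | "spline" | "hsl_key"
    params: dict = field(default_factory=dict)

@dataclass
class Correction:
    """One grading layer: a primary if mask is None, a secondary otherwise."""
    lift: tuple = (0.0, 0.0, 0.0)
    gamma: tuple = (1.0, 1.0, 1.0)
    gain: tuple = (1.0, 1.0, 1.0)
    saturation: float = 1.0
    rgb_curves: Optional[list] = None          # control points per channel
    hsl_curves: Optional[list] = None
    mask: Optional[Mask] = None

@dataclass
class ShotGrade:
    shot_id: str
    input_lut: Optional[str] = None            # open question: embed or reference?
    preview_lut: Optional[str] = None
    layers: List[Correction] = field(default_factory=list)
```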

But the company that does it will have built the better mousetrap, and they'd better be prepared for the film industry to beat a path to their door. So who's it going to be?

Until you step up, we will keep trudging along, making movies incorrectly and growing prematurely gray because of color.

Wednesday, August 15, 2007

VFX: Easier than you think, harder than you think

I love this breakdown clip from Ryan vs. Dorkman 2 (which, if you haven't seen it, is totally worth watching). Based on the so-simple-it's-brilliant notion of showing Star Wars Lightsabers doing things that "we personally think would be fun to see," these guys staged a Lightsaber battle in a factory between, well, two regular guys. The effects work is excellent, and one reason why is that they shot a lot of practical elements.

When you're just getting into effects, it's easy to get stuck thinking that you have to do everything with your computer. These guys wanted to create a realistic reflection, smoke, and sparks. So you know what they did? They shot something that would create a reflection. Then they filmed some smoke. Then they filmed some sparks.

Easy, right? Well, maybe not. To some people it's easier to sit in front of a computer for hours trying to get particles to look like smoke than it is to black out a space and heat up a metal rod with a blowtorch. But the latter is worth the extra effort, because the results will look better and ultimately take less time to create. Sometimes making something look photo-real is just as easy—and as difficult—as shooting something real.

Sunday, August 5, 2007

Taming the Toy

I mentioned in my previous post that the Canon HV20 has poor manual control. While this has been amply documented elsewhere, here's a brief summary of why, in the form of a description of the Sex Positive workflow.

We used Cinema Mode. Consult your HV20 user's manual for the customary misunderstanding about camera modes with the term "cine" in their name: "Give your recordings a cinematic look by using the CINE MODE." No, that's not what it does. What it does is remove the clippy, contrasty gamma that makes consumer video look good to Ma and Pa Birthday Cam, and replace it with a clean, low-contrast curve that extracts as much dynamic range as possible from the sensor. CINE MODE is important with this camera—don't leave home without it.

We always used a 1/48th shutter. As readers of The Guide know well, a 180 degree shutter is not just a good idea, it's the law. Violate it and you've got audiences who've never heard of "Viper" or "Genesis" walking out of Apocalypto saying "What was with the scenes shot on video?"
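The law is easy arithmetic, for anyone who wants it spelled out (my own sketch):

```python
def shutter_seconds(fps, shutter_angle=180.0):
    """Exposure time implied by a rotary shutter angle at a given frame rate."""
    return shutter_angle / (360.0 * fps)

print(shutter_seconds(24))   # 0.0208... seconds, i.e. the 1/48th used below
```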

We didn't have a ton of light. We had two 650W lights and one smaller one, and we blacked out windows to make a night interior out of a day shoot. The main light on the talent was a 650W diffused by a muslin. This meant that the HV20 needed to be "wide open" at 1/48th. Here begins the wrestling:

  • In CINE MODE, shooting 24p, aim the camera at something plenty bright, like the muslin.
  • Use the joystick to enable manual exposure.
  • Give the PHOTO Button a half-press to check the f-stop and shutter speed. You need a MicroSD card [EDIT: Actually it's a MiniSD card, thanks Mike!] in the camera for this to work, even though you don't intend to actually snap a still.
  • Adjust the exposure up until you see the magical combo of F2.4, 1/48. Best to overshoot to F2.8, 1/40 and then toggle one notch back. The HV20 can open up wider than 2.4 (to 1.8), but not when it's zoomed in. So the amount of zoom necessary to frame up the adaptor's groundglass is a factor in reducing light sensitivity.

You have to re-do that dance every time your camera auto powers down, or every time you return from checking playback. It's not fun. I'd been getting myself used to it leading up to the shoot, and then had the alarmingly refreshing experience of dusting off the old DVX100a for the first day's shoot. On the DVX, if you want to change the shutter, you change the shutter. If you want to change the aperture, you change the aperture. And you can run with any gain you like at any of these settings. The HV20 will always slow the shutter further before allowing the gain to increase, which makes sense for consumers but not for filmmakers. A day with the HV20 after a day with the DVX was a stark reminder of the filmmaker-friendly features we were giving up in order to go 1080p for less than a G.

We monitored in HD. Using a Noga arm, I mounted an Ikan V8000HD to various places on the camera depending on our configuration. Mounting this LCD upside-down allowed me to see the image right-side-up, and since the Ikan is HD, I could actually see if my subject was in focus, which is a constant fight at f1.4! The Ikan runs on Sony camcorder batteries, but we just powered it with AC, mostly to keep weight down.

We shot to tape. In a tight apartment with a critical mass of gear, we shot to tape. You can eke an "uncompressed" signal out of this camera's HDMI output, but the last thing I wanted to do was drag a computer around with this rig. I've been experimenting with using RE:Vision Effects' new DE:Noise plug-in to reduce compression artifacts and noise in my HDV footage, and the results are very promising.

We boomed to a box. Rather than pipe the input from the boom mic into the HV20, we recorded to an M-Audio MicroTrack Recorder (another good choice would have been the Zoom H4). This took audio level management off my plate (unlike the DVX100a, the HV20 has no convenient audio input level knobs) and ensured a high-quality, interference-free signal. We slated manually and have the camera mic audio to help us post-sync our dailies.

As you can see, there are trade-offs with this setup. One of the great things about basing a DV Rebel shoot around a prosumer camera such as the DVX100b, HVX200, or Canon XH A1 is that your rig grows with your capabilities and never gets in your way. It actually prepares you for a future of shooting with a Varicam or a Viper. With the HV20, on the other hand, you're off the reservation. You've got no leg to stand on when your camera fails to support your cinematic needs, because you bought it in the toy aisle. And yet, if you hop on one foot and wave the rubber chicken just right, you can make amazing images with the little guy.