In case you haven't checked it out recently, the Rebel Café continues to be approximately awesome.
Two cool posts showed up today on the Rebel Café from new member tcindie:
Anyone interested in the RED One camera should read this comparative review/diary of the Sigma SD14 and the Canon 5D (I'm a delighted owner of the latter).
Both are SLRs, and both are a few years old. But the 5D has a 12.7-megapixel, full-frame chip, while the Sigma produces a mere 4.6 megapixels. Why bother even comparing them?
The reason is that the Sigma has a Foveon sensor rather than a CCD or CMOS. This sensor can capture distinct R, G and B light information at every pixel. The 5D's CMOS chip can record luminance at every pixel, and uses the common Bayer pattern of color filters at the photosites to capture RGB color, intermingling color fidelity and spatial resolution in a way that must be decoded by software using some math, some compromises, and some guesswork.
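If you're curious what that decoding step actually looks like, here's a toy sketch of the simplest possible approach: a naive bilinear demosaic of an RGGB mosaic. Real cameras (RED included) use far smarter algorithms than this, and the layout, function names, and loader here are just illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (H x W float array).

    Each photosite recorded only one of R, G or B; the two missing values
    at every pixel are guessed by averaging the nearest photosites that
    did record that color.
    """
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]

    # Which color each photosite actually captured, assuming an RGGB layout.
    r_mask = ((rows % 2 == 0) & (cols % 2 == 0)).astype(float)
    b_mask = ((rows % 2 == 1) & (cols % 2 == 1)).astype(float)
    g_mask = 1.0 - r_mask - b_mask

    # Averaging kernels that fill in the missing samples of each sparse channel.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4.0
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4.0

    r = convolve(mosaic * r_mask, k_rb, mode="mirror")
    g = convolve(mosaic * g_mask, k_g, mode="mirror")
    b = convolve(mosaic * b_mask, k_rb, mode="mirror")
    return np.dstack([r, g, b])

# mosaic = load_raw_somehow(...)       # hypothetical loader: one value per photosite
# rgb = demosaic_bilinear(mosaic)      # three values per pixel, two of them estimates
```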
One way of looking at this is that the 5D spends three of its 12.7 million pixels to accomplish what the Sigma achieves in only one. If the 5D records 12.7 million tiny little light records per image, then the Sigma records 13.8 million (4.6 million pixels times three unique R, G and B records each).
But that's not entirely fair. Our eyes tend to perceive detail more in luminance than in color, and the 5D is recording much more luminance information than the Sigma. For black and white photography, the 5D truly is a 12.7-megapixel camera and the SD14 truly is a 4.6-megapixel shooter.
So the truth lies somewhere in between: the SD14 is neither the 5D's equal in resolution, nor merely a camera with one third the pixel count.
In one very real way the RED One is a 4K camera. It creates 4K images that look damn good.
And in another equally real way, the RED One is not a "true" 4K camera, as each of the 4K's worth of pixels it creates for each frame is interpolated (from compressed Bayer data at that).
And that's probably just fine—a topic for another day.
So hey, that RED One camera is in the hands of its first customers, and there has been an explosion of traffic, images, enthusiasm and confusion about it.
Mostly confusion.
Shooting raw is a new thing for a lot of people. Even those of us who are used to shooting raw with DSLRs aren't accustomed to seeing our images with little or no post-processing—when you open a raw file in, say, Lightroom or Aperture, the software tries to make the image "look good" for you (non-destructively, of course), and lets you tweak from there. But RED Alert, the beta software that ships with RED One, doesn't do this. Nor should it, but RED Alert has enough controls to daunt many new RED One owners.
In truth, RED Alert is probably hurting matters by offering too much control too prominently in its UI. Most RED One shooters would be better off setting white balance and nothing else (consistently per sequence), and then selecting between a hard-coded preset for either Rec709 gamma (for video post) or log (for film-style DI post). To do any more tweaking than that in RED Alert is to simply muddy the waters and cause downstream confusion.
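For reference, here's roughly what those two encodings look like in code: the published Rec709 transfer function and a generic Cineon-style log curve. These are illustrative stand-ins, not RED Alert's actual presets.

```python
import numpy as np

def rec709_oetf(linear):
    """ITU-R BT.709 transfer function: scene-linear light to video code values."""
    linear = np.asarray(linear, dtype=float)
    return np.where(linear < 0.018,
                    4.5 * linear,
                    1.099 * np.power(linear, 0.45) - 0.099)

def cineon_style_log(linear):
    """A generic Cineon-style log encode (illustrative only, not RED's curve):
    10-bit-style code values, normalized to 0-1, with roughly 90 codes per stop."""
    linear = np.maximum(np.asarray(linear, dtype=float), 1e-6)
    return np.clip(685.0 + 300.0 * np.log10(linear), 0.0, 1023.0) / 1023.0

# 18% gray lands around 0.41 on the Rec709 curve and around 0.45 on this log curve.
print(rec709_oetf(0.18), cineon_style_log(0.18))
```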
Many people, for example, feel the need to correct for underexposure in RED Alert. I've seen people apologizing for underexposed test shots. Don't—underexposing is exactly what you should do, within reason, in order to hold highlight detail. If you look at the "offhollywood" test shots on hdforindies.com, you'll see that the exposure is all over the map. That's fine! That's a very easy thing to correct for in post, and holding onto those troublesome highlights is worth some inconsistencies from shot to shot. Remember, the dynamic range of a digital camera has nothing to do with how much overexposure it can handle (because no digital camera can handle any)—it's all about how much you can underexpose. In other words, as you try to hold onto that highlight detail, how much can you underexpose that car before it reveals nasty noise, or worse, static-pattern artifacting, when you brighten it back up in the DI?
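Here's a toy simulation of that trade-off (my own, with nothing to do with any particular sensor): underexpose a mid-gray patch by a few stops against a fixed noise floor, push it back up digitally, and watch the noise grow.

```python
import numpy as np

rng = np.random.default_rng(0)

def expose_and_push(scene, stops_under, read_noise=0.002, samples=10000):
    """Underexpose a scene-linear value by N stops, add a fixed noise floor,
    then digitally push it back up. A toy model, not any real sensor."""
    captured = scene / (2.0 ** stops_under) + rng.normal(0.0, read_noise, samples)
    return captured * (2.0 ** stops_under)

scene = 0.18  # mid-gray, scene-linear
for stops in (0, 1, 2, 3):
    pushed = expose_and_push(scene, stops)
    print(f"{stops} stops under: noise after pushing back up = {pushed.std():.4f}")
# The gray patch comes back to 0.18 either way, but the noise floor grows by
# 2^stops. That amplified noise is the real cost of protecting highlights.
```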
I only wish that Mike and company had transferred every single one of those shots with the exact same RED Alert settings—it would be so much more illuminating.
Graeme Nattress, author of RED Alert and chief image nerd at RED, started a thread on RedUser.net in an attempt to guide people's initial use of RED Alert. I joined in and added this:
I would appreciate it if people posting example images would differentiate between attempts to simply "develop" the RED One image into a workable video form vs. attempts to make the image "look good."
The reason being that some people will be looking at these images for proof of RED One's empirical qualities, i.e. dynamic range, highlight handling, ability to hold detail in saturated colors, etc. These people will be disappointed to see a clippy, crushy image that has lots of sex appeal and "looks good."
And of course, some people will be looking at the first RED One images off the line and hoping that they "look good." But that should not be the case unless the images have been color corrected. While RED Alert has some color correction controls, it's not a color grading station, and the ideal RED One workflow would most certainly not be to make permanent color decisions early in the process.
Remember that an image that shows a broad dynamic range will look flat and low-contrast. An image that shows good highlight handling will probably appear underexposed. And an image that shows good color fidelity will appear to have very low color saturation! I urge new RED One users to learn to love underexposed, low-con, low-saturation images as they come off your camera, for they contain the broadest range of creative possibilities for you later.
But also maintain your love for the rich, saturated images that you may ultimately create from this raw material—and hope/beg/plead for tools to allow shooting with RED One under a non-destructive LUT that is included with the footage as metadata, so that you can preview your image as it may ultimately appear, record that nice flat raw image, and later have the choice of applying your shooting LUT or some other awesome color correction.
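The mechanics of such a non-destructive viewing LUT could be as simple as the sketch below: the raw frame stays untouched and the LUT is only a display transform. The LUT data and names here are invented for illustration; whatever RED eventually ships may look nothing like this.

```python
import numpy as np

def apply_viewing_lut(raw_linear, lut_1d):
    """Apply a 1D viewing LUT for display only; the raw data stays untouched.
    lut_1d holds output values sampled at evenly spaced inputs from 0 to 1."""
    xs = np.linspace(0.0, 1.0, len(lut_1d))
    return np.interp(np.clip(raw_linear, 0.0, 1.0), xs, lut_1d)

# The flat, low-con raw frame is what gets recorded; the shooting LUT travels
# alongside it as metadata and only gets baked in if you choose to do so later.
raw_frame = np.random.rand(4, 4)                             # stand-in for a debayered frame
shooting_lut = np.power(np.linspace(0.0, 1.0, 33), 1 / 2.2)  # hypothetical punchy preview curve
preview = apply_viewing_lut(raw_frame, shooting_lut)         # what you'd see on the monitor
```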
And then I went and listened to the fxguide podcast about Mike and Jeff's first day with the camera, and felt tangible pain as these incredibly sharp guys verbalized their near-terror at the learning curve that lies before them. Guys, it's so much easier than you think. Don't stress out about what to do with RED Alert—the less you do, the better. And so much the better if you do the exact same thing to every shot.
Next time: What to do with all these flat, low-con, underexposed and uneven—but consistently processed—images! The good news? If you've read The Guide, you already have a leg up.
The film industry has a tremendous need right now for an open standard for communicating color grading information—a Universal Color Metadata format.
There are those who are attempting to standardize a "CDL" (Color Decision List) format, but it would communicate only one primary color correction. There are those trying to standardize 3D LUT formats, but LUTs cannot communicate masked corrections that are the stock in trade of colorists everywhere. There are those tackling color management, but that's a different problem entirely.
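To see why a CDL falls short, it helps to look at the math it carries: per-channel slope, offset, and power, plus a global saturation. Below is a sketch of that primary correction as I understand the proposed math (my paraphrase, not any vendor's implementation); notice there is nowhere in it to hang a mask or a secondary.

```python
import numpy as np

def apply_cdl(rgb, slope, offset, power, sat):
    """CDL-style primary correction: per-channel slope, offset and power,
    followed by a global saturation adjustment (Rec709 luma weights).
    Note what's missing: no masks, no windows, no secondaries."""
    rgb = np.asarray(rgb, dtype=float)
    out = np.power(np.clip(rgb * slope + offset, 0.0, None), power)
    luma = out @ np.array([0.2126, 0.7152, 0.0722])
    return luma[..., None] + sat * (out - luma[..., None])

# One grade applied uniformly to the whole frame; a colorist's windowed or
# keyed corrections simply have no place to live in this vocabulary.
graded = apply_cdl(np.random.rand(8, 8, 3),
                   slope=np.array([1.1, 1.0, 0.9]),
                   offset=np.array([0.02, 0.0, -0.02]),
                   power=np.array([1.0, 1.0, 1.05]),
                   sat=1.2)
```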
Look at the core color grading features of Autodesk Lustre, Assimilate Scratch, Apple Color, and just about any other color grading system. You'll see that they are nearly identical:
Every movie, TV show, and commercial you've ever seen has been corrected with those simple controls (often even fewer, since the popular Da Vinci systems do not offer spline masks). It's safe to say that the industry has decided that this configuration of color control is ideal for the task at hand. While each manufacturer's system has its nuances, unique features, and UI, they all agree on a basic toolset.
And yet there is no standardized way of communicating color grades between these systems.
This sucks, and we need someone to step in and make it not suck. Autodesk, Apple, Assimilate, Iridas; this means you. One of you needs to step up and publish an open standard for communicating and executing the type of color correction that is now standard on all motion media. This standard needs to come from an industry leader, someone with some weight in the industry and a healthy install base. And the others need to support this standard fully.
Currently the film industry is working in a quite stupid way when it comes to the color grading process, especially with regard to visual effects. An effects artist creates a shot, possibly with some rough idea of what color correction the director has in mind for a scene, but often with none. Then the shot is filmed out and approved. Only once it is approved is it sent to the DI facility, where a colorist proceeds to grade it, possibly making it look entirely unlike anything the effects artist ever imagined.
Certainly it is the effects artist's job to deliver a robust shot that can withstand a variety of color corrections, but working blind like this doesn't benefit anyone. The artist may labor to create a subtle effect in the shadows only to have the final shot get crushed into such a contrasty look that the entire shadow range is now black.
But imagine if early DI work on the sequence had begun sometime during the effects work on this shot. As the DI progresses, a tiny little file containing the color grade for this shot could be published by the DI house. The effects artist would update to the latest grade and instantly see how the shot might look. As work-in-progress versions of the shot are sent from the effects house to the production for review, they would be reviewed under this current color correction. As the colorist responds to the new shots, the updated grade information would be re-published and immediately available to all parties.
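To be clear, no such file format exists today; that's exactly the standard I'm asking for. But the published grade could be as small and dumb as something like this invented example:

```python
import json

# A hypothetical per-shot grade file the DI house might publish. The format
# and field names are invented for illustration; no such standard exists yet,
# which is exactly the point.
published_grade = json.loads("""
{
  "shot": "shot_042",
  "version": 7,
  "primary": {"slope": [1.08, 1.0, 0.94], "offset": [0.01, 0.0, -0.01],
              "power": [1.0, 1.0, 1.02], "saturation": 0.9},
  "secondaries": [
    {"mask": "sky_spline_v3", "gain": [0.95, 0.97, 1.05]}
  ]
}
""")

# The effects artist's comp setup would grab the newest version and apply it
# as a preview-only viewing transform, never baking it into delivered renders.
print(published_grade["shot"], "grade version", published_grade["version"])
```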
Result? The effects artist is no longer working blind. The director and studio get to approve shots as they will actually look in the movie rather than in a vacuum. Everyone gets their work done faster and the results look better. All of this informed by a direct line of communication between the person who originally created the images (the cinematographer) and the person who masters them (the colorist).
Oh man, it would be so great.
I've worked on movies where the DI so radically altered the look of our effects work that I wound up flying to the DI house days before our deadline to scribble down notes about which aspects of which shots should be tweaked to survive the aggressive new look. I've worked on movies that have been graded three times—once as dailies were transferred for the edit, once in HD for a temp screening, and again for the final DI. Please trust me when I say that the current situation is broken. We need an industry leader to step in and save us from our own stupidity.
And this industry leader should do so with their kimono open wide. Opening up a standard will involve giving away some of your secret sauce. Maybe there's something about your system that you think is special, or proprietary. Some order of operations that you feel gives you an advantage. Well, you could "advantage" yourself right into obscurity if your competition beats you to the punch and creates an open standard that everyone else adopts. The company that creates the standard that gets adopted will have a huge commercial advantage. You can learn about the business advantages of "radical transparency" from much more qualified people than myself.
Of course, there will be challenges. Although each grading system has nearly identical features, they probably all implement them differently. It's not obvious how much information should be bundled with a grading metadata file. Should an input LUT be included? A preview LUT? Should transformations be included? Animated parameters? It will take some effort to figure all that out.
But the company that does it will have built the better mousetrap, and they'd better be prepared for the film industry to beat a path to their door. So who's it going to be?
Until you step up, we will keep trudging along, making movies incorrectly and growing prematurely gray because of color.
I love this breakdown clip from Ryan vs. Dorkman 2 (which, if you haven't seen it, is totally worth watching). Based on the so-simple-it's-brilliant notion of showing Star Wars Lightsabers doing things that "we personally think would be fun to see," these guys staged a Lightsaber battle in a factory between, well, two regular guys. The effects work is excellent, and one reason why is that they shot a lot of practical elements.
When you're just getting into effects, it's easy to get stuck thinking that you have to do everything with your computer. These guys wanted to create a realistic reflection, smoke, and sparks. So you know what they did? They shot something that would create a reflection. Then they filmed some smoke. Then they filmed some sparks.
Easy, right? Well, maybe not. To some people it's easier to sit in front of a computer for hours trying to get particles to look like smoke than it is to black out a space and heat up a metal rod with a blowtorch. But the latter is worth the extra effort, because the results will look better and ultimately take less time to create. Sometimes making something look photo-real is just as easy—and as difficult—as shooting something real.