January 31, 2016

Many thanks to all who have written in with interest in color science and in my Display Prep Demo (www.yedlin.net/DisplayPrepDemo). 

I’m so happy that there is a groundswell of filmmakers who are interested in new mathematical tools to take control of the look of their images in complex new ways instead of merely treading the same old dreary paths.

As filmmakers, if we base our actions on a belief that selecting a camera-type (and then turning lift/gamma/gain knobs) is our main or only voice to control the look, then all we’re doing is failing to control the look: leaving it up to chance or letting someone else control it. Because, whether we deny it or admit it, the fact is that the dry photometric data from the camera must go through a defining mathematical transformation for viewing. And there is nothing to prevent such a transformation from being much more complex and interesting than such transformations have usually been in the past.
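
To make that concrete, here is a toy sketch in Python. The curves below are made up purely for illustration (they are not from any real camera, and not from the Demo): the point is only that the same recorded code value yields noticeably different viewed values depending on which display transform you choose.

```python
import math

# Hypothetical log encoding: these constants are invented for
# illustration and do not correspond to any vendor's actual curve.
def log_encode(linear, a=0.25, b=0.01):
    """Map scene-linear light to a log code value."""
    return a * math.log10(linear + b) + 1.0

def simple_display(code):
    """A bare-bones viewing transform: clamp, then a plain gamma curve."""
    x = max(0.0, min(1.0, code))
    return x ** 2.4

def s_curve_display(code):
    """A slightly more opinionated transform: an S-shaped tone curve
    with a soft highlight shoulder. Smoothstep stands in here for
    whatever complex math one might actually design."""
    x = max(0.0, min(1.0, code))
    return x * x * (3.0 - 2.0 * x)

code = log_encode(0.18)  # an 18% gray patch, as recorded by the "camera"
# The same recorded value produces two quite different displayed values.
print(simple_display(code), s_curve_display(code))
```

Neither transform is "correct"; each is an authorial choice, and nothing limits you to transforms this simple.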

I’m glad that so many have contacted me asking about my own tools, so this is an open letter to all those who have been interested in whether I’ll be distributing any sort of toolset package. 

First off, some who have written in have recognized (and some haven’t) that the math I’ve developed for creating a recipe for a look is a totally different thing from the specific recipe that I used in the Display Prep Demo. So, there are really two totally different questions you might be asking: 

Will I be distributing my development tools?

Will I be distributing the specific implementation (or recipe) that I developed with those tools and used in the Display Prep Demo?

Here are the short answers: 

I would love to distribute the underlying tools, but doing so is overwhelmingly complex, and I would not be quite sure how to proceed even if I weren’t busy on a movie right now. So, I enthusiastically hope to do so, but I’m sorry to say: don’t expect anything any time soon.

As for the one specific recipe used in the demo: I have no plans to distribute that particular recipe, but that’s fine because I want to inspire you to have your own recipe. Filmmakers are already stuck with inflexible recipes imposed by vendors; they don’t need another from me.

Okay, you can stop reading there or proceed to the longer answers:

—————————————

The Development Tools

The tools that I use are custom-made and sprawling and messy. Let me give you an idea:

There are very long math/programming expressions with lots of variables and flow-control statements. Adjusting and using these tools is far from automated or intuitive. Even while using them, it’s difficult for me to keep in my own head what happens when I adjust any of the many variables; and I’m the one who wrote the math, so the tools would be even more confusing to anyone I distributed them to.

Some of the math/programming is complicated enough that I can’t put the long expressions into existing software, so I’ve written my own software (based on my own idea of what math should be implemented) that chews on the data sets and delivers up an interpolated transformation. I’m not a programmer, and my homespun software’s user interface is very clunky and not packaged for general use. For example, you have to save a text document in a very specific format, with all of the data points in a specific type of list, for the software to even recognize the list.

This is command-line software with no graphic interface and no safety catches to prevent users from producing nonsense results (or simply causing the program to abort) if they don’t understand exactly what is happening under the hood. By way of another example: if your input and output data sets don’t have the same number of points, the software doesn’t give you a nice prompt explaining the problem; it just aborts.
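
To give a rough flavor of the general shape of such a tool (this sketch is purely illustrative and is not the actual software), here is a minimal version: lists of numbers in, a piecewise-linear interpolated transform out, and an unceremonious abort when the point counts don’t match.

```python
import bisect
import sys

def load_points(path):
    """Read one float per line. Any deviation from the expected format
    just raises an unhandled error: no graceful prompts here."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def make_transform(xs, ys):
    """Build a piecewise-linear map from matched sample points.
    (Linear interpolation is a stand-in; a real tool might use
    something far more elaborate.)"""
    if len(xs) != len(ys):
        # Mirroring the behavior described above: no explanation, just abort.
        sys.exit("point counts differ")
    def transform(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, x)
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return transform
```

Pointing this at two mismatched point lists exits immediately with no help at all, which is exactly the kind of unfriendliness I mean.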

On top of all this and other complexities, the way that I use all the tools is not fixed and mechanical and repeatable. They’re just tools and I use them differently every time for every data set. Is the data set dense or sparse? If I’m dealing with film, is it neg or print? If it’s digital, is it linear, log or gamma encoded? How do the general characteristics of the input data set differ from those of the output data set? It takes intuition, experience, and familiarity with the tools to even use them: to figure out which are applicable, what order they should be applied in, and how they should be applied given the myriad possible implementations.

The tools themselves and their implementation are in constant flux, so it’s difficult to conceive of a simplified yet coherent distribution that only includes the results without all the underlying principles.

I think in the future, this will all be collapsed into simpler and more intuitive tools and thought processes such that it will be easy to interact with it all on a surface user level. I’m not quite sure, though, if I can be the one responsible for collapsing it like that or if I should try to publish all of the underlying math and principles and let someone else try to package it for the masses.

Either of those two options is a huge effort that I don’t have time for right now. The former would require me to become a software developer, a trade I know nothing about, and the latter would require me to write an entire textbook or a textbook-like website.

I do indeed hope to do something like this in the future, but I just don’t know what form it will take, and it can’t be soon. Unfortunately, I can’t give out the tools as easily as saying “push the x, y, and z buttons in the camera menu” or “buy thus-and-such software.”

The bottom line is that every implementation is unique and requires lots of trial and error. There is not one unambiguous prescribed method, even if you have the tools (which are themselves messy and complex).

In the meantime, if you’re a cinematographer or colorist looking for new tools to break the dreary cycle, I’d suggest collaborating with a real color scientist (I’m just an amateur color scientist), and/or just digging in and starting to look curiously and rigorously yourself at the underlying math of manipulating images, rather than relying on the simple knobs in the camera settings or in the color correction software.

The Specific Recipe Used In The Demo

There are a whole bunch of scattered reasons not to try to give out the specific recipe used in the Demo. Probably any one of these alone would be reason enough not to, and yet look how many there are:

-My whole point in distributing the Demo is to empower and inspire filmmakers to be authors of their look instead of slavishly following well-trod paths. If I offer another competing path that’s equally fixed (instead of tools for forging your own path), then I’m undermining my own philosophy.

-My own recipe is constantly changing, but if I were to distribute it, the distributed version would be frozen. I’m always tinkering and playing and learning new things or gathering newer data sets. It’s not a fixed recipe. The precise iteration that was used in the demo is already an old version. It’s a living thing and I don’t want to freeze it.

-It’s too complicated to distribute in any succinct way, as there are different steps requiring different types of software and math. It’s true that many of the attributes (not all, but many) could be flattened into a LUT, but the LUT would only work in a very rigid and narrow set of circumstances. The slightest change in the sprawling web of circumstances would require going back to the original complex math and flattening a new LUT. So a LUT is not a workable way to simplify distribution of the underlying complex transformations.
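
To illustrate what “flattening into a LUT” means in the simplest 1D case (again, a toy sketch, not the actual recipe): the continuous math is sampled at a fixed set of code values, and everything between the samples is merely interpolated. The table is frozen at bake time, which is exactly why any upstream change forces a re-bake.

```python
def bake_lut(transform, size=33):
    """Sample an arbitrary 1D transform at evenly spaced code values.
    The resulting table is a frozen snapshot of the math."""
    return [transform(i / (size - 1)) for i in range(size)]

def apply_lut(lut, x):
    """Apply a baked table with linear interpolation between entries."""
    x = max(0.0, min(1.0, x))
    pos = x * (len(lut) - 1)
    i = int(pos)
    if i >= len(lut) - 1:
        return lut[-1]
    t = pos - i
    return lut[i] + t * (lut[i + 1] - lut[i])

# A stand-in "complex" transform; here just a gamma curve for brevity.
lut = bake_lut(lambda x: x ** 2.2)
# If anything changes upstream (a different input encoding, say), this
# table is now wrong, and the only fix is to go back to the original
# math and bake a new one; the LUT itself can't adapt.
```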

-If I distributed it, it would either end up getting misapplied or I’d be doing non-stop tech support which I don’t have time for. If people want to make it their own, then they can just start with something that’s their own and don’t need anything from me, but if they want to apply it as intended, it’s just too much work for me to support. It’s trivial for me to implement it myself on my own jobs, but that’s because of my familiarity with it. If you’re not familiar, it’ll require customer service that I’m not prepared to offer.

-If I make it public, I’d be opening myself up to users who misapply or misunderstand it and then start making public posts like “I used that Yedlin thing. It’s malarkey.” They’re entitled to their opinions, but my whole point in empowering filmmakers with math for display prep is to stop that kind of vitriolic yet empty opinion that masquerades as technical knowledge. When it comes to making sweeping, vague value judgments about complex, subtle math, I want to cool us down, not stir the pot. Positive value judgments are just as incendiary as negative ones. I can’t see any scenario in which distributing a recipe would have a calming effect that favors rationality over empty exuberance.

-The specific film look of that recipe, in many ways, only works if you actually light with traditional film ratios. If you use on-set lighting ratios that many people are currently using with Alexa and then use my recipe, you might not be happy with the results. In other words: despite much lip-service to film being the gold standard, many people today don’t really want all of film’s attributes.

-Get excited! This is just the beginning, not the end. The world of imaging used to be simple and now it’s complex. No matter how much we yearn for simplicity, that genie isn’t going back in the bottle. My recipe is not the one simple magic answer you were looking for — nothing is. Nothing as blunt and simple as saying a magic incantation like “film, not digital” or “I got a LUT from Yedlin” is going to give you subtle authorship over the look of your images. Don’t be a shopper, be an author. Don’t bow at the altar of your tools; be the master of your tools. Roll up your sleeves and embrace the complexity.