Hi, ----

Thanks so much for [your enthusiasm]. It's much appreciated.

I don't want to seem unappreciative of [what you've said], but I'm writing to say that I disagree with what I believe is a false dichotomy regarding film grain emulation algorithms.

It's a false dichotomy to differentiate "authentic scanned format" from "digital algorithm."

Adding grain computationally is (by definition) always digital and always an algorithm, so there can be no distinction between using a digital algorithm to apply grain and using another "format" to apply grain emulation to a digital image -- the distinction is meaningless. There's no computational operation called an "authentic format" that has a meaningful distinction from an "algorithm."

"Authentic" is merely a value judgment, not a distinction with any technical meaning or unambiguous criteria for validity. It's not very useful to sort algorithms into vague, opinionated categories without first doing any analysis of how the algorithms actually work to arrive at that judgment.

I suppose the distinction you're really trying to make is between algorithms that store scanned film data (actual rasterized grain images rather than just data about how grain works) within the algorithm itself versus ones whose underlying data is not stored within the algorithm. Which, to my mind, would divide the two categories not along the lines of "authentic" versus "not authentic" but along the lines of "inefficient" versus "efficient."

Grain itself is just random noise, and there is no single magic pattern that is the ordained "correct" one. Film never repeats one magical correct pattern -- every strand of film is different. But an algorithm that uses a finite stored set of scanned film images to seed its random pattern will always repeat the same patterns, which is not what film actually does.
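To put a toy sketch behind that claim (hypothetical names and stand-in arrays, not any real product's code): a finite library of scanned grain plates must eventually hand out the same field twice, while a generative model driven by a random number generator never has to.

```python
import numpy as np

# Stand-in for a small library of stored film-scan grain plates.
scanned_plates = [np.random.default_rng(seed).normal(size=(4, 4))
                  for seed in range(3)]

def grain_from_plates(frame_index):
    # Cycling through a finite stored set: frame 0 and frame 3
    # receive byte-identical grain -- something film never does.
    return scanned_plates[frame_index % len(scanned_plates)]

def grain_from_rng(frame_index, rng=np.random.default_rng(42)):
    # A generative model: every call draws a fresh, non-repeating field.
    # (frame_index is unused; the shared generator advances each call.)
    return rng.normal(size=(4, 4))
```

The point isn't that either toy produces good grain, only that repetition is baked into the stored-plate approach.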

The fact that someone has scanned some film of an even gray field (or an exposure sweep of even gray fields) does not mean they know how to make a good algorithm with the data they've collected.

My own algorithm is also based on mathematical analysis of real scanned film -- it's just a more efficient algorithm because the study and analysis of all the scanned data is done in the development stage, so the finished algorithm itself only contains the RESULTS of the analysis, rather than all that original data that was analyzed.

So my algorithm is more efficient because it doesn't have to re-analyze the whole data set again and again: the analysis was done once, in the development stage, rather than being needlessly repeated in every instance of applying the algorithm.
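In outline, that two-stage split might look like this (a minimal sketch with made-up measurements, assuming grain strength is summarized as a function of luminance -- not my actual model):

```python
import numpy as np

# --- Development stage (run once, offline) --------------------------
# Hypothetical analysis: measure grain standard deviation at several
# luminance levels from an exposure sweep of scans, then fit a compact
# parametric curve. The numbers below are illustrative stand-ins.
luminance_levels  = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
measured_grain_sd = np.array([0.08, 0.06, 0.05, 0.035, 0.02])
coeffs = np.polyfit(luminance_levels, measured_grain_sd, deg=2)

# --- Finished algorithm (ships only the RESULTS of the analysis) ----
def grain_std(luminance, c=tuple(coeffs)):
    """Grain amplitude for a given luminance, recovered from three
    fitted numbers instead of the entire scan data set."""
    return np.polyval(c, luminance)
```

The scans did all the work during development; at runtime only the three fitted coefficients survive.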

It's knowing what to DO with the information, not merely having the information, that makes a good algorithm.

If you don't know how to analyze the data from the film scans, then the scans become nothing but a meaningless random scattering of amplitudes all over the screen (which you could also get from a random number generator). How does the person writing the algorithm know what to DO with the amplitudes they got from the scans? What math do they use to blend the amplitudes with dark or light or red or blue (or dark-blue or bright-green) parts of the image, given that the sample data was all captured on an even gray field and not on a photographed image?
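For instance, one hypothetical answer to that blending question modulates noise amplitude by local luminance -- the falloff curve and constants here are illustrative assumptions, not measured film behavior:

```python
import numpy as np

def apply_grain(image, rng, base_std=0.05):
    """Blend noise into an RGB image (floats in [0, 1]) with an
    amplitude that depends on local luminance."""
    # Illustrative assumption: grain is strongest at mid-gray and
    # falls off toward full black and full white. Real behavior has
    # to come from analyzing the scans, not from guessing like this.
    luminance = image.mean(axis=-1, keepdims=True)
    amplitude = base_std * 4.0 * luminance * (1.0 - luminance)
    noise = rng.normal(size=image.shape)
    return np.clip(image + amplitude * noise, 0.0, 1.0)
```

Swap the amplitude formula and you get a completely different look from the exact same sample data -- which is the whole point: the math is where the algorithm lives.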

You have to have done an analysis and understand the complicated behavior of the phenomena you're modeling to make a good algorithm. Using scanned film as a random number generator will not fix your algorithm if you don't know what to DO with the random numbers you've received. It's not a replacement for not having done the analytical work.

There is nothing that makes it more "authentic" just to have a computationally inefficient algorithm that uses a gigantic amount of stored film scans as a random number generator instead of... well, just using a random number generator.

I would argue that my own algorithm is just as "authentic" (if not more so) because it is based on an empirical study of the probability scatter of real amplitudes of real film. It's based on real scanned film data just as much as an algorithm that (inefficiently!) stores the scanned data within itself. But I'd argue that my algorithm is more authentic than most other algorithms because I studied how the amplitudes are actually applied in the real film data and then I emulated it: I actually did an analytic study of the probability scatter in real film and then built an emulator or a mathematical model. It's a probabilistic emulator rather than a geometric one, but it is just as much an emulator built on real film data.
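One concrete way to sketch a "probabilistic emulator" (my illustration with synthetic stand-in data, not necessarily the exact method described above): during development, reduce the measured grain amplitudes to a small table of quantiles; at runtime, map uniform random numbers through that table, reproducing the empirical probability scatter without storing any scan imagery.

```python
import numpy as np

# Development stage: pretend these amplitudes were measured from real
# scanned film (here just synthetic stand-in data from a distribution).
measured_amplitudes = np.random.default_rng(0).laplace(0.0, 0.04, 10_000)
# Keep only a small table of quantiles -- the RESULT of the analysis.
positions = np.linspace(0.0, 1.0, 65)
quantiles = np.quantile(measured_amplitudes, positions)

def sample_grain(shape, rng):
    """Inverse-CDF sampling: uniform draws are mapped through the
    measured quantile table, matching the empirical amplitude
    distribution while generating a fresh pattern every call."""
    u = rng.uniform(0.0, 1.0, size=shape)
    return np.interp(u, positions, quantiles)
```

Sixty-five stored numbers stand in for the whole data set, yet every sampled field honors the measured probability scatter.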

Of course it's true that JUST using a random number generator inside a lame algorithm that isn't a good mathematical model of film grain will yield bad results. But that doesn't mean you can't have an awesome algorithm, built on a good mathematical model of what film actually does, that uses a random number generator to re-seed and refresh its pattern instead of stored image data from film scans.

The presumption that if you just scan film of even-gray fields and then build any algorithm of any description whatsoever with that image data, it's somehow "authentic" is actually the very reason some algorithms are no good. There's a false presupposition that the mere act of doing these scans is all there is to it, so the real heavy-lifting part of the work -- figuring out what to DO with the scans -- is never actually done. I've seen real-life examples of someone scanning gray fields of film and then blundering around, hunting aimlessly to use that data in any random (and meaningless) way until they stumble into something/anything that vaguely looks like real film grain.

That's not more authentic. It's less. Yes, they have indeed collected a data set, but they never did the work to methodically analyze that data set or figure out how to USE that data set to build a meaningful mathematical model of what film grain actually does.

I'm not saying that all algorithms with internally stored image data are flawed like that, but some real-life ones certainly are. Whether or not algorithms are built this way tells you nothing about how successful they are. The success or failure of a film grain emulation algorithm is based on getting real film data, analyzing it, and building a workable mathematical model that emulates the real physical photochemical results (perceptual results, not chemical results). Whether your algorithm stores the original analyzed data set within itself, or whether the data set is used to build the model only during development and then not stored within the finished algorithm, is not at all the deciding factor.

All that matters is if it's a good workable model of what film grain does.

To make an analogy: if two painters were going to paint a still life of the same vase, and one painter does his painting on a stretched canvas and the other does his on a sealed wooden box containing the vase itself, is the latter more "authentic" because the real vase is inside? No! This does not guarantee that his painting is more representative of the vase, only that it weighs more than the stretched-canvas painting. The painting that is more representative of the vase will be done by whichever painter has better studied the vase's contours and colors and is more skilled at understanding how to represent the human perception of that real-life 3D object as a 2D image in oil paint.

Anyway, sorry to be so cantankerously finicky and nitpicky, but I've put a lot of work into making that algorithm a MORE meaningful model of what film grain actually does than other algorithms, so it's disappointing to see it dismissed out of hand, based on a false dichotomy. Is my algorithm perfect? No, but neither are any of the others. I believe mine is as perceptually successful and as philosophically "authentic" as any out there, and its success or failure is not based on the precise mechanism by which the data used to build the model is synthesized into the operation.

Also, a few other side notes (much more minor things) on your [comments on] my res demo:

-The term "Spatial Fidelity" as I used it in the demo isn't exactly a "metric" that's technically different from "resolution." I was just using a pointedly descriptive but unfamiliar term to jolt us, because the word "resolution" has been ruined by having too many presumptive meanings and associations that get blurred together (it can shift between meaning "real resolution," "sheer pixel count," or "perceptual clarity and sharpness" all within a single conversation). I was using that phrase to shed the baggage and underscore that, when I use it, I'm talking about REAL RESOLUTION, REAL RESOLVING POWER -- not just nominal pixel count or perceptual clarity/sharpness.

-Contrary to your description, the scaling (resizing) algorithms used in the demo are NOT my personal or custom ones. They're just some of the standard ones used in professional imaging (I compare a smoother one to a sharper one in the demo). The algorithms in the demo that ARE my own custom ones built from scratch are the film emulation ones: color, density, halation, grain, gate weave, etc. Not the resizing ones.

Again, thanks so much for your time and enthusiasm. I'm not writing to you because I'm upset about [what you've said], but because I love it [and would like for your thoughts on it to be even clearer].