
Hey, man!

Great to hear from you.

[…In regard to the article you sent me in your email]: it’s a valiant and admirable effort [that would be exactly the kind of thing we need more of if it weren’t unfortunately a bit confused]. I feel like it unintentionally hides behind pseudo-science-sounding language but is really just as superstitious as a lot of the stuff out there [that it’s trying to debunk], and it presents some dangerous implied premises in a somewhat confused, potpourri style.

One of the biggest problems with it is that it indiscriminately uses the word “sharpness” to mean either “perceived sharpness” or “resolution,” which are two things that are not even remotely the same. The fact that they’re not the same but are so often confused is one of the central issues with 4k, and the article not only fails to mention the issue but is guilty of perpetuating the confusion.

The article also completely fails to mention artificial sharpening, which is the proverbial elephant in the room of the whole 4k discussion. I really think this is one of the biggest issues today in all discussions of “sharpness” and “resolution” and “K’s.”
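
For what it’s worth, the artificial sharpening I keep referring to is typically just some variant of an unsharp mask: blur a copy of the image, use the difference to isolate edges, then exaggerate the contrast across those edges. Here’s a minimal sketch using Pillow, purely as an illustration (the filename is made up, and real cameras, monitors, and mastering tools use their own undisclosed kernels and parameters):

```python
# A minimal illustration of "artificial sharpening" as an unsharp mask.
# Pillow is used only as an example; actual camera/monitor/mastering
# sharpeners differ, but the principle is the same.
from PIL import Image, ImageFilter

img = Image.open("frame_2k.png")  # hypothetical source frame

# Unsharp mask: blur a copy, isolate the edge detail by subtraction, then
# add that edge signal back in, scaled by "percent."  No new detail is
# resolved -- existing edges just get a higher-contrast halo around them.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
sharpened.save("frame_2k_sharpened.png")
```

The key point is that the operation is resolution-agnostic: it can be (and is) applied to 2k and 4k material alike, and it’s a distortion of what the lens delivered, not added fidelity.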

The reality is that the normal, non-superhuman eye really can’t see the difference between 2k and 4k at any comfortable screen-size-to-viewing-distance ratio (I mean, MAYBE you can see the difference if you sit in the very front row of a theater, which is such a poor film-going experience anyway that everyone avoids it — but for any normal, comfortable viewing in a theater or at home, it makes no difference). Almost all 4k images are mastered with artificial sharpening (sometimes in-camera, sometimes in post, sometimes in-monitor, and sometimes a combination of all three) — that’s because the merchants making the 4k stuff want to trumpet that 4k makes the image better so they can sell TVs and projectors and disc players and stuff, but when they see that it doesn’t actually look any different from 2k [to the human eye], they realize they can’t sell it that way and so have to do something to it to make it stand out and make the consumer say “wow.” So, they add artificial sharpening. Which is ironic, since 4k is supposed to be “better quality” or “more fidelity,” and artificial sharpening is itself a degradation of quality and fidelity — and it’s also something that could just as easily be added to HD or 2k content. But I believe that this artificial sharpening, and not true resolution, is the biggest difference that’s actually visible in most cases (I’m not saying whether or not the actual resolution exists, just that it’s not what’s visible to a viewer).
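
On the screen-size-to-viewing-distance point, here’s a back-of-the-envelope sketch (my own illustration, not from the article). It assumes the standard 20/20 figure of roughly one arcminute of visual acuity, i.e. about 60 picture elements per degree, and the seating distances are assumptions you can swap out:

```python
# Back-of-the-envelope: how many picture elements across the screen width
# can the eye actually resolve from a given seat?  Assumes ~1 arcminute of
# acuity (the standard 20/20 figure); the seat distances are assumptions.
import math

ELEMENTS_PER_DEGREE = 60  # ~1 arcminute acuity

for label, screen_widths_away in [("front row-ish", 0.8),
                                  ("mid-theater", 1.5),
                                  ("back half / living room", 2.5)]:
    # Horizontal angle the screen subtends from that seat:
    subtended_deg = math.degrees(2 * math.atan(0.5 / screen_widths_away))
    resolvable = subtended_deg * ELEMENTS_PER_DEGREE
    print(f"{label:>24}: screen spans {subtended_deg:4.1f} deg -> "
          f"~{resolvable:,.0f} resolvable elements across (2k = 2048, 4k = 4096)")
```

Run it and only the seat shoved up near the screen even approaches 4096 resolvable elements across the width; from the middle of the theater on back, you’re at or below what 2k already delivers — and that’s before lens MTF, motion blur, and compression take their cuts.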

The article also conflates a whole bunch of other separate issues, even confusing “texture” with “resolution” and “sharpness,” which I don’t think are interchangeable (even if they are interrelated).

It also mentions and propagates (but fails to illuminate) the widespread fallacy that the virtue of film over digital comes from the image components (the silver crystals) being scattered randomly rather than being arranged in a pixel-like grid. 

I’ve heard this assertion (about the non-grid scatter) repeated again and again by DPs and others who claim that therein lies the magic of film, but I believe it is an utter falsity based on pure superstition and speculation rather than on controlled, comparative observation. The proof that the assertion is false is that any of those same DPs who make that claim would certainly agree that film acquisition still looks like film when displayed on a digital projector or screen, and yet that film image on a digital screen is now itself made up of gridded pixels: a 4k scan of 35mm (let alone a 2k scan, which is worse) has between 9 and 900 silver crystals per pixel. Since a digital image (by definition) doesn’t resolve anything smaller than a single pixel, when a film image is scanned to digital the original silver-crystal scatter is utterly (not partially) lost, and the image is now a grid and not a random scatter. And yet the superstitious film proponents still think it looks like film. So, they’ve self-evidently disproved their own assertion about the random scatter when they agree that film capture still looks like film when shown on an electronic display format. If this random scatter were the magic ingredient, as is being professed, then a film-acquired image would immediately look exactly like a digitally acquired image as soon as it was scanned. And, conversely, merely adding “noise,” “grain,” or “chaos” to a digital image would immediately make it look exactly like film, which I’m sure (rightly) none of them would agree with. (By the way, I agree that grain’s residue is still visible as turbulence or noise in scanned film — I’m merely saying that, once scanned, the image is now a grid with noise rather than a non-grid.)
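
If you want to sanity-check that crystals-per-pixel range, the arithmetic is simple. The inputs below are assumptions, not measurements — a Super 35-ish frame width and a typical published range of silver-halide crystal diameters (roughly 0.2 to 2 microns) — and depending on what you assume, you land in the same order of magnitude:

```python
# Rough sanity check of the crystals-per-pixel range.  All inputs are
# assumptions (Super 35-ish frame width, a typical range of silver-halide
# crystal diameters); crystals don't tile perfectly, so treat the results
# as order-of-magnitude only.
import math

FRAME_WIDTH_UM = 24_900                       # ~24.9 mm frame width, in microns
PIXELS_ACROSS = 4096                          # a "4k" scan

pixel_pitch = FRAME_WIDTH_UM / PIXELS_ACROSS  # ~6.1 um per pixel
pixel_area = pixel_pitch ** 2                 # ~37 square microns

for diameter_um in (2.0, 0.2):                # coarse vs. fine crystals
    crystal_area = math.pi * (diameter_um / 2) ** 2
    print(f"{diameter_um} um crystals: ~{pixel_area / crystal_area:,.0f} per 4k pixel")
```

Either way, each pixel is an average over many crystals, so whatever spatial randomness the crystals had is erased below the pixel pitch.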

Another item: the article talks a lot about resizing (“blowing up”), but fails to mention that there are multiple algorithms for upscaling and downscaling, which determine the so-called “sharpness” to a larger degree than the actual resolution does. For example, I’ve done tests with highly controlled variables that show that a 2k image upscaled to 4k with one of the sharper algorithms might look more perceptually “sharp” than an actual 4k image. (I’m not just talking about debayering, but actual re-sampling: like preparing 2k acquisition for 4k distribution.) This issue is not totally separate from the “artificial sharpening” mentioned above.
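
To make that concrete, here’s a minimal sketch of the same 2k frame upscaled to 4k with two different resampling filters (Pillow used purely as an illustration; the filenames are hypothetical, and this isn’t anyone’s actual mastering pipeline). A windowed-sinc filter like Lanczos overshoots slightly at edges, which reads as extra “sharpness” even though both outputs contain exactly the same original detail:

```python
# Upscale the same 2k frame to 4k with two different resampling filters.
# Pillow is used only for illustration; the filenames are hypothetical.
from PIL import Image

src = Image.open("frame_2k.png")           # hypothetical 2048-wide source
target = (src.width * 2, src.height * 2)   # naive 2x up to "4k"

soft = src.resize(target, resample=Image.Resampling.BILINEAR)
crisp = src.resize(target, resample=Image.Resampling.LANCZOS)  # edge overshoot reads as "sharper"

soft.save("frame_4k_bilinear.png")
crisp.save("frame_4k_lanczos.png")
```

Neither output has any more real detail than the 2k original; the only thing that changed is how the in-between pixel values were computed.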

Also, with statements like "Many actresses know perfectly the virtues of the defocus,” the article perpetuates what I believe is a widely held fallacy. Digital capture can (but doesn’t always) have two attributes that I believe people regularly and mistakenly chalk up to the format being “too sharp.” The first is that traditional video-style capture (though not necessarily newer-style capture) is very contrasty, with clippy whites and crushed blacks. Contrast and “sharpness” are perceptually linked, and an unattractively aggressive contrast is often described by people as “too sharp” rather than the more appropriate “too contrasty.” The other issue at hand is merely that full-resolution on-set monitoring is possible with digital but not with film. The hypothetical angry actress (who doesn’t like “seeing every pore”) would, in the past, never have had the means to see the full-resolution film on set, so she wouldn’t have had the chance to be upset about how she looked. At best, she could see a blurry, smeary, irresolute video tap. That didn’t mean the film wasn’t capturing “every pore,” only that she couldn’t see every pore in all its detail until the film came out in theaters — and even then, she couldn’t see it paused or looped, and she couldn’t see it up close with her face right in the monitor — so she never could have scrutinized it in the same way. If mere lack of resolution made actresses look better, then more movies would have been shot on 16mm or even VHS. Even the gauzy diffusion that was used on actresses’ close-ups in the old studio era is more of a contrast reduction than a resolution reduction.

On top of all this, there are some things in the article that are confused and confusing even if not quite wrong. For example, it describes a windowed sensor as “down-scaling,” but windowing is not down-scaling — it’s extracting. Resizing or re-scaling (which is better called “re-sampling”) is a totally different thing from extracting (also called “windowing” or “cropping”). And confusing a presumably already non-expert reader on this and other topics is not helpful.
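
In concrete terms (numpy and Pillow used only as an illustration, with made-up frame dimensions): windowing keeps a subset of the original pixels untouched and narrows the field of view, while re-sampling keeps the field of view and computes entirely new pixel values.

```python
# Windowing (extracting/cropping) vs. re-sampling -- two different operations.
# numpy and Pillow used only as an illustration; the dimensions are made up.
import numpy as np
from PIL import Image

full = np.zeros((2160, 4096, 3), dtype=np.uint8)   # stand-in for a full-sensor frame

# Windowing: pull a 1080x2048 region out of the middle.  Every pixel kept is
# exactly as it was; only the field of view changes.
window = full[540:1620, 1024:3072]

# Re-sampling: compute a 1080x2048 image from ALL the original pixels.
# The field of view is unchanged; every output pixel is newly interpolated.
resampled = np.asarray(Image.fromarray(full).resize((2048, 1080), Image.Resampling.LANCZOS))
```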

Anyway, sorry to be so down on it. I do think it’s great that people like this are trying to investigate and think for themselves rather than just trumpeting manufacturer specs, but I also think we have to be careful what we put out there when we try to bring people around to being more analytical. An article that sounds authoritative but comes across as confusing and vaguely suspect is likely to scare off the very audience it’s meant to win over, and that could send them running back even more firmly to the camp that believes the bullshit claims of the gadget-mongers.

Okay, sorry this was so long. I hope this hasn’t sounded vitriolic — I’m just trying to be sober and thorough.

Cheers.

-Steve