HDR Demo FAQ
May 16, 2025
I'll try to address some topics that have come up relating to my Debunking HDR demo.
The Q's in the FAQ aren't word-for-word questions I've received but composites of topics that have come up repeatedly.
Questions in bold typeface; answers in regular:
When you say “scene white,” are you talking about “diffuse white?”
No: it’s pointedly a different concept.
Diffuse White is unambiguous in scene-referred space but ambiguous in display-referred space, whereas my proposed idea of "Scene White" is the opposite. Here's what I mean:
A scene-referred camera-original image file is a literal record of the photographed scene's physical relative contrast. An image that's been prepped with its photographic look and is ready to go to a display device, on the other hand, has artistic decisions baked into it, not just slavish literalism.
“Diffuse white” is the light reflected by a perfect diffuser (the white chip on a Macbeth chart is almost but not quite a perfect white diffuser) when the amount of light falling on it is what your camera is exposed for. So: in the real scene (and in the un-interpreted camera original image file), Diffuse White’s relationship to neutral gray is unambiguous: an 18% gray card’s luminance is 18 percent of Diffuse White’s (that’s the very definition of 18% gray!).
But that’s in the original scene, or in the camera file that hasn’t yet been prepared with a photographic look. It’s a different story in the final rendered image, in which there is no fixed relationship at all: not project-to-project, and not even shot-to-shot. This is true firstly because each project’s show LUT has SOME type of "shoulder" that reduces contrast in the highlights, but no two LUTs are likely to have the same shoulder as each other. And secondly because we actually do color grade movies: the final movie is the product of artful decisions, not just a clinically accurate photometric record of the scene. If you do something like "reduce contrast" or "soften the highlights" on a shot in color grading, you’re likely moving the luminance of diffuse white closer to middle gray, and if you do the opposite, you’re likely moving it away.
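Here's a minimal sketch of that effect, with made-up numbers (the `shoulder` curve below is a hypothetical stand-in for a show LUT's highlight rolloff, not any real LUT):

```python
# Illustrative only: how a show LUT's highlight "shoulder" changes the
# diffuse-white-to-middle-gray ratio. This toy curve stands in for a
# show LUT; it is not any real LUT.

def shoulder(x, knee=0.5, strength=0.6):
    """Toy tone curve: linear below the knee, compressed above it."""
    if x <= knee:
        return x
    return knee + (x - knee) * strength  # reduced highlight contrast

middle_gray = 0.18   # scene-referred 18% gray
diffuse_white = 1.0  # scene-referred diffuse white

# In the scene-referred file the ratio is fixed by definition:
print(f"{diffuse_white / middle_gray:.2f}")                      # 5.56

# After the toy shoulder the ratio shrinks -- and a different shoulder
# (or a grading decision) would land on a different ratio:
print(f"{shoulder(diffuse_white) / shoulder(middle_gray):.2f}")  # 4.44
```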
So that relationship between diffuse white and middle gray, although very well fixed in the original scene-referred image files, is not at all fixed in the final image. Which means you can’t use it as a reference for maintaining relative contrast when converting an already-authored project from one display colorspace to another.
So the idea of Scene White is to have something that is constant for a whole project (and hypothetically could be constant across projects) and can be used as a reference point for converting.
So the crux of the whole thing is: my idea of Scene White is that (in the final rendered image) it has a fixed relationship to middle gray and no fixed relationship to diffuse white. And it serves as a breakpoint below which relative contrast will be perfectly (not just approximately) maintained when converting between an absolute and relative system, and above which relative contrast can diverge in conversion (though it remains as similar as possible without clipping or otherwise losing tonal differentiation when converting either direction).
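A minimal sketch of that breakpoint behavior (my own illustration, not a spec; the parameter names and the rolloff shape are placeholders):

```python
import math

# Toy model of the Scene White breakpoint. Values are linear light,
# normalized so Scene White == 1.0. `headroom` (how far the source can
# exceed Scene White) and `display_headroom` are placeholder parameters.

def convert(y, headroom=4.0, display_headroom=2.0):
    if y <= 1.0:
        # Below Scene White: identity, so every ratio between values
        # (i.e., relative contrast) is preserved exactly.
        return y
    # Above Scene White: smoothly remap (1, headroom] onto
    # (1, display_headroom]; contrast diverges, but nothing clips.
    t = math.log(y) / math.log(headroom)
    return display_headroom ** t

print(f"{convert(0.18) / convert(0.09):.1f}")  # 2.0 -- shadow ratios untouched
print(f"{convert(4.0):.1f}")                   # 2.0 -- source peak lands on display peak
```

The exact rolloff is an open design question; the point is only the split: exact preservation below the breakpoint, graceful divergence above it.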
“Is what you’re proposing like HLG?” or “Why don’t you mention HLG in the demo?”
HLG ("Hybrid Log Gamma") is an alternate transfer function (not the main line; the main line is PQ) offered in the rec2100 document which, like SDR, has relative instead of absolute values on the y-axis of its transfer function.
I didn’t mention HLG in the demo because there’s no real-world in-use path for images to be encoded in HLG at the final viewing stage. Consumer disks, streaming platforms, and theatrical “HDR” are all encoded in PQ. It’s possible to use mezzanine files in post with HLG, but it’s also possible to use ANY transfer function for mezzanine files (even non-standard ones) -- the “HDR” stuff that actually gets to final viewers in the real world today is overwhelmingly (if not 100%) encoded in PQ (not HLG) and then converted (if needed) to the screen’s colorspace.
That’s why I didn’t mention it: because I was addressing the problems with what’s substantively going on in the real world, not weird edge cases or hypotheticals. But let’s talk about it here, since people keep mentioning it.
The rec2100 document has “High Dynamic Range” in its title and in its stated scope, and the document includes specifications for HLG, so in that sense, of course HLG is indeed “HDR.” But if we look at what HLG actually is definitionally in that document, rather than at the name of the document, it’s quite literally rec2020 (an “SDR” colorspace) with a slightly different transfer function: same primaries, same x and y axes on the transfer function, same whitepoint. So if HLG (as opposed to PQ) is "HDR," then what even is HDR? At that point there is no difference at all between HDR and SDR as display colorspaces, so why even claim to have two different systems (“HDR” and “SDR”) if there’s literally no difference?
As discussed in the demo: if you change only the SHAPE of the transfer function and not the units on the axes, then there is no difference in what the system can do, because every point on the x-axis still has exactly one correlated point on the y-axis. The shape only affects efficiency: it doesn’t affect what’s possible, only how many bits and bytes you need in order not to see banding.
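A tiny way to see that for yourself (my own toy comparison: 8-bit quantization, plain 2.4 gamma versus straight linear; the specific numbers only matter as an illustration of allocation):

```python
# Two curves with identical axes (relative 0..1 in and out) can carry
# exactly the same images; the shape only changes where the quantization
# steps land. Compare the linear-light size of one 8-bit step near black.

BITS = 8
STEP = 1.0 / (2**BITS - 1)  # one code-value step in the encoded signal

def linear_decode(v):
    return v

def gamma24_decode(v):
    return v ** 2.4

# Linear-light difference between code values 10 and 11 (deep shadows):
for name, decode in (("linear", linear_decode), ("gamma 2.4", gamma24_decode)):
    lo, hi = decode(10 * STEP), decode(11 * STEP)
    print(f"{name}: step near black spans {hi - lo:.2e} in linear light")

# Gamma spends much finer steps in the shadows (where banding is most
# visible) and coarser ones in the highlights: same representable range,
# different allocation of the same 256 codes.
```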
So, HLG (in effect if not in name) is just yet another SDR colorspace. And not only that, but the one and only thing that’s even a little bit different about it (as prescribed in rec2100) from the "SDR" colorspace rec2020 is the efficiency of its curve, which is worse than traditional gamma-style curves for most real-world applications.
So, it’d be doubly crazy to start making it a substantially implemented encoding standard for actual delivery: firstly because its one and only difference is a detriment in most cases (and only a small benefit in outlying cases), and secondly because the proliferation of standards is itself a problem that’s causing images to be displayed wrong because of all the chaos and confusion. I don’t personally think we should be crowding the field with more standards unless we’re sure that their benefit is real and not just in name.
We don’t want “even though this colorspace is pretty much exactly the same as its predecessor, we’re gonna confuse things even more with yet another 'format' just because it has the word ‘high’ in the title of its white paper.”
So, what I'm proposing differs from PQ not only in being relative instead of absolute, but also in other ways. And I’m proposing we don’t implement anything new at all (not a version of my own proposal nor any other) until the new thing being implemented is ACTUALLY an advantage over existing standards.
My own proposed change is to regain the quality that’s visibly being lost in real-world streaming implementations of HDR (due to streaming’s incredibly high compression rates combined with rec2100’s inefficiency) by:
-Going back to a relative system with a gamma-style transfer function instead of an absolute system with an inverse-log transfer function (and NOT using the "hybrid log/gamma" style of HLG either); see the sketch just after this list.
-Keeping a wide gamut but only going as wide as is actually used in practice (which is more like P3 than rec2020 -- a bit expanded from rec1886 but not absurdly so). Literally no one finishing at professional cinema post houses is going outside of P3 in their mastering and almost no current monitors would be able to show it if they did. And not only that, it's not even a benefit for "future proofing" because evidence shows that it’s a benefit (not a failing) that real-world monitors’ physical illuminants are closer to P3 than to 2020. That's because illuminants with spectral distributions narrow enough to get close to 2020 can cause the whole image (not just pixels near the gamut edges) to look different to different viewers and to look spectacularly screwed up for people with abnormal color sensitivity, even the most common type of abnormality. (Abnormal vision such as deuteranomaly is often called “color blindness,” but I don’t like the word “blindness” -- it’s seeing colors differently, not seeing them the same but diminished.)
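Here's the promised sketch of what relative-vs-absolute means in practice (a toy of mine; the 2.4 exponent and the example peak luminances are placeholders, not proposed values):

```python
# A relative gamma-style decode adapts the same signal to any display's
# capability; an absolute (PQ-style) decode demands one fixed luminance
# per code value, whatever the display or viewing environment.

def relative_decode(signal, display_peak_nits, gamma=2.4):
    """Signal 0..1 -> luminance scaled to whatever this display can do."""
    return display_peak_nits * signal ** gamma

signal = 0.9  # the same encoded value everywhere
for peak in (100, 600, 1000):  # dim panel, mid HDR panel, bright panel
    nits = relative_decode(signal, peak)
    print(f"{peak:>4}-nit display shows {nits:.0f} nits")
```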
And I’m also proposing to retain PQ’s ability to “punch through the ceiling” (which HLG doesn’t offer), but to do so in a way that’s not absurdly inefficient like PQ, which wastes somewhere in the neighborhood of 35% of the available quantization steps just to be able to do these dazzling highlights. I showed in the demo how wasteful this is, because there are other methods that could allow you to do the same punching-through while making much better use of the limited data rate.
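You can sanity-check that figure yourself from the published PQ constants. Exactly how much counts as "wasted" depends on where you draw the line for dazzling highlights, so here are a few thresholds (203 nits is BT.2408's HDR reference-white convention; the others are arbitrary):

```python
# How much of the PQ signal range sits above a given luminance?
# Constants are from ITU-R BT.2100.

M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Absolute luminance in cd/m^2 (0..10,000) -> PQ signal (0..1)."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

for nits in (100, 203, 400, 1000):
    v = pq_encode(nits)
    print(f"{nits:>5} nits -> signal {v:.3f}; {1 - v:.0%} of the range above")
```

Wherever you draw that line, a large slice of the code values is reserved for luminances that most graded content rarely touches.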
And yet another benefit of doing the dazzling highlights with this proposed different method is that it becomes JUST a benefit instead of a trade-off. Remember: right now (if you actually follow the specifications), you can do dazzling highlights OR be allowed to adapt your display to the surrounding environment, but not both. This would allow both while also making bitrates more efficient.
And even one more benefit: right now there is so much confusion about how to decode a nominally absolute signal in a system that’s never actually absolute that all kinds of crazy stuff is going on. I’ve spoken to multiple people who see HDR titles coming over their home systems as very dim and dull while SDR looks as expected: not because that’s how they’re supposed to look, but because implementation is so confusing. Going back to a relative instead of absolute system could (in addition to the efficiency advantages) alleviate some of this chaos and make it much more likely for viewers to see images correctly, by making it much less ambiguous how a TV or a computer’s display system should convert numeric RGB code values into physical luminance and chromaticity on a screen, no matter the local circumstances (such as the user-defined overall luminance or the display’s physical limitations).
So, I’m proposing not to make yet another change (for example: don’t launch an all-new “format” of consumer HDR disk that uses HLG instead of PQ) until the colorspace we’re changing to is ACTUALLY a substantive improvement.