BACK TO #NerdyFilmTechStuff

REPLY TO YET ANOTHER LARGE FORMAT ARTICLE
April 9, 2022


This is a reply to someone who asked me what I thought of a new article in Y.M. Cinema Magazine.

It's a piece that, like my own Large Format Misconceptions post, tries to debunk some common, ardently held but false beliefs about the optical characteristics (or lack thereof) associated with different sensor sizes in photography.

It's more informal and less technical than my own piece, and I think it's overall quite good. At least in general (if not in all specifics), it comes down on the side of truth over spurious propaganda, unlike the "large format look" article that I critique in my piece. But this new article is still a bit problematic in my opinion. Here is a non-exhaustive list of some things I take exception to in this Y.M. Cinema article:

1.

Not sure where they got the 1.81 ratio of frame size between Alexa Mini and Alexa65.

The custom framing area for the specific project described in the article may well have had a 1.81-to-1 ratio (or any ratio at all, for that matter) because you can draw any project-specific frame lines you like within the sensor area. But that doesn't mean it's meaningful to say that the ratio of the cameras themselves is 1.81:1. The Alexa65 has an active sensor width of 54.12mm and the Alexa Mini has an active sensor width of 28.17mm. So, if you use the full width of each camera and then crop the vertical to your project's aspect ratio, the ratio of sensor sizes is 54.12 / 28.17 = 1.92:1. Similarly, if you don't use the full sensor width and choose to have a pad, then as long as you use the same (horizontal) percentage pad on both cameras, you'll still have a 1.92-to-1 ratio.
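As a sanity check, the arithmetic is simple enough to verify directly (a quick Python sketch using the sensor widths above; the 90% pad is just an illustrative choice):

```python
# Sensor-to-sensor width ratio, computed directly from the active widths.
alexa65_width_mm = 54.12     # Alexa65 active sensor width
alexa_mini_width_mm = 28.17  # Alexa Mini active sensor width

ratio = alexa65_width_mm / alexa_mini_width_mm
print(f"{ratio:.2f}")  # → 1.92, not 1.81

# Using the same horizontal pad percentage on both cameras leaves the
# ratio unchanged, because the pad factor cancels out of the division:
pad = 0.90  # e.g. use 90% of each sensor's width (illustrative)
padded_ratio = (alexa65_width_mm * pad) / (alexa_mini_width_mm * pad)
assert abs(padded_ratio - ratio) < 1e-9
```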

I'm not just nitpicking here: it's a common occurrence to calculate something a little bit wrong (not spectacularly wrong, just a little bit wrong) and then mis-attribute the difference between the expected result and the actual result to some sort of magic, rather than going back and just fixing the math error, which would show that the expected and actual results agree. So, being hasty and not double-checking math can play into misconceptions.

(Again, in the particular project being described in the piece, 1.81-to-1 may well have been the ratio of their custom frame lines, but firstly the article gives no specifics to support that. And secondly, even if true, it's still misleading to say that 1.81 is the ratio between the sizes of the camera sensors as opposed to saying it's the ratio of the one-time custom framing areas.)

2.

In trying to debunk common misconceptions about "large format look," we should not give credence to the misleading mental crutch of "crop factor."

There is no one single magically correct or biblically mandated imaging width that demands being always kept in mind and compared to. And there are now so many format sizes (not just different sensor sizes, but also different project-specific framing areas within a sensor's expanse) that there's no such thing as a "usual" frame size. So, in comparing (in this case) an Alexa65 to an Alexa Mini, you can just directly compare the framing areas of the cameras you're actually using, without also comparing them to a third imaginary camera that we're not using here, that is totally unrelated to what we're doing, and that is less of a familiar or commonly-used size than many others.

Not only does using the unnecessarily elaborate mental model of "crop factor" cause conceptual confusion that gets people spun around, it also adds extra steps to the actual calculations -- precisely the kind of extra steps that can cause the error mentioned above in #1. (Why first calculate two crop factors and then calculate the ratio of those crop factors? You can directly calculate the ratio of the two frame sizes without the extra gyrations of a crop factor -- it just creates useless extra steps in which an error can be made.)

So, I'm not saying that the math of crop factor is wrong, but that it's a completely unnecessary superadded concept that does nothing at all other than add a mental stumbling block to a topic that already has people confused and prolong false credence in the idea that there are certain frame sizes that have magical primacy over others.
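To make the redundancy concrete, here's a sketch (assuming the conventional 36mm full-frame width as the crop-factor reference, itself an arbitrary choice): the reference width cancels out algebraically, so the crop-factor route and the direct route give the identical answer -- the former just adds steps:

```python
# The "crop factor" detour vs. a direct comparison.
REFERENCE_WIDTH = 36.0  # mm; the arbitrary width crop factor compares against

alexa65 = 54.12    # mm, active sensor width
alexa_mini = 28.17  # mm, active sensor width

# Extra-step route: compute two crop factors, then take their ratio...
crop_65 = REFERENCE_WIDTH / alexa65
crop_mini = REFERENCE_WIDTH / alexa_mini
via_crop_factors = crop_mini / crop_65

# Direct route: the reference width was never needed in the first place.
direct = alexa65 / alexa_mini

assert abs(via_crop_factors - direct) < 1e-9
print(f"{direct:.2f}")  # ≈ 1.92 either way
```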

So, no: it's not "all in the 'crop factor.'"

3.

In matching size of blur circles, the article seems to make a mistake in which it gets the math itself right but doesn't explain that math correctly.

As I explain in more detail elsewhere, the article is correct that, to match blur circles, you do what they actually did: multiply the f-number by the ratio of the frame widths.

But it's not correct to do what they say that they did (in their explanation that precedes the actual math): they say you need to change the f/stop by the number of stops that equals the ratio of frame widths.

It's easy to see that what they say they did (which, again, is not the same as what they actually did) is an incorrect formulation because the results it gives aren't just wrong, but are not even coherent. For example, you get different results if you try to match camera A to camera B than if you match camera B to camera A:

If camera A has a sensor that's twice as big as camera B's, then to match camera B to camera A using their incorrect formulation, you'd have to open B's f/stop by 2 stops, because camera A's sensor is twice as big. But to go the other direction and match camera A to camera B, you'd have to close A's f/stop by only half a stop, because camera B's sensor is half as big. This is absurd: it can't be true both that the blur circles match when the cameras are set two stops apart and when they're set half a stop apart.

Even more absurdly: if camera A and camera B have the same frame width, then in the mistaken formulation, the two identical-size formats would have to be set 1 stop apart to match, since the ratio of the two sizes is 1.
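A quick numerical sketch of that incoherence (the widths are hypothetical, not from the article; it uses the standard relationship that closing by n stops multiplies the f-number by 2^(n/2)):

```python
def close_by_stops(f_number, stops):
    # Closing by n stops multiplies the f-number by 2**(n/2);
    # a negative n means opening up.
    return f_number * 2 ** (stops / 2)

# Hypothetical sensor widths: camera A's is twice camera B's.
width_a, width_b = 2.0, 1.0

# The stated (incorrect) rule: the stop offset equals the width ratio.
offset_b_to_a = width_a / width_b  # match B to A: "2 stops"
offset_a_to_b = width_b / width_a  # match A to B: "0.5 stops"
assert offset_b_to_a != offset_a_to_b  # the rule contradicts itself

# And identical formats (ratio = 1) would demand a 1-stop offset:
same_format_ratio = width_a / width_a
assert close_by_stops(4.0, same_format_ratio) != 4.0  # f/4 "matches" f/5.6
```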

So what they said to do in the explanation (change the aperture by 1.81 stops) is wrong while what they actually did (multiplied the f-number by 1.81) was correct (or, at least, it's correct if 1.81 is indeed your ratio of frame widths, which I'm not sure it was).

And multiplying an aperture's f-number by a value is not the same thing as closing that aperture by that same value in stops. Here are some examples to help illustrate that:

If the aperture's f-number is f/1.0 and the value is 10: multiplying f/1.0 by 10 gives you f/10 whereas closing f/1.0 by 10 stops gives you f/32.

If the aperture's f-number is f/2.8 and the value is 1.0: multiplying f/2.8 by 1.0 gives you f/2.8 while closing f/2.8 by one stop gives you f/4.0.

If the aperture's f-number is f/22 and the value is 0.1: multiplying f/22 by 0.1 gives you f/2.2 while closing f/22 by 0.1 stops gives you f/22.8.
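The three examples above can be verified in a few lines of Python, using the standard relationship that closing by n stops multiplies the f-number by 2^(n/2):

```python
def multiply_f_number(f_number, factor):
    # The correct blur-circle-matching operation: scale the f-number itself.
    return f_number * factor

def close_by_stops(f_number, stops):
    # Closing by one stop halves the light, which multiplies the
    # f-number by sqrt(2); closing by n stops multiplies it by 2**(n/2).
    return f_number * 2 ** (stops / 2)

for f, value in [(1.0, 10), (2.8, 1.0), (22.0, 0.1)]:
    print(f"f/{f:g} with value {value:g}: "
          f"multiplied -> f/{multiply_f_number(f, value):.1f}, "
          f"closed by stops -> f/{close_by_stops(f, value):.1f}")
# f/1 with value 10: multiplied -> f/10.0, closed by stops -> f/32.0
# f/2.8 with value 1: multiplied -> f/2.8, closed by stops -> f/4.0
# f/22 with value 0.1: multiplied -> f/2.2, closed by stops -> f/22.8
```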

4.

I object to the statement that "larger sensors require one to get closer to their subject."

At least in this case, the phrase is given in a context that's much more clear-sighted than the usual forum in which that same sentiment appears. So the spurious phrase is delivered in such a way that the sentence in which it lives, taken as a whole, is technically (and trivially) true... but it's still misleading.

And saying something that's only misleading and not technically false is still a problem. Especially when the misleading statement plays into the very misconceptions the piece is trying to debunk.

Different sensor sizes don't force different camera placement. They just don't. Filmmakers are free to put the camera wherever they want. If you think you need to move the camera because of the sensor size, you've already failed to understand the geometry of optics and the crux of shot design.

In motion or still photography (or in animation or in realistic painting or drawing, for that matter), the single decision that most defines one shot as distinct from any other is camera placement: where the entrance pupil (the perspective point) is in space in relation to the scene. If you move the camera (i.e. change your perspective on the scene), you're simply doing a different shot.

So, when you compare two format sizes fairly instead of deceptively, you have to compare photographing the same shot on the two formats, not doing a totally different shot on each format -- which is pointedly not an even-handed comparison.

A fair comparison (that is: a comparison of how different formats capture the same designed shot rather than a comparison of differently designed shots) would consist of setting up each camera so that its entrance pupil has the exact same position in space and is aimed in the same direction, and then selecting a lens and aperture that give the same angle-of-view and blur circles (not the same lens and the same aperture, but equivalent ones that give the same result for the sensor size; I've written elsewhere about how to unambiguously calculate this).
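A minimal sketch of that equivalence calculation: with the entrance pupil fixed in place, scaling both the focal length and the f-number by the ratio of frame widths preserves the angle of view and the blur circles. (The example lens and aperture numbers below are illustrative, not from the article.)

```python
def equivalent_settings(focal_mm, f_number, from_width_mm, to_width_mm):
    scale = to_width_mm / from_width_mm
    # Same angle of view: focal length scales with the frame width.
    # Same blur circles: f-number scales by the same factor.
    return focal_mm * scale, f_number * scale

# e.g. a 35mm lens at f/2.8 on the Alexa Mini (28.17mm wide),
# translated to the equivalent shot on the Alexa65 (54.12mm wide):
focal, stop = equivalent_settings(35.0, 2.8, 28.17, 54.12)
print(f"{focal:.1f}mm at f/{stop:.1f}")  # → 67.2mm at f/5.4
```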

Doing two different shots with two different cameras and then saying, "see, these two cameras have different looks because these two totally different shots didn't come out the same as each other" is obviously a misleading statement that twists the evidence: why the heck would they come out the same!?

So while the article's section called "Sensor size DOESN'T effect DOF" may not have statements that are technically untrue, it does use suggestive and charged language that implicitly reinforces the same commonly held misconceptions it's trying to debunk.

5.

I know the article is brief and not as technical as some of my own resources on the topic, but they probably should have mentioned the difference between f/stop and t/stop.

That's a difference that can cause some confusion: it's the (main) reason that doing the calculation to match blur circles on different cameras/lenses using only the marked apertures on the lenses is an approximation.

Cine lenses are marked in t/stops, not f/stops. If you were to include in the calculation the conversion from t/stop to f/stop for the actual lens models being used, you could get a more precise match.
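For illustration, that conversion is straightforward: the t/stop accounts for the lens's actual light transmittance, while the f/stop is purely geometric, and T-stop = f-stop / √(transmittance). (The 85% transmittance figure below is a made-up example, not a real lens spec.)

```python
import math

def f_stop_from_t_stop(t_stop, transmittance):
    # T-stop = f-stop / sqrt(t), so f-stop = T-stop * sqrt(t),
    # where t is the lens's light transmittance (0 < t <= 1).
    return t_stop * math.sqrt(transmittance)

# A hypothetical lens marked T2.8 that transmits 85% of the light:
f_num = f_stop_from_t_stop(2.8, 0.85)
print(f"f/{f_num:.2f}")  # → f/2.58, the geometric aperture to use
# in the blur-circle-matching math, rather than the marked T2.8.
```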

6.

It's quite misleading (and bizarrely so -- why do this? it seems so random) that the article concludes the entire discussion by selecting an arbitrary, hypothetical, and strangely hyper-specific example of a way that you could actively game a comparison so that there would be a difference in look between two format sizes that wouldn't otherwise exist, and then proclaims that "this is the large format look."

Their specific example is to take one specific lens model that is uniquely engineered to be blurry at the edges of a larger sensor area and compare how it looks when you use that lens with exactly the type of larger taking area it was designed for that can see the blurry edges to how it looks when you use it on a smaller format whose smaller image area is contained within the non-blurry center. This is transparently a gamed example.

The "look" here is not the "look of format size" but "the look of lens engineering." This particular lens was engineered to have blurry edges. You can have a blurry edge look (or not) just as easily on one format size or another. For the smaller image area, you could use a different lens model that's designed to have blurry edges on that smaller format size (or you could ask the lens technician at your rental house to defocus the edges of a normal lens's image circle) and now the smaller format would have what they're proclaiming is the "big format look." Or conversely (and more to the point), all you'd have to do is shoot the larger format in their example with almost any real world lens model at all other than the one highly specialized model they cherry picked for the example and now the larger format would have what they're defining as the "small format look."

Having options for differently engineered lenses is a very real and not a merely hypothetical option. And part of that pragmatic reality is that, because so many more lenses have been produced for standard motion picture framing areas (around 22mm to 24mm across) for over 100 years, there are currently many more options for specific models with specific engineering looks for standard motion picture frame sizes than there are for the bigger sizes, which have only lately become more common. There are many new and legacy lenses that can offer a clean high-quality look, others that offer a vintage degraded look or a blurry-edge look, and many more options besides. There are lens models that are nearly perfectly rectilinear across the field, others that are not rectilinear at all, and yet others that are rectilinear in the middle and non-rectilinear at the edges.

There are many such lens looks, but (ipso facto) the attributes of the look that are created by the lens engineering are... created by the lens engineering. Not by the camera sensor. It's deceptive to say "if I select one lens engineering look for a bigger sensor and select a different look for a smaller sensor, then the difference in look is a result of the sensor size." You could just as easily have reversed the types of lenses selected or made them both the same instead of different. It's like photographing one specific dark/contrasty scene with one camera brand and photographing a totally different bright/flat scene with a different camera brand and then proclaiming that first brand enforces a darker/contrastier look while the second enforces a brighter/flatter look.

As is often the case when misconceptions are propagated: the example is deceptive because it purports to be a comparison of one attribute (sensor size) but fails to hold other pertinent variables constant (in this case lens engineering) and then perpetuates the magical thinking by misattributing differences created by one variable to the other variable. And in this case, it's done slyly: it may seem like it's holding lens engineering constant by using the same lens model, but it's actually not: a fair comparison would use equivalent lens models, not the same lens models (like: either use lenses on each camera that are designed to be blurry on the edges of that camera's sensor for both or use lenses that are designed to be clean on the edges for both).

This is the same mistake found in many of the other false/deceptive comparisons as well: an even-handed comparison demands doing the equivalent thing in both cases, not doing the same thing in both cases. If you want to compare which of two different brands of shoe is more comfortable and one brand is marked in US sizes and the other is marked in UK sizes, you don't compare them by trying on the same marked size number in each; you compare them by trying on the sizes that actually fit you.