OK. I completely get it. I'm on board. I now also understand why, LPowell, you have mentioned several times in the GH1 hack threads that high exposure is more likely to cause high data rates and hence crashes - the bulk of the captured data is all pushed up to the right.
For a few years I had a 5D Mk1 that I absolutely LOVED for RAW stills work. There's one image I've always regarded as supreme, above all others I've taken, that I show to demonstrate the brilliance of the camera (with a Zeiss 50mm). I just checked in Aperture (an awesome Apple app, by the way) and yup, the master file is WAAAY over-exposed, with what appear to be blown-out highlights, but the adjusted version is simply perfect. I have real-world verification of something that isn't placebo, or too subtle to notice, but simply better.
So, LPowell, would you agree that ETTR has a place in 8 bit acquisition? It seems even more so than in RAW, to me.
ChromaSoft blog: "[In Nikon lossy-compressed NEF files,] raws have 683 codes versus the 4096 to 16384 that uncompressed Nikon raws have."
Taken at face value, this statement implies that the bit-depth of compressed NEF files has been truncated from the original 12-bit or 14-bit RAW image sensor data down to less than 10 bits. If that were in fact the case, I would no longer regard the data as RAW, but as mutilated. What is actually done is to map the original linear curve into a near-logarithmic curve that is closer to how the eye perceives color.
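To make that near-logarithmic mapping concrete, here is a minimal sketch, assuming a square-root-shaped companding curve - the real Nikon lossy-NEF lookup table is different, but similar in spirit. The 683-code and 12-bit figures are taken from the quoted post; everything else is illustrative.

```python
import numpy as np

# Minimal sketch of linear-to-near-log companding, assuming a square-root-shaped
# curve (the actual Nikon lookup table differs, but is similar in spirit).
LINEAR_MAX = 4095      # 12-bit linear raw
OUTPUT_CODES = 683     # figure quoted from the ChromaSoft post

def compress(linear):
    """Map a linear 12-bit value onto one of 683 companded codes."""
    return np.round(np.sqrt(linear / LINEAR_MAX) * (OUTPUT_CODES - 1)).astype(int)

def expand(code):
    """Approximate inverse: recover a linear value from a companded code."""
    return (code / (OUTPUT_CODES - 1)) ** 2 * LINEAR_MAX

linear = np.arange(LINEAR_MAX + 1)
codes = compress(linear)

# Codes are spent where the eye is most sensitive: the deep shadows get far
# more output codes per linear count than the brightest stop does.
for lo, hi, label in [(0, 127, "darkest linear counts (0-127)"),
                      (2048, 4095, "brightest stop (2048-4095)")]:
    print(f"{label}: {codes[hi] - codes[lo]} companded codes")

# The worst-case linear step between bright-end codes stays small - on typical
# sensors it is of the same order as the shot noise up there, which is the
# usual argument that this companding is not "mutilation".
step_at_top = expand(OUTPUT_CODES - 1) - expand(OUTPUT_CODES - 2)
print(f"linear step between the two brightest companded codes: ~{step_at_top:.0f} counts")
```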
Based on that quoted statement alone, I would be highly skeptical of the objectivity of any technical claims made by this author.
In addition, @bdegazio is correct in pointing out that the author's tests reveal virtually nothing about color depth - they show only the difference in signal/noise ratio at different ISO settings on one particular camera (a Canon G10).
For good examples of how ETTR can be effective in the right shooting conditions, I found this article informative:
@bdegazio OK, I get what you're saying. So his tests were about noise, where there's no advantage, whereas the real issue is the variety of colours and chroma available for each stop in the frame. I'd love to see a test as thorough as the above article, but with colour levels - a real world (yet scientific) test!
@nomad Is it not true that it should work with 8-bit as well as with 12-bit? The number of available levels is already reduced, but accessing more of them (at the brighter end of the histogram) makes sense, no? (Perhaps with 12-bit it could be said to be less relevant, as there are so many more levels within every stop to begin with.)
And the whole idea of exposing to the right really only works with RAW: it's all about maximizing the use of the sensor's DR (the range between the saturation level of the photosites and the noise floor) and then adapting it to the target display in post.
Our main problem, even with the hacked GH2, is 8 bit. You can't squeeze a high DR into 8 bits and expect to stretch detail back out in post where you want it, while compressing other parts to fit the limited range of our displays (TV has only about 5 stops in an ordinary viewing environment, and public cinema can show about 9). But most sensors exceed 10 stops these days, and the best get close to something like 14 (Arri Alexa).
We have no choice but to adapt our look in-camera, as close as possible to the range of the target medium, since any massive correction in post starting from 8 bits will inevitably introduce more banding. So using the 'famous' Technicolor Picture Style on a 5D is pretty much useless for a low-contrast scene, and even with contrasty scenes it can introduce banding in post if you don't like the milky look it delivers and try to grade it away.
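As a quick illustration of that banding argument, here is a minimal sketch using a synthetic low-contrast 8-bit ramp rather than real footage; the numbers are made up, only the mechanism matters.

```python
import numpy as np

# Minimal sketch: why heavy grading on 8-bit material introduces banding.
# A low-contrast scene recorded into 8 bits only occupies part of the range;
# stretching it back out in post cannot create new code values, it only
# spreads the existing ones apart, leaving visible gaps (bands).
low_contrast = np.linspace(90, 160, 1920).round().astype(np.uint8)   # flat scene, ~70 codes used

# Stretch to the full 0-255 range in "post"
stretched = ((low_contrast.astype(float) - 90) / (160 - 90) * 255).round().astype(np.uint8)

print("unique levels before grade:", len(np.unique(low_contrast)))
print("unique levels after grade: ", len(np.unique(stretched)))               # same count, spread over 256
print("largest gap between adjacent levels:", np.max(np.diff(np.unique(stretched))))  # several codes -> banding
```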
This applies to any camera with less than 10 bits of recording.
These conclusions may be correct regarding noise. But the purpose of ETTR is to maximize luminance/color resolution, not minimize noise. I think you're confusing two image quality issues here.
[edit]
Just scanned the article. The author simply doesn't understand the issue. He seems to think that ETTR is exclusively about noise. His tests only involve shooting flat panel color cards, with no gradations.
I submitted this comment to his blog:
You're confusing noise performance with luminance/color resolution. "Expose-to-the-Right" (ETTR) is meant to enhance the latter, not the former.
If you want to test ETTR you need to have color or luminance gradations in your shot, not flat panels. The improvement will EASILY be evident then.
His conclusions: There is no advantage to image quality from ETTR that can't be duplicated by selecting a lower ISO, if a lower ISO setting is available. In some situations, such as where there is in-camera noise reduction, ETTR actually increases noise. That's what the practical tests show, and the theory of the case confirms the practical results to be correct.

The only situation where there is an advantage to ETTR is if you're already at the lowest ISO setting your camera offers, and you use ETTR to synthesize a lower ISO. However, given the noise performance of most modern cameras, that advantage is often very small. The test I did here - a small-sensor, high-pixel-count camera - is the best possible scenario for seeing an improvement. Using a modern DSLR, the improvement would be marginal at best.

Any kind of ETTR brings significant disadvantages in the shape of color and tone-curve shifts that will have to be repaired in post-processing. While these shifts are small, they are easily the equivalent in effect of changing profiles. So, in effect, ETTR negates the advantages that modern raw developers such as Lightroom bring with them. Bottom line - ETTR offers improved image quality in only one specific situation - where you can use ETTR to reach a lower effective ISO than your camera has. In all other situations, ETTR will only ever decrease image quality.
Actually I've been researching this, and expose-to-the-right is not as great as is being said here. The effective difference is equal to lowering the ISO in all situations, EXCEPT where the ISO will go no lower. This has been demonstrated in some tests I've seen online, in a reply to the Luminous Landscape article, and it makes a lot of sense to me.
At ISO 100, if you feel the need for reduced noise (surely pretty pointless really), then by all means ETTR. For all other situations, simply reduce the ISO for an equal or better result than ETTR offers.
@dkitsov It's not just about sensor size; our eyes have very wide-angle lenses and are pretty slow - even in a very dark room, wide open, they're about f/3.5. In daylight they can be as slow as f/14.9, apparently. At anything other than close focus, you're hardly going to get any bokeh.
As for film formats, you know about 70mm, right? It is not flame bait, merely stating facts. Obviously you are incapable of interpreting it as such.
I give you my point again: Just because that's how we see things, it does not mean it's more cinematic.
So 32 steps in the darkest areas of the frame explains well why we see so much mud and macro-blocking in those areas, especially on lower-bitrate recordings.
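For anyone wondering where a figure like "32 steps" comes from, here is the arithmetic as a small sketch. It assumes a purely linear encoding; real 8-bit video is gamma-encoded, which pushes codes back toward the shadows, but the sensor data underneath is linear, and that is what ETTR exploits. In the 8-bit linear case, 32 codes is what's left only three stops down from clipping.

```python
# Code values available per stop under a purely linear encoding, at 8 and 12 bits.
def codes_per_stop(bit_depth, stops):
    total = 2 ** bit_depth
    # The brightest stop holds half of all code values, the next stop half of
    # the remainder, and so on - purely a property of linear encoding.
    return [(s, total // 2 ** s) for s in range(1, stops + 1)]

for bits in (8, 12):
    print(f"{bits}-bit linear:")
    for s, codes in codes_per_stop(bits, 6):
        print(f"  stop {s} (counting down from clipping): {codes} codes")
```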
I'm done shooting in dimly lit conditions. I want to shoot at ISO 100 wherever possible, as the results speak for themselves - lights are the key (pun intended), and yes, I will start to expose to the right and adjust in post! Thank you.
@Ptchaw
Canon 5D Mark II sensor size: 36x24mm. Super35 (the most common film format in motion pictures): roughly 22x19mm, and the full size is never used. Difference in area: roughly 3.5x (comparing the areas actually used). Yes, wide use of "full-size" sensors for video production is a recent development.
f-number = focal length / aperture diameter (pupil opening diameter).
Super35 image area (as used for 2.35 projection) = roughly 240 mm². Average human eye image-sensing area (fixed gaze) = roughly 254 mm².
DOF is a function of image area, focal length, aperture, resolution and the "circle of confusion". The perception of very deep DOF in humans comes from our ability to reframe and refocus very quickly, plus the "RAM" function our visual system has.
Please make your calculations before making wild claims. I will respond to no further flame-bait from you.
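For anyone who actually wants to run those numbers, here is a small sketch of the standard thin-lens DOF approximation. The circle-of-confusion values and the focal-length pairing are common rules of thumb I'm assuming for illustration, not figures from this thread.

```python
# Sketch of the standard thin-lens depth-of-field approximation, assuming
# rule-of-thumb circles of confusion (the exact CoC depends on viewing
# conditions and resolution).
def dof(focal_mm, f_number, subject_m, coc_mm):
    f = focal_mm
    s = subject_m * 1000.0                      # work in millimetres
    H = f * f / (f_number * coc_mm) + f         # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near / 1000.0, far / 1000.0

# Roughly the same field of view and f-stop on two formats: the larger format
# needs a longer focal length, which is what produces the shallower DOF.
for label, focal, coc in [("Super35, 35mm lens", 35, 0.025),
                          ("Full-frame 5D, 50mm lens", 50, 0.030)]:
    near, far = dof(focal, 2.8, 3.0, coc)
    print(f"{label} @ f/2.8, subject at 3 m: in focus from {near:.2f} m to {far:.2f} m")
```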
@dkitsov Are you serious? 35mm is not a new development; shallow DOF has been around for decades through large-format film. Our DOF is huge compared to 35mm film, not just because of perception, but because our retina is comparatively much smaller. As before, just because that's how we see things, it does not mean it's more cinematic.
@soapey It works fine for JPEGs, as ETTR is not about the recording format but about the way the sensor works.

@Ptchaw Human eye depth of field in the dark is hardly "huge". While it's true that in bright daylight the human eye is at roughly f/8.3, in a dark room it can be at f/2.1 (or, according to later research, f/3.2), which is still rather below the traditional cinematic f/4. (Lower f-numbers in lenses and super-shallow DOF are a recent development in cinema; as a matter of fact, super-shallow DOF is still only possible with the 5D II due to its extra-large, from the cinema POV, sensor.) The perception of very deep DOF in humans comes from our ability to reframe and refocus very quickly, plus the "RAM" function our visual system has.
@dkitsov Just because we, as humans, see more motion blur in low light, it does not mean it is more natural for cinema. For example, our DOF is huge and we see in stereo - the complete opposite of what would typically be considered cinematic.
The 2011 article says "Now, just to be sure that there is no misunderstanding – this approach only applies to raw files, not in-camera JPGs." Doesn't that mean it won't apply to video either?
...the camera would also need to record a low bitrate metadata stream indicating the normalization factor used in each macroblock. Naturally, this would require modifications to existing JPEG and AVCHD codecs, which are long overdue for other reasons as well... Yes indeed. You know what would also be simply amazing? Simple EXIF info in the existing setup. How complicated is it for Panasonic to include ISO, f-stop and shutter info with a video file? As far as I can tell it is not available even with in-camera playback on the GH2.
@stonebat Luminous Landscape's characterization of today's image recording technology as "19th Century" is a valid criticism, but it really only scratches the surface. While a Live View camera could automatically normalize the exposure of each entire frame, that would also be a compromise. What the camera should do is normalize the exposure for each individual macroblock in the image, in order to make optimal use of the digital coding range in each case. To make this approach work properly, the camera would also need to record a low bitrate metadata stream indicating the normalization factor used in each macroblock. Naturally, this would require modifications to existing JPEG and AVCHD codecs, which are long overdue for other reasons as well.
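A rough sketch of what that per-macroblock normalization could look like, run on a synthetic frame; the 16x16 block size, the float scale metadata and all names here are my own assumptions for illustration, not anything from an actual codec.

```python
import numpy as np

# Rough sketch of per-macroblock exposure normalization as described above.
# A real implementation would live inside the codec; this only shows the idea.
BLOCK = 16

def normalize_blocks(frame):
    """Scale each 16x16 block so its peak uses the full coding range;
    return the normalized frame plus the per-block scale metadata."""
    h, w = frame.shape
    scales = np.ones((h // BLOCK, w // BLOCK), dtype=np.float32)
    out = frame.astype(np.float32).copy()
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            blk = out[by*BLOCK:(by+1)*BLOCK, bx*BLOCK:(bx+1)*BLOCK]
            peak = blk.max()
            if peak > 0:
                blk *= 1.0 / peak            # push this block's peak to 1.0
                scales[by, bx] = 1.0 / peak
    return out, scales                       # 'scales' is the low-bitrate metadata stream

def denormalize_blocks(frame, scales):
    """Invert the normalization using the recorded metadata."""
    out = frame.astype(np.float32).copy()
    h, w = frame.shape
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            out[by*BLOCK:(by+1)*BLOCK, bx*BLOCK:(bx+1)*BLOCK] /= scales[by, bx]
    return out

# Round trip on a synthetic frame: dark blocks get far more of the coding range,
# and the metadata lets us undo the scaling exactly (up to float precision).
rng = np.random.default_rng(0)
frame = rng.random((64, 64)) * rng.random((4, 4)).repeat(16, 0).repeat(16, 1)
coded, meta = normalize_blocks(frame)
restored = denormalize_blocks(coded, meta)
print("max round-trip error:", float(np.abs(restored - frame).max()))
```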
@itimjim Nothing is free. ETTR works because of the linear nature of the sensor. When you increase the ISO of a digital sensor you are not really increasing the ISO, you are increasing electrical gain, and increasing the gain lowers the dynamic range. I am not sure of the source at this point (a Red Center podcast, perhaps), but apparently gain that gives you an extra stop of light will take an extra stop of dynamic range away from you. Low dynamic range = no advantage to ETTR. ETTR only works through variation of the shutter and aperture, all else being equal. ETTR is a good technique to employ if post-production color grading is intended.

Regarding the 1/50 of a second: one can argue that while the 180-degree shutter is the cinematic standard, for low-light scenes that are intended to read as low-light scenes (and not scenes that are dark just because you did not have good light), it might be OK to go as low as shutter speed = frame rate. We as humans have a natural smearing of vision in a low-light environment (in addition to the wide-open pupil), so a bit more motion blur would feel natural in such scenes, as opposed to daylight scenes.
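A minimal worked example of that stop-for-stop trade, assuming analog gain applied before an ADC with a fixed clipping point and a roughly constant read-noise floor (real sensors deviate from this somewhat, and the numbers here are purely illustrative).

```python
import math

# Gain vs dynamic range under the stated assumptions: gain raises both signal
# and noise floor, while the clipping point stays put, so the usable range
# above the noise floor shrinks by roughly one stop per stop of gain.
CLIP = 4095.0          # 12-bit ADC clipping level
READ_NOISE = 2.0       # noise floor in ADC counts at base ISO (illustrative)

for stops_of_gain in range(0, 5):
    gain = 2 ** stops_of_gain
    dr_stops = math.log2(CLIP / (READ_NOISE * gain))
    print(f"+{stops_of_gain} stop(s) of gain (ISO x{gain}): ~{dr_stops:.1f} stops of DR")
```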
But given an inflexible shutter of 1/50, there are only three options to increase data: more light, a wider aperture, or higher sensitivity. In many situations we may already be at the limit of how wide we can open the aperture, or of how much light we can throw on the scene. So how do we expose to the right then? Increase sensitivity - or ISO, as most of us call it.
So the question here is (and I don't have the answer): is it better to ETTR and increase noise, or to expose as close to your intended levels as possible and keep the noise down?
I think the basic idea of what he's saying is easy to get and right on: all digital video has underlying noise, but you only see it in dark areas because there isn't enough signal there to hide it. Light allows more data to be recorded on your sensor/card, so you see that data and not the noise. So it's better to overexpose and print down than to underexpose and bring up the smaller amount of recorded data along with all the noise.
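To put some illustrative numbers on that, here is a small simulation assuming Poisson shot noise plus a fixed read noise; the electron counts are made up and don't describe any particular sensor. Scaling an image down in post divides signal and noise equally, so the SNR of the brighter capture survives the "print down".

```python
import numpy as np

# Small simulation: overexpose and print down vs expose at the intended level.
# Assumes Poisson shot noise plus a fixed Gaussian read noise (illustrative).
rng = np.random.default_rng(1)
READ_NOISE_E = 5.0          # electrons of read noise (made up for illustration)
PIXELS = 200_000

def snr(mean_electrons):
    """Signal-to-noise ratio of a flat patch captured at a given exposure."""
    signal = rng.poisson(mean_electrons, PIXELS).astype(float)
    signal += rng.normal(0.0, READ_NOISE_E, PIXELS)
    return signal.mean() / signal.std()

# Same midtone patch, exposed one stop apart. Pulling the brighter capture
# down by one stop in post scales signal and noise together, so comparing the
# raw SNRs is equivalent.
ettr = snr(2000)            # exposed to the right, then pulled down in post
normal = snr(1000)          # exposed at the "intended" level
print(f"SNR exposed to the right (then pulled down 1 stop): {ettr:.1f}")
print(f"SNR exposed at the intended level:                  {normal:.1f}")
```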