DR and sick people
  • I have never thought bit depth was related to DR at all. That said, storing extreme DR in certain formats can have issues with banding.

    In your example, @vitaliy_kiselev, we could have a 1-bit file covering 100 stops of dynamic range. This would cause extreme banding. More bits, less banding. (With today's tech, bit depth only needs to resolve down to the sensor's native noise level, as beyond that we are just sampling the noise more finely.)

    An example of this is the GH4/GH5 recording in V-Log L. Similar issues exist when recording S-Log as well.

    Obviously the solution is simply "don't use log", but then the look is baked into the image, which leaves less room for creativity afterwards. Also, sometimes on set you don't have time to get the image 100% perfect in camera, so it is best to do that work afterwards. Hence why RAW is so popular.

    Similarly, in audio recording 16-bit is great for distribution, but 24-bit gives much more versatility for editing, EQ'ing, repair, etc.

    Then again, one only has to look at all the banding issues with 8-bit distributed video on YouTube and the like to get an idea of why a 10-bit display and a 10-bit file could help here.

    Noise dithering can also be used to fix such banding issues (which can be extreme not only in skies but in other scenes too). However, video land doesn't like the idea of "adding" noise to images.

    Obviously this discussion is only talking about the final image, not capture or the sensor. So it is a good idea to use CGI as an example, or even motion graphics. If I were to generate a lovely gradient from black to white, we would need at least an 8-bit image for the banding not to be obvious. Personally I have found 10-16 bit is best for no banding artifacts at all in this example. Now, if one were to dither the 8-bit file, the gradient banding would not be as prominent (see the sketch at the end of this post).

    There must also be prior research into human perception of banding and light intensity: a level at which changes in intensity become invisible to the human visual system.

    One would then take that value, together with the black point and white point of the display device, and that should tell us the best step value, which could easily be translated into a bit depth.

    Similar research tells us that 60p feels more lifelike than 30p, for example. At the end of the day, displays are only intended to be viewed by humans.
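
    To make the banding and dithering point concrete, here is a minimal sketch (assuming Python with NumPy; the gradient width, bit depths, dither amplitude, 1% visibility threshold and 1000:1 display contrast are illustrative assumptions, not figures from this thread). It quantizes a synthetic black-to-white ramp with and without dither and counts how long the flat bands get:

    ```python
    import itertools
    import math

    import numpy as np

    # Illustrative parameters (arbitrary choices, not values from the thread).
    WIDTH = 4096          # samples across the gradient
    BITS = 8              # bit depth to quantize to
    LEVELS = 2 ** BITS    # 256 code values at 8 bits


    def quantize(signal, levels):
        """Snap a 0..1 float signal to the nearest of `levels` code values."""
        return np.round(signal * (levels - 1)) / (levels - 1)


    def longest_run(values):
        """Length of the longest run of consecutive identical samples."""
        return max(len(list(group)) for _, group in itertools.groupby(values))


    # A "perfect" black-to-white ramp in floating point.
    ramp = np.linspace(0.0, 1.0, WIDTH)

    # Plain quantization: long runs of identical code values = visible bands.
    banded = quantize(ramp, LEVELS)

    # Dithered quantization: add roughly +/- half a code value of noise before
    # rounding, trading the flat bands for fine grain.
    rng = np.random.default_rng(0)
    dither = rng.uniform(-0.5, 0.5, WIDTH) / (LEVELS - 1)
    dithered = quantize(np.clip(ramp + dither, 0.0, 1.0), LEVELS)

    print("longest flat band, plain 8-bit:   ", longest_run(banded))    # ~16 samples
    print("longest flat band, dithered 8-bit:", longest_run(dithered))  # much shorter
    print("longest flat band, plain 10-bit:  ", longest_run(quantize(ramp, 2 ** 10)))

    # Back-of-envelope for the "perception threshold -> step value -> bit depth"
    # idea above, assuming steps of about 1% in luminance stop being visible and
    # a display with a 1000:1 black-to-white contrast ratio:
    steps = math.ceil(math.log(1000) / math.log(1.01))
    print("steps needed:", steps, "-> about", math.ceil(math.log2(steps)), "bits")
    ```

    The dithered 8-bit ramp uses the same 256 code values; it just breaks the flat bands up into grain, which is why dithering can hide banding without adding bits.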

  • @StudioDCCreative

    Thanks for the good post, but I strongly suggest moving it to a separate topic. Maybe keep only the directly related part here.

    "You see, bits are always powers of two (e.g. 1 bit = 6 dB in these cases)."

    This is wrong, for example.

    This topic is ONLY about the digital part and dynamic range. Not about sensors, not about eyes, not about monitors.

    I'll get to the whole picture step by step.

  • DR and shade resolution are entirely separate concepts. DR is the difference between the highest input value that can be represented and the lowest, below which noise is the dominant value. The human eye, for instance, has an approximate dynamic range of 100 dB give or take, but cannot experience this full range at the same time, due to optical glare and masking, among other phenomena. In strict dB terms, an f-stop or EV is equivalent to ~6 dB, making the human eye theoretically capable of perceiving roughly 100/6 = 17 EV, though practical limitations bring this down to the standard figure of roughly 15 EV. In other words, the optic nerves saturate (cannot represent a higher value) at a linear stimulus value about 100 dB above the lowest value at which noise becomes the dominant signal on the nerve, yet we cannot distinguish that many gradations of tone when presented with a signal that extreme: either we squint into the highlights and lose our ability to see into the shadows, or we are partly blinded in the bright sun in order to catch a glimpse of the interior of the cave or building.

    In digital terms, this is roughly similar: a given sensor may be capable of representing some number of EV of actual stimulus below the point at which its cells saturate and above the point at which internal cell noise becomes dominant. In Vitaliy's terms, these are "high buildings" and "low buildings" respectively. Obviously we can't tell "low buildings" apart from "no buildings", because somewhere between the two is the noise floor.

    But this says absolutely nothing about the in-betweens, as Vitaliy has been pointing out. And here is where bit depth plays a part, though a very nuanced part in some ways. You see, bits are always powers of two (e.g. 1 bit = 6 dB in these cases). But when the ends of the scale are fixed reference points, as in the real world where the sensor is physically limited by an actual saturation point and a noise floor, bits cease to be meaningful with respect to EV and only act by dividing the distance between the upper value ("high building") and the lower bound ("low building") into either a greater number of divisions (more bits) or fewer, but farther apart, divisions (fewer bits).

    So, to Vitaliy's point: DR and bit depth have nothing to do with each other. Bit depth has to do with the ability to represent shades with greater or lesser precision relative to one another. Dynamic range has to do with how far apart the endpoints (bright and dark) are in terms of actual light stimulus. DR cannot be inferred from bit depth, and bit depth cannot affect or control dynamic range given the same sensor, assuming the ADCs are properly calibrated to it.

    For a more practical example: if I have a chip that, at a given exposure, saturates at the light intensity of a 100W light bulb and has a noise floor at the light level put out by a 4W nightlight, adding bits won't mean that I can see into darker shadows if I'm exposing for the same 100W bulb. It will mean the shadows that are brighter than the 4W level might have a bit more detail, but the darker shadows will still be clipped to noise (note, I did not say clipped to black). To increase the actual dynamic range I need to change the chip, so that its noise floor is lower and darker input can still be represented with meaningful data rather than noise. More bits can mean, in this case, that I take greater advantage of this detail by more clearly separating the colour and shade of, say, the cockroach on the dark floor from the shadow in which it is running, but they won't help me "see" any further into that same shadow if I keep the exposure correct for the face lit by the 100W bulb. I still may not see the black scorpion hiding by the couch leg unless I choose to clip the light bulb to white and catch more light to brighten the shadows by changing the exposure. (A small numeric sketch follows at the end of this post.)

    Does this help explain the difference?
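
    To put rough numbers on this, here is a minimal sketch (assuming Python with NumPy; the 1% noise floor, 5% shadow level, patch size and bit depths are invented illustration values, not measurements of any real chip). It converts the 100 dB figure into stops with the same 20*log10 convention used above, then quantizes the same noisy shadow patch at several bit depths, so the step between shades shrinks while the measured dynamic range stays pinned by the noise floor:

    ```python
    import math

    import numpy as np

    # One stop is a doubling of light; in the 20*log10 amplitude convention
    # used above, that is 20*log10(2) ~= 6.02 dB per stop.
    DB_PER_STOP = 20 * math.log10(2)
    print(f"100 dB ~= {100 / DB_PER_STOP:.1f} stops")   # roughly the 17 EV figure

    # Toy fixed endpoints (illustrative numbers, not any real sensor):
    # FULL_SCALE is the saturation point, NOISE_RMS is the noise floor.
    FULL_SCALE = 1.0
    NOISE_RMS = 0.01    # 1% of full scale -> 40 dB, i.e. ~6.6 stops of true DR
    print(f"true DR: {20 * math.log10(FULL_SCALE / NOISE_RMS) / DB_PER_STOP:.2f} stops")

    # A dim patch sitting above the noise floor (the shadows brighter than the 4W).
    rng = np.random.default_rng(0)
    shadow_patch = 0.05 + rng.normal(0.0, NOISE_RMS, 100_000)

    for bits in (8, 10, 12, 14):
        levels = 2 ** bits
        step = FULL_SCALE / (levels - 1)    # spacing between adjacent shades
        coded = np.round(np.clip(shadow_patch, 0.0, FULL_SCALE) * (levels - 1)) / (levels - 1)
        measured_dr = 20 * math.log10(FULL_SCALE / coded.std()) / DB_PER_STOP
        print(f"{bits:2d} bits: step = {step:.6f} of full scale, "
              f"measured DR ~= {measured_dr:.2f} stops")
    ```

    More bits only divide the same two endpoints more finely; to actually see further into the shadows you have to lower the noise floor (change the chip) or change the exposure, exactly as described above.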

  • "I think the wording of your post may be a bit unclear."

    They are extremely clear examples showing some utter stupidity.

    "But I do agree. F-stops and bit depth are not really related."

    F-stops were never mentioned, and they are a totally different thing.

  • I think the wording of your post may be a bit unclear. But I do agree. F-stops and bit depth are not really related. Most people have very limited understanding of the terminology.

  • @caveport

    "Are you sure?"

    I am sure that 99.9% of people talking about DR in video files of a specific bit depth do not understand what they are talking about. Sadly.

  • Are you sure?