LOG vs. LINEAR vs. LUTs vs. Math: Battle ROYALE!!!!!
  • Soooooo..... lotta debate going on, lotta maths and such being tossed about, lotta newbs trying to wrap their heads around the new color space. So let's get it on.

    So when grading @driftwood's V-Log Shogun footage, I discovered that the Panasonic-provided Varicam V-Log LUT did an OK job of remapping the luminance and color values into a linear space, but didn't put the highlights quite where I thought they should go. (Unless of course the footage was really that grossly underexposed, which might be the case, but maybe not.)

    I also remembered Panny saying something about V-Log being very close to Cineon, so I tried that, and it delivers a very nice image with a lot of density, but pretty dark. (Again, might be underexposure.)

    S-log did a really nice job of mapping out a full range, but until I shoot bracketed footage and determine just where the sweet spots are, there's no way to tell which LUTs are best for putting V-log L into Linear to begin grading.

  • @shian, which Panasonic-provided Varicam V-Log LUT do you mean exactly?

  • Panasonic's V-Log-L tone curve requires exposure calibration in order to precisely match the dynamic range of the footage to the curve of the Varicam V-Log LUT. This is necessary because V-Log-L is not purely logarithmic throughout its 12-stop dynamic range: the bottom 4 stops have a non-logarithmic rolloff down to 0% reflectance. In addition, the entire V-Log-L curve is offset with a 12.5% pedestal. The DVX200 (and presumably the GH4R) enforces this LOG-LUT match by scaling the maximum sensor level down to 79 IRE and biasing the black level to 12.5%. Unfortunately, that discards the camera's 80-109 IRE range, leaving you with little more than 7-bit data precision when V-Log-L is encoded into 8-bit H.264 format. What's worse, the bottom 4 stops of the 12-stop dynamic range are encoded with just 4 bits to cover all 4 stops. Such a crude level of digitization makes the effort to capture 12 stops of dynamic range futile. Bottom line: V-Log-L was designed for recording at 10-bit or better data precision; 8-bit H.264 encoding is not quite good enough to preserve shadow detail.
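As a rough way to check the precision argument above, here is a minimal Python sketch that counts how many 8-bit code values each stop of scene exposure receives under a V-Log-style curve. The constants are the ones given in Panasonic's published V-Log/V-Gamut reference curve, quoted from memory, and the full-range 0-255 mapping is a simplifying assumption; treat the output as illustrative, not as a measurement of any camera.

```python
# Count 8-bit code values per stop under an assumed V-Log-style curve.
# Constants follow Panasonic's published V-Log reference curve (assumed,
# check the official document); 0..1 is mapped to 0..255 full range here.
import math

CUT1, B, C, D = 0.01, 0.00873, 0.241514, 0.598206

def vlog(x):
    """Linear reflectance (0.18 = 18% grey) to normalized V-Log code value."""
    if x < CUT1:
        return 5.6 * x + 0.125           # linear toe with the 12.5% pedestal
    return C * math.log10(x + B) + D     # logarithmic segment

MIDDLE_GREY = 0.18
for stop in range(-8, 5):                # stops relative to middle grey
    lo = vlog(MIDDLE_GREY * 2 ** stop)
    hi = vlog(MIDDLE_GREY * 2 ** (stop + 1))
    print(f"stop {stop:+d} to {stop + 1:+d}: ~{(hi - lo) * 255:.0f} code values")
```

With these assumptions the lowest few stops each land on only a handful of code values, which is the shadow-precision concern described in the post above.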

  • This is necessary because V-Log-L is not purely logarithmic throughout its 12-stop dynamic range: the bottom 4 stops have a non-logarithmic rolloff down to 0% reflectance. In addition, the entire V-Log-L curve is offset with a 12.5% pedestal.

    Let me just show why this logic is flawed.

    Suppose that your LUT (to get 709 footage from LOG) is H(x); we will reduce everything to luma for now, for simplicity.

    Now, consider this H(x) to be defined as G(F(x)), where F(x) is the intermediary function turning LOG back to linear; it is defined explicitly and does not depend on exposure or anything else (only on the actual value x). And G(x) is our usual function converting linear sensor footage into Rec 709; here we do not have any intervals at all, so it is not dependent on exposure either.
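A minimal sketch of the decomposition described above, assuming Panasonic's published V-Log curve for F(x) and the standard Rec.709 OETF for G(x) (both are illustrative stand-ins, not anyone's actual LUT): the whole conversion is one function H = G(F(x)), and any exposure compensation is just a gain applied in the linear domain between the two.

```python
# Sketch of H(x) = G(F(x)): F undoes the (assumed) V-Log encoding,
# G is the standard Rec.709 OETF, and exposure is a gain applied in linear.
import math

B, C, D, CUT2 = 0.00873, 0.241514, 0.598206, 0.181

def F(v):
    """Assumed inverse V-Log: encoded value back to linear reflectance."""
    if v < CUT2:
        return (v - 0.125) / 5.6
    return 10 ** ((v - D) / C) - B

def G(x):
    """Standard Rec.709 OETF: linear light to a display-referred value."""
    x = max(0.0, min(1.0, x))
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

def H(v, gain=1.0):
    """The whole 'LUT' as one function; gain is exposure compensation."""
    return G(gain * F(v))

print(H(0.42))            # roughly where middle grey encodes in V-Log
print(H(0.42, gain=2.0))  # the same pixel pushed one stop brighter in linear
```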

  • @Vitaliy_Kiselev

    The LOG-LUT workflow provided by the post production tools isn't designed to be as flexible as your mathematical manipulations would require. As you claim, any exposure mismatch between the footage and the LUT could be easily corrected by adjusting the gain of the footage before feeding it into the LUT. But that would require manual calibration of post production levels, and Panasonic's engineers wanted to avoid that complication. They instead enforce the proper V-Log-L exposure level at the time of recording, producing a turnkey workflow that requires no technical understanding of the process on the part of the user.

  • As you claim, any exposure mismatch between the footage and the LUT could be easily corrected by adjusting the gain of the footage before feeding it into the LUT. But that would require manual calibration of post production levels, and Panasonic's engineers wanted to avoid that complication.

    I did not say anything like this. I just said that you will get the usual underexposed footage, as always.

    And you do not need any calibration, as you can just tune things while looking at the proper tools if you need to change how the footage looks.

    My math example was intended not to reflect the workflow, but to show that the interval-based definition of the LOG function does not mean anything.

    They instead enforce the proper V-Log-L exposure level at the time of recording, producing a turnkey workflow that requires no technical understanding of the process on the part of the user.

    Well, technical understanding and grading skills are required if you are shooting such things as LOG and raw.

  • @Vitaliy_Kiselev

    you can just tune things while looking at the proper tools if you need to change how the footage looks.

    Yes, so long as you tune things after first processing the log footage through a precalibrated LUT. If you use an uncalibrated LUT or just eyeball it without a LUT, you're basically using the log curve as a subjective flavoring, like any other tone curve.

  • Yes, so long as you tune things after first processing the log footage through a precalibrated LUT. If you use an uncalibrated LUT or just eyeball it without a LUT, you're basically using the log curve as a subjective flavoring, like any other tone curve.

    Again: underexpose the footage and you will get an underexposed result with this LUT.

    A LUT is just a method of defining a function, and you can reproduce such a LUT with a set of transforms (see the sketch after this post). So the best way to supply any footage transformation is to use multiple tunable nodes (hence the approach used in serious grading apps).

    Another strange point is the idea that you need some super special way to check footage for proper exposure during shooting, while it is simple if you use the proper tools present even in cheap monitors and many cameras (false color will do, as will even zebras if you can set their level).
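A minimal sketch of that point, with made-up placeholder nodes rather than the API of any real grading application: a chain of tunable transforms can be baked into a 1D LUT, and a 1D LUT is nothing more than a sampled function.

```python
# Bake a chain of transforms into a 1D LUT, then apply it.
# The 'nodes' are hypothetical stand-ins for grading operations.
def bake_lut(nodes, size=1024):
    lut = []
    for i in range(size):
        v = i / (size - 1)
        for node in nodes:               # apply each transform in order
            v = node(v)
        lut.append(v)
    return lut

def apply_lut(lut, v):
    """Nearest-neighbour lookup; real tools interpolate between entries."""
    i = round(max(0.0, min(1.0, v)) * (len(lut) - 1))
    return lut[i]

nodes = [lambda v: v * 1.1,                    # gain
         lambda v: max(0.0, min(1.0, v)),      # clamp
         lambda v: v ** (1 / 1.2)]             # gamma tweak
lut = bake_lut(nodes)
print(apply_lut(lut, 0.18))
```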

  • @lpowell said

    there's no such thing as an "extended highlight range"

    Sure there is. A typical camera clips at 3.5-4 stops above middle grey. The Varicam 35 clips at 6.5 stops above middle grey. Canon Log clips at 5.3 stops above middle grey. S-Log2 clips at 6.2 stops above middle grey. S-Log3 clips at 7.7 stops above middle grey. (These figures are converted to linear multiples in the sketch after this post.)

    Your claim that "you don't want to devote a lot of bits to encoding noise" is absolutely the wrong approach to preserving shadow detail.

    Then please tell Canon, Sony, Panasonic, GoPro, and Arri that they have done it wrong. All of their camera log color spaces use decreasing precision towards black.
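For reference, the clipping points quoted above can be converted into linear multiples of middle grey with simple arithmetic. The stop figures below are the ones quoted in the post, used for illustration, not new measurements.

```python
# Convert "clips at N stops above middle grey" into a linear multiple of
# middle grey (0.18). Stop figures are the ones quoted above.
for name, stops in [("typical display profile", 3.5), ("Canon Log", 5.3),
                    ("S-Log2", 6.2), ("Varicam 35 V-Log", 6.5), ("S-Log3", 7.7)]:
    print(f"{name}: clips at {2 ** stops:.0f}x middle grey "
          f"({0.18 * 2 ** stops:.1f} in scene-linear terms, 1.0 = 100% white)")
```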

  • @Vitaliy_Kiselev

    Another strange point is the idea that you need some super special way to check footage for proper exposure during shooting.

    With V-Log-L on the DVX200 (and presumably on the GH4R), "proper" exposure is highly unintuitive. The maximum output level of the sensor is scaled down to 79 IRE, and there's no way to capture anything but white in the 80-109 IRE range. That means you have to expose your highlights to fit within 79 IRE rather than the 100-109 IRE range you're used to with the other built-in profiles.

  • this one http://pro-av.panasonic.net/en/varicam/35/dl.html

    @shian, you mean VariCam 35 3DLUT V-Log to V-709 (Ver.1.00)? That LUT maps to Rec.709, not to a linear space.

  • With V-Log-L on the DVX200 (and presumably on the GH4R), "proper" exposure is highly unintuitive. The maximum output level of the sensor is scaled down to 79 IRE, and there's no way to capture anything but white in the 80-109 IRE range. That means you have to expose your highlights to fit within 79 IRE rather than the 100-109 IRE range you're used to with the other built-in profiles.

    Well, USE TOOLS, Luke. As I said, it is super simple. As for the lack of information in the upper values: it is just an utter failure by Panasonic's engineers.

  • They instead enforce the proper V-Log-L exposure level at the time of recording, producing a turnkey workflow that requires no technical understanding of the process on the part of the user.

    @lpowell, Panasonic does not enforce any particular way of working or setting exposure levels. They readily provide the definition of the V-Log color space, which can be used to transform the footage into another log or a linear color space, enabling a film-style or VFX-style workflow. What you call "manual calibration of post production levels" is just exposure compensation and color balancing, which are totally normal and standard parts of any film workflow.

    The "turnkey" workflow using their V-Log-to-709 LUT is not really a workflow at all, but a quick way of previewing footage that hasn't been graded yet.

    With V-Log-L on the DVX200 (and presumably on the GH4R), "proper" exposure is highly unintuitive. The maximum output level of the sensor is scaled down to 79 IRE, and there's no way to capture anything but white in the 80-109 IRE range.

    Have you verified this yourself, or are you relying on the early tests done by other people?

  • @balazer

    A typical camera clips at 3.5-4 stops above middle grey.

    https://en.wikipedia.org/wiki/Middle_gray

    "Middle grey" is not an absolute reference point, in sRGB terms, it's defined as 50% of the maximum brightness, which is the highlight clipping point of the image sensor. The "extended" terminology used to refer to the additional 4-stops of V-Log range compared to 12-stop V-Log-L is an artifact of how Panasonic chose to scale the highlight clipping point of each camera's image sensor. In the Varicam's V-log profile, highlight clipping is scaled down to about 90% of the top of the 16-stop V-Log range. In the DVX200's V-log-L profile, highlight clipping is scaled down to about 72% of the top of the 16-stop V-log range. In both cases, the full range of highlight sensitivity is recorded from each sensor, it is just scaled differently in 16-stop V-Log versus 12-stop V-Log-L. The chart below shows how this scaling works. (Note that in this chart IRE scaling is not quite the same as percentages, since 100% = 109 IRE.)

    In short, the "extended 4-stop highlight range" terminology is marketing-speak; it is not a physical property of the image sensor. All cameras capture the full linear highlight range of the image sensor up to the point of clipping. An internal tone curve is then used to manipulate that data into ranges tailored for a variety of professional and consumer purposes.

    Have you verified this yourself, or are you relying on the early tests done by other people?

    I'm relying on reports from Panasonic's DVX200 and GH4R beta testers, who, IMO, have the tools and track records to know what they're talking about.

    [Attached chart: V-Log scaling.jpg]
  • Extended highlight range is not marketing speak. It's a very real property of the camera's behavior, which you need to understand when you're shooting.

    Take any camera that records into a display-referenced color space and a camera log color space. Shoot something in the display-referenced color space. Note which objects in the image are below clipping and which are clipped. Now switch to the camera log color space, keeping the same ISO setting. Some of the objects that were clipping before now will not be clipping. The highlight range has been extended. You should understand the highlight range of your camera's output to know how to expose and how to set the ISO setting. (A numerical version of this test is sketched after this post.)

    Of course in a camera log mode you may not be able to choose all of the same ISO settings that are available in a display-referenced mode.
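A numerical version of the test described in the post above, using the standard Rec.709 OETF as the display-referred encoding and the same assumed V-Log-style curve as in the earlier sketch (and ignoring any knee a real camera profile might add): the same scene-linear highlight clips in the display-referred encoding but still fits under the log curve.

```python
# Same highlight, two encodings: display-referred Rec.709 clips at scene
# linear 1.0, while the assumed V-Log-style curve still has headroom.
import math

def rec709_oetf(x):
    x = max(0.0, min(1.0, x))            # display-referred: clips at 1.0
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

def vlog(x):                             # same assumed curve as earlier
    if x < 0.01:
        return 5.6 * x + 0.125
    return 0.241514 * math.log10(x + 0.00873) + 0.598206

highlight = 0.18 * 2 ** 3                # a highlight 3 stops above middle grey
print(rec709_oetf(highlight))            # 1.0 -> clipped
print(vlog(highlight))                   # ~0.64 -> still below the log clip point
```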

  • Extended highlight range is not marketing speak. It's a very real property of the camera's behavior, which you need to understand when you're shooting.

    It is just a software thing; the sensor is a linear device (well, almost) that does not care or know about it :-)

    As soon as you dump all the IRE, middle gray and such stuff, it will be much simpler and closer to reality.

    We just have: sensor (linear) -> ADC (still linear) -> raw (linear) -> processing (log, Rec 709, anything).

    All this complex stuff is introduced at the processing step.
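A toy illustration of that pipeline (the photon count, the 50% quantum efficiency, and the 12-bit ADC are arbitrary assumptions): every step is linear until the processing step applies whatever curve you asked for.

```python
# Toy pipeline: sensor (linear) -> ADC (linear) -> raw (linear) -> processing.
# All figures are arbitrary; only the structure matters.
import math

photons = 10_000                          # light hitting one photosite
electrons = photons * 0.5                 # sensor: linear response
raw = int(electrons / 8_000 * 4095)       # ADC: still linear, 12-bit here
linear = raw / 4095                       # normalized raw, still linear
log_value = 0.241514 * math.log10(linear + 0.00873) + 0.598206  # processing: log curve
rec709_value = 1.099 * linear ** 0.45 - 0.099                   # processing: Rec.709 curve
print(raw, round(log_value, 3), round(rec709_value, 3))
```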

  • @balazer

    Note which objects in the image are below clipping and which are clipped. Now switch to the camera log color space, keeping the same ISO setting. Some of the objects that were clipping before now will not be clipping.

    That is nothing more than under-the-hood ISO fudging and tone curve manipulation by the firmware. In digital photography, "ISO" is a convenient fiction; what actually matters is the exposure gain of the tone curve. The image sensor responds exactly the same in both cases. So long as you haven't saturated the sensor, whether highlights clip or not is determined simply by the amount of gain applied to the output of the image sensor.

  • ISO is not fiction. It defines the relationship between the brightness of the exposure on the camera's sensor and the output values. And it totally matters when you decide how you are going to expose and how to set the ISO. Are you using a light meter? Are you using the camera's built-in exposure meter? Are you using a monitor LUT? Are you looking at a histogram or false color display? You need to know how your camera maps into the output range, and the performance of each part of the output range for each ISO setting. Where does it clip? Where in the shadows does noise become apparent? Where does color accuracy suffer? These things are not just properties of the sensor. They change when you change the ISO setting.

  • ISO is not fiction. It defines the relationship between the brightness of the exposure on the camera's sensor and the output values.

    I do not know what "brightness of the exposure" is; there are just raw sensor values.

    The ISO setting can affect analog gain (the part before the ADC), digital gain, some processing, and some other things (noise reduction parts). In the digital world, ISO is quite an inaccurate thing, with firms using various tricks to look better for the given number you set in the menu or on a knob.

  • I do not know what "brightness of the exposure" is; there are just raw sensor values.

    Light. Photons. Before the sensor produces raw sensor values, there is light falling on the sensor. That light can be quantified, and related to the camera's output.

  • You have forgotten that before the sensor produces raw sensor values, there is light falling on the sensor. That light can be quantified, and related to the camera's output.

    Again, these are vague terms: what does "quantified, and related to the camera's output" mean?

    The physics of sensors is very well known, and all we have is raw sensor values (what happens before is not our task). In processing, developers also use some knowledge of the sensor, lens, and filter properties, as well as how the exact pixel position affects light-gathering efficiency. But for the log discussion, as a simplification, we can assume that the sensor is almost perfect, has no natural vignetting and no filters on the pixels, and that all we have is luma values. Where this does not fit, we can use a more complex approach.

    Note that this is a topic about LOG things, not about light physics.

  • Light can be quantified as luminous exposure, measured in lux seconds (illuminance multiplied by the exposure period). You can relate it to the camera's output, for example, by saying that at a certain ISO setting, a certain luminous exposure produces a certain output value. (One concrete formulation is sketched after this post.)

    ISO matters because in a camera log mode, the ISO setting changes the mapping between raw sensor values and the output log space. The mapping is not constant. And the sensor gain is different in log mode than in a display-referred mode, for the same ISO setting. And not every ISO setting is available in log mode. A camera log color space isn't just another gamma curve.

    I really suggest reading Larry Thorpe's Canon Log white paper: http://learn.usa.canon.com/app/pdfs/white_papers/White_Paper_Clog_optoelectronic.pdf

    and also Jeremy Selan's Cinematic Color white paper: http://cinematiccolor.com/
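One concrete, hedged way to put numbers on "relating light to the camera's output" is the saturation-based speed from ISO 12232, which ties the lux-second exposure that just saturates the sensor to an ISO rating. The 78 lx·s constant below is quoted from memory of that standard; treat it as an assumption to verify.

```python
# Saturation-based ISO speed (ISO 12232, constant quoted from memory):
# the luminous exposure (lux-seconds) that just reaches sensor clipping
# maps to an ISO rating as S = 78 / H_sat.
def saturation_iso(h_sat_lux_seconds):
    return 78.0 / h_sat_lux_seconds

print(saturation_iso(0.39))   # a sensor clipping at 0.39 lx*s rates about ISO 200
```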

  • @balazer

    ISO is not fiction. It defines the relationship between the brightness of the exposure on the camera's sensor and the output values.

    ISO was originally defined objectively as film speed. Nowadays, it's just a number that camera manufacturers manipulate to make the various camera modes appear to work in a familiar manner to consumers. Different sensors have a wide range of sensel pitches and well capacities, and have widely varying sensitivity to light. But they're all pretty much linear, and with decent optics, each can handle bright daylight. In a fixed-lens camera, the maximum output level of the sensor is scaled to produce enough overexposure to fully blow out the JPEG or H.264 encoder at whatever number the manufacturer selects as the "minimum ISO" (typically 100, 200, or 400). At the opposite end of the ISO range, what counts is how high you can boost the ISO setting before the image gets too noisy, which is specific to each image sensor. At base, it's all relative to the highlight clipping level of the sensor, regardless of how you manipulate the tone curve and its exposure gain. Dynamic range extends downward from that highlight clipping point, until you run out of enough photon sensitivity or bit depth to discriminate anything useful from black.

    BTW, that Canon white paper is a prime example of technological marketing-speak. It's intended to wow you with mathematical diagrams in the hope you won't notice how the tonal depth of the C300's 12-stop dynamic range is obliterated by the crude digitization of the internal 8-bit H.264 encoder.

  • For the novices among us, I think this technical talk is getting too far into the weeds. I don't believe anyone is being educated by this back-and-forth at this point. What I'd like to know is: how are we going to be successful in exposing our cameras using V-Log L? Can we get predictable results using the tools in camera, or will we need to use other means? How should we use zebras or the histogram, etc., to get the right exposure?