420, 422, 444, rescaling and colors flame
  • Made from http://www.personal-view.com/talks/discussion/9542/gh4-4k-panasonic-video-camera-official-topic#Item_798

    Intended for flame only: theories of "different magical colors" and such.

  • "10 bit color has a large "hide" amount of color information, once its recorded into 8 bit, all these "hide" information is lost FOREVER,."

    Unfortunately, this shows quite a big lack of logic. Please check the link I provided above. To oversimplify: 8-bit/10-bit is not magic, it is an electron count (in the linear case of raw; the same logic applies in nonlinear spaces). If you take four buckets that can each hold from 0 to 255 electrons and pour their contents into one large bucket, you get from 0 to 1020 electrons, a range that already needs 10 bits to store (see the small sketch at the end of this post). Is it clear?

    Also, 4:2:0 or 4:2:2 does not mean the result is inferior by default; it just means that the encoder uses properties of human vision to reduce the amount of information (in color sampling).
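
    To put the bucket example in code (a rough illustration only, not what any camera actually does):

      # Four neighbouring 8-bit samples (each 0..255), e.g. one 2x2 block of 4K pixels.
      block = [255, 254, 252, 251]

      total = sum(block)                 # the sum ranges 0..1020, so it no longer fits in 8 bits
      print(total, total.bit_length())   # 1012 10

      # The largest possible sum:
      print(sum([255, 255, 255, 255]))   # 1020 -- needs 10 bits to store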

  • I do agree that there is no magic, but to restate my point: it is OK to get 10-bit, but it is a kind of false 10-bit compared to native 10-bit (or at least to something coming from a source of more than 10 bits). The math makes 10-bit, but the real colors don't. The algorithm cannot imagine and re-create color that has already been destroyed by lowering the bit depth. As you said, it is already encoded "using properties of human vision to reduce the amount of information". So 4K 4:2:0 has only the human-vision amount of color; all the "deep" color has already been destroyed. You just get better reproduction of those colors, better pixel sharpness at 2K, and of course somewhat better gradation if this 2K frame is stored in a 10-bit file.

  • The math makes 10-bit, but the real colors don't.

    So 4K 4:2:0 has only the human-vision amount of color; all the "deep" color has already been destroyed. You just get better reproduction of those colors

    OK, let's have you explain it formally and with logic. Nothing is destroyed "magically". At the sensor level, loss occurs during well clipping and due to noise (thermal noise and ADC noise). We are not talking about these losses. But it is good to know that this is the reason we do not see 16-bit ADCs in m43 sensors (it would make zero sense).

    The thing you are talking about is the "losses" that can occur during conversion of raw RGB into YCC for each pixel. This conversion can be complex and is affected by many user settings. So none of these "losses" are permanent, and you can get any part of the original information using proper exposure and proper settings.

    I'll oversimplify again and just illustrate your point for the people who will read this flame :-)

    Suppose we kill all deep shadows for all channels, say the 2 least significant bits (so only 6 bits contain information). If you sum 4 such values, the 2 lowest bits of the result still carry no information, but the top 8 bits will carry valuable information, as the sketch below shows.
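
    A toy illustration of the last paragraph (made-up numbers, Python):

      # Zero the 2 least significant bits of each 8-bit sample
      # ("kill the deep shadows"), then sum a 2x2 block.
      samples = [201, 198, 203, 197]
      truncated = [s & ~0b11 for s in samples]   # only the top 6 bits carry information

      total = sum(truncated)
      print(total)        # 792 -- a multiple of 4, so its 2 lowest bits are empty
      print(total >> 2)   # 198 -- the top 8 bits still carry a useful average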

  • I mean 2 kinds of losses: one from lowering the bit depth and the other from codec losses. If you take 12-bit RAW and compress it into an 8-bit compressed codec, these are permanent losses.

  • If you take 12-bit RAW and compress it into an 8-bit compressed codec, these are permanent losses.

    That is a completely different thing. You do realize that raw is linear (with small simplification), that compression is not related to this at all (again, in simplified terms), that 8-bit YCC is nonlinear (and its relation to raw depends on a huge number of things and settings), and that human vision is also nonlinear?

    So now you want to argue that 10-bit ProRes has permanent losses compared to 12-bit raw. Yes, maybe. But they are minimal, because these two bit values mean different things. And it is not related to our previous posts at all.

  • Your logic is based on "visible information"; I'm talking about invisible, but stored, information. A higher bit depth plus a less compressed codec can store more of this "invisible" information. The invisible, but stored, information can be pushed in post. 4K 8-bit 4:2:0 already "kills" all the invisible information, so how can it be achieved in 2K? The invisible, but stored, information can be used to increase the dynamic range. I'm not only talking about color, but about data. Proper exposure doesn't save you from data loss.

  • Information is information; no such thing as "invisible information" exists. It is just a bad term. You also constantly mix many things into one. Please read my posts carefully, as each conversion and compression plays its role, but it is a clear role. No magic here.

    The topic area is ONLY nonlinear YCC space and rescaling. Not real or imaginary losses in raw conversion.

  • By "invisible information" I mean all the data that is stored but its hided by highlights or darkness. If 10bit, they can be retrieved, once 8bited, they are all lost. That's simple.

  • All the data you try to "restore" is already averaged data. So yes, it is possible to get more perceived dynamic range, and also more perceived resolution, both in chroma and luma.

    Not only has this data been output from the sensor to the ADC in a limited way (and noise has nothing to do with it here), but the moment it enters the codec pipeline, data is INTRINSICALLY LOST, due to the nature of a codec. If All-Intra is used, different results from IPB compression may occur.

    When an algorithm is made to "restore" information, from 4K to whatever shit lower resolution you choose, it is only making average calculations. Can it make more data from a higher-resolution to a lower-resolution video? Of course, but don't expect a perfect image, since the information that is added, even in a proper way, is averaged data.

    Don't confuse what is mathematically proper with what is perceptually proper.

    On paper it sounds great, and the results may end up better than you think, but yes, my friend, the lost information cannot be recovered by any means, although it can be more than adequately predicted using different techniques.

    And good prediction is all we have in the end.

  • All the data you try to "restore" is already averaged data

    What does this even mean? Again, vague and wrong terms like "perceived dynamic range".

    Not only has this data been output from the sensor to the ADC in a limited way (and noise has nothing to do with it here), but the moment it enters the codec pipeline, data is INTRINSICALLY LOST, due to the nature of a codec.

    And this is a bunch of crap. What is "output from the sensor to the ADC in a limited way"? Who the hell said that no noise happens at the pixel level and in the ADC readout? What is a "codec pipeline" and the "nature of a codec"?

    If All-Intra is used, different results from IPB compression may occur.

    At the same bitrate, IPB always preserves more data, and in most cases much more. The whole point of adding P and B frames is to use data from different frames to increase quality.

  • What terms!!! I'm no engineer!!!

    Perceived dynamic range is all the dynamic range that is interpreted by the human eye but is not necessarily there. Like 4:2:0: why doesn't the low color and luma information of that compression bother us???

    Bunch of crap????? Data from the sensor to the ADC (which is inside the sensor) is limited, since it enters as analog input, with noise, signal loss, degradation, etc. (I never said noise is not present at the pixel level). This information is converted to a digital interpretation, 01000101001, when the original is a wave, so... data is lost, damnit!! Why do I even bother to explain this shit if you know the Bayer pattern isn't even a pixel-per-pixel pattern, so even more interpretation is happening!

    The codec pipeline is inside the LSI, where the supposed RAW data enters. As you may know, any processor with an integrated codec has what is called an encoding/decoding pipeline. The nature of EVERY codec is to COMPRESS, so data is lost through different compression methods.

  • @endotoxic compression doesn't mean loss of information. There is destructive and non-destructive compression. You don't lose data with ZIP. And if you have a ZIP file you can get the original uncompressed data back without losing a single bit (otherwise zipped software wouldn't work properly).
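
    A tiny Python sketch of that round trip (illustrative only):

      import zlib

      original = b"the same bytes over and over " * 100

      packed = zlib.compress(original)   # same DEFLATE family of compression as ZIP
      restored = zlib.decompress(packed)

      print(len(original), len(packed))  # the compressed copy is much smaller
      print(restored == original)        # True -- not a single bit was lost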

  • @flablo

    Any compression gets rid of data; non-destructive compression only replaces things that are already redundant.

    Quick explanation: I have an image composed of 10 reds, 1 green, 3 blues, and 8 oranges. I keep the green and the blues. For the oranges and the reds I keep only the position information; the color information I throw away, and I just record that all those red positions had red. That is lossless.

    This information is converted to a digital interpretation, 01000101001, when the original is a wave, so... data is lost, damnit!! Why do I even bother to explain this shit if you know the Bayer pattern isn't even a pixel-per-pixel pattern, so even more interpretation is happening!

    The original is a wave? What? The original is an analog value (the charge of the pixel well), but it is not continuous (as it is proportional to the number of electrons :-))

    if you know the Bayer pattern isn't even a pixel-per-pixel pattern, so even more interpretation is happening!

    What? Maybe you wanted to say that each sensor element can only capture a specific range of wavelengths :-) And that to get full RGB from the sensor, interpolation is used? Yep, and in fact this interpolation produces correct 4:2:2.

    The nature of EVERY codec is to COMPRESS, so data is lost through different compression methods.

    No. Compression can be lossy, can be visually lossless, can be truly lossless, and can even increase file size :-). Even H.264 can be lossless, yes.

    Any compression gets rid of data; non-destructive compression only replaces things that are already redundant.

    Incorrect.

    To be short, all your points are partly or fully incorrect. Please get a good book or article on sensors, raw-to-RGB conversion, and H.264 compression, as it is almost pointless to explain anything at this level now.

  • You really want a good flame in this topic, Vitaliy.

    All analog signal is measured as a wave. It can be square, but it is a wave. The value of the pixel well is not continuous, giving a dynamic measure in time that is interpreted as a wave by the ADC.

  • All analog signal is measured as a wave. It can be square, but it is a wave. The value of the pixel well is not continuous, giving a dynamic measure in time that is interpreted as a wave by the ADC.

    It is called a flame topic, not a bullshit topic, if you noticed :-)

    The thing you just said is very incorrect, as you do not even understand what you are talking about.

    And this

    All analog signal is measured as a wave.

    This must be added to the list of famous quotations :-)

    As I said, please READ SOMETHING and THINK before posting.

  • "4K420toHD444" is the new "micro43crop/dof/lens equivalent"

  • This was discussed a few years ago. Extract below from http://www.ambarella.com/docs/1080p60.pdf (it's no longer online and I can't find a copy):

    "This creates the opportunity to use 1080p 60 4:2:0 60p at 8bits-per-sample as a unifying format for both contribution and distribution. This is the case because down conversion of 1080p60 4:2:0 at 8 bits-per-sample can deliver 4:2:2 with increased dynamic range (almost 10 bits) at both 1080i and 720p resolutions"

  • Is anyone aware of any software that performs a 'supersampling' (to use Avid's words) function that will allow transcoding from 4K to 1080p with all the above benefits? I don't think Avid's proprietary 'FrameFlex' will do the job, somehow.

  • This is not about compression. I mean, it is, but not in the sense that the software doing the up-sampling, or whatever the proper term is, must create missing information out of nothing; it just rearranges the existing color information into a new pattern with more color resolution. So the process uses the information that would be thrown away in a simple down-scale (I am oversimplifying, I know), and adds +2 and +4 to Cb and Cr to make 4:4:4 (see the rough sketch below). Luma is 4 anyway.

    Am I even close to the logic behind this process?
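
    Roughly what I have in mind, as a numpy sketch (the plane names, shapes and random data are just my assumption of how the 4:2:0 planes are laid out):

      import numpy as np

      # Hypothetical 4K 4:2:0 frame: full-resolution luma, quarter-resolution chroma.
      Y  = np.random.randint(0, 256, (2160, 3840))   # one Y sample per pixel
      Cb = np.random.randint(0, 256, (1080, 1920))   # one Cb sample per 2x2 block
      Cr = np.random.randint(0, 256, (1080, 1920))   # one Cr sample per 2x2 block

      # Downscale luma 2x by averaging each 2x2 block (this also keeps 2 extra bits of precision).
      Y_hd = Y.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

      # The chroma planes are already 1920x1080, so at HD size every pixel
      # now has its own Cb and Cr sample -- i.e. 4:4:4 at the new resolution.
      print(Y_hd.shape, Cb.shape, Cr.shape)   # (1080, 1920) (1080, 1920) (1080, 1920)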

  • @mrbill if you reduce an image in Photoshop you are essentially doing this, as long as you don't use the "nearest neighbor" interpolation method. But even in Photoshop there are different interpolation methods, so I'm wondering whether the algorithm to perform such a reduction is not unique, and whether some are better than others (?)

  • Scaling a 4K image to 1080 will improve colour rendition to a close approximation of 4:4:4 colour sampling, but 8-bit is 8-bit and stays 8-bit. Bit depth is a measure of how many luminance levels there are between the black and white limits. Colour sampling is the resolution in the spatial plane, i.e. the X/Y axes. So we will see an improvement in colour resolution but not in bit depth. Banding will still be an issue.

  • Let's see if I understand this correctly... for argument's sake, let's use 0-250 as the 8-bit pixel data range, and 0-1000 for 10-bit.

    In a 2K 8-bit file, if a color is captured at an actual value of 999 in 10-bit, it has to be converted to 250 in the output file, so some info is lost, as it cannot be stored as 249.75.

    But if there are 4 nearby pixels in 4K that are 250, 250, 250, 249 in 8-bit... these can be down-sampled by adding them together, which would give us 999 in a 10-bit pixel, a value otherwise not attainable in 8-bit (see the sketch below).

    Is this reasoning correct?
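
    In code, the same toy arithmetic (made-up pixel values, using the simplified ranges above):

      pixels_8bit = [250, 250, 250, 249]   # four neighbouring 4K pixels

      value_10bit = sum(pixels_8bit)
      print(value_10bit)       # 999 -- representable in "10-bit", but not in 8-bit
      print(value_10bit / 4)   # 249.75 -- the in-between level that 8-bit had to round away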

  • For 4K 8-bit to be downsampled to something equivalent to 10-bit, it will need extensive dithering. This 10-bit will be a mathematical approximation, a bit like how Bayer sampling works. It will, in other words, be inferior in color and luma accuracy to "real 10-bit" from acquisition.

  • For 4K to be downsampled to something equivalent to 10-bit, it will need extensive dithering.

    What kind of dithering? Did you read the link above to understand where the 10 bits can come from?