OpenEXR and other frame-based lossless file formats
  • Hello. I need a high-quality file format that is fast and compatible with Blackmagic Design software. I found OpenEXR. http://www.openexr.com/

    Of course it is slow and does not allow real-time playback. But it keeps all the grain detail and extreme lighting, because 10-bit floating point is used.

    Also, my workflow involves videos of 100,000 frames and more (many hours of Full HD), so I do not know how well that works with Windows file systems. I use RAID0 NTFS disks. Are there any problems, or pluses/minuses, in using this file format? Currently I use H.264/ProRes.

  • For frame-based work, that's the file format for linear data (DPX for log). Maybe there is some odd 10-bit variant out there, but it's generally 32-bit float or 16-bit float (a minimal writing sketch follows at the end of this post).

    The big question is why. There are a few reasons for using frame-based files rather than a stream, either transcoded or the native camera format, and none of them are editing or grading. If you shoot H.264, then working in OpenEXR is beyond overkill to the point of absurdity. I can think of few reasons why it would ever be necessary to transcode hours of footage to any frame-based format, much less one so industrial-strength.

    edit: frame-based files are generally used in effects or when dealing with scanned film.

    edit2: oh, you're interested in this for Windmotion, eh?
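
    For what it's worth, here is a minimal sketch of writing one half-float frame with the Python OpenEXR bindings; the file name and frame data are placeholders, and PIZ is just one reasonable lossless choice (it was designed for grainy film scans):

      # Write a single video frame as a half-float OpenEXR file.
      import numpy as np
      import OpenEXR
      import Imath

      height, width = 1080, 1920
      frame = np.random.rand(height, width, 3).astype(np.float32)  # stand-in for a decoded frame

      header = OpenEXR.Header(width, height)
      half = Imath.Channel(Imath.PixelType(Imath.PixelType.HALF))
      header['channels'] = {'R': half, 'G': half, 'B': half}
      # PIZ: lossless wavelet compression, tends to do well on grain.
      header['compression'] = Imath.Compression(Imath.Compression.PIZ_COMPRESSION)

      out = OpenEXR.OutputFile('frame_000001.exr', header)
      out.writePixels({
          'R': frame[:, :, 0].astype(np.float16).tobytes(),
          'G': frame[:, :, 1].astype(np.float16).tobytes(),
          'B': frame[:, :, 2].astype(np.float16).tobytes(),
      })
      out.close()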

  • @BurnetRhoades yes, I have an idea to integrate it into Windmotion and to keep the results (4:2:2 or better, with 10-bit or better), including the film grain. To keep film grain in H.264, a minimum of 150 Mbps is required for an acceptable level of artifacts.

    The open source ProRes encoder I currently use (FFmbc) is very slow for this and also produces compression artifacts, so it is not a good place to back up and keep old projects, only a temporary container.

    The better idea for me would be 10-bit H.264, but it is not compatible with editors, so a conversion to ProRes is needed anyway. That is why I am looking for another file format.

    PS: I mean 10-bit quality (the mantissa really is 10 bits for 16-bit float OpenEXR).
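
    That mantissa claim is easy to check with NumPy (illustrative only; the arrays are made up):

      # 16-bit half float: 1 sign bit, 5 exponent bits, 10 stored mantissa bits.
      import numpy as np

      info = np.finfo(np.float16)
      print(info.nmant, info.iexp)  # -> 10 5

      # All 10-bit code values (0..1023) survive the round trip exactly,
      # so 10-bit video fits losslessly into half floats.
      codes = np.arange(1024, dtype=np.float32)
      print(np.all(codes.astype(np.float16) == codes))  # -> True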

  • Hmmm, maybe it's from using straight 422 and not having 422 HQ. ProRes 422 HQ is 10-bit 4:2:2 and good enough for dealing with transcoded H.264 with no further losses. After heavy processing one could make the argument that ProRes 4444 would be in order, but I would only entertain that if you're also combining imagery from uncompressed sources (CGI, painting, etc.) or you were doing something like Windmotion.

    If you're seeing further compression artifacts, that's either an implementation error or a settings error, because only the Proxy and LT settings should show a really obvious difference from the source. Standard 422 is about 145 Mbit/s and 422 HQ is 220 Mbit/s. Standard 4444 is about 330 Mbit/s, and the newer 4444 XQ offers 12-bit at about 500 Mbit/s (a command sketch follows at the end of this post).

    I've done my fair share of transcoding from 100-150 Mbit/s All-I H.264 shot on the GH2 to 422 HQ with 5DtoRGB, with fairly noticeable improvements in color rendition (banding, bleed on edges, etc.), no additional noise, and no degradation of luminance. Perhaps you're just at the point where higher-bandwidth 4:2:0 source meets standard ProRes 422. If the HQ option isn't available in FFmbc, I'd try 4444 first. It's liable to still be smaller than the same footage as OpenEXR.

    And if you can find it, I highly recommend the Miraizon codec pack (ProRes + DNxHD). They went out of business some months ago so you can't actually buy their codecs anymore, but I'm sure they're out there, in the grey zone. It would let you write all flavors of ProRes, with all options, from any application that supports QuickTime. And though I despise the Avid codecs as trash, sometimes you have to deal with DNxHD, and when you do, the Miraizon codecs are better than the ones you can get from Avid directly.
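
    As a sketch of those profiles in practice, stock ffmpeg's prores_ks encoder exposes all of them; flag names are per ffmpeg's documentation, and the paths and clip names here are invented:

      # Transcode to ProRes with ffmpeg's prores_ks encoder.
      # -profile:v: 0=Proxy, 1=LT, 2=422, 3=422 HQ, 4=4444
      import subprocess

      def to_prores(src, dst, profile=3):
          pix_fmt = 'yuv444p10le' if profile >= 4 else 'yuv422p10le'
          subprocess.run([
              'ffmpeg', '-i', src,
              '-c:v', 'prores_ks', '-profile:v', str(profile),
              '-pix_fmt', pix_fmt,
              dst,
          ], check=True)

      to_prores('gh2_clip.mp4', 'gh2_clip_hq.mov', profile=3)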

  • @BurnetRhoades I currently use FFmbc with -vcodec prores -profile hq -qscale XXXX and either -pix_fmt yuv444p10le or -pix_fmt yuv422p10le. FFmpeg is useless for me because it does not store the color information tags, so we get a color shift problem.

    About the options: the problem is not in the low-quality options but in selecting a qscale that is good for both the project and the backup together (see the sketch at the end of this post). ProRes has visible MPEG artifacts on a grainy source at adequate compression, and otherwise a very large file size, so it is useless for backup; a recompression to H.264 is required to save space. This takes many hours of transcoding even on an Intel i7 at 4.0 GHz. Probably that is because of inefficient open source code, but I have no way to improve it.

    My idea is to switch fully to a near-lossless workflow, because grainy H.264 is too big anyway and comes close to frame-based video formats. I selected OpenEXR because it keeps the source at very good quality, and I can also use it to develop my own read/write plugins.

    Any non-open-source codecs are useless for me. I cannot integrate them into my software, for legal reasons and because of the programming difficulty.

    We can see the compression artifacts when stepping through frame by frame at 2x-5x zoom. Hacked GH2 footage has them too, so in combination with ProRes you get a very reasonable compression level. But after restoration the quality will be better, so those ProRes artifacts become very visible. This is the key reason I am searching for better quality at the smallest possible size with fast playback.
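
    A sketch of that qscale hunt; the tool name and flags follow the FFmbc command quoted above, while the clip path and the qscale values are placeholders:

      # Transcode a short test clip at several -qscale values and
      # compare the resulting file sizes.
      import os
      import subprocess

      SRC = 'test_clip.mp4'
      for q in (4, 6, 8, 11, 13):
          dst = 'test_q%02d.mov' % q
          subprocess.run([
              'ffmbc', '-i', SRC,
              '-vcodec', 'prores', '-profile', 'hq',
              '-qscale', str(q),
              '-pix_fmt', 'yuv422p10le',
              dst,
          ], check=True)
          print(dst, os.path.getsize(dst) // (1024 * 1024), 'MiB')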

  • ProRes does not impose any MPEG artifacts that aren't already in the source; it isn't an MPEG-based codec. Any artifacts introduced are being created by the implementation. If you were using commercial or even shareware implementations, you would not be having this trouble. ProRes is more than adequate for lossless backup and easier editing of H.264 sources. That's simply a fact.

    The problems you're experiencing come from usage and/or implementation; they're not inherent to ProRes itself.

  • @BurnetRhoades Any non-lossless codec produces artifacts. You may see no artifacts, but I can. I have a difference tool that gives me a pixel-exact difference to look at. It is good that you are defending an abstract file format, but I prefer to work with what is available in the real world and usable in my solutions. If you are completely satisfied with what you are doing, that is also fine. But I am very concerned with reaching a compromise at the high-end limit, up to the quality required for the most demanding tests, because that is the goal of my research into a new file format.
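
    Roughly what such a difference tool does, as a sketch (the frame arrays here are stand-ins; in practice they would come from decoding the lossless source and the ProRes version):

      # Pixel-exact comparison of two frames: peak error and PSNR.
      import numpy as np

      def frame_diff_stats(ref, test):
          diff = np.abs(ref.astype(np.float64) - test.astype(np.float64))
          mse = np.mean(diff ** 2)
          peak = diff.max()
          # Signal range normalized to 1.0.
          psnr = float('inf') if mse == 0 else 10 * np.log10(1.0 / mse)
          return peak, psnr

      ref = np.random.rand(1080, 1920, 3)                 # lossless source frame
      test = ref + np.random.normal(0, 1e-3, ref.shape)   # decoded lossy version
      peak, psnr = frame_diff_stats(ref, test)
      print('max abs diff %.5f, PSNR %.1f dB' % (peak, psnr))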

  • A "pixel-exact difference" would also false flag chroma up-sampling which would likely occur when moving from a lower quality, lower color sampled format to a higher quality, higher color sampling format. That difference does not equate to artifact.

    But go with god.

  • @BurnetRhoades the difference is between a lossless source and the lossy (ProRes) version made from it. Please do not point me to an inadequate comparison. You are wrong, and you are also wrong in your understanding of upsampling technologies. But anyway, this is off topic. Please return to the OpenEXR theme.

  • I use OpenEXR in my daily work. We kinda organized our whole workflow around it. It has the highest possible quality of any format that exists and is very easy to adjust to your needs. Though I don't know of anything that can play a full HD sequence in real time; we usually work with proxies.

  • "I don't know of anything that can play a full HD sequence in real time"

    @GeoffreyKenner Darby Johnston's free DJV can, on Linux, Windows and OSX.
    It doesn't work properly in HiDPI modes (a known issue), but it can still be used at "normal" resolutions.

  • @GeoffreyKenner Could you write something about your workflow? How you use proxies, how you organize backups, some tips, etc.?

  • @rean It really depends on the project, but OpenEXR is the main format we use when we have to deal with visual effects. We render everything twice: first, each shot becomes a sequence of OpenEXR images in a single folder, accompanied by a proxy (basically a smaller version of the file, 480p/720p; it could be H.264, something out of FFmpeg, AVI...). The proxy file only serves the editors and some VFX visualization. Once the editing is finished, we have a script that runs over all the proxy files and replaces them temporarily for the render, giving us a master render for delivery after the VFX compositing is done.

    Our pipeline has two main folders. One, called "Input", contains the original files from the camera, the OpenEXR transcodes for the artists to work on, and the proxies. The second, called "Output", is where we render the new OpenEXR sequences with the effects added once the VFX are done. When rendering from the NLE, the script first tries to call the file from the Output folder, and if it doesn't exist it searches the Input folder. There are further things happening, such as how it gets the sound, but I'm not a scripter or a sound designer, so I can't really help you with that.
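
    A minimal sketch of that folder-resolution rule (the folder names follow the post; the root path and shot naming are invented for the example):

      # Prefer the finished "Output" version of a shot, fall back to "Input".
      from pathlib import Path

      ROOT = Path('/projects/current')

      def resolve_shot(shot_name):
          for stage in ('Output', 'Input'):
              candidate = ROOT / stage / shot_name
              if candidate.is_dir():
                  return candidate
          raise FileNotFoundError('no sequence found for ' + shot_name)

      print(resolve_shot('sh010'))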