It is very hard to tell what this is in reality. My understanding is that it is similar to the idea I recently published on the blog. They get 720p60 and change the ISO for each frame, or it is just processing. Editing this stuff might not be easy.
They managed to override the camera's native frame rates and gained control of the shutter speed, even allowing a 360-degree shutter; this is a manipulation of that.
They say it will work in 24p at 1080. Basically it's recording 48 frames for 24p with every other frame underexposed, I guess. It's supposed to work in every recording mode, even 60p. I guess it's worth waiting to see how well it actually works in practice.
Or it might be 24 frames with 12 underexposed. The streams are then separated in post and the missing frames interpolated. Either way, they say they have a script to automate the process, so you end up with your normal footage and your underexposed clips separated.
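If that's how it's recorded, the separation step is basically just de-interleaving frames. A rough Python/OpenCV sketch of the idea (the frame order, file names, and codec here are my guesses, not what the actual script does):

```python
# Sketch only: split an alternating-exposure clip into "normal" and
# "underexposed" streams. Assumes even frames carry the normal exposure;
# the real hack's script and frame order may differ.
import cv2

def split_streams(src_path, normal_path, under_path):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) / 2.0          # each output keeps half the frames
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    normal = cv2.VideoWriter(normal_path, fourcc, fps, (w, h))
    under = cv2.VideoWriter(under_path, fourcc, fps, (w, h))
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        (normal if i % 2 == 0 else under).write(frame)
        i += 1
    cap.release(); normal.release(); under.release()

split_streams("hdr_clip.mov", "normal.avi", "underexposed.avi")
```

Each output stream ends up at half the source frame rate, which is why an interpolation step in post would still be needed.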
Way cool. I think the only issue would be to avoid recording fast motion or going handheld with a high shutter speed, or you will see weird motion artifacts. You can see it a little with the lady at the terminal in the beginning if you look closely at the pen moving across the screen. But whatever......
HDR video on a DSLR is huge news! I could totally use that.
This seems like a very technologically doable feature. No special processor chips, etc. Very helpful because it both increases dynamic range (a problem with the GH2 and all HDSLRs) and may very well be a workaround for the 8-bit video. Combining two video streams, even with just a 1-stop exposure difference, could be rendered into a higher-bit-depth working file for grading. It could eliminate banding.
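Just to illustrate the bit-depth point: two 8-bit frames a stop apart can be averaged into a 16-bit working image. A naive numpy sketch (the scaling assumes linear light, which 8-bit video isn't, so take it only as the idea, not as anyone's actual blend):

```python
# Sketch only: merge a normal and an underexposed 8-bit frame (assumed 1 stop
# apart) into a 16-bit working image. Naive linear average; a real pipeline
# would linearize the gamma curve first.
import numpy as np

def merge_to_16bit(normal_8bit, under_8bit, stops=1.0):
    normal = normal_8bit.astype(np.float32)
    under = under_8bit.astype(np.float32) * (2.0 ** stops)   # bring exposures to a common scale
    merged = (normal + under) / 2.0                          # average preserves intermediate values
    # Scale into the 16-bit range; the extra precision is what helps with banding in grading.
    return np.clip(merged / 255.0 * 65535.0, 0, 65535).astype(np.uint16)
```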
People once shouted about the death of HDSLRs for video as the AF100, Scarlet, and various Sony camcorders came out. But with these new feature discoveries, our little cameras have extended usefulness.
I bet an even smaller difference in exposure could be useful for noise reduction in the image, even if it is not possible on a GH2. I hope our favorite Panasonic engineers start to feel competitive with Canon's feature in time for the GH3. Very cool!
I watched a video, I think posted here at P-V, of a Red HDRx post-processing example; it required one of the two layers to have motion blur applied in post (Twixtor?) to make the luma blend sit better. It is worth the effort for select shots, making almost impossible shots possible. An interesting space is evolving.
This is HDRx for a bargain price. This is huge, if it's editable! No DR problems anymore? DSLRs' final problem solved, using Vitaliy's idea! I reeeally hope this is doable on the GH2, in theory. I won't use a 550D. :)
Problem is: it can't work on fast motion. There are never two of the same frame.
This is truly amazing. This makes the 550D a keeper, really. Amazing!
@fatpig "Problem is: cant work on fast motion. there are never 2 of the same frame." This is true, but if you first take a short frame, then directly the long one, and you add virtual motion blur to the first they should blend pretty nice. Guess some intelligent blending algo can make up for this on long term.
Switching to a low ISO every other frame? Genius! Is this something that would be possible to hack on a GH2?
Even if you only had 24 fps total (12 x 2), it would be possible to generate in-between frames in post and have two 24p clips with different exposures. Totally worth it even if you lose some temporal detail, in my opinion.
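As a rough idea of what that interpolation could look like in Python (naive neighbour blending, not optical-flow interpolation, so fast motion will ghost; the frame counts are my assumption):

```python
# Sketch only: rebuild a 24p stream from 12 frames/sec by averaging temporal
# neighbours to fill the missing slots.
import numpy as np

def fill_missing_frames(frames_12fps):
    out = []
    for i, frame in enumerate(frames_12fps):
        out.append(frame)
        nxt = frames_12fps[i + 1] if i + 1 < len(frames_12fps) else frame
        # In-between frame: simple average of the two real neighbours.
        blend = (frame.astype(np.float32) + nxt.astype(np.float32)) / 2.0
        out.append(blend.astype(frame.dtype))
    return out  # twice as many frames -> back to 24p
```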
It is coming to the 5D Mark II as well. I have just tried it out (private beta). It isn't done in-camera; you have to blend frames in post. It does indeed do the same low-ISO / high-ISO alternating-frame recording on the 5D Mark II... very cool feature.
Won't shoot without M/L now. Here is a sneak peek at the unified ML running on my 5D Mark II...
Exactly how the blend is implemented should, I think, make a massive difference when it comes to the time differences and the end result. It would make sense to use some of the color information, but not all; even with motion estimation, which is arduous in itself, there will be a gap in the movement information. So basically you are shooting blind for a particular scene.
I'm guessing that a "base" frame of sufficient low level information and an "extended" frame of some 50 levels of highlights would yield pretty reliable results, coupled with curves that mix the frames well. It could be done vice versa as well, depending on which end of the spectrum one prefers for moving objects.
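Something like a luminance-driven weight could act as that mixing curve. A numpy sketch, with the threshold and softness values made up purely for illustration:

```python
# Sketch only: pull highlight detail from the underexposed frame into the base
# frame using a smooth luminance weight (a hand-rolled "curve"). Values are
# illustrative assumptions, not taken from any tool.
import numpy as np

def highlight_merge(base, under, threshold=200.0, softness=40.0):
    luma = base.astype(np.float32).mean(axis=2, keepdims=True)   # rough luminance
    # Weight ramps from 0 to 1 as the base frame approaches clipping.
    w = np.clip((luma - threshold) / softness, 0.0, 1.0)
    merged = (1.0 - w) * base.astype(np.float32) + w * under.astype(np.float32)
    return merged.astype(np.uint8)
```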
The Nikon D5100 has an in-camera 2-frame HDR feature that works much like this, for JPEG still photos only. I tried it out, and it did make a noticeable difference in highlight and shadow detail. However, it really needs the camera mounted on a tripod with the mirror up to avoid image blur; I couldn't get usable results out of it handheld. That's at 16 Mpix resolution, though; at 1080p it might be fine.