The CGI and VFX thread!
  • 106 Replies
  • @otcx Thanks, but I didn't use any rotoscoping; the soldier is simply keyed (Keylight). Or do you mean the tracking of the light on the helicopter? I did that manually; it only took me 4 keyframes or so ;)

  • @Pechente

    Nice work. What software did you use for rotoscoping? I prefer Mocha, but I'm always looking for something better.

  • Just noticed this thread. Great examples.

    @bwhitz: I just started working in 3D this year. I use Modo. It is great for modeling and has a really nice renderer built in. I also have C4D and hope to work with that later. Been watching Retro-SciFi. I really liked your ship. I want to learn how to build those. This is my 2nd render with Modo... mainly learning how to texture now.

    http://photos.jamesthorpe.com/Other/3D/21462416_6HtjM2#!i=1710496558&k=4VtdRvQ&lb=1&s=A

  • @Ralph_B "When the car was hovering around, did you keyframe that too, or does C4D have a random generator that you can apply to position?"

    Nope, it was all keyframed. But a random hover generator sounds like a useful plug-in.

    ...I just wish I was better at coding stuff like that.
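Since the thread never got that plug-in, here is a minimal sketch of the math one could use: a few sine waves at unrelated frequencies sum to smooth, non-repeating drift that reads as random hover. Plain Python only; the frame rate, amplitude, and frequency values are made-up parameters, and wiring the result into C4D keyframes is left out.

```python
import math

def hover_offset(t, freqs=(0.6, 1.3, 2.9), amps=(1.0, 0.35, 0.12),
                 phases=(0.0, 1.7, 4.2)):
    """Smooth pseudo-random 1D hover: layered incommensurate sines.

    t is time in seconds. Because the frequencies share no common
    factor, the sum never visibly repeats, yet stays perfectly smooth.
    """
    return sum(a * math.sin(2 * math.pi * f * t + p)
               for f, a, p in zip(freqs, amps, phases))

# Bake one value per frame (24 fps assumed); these could be set as
# Y-position keyframes on the car.
fps = 24
keys = [(frame, 5.0 * hover_offset(frame / fps)) for frame in range(120)]
```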

  • A short breakdown that I recently put together.

  • @bwhitz Yes, that's good animation technique. When the car was hovering around, did you keyframe that too, or does C4D have a random generator that you can apply to position?

  • @Ralph_B "Question: Just before it takes off at high speed, it does a little anticipation, where it moves back a bit. Did you do that by hand or did Cinema 4D do it automatically?"

    That was done by me intentionally... it just seemed more natural.

  • @bwhitz Nice work with the Skycar. Question: Just before it takes off at high speed, it does a little anticipation, where it moves back a bit. Did you do that by hand or did Cinema 4D do it automatically?

  • Well, you could always use trial and error then... Try a few test tracks and see what they look like. I doubt that a margin of 0.1-0.2mm is really going to affect the outcome that much.

  • @Pinger007 You've probably already encountered this, but just in case, I thought I'd share. One guy said "Panasonic hasn't, as far as I know, released the exact true width of the GH1/GH2 sensor, but from pixel calculations we think it's using an area of 18.9 x 10.6 or thereabouts."

    If you found official specifications from Panasonic, they could be really useful not only here but on other forums as well. So I'm wishing you the best of luck with all this.

  • @Pinger007 "Is the sensor 19mm or 18.89mm???"

    In reference to the use of three question marks instead of one, I said:

    "...appeared to be getting either confused or frustrated (though I may have misread that)."

    Then you responded: "Neither frustrated nor confused (please don't assume), just wanting specifics."

    Obviously I misread you, but no assumption was made. :) Best of luck getting confirmation about the specification.

    Where does Panasonic indicate 19mm? It might be helpful in sorting this out to see whether they specify things to the nearest tenth of a mm elsewhere in the document (and if they only go to the nearest mm, I would simply use the more specific value).

  • Neither frustrated nor confused (please don't assume), just wanting specifics. DPreview specifies an 18.89mm sensor, whereas Panasonic specifies 19mm. Just wondering which one is correct.

  • Here is some stuff from last year where we used GH2 footage for 3D tracking. It worked well:

  • @bwhitz

    You're right, and I wasn't trying to be anal. The only reason I got specific was because pinger007 was trying to calculate based on that rounding and appeared to be getting either confused or frustrated (though I may have misread that). I was just trying to say that the only number suggested so far was the one on the diagram you (helpfully) linked to. :)

    I've edited my post to better reflect that.

  • "So far the only person in this thread that has said 19mm also linked to a diagram that didn't coincide with the statement."

    Ok... 18.8. No need to be so literal. I was rounding up.

  • @pinger007 If the diagram at http://www.bmupix.com/journal/2010/9/21/gh2-my-next-camera.html is accurate (which was the GH2-specific link mentioned in the one @bwhitz posted), then it looks like it should be closer to 7.25mm x 4.09mm, not 7.31mm x 4.12mm.

  • @pinger007 So far only one person in this thread has said 19mm [EDIT: they clarified they meant rounding to the nearest mm]. But you can quickly verify for yourself whether the video mode has a different FOV or not.

    Put the camera on a tripod, shoot a frame at 24H, and then (without moving the camera) shoot the same frame as a 1920X1080 still (that would be the "S" setting in 16X9 mode). Then you can compare the FOV of the two shots.

  • @thepalalias

    I understand multi-aspect sensors. The confusion was in the statement regarding video mode being wider than photo mode.

    So assuming 19mm x 10.7mm (21.81mm diagonal)... The question is, what size is the part of the sensor used in ETC mode? I calculated it to be about 7.31mm x 4.12mm (8.39mm diagonal).

    Is the sensor 19mm or 18.89mm???
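To pin down the arithmetic behind these numbers: the diagonal follows directly from the width, and the ETC window is the sensor width scaled by 1920 over the active pixel width. A quick sketch below; the 4976px figure is an assumption (the GH2's 16X9 still-image width, standing in for the active video readout width, which is exactly the number Panasonic doesn't publish).

```python
import math

# Candidate 16:9 sensor widths from the thread; height follows from 16:9.
for width_mm in (19.0, 18.89):
    height_mm = width_mm * 9 / 16
    diag_mm = math.hypot(width_mm, height_mm)

    # ETC mode reads a native 1920x1080 pixel window, so the window is
    # width * 1920 / (active pixel width). 4976px is an ASSUMPTION here
    # (the GH2's 16:9 still width); the true readout width isn't published.
    active_px = 4976
    etc_w = width_mm * 1920 / active_px
    etc_h = etc_w * 9 / 16
    print(f"{width_mm}mm wide -> {height_mm:.1f}mm tall, {diag_mm:.2f}mm diag, "
          f"ETC window ~{etc_w:.2f} x {etc_h:.2f}mm")
```

With 19mm the diagonal comes out to 21.81mm, matching the figure above, and the ETC window lands around 7.3mm wide either way; the 19 vs 18.89 question only moves the answer by a few hundredths of a millimetre.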

  • @bwhitz I looked at that diagram. It doesn't say anything about video mode being wider than stills - just any mode that uses a 16X9 aspect ratio, whether video or stills. :)

  • @pinger007 Panasonic has been using "smart" multi-aspect sensors in many of their cameras for years. Even my old TZ5 was similar. The 16X9 mode uses the widest section of the sensor, and the output dimensions (in terms of pixels) reflect that, while the 4X3 mode has the highest overall resolution.

    Note that the diagram refers to the GH1, not the GH2, but the concept is exactly the same.

    Yes, the 16X9 mode (whether video OR stills) goes wider than it does in 3X2 or 4X3. So no, it does NOT grow for video. If you use the 16X9 mode in stills, it uses the same section of the sensor.
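To make the multi-aspect idea concrete: each aspect ratio is inscribed in (roughly) the same image circle, so 16X9 trades height for width. A toy sketch, assuming an idealized shared ~21.8mm diagonal; real sensors only approximate this.

```python
import math

IMAGE_CIRCLE_MM = 21.8  # assumed shared diagonal; idealized

for name, a in (("4X3", 4 / 3), ("3X2", 3 / 2), ("16X9", 16 / 9)):
    # A w:h rectangle inscribed in a circle of diameter d:
    # h = d / sqrt(a^2 + 1), w = a * h
    h = IMAGE_CIRCLE_MM / math.sqrt(a * a + 1)
    print(f"{name}: {a * h:.1f} x {h:.1f} mm")
```

The 16X9 row comes out to about 19.0 x 10.7mm, which is where the figure discussed above comes from, while 4X3 is only about 17.4mm wide.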

  • I concur regarding using 2D motion blur like ReelSmart. We wrote our own at VisionArt, as 2D motion blur was much faster than rendering it in 3D. We did a bit of a hybrid approach, where we used 3D data from Houdini to help calculate the 2D blur for increased accuracy, particularly if objects were moving toward or away from the camera. ReelSmart does a nice job as an off-the-shelf solution.
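ReelSmart's internals are proprietary, but the basic "vector blur" idea behind this kind of hybrid is simple to sketch: average samples gathered back along each pixel's 2D motion vector, where the vectors come from the 3D scene. A minimal numpy/scipy version, single channel for brevity; a production tool would handle occlusion, edges, and curved motion far more carefully.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def vector_blur(img, velocity, samples=16):
    """Gather-style 2D motion blur from a per-pixel motion field.

    img:      (H, W) float image (one channel for brevity).
    velocity: (H, W, 2) motion in pixels over the shutter interval,
              e.g. baked out of Houdini along with the render.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    acc = np.zeros_like(img)
    for i in range(samples):
        t = i / max(samples - 1, 1)  # position across the shutter, 0..1
        # Sample the image upstream along each pixel's motion vector.
        coords = [ys - t * velocity[..., 1], xs - t * velocity[..., 0]]
        acc += map_coordinates(img, coords, order=1, mode='nearest')
    return acc / samples
```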

  • @Vitaliy_Kiselev

    Can you confirm the above statement? I've never read it before (I didn't know the sensor could "grow" larger for video mode). Are the forums at dpreview usually reliable? Thanks.

    Thanks for the link, bwhitz. Hope to learn more on this subject.

  • @pinger007

    Ok, yeah. The setup must be a little different then. But as far as the math goes, I think you've got it... except the imaging area of the GH2 in video mode is actually 19mm x 10.7mm. The sensor is actually wider in video mode than in stills... check out this diagram:

    http://forums.dpreview.com/forums/read.asp?forum=1041&message=36396593

    ...it's an interesting sensor.

  • Here are a couple of simple things that might be helpful to beginners when it comes to getting a nice composite. They are really basic today, but back when I cut my teeth they got ignored frequently on several high-profile projects.

    • Look carefully at the saturation and luminance levels of the "live" elements when you create the computer-generated ones. Even if you can't use advanced techniques like image-based lighting, global illumination, or mirrorball HDRs, you can do a lot by just eyeballing things. A good rule of thumb: when in doubt, make sure your generated elements do not exceed the saturation of your live elements. Over-saturated elements draw more undesirable attention to the gap between them than under-saturated ones do (see the sketch after this list).

    • Always match framerates between different compositing elements. If you look at Saturday morning cartoons from the early 90s, you can immediately tell when they started doing "effects in post" because the framerate was higher than for the normal animation.
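For the first tip, here is a rough sketch of that eyeball rule in code: measure the live plate's saturation ceiling and uniformly desaturate the CG element down to it. It borrows matplotlib's rgb_to_hsv for brevity; the function name and the 95th-percentile choice are mine, not an established recipe.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def clamp_saturation(cg, live_plate, percentile=95):
    """Keep a CG element's saturation at or below the live plate's.

    Both inputs are float RGB arrays in [0, 1]. Using the 95th
    percentile (rather than the max) ignores a few stray hot pixels.
    """
    live_sat = rgb_to_hsv(live_plate)[..., 1]
    ceiling = np.percentile(live_sat, percentile)

    cg_hsv = rgb_to_hsv(cg)
    peak = cg_hsv[..., 1].max()
    if peak > ceiling and peak > 0:
        cg_hsv[..., 1] *= ceiling / peak  # uniform desaturation
    return hsv_to_rgb(cg_hsv)
```

Usage would be something like `element = clamp_saturation(cg_render, plate)` just before blending the element over the plate.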

  • I use Syntheyes and PFTrack to do my 3D tracking, and then bounce it back to Maya and integrate that with either Nuke or AE.