is it possible to physically hack 2 GH2 bodies together for 3D?
  • i'm just wondering if it's possible to get the 2 bodies close enough by removing the body cases, maybe moving stuff around? kinda brutal, i know; it's a thought experiment :) i remember reading some posts about it somewhere a while ago but can't find anything now. thanks for any info!

  • 24 Replies
  • @blackroom You could never put them close enough for closeup shots. Have you tried with "beam-splitters"? Beam-splitters have some polarisation problems, but could be useful...

    http://exposureroom.com/members/crunchy/4954e8d1037247e4abe563a4a7252d04/

  • @crunchy thanks for the link, nice footage! yeah, i've been looking into beam-splitters. just thought it'd be interesting to theoretically construct a 3D gh2 in one housing, moving the grip controls on the right-side camera to make space for the left camera, for human eye spacing, then figuring out a new chassis and case.

  • @blackroom Interesting idea, although difficult to realise. On the other hand, I cannot imagine taking 3D video/stills with a fixed stereo base. I change the base according to the distance to the closest (and also farthest) subject and the lens focal length.

  • I'm not 3D savvy at all, but can't you just rig up something with cages and flip one of them upside down to get them side by side? Similar to what Peter Jackson is doing for The Hobbit with RED cams.

  • @griplimited : There are some reasons why not. The most important one is that the GH2 (and GH1) have CMOS sensors. Readout does not happen at the same instant across the entire sensor surface; it goes line by line. This is the main reason we have "rolling shutter" problems. If one of the cameras is inverted, forget about good 3D. I learned that the hard way, since I did not know that EX3 cameras have CMOS sensors. :-)

    The second reason is that you would not get the lenses much closer anyway, since you must control the cameras simultaneously, so you need remote cables and a video cable as well to monitor mis-synch. Try to reverse one of them with the cables attached and you'll see that you did not get much closer than with a parallel setup. :-(

  • @crunchy, yes you'd still want them to be adjustable, that makes sense. i had another idea to mount the cameras as they are, side by side, and try tilt-shift lenses to get the lenses' optical axes closer together, but that would probably introduce even stranger artifacts, i don't really know.

  • Shoot side by side (do convergence in post) and rule out close quarters, or get a mirror rig. There is a sync unit out for the gh1/2.. Do a search here and you will probably find all the info you will need. Have a further look at the 3d category for 3d setups, on how to make life easier.

    Reconstructing them into one unit seems like a bad (overly complicated) idea. You can get a 3d camcorder (the Sonys are alright) if you really need to get closer and can't afford a mirror rig.

  • @RRRR thanks for some perspective :) yeah, side by side with post convergence sounds good. also the gopro hero2 3d rig looks decent, but maybe a 3d handycam is better quality. looking to do greenscreen work with actors, medium to long shots. for close-ups i could do one cam in 2d with a post 3d conversion, but that's maybe a bit cookie-cutterish. some research and testing to do.

  • @blackroom.. I'm pretty sure you can get stunning 3d with the gh2 (even side by side) if you spend time with it. Especially if you are planning to do medium / long shots: you will have a flatter image if you aren't shooting wide, and a side-by-side rig gives you an (unrealistically) exaggerated stereoscopic effect. You're unlikely to have any problems with the 3% rules either, since you are doing greenscreen. Could be interesting! If you can do it in a controlled environment it should definitely be possible to do something good with that.

    I have tried the sony NX3D1E and it's a pretty handy camcorder (emphasis on camcorder), but pretty expensive for what it is. Apparently the td20 is pretty much the same, although a lot cheaper, but it only shoots interlaced.

  • Good 3d only works with a beam splitter; a side-by-side rig only works with subjects 5/10 meters or more from the cameras... convergence in post is not really possible, because when you change convergence you change the perceived spacing between all the planes in the wrong way. Either you do it while shooting, or you MUST NOT do it in post, because that causes a lot of real headaches for the audience... this is one of the real reasons there are so many bad 3d movies on the market, and only two live-action 3d movies are good 3D movies... (Avatar, The Hole) (i'm talking about real 3d movies, not converted ones, like 90% of the 3d movies in theaters)

  • @RRRR interesting, thanks. will be experimenting soon when i get another GH2 and a controller. also considering setting up for some photo scanning with multiple stereo-pair cameras; there is an overlap in technology here. @madrenderman yeah, that's what i thought, post convergence can only go so far, if it works at all. much to test out, thanks for your input.

  • @madrenderman did you see Prometheus? i thought the 3d was good, and it should be, with the Epic 3d rig and all...

  • @blackroom @renderman – yes, obviously you need 5 meters+ for side-by-side, but I assumed you have that? Maybe it's possible to get a bit closer with greenscreen; have not tried that.

    Good idea.. actually you can start to experiment already if you have a steady bar so that you can slide the GH2 back and forth (and a steady subject): shoot stills, move it around and see what you get.

  • and only two live 3d movie are good 3D movie... (avatar, the hole) (i talk real 3d movie, not converted movie like 90% of 3d movie in the theater)

    This is just a plain lie.

  • @RRRR yeah, i guess a lot of testing can be done with one camera to start; i'll pick up a macro slider. gonna test a pair of GF2's as well, a smaller camera, but side by side maybe still not close enough. @vitaly could we start a list of best 3d movie examples? a bit subjective, but maybe we'd get a consensus.

  • @blackroom I advise you to try it out with greenscreen from the off. Maybe it will be ok to have a pretty big separation for the kind of work you are looking to do.. Problems with too much divergence (and eye-straining 3d) occur between foreground / background, but since you are going to control / replace the background it should be easier to get good results with the foreground.

  • @madrenderman, @RRRR The minimum distance depends on the lenses, on the background distance and partially on the final screen size and the viewing distance to the screen. A ballpark value is 30 times the distance between the cameras when using a 35mm-equivalent lens. When using a 28mm-equivalent lens (14mm lenses), the closest subject can be closer: 30*28/35 times the distance between the cameras. Of course, exaggerating with wide lenses results in quite artificial 3D with pronounced depth of the subjects.

    @RRRR By mentioning "Shoot side-by-side.. (do convergence in post)" you probably meant: shoot parallel and then change the horizontal alignment in post. I think some members did not properly understand your sentence...
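    The 30× ballpark rule above can be sketched in a few lines of Python (the function name and sample numbers are illustrative, not from any post):

    ```python
    # Rough minimum-subject-distance estimate for parallel side-by-side 3D,
    # per the ballpark rule above: 30x the stereo base with a 35mm-equivalent
    # lens, scaled proportionally for wider or longer lenses.

    def min_subject_distance(stereo_base_m, focal_equiv_mm):
        """Closest comfortable subject distance in meters, given the
        stereo base in meters and the 35mm-equivalent focal length."""
        return 30.0 * stereo_base_m * (focal_equiv_mm / 35.0)

    # Two GH2 bodies side by side: base is roughly the 124 mm body width.
    base = 0.124
    print(min_subject_distance(base, 35))  # ~3.72 m at 35mm equivalent
    print(min_subject_distance(base, 28))  # ~2.98 m with a 14mm lens (28mm equiv.)
    ```

    So with two GH2s literally touching, the rule still keeps the closest subject around 3 m away even with a fairly wide lens.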

  • @crunchy @RRRR thanks! the formula and green screen ideas are helpful. will post some tests soon.

    1. Samsung's new NX-system 3D? If Samsung's new camera system or body comes true, then a 60mm base is possible side by side. I wouldn't be surprised if they can be twinned.

    2. The Zeiss PC-Distagon 35/2.8 has 10mm of shift and a 70mm diameter; there are very expensive 20mm-shift solutions (Zoerk, 35mm Mamiya etc.). So, calculating with the GH2's 124mm width, we get a 104mm minimum base, but the subject is distorted differently....! and only if the cables are not on the sides.... if i were you i'd stick with the canon s95 and use a macro box for close-ups. video is also quite well synched if there's no high-speed action.
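    The shift-lens arithmetic in point 2 can be written out explicitly (the body-width and shift values are the ones assumed above):

    ```python
    # Two bodies mounted side by side, each shift lens moved toward the
    # other body: the optical axes get closer by twice the shift amount,
    # reducing the effective stereo base below the body width.

    def shifted_stereo_base(body_width_mm, lens_shift_mm):
        # Each axis moves inward by lens_shift_mm, so subtract it twice.
        return body_width_mm - 2 * lens_shift_mm

    print(shifted_stereo_base(124, 10))  # 104 mm, as in the post (10 mm shift)
    print(shifted_stereo_base(124, 20))  # 84 mm with a 20 mm shift solution
    ```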

  • 3d is a very simple and a very complex matter. Everyone thinks 3d is simple: we have two eyes, the distance between them is some value X, so we build a 3d rig with that distance and it's OK. But 3D is a stereoscopic interpretation by our brain: the brain discards what causes noise and moves the eyes many times every second to keep spatial information correct. When you shoot in 3D you cannot feed the viewer's brain that information, because you fix on screen a lot of variables that our brains normally change in real time. This is the reason most 3d movies are bad 3d movies... i have not seen the new Prometheus, because it has not yet arrived in my country; when i see it i will be able to talk about it.

    @crunchy no, 3d is not a set of math equations. Good 3d is good on a small monitor or on a big theater screen. i can say that because i work with some of the best stereographers in the world; their work does NOT NEED POST because it's good on small or big screens... most movies must be remastered and re-aligned for big or small screens, because the shooting was wrong... if 3d were a simple math equation, 3d would have been a reality many years ago, and we would have automatic 3d cameras and more... 3d is a fine art, and few people can do good 3d...

  • It seems to me that trying to do 3D with two GH2's would be problematic for other reasons as well:

    How do you get the two cameras to sync exactly? If there were even a slight difference in when each frame is read out and any motion in the subject, it would result in a very strange type of jello. I think it would make the viewer sick instead of just annoyed.

    Imagine if frame types (i.e. I, P, B) weren't synced perfectly as well. Wouldn't that produce a rather nauseating shimmer as well?

  • @madrenderman I think that I have some experience in stereoscopy. My 3D usually does not need much post either (except horizontal and slight vertical shift), since I know what I am doing. And I am doing it for smaller screens (up to about 5 m wide). However, you should be aware that 3D proportions (e.g. a subject's depth versus its height or width) depend on screen size, even when fixing the observer's field of view (in degrees). This is the main reason why some 3D movies made for "big screens" are so shallow on handheld devices, like 3D phones, etc. Namely, good 3D for theatre screens should not exceed about 1-2% of positive parallax.

    @cbrandin It can be synchronized for some limited time. Due to drift of the oscillators good synch cannot be achieved permanently. I am using custom-designed synchronizer. The first version (I am now using another version) can be found here:

    http://dsc.ijs.si/damir.vrancic/down_3d/Movies/

    (see 3DSLR_Master_01.jpg)

    I usually take clips which have up to about 0.5 ms disparity (I check synch online with the synchronizer while taking video). By the way, you can check the videos on the mentioned site, but a newer version of the ISU clip is here:

    http://dsc.ijs.si/isu2013/

    (Multimedia downloads)
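    The 1-2% positive-parallax guideline above translates into a simple pixel budget per frame (the frame width and percentages here are just example values):

    ```python
    # Maximum comfortable positive (background) parallax, expressed in
    # pixels, for a given frame width and a percent-of-screen-width limit.

    def max_positive_parallax_px(frame_width_px, percent):
        return frame_width_px * percent / 100.0

    # For 1080p footage (1920 px wide):
    print(max_positive_parallax_px(1920, 1))  # 19.2 px at the 1% limit
    print(max_positive_parallax_px(1920, 2))  # 38.4 px at the 2% limit
    ```

    In other words, on a 1920-wide frame intended for a theatre screen, background objects should stay within roughly 19-38 px of horizontal offset between the two eyes.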

  • If you are going to do good 3D you should get an iPhone app. A friend of mine designed IODcalc (about $50) and it includes an exclusive 'roundness' readout (my suggestion). 3D can look like a diorama with layered 2D objects, or look fully rendered (roundness), depending on your lens choice and inter-ocular distance. This friend was the stereographer on my short 'Highly Strung', which won me an award from SMPTE at their Dimensionale Film Festival... so he knows his stuff.

    The other pitfall to watch out for (other than sync and rolling shutter) is that no 2 lenses are the same, so if the 3D is to be viewed on a large screen you have to fix the errors. Sony has the MPE2000 to do this, but if you are using zooms you still need the 3D-synced ones from Canon or Fuji to get you in the ballpark.

    I built his 3D rig V2 and supervised a test for ATN7 footy last year (as my friend became very ill at the time), and even with the help of Sony experts and Fuji and Canon I was run ragged fixing stuff. I have learnt just about every pitfall there is... and they are aplenty. For a laugh, I made this video of the exercise