How to export 3D data for compositing in After Effects using CV-VRCam?
Posted: 15 April 2016 04:17 AM
Total Posts:  1
Joined  2016-03-28

Hello, Rick Barrett!

This tutorial has worked very well for me. But what is the best workflow for using CV-VRCam to render stereo footage from C4D with 3D data (camera, lights, and objects with External Compositing tags) and import it into After Effects for post-production?

I’ve tried to do it using Skybox, but the elements I need to track look completely mismatched. Could you point me in the right direction?

Thanks a lot.

Posted: 01 May 2016 03:04 AM   [ # 1 ]
Total Posts:  6
Joined  2013-03-01

Yeah, I’ll +1 on this.

Anyone?

Posted: 26 September 2016 08:44 PM   [ # 2 ]
Total Posts:  1
Joined  2016-09-25
Germano Mariniello - 15 April 2016 04:17 AM

Hello, Rick Barrett!

This tutorial has worked very well for me. But what is the best workflow for using CV-VRCam to render stereo footage from C4D with 3D data (camera, lights, and objects with External Compositing tags) and import it into After Effects for post-production?

I’ve tried to do it using Skybox, but the elements I need to track look completely mismatched. Could you point me in the right direction?

Thanks a lot.

Hey Germano, did you figure it out?

Posted: 27 September 2016 07:41 AM   [ # 3 ]
Total Posts:  8532
Joined  2011-03-03

Hi everyone,

This thread has been moved here, so I will take a look at it. I won’t pretend to know how to do this the way it was asked (Ae 2015.3 has no on-board VR tools), or whether it is possible at all. Perhaps clarifying some parts of the pipeline in a discussion will help us find something.

I’m not aware that Skybox supports stereo footage (import-wise). What am I missing? (My trial version allowed three sessions, which I used up a while ago; I no longer have that plug-in, hence my question.)

My experiments with 3D stereo (not VR, to keep it simple for now) have shown that the default “Stereo Scene Depth” in Ae 2015.3 is too large when the camera is set up with a scale of “one pixel equals one cm”. The value that I have found to work is 0.1675%. (This ignores that the camera is set up rather oddly in Ae, with one inch equaling 72 px, even though we all know that screen size can vary independently of resolution. That assumption is a common mistake, carried along since pretty much forever; my guess is that it originally started in print, where it may hold, but it was never true in broadcast. In Ae’s “wrong” default terms, 100 cm equals 2834.65 pixels. The camera scale matters here mainly for depth of field and the focus/target point, etc. My take: set up the camera back so that the 72 px/inch convention is no longer followed.)
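As a quick sanity check on those figures, the 72 px/inch conversion can be computed directly. This is just my own Python sketch of the arithmetic; the constants are Ae’s default print-era assumptions, not anything CV-VRCam exposes:

```python
# After Effects' default camera assumes 72 pixels per inch (a print-era
# convention); converting metric scene units to that scale:
CM_PER_INCH = 2.54
AE_PX_PER_INCH = 72.0

def cm_to_ae_px(cm: float) -> float:
    """Convert centimeters to Ae pixels at the default 72 px/inch."""
    return cm / CM_PER_INCH * AE_PX_PER_INCH

print(round(cm_to_ae_px(100), 2))  # 100 cm -> 2834.65 px, as stated above
```

The same helper shows why a metric C4D scene and an untouched Ae camera disagree by a factor of roughly 28.35 px per cm.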

My set-up was based on UHD, with the left and right cameras 6.5 cm apart and moved 6.5 cm forward (neck-wise). The pixel values of the camera positions in Ae then speak a clear language; no doubts here. I certainly see a huge source of trouble if this is not acknowledged: in short, if C4D was set up in a metric system, the stereo camera pair will differ from C4D unless adjusted. I think 1000 cm and 1000 pixels are an easier match than one inch and 72 pixels, but that may be up to each artist. Within a team and within a pipeline, this must be limited to one standard only.
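To illustrate the mismatch, here is a sketch (my own illustration, not a CV-VRCam feature) of where a 6.5 cm interaxial pair lands in Ae pixel coordinates under the two scale conventions discussed above:

```python
CM_PER_INCH = 2.54
AE_PX_PER_INCH = 72.0
INTERAXIAL_CM = 6.5  # distance between the left and right cameras

def eye_offsets_px(px_per_cm: float) -> tuple:
    """Left/right camera X offsets from the rig center, in pixels."""
    half = INTERAXIAL_CM / 2.0 * px_per_cm
    return (-half, half)

# "One pixel equals one cm" convention:
print(eye_offsets_px(1.0))                           # (-3.25, 3.25)
# Ae's default 72 px/inch convention:
print(eye_offsets_px(AE_PX_PER_INCH / CM_PER_INCH))  # roughly (-92.13, 92.13)
```

The two results differ by a factor of about 28, which is exactly the kind of mismatch that breaks tracking if the pipeline does not settle on one standard.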

My current understanding is that the “Stereo Render” of the VR-Cam has the “poles” melted together. This way one can look up and rotate the head without problems, but at the cost of losing the depth impression along the vertical axis. The effect is gradient-like: it translates, to me, as going from the normal horizontal eye distance of a viewer at the horizon down to an eye distance of zero toward the poles. Matching this against objects in an XYZ space might create the first problem.
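One way to picture that gradient (purely an illustrative model on my side, not CV-VRCam’s documented formula) is an effective eye separation that tapers linearly from its full value at the horizon to zero at the poles:

```python
def effective_interaxial_cm(elevation_deg: float, base_cm: float = 6.5) -> float:
    """Effective eye separation at a given elevation angle
    (0 = horizon, +/-90 = poles), tapering linearly to zero.
    This linear falloff is an assumed model, not a documented one."""
    t = min(abs(elevation_deg), 90.0) / 90.0
    return base_cm * (1.0 - t)

print(effective_interaxial_cm(0))   # 6.5 -> full depth at the horizon
print(effective_interaxial_cm(90))  # 0.0 -> no depth looking straight up
```

Under this model, any object placed high above or below the camera will carry little or no parallax, which is why matching it against true XYZ geometry gets difficult.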

The stereo render represents the parallax given by the initial camera pair. Since it has been merged into 2D left/right footage, my impression is that each of the stereo cameras needs its very own set-up for this. How that should be organized depends on what is requested as the starting point, since the stereo footage might already determine some steps; once rendered, it is fairly inflexible, parallax-wise. In CC 2015, “Create 3D rig” produces two “cubes” if there was one based on C4D solids. These can then be filled with the six-side views, twice for stereo, in two passes I would think; the cubes should be very large. Whether objects in Ae 3D are closer or not will create another problem. Also, as mentioned before, the pole problem needs to be addressed; a linear falloff to 0° at the poles seems the best intermediate step I can see for now. I also found that objects too close to the camera result in distortion: horizontally, 100 cm seems safe; vertically, as Rick’s green shape suggests, at least 300 cm. With practical rigs one also has to deal with zones of no coverage close to the mid-point of the rig, between camera views. See the ASC magazine, October 2016.
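Those distance guidelines can be folded into one rule-of-thumb check. Interpolating linearly between the 100 cm horizontal and 300 cm vertical figures is again my own assumption, just to make the guideline concrete:

```python
def min_comfortable_distance_cm(elevation_deg: float) -> float:
    """Rule-of-thumb minimum object distance from the rig: 100 cm at the
    horizon, 300 cm straight up or down, interpolated linearly in between.
    The interpolation is an assumption, not an official constraint."""
    t = min(abs(elevation_deg), 90.0) / 90.0
    return 100.0 + t * 200.0

print(min_comfortable_distance_cm(0))   # 100.0 cm horizontally
print(min_comfortable_distance_cm(90))  # 300.0 cm at the pole
```

Anything closer than this envelope risks the distortion described above and the discomfort that close action causes in stereo viewing.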

So: was the idea to mix stereo footage, or just VR-Cam material, to get a VR-stereo result?

If you describe your project: how far would the “VR cube” be from the Ae camera? Since the data is merged to 2D and, to my understanding, “Stereo Convergence” settings do not apply in a 360° VR render from the VR-Cam, is the use more to simulate an environment, or do you have other needs? It is common knowledge that action too close to the viewer results in discomfort, as described in “Digital Stereoscopy” by Benoit Michel. Speaking of the camera: the cm-vs-pixels question, as well as where the cameras are positioned, is absolutely crucial. The 6.5 cm / 6.5 cm relation between the eyes and the head’s rotation axis is not a fixed value. But if you look through a mono camera and adjust things, it will fall apart in the 3D (left/right) view.

All the best


Added content: Oct. 01


Dr. Sassi V. Sassmannshausen Ph.D.
Cinema 4D Mentor since 2004

Photography For C4D Artists: 200 Free Tutorials. Texture, Panorama, HDRI, Camera Projection, etc.
https://www.youtube.com/user/DrSassiLA/playlists

Posted: 23 August 2019 11:54 AM   [ # 4 ]
Total Posts:  1
Joined  2019-08-23
Germano Mariniello - 15 April 2016 04:17 AM

Hello, Rick Barrett!

This tutorial has worked very well for me. But what is the best workflow for using CV-VRCam to render stereo footage from C4D with 3D data (camera, lights, and objects with External Compositing tags) and import it into After Effects for post-production?

I’ve tried to do it using Skybox, but the elements I need to track look completely mismatched. Could you point me in the right direction?

Thanks a lot.

Hi Germano, did you figure it out?
