CV-VRCam: technical limitations? Motion Vector Multi-Pass for example ...
Posted: 21 February 2016 01:38 PM
Total Posts:  8
Joined  2015-09-22

Hi everyone,

Is there a list of technical limitations of the CV-VRCam plug-in? I searched quickly through the video tutorials but didn't find one yet.

In my case, I tried to render a Motion Vector pass for post-production motion blur, but it doesn't seem to work: the result doesn't look as expected - see the file attached.

I guess this pass isn't supported by the plug-in - or is there a way to render the motion vector pass with CV-VRCam?

Kind regards,
Marten

 
 
Posted: 21 February 2016 04:21 PM   [ # 1 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

Hi Marten,

I edited my post here after spending a longer time exploring the problem:

This is easily reproducible; a scene file is perhaps not needed.

I have tested every combination, and my impression so far is that it is simply not working at the moment. Even the Motion Vector render pass (image, not data) fails as well.

The problem shows up even with no animation at all. My initial trouble reproducing it came from a specific Motion Scale setting. Even the Motion tag doesn't help.

As a side note, I'm not aware of a plug-in that is suited for this use, since we are talking here about a single frame that requires right-to-left frame-edge treatment as well. Left-to-right is typical, but seamless blur "around the edges" might be a new feature to request from those companies. The workaround would be an offset image-and-depth-pass combo, masked out toward the new sides (left and right), placed on top of another, non-offset combo. Twice the render time, perhaps.
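The offset part of that workaround can be sketched for a single scanline. `offset_equirect_row` is a hypothetical helper, not part of any plug-in; it just shows the wrap that moves the seam:

```python
def offset_equirect_row(row, frac=0.5):
    """Horizontally wrap one scanline of an equirectangular image.

    Shifting by half the width moves the left/right seam to the image
    center; a post motion-blur pass can then treat the old edge area
    seamlessly. The result is shifted back and masked together with
    the un-offset render near the seams."""
    n = int(len(row) * frac)
    return row[n:] + row[:n]

print(offset_equirect_row([1, 2, 3, 4, 5, 6]))   # [4, 5, 6, 1, 2, 3]
```

Applying the same shift twice (with frac=0.5) restores the original row, which is why the blurred result can simply be shifted back.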

Support might have a specific answer to that, but since VR is such a topic of discussion these days, I'd like to share some thoughts below on why I think this "problem" hasn't been encountered before: the inner workings of such VR presentations have a deeper complexity in terms of motion blur than a simple screen or even a "3D stereo" representation.

========

Motion blur is a tricky question for any individual, interactive 360º×180º view.

Technically, one can easily set parameters for the rendering, and everything seems settled. In reality it is not. Why?
Let's explore this with a simple example: a car ride on the highway. As long as the driver keeps the road as the visual target (looking straight ahead), any motion blur is simple, and a motion vector should be predictable. But is it useful at all?
Since we are talking about interactive experiences, it no longer is. Imagine there is a building along the highway, and interest in that building leads the viewer to make it the visual target. Even if the car is fast, the counter-rotation of the viewer's gaze, now fixed on the building, keeps the "change" to a minimum, so the motion blur should be much lower. If you instead look straight out of the side window, the building gets the most motion blur. That is why, in movies, the car has to slow down for such shots; otherwise it looks too fast.
If we had motion blur baked onto that building based on the side-view position, and stayed focused on it during the movie, we would see a blurry result while expecting more sharpness.
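The tracking argument can be sketched with toy geometry. Everything here (the helper names, the road layout, the numbers) is hypothetical, purely to illustrate why a tracked object should receive almost no blur:

```python
import math

def building_angle(car_x, building_dist=50.0):
    """Angle of a roadside building as seen from the car (radians),
    measured from the direction of travel. Toy geometry: the building
    sits at x = 0, a fixed distance off the road."""
    return math.atan2(building_dist, -car_x)

def relative_blur(car_speed, dt, car_x, tracking):
    """Per-frame angular change of the building relative to the gaze.

    If the viewer tracks the building, the gaze rotates with it and
    the relative change (hence the expected blur) collapses to zero."""
    change = building_angle(car_x + car_speed * dt) - building_angle(car_x)
    gaze = change if tracking else 0.0   # a tracking gaze follows the building
    return abs(change - gaze)

# Car at 30 m/s, 25 fps, 10 m before passing the building:
untracked = relative_blur(30.0, 1 / 25, car_x=-10.0, tracking=False)
tracked = relative_blur(30.0, 1 / 25, car_x=-10.0, tracking=True)
# untracked is clearly nonzero (the building smears across the view);
# tracked is 0.0 (the building stays sharp while the background smears)
```

A pre-rendered vector pass can only encode the untracked case; the tracked case depends on what the viewer does at playback time.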

My personal conclusion: proper motion blur for this problem would depend on the change of the initial render camera (the car speed in the example), the change of position of the moving parts in the movie (the other cars), and the actions of the viewer as mentioned (fixating on the building vs. keeping a fixed view). This complex formula can't be predicted during rendering, nor can rendered motion blur be taken out during VR viewing.
The only way to do this properly is while the movie plays and the user's actions are known. To me, that sounds like motion detection plus a user-interaction matrix at the same time, based on realtime engines.

The perfect answer, from my point of view, would be extremely high frame rates with no motion blur. Since we can move our heads very fast, the refresh rate of images is the key to keeping up. With a low frame rate, this will always be a limited experience.
Footage-based motion blur is a product of camera settings and the camera's directed view, which we don't have in the same way in VR. Hence the high frame rates long required in games: we get our own "blend" of motion aesthetics directly, as in normal environments, since the real world has no motion blur by itself. Sloppily said: "wet-ware MB".

I know I have only scratched the surface of this problem here. As usual, there are many ideas about it, and as Rick pointed out, this tech has a relatively short "half-life"; new ideas and concepts evolve constantly. I often focus on the user experience, because that is ultimately where it succeeds (hopefully) or fails (if badly set up).


All the best


To illustrate the problem a tiny bit, here is a comparison setup; download the clip if you like:
https://www.amazon.com/clouddrive/share/CMWcDOsta7nO1FVKfOUwr2IOozRDOOBwHk2hniiyAzT?ref_=cd_ph_share_link_copy

I hope this illustrates the complexity here a little, e.g., if the audience is focusing on a fixed point on the horizon, the foreground becomes even more blurry. It is not just a general per-frame question; it is an interactive spatial problem. Again, very high frame rates lead to a closer equivalent of "reality", since reality has no motion blur; perception has, cameras have…

 Signature 

Dr. Sassi V. Sassmannshausen Ph.D.
Cinema 4D Mentor since 2004
Maxon Master Trainer, VES, DCS

Photography For C4D Artists: 200 Free Tutorials.
https://www.youtube.com/user/DrSassiLA/playlists

NEW: Cineversity [CV4]

 
 
Posted: 21 February 2016 05:00 PM   [ # 2 ]
Total Posts:  8
Joined  2015-09-22

Hi Dr. Sassi,

Thanks for the example. In my scene, the camera does not move, unlike in your example. But I do get the point that rendered motion blur cannot account for every possible motion of the final viewer.

But in my case there are some points that make me believe my scene still needs rendered motion blur:
- The scene is still; only one object is moving. So you only need motion blur when looking in the direction of that object.
- The viewer is expected to follow this object, but more by eye movement than by head movement, just like watching a normal, non-VR screen, where you would also need motion blur.
- At the end, the object moves very, very fast towards the viewer. So fast that I think he/she will not be able to compensate for that motion with his/her own movement: motion blur needed.

But this really does seem to be a technical problem with 360° rendering: in the meantime, I also tried to render the motion vector pass using the QuickTime VR functionality built into Cinema 4D. The result looks different, but still not as expected - see the new file attached ...

 
 
Posted: 21 February 2016 08:16 PM   [ # 3 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

Marten,

Thanks for the extra image. I was finally able to reproduce it with a very low Scale setting, which is a mix of scene scale and absolute movement in conjunction with the camera frame. Since this frame is 360º, I guess the setting is problematic or impossible at the moment. Again, check with Support whether they have a better idea about it. My concerns about the use in general remain, so I leave it to Support from here.

You might send a file to Support, or share the scene file in the Q&A forum; I can also send an upload link, so I can see a little more of what it is about.

You are right about an object moving toward you: any motion toward or away from the nodal point (camera/lens) will work in a pre-rendered way (vs. full VR immersion based on rotation and movement capture/tracking of the HMD, the head-mounted display). If the motion is not toward or away from the camera center, or the object is large, then we have a tangential movement, and the possibility of interference from head rotation is given! I understand that this is not the case here.
Given that movement, it should be encoded in the BLUE channel, as that would be the z or depth direction of the view, which is not an option in C4D. All you get is a RED and a GREEN channel, which encode the movement within the camera frame, the 2D image.

Blue channel / Z-depth motion: a complex theme in 360º ...

If you have software that supports a blue channel, you might render it in an extra pass. It needs all objects in question to have a Luminance channel. The shader needs to be set to Camera space, and the "scale" is measured in distance; you need to find a fit to Render Settings > Options > Motion Scale. The amount of brightness in that channel is based on the movement, i.e., the distance from the previous frame to the current frame: XPresso > Distance, previous position, etc. See the attached file. It is just a sketch; for red and green, try an analysis of the H and P rotations in the same way. Not precise nor pixel-correct, but usable for small features as described…
Of course no anti-aliasing: AA set to None! It is a data channel after all, not an image channel, and certainly not in 8 bit/channel. Needless to say, each object needs individual treatment.
These post motion-blur options have many limitations, for me mostly due to the missing parts of the 3D scene, since it is rendered in 2D: no motion blur behind transparent objects, nearly impossible to mix with post depth-of-field effects, etc. Everything is always in relation to the camera's Z! What will not work here: objects that have parts in both the camera's Z+ and Z- space, as those will not render correctly at all, and it is a tricky idea anyway. I mention it here to showcase the complexity of an equirectangular projection of motion from a spatial source, expressed by axis measurements only.
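The distance-to-brightness idea can be sketched outside of XPresso. `motion_luminance` is a hypothetical helper that mimics what a Distance node feeding the Luminance channel would compute; the `motion_scale` parameter is likewise an assumption that would have to be tuned against the render setting mentioned above:

```python
import math

def motion_luminance(prev_pos, cur_pos, motion_scale=100.0):
    """Map the per-frame displacement of an object (world units) to a
    0..1 luminance value, as in the XPresso Distance sketch.

    motion_scale (hypothetical) is the displacement that maps to full
    white; it must be tuned to fit Render Settings > Options > Motion
    Scale."""
    dist = math.dist(prev_pos, cur_pos)    # straight-line distance moved
    return min(dist / motion_scale, 1.0)   # clamp: it is a data channel

# An object that moved 25 units since the previous frame:
print(motion_luminance((0, 0, 0), (0, 25, 0)))   # 0.25
```

Because this value is data, not an image, the clamp and the no-AA rule in the post above both apply to whatever renders it.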

BUT, yes, I know: it is a lot of time either way.

The option in C4D would be to render with Sub-Frame Motion Blur, which might be prohibitive in UHD in many cases, render-time wise. I tested it; it works, kind of, but again, I think it might work against the viewer experience in some cases. I discussed Sub-Frame Motion Blur (Scene Motion Blur) in my Jet series, and then used RE:Smart Motion Blur on top of it to smooth the result. In the Pro version you can set vectors manually!

BTW: I added a clip to the post above, to illustrate the problem.

Fast movement of any part will be a problem as long as we have only "TV standard" frame rates, and I mean the real source frame rate, not the artificial screen refresh of TVs.

Yes, eye movement is a very specific theme, and highly dependent on the viewing device. Which indicates even more that things need high frame rates to work, as motion blur added at any point will not be based on it.

My best wishes.

[...edited!]

File Attachments
CV2_r17_drs_16_VRca_11.c4d.zip  (File Size: 41KB - Downloads: 230)
 
 
Posted: 22 February 2016 03:22 AM   [ # 4 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

Marten,

You had one object in mind, and since I have already mentioned all my concerns, I'd like to share an idea of how to get it done anyway, given that you have a short and fast sequence here. I would not suggest this for slower and/or longer sequences.

The key idea is to have a camera moving with the VR-Cam rig, position-wise only, as the current rig is only position-animatable. If it is rotated, the Motion Vector information will not work correctly!

This additional "child" camera (zeroed out to the rig!) must be able to "see" the object in question.

I have used a little Cloth object to add some size to it, since the Motion Vector pass can't be anti-aliased (it's a data channel, which should never be AA'd).

With this camera active, a Motion Vector pass is rendered, of course in 32-bit-per-channel float. (I just saw it claimed again in a third-party manual, hence here we go again: Radiance "32 bit" is NOT a 32-bit-per-channel format; it is a 4×8-bit RGBE format. Never use it in production for this!)
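The RGBE point can be illustrated with a small decoder. This is a sketch of one common shared-exponent decoding convention (the one used by loaders such as stb_image), not code from C4D:

```python
import math

def rgbe_to_float(r, g, b, e):
    """Decode one Radiance RGBE pixel (four 8-bit bytes) to float RGB.

    The single 8-bit exponent shared by all three channels is exactly
    why this is not a 32-bit-per-channel float format: each channel
    keeps only an 8-bit mantissa of precision."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))   # shared exponent: 2 ** (e - 136)
    return (r * f, g * f, b * f)

# Mantissa 128 with exponent 128 decodes to 128 / 256 = 0.5:
print(rgbe_to_float(128, 128, 128, 128))   # (0.5, 0.5, 0.5)
```

With only 256 mantissa steps per channel, a vector pass stored this way quantizes the motion data far too coarsely, which is why OpenEXR float is the safe choice here.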

This OpenEXR file is then used in the Luminance channel for camera projection; the "padding" via Cloth is now off!

The VR-Cam and Render settings are now active.

With only the Luminance channel on and AA set to None, the rendering is quite fast.

This is now the motion vector pass. Note that pure black doesn't work in most plug-ins: in 32-bit-per-channel linear, "no motion" has a value of 0.5. If you do readouts in gamma space, it might be around 73.5%, +/- depending on the profile you choose, hence stay in linear float. Since the motion vector pass we just set up can also quickly render an alpha channel, the "no motion" background is easily added.
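The 0.5 vs. ~73.5% readout is just the sRGB transfer function applied to linear 0.5; a quick check:

```python
def linear_to_srgb(x):
    """Standard sRGB transfer function (IEC 61966-2-1)."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

# Linear 0.5 (the "no motion" value) reads as roughly 73.5% once an
# sRGB display transform is applied:
print(round(linear_to_srgb(0.5) * 100, 1))   # 73.5
```

Other display profiles use different curves, which is why the read-out value drifts with the profile while the linear float stays at exactly 0.5.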

Since I do not know your next step (Ae or NUKE, perhaps Fusion or Smoke), I'll leave it there; you certainly know your setup very well.

So far, I think the one-minute clip works well to get an "idea" of the flow. I have captured one for you. These are not tutorials; they just give a hint.

All the best


Movie clip:
https://www.amazon.com/clouddrive/share/cKak0JOcNdZ5Ur0EfNPpMrXnkAK9t6LAgKsRUz3P8h?ref_=cd_ph_share_link_copy
Scene file with the first render session: ~80MB:
https://www.amazon.com/clouddrive/share/4wPBr4hDSLgm35yj9cNgEKxRmBnW7WNUFl3uic4xr92?ref_=cd_ph_share_link_copy

 
 
Posted: 26 February 2016 12:59 AM   [ # 5 ]
Total Posts:  8
Joined  2015-09-22

Thank you very much for your suggestions! In the end, I rendered the last 30 fast frames with scene motion blur in Cinema, and some slight MB was added afterwards in AfterFX with Pixel Motion Blur ...

My CV-VRCam production experience report can be found in this thread: http://www.cineversity.com/forums/viewthread/1934/

 
 
Posted: 26 February 2016 01:17 AM   [ # 6 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

Thanks a lot, Marten.

As Christin Böhme said during her "rbb" show, "...we don't want to produce any nightmares here", tehehe. Like you said in the interview, you started to grow fond of him.
Well done, and it seems to work nicely, perfect!

So, congratulations to you and the team, and thanks as well for the nice behind-the-scenes documentary; it was nice to see you there. The motion blur worked very well, and the whole setup is well thought out! Great work! My best wishes for any follow-up projects!

Thanks for the nice contribution to Rick’s thread.

Warmest regards :o) and all the best!

 
 
Posted: 26 February 2016 01:34 AM   [ # 7 ]
Total Posts:  8
Joined  2015-09-22

Thank you! :-D

 
 
Posted: 26 February 2016 01:37 AM   [ # 8 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

Gern geschehen/You’re welcome, Marten.

Anytime again. :o)

Greetings to Berlin (I lived there two decades), the official partner city of Los Angeles, CA :o)
