Hi Marten,
I have edited my post here after spending more time exploring the problem:
This is easily reproducible, so a scene file is perhaps not needed.
I have tested every combination, and my impression so far is that it is simply not working at the moment. Even the Motion Vector render pass (image, not data) fails.
The problem shows up even with no animation at all. Initially I could only reproduce it with a specific Motion Scale, and even the Motion tag does not help.
As a side note, I’m not aware of a plug-in that is suited for this use, as we are talking about a single frame that requires a right-to-left frame-edge treatment as well. Left to right is typical, but a seamless blur “around” the edges might be a new feature to request from those companies. The workaround would be an offset image-and-depth-pass combo, masked out at the new sides (left and right), placed on top of another, non-offset combo. Perhaps twice the render time.
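The same idea can be sketched in post without the second full render: since an equirectangular frame wraps horizontally, padding the image with wrapped columns before blurring and cropping afterwards keeps the left/right seam continuous. A minimal numpy sketch; the function names, the pad size, and the toy box blur are my own illustration, not any plug-in’s API:

```python
import numpy as np

def blur_equirect_seamless(img, pad, blur_fn):
    """Blur an equirectangular image so the left/right edges stay seamless.

    img:     H x W array; the left and right edges are the same meridian.
    pad:     number of columns to wrap around each side (>= blur radius).
    blur_fn: any blur applied to the padded image.
    """
    # Wrap columns: put the rightmost `pad` columns on the left and vice
    # versa, so the blur sees continuous content across the seam.
    padded = np.concatenate([img[:, -pad:], img, img[:, :pad]], axis=1)
    blurred = blur_fn(padded)
    # Crop back to the original width.
    return blurred[:, pad:-pad]

# Tiny demo with a 1-pixel horizontal box blur.
def box_blur_h(a):
    return (np.roll(a, 1, axis=1) + a + np.roll(a, -1, axis=1)) / 3.0

frame = np.zeros((4, 8))
frame[:, 0] = 3.0   # bright column sitting right on the seam
out = blur_equirect_seamless(frame, pad=2, blur_fn=box_blur_h)
# The bright seam column now bleeds into the opposite edge as well.
```

A depth-based blur applied to `padded` would see identical content on both sides of the seam, so no left/right discontinuity survives the crop.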
Support might have a specific answer to that, but since VR is a topic in many discussions these days, I would like to share some thoughts below on why I think this “problem” has not been encountered before: the inner workings of such VR presentations have a deeper complexity in terms of motion blur than a simple screen or even “3D stereo” presentation.
========
Motion blur is a tricky question for any individual, interactive 360º×180º view.
Technically, one can easily set the parameters for the rendering and everything seems settled. In reality it is not. Why is that?
Let’s explore this with a simple example: a car ride on the highway. As long as the driver keeps the road as the target (looking straight ahead), any motion blur is simple, and with that a motion vector should be predictable. But is it useful at all?
Since we are talking about interactive experiences, it no longer is. Imagine there is a building along the highway, and interest in that building leads the viewer to make it the visual target. Even though the car is fast, the viewer’s counter-rotation while focusing on the building keeps the “change” to a minimum, so the motion blur should be much lower. If you just look out of the side window instead, the building gets the most motion blur. This is why, in movies, the car has to slow down for such shots; otherwise it looks too fast.
If the render had motion blur on that building based on the side-view position, and we stayed focused on it while the movie plays, we would see a blurry result while expecting more sharpness.
My personal conclusion for this problem: proper motion blur would have to be based on the change of the initial render camera (equal to the car’s speed in the example), the change of position of the moving parts in the scene (the other cars), and the actions of the viewer as mentioned (fixating on the building vs. keeping a fixed view). This complex formula cannot be predicted during rendering, nor can rendered motion blur be taken out during VR viewing.
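For a single object, that “complex formula” can at least be written down, even if it can only be evaluated at playback time. A toy model, with all numbers made up for illustration: the blur a viewer perceives scales with the object’s angular velocity relative to the gaze, not relative to the render camera.

```python
# Toy model: perceived blur streak (degrees) ~ |object angular velocity -
# gaze angular velocity| * shutter time. All values are illustrative.

def perceived_blur(object_deg_per_s, gaze_deg_per_s, shutter_s):
    """Blur streak length in degrees, relative to where the viewer looks."""
    return abs(object_deg_per_s - gaze_deg_per_s) * shutter_s

SHUTTER = 1.0 / 50.0     # e.g. a 180-degree shutter at 25 fps
building = 40.0          # building sweeping past at 40 deg/s (car example)

fixed_view = perceived_blur(building, 0.0, SHUTTER)    # staring out the window
tracking   = perceived_blur(building, 40.0, SHUTTER)   # eyes locked on it
```

Here `fixed_view` comes out at 0.8 degrees of streak while `tracking` is 0.0: the same render-time motion vector cannot be correct for both viewers at once.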
The only way to do this properly is while the movie plays and the user’s actions are known. To me, that sounds like motion detection combined with a user-interaction matrix, based on realtime engines.
The perfect answer, from my point of view, would be extremely high frame rates with no motion blur at all. Since we can move our heads very fast, the refresh rate of the images is the key to keeping up. With a low frame rate, this will always be a limited experience.
Footage-based motion blur is a product of camera settings and of the camera’s directed view, which we do not have in the same way in VR. Hence the high frame rates that games have required for a long time: we produce our own “blend” of motion aesthetics directly, just as in normal environments, since the real world by itself has no motion blur. Sloppily said: “wet-ware MB”.
I know I have only scratched the surface of this problem here. As usual, there are many ideas about it, and as Rick pointed out, the tech has a relatively short “half-life”; new ideas and concepts evolve constantly. I often focus on the user experience, as in the end everything succeeds (hopefully) or fails (if badly set up) on that level.
All the best
To illustrate the problem a tiny bit, here is a comparison setup; download the clip if you like:
https://www.amazon.com/clouddrive/share/CMWcDOsta7nO1FVKfOUwr2IOozRDOOBwHk2hniiyAzT?ref_=cd_ph_share_link_copy
I hope this illustrates a little of the complexity here: e.g., if the audience focuses on a fixed point on the horizon, the foreground will be even more blurry. It is not just a general per-frame question; it is an interactive spatial problem. Again, very high frame rates lead to a closer equivalent of “reality”, as reality has no motion blur; perception has, cameras have…