Zero parallax is a topic for projecting the “stereo” results onto the big canvas in a cinema. This is a critical area, no doubt.
These are my ideas: (…check other sources!)
Since these images are created for an HMD (head-mounted display), the “screen” is almost directly in front of the eyes. The typical problems of a wide theater audience, from the front row to the back row and from the left seats to the right, are a very critical subject in a cinema, but none of that applies to an HMD. A cinema screening is also passive, i.e., not interactive.
Another point where 360º*180º VR differs from 3D stereo: the pole merging alone already adds some complexity here, and pole merging is not a topic in stereo cinematography at all.
If the zero-parallax plane moves from near to infinity on the big screen, with the pivot point each time at the eye (per channel), we get a huge movement on the “silver screen”.
In other words, in a theater the convergence angle of our eyes is not needed; it would destroy the result. In an HMD we can bring that movement back, and here, with the screen so close to the eyes, the angle counts, especially since the distance between the eyes is not the same for everyone. A parallel camera orientation seems the best “one size fits all” approach when eye tracking and the like are missing.
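To give a feeling for the numbers, here is a small back-of-envelope sketch (the 65 mm IPD is just an assumed average, not a measured value) of how the convergence angle of a symmetric toe-in shrinks with distance. Up close the angle is large and viewer-dependent; at depth it almost vanishes, which is why a parallel rig gives up very little while avoiding the per-viewer angle problem:

```python
import math

def convergence_angle_deg(ipd_m, distance_m):
    """Angle between the two eyes' view axes when both converge
    on a point at the given distance (symmetric toe-in)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

# Assumed average IPD of 65 mm, sampled at a few distances.
for d in (0.5, 2.0, 10.0, 30.0):
    print(f"{d:5.1f} m -> {convergence_angle_deg(0.065, d):.3f} deg")
```

At half a meter the angle is several degrees; at 30 meters it is a small fraction of a degree.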
In summary: very close to the eye, not so much changes, because it is a much more defined constellation compared to a 3D stereo presentation. One might argue that the screen size is vastly different, but from the viewer’s point of view it is not so much. In an HMD we have lenses that magnify the image, more often than not aspherical lenses with a denser pixel population in the middle of the field. The field of view normally starts at 90º, which makes a visual field available that starts to convince, and goes up to 150º in more advanced HMDs (FXPHD VR course, Level 1, episode 1: https://www.fxphd.com/details/?idCourse=490 ).
It is easy to see that the setup is based on a parallel camera pair.
Compare both images in Photoshop with “Difference” as the blend mode while they lie on top of each other: the least difference is found at infinity. This also allows for edits while keeping depth continuity.
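The Photoshop step can be mimicked in code. A minimal sketch of the “Difference” blend mode on tiny grayscale rasters stored as nested lists (real images would go through Pillow or NumPy; the pixel values here are made up for illustration):

```python
def difference_blend(left, right):
    """Photoshop-style 'Difference' blend: per-pixel absolute
    difference of two equally sized grayscale rasters (0-255)."""
    return [[abs(a - b) for a, b in zip(rl, rr)]
            for rl, rr in zip(left, right)]

# Made-up 2x2 pixel values: identical pixels, as for content
# at infinity in a parallel rig, come out as 0 (pure black).
left  = [[10, 200], [30, 40]]
right = [[10, 180], [35, 40]]
print(difference_blend(left, right))
```

Regions that stay black in the difference image are the ones with no parallax between the two channels.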
If the infinity point (here the no-parallax point) were moved, animated, or edited (or worse: a depth jump cut), the effect on the images themselves, given the projection so close to the eye, would be small, but the angle between the eyes and the resulting image would be off for the “brain”. So my idea is that infinity is simply a very healthy place to begin.
On the other hand, if you set the no-parallax point very close, anything behind it would be wrong: the positive parallax could be very strong, yet an object in front would carry very little depth information if the audience tries to focus on something far in the background. Normally a VR cinematographer would place this in the blurred part of the DOF (depth of field), but for interactive systems that won’t work, in the same way that motion blur doesn’t work. The less depth information comes from parallax, the more the other depth cues have to work, e.g., size differences, or what is in front of and behind something, etc.
The main point, which the classic 3D stereo definitions do not cover, is that in a theater the “movie” is running (time-based changes) and the interactivity is zero, compared to an HMD with a 360º*180º stereo view. The stereo image in the HMD could even be a still, and while rotating the head the stereo effect is still expected (the poles may be excluded here). If the no-parallax point were very close, the background content could not stay consistent while revolving the head, but since a still pair must work here, that is not acceptable. This is where I see the main part of the idea of keeping things set to infinity, or parallel.
I know there is a tendency to dump classic 3D stereo knowledge 1:1 into VR, but things are a little different here, as I hope I could explain. Feel free to run your own tests, though.
You can always take normal stereo images with a field of view smaller than that of, e.g., a 50mm lens on full frame, and stitch them; in the problems of doing so you might find your answers, considering a very narrow no-parallax setting. With stitching it would even be possible (to a certain degree) to change a lot in terms of image space…
Another idea to make this point clearer: in a VR camera all objects are rendered without DOF blur, so the viewer can look at any object. Let’s say that wasn’t the object the zero parallax was set to; the eyes converge and move until the object kind of “overlaps”. Now the VR cinematographer animates the setup, and perhaps the needed angle between the eyes is not even a convergence, the eyes would need to diverge to keep the object at zero parallax. This is an extreme case, no doubt, but it makes very clear why my assumption is that the no-parallax point in VR should be at infinity, or at least at the furthest visible element. Under normal conditions the body is not able to spread the eyes that way.
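The divergence case can be estimated with a small-angle approximation (orthoscopic viewing assumed, and all numbers hypothetical): the vergence the eyes need for a point physically at z, when the rig’s zero parallax was set to z0, is roughly ipd*(1/z − 1/z0). Wherever that comes out negative, the eyes would have to diverge:

```python
import math

def vergence_demand_deg(ipd_m, z0_m, z_m):
    """Small-angle estimate of the vergence the eyes need to fuse a
    point physically at z when the rig's zero parallax was set to z0
    (orthoscopic viewing assumed). Negative means divergence."""
    return math.degrees(ipd_m * (1.0 / z_m - 1.0 / z0_m))

# Assumed 65 mm IPD. Zero parallax forced to 1 m, object at 10 m:
# negative result, i.e., the eyes would have to diverge.
print(vergence_demand_deg(0.065, 1.0, 10.0))
# Zero parallax at infinity: only a mild, natural convergence.
print(vergence_demand_deg(0.065, math.inf, 10.0))
```

With the no-parallax point at infinity the demand stays a small positive convergence for every finite distance, which is exactly the comfortable regime.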
Setting things to infinity, where the 3D stereo effect is negligible anyway, emphasizes the 3D stereo effect in the foreground. The price is of course that a certain (close) distance no longer works. If you go parallel in the settings, it seems to be limited anyway.
I’m certain there are other ideas about this, but following the old rule of thumb for camera projection, that after 30 meters the parallax is negligible, I think the current setup works, and in an interactive environment we might be talking about meters instead of worlds in between.
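The 30-meter rule of thumb can be sanity-checked by inverting the parallax angle: compute the distance beyond which the binocular parallax of a point drops under a chosen angular threshold. The 0.125º threshold here is my own pick for “negligible”, not a standard value, and the 65 mm IPD is again an assumed average:

```python
import math

def negligible_distance_m(ipd_m, threshold_deg):
    """Distance beyond which the binocular parallax of a point
    subtends less than threshold_deg at the eyes."""
    return (ipd_m / 2) / math.tan(math.radians(threshold_deg) / 2)

# Assumed 65 mm IPD and a 0.125 deg threshold: the result lands
# close to the old 30 m rule of thumb.
print(round(negligible_distance_m(0.065, 0.125), 1))
```

Tighter thresholds push the distance out, looser ones pull it in, but for plausible values the rule of thumb holds up reasonably well.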
If you have another use case that would require a different perspective or idea than presented here, please share some images of your setup, for example if it is not HMD-based.
All the best