Data Passes
Posted: 12 April 2013 03:29 PM
Administrator
Total Posts:  12043
Joined  2011-03-04

Hi There,

I have received some questions about data passes and why they are not anti-aliased. Since Cineware is out, perhaps more people are happy to explore data passes now, so I have put some thoughts about them in this post.

Oversampling a pass to smooth its edges, in most cases, limits the final quality.
These passes are databases, not image material: anti-aliasing or oversampling them is like blurring the data in a spreadsheet.

If you would like to see more of this concept in a tutorial, please let me know in the “Tutorial Suggestion” section whether such a series is wanted, thanks!


All the best
Sassi


Some details

Let’s take the UV pass as an example. UV is the translation between 2D images and 3D objects: it places the “Object Polygons” (via “UV Polygons”) on the image, defining where the parts of the image are used on the object. Because of this, edges that are connected on the 3D object are not always connected in the UV polygons.

The UV space is normalized and measured in units from 0-1 in both directions.
This normalization allows this “Image Space” to be represented with two gradients, one for the U and one for the V direction. These create the colors of the UV pass. The pass should be at least 16 bits per channel to work properly, for example with the Re:Vision plug-in RE:Map for Ae. (http://revisionfx.com/products/remap/features/)
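To illustrate why the bit depth matters, here is a minimal Python sketch (the idea only, not C4D’s actual implementation) of storing a normalized UV coordinate as a color: at 8 bits per channel the quantization already misses the intended texel, at 16 bits it does not.

# Minimal sketch: a UV coordinate stored as a color (U in red, V in green),
# quantized to a given bit depth.

def encode_uv(u, v, bits=16):
    levels = (1 << bits) - 1
    return round(u * levels), round(v * levels)

def decode_uv(r, g, bits=16):
    levels = (1 << bits) - 1
    return r / levels, g / levels

# Addressing column 1000 of a 2048 px wide texture:
u = 1000 / 2048
print(decode_uv(*encode_uv(u, 0.5, bits=8), bits=8)[0] * 2048)    # ~1003.9
print(decode_uv(*encode_uv(u, 0.5, bits=16), bits=16)[0] * 2048)  # ~1000.0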

As a result, each color is a representation of a specific position in the U 0-1 and V 0-1 space. The plug-in calculates the position from the colors (which are the data, by the way!), from the neighboring pixels, and from the direction in which the image needs to be mapped, e.g., in After Effects.

The moment one uses anti-aliasing or oversampling on the data passes directly, the two different edge (border) pixels are mixed into a new color value! That new value is, in fact, a position on the UV map which was never designated for this edge. Using such a map then results in errors, as the content of the provided images is strangely mixed along that edge.
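As a tiny illustration of that edge mixing (hypothetical values, Python):

# Two neighboring edge pixels whose polygons sample very different parts
# of the texture, because their UV polygons are not connected.
uv_left  = (0.10, 0.90)   # pixel belonging to polygon A
uv_right = (0.80, 0.20)   # pixel belonging to polygon B

# An anti-aliased edge pixel blends the two colors ...
blend = tuple((a + b) / 2 for a, b in zip(uv_left, uv_right))
print(blend)              # (0.45, 0.55)

# ... but (0.45, 0.55) is a texture position neither polygon ever referenced,
# so a UV-remapping tool pulls unrelated image content into the seam.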

Oversampling data passes is not a good idea, as it leads to artifacts. The right way is to double or triple the final resolution, and that should be done for everything in the comp (perhaps with less AA needed then).
AFTER this work is done, the result can be down-sampled, which excludes the UV pass. The down-sampling is (again) only applied to, e.g., the beauty pass.
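A small Python sketch of that order of operations, reduced to one scanline with made-up values: the data-pass-driven work happens at the rendered (higher) resolution, and only the resulting picture is scaled down afterwards.

# Order-of-operations sketch (simplified, one scanline).
def remap(uv_row, texture):
    """Look up texture content via the UV data (nearest texel)."""
    n = len(texture)
    return [texture[min(int(u * n), n - 1)] for u in uv_row]

def downsample_2x(row):
    """Box filter: acceptable for picture content, not for data."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]

texture   = [0.1, 0.1, 0.1, 0.1, 99.0, 0.9, 0.9, 0.9]   # bright texel at u=0.5
uv_row_2x = [0.05, 0.05, 0.05, 0.95, 0.95, 0.95]        # hard UV seam, rendered at 2x

good = downsample_2x(remap(uv_row_2x, texture))   # remap first, then scale
bad  = remap(downsample_2x(uv_row_2x), texture)   # scaling the data pass itself
print(good)   # [0.1, 0.5, 0.9]  -- seam pixel is a blend of the two looked-up colors
print(bad)    # [0.1, 99.0, 0.9] -- seam pixel shows content neither side references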

======

In a depth pass, AA or “oversampling” leads to positions between the object and the background, which results in a smeary seam.
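A quick Python illustration with made-up depths:

# Foreground object 2 units from the camera, wall 40 units away.
depth_foreground = 2.0
depth_background = 40.0

# A "smoothed" edge pixel averages the two depths ...
edge_depth = (depth_foreground + depth_background) / 2
print(edge_depth)   # 21.0

# ... so any depth-based effect (fog, depth of field) treats this pixel as if
# something were floating 21 units away, between object and wall: the smeary seam.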

======

The normal pass stands for the surface direction of the polygons. Anti-aliasing or oversampling this pass before it is used creates new surface directions (although this might be interesting in some cases).
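A minimal Python sketch of what averaging does to two face normals across an edge (made-up vectors):

import math

n_top   = (0.0, 1.0, 0.0)    # polygon facing up
n_front = (0.0, 0.0, 1.0)    # neighboring polygon facing the camera

blend = tuple((a + b) / 2 for a, b in zip(n_top, n_front))
print(blend)                                   # (0.0, 0.5, 0.5) -- a 45-degree direction
print(math.sqrt(sum(c * c for c in blend)))    # ~0.707 -- no longer unit length

# Relighting with this pass shades the edge as a rounded bevel (with reduced
# intensity) that does not exist on the geometry.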

+++++

Bottom line: all these data passes need to be set to Preserve RGB in After Effects, because the colors represent numbers here! Any profile or gamma treatment would distort those values. This (Preserve RGB) is an indicator that we use these passes as a database, not as an image. This, in turn, leads to a use that is free of anti-aliasing or oversampling. AA and oversampling need to be done on the beauty pass.
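To show why the numbers must stay untouched, here is a minimal Python sketch of what a 2.2 gamma transfer curve would do to a stored UV value (hypothetical value):

def gamma_encode(v, g=2.2):
    return v ** (1.0 / g)

u_stored = 0.25                   # "use the texel one quarter across the image"
print(gamma_encode(u_stored))     # ~0.533 -- now points past the middle

# Preserve RGB keeps the stored numbers as they are, which is exactly what a
# database of positions, depths, or normals needs.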

 Signature 

Dr. Sassi V. Sassmannshausen Ph.D.
Cinema 4D Mentor since 2004
Maxon Master Trainer, VES, DCS

Photography For C4D Artists: 200 Free Tutorials.
https://www.youtube.com/user/DrSassiLA/playlists

NEW: Cineversity [CV4]

 
 
Posted: 19 April 2013 12:44 PM   [ # 1 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

Vector Motion Pass

This pass in CINEMA 4D represents object-axis-based movement, which includes the camera movement. The movement is not expressed in Z, which means that any movement toward or away from the camera is excluded. This is certainly not based on a technical limitation or an inability of the developers; the motion blur from those movements is most likely lower than that from X and Y movement (as seen from the camera).

The stored information is based on the object-axis changes, as mentioned, and is then applied to the surface of the object. The object closest to the camera, and the one that covers most of each pixel, is represented in the entire pixel. A pixel cannot contain two or more motion directions.
It is a data channel, and in After Effects it needs to have the setting “Preserve RGB”. As mentioned before, this indicates a data pass, and anti-aliasing does not improve anything here. The only way, in this case, is to use higher resolutions to change the image passes on a sub-pixel level. For higher quality, the image passes should be equal in resolution, and then down-sampled. Using a higher-resolution data pass, down-sampling it and then using it for image alteration lowers the resulting quality.
If two objects have different movements, one up for example and one to the side, oversampling or AA would result in a more or less diagonal direction, which is then expressed within the data. This might not be noticeable as long as the objects have equal color and brightness. If one or both of these values are in strong contrast (super-whites, etc.), the resulting artifacts will show up heavily. Motion blur works best in floating-point, linear environments, where a light/energy blur comes closest to the natural phenomenon.
Even if such a seam is only 1-3 pixels instead of a “stair-case” result, it would show up in the worst case as an “ant trail” between the two overlapping objects.
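A small Python sketch of that overlap case, with made-up motion vectors:

v_object_a = (0.0, 12.0)    # pixels of movement: straight up
v_object_b = (12.0, 0.0)    # pixels of movement: to the right

edge_vector = tuple((a + b) / 2 for a, b in zip(v_object_a, v_object_b))
print(edge_vector)          # (6.0, 6.0) -- a diagonal blur direction neither object has

# With similar colors this may go unnoticed; against super-whites the wrongly
# blurred seam (the "ant trail") becomes clearly visible.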

I assume that, for high quality, we are working in a floating-point/linear work space with a gamma of 1.0, and that we roll out highlights after any compositing is done.

======

Position Pass

This is the simplest data of all the passes. If set to mode “Camera”, the Z channel is a unique and very precise depth pass when used in 32 bit/channel float. Storing this information in a float-based format allows for an incredibly exact representation of values. These numbers can be positive as well as negative.
In this pass, any value is possible that represents the dimensions of the project space. Imagine values way beyond 1.0: even if the space went only to 20 units, it would already exceed the range that Adobe Photoshop, e.g. in CS6, offers as a color value. Some applications take higher values into account, some do not, which simply means that they either leave the gamma in the numbers or clip them to 100%. Using gamma-based numbers above 1.0 might result in an effect which can be seen, for example, when an HDRI adjustment of the sun in an image turns black.
Working with standard image tools might create a disagreeable scenario. Standard image tools are most likely built only for positive numbers between 0.0 and 1.0, to cover the integer/gamma formats (whether they are subdivided into 8 bits per channel or 16 bits per channel, for example). More recent tools reflect the “energy” that values of, e.g., 2.7 or 10.0 or even 100 have, following closely the higher-value algorithms which HDRI requires.
A typical Position pass, set to mode “World” in C4D, for example, has negative numbers as well. Most applications can’t even deal with negative numbers, not to mention the variety of AA algorithms or up- and down-sampling methods.
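A minimal Python sketch of what a positive-only, 0-1 image tool does to a World-space Position sample (made-up coordinates):

position_sample = (-350.0, 12.5, 1480.0)   # project-space units

clipped = tuple(min(max(c, 0.0), 1.0) for c in position_sample)
print(clipped)        # (0.0, 1.0, 1.0) -- almost all information is gone

absolute = tuple(abs(c) for c in position_sample)
print(absolute)       # (350.0, 12.5, 1480.0) -- the sign, i.e. which side of
                      # the origin the point lies on, is silently lost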
To make my point here: the anti-aliasing or oversampling idea does not work with most algorithms. Some turn negative values into absolute values or ignore anything below zero. Other AA formulas take the numbers as they are, which means the higher number tends to dominate; e.g., values 0.1 and 19 average to 9.55. (Taking these values in gamma 2.2, averaging them and bringing them back to linear, which would be all wrong, but for the sake of comparison, the value would end up being 5.01943…)
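The two numbers above, checked in Python:

a, b, g = 0.1, 19.0, 2.2

print((a + b) / 2)                                   # 9.55 -- dominated by 19
print(((a ** (1 / g) + b ** (1 / g)) / 2) ** g)      # ~5.0195 -- yet another value,
                                                     # equally meaningless as data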
Edit: If one wants to rotate the Position pass, or simply set it to Camera in Ae (via plug-in), a pre-oversampled pass no longer works properly: since unequal depth values are mixed, the fore- and background are mixed along the edges of the foreground object. Ugly artifacts are the result. /edit


Such uncertainty leads, of course, to the question: if one over-samples/down-samples data passes, which algorithm is used while doing so? Not that I suggest that at all. Producing the images at any resolution and then down-sampling the results (the beauty pass) is possible, but down-sampling the data pass should be avoided.

This is, of course, anything but a stable procedure, given that one might be using different applications along the way. The variety of AA filters and their results would be a longer story here.

======

Plug-Ins

I was made aware today that a plug-in developer uses AA-based passes. Well, I know there is a lot of misconception here, as I have discussed this countless times so far. I can only encourage you to test everything extensively before you go into production.
If the color or brightness contrast of two neighboring objects is extreme, the artifacts will show up earlier. Stress-test your workflow when no one is waiting for your files.

 Signature 

Dr. Sassi V. Sassmannshausen Ph.D.
Cinema 4D Mentor since 2004
Maxon Master Trainer, VES, DCS

Photography For C4D Artists: 200 Free Tutorials.
https://www.youtube.com/user/DrSassiLA/playlists

NEW: Cineversity [CV4]
