Can camera mapping in C4D drive GI like this?
Posted: 06 February 2012 07:58 AM
Total Posts:  86
Joined  2008-03-29

Quick question: is it possible to use our camera projections to light objects in the scene, similar to the Maya plugin mentioned at this link?

http://www.glyphfx.com/examples.html

Quote:
Image-based lighting using projected HDRIs.

The Mattepainting Toolkit can be used to recreate real-world lighting conditions by projecting HDR photos onto geometry. The textured geometry then emits into the scene, similar to traditional infinite sphere image-based lighting methods, allowing other surfaces to be lit.

I’m already familiar with camera mapping and projection man, and I’ve lit scenes with an HDR sphere before, but how would one go about camera mapping HDRs and then lighting scenes with them in C4D?

Thanks!
Biagio

 Signature 

We make TV and film, and podcast about it.
Joke Productions - company site
Producing Unscripted - podcast and blog about unscripted television
Joke and Biagio - filmmaking blog

Posted: 06 February 2012 10:33 AM   [ # 1 ]
Administrator
Total Posts:  365
Joined  2006-05-17
JokeAndBiagio - 06 February 2012 07:58 AM

Quick question: is it possible to use our camera projections to light objects in the scene, similar to the Maya plugin mentioned at this link?
Biagio

Short answer is “yes”.

An HDR is an image like any other, except that it also stores per-pixel intensity information.
So in this case the HDR they are using is a simple photo that is projected onto a mesh rather than used as a spherical map.

So when they project their HDR, certain pixels still generate more light: regardless of how the image is mapped, those pixels carry greater intensity.

This is why the TV in their example produces better lighting.
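Patrick's point about per-pixel intensity can be sketched in a few lines of Python. This is purely illustrative; the function and variable names are hypothetical, not any C4D API:

```python
# Sketch: why an HDR pixel with a value above 1.0 emits more light when
# the textured geometry is used as a GI emitter. Names are illustrative.

def emitted_radiance(pixel_value, emission_scale=1.0):
    """An HDR pixel stores linear radiance, so a value of 6.5 simply
    contributes 6.5 times the light of a value of 1.0, regardless of
    whether the image is mapped spherically or camera-projected."""
    return pixel_value * emission_scale

# An 8-bit (LDR) image clips at 1.0; an HDR image preserves the real value.
ldr_tv_screen = emitted_radiance(1.0)   # brightest an LDR pixel can be
hdr_tv_screen = emitted_radiance(6.5)   # measured radiance kept by the HDR
```

This is the whole trick behind the TV example: the projection changes where the pixel lands, not how much light it represents.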

This is an interesting technique, though; perhaps Dr. Sassi will also chime in on this one, as projection mapping is definitely a passion of his.

Posted: 06 February 2012 03:37 PM   [ # 2 ]
Total Posts:  70
Joined  2006-04-04

Hi Patrick,
I will, thanks for the suggestion. Yes, this is certainly my passion :o)
Take care.

Hi JokeAndBiagio,

Technically it can do the same, with all the limitations shown in the example. Camera mapping is precisely one of the methods you can use here.

However, some details need to be understood to make it work without flaws, or at least to know what to expect at all.

The first thing to understand is that most HDRIs are based on local (camera-perspective) light metering. People normally take the metered exposure as the middle value and dial in, for example, four exposures up and down to get a decent HDRI; an absolute value is not given here. (I will discuss later this year how to do it anyway.) Note that photographers have a different idea of HDRI than 3D artists need to have.
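The bracketed-exposure merge described above can be sketched as a simplified Python routine, a hat-weighted average producing relative radiance. This is a hypothetical simplification for illustration; real HDR merging tools do considerably more, and, as noted, no absolute value results:

```python
# Sketch: merging exposure brackets (e.g. +/-4 EV around a metered middle
# exposure) into one relative-radiance value per pixel. Simplified.

def hat_weight(z):
    """Trust mid-tone pixel values most, near-black and near-white least."""
    return 1.0 - abs(2.0 * z - 1.0)

def merge_brackets(samples):
    """samples: list of (pixel_value_0_to_1, exposure_time_seconds).
    Returns RELATIVE radiance; nothing here calibrates absolute light."""
    num = den = 0.0
    for z, t in samples:
        w = hat_weight(z)
        num += w * (z / t)   # radiance estimate from this one exposure
        den += w
    return num / den if den > 0 else 0.0
```

Note that a clipped pixel (0.0 or 1.0) gets zero weight, which is exactly why the bracket is needed in the first place.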

The camera picks up the light as seen from its position, with the effect that any directional light will not be reproduced as it should be. (Take one image pointing right into a flashlight and one from the side to see the effect.) The first limitation of these illumination techniques is this subjective, non-directional light: directional light cannot be spatially captured to 100% by just a few HDRIs.

The next issue with this technique, currently heavily hyped as "new", is that mostly 360º/180º panoramas are used, or a few single shots in camera mapping. That means anything behind an object is not captured; yet if a "new" object is placed there, it takes its light from the captured areas, which is (maybe) not very precise. The missing coverage will lead to a heavily "distorted" illumination.

The technique of projecting, or "baking HDRI to polygons", became a public option with RealViz's VTour HDRI, where you projected the image data back onto geometry. That was around six years ago. It makes much more sense than any spherical projection of a 360º/180º panorama (I will demonstrate this in an upcoming series). Nevertheless, the GI solution then takes each pixel value on the surface of that geometry into account, but it has no idea how the reflected light would act locally on the object itself. There is no translation from the camera's point of view, and its capture of the light, to the "point of view" or position of the object. This creates the need to capture the HDRI panorama from the object's main position, which is possible, of course. The problem starts when the object moves: then you need to provide a new panorama, or blend from one to the next (which might introduce even more problems). Especially with several objects/characters in the scene, this might end up as some average "light".
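The panorama-blending stopgap mentioned above might look like this minimal, hypothetical sketch: a plain linear blend between two captures. As noted, it can introduce its own problems, since the directional character of the light is still wrong everywhere in between:

```python
# Sketch: blending between two HDRI panoramas captured at different
# positions as an object moves from A to B. Purely illustrative.

def blend_environments(env_a, env_b, t):
    """Linear per-pixel blend of two radiance lists.
    t = 0.0 -> position A, t = 1.0 -> position B.
    In between, the result is an average that matches neither position."""
    return [(1.0 - t) * a + t * b for a, b in zip(env_a, env_b)]
```

Halfway between a bright capture and a dark one, every pixel lands at the midpoint, which is precisely the "average light" problem described above.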

The next problem is that some light sources (compare IES lights) create a different pattern with increasing distance, and that results in a different strength of light on the subject as well. That is not a given with any of these techniques. In short, any light captured by an HDRI, even a video-based one, contains only the local (camera) result of that light.
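The distance point can be illustrated with a small sketch: a CG light object can apply physical falloff per shading point, while a projected HDRI pixel is one fixed value measured from the camera's position only. Illustrative Python, not a renderer API:

```python
# Sketch: inverse-square falloff, which a light object simulates per
# receiving point but a single captured HDRI value cannot encode.

def inverse_square_intensity(source_intensity, distance):
    """Physically based falloff for an idealized point source."""
    return source_intensity / (distance ** 2)

# The same lamp illuminates very differently at 1 m and at 4 m:
near = inverse_square_intensity(100.0, 1.0)   # full strength
far = inverse_square_intensity(100.0, 4.0)    # one sixteenth
```

(Real IES profiles add a directional pattern on top of this falloff, making the gap to a camera-local capture even larger.)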

Reflective objects in the scene: a camera that captures HDRI data will pick up, from its position, light that is reflected in a mirror or from "shiny" surfaces, which might be of no relevance from the object's point of view/position. Reflective surfaces are an important part of the illumination setup, but with camera mapping the re-projected values are based on the camera position only, not even close to the real local values in pretty much any case!

Let's assume the coverage of the complete scene is nicely done (so far as possible at all). The problem is that each (real) material reflects light back into the scene differently, which is (to my knowledge) not a given in available GI solutions: they cannot adjust those parts of the geometry/illumination accordingly, at least not with a spatial light pattern (not even what simple IES lights can do).

I hope those few points "illuminate" the problems of this technique. GI illumination is mostly underestimated in its complexity, and creating similar geometry instead of using just a sphere is already a large step forward. In some cases it is a huge step, as the sphere as the base of a GI solution has a lot of problems and shortcomings.

All in all, camera mapping might support a solution, but the use of light objects, and especially the simulation of the spatial characteristics of a light source, is a discipline that cannot be replaced by an HDRI so far. The good eye and knowledge of a 3D artist are always needed.

I have been working for some time on a longer series about this, but it takes time to explore and see what works and what is just "hot air".

All the best

Sassi

 Signature 

This is one of my old accounts; please do not use it for PMs or other communications, as I will not receive them. Sorry. Check the avatar of the newer posts in the forum; that should always work nicely.

Posted: 06 February 2012 06:34 PM   [ # 3 ]
Total Posts:  86
Joined  2008-03-29

Wow! Thanks to you both for answering so quickly, and thank you, Sassi, for the huge, in-depth answer.  I understand most of what you are saying, and it’s incredibly helpful as I begin to experiment with this technique. 

For an average workflow, do you feel it would look better (though not necessarily be more correct) to simply start with an HDRI sphere and then add light objects to the scene to enhance it? (So for the scene posted above: map the HDRI of the room, then add a blue light where the TV is.)

Or do you feel the extra time to really get the GI mapped correctly will lead to vastly superior results?

Either way, thanks so much for your time and kind response.  I will be referencing this post many times in the future, and can’t wait to see your upcoming series. Sounds fascinating and just what I need.

All the best,
Biagio


Posted: 06 February 2012 07:06 PM   [ # 4 ]
Total Posts:  70
Joined  2006-04-04

Hi Biagio,


Sorry if not everything was clear, but I would not have invested so much research in tutorials if this subject were easy to cover in a "letter"-sized answer. I have worked in photography professionally for many decades, the last of them digital. The problem I encounter when teaching it is the misinformation given in the first place, so I have to clean up before I can start over with students. Double work. However, it is a breathtaking option and field to work in, so I do it.

Your question: the simple answer is, it depends. As usual, the artist is in charge of deciding what is really needed. In fact, most scenes certainly do not need HDRI-based information, especially not if you take the direct light sources out. Why take the lights out of the HDRI and use objects? Because the quality and precision that a light object can produce is not a given with "sphere-based systems". What is left, then, that would require an HDRI? Again, you need to know that there is no "this is the only way to do it" rule.

HDRI is never a 100% replacement for a good light setup with "light objects" (but it is a perfect addition!). If you try, just for fun, to adjust a scene with all its parts to bring reality into your image, you might notice that a huge amount depends on the material. In the past decades I have not seen many material setups that even bother to use the Diffusion channel (they just use the color; go figure). Similar is the use of the default setting of the light's Contrast parameter, or of the Diffuse Falloff in the Illumination channel. All of these (to name only a few) need to play together. If that is not a given, HDRI cannot replace the artist's knowledge.

Light is pretty much all we deal with, and the results are based on the knowledge used to set up a scene. Sometimes missing knowledge is replaced by adding AO to the scene, which is a geometry-based shading effect; it simulates "something" (!) and might improve a badly illuminated scene a little bit.

Well, if I sound like I am ranting, a big sorry from my side. Technology is there to support the artist, not to replace her/him.

To start, some questions and thoughts:

Analyze your scene: what is needed, what light is "on set"?

What materials and objects are there, and how does the light change their appearance?

Do you combine practical footage, or is everything CG?

Do you have enough reference material?

Are all (!) textures captured linear, or do you sit on an old pile of JPEGs with baked-in gamma?

Is a shallow depth of field needed, or motion blur…?

Do you render flat, meaning no crushed blacks and no clipped whites (not needed in a 32 bit/channel rendering and footage format)?

Do you have the capabilities to handle 32 bit/channel in post? Some compositing apps are limited in that field.

All in all, is your pipeline linear and 32 bit/channel from start to end, to justify the effort in the first place?
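The "baked-in gamma" item in the list above can be made concrete with the standard sRGB decoding formula (IEC 61966-2-1). JPEG textures are typically sRGB-encoded and need this linearization before entering a linear 32 bit/channel pipeline, or the GI math operates on the wrong values:

```python
# Sketch: decoding an sRGB-encoded texture value to linear light,
# per the official sRGB transfer function (IEC 61966-2-1).

def srgb_to_linear(c):
    """c is a normalized channel value in 0..1."""
    if c <= 0.04045:
        return c / 12.92                      # linear toe segment
    return ((c + 0.055) / 1.055) ** 2.4       # power segment

# Mid-grey sRGB 0.5 is only about 0.214 in linear light, not 0.5,
# which is exactly the error a "baked-in gamma" JPEG smuggles into GI.
```

A render engine's linear-workflow option typically performs this step automatically for textures tagged as sRGB; the trap is untagged or pre-adjusted files.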


A pipeline is set up based on the requirements and options (time / money), and the ability of the artists to operate such a pipeline.

All the best

Sassi


Posted: 06 February 2012 07:15 PM   [ # 5 ]
Total Posts:  86
Joined  2008-03-29

Fantastic! And no, you don't sound like you're ranting; you sound very passionate, and I appreciate it. All of your points are well taken, especially the idea that artists, not computers, make beautiful images. It's easy to be seduced into thinking there is a magic "HDRI" solution to make things photo-real, but as you point out, it's simply another tool in an artist's kit. I look forward to learning this technique as I hone all the others, and to making it part of my toolkit.

And even your “letter” sized responses are filled with more useful info than almost any tutorial on the web!

Thank you again, and can’t wait for your series.

All the best,
Biagio


Posted: 06 February 2012 08:18 PM   [ # 6 ]
Total Posts:  70
Joined  2006-04-04

Thanks a lot Biagio,

Yes, the more C4D and photography/filming melt together (option-wise), the more passionate I get.

I know that many artists produce incredibly cool and great "stuff". I like to "push" everyone toward that level of skill.

Thanks for being so open-minded, which is certainly an awesome tool to have.

Please note that I have focused above only on the parts related to your question. "Integration" (practical footage, 3D/CG preparation, and compositing-relevant things) is discussed in an upcoming course. Hint: think only about shadows and reflections ... in each direction! To the scene from the object, to the object from the scene, etc.

Have fun with the tools! ;o)

Sassi

