Hi Patrick,
I will, thanks for the suggestion. Yes, this is certainly my passion :o)
Take care.
Hi JokeAndBiagio,
Technically it can do the same, with all the limitations given in the example. Camera mapping is precisely one of the methods you can use here.
However, there are some details that need to be understood to make it work without flaws, or at least to know what to expect at all.
The first thing to understand is that most HDRIs are based on local (camera-perspective) light metering: people normally take the metered light as the middle value and dial in, e.g., four exposures up and down to get a decent HDRI. No absolute value is captured here. (I will discuss later this year how to do it anyway.) Note that photographers have a different idea of HDRI than 3D artists need to have.
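To make the bracketing idea concrete, here is a minimal Python sketch (the function name and the ±2-stop bracket are my own illustration, not any specific tool's API) of merging bracketed exposures into a radiance map. Note that the result is only relative; no absolute luminance scale comes out, which is exactly the limitation described above:

```python
def merge_bracketed(exposures, times):
    """Merge bracketed LDR exposures into a relative HDR radiance value
    per pixel. exposures: list of images (each a flat list of 0..1
    floats, same length); times: matching shutter times in seconds.
    Illustrative only; real tools use calibrated response curves."""
    hdr = []
    for px in zip(*exposures):          # walk all brackets per pixel
        acc = wsum = 0.0
        for v, t in zip(px, times):
            # Hat weighting: trust mid-tones, distrust clipped pixels.
            w = 1.0 - abs(2.0 * v - 1.0)
            acc += w * (v / t)          # divide out the shutter time
            wsum += w
        hdr.append(acc / max(wsum, 1e-8))
    return hdr

# Five brackets, e.g. +/- 2 stops around the metered middle exposure:
times = [1 / 125 * 2 ** s for s in (-2, -1, 0, 1, 2)]
```

The division by shutter time is what makes the values comparable between brackets, but the zero point of the scale stays arbitrary, so only ratios between pixels are meaningful.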
The camera picks up the light as seen from its position, with the effect that any directional light will not be reproduced as it should be. (Take one image pointing right into a flashlight and one from the side to see the effect.) The first limitation of these illumination techniques is that they capture subjective rather than directional light; directional light can’t be spatially captured to 100% by just a few HDRIs.
The next issue with this technique, even though it is currently heavily hyped as “new”, is that mostly 360º/180º panoramas are used, or a few single shots in camera mapping. This means anything behind an object is not captured; if the “new” object is placed there, it takes the light from the given areas, and that is (maybe) not very precise. The missing coverage will lead to a heavily “distorted” illumination.
The technique of projecting, or “baking HDRI to polygons”, became a public option with RealViz’s VTour HDRI, where you projected the image data back onto geometry. That was around six years ago. It makes much more sense than any spherical projection of a 360º/180º panorama (I will demonstrate this in an upcoming series). Nevertheless, the GI solution then takes each pixel value on the surface of such geometry into account, but it has no idea how the reflected light would actually act on the object itself. There is no translation from the camera’s point of view and its capture of the light to the “point of view” or position of the object. This results in the need to capture the HDRI panorama from the object’s main position, which is possible, of course. The problem starts when the object moves: then you need to provide a new panorama, or blend from one to the next (which might introduce even more problems). Especially with several objects/characters in the scene, this might end up as some average “light”.
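The “blend from one panorama to the next” step could be sketched like this (a hypothetical helper, purely for illustration, not any renderer’s API). It can only average the two captures, which is where the extra problems come from: directional features get smeared rather than re-derived for the new position.

```python
def blend_environments(env_a, env_b, pos, pos_a, pos_b):
    """Crude linear blend between two captured environment maps,
    driven by where the object sits between the two capture positions.
    env_a / env_b: flat lists of RGB tuples (same resolution).
    pos, pos_a, pos_b: scalar positions along the object's path."""
    t = (pos - pos_a) / (pos_b - pos_a)
    t = min(1.0, max(0.0, t))           # clamp outside the capture span
    return [tuple(a * (1 - t) + b * t for a, b in zip(ca, cb))
            for ca, cb in zip(env_a, env_b)]
```

Halfway between the two capture points this just returns the average of both panoramas, so any light source visible in only one of them fades in and out instead of moving correctly across the environment.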
The next problem is that some light sources (compare IES lights) create a different pattern with increasing distance, and that results in a different strength of the light on the subject as well. This is not accounted for by any of these techniques. In short, any light captured by an HDRI, even video-based, contains only the local (camera) result of the light.
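As a tiny worked example of the distance issue, assuming a simple point light with pure inverse-square falloff (real IES profiles add a distance- and angle-dependent pattern on top of this, which makes the mismatch even worse):

```python
def irradiance_point_light(intensity, distance):
    """Inverse-square falloff of an idealized point light: the same
    source delivers a very different strength depending on the
    receiver's distance. An HDRI bakes in only the camera's distance."""
    return intensity / (distance ** 2)

# Same lamp, measured at 1 m (camera) vs 4 m (object):
at_camera = irradiance_point_light(100.0, 1.0)   # 100.0
at_object = irradiance_point_light(100.0, 4.0)   # 6.25
```

So an object four times farther from the lamp than the camera receives only one sixteenth of the light the HDRI recorded, yet the baked capture feeds it the camera-side value.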
Reflective objects in the scene: a camera that captures HDRI data will pick up, from its position, light that is reflected in a mirror or from “shiny” surfaces, which might not be of any relevance from the object’s point of view/position. Reflective surfaces are an important part of the illumination setup, but with camera mapping the re-projected values are based only on the camera position, not even close to the real local values in pretty much any case!
Let’s assume the coverage of the complete scene is nicely done (as far as that is possible at all). The problem is that each (real) material reflects light differently back into the scene, which is (to my knowledge) not accounted for in available GI solutions to adjust those parts of the geometry/illumination accordingly, at least not with spatial light patterns (not even as simple IES lights can do).
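A trivial Lambertian sketch of that point (hypothetical names, and a deliberate simplification of real BRDFs): the same incoming light returns to the scene with very different strength depending on the material’s albedo, which a fixed baked pixel value on geometry cannot react to.

```python
def reflected_radiance(incoming, albedo):
    """Lambertian approximation: a diffuse surface sends back into the
    scene a fraction of the incoming light given by its albedo.
    A baked capture fixes this per shoot and cannot adapt to the
    materials actually placed in the 3D scene."""
    return incoming * albedo

same_light = 10.0
white_wall = reflected_radiance(same_light, 0.8)  # bright bounce light
dark_wood  = reflected_radiance(same_light, 0.2)  # far weaker bounce
```

Swap the dark wood for a white wall in the 3D scene and the real bounce light would quadruple, while the projected HDRI pixels stay exactly as they were photographed.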
I hope those few points “illuminate” the problems of this technique. GI illumination is mostly underestimated in its complexity, and creating similar geometry instead of using just a sphere is already a large step forward, in some cases a huge one, as a sphere as the base of a GI solution has a lot of problems and shortcomings.
All in all, camera mapping might support a solution, but the use of light objects, and especially the simulation of the spatial characteristics of a light source, is a discipline that can’t be replaced with HDRI so far. The good eye and knowledge of a 3D artist are always needed.
I have been working for some time on a longer series about this, but it just takes time to explore and see what works and what is just “hot air”.
All the best
Sassi