Hi everybody,
Just a little thing I’d like to sort out to make my workflow better:
My work requires me to take photos of real objects, reproject them onto matching C4D models, and then bake everything into a single UV-mapped texture for display in a small proprietary OpenGL viewer. That way, all of the fine details, decals, and lighting come from the photographs rather than having to be modeled and given materials.
Of course, this requires me to align my camera view of the model to the photographs of the real object. I want these matches to be as close as possible, to avoid manual fiddling with the UVs.
Here’s the thing: to get the best projections (see another thread I started about this), I’m constrained to shoot my photos aligned to the global axes, not oblique to them. That makes it very hard to establish vanishing points in the Camera Calibrator tag, since in an axis-aligned shot the parallel edges barely converge and the vanishing points sit at or near infinity. The Camera Calibrator tag also doesn’t like it when you try to calibrate two cameras on opposite sides of an object.
Does anyone have any tips for doing this matching another way? At the moment I set the focal length and sensor size to match my real camera, then point it at a null so that I can move the aim point and the camera independently. From there, I basically fool around with the positions until I’m satisfied. But is there a quicker, better, more systematic way to do this?
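For concreteness, here’s roughly what my current setup looks like as a Script Manager snippet. The 35 mm focal length and 36 mm sensor width are just placeholder values (swap in your real camera’s specs), and the Target tag is how I keep the camera aimed at the null:

```python
import c4d

def main():
    # Null that serves as the aim point; it can move independently of the camera
    target = c4d.BaseObject(c4d.Onull)
    target.SetName("Match Target")

    cam = c4d.BaseObject(c4d.Ocamera)
    cam.SetName("Photo Match Cam")
    # Placeholder values -- substitute your real camera's specs here
    cam[c4d.CAMERA_FOCUS] = 35.0           # focal length in mm
    cam[c4d.CAMERAOBJECT_APERTURE] = 36.0  # sensor (film gate) width in mm

    # Target tag keeps the camera pointed at the null however either one moves
    tag = cam.MakeTag(c4d.Ttargetexpression)
    tag[c4d.TARGETEXPRESSIONTAG_LINK] = target

    doc.InsertObject(target)  # 'doc' is predefined in the Script Manager
    doc.InsertObject(cam)
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```

That at least gets the lens parameters and the aim-at-null rig set up identically for every photo; the fiddly part is still dialing in the camera and null positions by eye, which is what I’m hoping to improve.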
Any help would be greatly appreciated! Or, if I haven’t described it clearly enough, I would be happy to expand on what I’m doing and what I need.
Best wishes,
Eric