Hello,
Thanks for taking the time to reply.
To change your name, please use the Contact Us option (in the lower-left corner).
Any simulation of light is a reduced setup of reality. Always.
I’m not aware of any full physical calculation of light that would run on any CPU/GPU available to CG artists. Light bounces around in a way that can’t be reproduced in any timely fashion.
So the idea is to find a way to calculate only the relevant parts, from a much smaller subset. Besides, most render engines treat light as a single-frequency phenomenon described as a ray.
Light bounces uncounted times; GI, on the other hand, tries to limit the bounces to get affordable render times. It is always a compromise.
This is by far not all there is to it, but in summary: it is neither physically correct nor accurate. The key point is that it comes close while needing only a fraction of the calculation.
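To make that compromise concrete, here is a toy sketch (my own illustration, not Redshift’s actual GI code) of how capping the bounce count trades a little accuracy for a lot of speed; the albedo and environment-light values are made up:

```python
# Toy illustration (not Redshift's GI code): a surface with albedo 0.5,
# lit by a constant environment, traced with a capped bounce count.
ALBEDO = 0.5     # fraction of energy surviving each bounce (made up)
ENV_LIGHT = 1.0  # light picked up at every bounce (made up)

def radiance(max_bounces):
    """Estimate the light reaching the camera with at most max_bounces bounces.

    Each bounce keeps only ALBEDO of the energy, so capping the count
    loses a little light in exchange for far less calculation.
    """
    throughput = 1.0
    total = 0.0
    for _ in range(max_bounces):
        total += throughput * ENV_LIGHT  # light collected at this bounce
        throughput *= ALBEDO             # energy left for the next bounce
    return total

# The exact answer with unlimited bounces would be ENV_LIGHT / (1 - ALBEDO) = 2.0.
for n in (1, 2, 4, 8):
    print(n, radiance(n))  # 1.0, 1.5, 1.875, 1.9921875 -> close, never exact
```

Already at 4 bounces the result is within about 6% of the exact value, which is the whole point: close enough, at a fraction of the cost.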
While 30 years ago some techniques were the top-notch way to work, they are now considered outdated. That will happen to anything, as everything progresses.
Please have a look here and scroll down to see the Pro/Con lists:
https://docs.redshift3d.com/display/RSDOCS/GI+Engines
Having said all of that, selecting which method is used and (!) how it is used is critical.
Some methods work best with the artist’s support; some do better by just blurring the HDRI (or reducing its size).
Fewer pixels mean that more rays hit the same (or similar) value more often, which means less noise. After all, everything in an HDRI becomes a light source, but what really needs to be in such detail?
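A toy sketch of that effect (a made-up 1D “HDRI” of plain numbers, not a real .hdr file): downsampling keeps the average light the same but lowers the per-sample variance, i.e. the noise a renderer would see when picking random pixels as light sources:

```python
import statistics

# Hypothetical 1D "HDRI": mostly dim sky plus a few very bright sun pixels.
hdri = [0.1] * 60 + [50.0] * 4   # 64 pixels, values made up for illustration

def downsample(pixels, factor):
    """Average each block of `factor` pixels into one (a simple box filter)."""
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

small = downsample(hdri, 8)  # 8 pixels; the sun's energy is spread out

# A ray picking a random pixel sees the same mean energy either way,
# but the per-sample variance (the noise) drops after downsampling.
print(statistics.mean(hdri), statistics.mean(small))             # same average light
print(statistics.pvariance(hdri) > statistics.pvariance(small))  # less variance after blur
```

The total light in the scene stays the same; only the spiky extremes that cause fireflies get averaged away.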
Baking parts of the scene, or the whole scene excluding small but high-res objects, has been a method used in Cinema 4D for a long time. It depends on whether the scene contains only camera animation or whether everything is animated.
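The reason camera-only animation matters can be sketched like this (a toy cache, not Cinema 4D’s actual baking): diffuse surface lighting doesn’t depend on the camera, so it can be computed once and reused for every frame:

```python
import functools

calls = {"n": 0}  # counts how often the expensive solve actually runs

def expensive_lighting(surface_id):
    """Stand-in for a costly GI solve at one surface (result is made up)."""
    calls["n"] += 1
    return 0.5

@functools.lru_cache(maxsize=None)
def baked_lighting(surface_id):
    # "Bake": solve once per surface, then reuse the cached result.
    return expensive_lighting(surface_id)

# Render 10 "frames" of a camera-only animation over 3 surfaces.
frames = [[baked_lighting(s) for s in ("wall", "floor", "prop")]
          for _ in range(10)]
print(calls["n"])  # 3: lighting was solved once per surface, not once per frame
```

The moment an object or light itself animates, the cached result is no longer valid, which is why it matters whether only the camera moves.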
Thanks for the file, but it was again just a project file without any textures in it. I mention that so you can create a file that is usable for the Redshift 3D Forum. Again: File > Save Project with Assets… (then zip the main folder that this creates).
All the best