Hi Mikest,
What I see is that you have a result, but the render time is over the limit for your needs. Since I have neither the model here nor endless render-farm options, my suggestion is to find the source that pushes the render time that high. Exclude things one by one (transparency blur!) to see how the render behaves. Some results are only possible with the render time already established; other scenes might not suffer if some parameters are lowered. In other cases, “post” needs to balance what the rendering couldn’t handle.
What I did with my morning so far was search for alternatives. I’m not a strong believer that only one way exists, and certainly not a fan of ready-made stuff. SSS sits somewhere in between for me: it allows renderings that are difficult to “fake” in some instances. But relying only on it is not my ideal. Since I do animations, the most time-consuming approaches are out of the question from the start, especially as we move firmly into the 4K/UHD era. No time for things that can’t be optimized or faked ;o) [Well, that is just my point of view.]
My suggestion, as mentioned above, is based on multi-pass rendering (call it separate passes), since it is not doable in one rendering. I created an object with similar features (thread and structure) and tried to figure out how to do it without SSS.
My idea is to model the inside of the object as a separate object and render it without the outside. I assume that modeling is not a problem for you, given the example. I did several passes: with transparency (no blur), with reflection, and one with Fog as the only channel, each time with a straight alpha so I have full pixel values. The idea is to blur the inside “parts” in Photoshop and, of course, limit the blur with the outside alpha channel. Note that for the inside you need to apply the outside alpha after blurring.
The Fog result is mixed in as well, perhaps only very slightly. A depth map of the inside object might help a little as well.
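To make the order of operations concrete, here is a minimal Python/NumPy sketch of the step above. Everything in it is my own illustration, not Photoshop internals: the tiny `soft_blur` is a crude stand-in for a Gaussian Blur, and the 8×8 arrays stand in for the rendered passes. The point is that the inside pass is blurred first and only afterwards clipped by the outside object’s alpha, so the glow cannot spill past the silhouette.

```python
import numpy as np

def soft_blur(img, passes=3):
    """Crude neighbour-average blur, a stand-in for Photoshop's Gaussian Blur."""
    out = img.astype(np.float32)
    for _ in range(passes):
        out = (out + np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0)
                   + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 5.0
    return out

# Tiny stand-in passes (8x8 greyscale): an "inside" pass with a bright core,
# the straight alpha of the outside shell, and a background to composite over.
inside = np.zeros((8, 8), dtype=np.float32)
inside[3:5, 3:5] = 1.0                       # bright scattering core
outside_alpha = np.zeros((8, 8), dtype=np.float32)
outside_alpha[1:7, 1:7] = 1.0                # silhouette of the outer object
background = np.full((8, 8), 0.1, dtype=np.float32)

# Blur the inside pass first, THEN clip it with the outside alpha,
# so the blurred glow cannot leak past the outer silhouette.
blurred = soft_blur(inside)
composite = blurred * outside_alpha + background * (1.0 - outside_alpha)
```

Doing the mask afterwards is the whole trick; blurring a pre-masked pass would drag the hard silhouette edge into the glow.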
Keep in mind that some objects can have an Environment-channel-based reflection. This could be based on a 360°/180° rendering from the center of the object (QTVR), blurred before use (blur it and use the Offset filter in Photoshop so the seams blur correctly).
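The Offset trick can be sketched in NumPy as well: `np.roll` wraps pixels from one edge to the other, which matches the left/right seam of a 360° panorama, so a roll-based blur treats the seam as continuous. This is just an illustration of the idea, not any particular application’s filter.

```python
import numpy as np

def seam_safe_blur(pano, passes=4):
    """Horizontal blur whose taps wrap around the image edges.

    np.roll wraps columns from one side to the other, matching the
    left/right seam of a 360-degree panorama, so the seam blurs into
    itself instead of showing a hard line (the Photoshop Offset trick).
    """
    out = pano.astype(np.float32)
    for _ in range(passes):
        out = (out + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 3.0
    return out

# A panorama with a bright feature right at the seam (column 0):
pano = np.zeros((4, 16), dtype=np.float32)
pano[:, 0] = 1.0
blurred = seam_safe_blur(pano)
```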
For the transparency, I would also use the “Absorption Color/Distance” option to a certain degree. Perhaps a little luminance for the inside object as well, as light is scattered there. In the light sources you need to adjust the softness (area) of the shadows to simulate the SSS effect for the material.
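I can’t confirm the exact formula the renderer uses internally, but a common reading of “Absorption Color/Distance” is a Beer-Lambert-style falloff: light that travels exactly the absorption distance through the medium comes out tinted to the absorption colour, and longer paths darken and saturate further. A small sketch under that assumption (the `wax` values are made up):

```python
import numpy as np

def transmitted(absorption_color, absorption_distance, travelled):
    """Beer-Lambert style falloff: a white ray that travels exactly
    `absorption_distance` through the medium comes out as
    `absorption_color`; longer paths darken and saturate further."""
    color = np.asarray(absorption_color, dtype=np.float32)
    return np.power(color, travelled / absorption_distance)

# A warm, slightly absorbing medium (hypothetical values):
wax = np.array([0.9, 0.6, 0.3], dtype=np.float32)
at_reference = transmitted(wax, 2.0, 2.0)  # exactly the absorption colour
deep = transmitted(wax, 2.0, 6.0)          # three distances deep: much warmer
```

This is why the setting reads so intuitively: you pick the colour you want at a given thickness and the falloff follows from that.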
Perhaps you have some other observations; adjust the internal reflection and Fresnel options accordingly. Global Illumination for the inside sounds good to me as well, just as another option. Another advantage of this setup is that you can color correct each layer: if your impression is that the light [while traveling] becomes warmer or colder, just use an adjustment layer and pull in the values you need.
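That warm/cold adjustment-layer idea boils down to a per-channel gain. A trivial sketch (the `warmth` parameter and its red-up/blue-down mapping are purely illustrative, not any application’s control):

```python
import numpy as np

def temperature_gain(layer, warmth):
    """Cheap warm/cold shift: warmth > 0 lifts red and lowers blue,
    warmth < 0 does the opposite (a stand-in for an adjustment layer)."""
    gain = np.array([1.0 + warmth, 1.0, 1.0 - warmth], dtype=np.float32)
    return np.clip(layer * gain, 0.0, 1.0)

layer = np.full((2, 2, 3), 0.5, dtype=np.float32)   # flat mid-grey layer
warmer = temperature_gain(layer, 0.1)               # light warms up slightly
```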
Yes, this is much more work, but you gain the option to adjust each “quality” of the material in Photoshop in real time ... or perhaps add something later on. If you compare that with the extreme render times otherwise, you might get the idea. (Perhaps render each extra pass on a new frame, with Step as the key interpolation, as one would for print; that might save some time.)
As we are talking here almost exclusively about light, 32-bit-per-channel float compositing might be mandatory to stay linear. Sadly, that disables a lot of options in Photoshop. If you have access to NUKE or another full 32-bit-per-channel app, give it a shot there.
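“Staying linear” means converting display-referred sRGB values to linear light before any blending, then back afterwards. These are the standard sRGB transfer functions; the grey-mixing example shows why it matters: averaging in linear light gives a noticeably brighter (physically correct) result than averaging the sRGB numbers.

```python
import numpy as np

def srgb_to_linear(c):
    """Standard sRGB decoding (IEC 61966-2-1)."""
    c = np.asarray(c, dtype=np.float32)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Standard sRGB encoding (inverse of the above)."""
    c = np.asarray(c, dtype=np.float32)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1.0 / 2.4) - 0.055)

# Averaging two greys the "display" way vs. the linear (light) way:
naive = (0.2 + 0.8) / 2.0                                          # 0.5 in sRGB
linear_mix = linear_to_srgb((srgb_to_linear(0.2) + srgb_to_linear(0.8)) / 2.0)
```

Blend modes, blurs, and exposure tweaks all behave like real light only on the linear values, which is exactly why the 32-bit float mode (and apps like NUKE that composite linearly throughout) matters here.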
While typing up the last hours of experimenting, I got the idea that the team on “I, Robot” (Will Smith) did the same for the robot face parts. Control in real time has its advantages.
Good Luck
Sassi
The example images are more of a sketch. I could even duplicate the “inside object” and blur one copy more than the other ... etc.