Updating Records for Diffuse Depth #1 - What the F?
Posted: 09 January 2013 12:41 AM   [ Ignore ]  
Total Posts:  138
Joined  2012-04-04

When attempting to render an animation with the GI Mode set to “IR+QMC Net Render” every frame render is preceded by around 90 seconds of calculation accompanied by a message saying “Updating Records for Diffuse Depth #1.”  This happens whenever I try to render a sequence of frames on my Mac Pro.  When doing a NET render it’s apparent that the client machines are doing similar calculations based on the time it takes to render each frame.  This does not happen when I render a single frame; the frame starts rendering immediately. 

When rendering a frame range, after that 90 second “updating records” calculation it renders the frame, which for this scene at 1920x1080 takes an additional 4-6 minutes on my fastest workstation.  Then it spends another 90 seconds updating those records, and then renders the next frame.  And so on.  This is all despite running a pre-pass on the IR cache, locking it in, and making sure the .gi file was visible to the scene.

That would be an additional 10.5 hours of “updating records” over the course of my 420-frame animation.
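Just to show the arithmetic behind that figure (these are my observed numbers for this scene; plain Python, nothing Cinema 4D-specific):

```python
# Back-of-the-envelope math for the "Updating Records" overhead.
# 90 seconds per frame and 420 frames are my observed numbers.
UPDATE_SECONDS_PER_FRAME = 90
FRAME_COUNT = 420

overhead_hours = UPDATE_SECONDS_PER_FRAME * FRAME_COUNT / 3600
print(f"'Updating records' overhead: {overhead_hours:.1f} hours")
# prints: 'Updating records' overhead: 10.5 hours
```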

What the what?

As I related in my earlier thread dealing with my attempts to create a camera animation of a room lit with GI, I had to resort to workarounds like baking all of the objects in the scene, because my attempts to NET Render the scene resulted in obvious jumps between the ranges of frames rendered on the various client computers; it appeared the client computers weren’t seeing the illumination files, even though they were visible on the server.  So after around 8 hours of work I was able to get everything baked adequately for a final render, the result being 1920x1080 frames that render in under 10 seconds each on a 12-core Mac Pro.

But that was all a workaround.  I really want to just be able to render the scene as-is without all that baking effort.  I’d rather have my computers churn away hours longer on the final render than lose all that billable time setting up all of the objects in my scene to bake in the illumination.

After a lot of research I finally found why the client computers weren’t seeing the IR Cache files on the NET Render server.  When I save a project with assets for NET Render I add “NET” to the name of the project, and so the project name no longer matched the .gi and .gir file names.  Once I made sure those pre-passed files had the same name the client machines saw the cache and I got consistent frames.  It would be swell if the documentation made this clear; I had to find that tidbit of pertinent information on an old CGTalk forum.  Thank you, Google.

But then I ran up against this “Updating Records” issue, making it impractical to render an unbaked version of the scene using a NET Render of our three workstations.  That issue added too much overhead.

I was about to tear my hair out, but on a hunch I changed the GI mode to IR+QMC (Camera Animation).  I ran another IR Cache pre-pass at half-size with a 30-frame frame step, then launched a Net Render.  Lo and behold, it appears to be working.  The frames look the way I want (light leaks and all) and there are no jumps between frames rendered on the different machines (all Mac Pros).

So I guess I’d like to know what the GI Mode for the NET Render is doing that requires all of that additional calculation, even when I pre-pass the cache for EVERY frame and lock it in.  Thinking the “updating records” issue was happening because I was skipping frames in the pre-pass, I tried calculating the cache for a 20-frame sequence with a frame step of 1.  It calculated the IR cache, but also spent a huge chunk of time updating records for diffuse depth.  Yet all of those additional calculations did nothing to the .gi file, and when I test-rendered that 20-frame excerpt it ran the “updating records” calculations all over again on each frame.

Again, what the what?

And does enabling Radiosity Maps do anything to speed up calculations or improve quality in this situation?  I forgot to enable them when I ran the cache pre-pass for this current render (which will probably take 16 hours on our three systems), and the frames look fine.

I’m still itching to see a tutorial on Radiosity Maps, when to use them and what their benefits are.

Thank you.

Shawn Marshall
Marshall Arts Motion Graphics

Posted: 09 January 2013 01:53 PM   [ Ignore ]   [ # 1 ]  
Administrator
Avatar
Total Posts:  365
Joined  2006-05-17

Essentially, when using the “animation” modes for GI you are creating the records / samples from frame to frame.
The system then looks at each frame and tries its best to make sure that the samples / records match between these frames.
This can take a long time: not only do you have to calculate the samples / records for each frame, you also need to have matches from frame to frame…this can be slow, as it is A LOT of data to process.

It seems that when using the (NET) option it will update records from frame to frame regardless of the cache settings…I would assume this is done to help ensure better results from frame to frame…it is likely implemented because there can be differences in calculations between OS X and Windows systems, so checking between each frame should help even out any oddities.

If you just use (Full Animation) you can still cache the results, and the updating of records is much faster.

That said, if you really want to brute-force render without baking or caching…in my original post I suggested just using high sampling rates with high record settings and rendering the GI fresh for each frame.
Generally you can get decent results that way, with the added bonus of no caching and instant rendering…but, as previously stated, you will not get perfect results this way either, due to the limitations of an irradiance-based GI solution.
The other option would be QMC + Radiosity Maps. (Check the help…it is very well documented and outlines the advantages and disadvantages of Radiosity Maps…in short, if you use QMC, only need 1 bounce, and work with proper light objects, it rarely makes sense NOT to use QMC+RM.)

If GI is an absolute must for you, and QMC+RM or IR/IR+QMC (still image) don’t fit your needs, I would suggest looking into third party render engines…but those get far more complex in terms of setup and operation. The trade off there is more development focus on alternative GI methods.

Posted: 09 January 2013 03:56 PM   [ Ignore ]   [ # 2 ]  
Administrator
Avatar
Total Posts:  12043
Joined  2011-03-04

Hey Patrick,

What is your opinion of the idea (based on R14) of calculating only every 10th (or xth) frame, if the camera moves smoothly? I think that technique was discussed three (or four?) years ago: you calculate the samples/cache using frame steps.

Since any frame-by-frame calculation tries to build a complete model of the samples and cache entries from start to finish, a slow camera animation might benefit from it. Once the cache is done, the frame step of course needs to be set back to 1.
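To sketch the idea in plain Python (not the C4D API; the frame range and step here are just example values): the pre-pass only calculates GI records at every Nth frame, and the final render, with the frame step back at 1, reads from the locked cache.

```python
def prepass_frames(start, end, step):
    """Frames whose GI records get calculated during the cache pre-pass."""
    frames = list(range(start, end + 1, step))
    if frames[-1] != end:
        frames.append(end)  # make sure the last frame is cached too
    return frames

# A 420-frame shot with a step of 30 pre-passes only 15 frames
# instead of 421, which is where the time saving comes from.
print(prepass_frames(0, 420, 30))
```

With a slow, smooth camera move the skipped frames should be recoverable by interpolation; fast moves or big occlusion changes would call for a smaller step.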

All the best

Sassi

P.S. I think one of the main problems with his project (from the given model) is that it is missing half the floor and one complete wall. That way, the few samples hit black or anything illuminated. The more evenly a project is illuminated, the less flickering should happen (right?), since GI is always based on the goal of simulating an infinite situation (light bounces nearly infinitely!) with a finite number of samples. Without “scientific accuracy” we have here perhaps a “million to one” accuracy, to save render time.
I also haven’t seen any mention of the use of Portals, or exclusions of objects, or replacement of light sources (light objects vs. luminance objects). Just a rant (wtf) about time, etc.

 Signature 

Dr. Sassi V. Sassmannshausen Ph.D.
Cinema 4D Mentor since 2004
Maxon Master Trainer, VES, DCS

Photography For C4D Artists: 200 Free Tutorials.
https://www.youtube.com/user/DrSassiLA/playlists

NEW: Cineversity [CV4]

Posted: 09 January 2013 04:19 PM   [ Ignore ]   [ # 3 ]  
Administrator
Avatar
Total Posts:  365
Joined  2006-05-17

The frame steps could work in situations with just camera animation, as the actual records shouldn’t need to change, except when new records have to be added because a spot was occluded from the camera view by another object.

But I usually avoid using the animation modes altogether in favour of higher samples / better record distribution / QMC+RM.
I have found that the interpolation the animation modes use can increase the appearance of artefacts, and being able to see frames rendering right away gives a better idea of how the actual render is going.

I think in this scene there is a lot that can be done using alternate solutions, as you outlined in the last thread. I never considered the camera-mapping approach myself, as it is not a workflow I got into…but I do know it can provide a lot of cool benefits, so I am always happy that you can add that perspective smile

But Shawn is looking for a low-setup approach…which is why I was focusing on the still-image method, which is more brute force.
A third-party engine costs money / time / learning too. Regardless of the method selected there will be a learning / adjustment aspect.
Every option is going to take time and effort…there is no way around that wink

Posted: 09 January 2013 04:43 PM   [ Ignore ]   [ # 4 ]  
Administrator
Avatar
Total Posts:  12043
Joined  2011-03-04

Thank you very much Patrick!

======

I have no render farm, but I still need to get stuff done anyway, and of course I like to find ways to maintain quality.

To find a good and balanced solution among knowledge (experience and learning), the time spent, and the available equipment, when the quality is already defined, is not a simple task.

My idea of capturing early on anything that will not change, instead of repeating its creation/calculation, costs effort of course. But at that point the deadlines are at their greatest distance.

I can clearly understand the appeal of setting some parameters and then sending it off to be calculated. But if short render times are required, the setup must be very precise, which means one needs to be experienced enough to see the needs of the project in relation to the targeted quality.

There is a little area that I would like to address as well, which is basically the optimization of the scene (I have mentioned it often before). Highly detailed objects with nearly no effect on the light situation (except casting shadows ;o) should always be evaluated to see whether they really contribute to the result. Take a very detailed chair, for example: does each little part really change anything in the scene? Perhaps not at all, yet in extreme cases a chair can pull more render time than a room with windows and walls. So the “lazy push-button and go” solution will be paid for at least with render time, even if the parameters are set up nicely.

I think there is a reason why studios have render artists; it is not simple to get a good setup, a fast calculation, and high quality without a learning effort. (I write this for the forum, not for you, Patrick; you know that stuff anyway.)


Posted: 09 January 2013 05:40 PM   [ Ignore ]   [ # 5 ]  
Administrator
Avatar
Total Posts:  365
Joined  2006-05-17

Totally!
That is also why there are modelers, uv-unwrappers, texture artists, lighting artists, match-movers, compositors, etc.
Every aspect of 3D can be very specialized, although this is harder in the freelance / broadcast market, as you are thrown into several of these roles.

I guess that is why we are here smile we can help people figure out what these roles are.
In the case of GI here, we have covered the trade-offs between preparation time and rendering time, and which is more important to the artist…as that is really the deciding factor.
I should also mention that I didn’t factor in testing time, both for making yourself familiar with what the options do and for determining what is acceptable.

When it comes to learning a new feature I always hit Google to try and find information on how it works. White papers make for a good overview of what happens behind the scenes, and often provide solid knowledge of what various parameters mean…even if you switch to a different program. (The underlying logic of most things is the same; it is usually just a change in terminology you need to adjust to.)

As I mentioned, Vray is incredibly fast for rendering GI, but the real trade-off there is learning time, which I didn’t emphasize enough…in that case you need to dedicate a few days just to start wrapping your head around the plethora of new parameters.
Everything is time, and the real key is finding where you want to spend it smile (I know at least one studio that uses nothing but an animated HDRI and GI for rendering…but they are also willing to spend 20 hours a frame wink )

Posted: 09 January 2013 05:57 PM   [ Ignore ]   [ # 6 ]  
Administrator
Avatar
Total Posts:  12043
Joined  2011-03-04

Thank you, Patrick. Understanding the “inner mechanics” of each specialization is an important part. There is of course also the aspect of workflow and dependencies among all these steps; no part is an island. Even the modeler influences the render time, for example. I think the main point in this thread is that rendering has roughly the same requirements as every other part (do I hear the character animators protest? Hehe). I stick with that. It is nearly the last part of the 3D work (“post”, of course), and mostly underestimated.
(My first rendering took 24 hours, and C4D can do it today in 2.4 seconds, which makes me more patient with this stuff, because I’m excited about how fast things are today.)

Yes, go to any of the big players, with hundreds of thousands of CPUs, and a frame still needs hours+ there as well ;o). As a friend told me when he got his new NeXT computer in the ’90s: whatever hardware you get, software will take advantage of it and slow you down again. Twenty years later, I hear the same “song”.

Rendering for me is like the darkroom of the analog photography days: if one wasn’t able to, e.g., dodge and burn (three decades ago, ten years before Photoshop), he took the stuff to the “one hour photo booth”. One size fits all; sounds familiar? :o) (Replace “dodge and burn” with any render engine, e.g. RenderMan, to reply to your post, and agree with it.)

