However, when I bring the rendered PNG file into After Effects, the ring’s alpha intensity seems to drop off where it is not overlapping the planet, creating an obvious highlight where they overlap. Like so: https://www.dropbox.com/s/z3490vvdjh4a6su/AlphaAF.jpg?dl=0
I thought if I enabled straight alpha it would be fixed, but the problem remains.
My current hack fix is to adjust the levels of the alpha channel in After Effects. It’s not ideal, and I’d love to understand why this is happening.
Straight alpha means that all pixels will be rendered fully, even the ones in the fringe area of an object. It is normally used to get better results in post, for many reasons, e.g., a clean color correction and perfect transparency, even on multi-layered content. Premultiplied images (which contain the transparency already) work only to a certain degree in that regard. If a premultiplied image is used with a separate alpha channel, the transparency might be applied twice.
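A minimal numeric sketch of that “doubling” (plain Python floats in 0..1, not any app’s actual compositing code): compositing a straight-alpha color multiplies by alpha once, while a premultiplied color has it baked in already, so interpreting a premultiplied pixel as straight applies alpha twice.

```python
def over_straight(fg_color, fg_alpha, bg_color):
    # Straight alpha: the stored color is unmultiplied, so multiply here.
    return fg_color * fg_alpha + bg_color * (1.0 - fg_alpha)

def over_premult(fg_premult_color, fg_alpha, bg_color):
    # Premultiplied alpha: the stored color already contains the alpha factor.
    return fg_premult_color + bg_color * (1.0 - fg_alpha)

fg, a, bg = 0.8, 0.5, 0.2
correct  = over_straight(fg, a, bg)       # 0.8*0.5 + 0.2*0.5 = 0.5
premult  = fg * a                         # what a premultiplied file stores: 0.4
wrong    = over_straight(premult, a, bg)  # alpha applied twice: 0.4*0.5 + 0.1 = 0.3
```

The “wrong” result (0.3 instead of 0.5) is exactly the kind of darkening seen where a misinterpreted layer sits over a background.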
I’m not clear whether you switched to straight alpha in both C4D and Ae, or only in one of them.
Are you sure that Ae is set to color management and that the alpha channel is not set to any gamma, e.g., Rec. 709 or so? Alpha, to my understanding, is not an image and should therefore not have any gamma applied.
If I had your project file, I would check it in float, e.g., OpenEXR, and see how that changes things, including textures. A second setup, then in Fusion, would serve to compare.
To my knowledge, PNG does not premultiply for storage, and does not force the alpha to be gamma-treated (https://www.w3.org/TR/PNG-DataRep.html). I never use PNG in production, so I really can’t tell more.
The mask in Ae is of course for comparison, and to not double the information I inverted it. I did not match the shape, so the problems can be seen in the small stripes around them.
Set the alpha channels to non-RGB and the problem shows up instantly. I have set the rendering engine in Ae to 32 bit; it is bad enough to work with 8 bit/channel color, but to keep going with that limitation is certainly not helping during production. You might switch to 16 bit/channel color, or just for fun, shortly back to 8 bit…
Thank you for the advice, I did set the modes accordingly in AE to match straight mode.
I think, as you then discuss, the issue is with colour management. I tried setting the colour profile to linear in the C4D render settings, and this seemed to fix that specific problem. But switching to linear made everything much darker and higher contrast; as you can probably tell, I am not experienced with colour management. I’ve got by so far using default settings, until I come across this alpha issue on rare occasions.
I will check your files in more detail.
It would be great to get more of an insight into your pipeline / workflow regarding image sequence file formats and colour management.
You were spot on to ask about the change, because there was something not set up correctly, AFAIK.
The main part here was that Ae is of course not aware whether a pass is RGB or a mask channel. To just hope it will work out, while having a monitor that can perhaps show only a limited gamut, is not really on par with the time and money one puts into a project.
Since I learned the analog feature-film pipeline back in the late ’80s, and then had to relearn everything for digital (and I continue to do so), asking me about this is certainly one of my favorite themes. So, I have my ideas about it, and a lot of it is based on burned fingers.
“My pipeline” … that is a longer story, as I certainly don’t have a ‘single one and only’. In the past decade I have set up another ‘one’ for each movie, just to gain more ideas about it. I work with C4D float and with RED raw, and using 6K raw footage is certainly a tough challenge for any pipeline, speaking of indie-level equipment. Getting things working anyway is the challenge, and gaining experience while setting up different ones helps a lot.
Anyway, I know that some people argue about processing time, hard-disk space, etc., but in the end the most expensive part is everything else (the time the team has invested, for example), so sacrificing quality should certainly be done only with an eye on all parts. I know that there are deadlines and budgets for many projects, but that is a different story, to be solved by the producers, not the artists.
When I shot for one of my movies, back in time, I had to rely on 8 bit/channel footage (Sony EX1), as the helicopter didn’t allow for extra recorders, etc. The limitations of that (failing color correction, banding, etc.) have certainly cured me of ever going that low again. Even over a quarter century back, TV stations were already asking for 10 bit/channel, as they knew 8 bit isn’t cutting it. But I see it all over the place even these days, even though C4D internally renders everything in float.
Color management is one of the most crucial things to know. I can’t stress enough that the delivery format shouldn’t be the limiting format during production, if the delivery is as low as Rec. 709, for example, or sRGB. We are going to Rec. 2020 and other formats, defining much wider gamuts and HDR in a completely new way. Anything produced in Rec. 709 or sRGB will look dull and “yesterdayish” very soon, if not already…
But all of that makes no sense without calibrating the monitor, and having a monitor that allows at least for a P3 gamut, for one’s production.
I know color management is not the most favorable thing to learn, but it’s crucial, and no, Ae will not figure it out by itself.
At the moment I am trying to dive more and more into Fusion. It is free, and since I have asked since NAB 2006 to have it on the Mac as well, which is a given by now, there is no way not to use it.
It’s a great privilege to be chatting with someone with so much experience. I’m really not looking forward to when my MSA expires near the end of the year :(
I really should have known that you work with floating point and massive resolutions. It is logical for such projects, of course.
A lot of the work I do is for low-budget remote clients with crazy short deadlines. When time or budget allows, I have tried out higher-quality formats and resolutions (for projection mapping, for example), but most of the time I have used 8-bit PNG, going 16 bit for gradient issues and darker scenes when needed.
I will be starting work in a studio soon and can’t wait to get an insight into larger scale and quality production techniques. And hopefully with more money I can finally afford to get a good quality monitor.
So when you work in Cinema 4D, do you always use a linear colour profile when rendering? I really need to read up on colour management. I’ve seen high-gamut displays, and the depth and difference compared to a generic consumer monitor is just crazy.
How are you finding Fusion, compared to After Effects?
The main problem is certainly that we humans pay most attention to brightness values in the middle (of whatever range our eyes adjust to). The darker and brighter tones are not really as important. The funny thing is that CRT tubes (super simplified) worked in a similar fashion, minus the “aperture” of our eyes: they had a similar “gamma”.
Over a decade ago, Stu Maschwitz (co-founder of the studio The Orphanage, San Francisco, CA) started to complain about gamma and fought for a “linear light workflow”.
The core problem that he discussed widely, and which for example made Adobe change a lot, was that calculating with gamma-encoded values creates unreliable and weird results.
In a nutshell, or how I explained it in my hands-on classes at the time, to simplify it: you have a human (gamma) level and a machine (linear) level. The human level needs everything translated to gamma on the viewing devices, but internally, the computer has to “digest” everything in linear to get clean results.
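That “digest in linear” point can be shown with a tiny sketch: averaging black and white in gamma space versus in linear light. This uses a simple 2.2 power curve as a stand-in for the real sRGB transfer function, so the numbers are illustrative, not exact.

```python
# Illustrative only: a plain 2.2 gamma, not the exact piecewise sRGB curve.
GAMMA = 2.2

def to_linear(v):
    return v ** GAMMA          # decode a gamma-encoded value to linear light

def to_gamma(v):
    return v ** (1.0 / GAMMA)  # re-encode linear light for display

black, white = 0.0, 1.0
naive   = (black + white) / 2.0                                  # averaged in gamma space: 0.5
correct = to_gamma((to_linear(black) + to_linear(white)) / 2.0)  # averaged in linear light: ~0.73
```

A 50/50 mix of black and white light is much brighter (~0.73 after re-encoding) than the naive gamma-space average (0.5) suggests, which is exactly why gamma-based blending, blurring, and compositing look wrong.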
More in detail: take the 256 values per channel of an 8-bit file. They have a greater density in the middle field, where our attention is, and a lower density in the brighter parts; naturally the amount in the darker areas is smaller anyway, for many reasons. Each value can only land on one of these 256 steps; in other words, there is no 125.678 value, it is either 125 or 126, so after many operations the precision goes “south”. Now take anything outside the denser middle part, and the jumps from one value to the next are much larger, as they were stretched on the gamma curve. Banding is one of the results.
Which leads to using at least 16 bits per channel during production, even if 8 bit/channel was provided and will be delivered, to avoid the “125 or 126” decision. The production pipeline must be larger (while talking about integer, not float), to at least keep the given quality and not lower it.
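A little sketch of that “125 or 126” rounding turning into banding (hypothetical values, plain Python): darken a smooth 0..255 gradient to 10%, store it in 8-bit integer steps, then brighten it back up.

```python
def q8(v):
    """Quantize to one of the 256 integer steps of an 8-bit channel."""
    return max(0, min(255, round(v)))

gradient = list(range(256))                   # a perfectly smooth ramp
stored   = [q8(v * 0.1) for v in gradient]    # darkened, then forced onto integer steps
restored = [q8(v / 0.1) for v in stored]      # brightened back up

# The in-between values are gone for good: only a couple of dozen
# distinct levels survive out of 256, which shows up as banding.
levels = len(set(restored))
```

Doing the same round trip in float (or 16 bit/channel) keeps far more distinct levels, which is the whole argument for a wider production pipeline.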
So, why not store linear light values in an 8 bit/channel integer file? Because the nature of linear would take half of the values for the brightest stop alone (stop as in photography), and the next one a quarter, which means for anything below the two brightest stops we would have 64 steps left. But it gets worse, as the next darker stop takes another 1/8th away, then the next a 1/16th. So four stops of light would leave 1/16th of the values for everything below. Not quite an impressive number. And the definition of the brightest value, white, is just the limit of the sensor or an arbitrary decision by the artist. But I already go too far into detail for the space I have to answer all questions. So:
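The stop arithmetic above, as a quick sketch: in a linear 8-bit encoding, each photographic stop (a halving of light) occupies half of the remaining code values.

```python
total = 256
values_per_stop = []
remaining = total
for stop in range(4):              # the four brightest stops
    half = remaining // 2
    values_per_stop.append(half)   # 128, 64, 32, 16 code values per stop
    remaining -= half

# remaining == 16: only 1/16 of all code values are left for
# everything darker than the four brightest stops.
```

That leaves 16 of 256 values for all the shadows and midtones combined, which is why linear storage only makes sense with more bits or with float.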
This is all in a nutshell, and yes, I do not do gamma-based work at all, whether on a small or large production. The problems that might occur with 8 bit/channel, and the time to fix those with dithering, by applying noise even, or with other weird additional tricks, might take away the advantage of working in such a limited space in the first place.
Colorspace is key because it manages all the color. That sounds weird, but the key here is that any pixel value, color managed or not, will be color managed when it is delivered, e.g., to your screen. If none is given, it will be assumed that the material was created with one. If you calibrate your monitor, so that all the variables a monitor has are neutralized as far as possible, that creates a profile. What everyone wants is that what you see on your screen is the same that someone else will see on his or her screen or projection.
Since we have different sizes of colorspace, the gamut, the definition of color for each RGB value will vary based on it. If the space is too small, values will be clipped, and from there, there is no way back to retrieve them.
While working in a production pipeline, one does not want the small color space of the smallest delivery; the working space should be larger than even the best delivery. So at no point do we have a “125 or 126” effect, or again something like clipping.
Moving from one color space to the next is something that should be done with care, and at least after having read about the four options (the rendering intents) described in the Photoshop manual.
After Effects and Fusion. Two completely different approaches to doing things. One can compare them, but it should be more a question of where one feels comfortable, and more importantly, what kind of project we are talking about.
My history with After Effects began over two decades ago. Since then I have learned several node-based systems: Shake, Nuke, and Fusion, to name the main ones. Others are available and have their place.
My productions have been on “timeline”-based systems like Ae, but also on “node”-based systems. My personal opinion about it is not so important, as everyone should explore that in depth, and at one point move a simple as well as a complex project through both. It will give a good idea of what works in a certain environment and what does not.
So, why Fusion? I loved Shake, but it reached end of life a long time ago. Then I learned Nuke, which is certainly a massive and powerful tool, but very expensive for indie productions if not used full time. Fusion has long been supported on the Windows side of C4D, and now that it is on the Mac one can use it in the same way, except that the Mac side misses a specific 3D output; but OpenEXR, FBX, and Alembic allow for a huge exchange between the two. The OpenEXR options in Nuke are second to none, but very usable in Fusion. Fusion also has a nice 3D setup, and the options to continue work in it that was started in C4D, e.g., camera mapping, are just great. One can create setups that are reusable in other projects by just copy and paste, a time-saver compared to what I have seen with Ae, for example. Since it is free, with a few limitations, I think it is a wonderful app, and Blackmagic Design will push it, given the past few years with Resolve and Fusion.
As with the other points above, in a nutshell: the separation between motion graphics and feature film is no longer given, and even Fusion already focuses on motion-graphics artists to blur that line even more. Again, every artist has to explore those tools in depth to really know what’s best for him or her. Trusting little blogs, and working for years based on that with the wrong tools, is a weird idea from where I look at it.
Anyway, your next studio will have its very own ideas about this, and those are certainly based on a wide mix of experience, problems, and what each artist shares.
I hope you will have an option to keep your options wide open, including the MSA you mentioned. You might have to ask whether a normal membership can be given; I’m not the right person to answer that.
You talked about privilege, and all I can reply is that this is something that works in both directions in the same way. In the past 13 years that I have answered questions in fora, I have gained so much by exploring areas I might never have touched in that specific depth otherwise. I see it as a win-win situation.
Thank you for your insight. So generally the problem is that, as you said, human vision does not perceive the colour spectrum linearly; it is biased toward specific ranges due to evolution and environment, etc. (I guess!).
So that is why we need to carry as much data as possible throughout the pipeline toward the output devices; a linear mapping is needed for accurate computation, and we humans will only ever see a limited range of that.
I think I’ll be coming back to this thread when I have more time and definitely be reading up more on the subject.
This is not my personal idea about it; it has been discussed over and over again a decade ago, which finally led to the change of so many (but sadly not all) applications. In the end, sometimes the artist’s judgement is simply needed to find out what to do, e.g., whether something in a file is data or image information.
The linear-light workflow and the color spectrum are connected, but not the same discussion. If we start with sRGB or Rec. 709 as the color space, and stay in it the whole time because it is our delivery, the material will see damage.
Making judgement calls on an sRGB-gamut display and feeling safe and sound is certainly the wrong way to do it these days, with so much more to come soon, as mentioned. The shortcomings will be seen soon, and then one might lose clients, hence why I have pushed this point since … ever. But I see people happily using 8 bit/channel and the smallest gamut (besides GIF-based gamuts, 256 index colors, etc.). I can’t change what people do and how valuable their time is compared to speed and hard drives; that is up to everyone.
A few years ago I also put all my knowledge from several decades of photography, as well as videography, and for the past five years even as a RED (shooter/owner) cinematographer, into the series below (signature area: link). I worked two years on these nearly 200 tutorials, and we decided to have them on my YouTube channel, so the connection between 3D and photography artists can be established in a better way, since shooting for 3D is a completely different idea than shooting as an artist. But you will find nearly no (!) texture collection anywhere that shares the “light” in which it was shot, i.e., Kelvin, etc.
Yes of course, I just meant, I understood what you said, and I think you’ve explained it all so well. I do feel a little embarrassed after admitting I know so little about colour management, but I don’t regret it, thanks for all those links!
Well, that was not the point to make you feel that way, Nick.
My point is of course to push people a little bit, so their work is more future proof. I like to have everyone enjoying the fruits of their work in the best way.
I remember very well how I felt when Stu started talking about it, while the tools had very little option to set things up that way back in the day. There was that feeling of getting pushed out of my comfort zone. I can’t really say that I loved it, but somehow I felt, yes, he was onto something. As it turned out, he was spot on. “Passing the Torch”, as he called his last post on this theme (AFAIK), made me wonder how much I have to pay forward. I love to share knowledge, as I think that we will not make a lot of difference just by knowing some tips and tricks, or even skills. We all make a difference with the art we create, something that is personal and unique. Hence why I share anything that can be shared.