I purchased a model on TurboSquid and had to make several adjustments/additions. It was already UV unwrapped and everything was pretty much good to go. However, after having to add additional geometry and move some polys around, I have a few errors to clean up. I tried to be cognizant of the areas I knew I would need to clean up. Is there an easy way to fix UVs in Cinema, or would I pretty much need to start from scratch?
Thanks Dr. Sassi. I have sent over a file. Basically the polys around the trigger hole have been adjusted. There’s also a bunch on the inside of the Glock that have been added. I’m thinking if I can separate the inside of the Glock from the outside, then somehow blend the two together? Keep the outside as a UV mapped model, but the inside if I could just throw a material on it and match it as close as I can? I am using Corona renderer, but that shouldn’t matter in this particular case.
Thanks for the file. I have sent you a link via PM.
The new polygons were placed over the whole UV space; even worse, since the Texture Tag was set to a lot of tiles, these little parts picked up a lot of texture information. Given the tile setting, I assume no real texture had been assigned to the object yet. Based on that, I assumed it would be easier to rebuild the UV space completely than to fix it. However, you will find both options. The first one I did keeps the original parts in place, though I'm not sure why they were laid out that way, as I found no indication of a proper Texture Tag setting.
The second one ("…1to1.c4d") uses the texture space so that each pixel appears only once on the object. You can pick whichever works for you.
It is a longer process to find the right (?) setup, especially when most of the steps are just assumptions; not really the way it should work, of course. I hope I made the right assumptions, but tiling plus UVs makes no sense to me in this case, hence why I moved ahead.
The point with index number 506 seemed to be an edge point, which indicates an n-gon here. You might fix that (Weld, for example). Otherwise, I couldn't find a problem in the mesh.
If I get more information, maybe I have other ideas. As usual, I’m happy to look into it.
Oh wow thank you so much for showing this. You unwrapped it quite quickly, and I like how you kept the other UVs in place!
I apologize for the tiling! I had it set to 8 so that it was easier to see the errors in the UV. It should be set to 1.
I’ve sent over a couple renders from my original file to show what I’m running into. Basically the outside of the Glock looks great. I had to remodel the entire inside, and of course that has ruined all the UVs. Would you have any advice on how I could keep the outside the way it is, but clean up the inside of the model? Maybe I don’t need to UV unwrap the inside, if I can just find a basic metal material? And to make things even more confusing, the outside material contains diffuse, displacement, gloss, ior, normal, and reflection textures input into each channel of the material. I just don’t know how to make it look good at this point :(
The tiling led me to the conclusion that there was no image for the surface yet. I was missing information, and I should have been stubborn and requested it. My fault.
My tip: take the UV image of your object, set the resolution (Ps canvas) to the wanted size, then tile and fill it. Perhaps mark each tile a little differently; otherwise the repetition will not show where polygons sit on top of each other.
About image one: I have sent a screen-capture link to your PM, showcasing how to locate the UV polygons and their neighbors, then move the parts together. Yes, with the precision you'd like to have, that is maybe the way it has to happen. Typically the projection, or "relaxing", is done first, as it certainly was in your case. Once all parts are textured, patching up the problem zones is typically done manually; not always, but more often than not.
While moving the UV polygons, please keep an eye on the corresponding area of the object and how it changes to fit the texture. This is pretty much the key to UV work: select, move, rotate, and scale. Other options exist that are faster for a completely new setup, but for patching a few polygons…
Image two works in the same way: the orphaned UV polygons are in the lower-left area and will find their place in the lower-right area of the texture.
Image three (the collection) was too small to make out details, but it was enough to see that the original UV setup can't simply be replaced. The Remap function is never really an improvement in quality, so for now, stick with the original.
P.S.: In case anyone wants to follow along, here is an example with a plane object that has a similar (simplified) problem, since I can't use Matt's file.
Please observe how the "problem" square changes its content while being moved, rotated, and scaled. This is the main idea of UVs: if you get this down, everything else in terms of UVs will come after a while. Just play with it.
Why did it (the problem square) show anything at all, even while outside the square? Because the Texture Tag was set to tile. Hence why I think it should be set to tile during UV work only when absolutely needed.
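The tiling behavior described above can be sketched in a few lines. This is not Cinema 4D code, just a minimal, hypothetical Python sketch of what a tiled texture lookup does with UV coordinates outside the 0..1 square:

```python
# Minimal sketch of a tiled UV lookup (illustrative only, not C4D's code).
# With tiling on, out-of-range coordinates wrap back into the unit square,
# which is why a UV polygon parked outside 0..1 still shows texture.

def sample_uv(u: float, v: float, tiling: bool):
    if tiling:
        u, v = u % 1.0, v % 1.0        # wrap back into the 0..1 square
    elif not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None                     # outside the texture: nothing to show
    return (u, v)                       # coordinate actually read from the image

print(sample_uv(1.25, -0.5, tiling=True))   # (0.25, 0.5)
print(sample_uv(1.25, -0.5, tiling=False))  # None
```

This also shows why parking polygons outside the unit square (as mentioned later in the thread) reuses the same texture pixels without the UV polygons overlapping.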
Screen capture (… the 500th of my one minute clips I hope everyone enjoys those.)
Yes, it seems so simple: just fix it, right? If a pattern needs to be seamless on the object, the application would have to "read" the texture and interpret it to make it seamless. But it is not always that simple. An Extrude Inner followed by an Extrude on a previously single polygon needs more space in the texture image, but that space can't be at the same spot, as pixels won't be squeezed locally. So the new polygons need to overlap others, which might work with a fixed structure, but most likely not with a more organic texture. Making that differentiation is an artist's call. Perhaps Artificial Intelligence will solve that, but maybe it solves more than we want…
Including at least one texture would have shown me what is expected in each area, and perhaps I could have placed the UVs accordingly. While tiling is on (but set to 100%), one can place parts outside of the UV space so they use the same parts of the texture without the polygons overlapping. Since the pattern you wanted in some spots was repetitive, that would have been my suggestion. Not knowing it, I was left with the assumptions found in the scene file. But yes, having Normal maps as well, for example, makes the Color or diffuse texture incomplete information on its own.
Thanks Dr. Sassi. I’ve gone ahead and sent the actual textures that make up that part of the Glock. I’m really sorry, I should have sent these initially :(
I wish I was able to get the original PSD, so that it would be easier to make changes to the textures themselves, but unfortunately they don’t have the PSD. Here’s the model I purchased: https://www.turbosquid.com/FullPreview/Index.cfm/ID/1096944
Just throwing this out there, but how hard would it be to re-create materials if need be? I think my biggest concern is keeping the little details on the Glock like the grips, text, and logos.
Thanks, Matt, I got the textures. (and I heard just my e-mail sound for CV, new message from you 08:41)
While going through them, I can see that the dot pattern is relatively simple, and one would think to just place the new polygons inside it. But inspecting the same area in "Gloss" and "Reflect", there is an underlying pattern (like a cloud) that rules that option out: the cloud would be cut by simply placing a new UV polygon to fit the dot pattern. With that little cloud "background", the texturing became much more complex. The clouds are not really needed in a fixed UV definition; they could have simply been set up with a 3D noise, and everything would be super simple.
You can create a UV polygon outline from the new modeling update and paint your texture in Photoshop or BodyPaint. The Normal pass seems to be mostly pattern and noise there, so it is easy to expand. https://help.maxon.net/us/#11603
If these are all the adjustments you'd like to make, I would adjust the textures. If there will be more, you might consider starting over and using something other than JPG/PNG (low-bit) textures for your work. Especially the Normal map: it is a PNG in sRGB, and only 8 bits per channel. That means a data image (pure information) was converted into gamma and back out of gamma, which leaves it far from where it could be. I'm not clear about your target quality, but for more modeling work, that might be the point to start from scratch. The displacement texture is OpenEXR and already a great source.
I know that I'm picky with this, so others might have a much more relaxed view, of course, but the handle of the object has not even 800 pixels horizontally. In a close-up, the handle can't even fill 1/8th of a UHD frame before pixelation might happen, even though the texture is 8K!
Set your UV image to tile at 100%. Unfortunately the texture is divided by ten, so you get 102.4 pixels per square in length or height. (Typically those images are powers of two: 1024, 512, 256, 128, etc.) For a minimal supply of information, the rule of thumb is that the texture must be 1.5 times larger than its rendering. That means, per square of your UV texture, ~68 pixels: whatever appears as a full square in the rendered image can't be larger than ~68 pixels before detail runs out. Hence another reason not to tile the UV, but my hints are getting old, so I stop here.
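The texel-density arithmetic above can be written out as a small check. A minimal sketch, using the numbers from this thread (a texture side of 1024 pixels divided across 10 tiles) and the 1.5× rule of thumb; the function name is just an illustration:

```python
# Rough texel-density check, following the rule of thumb above:
# the texture should supply ~1.5x the pixels of the final rendering.

def max_render_size(texture_px: float, tiles: int, safety: float = 1.5) -> float:
    """Largest size (in render pixels) one UV square can occupy
    before the texture runs out of detail."""
    per_tile = texture_px / tiles      # texture pixels per UV square
    return per_tile / safety           # usable size in the rendering

per_tile = 1024 / 10                   # 102.4 texture pixels per square
print(per_tile)                        # 102.4
print(round(max_render_size(1024, 10)))  # ~68 render pixels per square
```

Swapping in power-of-two tile counts (8 instead of 10) keeps the per-square pixel counts whole, which is one reason texture sizes are usually powers of two.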
I hope all the information will help you to get to the quality you are after.
Funny you bring up the gamma issue! The model was originally created in 3ds Max, and I brought it into Cinema. I had to adjust the gamma for all the textures, as it was way off: basically a gamma adjustment of 0.45. Apparently linear is gamma 1.0, so to convert you apply a gamma of either 2.2 or its inverse, 0.454545, depending on the direction. I dunno, still trying to wrap my head around all that.
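The 2.2 / 0.4545 relationship mentioned above is a power function, not an offset, and the two numbers are inverse exponents of each other. A minimal sketch (the function names are just for illustration):

```python
# Gamma is applied as an exponent: 0.4545... = 1/2.2, so encoding and
# decoding with these two exponents are exact inverses of each other.

def gamma_decode(v: float, gamma: float = 2.2) -> float:
    """Display-encoded value (0..1) -> linear light."""
    return v ** gamma

def gamma_encode(v: float, gamma: float = 2.2) -> float:
    """Linear light (0..1) -> display-encoded value."""
    return v ** (1.0 / gamma)          # same as v ** 0.4545...

mid = 0.5
linear = gamma_decode(mid)             # ~0.2176, darker in linear terms
print(round(gamma_encode(linear), 6))  # round trip restores 0.5
```

This is why a texture that looks "way off" after import usually needs one application of the inverse exponent, not an added or subtracted constant.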
As far as resolution goes, I keep hearing great things about UDIM, as you can essentially set different UV islands to have different resolutions? At least I think that’s how it works. But Cinema doesn’t support this feature :(
I dunno, at this point I'm almost tempted to do away with the textures/UV workflow completely and just create materials for everything. That's nice because I wouldn't have to worry about unwrapping anything, and resolution essentially doesn't matter. I didn't unwrap anything on the AK-47 model and it turned out just fine. It just stinks because the Glock model from TurboSquid looks so nice the way it is with the UV structure :/ However, without unwrapping, you don't get those nice scratches and extra details.
Gamma, well, that is a long story; for over a decade we have tried to avoid it as a data container, but somehow it seems to stick. Let me share just one concern that I have with data/information-based textures in eight bits per channel plus gamma. We have all seen the curve of a gamma function by now. Which gamma, 1.8 or 2.2, is not so much the problem here as what the curve does: it moves values around, and the differentiation among the higher values (where the curve is nearly flat) suffers a lot. Since there are only 256 slots per channel, many previously separate values end up sharing the same slot. This is incurable! It doesn't matter as much for portraits as it does for data or information. Take the Normal pass: each RGB value represents a vector, defining the direction a normal points. We have 256 values for each "rotation", HPB or RGB, so we do not even have one value per degree; 256 values across 360 degrees is the precision even without any gamma. With gamma, we get an extremely uneven distribution of those few values, following the curve. Nope, that can't be taken back, as the 256 are already an extremely rough representation to begin with. Do this a few times, and nothing has any quality at all anymore. Add JPG artifacts on top of it… Why people use this today is beyond me. Seriously. Two decades back, yes, of course, I got that.
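The value loss described above can be demonstrated directly. A minimal sketch, assuming a plain 2.2 power curve (not the exact sRGB formula): push all 256 8-bit codes through a gamma encode, quantize, decode, and quantize again, then count how many distinct codes survive:

```python
# Quantization-loss sketch: gamma-encode every 8-bit value, snap back to
# 8 bits, then decode and snap again. Collisions in the near-flat part of
# the curve mean fewer than 256 distinct values survive the round trip.

GAMMA = 2.2

def to_gamma_8bit(v: int) -> int:
    """Linear 8-bit code -> gamma-encoded 8-bit code."""
    return round((v / 255) ** (1 / GAMMA) * 255)

def to_linear_8bit(v: int) -> int:
    """Gamma-encoded 8-bit code -> linear 8-bit code."""
    return round((v / 255) ** GAMMA * 255)

survivors = {to_linear_8bit(to_gamma_8bit(v)) for v in range(256)}
print(len(survivors))  # fewer than 256: some values are gone for good
```

For a photo this is barely visible; for a normal map, each lost code is a direction the surface can no longer point in, which is the concern raised above.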
If you don't go further with the modeling, the fix is easy and just requires time. Since this is a plastic part, the projections of details are more likely orthogonal, as the mold the plastic is cast in needs to be pulled away; with 3D printing that would be different. So projecting a shader onto selected areas is simple, and it can of course be turned quickly into UV information.
Perhaps wait until R20 is released and check out the new Material Node System. But yes, I know, you want to do it in Corona.
What I wrote above is just sharing information and thoughts. I didn't want to leave it out and have you invest a lot of time, only to see that your close-up looks bad. That's all. I hope I didn't sound critical, but data and information either supply quality or they don't; that was the whole idea behind sharing.