The problem I have is that between the orange and the pink surfaces, there is no contact with the green, which is the floor.
So any lines that have to go from walls to floor don’t align.
What is a good workaround for this? Is it to do some kind of UV relax?
Here is a simplified model of the room with my UV’s https://www.dropbox.com/s/si8pxsbzt0ftpmm/SimpleRoom.c4d?dl=0
Thanks a lot for the files and for using Dropbox, Alex.
I love those questions, and I’m happy to have a look at it.
More information is needed about the projection, or about what you need as output, to get the image onto the four surfaces (three walls + floor) of the room. With some extra information, I hope to form an idea.
For now, all I see is the centuries-old challenge of wrapping a flat map around an object (e.g., a globe) without folds, wrinkles, or any distortion. I even have a complete book about this problem here. (Think Mercator or equirectangular projection.)
Two critical questions:
How is the 3D representation of that room getting the information on the walls?
How will it be projected practically? (A few single projectors, or an LED volume?) I assume it is animated.
So there will be a pixel map created by a projector vendor. I believe it is a 12-projector setup, but I don't have all the info yet. The vendor is supposed to provide us with a pixel map, but we can tell the vendor what we need. This pixel map will go to various content creators on the team.
Does that answer your questions? Yes, it is an age-old problem, which is why I have my fingers crossed that people have found solutions!
Sorry for the delay; I had to dig a little deeper. My understanding of pixel mapping comes more from the stage-lighting department, where one places an image (or video) onto a "light wall" (many bulbs, simplified) via DMX, for example. I learned that some companies now use the term for video projections as well, often coming from single-laser-beam projection, i.e., effect lighting that doesn't produce images. So I know more now, but I'm no less confused about this project.
I can see how a dozen projectors can cover that area, but I'm unclear on what they will provide you. Having said that, so far it feels like the last project, although this one has simpler geometry, not so "cave"-like at all.
I can offer to look into any additional information, and I'm certainly highly interested in finding a solution. If there is anything more you can get from them, I'm happy to explore it. If I could see a single pixel map for this project, containing all 12 sources, even a small one, I would start working on it.
So I got some extra info. We basically have a setup with two media servers. One drives the floor with 3 projectors. The other drives 3 projectors hitting the main wall and side walls, plus two extra projectors covering the curved corners. The media server will take a single pixel map for the walls, and another one for the floor.
What we are trying to figure out in our content pipeline is ways to create content that will transition from wall to floor, even though they are not connected in the pixel map.
Does that make sense?
So I assume the wall file is a 3x3 grid of HD images/footage, making it roughly a 6K source file.
I assume (and that is the uncomfortable part, as assumptions lead to problems) that the images (1 to 9) are expected to be ready to use in the projectors. This means setting up a camera in the scene for each image and combining the results into a "contact sheet"-like layout.
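To make the "3x3 HD makes a 6K source" arithmetic concrete, here is a minimal sketch of the assumed contact-sheet layout. The tile size, grid, and the absence of blend-zone overlap are all assumptions from this thread; a real projector setup usually adds overlap, which would change these numbers.

```python
# Hypothetical 3x3 "contact sheet" of HD projector frames,
# assuming no blend-zone overlap between tiles.

TILE_W, TILE_H = 1920, 1080   # one HD projector frame
COLS, ROWS = 3, 3             # 3x3 grid assumed from the thread

sheet_w = COLS * TILE_W       # 5760 px wide -> roughly a "6K" source
sheet_h = ROWS * TILE_H       # 3240 px high

def tile_offset(index):
    """Top-left pixel of tile `index` (0..8, row-major)."""
    row, col = divmod(index, COLS)
    return col * TILE_W, row * TILE_H
```

With blend zones, each tile would be wider than 1920 and the offsets would overlap, so treat this purely as the no-overlap baseline.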
The other assumption is that the wall is flattened out, and they do the perspective distortion.
Which one is it?
What is also not clear so far: is this content created in a 2D app and applied in Cinema 4D, or is it a direct result of Cinema 4D?
Are people walking inside of it, or is it a room that can be seen from the outside and provide the illusion of a larger scene?
So the pixel map is actually much bigger, as these are 4K projectors. Once I have a pixel map, I will share it.
But I think they are not doing any perspective distortion; they are just hitting each wall straight on, and using the projectors pointed at the curved corners to deal with focus changes. So I think it is like your first assumption, taking blend zones into account.
The content will be created in several apps: AE, C4D, Houdini, and more. But we can certainly give the 2D animators a C4D UV map and let them align the content in there.
Your solution seems pretty great, but in the end we will still have to deliver the content as a "flat", non-perspective-distorted pixel map…
Does that make sense?
Any content created in 3D can be rendered as an equirectangular image and projected back onto the geometry as a spherical map from the same point in space. This works for this geometry because we can "see" everything from one point; nothing occludes anything else. Again, the rendering camera and the projection need to match in position (nodal-point stuff…).
In the file below, I went through four steps: create a 3D animation, then render an equirectangular image. After that, it is projected onto the geometry as Spherical (from the same position as the 360° camera). From there, it was baked while keeping the UVs.
The baked content is then used for the UV texturing and, if needed, via the UV to Mesh "Capsule". I scaled the flat projection to SZ 0.667 to keep the typical equirectangular ratio of 2:1.
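The mapping underneath the "render 360°, project back as Spherical" round trip can be sketched as follows: a world-space point on the room, seen from the camera, lands at an equirectangular (u, v). This also shows why the render camera and the projection must share the same position. The axis conventions here are an assumption for illustration; Cinema 4D's own spherical mapping may differ in orientation.

```python
# Sketch: world-space point -> equirectangular (u, v), seen from `cam`.
# `cam` is the shared nodal point: the same position must be used for
# the 360-degree render and the spherical projection, or the bake will
# not line up. An equirectangular image is 2:1 (width:height).
import math

def equirect_uv(point, cam):
    """Return (u, v) in [0, 1] for a world-space point seen from cam."""
    dx, dy, dz = (p - c for p, c in zip(point, cam))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    lon = math.atan2(dx, dz)      # longitude, -pi..pi (assumed axes)
    lat = math.asin(dy / r)       # latitude, -pi/2..pi/2
    u = lon / (2 * math.pi) + 0.5
    v = 0.5 - lat / math.pi
    return u, v
```

A point straight ahead of the camera maps to the center of the image (0.5, 0.5); moving the projection point away from the render point would shift every (u, v), which is exactly the misalignment the nodal-point match avoids.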
I hope all of that makes sense. This should allow you to get all parts into the scene, and it fits the pixel mapping, which is then placed into the app that drives the projectors.
Hey Sassi
Sorry for the late reply, I was looking into this more deeply and talking to the video tech team.
Your setup makes a ton of sense, and is really a great approach. The only step I don’t understand is your second file in the batch. What is going on in that one?
Also, I need to learn how Capsules work… Is that just a node setup that gets "encapsulated" as a deformer?
Please never feel pressured to reply; even so, it is nice to "hear" whether something worked.
The Capsules are a "wrapper" around a node-based setup. You can find them in the Asset Browser > Operators.
A real power tour through a few of them can be found here: https://youtu.be/3DQKbJ2xWgo
I used the UV to Polygons here to showcase things. It takes a UV mesh and converts it to a Polygon Mesh.
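To illustrate the idea behind UV to Polygons, here is a plain-Python sketch (not the capsule's actual node graph): each polygon's UV coordinates become the point positions of a new, flat mesh laid out in UV space, with shared UV vertices deduplicated.

```python
# Conceptual sketch of "UV mesh -> polygon mesh": point positions are
# taken from UV space, so the result is a flat (z = 0) mesh laid out
# like the UV map. Illustration only, not the capsule's implementation.

def uv_to_polygons(uv_polys):
    """uv_polys: list of polygons, each a list of (u, v) pairs.
    Returns (points, polys): points are (x, y, z) tuples on the z = 0
    plane; polys hold indices into points, with shared UVs merged."""
    points, index, polys = [], {}, []
    for poly in uv_polys:
        face = []
        for u, v in poly:
            key = (round(u, 6), round(v, 6))  # merge nearly equal UVs
            if key not in index:
                index[key] = len(points)
                points.append((u, v, 0.0))
            face.append(index[key])
        polys.append(face)
    return points, polys
```

For example, two quads sharing a UV edge come out as six points rather than eight, because the shared edge's vertices are merged.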
The …REeq_01.c4d file? That is just an intermediate step between files 01 and 03. I use a finer mesh first when I store a projection inside the UV mesh (Generate UV Coordinates). You might remember my earliest "little planets" that I shared in the PXC nearly two decades ago. I have translated several of these methods into Cinema 4D, as the engine allows me to work at much larger sizes than some Photoshop plug-ins do, while keeping everything in 32-bit float.
Example https://www.youtube.com/watch?v=T_iyfeUXyro
I think I can say, without bragging, that I have a deep understanding of this area; yet I line up with the many people over the past hundred years who couldn't unwrap the Earth sphere into an undistorted, flat, rectangular representation. But that is pretty much what is needed when the room has to be covered by a seamless, undistorted image. Hence my questions about what is needed: the more I know what I can exclude, the closer we can maybe get to something workable. Perhaps only a specific part of the floor needs to be connected, or it can be layered, each time just the floor and one wall; then another layer with a different wall, so in the end it appears as if the whole is connected. But that is not up to me to define. I can only point out where a problem is and research possibilities.
I'm following up on this topic to let you know that your solution worked perfectly! We were able to use your approach and even skip the final UV unwrap, because our media server (software called Disguise) allowed us to import the spherical camera directly and translate that into the actual data going to the projectors. That was a huge time saver…