Yes and no. It depends. Your file has no images, so I can’t really tell what you were after.
I believe deeply that good texturing starts with a very well-trained eye.
Seeing the mélange of all the parts needed for a material to work, while simultaneously seeing the contextual information, is key.
Going by your mention of the application PixPlant, I assume you would like to take an image and derive from it the channel information for Diffuse, Metallic, Roughness, AO, and Displacement.
Well, the general answer would be no; that kind of function does not really exist.
As I will explain below, images contain information that works against a good result: any photograph bakes in a lot of information that was layered “on top” of the material we are actually interested in.
A better approach is to take such images purely as reference (shot with color charts, etc.), take the 3D object that needs a texture, and bring both into Substance Painter. The advantages are great: no baked-in contextual information, no screwed-up color temperature, no misplaced-gamma nonsense. The photo is perhaps only 8 bit per channel anyway…
But it has an idea of the object and can provide specific maps based on that.
In Cinema 4D, you might check out how to bake a normal map or a displacement map from a detailed model, or perhaps render an AO map. This is possible.
Images might have the disadvantage of being small, and if you need to get very close to an object, things might fall apart visually.
If you paint your own bump map (in Photoshop, perhaps), it can be translated to (fake) normal information via the Normalizer shader.
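For anyone curious what such a Normalizer-style conversion does under the hood, here is a minimal sketch (my own illustration, not the actual shader code) that derives a fake tangent-space normal map from a grayscale bump map:

```python
# Hypothetical sketch: turn a painted grayscale bump/height map into a
# (fake) tangent-space normal map, similar in spirit to a Normalizer-style
# shader. Function name and the strength factor are my own assumptions.
import numpy as np

def height_to_normal(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """height: 2D array in [0, 1]. Returns HxWx3 RGB normal map in [0, 1]."""
    # Finite-difference gradients of the height field (wrapping, so the
    # result stays tileable if the input tiles).
    dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5
    dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5
    # The normal is perpendicular to the slope; z is fixed before normalizing.
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx / length, ny / length, nz / length], axis=-1)
    return n * 0.5 + 0.5  # remap [-1, 1] -> [0, 1] for an RGB image

# A flat bump map yields the familiar uniform (0.5, 0.5, 1.0) blue.
flat = height_to_normal(np.zeros((4, 4)))
```

This is why the result is only ever “fake” detail: it is a slope estimate of what you painted, not measured surface geometry.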
All in all, a tileable texture should not repeat visibly more than about 1.5 times on an object; if it is noticeable as a tile, the viewer’s attention goes there, and your work suffers.
A few thoughts about images, but please also allow me to point to my YouTube channel, where I discuss images for textures, with some basic reproduction tips: Texture.
Often people use the term Physically Correct. I don’t. Physically Plausible is what we can do in 3D. If we can agree on that, I’ll continue.
When we take an image, the light should be even, very soft, and cast no shadows. Any reflections or specular highlights need to be handled; the light sources might need polarization to end up with something useful at all.
Using a light meter together with a gray card seems to be the minimal requirement, and yes, most people simply use their camera as a light meter. A good light meter can help establish a response curve for a camera, which in turn allows you to measure what the camera really needs.
A color meter might not be available, but a gray card should be, and perhaps a Macbeth or similar color chart, to establish a constant color temperature and a constant gray point. Anything else makes the scene feel like a 3D setup from the early days. Consistency. So if you get a texture with no known light temperature and no consistency based on a gray point, you end up mixing and matching.
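The gray-point correction itself is simple once the card is in the shot. A minimal sketch, assuming linear RGB floats and a hand-picked card region (the function name and setup are my own, not any tool’s API):

```python
# Hypothetical sketch: neutralize a photo against a sampled gray-card patch
# so every texture in a set shares the same gray point. In practice you
# would sample the card region from your own image.
import numpy as np

def gray_card_balance(img: np.ndarray, patch: np.ndarray) -> np.ndarray:
    """img: HxWx3 linear RGB floats; patch: pixels sampled from the gray card."""
    card = patch.reshape(-1, 3).mean(axis=0)  # average measured card color
    gains = card.mean() / card                # per-channel gain toward neutral
    return np.clip(img * gains, 0.0, 1.0)

# Example: a warm cast (strong red, weak blue) measured on the card.
img = np.full((2, 2, 3), [0.6, 0.5, 0.4])
balanced = gray_card_balance(img, img)  # the card fills the frame here
```

The point is consistency: every texture balanced against the same gray card lands on the same neutral, so the set mixes cleanly.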
Without a depth map, no real displacement or normal map can be created. Two decades ago, some people shot an object from different angles under colorful light sources and derived a normal map that way; clever, but questionable nonetheless. So an application that produces a depth map, or even a normal map, from a single RGB image is not producing real data.
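That multi-shot trick is classic photometric stereo. A minimal sketch under a Lambertian assumption (light directions and values invented for illustration) shows why at least three differently lit shots are needed, while a single image gives you no system to solve:

```python
# Hypothetical sketch of classic photometric stereo: with >= 3 grayscale
# shots under known, differing light directions, surface normals fall out
# of a least-squares solve. The scene values here are invented.
import numpy as np

def photometric_stereo(images: np.ndarray, lights: np.ndarray) -> np.ndarray:
    """images: K x H x W intensities; lights: K x 3 unit light directions.
    Returns H x W x 3 unit normals (Lambertian assumption)."""
    k, h, w = images.shape
    i = images.reshape(k, -1)                       # K x (H*W)
    g, *_ = np.linalg.lstsq(lights, i, rcond=None)  # solve lights @ g = i
    g = g.T.reshape(h, w, 3)                        # albedo-scaled normals
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(norm, 1e-8)

# Three lights around the camera; a flat surface facing the camera.
lights = np.array([[0.5, 0.0, 0.866], [-0.5, 0.0, 0.866], [0.0, 0.5, 0.866]])
shots = np.full((3, 2, 2), 0.866)
normals = photometric_stereo(shots, lights)
```

With one image, the same solve is hopelessly underdetermined, which is exactly why single-photo “normal maps” are guesses, not measurements.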
Diffuse and color, as well as roughness or reflection: this is one of the most complicated measurements I am aware of in surface-reproduction science. The simple answer is that we go by what looks plausible, not by how it really works. We work with single-ray calculations, not with a full frequency spectrum and its changes based on surface and material (refraction).
Besides, any application that wanted to provide even halfway good source material would need the ability to split color and brightness, and the brightness of the color or light information, into separate passes. The only app I know of that does this is quite expensive and mostly used in other fields.
Getting a metallic, a.k.a. reflection, map from a 2D RGB image is impossible. As long as the surface and its context are not recreated, this cannot be established.
Last but not least: AO. This was, and I never get tired of saying it, a fake from the old days. If you use Global Illumination or any kind of light-bouncing calculation, it is already included. Here, too, any calculation is a limited attempt to get away with the fewest iterations. AO by itself has no clue about light and relies entirely on geometry: even if a light source is very close by, AO will render a partly enclosed area dark. Slapped on everywhere, it looks like an oversaturated image, only in brightness values. To me it is over the top most of the time, and doubled up with GI it is simply too much.
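To illustrate why AO is light-blind, here is a toy 2D occlusion estimate (purely my own illustration, not any renderer’s code). Notice that no light position or intensity appears anywhere in the calculation:

```python
# Hypothetical 2D sketch: ambient occlusion is just the fraction of sample
# directions not blocked by geometry. A lamp right next to the point would
# change nothing, because lights never enter the formula.
import math

def ambient_occlusion_2d(blocked) -> float:
    """blocked(angle) -> True if a ray at that angle hits geometry."""
    samples = 256
    hits = sum(blocked(2 * math.pi * i / samples) for i in range(samples))
    return 1.0 - hits / samples  # 1.0 = fully open, 0.0 = fully enclosed

# A point on a flat floor: geometry blocks the lower half-circle,
# so the point reads as roughly half occluded regardless of lighting.
ao = ambient_occlusion_2d(lambda a: math.sin(a) < 0.0)
```

A point in a partly closed space gets darkened the same way whether a lamp sits inside it or not, which is exactly the complaint above.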
Make Tileable: well, that can work or not, and every artist should keep an eye on the results. Triplanar mapping is not the best idea if quality is key.
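For reference, the classic offset-and-blend way of making a texture tileable can be sketched like this (a rough illustration with invented names, and exactly the kind of result an artist still needs to eyeball, since the cross-fade can ghost detail):

```python
# Hypothetical sketch of the classic "offset by half, blend the seam"
# trick for making a grayscale texture tileable. Rolling the image makes
# its borders wrap perfectly and moves the hard seam to the center; we
# then cross-fade between the rolled copy (at the borders) and the
# original (at the center).
import numpy as np

def make_tileable(img: np.ndarray) -> np.ndarray:
    """img: HxW float texture. Returns a roughly seamless-wrapping version."""
    h, w = img.shape[:2]
    shifted = np.roll(img, (h // 2, w // 2), axis=(0, 1))
    # Weight: 1 at the borders (use the seamless rolled copy),
    # 0 at the center (keep the original), fading in between.
    ty = np.abs(np.linspace(-1.0, 1.0, h)).reshape(-1, 1)
    tx = np.abs(np.linspace(-1.0, 1.0, w)).reshape(1, -1)
    weight = np.maximum(ty, tx)
    return weight * shifted + (1.0 - weight) * img

seamless = make_tileable(np.random.rand(128, 128))
```

Automated tools do something comparable (often with smarter in-painting), which is why the result can work or not: the blend region is a compromise, and only your eye decides if it holds up.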
I hope that helps you to build your own perspective on what you need and what your target is, and perhaps saves you some money.
All the best