Hi Arno,
Sounds like a simple question, one that would just require some pointers to parameters and perhaps some numbers to paste in.
Well, it’s not. Sorry.
The short answer: get the .R3D files and work with your camera tracker!
The more detailed answer:
I'll try to explain the problems, and since I haven't had the luck to use a Cooke 25 so far, I can't tell how important each of the following steps will be. First of all, some details.
Cooke has a special design, the "Cooke Triplet", and I'm not certain whether it is used in the 25mm you mentioned; it makes it even harder to find any equivalent in a normal lens kit.
The RED Epic, with the current sensor (Dragon will be larger), does not use a single format for all results/resolutions!
http://www.red.com/store/products/epic-m-2
The question might be whether the production is really shooting 5K all day, as that is quite a lot of data. Some ad clips need high speed and HDR-X, which might not be possible in 5K.
MAX IMAGE AREA: 5120 (h) x 2700 (v) which results in a needed LENS COVERAGE: 27.7mm (h) x 14.6mm (v) x 31.4 mm (d)
The resulting field of view that you state is 67.7º.
http://www.cookeoptics.com/cooke.nsf/products/panchro_specs.html
But that is not really important as I will explain below.
Please have a look here:
http://www.reduser.net/forum/showthread.php?77349-Epic-M-and-X-Data-Sheet
Please note that the crop factor here is relative to full-frame lenses, at least as I read it. The Cooke is based on a film frame, but once you have adjusted for the 5K format, any following crop will be proportional. You might ask the DOP on set. (In the end, the camera tracking is key, if the parallax is sufficient.)
If they shoot in HDR-X, the available frame rate is practically halved, as the camera records an A and a B stream onto the SSD.
The most important information is the crop factor. Depending on the used sensor area, there is a wide range of crop factors.
Depending on the chosen sensor format, the lens will accordingly provide a different field of view, and a different depth of field as well, BTW.
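To give you a feeling for the numbers: the field of view follows directly from the used sensor width and the focal length. Here is a small Python sketch; the 27.7 mm / 5120 px values come from the spec above, but the assumption that the smaller formats are window crops at the same pixel pitch is mine, so treat the results as illustrative:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_mm):
    """Horizontal field of view of a rectilinear lens (simple pinhole model)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

FULL_WIDTH_MM = 27.7   # 5K Epic max image area width, from the spec above
FULL_WIDTH_PX = 5120

focal = 25.0  # the Cooke 25mm in question

for px in (5120, 4096, 3072, 2048):
    # Assumption: lower formats are window crops at the same pixel pitch.
    width_mm = FULL_WIDTH_MM * px / FULL_WIDTH_PX
    crop = FULL_WIDTH_MM / width_mm  # crop factor relative to full 5K width
    print(f"{px}px wide: {width_mm:.1f} mm, crop x{crop:.2f}, "
          f"HFOV {horizontal_fov_deg(width_mm, focal):.1f} deg")
```

You can see right away why the missing format information matters: the same 25mm lens gives a noticeably narrower view for each smaller recording format.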
So, can we go anywhere from here if that information is not given at all?
As you mentioned, the lens grid is an important factor. I always consider two different lens characteristics as lens distortions: the optical one and the perspective one. Both need to be known and handled when working with CG footage.
The normal way is to have a short lens grid available for EACH camera setup. This should go hand in hand with the color or gray cards.
There are two major ways to handle the lens "results". First: the original footage is adjusted, and if it is over-scanned (shoot 5K for a final 4K), there is enough room to correct the source footage. Pro: the CG footage can be used as is. The downside is of course that you need to shoot larger, and the quality might suffer, as each change of a pixel's position (except a move to exactly another pixel position) lowers the quality. Which leads normally to solution number two: the CG has to match the practical footage. CG also has the option to render larger if needed, even later on. So this is the most used version.
Normally you write a little XPresso setup to comfortably pad pixels around the frame, without compromising the original resolution and field-of-view combination. A standard tool for the C4D DOP :o)
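The math behind such a padding setup is simple: tan(FOV/2) scales linearly with the film-back width, so to pad pixels without changing the pixel scale you widen tan(FOV/2) by the pixel ratio, not the angle itself. A little Python sketch of that idea (all numbers hypothetical):

```python
import math

def padded_fov_deg(base_width_px, base_hfov_deg, padded_width_px):
    """Horizontal FOV needed so a padded render keeps the original pixel scale.

    tan(fov/2) is proportional to the film-back width, so widening the render
    window scales tan(fov/2) by the pixel ratio -- NOT the angle itself.
    """
    t = math.tan(math.radians(base_hfov_deg) / 2)
    t_padded = t * padded_width_px / base_width_px
    return math.degrees(2 * math.atan(t_padded))

# Pad a 1920px-wide render by 64px on each side (hypothetical numbers):
print(padded_fov_deg(1920, 58.0, 1920 + 2 * 64))
```

If you instead simply scaled the angle by the pixel ratio, the original pixels would no longer line up with the un-padded render, which is exactly the mistake such an XPresso setup protects you from.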
IF you have that lens grid, you can use it in most camera-tracking packages (e.g. http://www.ssontech.com/learning.htm; have a look at the lens distortion part there). If you don't adjust for the lens distortion, it can happen, depending on the amount of distortion, that the CG parts, even if camera-tracked precisely, will "swim" inside the practical footage. Note that some apps need different grids!
Since R13 we have some minor options to render the CG footage directly with lens distortion, which allows rendering just the needed format, with no need for padding areas around it. Without direct lens distortion, some distortions will require more information outside of the given resolution. (Which is, BTW, not a bad idea to have anyway, as you might need it for any kind of blur (DOF) work.) The parameters in R13 are not always the same as the ones you get from a tracking package, like the three values that NUKE, PTGui, etc. use. I would love to have these industry standards available. However, it might be a good idea to analyze the footage in NUKE and get a distortion pass for each setup.
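To illustrate what such distortion values actually do, here is a sketch of a simple one-parameter radial model, similar in spirit to what the tracking packages use. The exact models in NUKE, PTGui, or Syntheyes differ (and use more coefficients), and the coefficient here is made up:

```python
def distort(x, y, k1):
    """Apply a one-parameter radial distortion to normalized image
    coordinates with (0, 0) at the image center. Negative k1 pulls points
    toward the center (barrel), positive pushes them out (pincushion)."""
    r2 = x * x + y * y
    s = 1 + k1 * r2
    return x * s, y * s

def undistort(x, y, k1, iterations=10):
    """Invert the model numerically by fixed-point iteration."""
    xu, yu = x, y
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        xu, yu = x / (1 + k1 * r2), y / (1 + k1 * r2)
    return xu, yu

# A point near the frame corner moves noticeably, the center not at all:
print(distort(0.5, 0.4, -0.1))   # pulled toward center (barrel)
print(distort(0.0, 0.0, -0.1))   # center stays put
```

This is also why the CG can "swim" without a grid: the error grows with the distance from the center, so tracked points match in the middle of the frame but drift near the edges.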
Syntheyes or PFTrack (other packages are available as well) will give you the camera data, and any concern about the lens might vanish; but that set data is important for the camera tracking in the first place.
Use the Reduser data sheet I linked above to determine what you have after the shoot. It might be a mix, so check at least the metadata of the RED files. If they have a lot of locked-off shots (tripod), make certain that you get some set data: measurements, and where the lens was sitting relative to the tripod axis. Measure parts that are vertical and often in frame, as well as parts rectangular to them. Having 3D data is even better. (Take your chrome ball and gray ball with you, and if possible hold them briefly where the main CG elements will be. Take 360º/180º images if possible; we talked about that some years ago.)
I would ask them in what resolution they finally composite the clip, as this might in some cases cut your render times in half or even more. Keep in mind that the Epic has a Bayer pattern, which means any rendering at the same resolution might appear sharper. But I normally leave that discussion (resolution/sharpness) out, as the psychological part of it, especially among lenses, is kind of awkward.
I hope I have made the problem clear, given enough information and links, and given you more certainty to work with Cinema 4D on that shot.
I wish you a great time for the shooting and the post production.
Take care
Sassi