Motion Tracking & Object Tracking inside Cinema 4D: Tracking the main shot Camera Tracking and Scene
Posted: 02 July 2018 01:24 AM
Total Posts:  2
Joined  2017-09-21

Hi,
I watched tutorial “Motion Tracking & Object Tracking inside Cinema 4D: Tracking the main shot Camera Tracking and Scene”.

Athanasios uses constraints to calibrate the tracking to real-world dimensions.
I’m learning tracking because I have to track some architecture (3 buildings) in a 4K drone video. When Athanasios uses the vector constraint, he claims the dimension is 40 cm, but he is probably guessing.

My question is: what if I want to use real measured dimensions? I have some .DWG plans of the site, so I can measure certain dimensions, but the automatic tracker doesn’t place tracking points where I could use them as dimension references (corners of the roofs). I tried to create some additional manual tracking points, but the vector constraint tag doesn’t see my manual points.

I get good (though not perfect) results when I just make some changes to the geometry (some rotations and a little scaling), but I have 3 video sequences with moving cars, people, etc., so it is important for me to keep the geometry still and match the tracking to it, not the opposite (matching the geometry to the tracking).

I also tried creating constraints on the existing automatic tracking points, then deleting the constraints and finishing the matching process manually by scaling and rotating the whole tracking null. Unfortunately, as soon as I delete the constraints, the whole tracking reverts to its native, badly out-of-scale state.

Any suggestions to make it perfect?

Posted: 02 July 2018 02:25 AM   [ # 1 ]
Moderator
Total Posts:  5850
Joined  2011-03-04

Hi Pictura,

The first thing that you need is a lens chart from the drone, to create a lens profile. Without that, there is no perfection at any time with a normal drone. Never use a third-party profile; it might be worse than using none at all. Needless to say, the lens chart needs to be shot at the resolution of the given capture; otherwise a sensor crop that results in a smaller field of view would scale the lens-distortion calculation and create an even more wrong result.

Make absolutely certain that you have zero rolling-shutter elements in the image. These are often discussed as a horizontal skew, but with drones this can easily be a vertical rolling shutter, which in most cases can’t be fixed at all.

As well, if the sensor is not fully used, the field of view is smaller, and this needs to be calculated. If shot on a 20MP sensor and UHD (3840, not 4K!) is used, then the entry should be around 34–35mm instead of 24mm. A 4K sensor crop in my example would be more like a 32mm equivalence. (Note that the equivalence is always misleading: it is a sensor crop, not a lens crop, but since we have no field-of-view entry, this might be the way we have to measure it. The numbers above are not general numbers, just for one of the drones I flew as an FAA 107 pilot.)
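The crop arithmetic above can be sanity-checked with a few lines of Python. This is only a sketch under stated assumptions: a 20MP sensor with a 5472-pixel-wide full readout (the published Phantom 4 Pro figure) and a centered pixel-for-pixel video crop. Whether a given drone and mode actually crops or instead downsamples the full sensor width varies, so check your own camera’s documentation.

```python
# Sanity check of the sensor-crop focal equivalence, assuming a
# 5472 px wide full readout (published Phantom 4 Pro spec) and a
# centered pixel-for-pixel video crop (an assumption; some modes
# downsample the full width instead, with no crop at all).
FULL_WIDTH_PX = 5472    # full 20MP sensor readout width in pixels
EQUIV_FULL_MM = 24.0    # 35mm-equivalent focal length at full readout

def cropped_equivalent(video_width_px):
    """35mm-equivalent focal length when only video_width_px of the
    sensor width is read out (smaller readout -> narrower FOV)."""
    crop_factor = FULL_WIDTH_PX / video_width_px
    return EQUIV_FULL_MM * crop_factor

print(round(cropped_equivalent(3840), 1))  # UHD crop: 34.2
print(round(cropped_equivalent(4096), 1))  # DCI 4K crop: 32.1
```

The two results (~34mm and ~32mm) match the values quoted above, which is exactly the kind of cross-check worth doing before typing a focal length into the tracker.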

Typically, when you have to do such work, you do a “set survey”, which means placing easy-to-track points. If none are available, use tennis balls or anything that is visible and measurable for the time being. For drones, it might be best to work with collapsible reflectors fixed to the ground, etc. These points are then tracked as manual points. They should at least allow you to reconstruct a rectangle, or better, a large box; include images and sketches as well. (As a side note, I include a color chart, too.)

A roof might be built based on the blueprint, but note that the tolerances there are not really trustworthy. A typical mistake to avoid: a gutter, for example, is by design not level.
Using a roof that is not 100% flat as a reference might introduce a huge source of error into the calculation.

The drone needs to do more than a pano shot (pure rotation) to gain enough parallax information. If a pano shot was done, then one frame needs to be camera-calibrated manually first; that becomes the position of the nodal point, and the tracking then allows only for rotational values, i.e., no position or even scale value can be taken from it.
I have discussed and solved this problem here:
https://www.cineversity.com/forums/viewthread/2472/

Moving cars and people that got a tracker need to have those trackers removed before solving.

If any dimension is given, it will set the scale of the whole scene, so it must be in agreement with all other measurements.

You can’t even drag the feature from the “User Feature Group” into the Vector Targets? I never trust things from a past iteration, so I tested it all with the current release just now and had no problems.

Do you think there is a software problem with the tracker? If so, check with support.

All the best

 Signature 

Dr. Sassi V. Sassmannshausen Ph.D.

Photography For C4D Artists: 200 Free Tutorials: Texture, Panorama, HDRI, Camera Projection, etc.
https://www.youtube.com/user/DrSassiLA/playlists

Posted: 02 July 2018 01:22 PM   [ # 2 ]
Total Posts:  2
Joined  2017-09-21

Hi,
thanks for fast reply.
Unfortunately, I can’t do the survey. I’m only a 3D graphic artist and have to edit footage from another firm.
I am using lens parameters from the manufacturer’s website (Phantom 4 Pro).
I didn’t try just moving my manual track points. I didn’t realize it could work that way.

Posted: 02 July 2018 09:22 PM   [ # 3 ]
Moderator
Total Posts:  5850
Joined  2011-03-04

Hi Pictura,

The website gives you the diagonal field of view for the full sensor readout, i.e., 20MP, but it is not really a value I would trust without doing the math. The 8.8mm lens is certainly accurate, but again only for the full 20MP.
The site of course has no information about the lens distortion, nor about whether the center of the sensor of a given individual camera sits on the optical axis of the lens. This is crucial to have. Since it is a fixed-lens drone, this can be measured at any time, as long as that very drone is available.
Using 24mm for any of the video results will create a wrong entry in the parameter fields. The tracker is very robust at filtering data, but any given data influences its decision about what is right and what is wrong.
Lenses are pretty much magnifying instruments. The term “equivalent to 35mm” is in that sense just nonsense, but yes, most people use it. The glass doesn’t change a bit just because the readout of the sensor is smaller. The Pro has variable f-stops, so for the compositing that should be evaluated manually rather than just using any given number.
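The website’s diagonal field of view can be cross-checked against the focal length with basic trigonometry. A minimal sketch, assuming the commonly published Phantom 4 Pro figures (1" sensor of roughly 13.2 × 8.8 mm, 8.8 mm fixed lens, quoted 84° diagonal FOV); treat the sensor dimensions as nominal spec-sheet values, not a calibration:

```python
import math

# Check that the published 84-degree diagonal FOV is consistent with
# the 8.8 mm lens, assuming nominal 1" sensor dimensions (13.2 x 8.8 mm,
# spec-sheet values, not a per-camera calibration).
SENSOR_W_MM, SENSOR_H_MM = 13.2, 8.8
FOCAL_MM = 8.8

diag_mm = math.hypot(SENSOR_W_MM, SENSOR_H_MM)           # sensor diagonal
fov_deg = math.degrees(2 * math.atan(diag_mm / (2 * FOCAL_MM)))

print(round(fov_deg, 1))  # ~84.1, matching the quoted 84-degree FOV
```

The numbers agree for the full readout; the point of the discussion above is that this consistency breaks down as soon as a video mode reads out only part of the sensor.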

If you had a pilot with experience in this kind of work, s/he should have captured a lens profile for all formats at the very least. Typically a flight report is recorded as well, which shows the flight path of the drone; this can also help verify the results of your tracking. I assume this is a commercial project, so the drone pilot must have at least an FAA 107 license, which means s/he has to keep the reports on file. If not FAA 107, the footage can cause trouble if used commercially.

The main part is to get tracker features for your 3D work that allow you to orient the space, so the XYZ relation works. The size is crucial, of course, but that is just a question of scale.

In any case, the farther away tracker points are, the less precise they are; typically one can see that in their color already. If in doubt, you can always take a single frame out, where the drone was slow, and use the Camera Calibrator to gain more information about your scene.

Let me know how it goes with the manual tracker.

Enjoy your week

