The first thing you need is a lens chart from the drone, in order to create a lens profile. Without that, a normal drone will never give you perfect results. Never use a third-party profile; it can be worse than using none at all. Needless to say, the lens chart must be shot at the same resolution as the actual capture, otherwise a sensor crop that resulted in a smaller field of view would scale the lens distortion calculation and produce an even more incorrect result.
Make absolutely certain that you have zero rolling shutter artifacts in the image. These are often discussed as a horizontal skew, but with drones this can easily be a vertical rolling shutter, which in most cases can't be fixed at all.
Likewise, if the sensor is not fully used, the field of view is smaller, and this needs to be accounted for. If shot on a 20 MP sensor with UHD output (3840, not 4K!), then the entry should be around 34-35mm instead of 24mm. A 4K sensor crop would, in my example, be closer to a 32mm equivalence. (Note that the equivalence is always misleading; it is a sensor crop, not a lens crop. But since we have no field-of-view entry, this may be the way we have to express it.) The numbers above are not general numbers, just values for one of the drones I flew as an FAA Part 107 pilot.
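As a rough sketch of that crop arithmetic (the 5472 px sensor width below is my assumption for a common 20 MP 1" sensor, chosen because it reproduces the numbers above; check your own drone's specs):

```python
# Sketch of the sensor-crop focal equivalence arithmetic.
# SENSOR_WIDTH_PX is an assumption (a typical 20 MP 1" sensor),
# not a value taken from any specific drone manual.
SENSOR_WIDTH_PX = 5472   # full sensor width in pixels (assumed)
NATIVE_FOCAL_MM = 24.0   # the lens's nominal focal length

def cropped_equivalence(capture_width_px: float) -> float:
    """Effective focal-length entry when only part of the sensor is read out."""
    crop_factor = SENSOR_WIDTH_PX / capture_width_px
    return NATIVE_FOCAL_MM * crop_factor

print(round(cropped_equivalence(3840), 1))  # UHD crop -> 34.2
print(round(cropped_equivalence(4096), 1))  # 4K (DCI) crop -> 32.1
```

The point is simply that the narrower readout raises the effective focal length by the ratio of full sensor width to capture width; the lens itself has not changed.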
Typically, when you have to do such work, you get to do a "set survey", which means easy-to-track points. If none are available, use tennis balls or anything that is visible and measurable for the time being. For drones, it may be best to work with collapsible reflectors fixed to the ground, etc. These points are then tracked as manual points. They should at least allow you to reconstruct a rectangle, or better yet any large box, and you should also include reference images and sketches. (As a side note, I include a color chart as well.)
A roof might be built based on the blueprint, but note that the tolerances there are not really trustworthy. A typical mistake to avoid: a gutter, for example, is by design not level. Using the roof, which is not 100% flat, as a reference can introduce a huge source of error into the calculation.
The drone needs to do more than a pano shot (pure rotation) to gain enough parallax information. If a pano shot was done, then one frame needs to be camera-calibrated manually first, and that will be the position of the nodal point; the tracking then allows only rotational values, i.e., no position or even scale value can be taken from it.
I have discussed and solved this problem here:
Trackers placed on moving cars and people need to be removed before solving.
If any dimension is given, it will define the scale of the whole scene, so it must be in agreement with all other measurements.
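To illustrate with hypothetical numbers (none of these values come from a real solve): once one real-world dimension is known, every other solved distance is fixed by the same scale factor, so a second measurement that disagrees exposes an error somewhere.

```python
# Hypothetical illustration: one known dimension fixes the scene scale.
solved_units = 2.5   # distance between two solved trackers (arbitrary solver units)
measured_m = 10.0    # the same distance measured on set, in meters

scale = measured_m / solved_units  # meters per solver unit

# Any other solved distance must now agree with its on-set measurement:
other_solved_units = 1.25
predicted_m = other_solved_units * scale
print(scale, predicted_m)  # 4.0 5.0
```

If the tape measure says anything other than the predicted value, either the solve or one of the measurements is wrong, and that has to be resolved before trusting the scene.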
You can't even drag the feature from the "User Feature Group" into the Vector Targets? I never trust things from the past, or from a past iteration, so I just tested it all with the current release and had no problems.
Do you think it is a software problem with the tracker? If so, check with support.
All the best