The Perception Guide to FUI: Setting Up a Wrist-Mounted Interface, Part 2


Instructor: Perception

  • Duration: 04:42
  • Views: 2037
  • Made with Release: 18
  • Works with Release: 18 and greater

Using Proxy Geometry in Cinema 4D's Object Tracker to inform the track

Justin Molush (Senior Designer at Perception) offers further insight into tracking concerns and how to choose appropriate proxy geometry when tracking objects in Cinema 4D.


Transcript

Before we go on to the next portion of this tutorial, I want to take some time to explain some possible alternatives for solving this camera with more primitive geometry, and other techniques for using general proxy geometry to get the transform of the wrist. In this case we're tracking the hand, which is a very organic, unconventional shape compared to the more primitive shapes you could use this tracker for. So we're going to go through a basic outline of how to use a pre-existing model from the Content Browser, as well as how to quickly rough out some proxy geometry to sit on the receiving end of the Object Tracker.

When you check the Content Browser, you'll notice there are quite a few models available to choose from. Choosing a model with an extended hand like this is usually the best option: its default pose is already relatively close to ours, at least localized around the wrist. So we're going to use that as the receiving geometry for the information the Object Tracker is going to try to extrapolate.

One thing to be very aware of with models like this is the high degree of triangulation and faceting on the geometry, which will affect the overall accuracy of the objects in the scene. If adjacent triangular faces have a large angle of separation between them, the transform applied to the model will be influenced by that separation, and the same goes for faces that aren't adjacent but sit close together and are supposed to receive similar transforms. For example, if you're tracking two points directly next to each other on the wrist and there's a large angle between the faces beneath them, the Object Tracker can have a difficult time solving an accurate spatial transform, because it expects those points at a certain distance from each other and the surface doesn't quite match up. So you may get some inaccuracy when you're using a relatively low-poly, more heavily triangulated model like we are in this case, where the poly flow doesn't match up as nicely as the model we were using in the beginning (a quick way to check for this faceting is sketched below).

Alternatively, you can model some relatively rough proxy geometry yourself to sit on the receiving end of the Object Tracker. In this case, I simply created a plane and started to gently warp it to match the surface contour you would expect from that localized area on the wrist, while double-checking the object's general orientation and shape from a variety of perspectives. If you only view it from the camera's perspective, some points might look spatially correct from that individual angle; but when you look around the plane, like I did here, you'll start to notice some pretty glaring problems with where those points were originally located. So keep toggling back and forth, switch on the wireframe, and build something that begins to contour the overall surface area of the wrist.
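If you want to quantify how faceted a candidate proxy mesh is before committing to it, here is a minimal sketch for Cinema 4D's Script Manager (Python). It counts edges whose adjacent face normals diverge beyond a threshold; the 45-degree limit is an arbitrary assumption you should tune for your own mesh, and the script is an illustration rather than anything shown in the video.

```python
import math
import c4d

ANGLE_LIMIT_DEG = 45.0  # assumed threshold; sharper creases than this get flagged

def edges_of(poly):
    # Edge point pairs of a CPolygon; triangles store c == d
    if poly.c == poly.d:
        return [(poly.a, poly.b), (poly.b, poly.c), (poly.c, poly.a)]
    return [(poly.a, poly.b), (poly.b, poly.c),
            (poly.c, poly.d), (poly.d, poly.a)]

def main():
    obj = doc.GetActiveObject()
    if not isinstance(obj, c4d.PolygonObject):
        raise RuntimeError("Select a polygon object first.")

    pts = obj.GetAllPoints()
    polys = obj.GetAllPolygons()

    # Face normal from the first three vertices of each polygon
    normals = [((pts[p.b] - pts[p.a]).Cross(pts[p.c] - pts[p.a])).GetNormalized()
               for p in polys]

    nbr = c4d.utils.Neighbor()
    nbr.Init(obj)

    flagged = 0
    for i, poly in enumerate(polys):
        for a, b in edges_of(poly):
            j = nbr.GetNeighbor(a, b, i)
            if j == c4d.NOTOK or j <= i:  # open edge, or pair already counted
                continue
            angle = math.degrees(c4d.utils.VectorAngle(normals[i], normals[j]))
            if angle > ANGLE_LIMIT_DEG:
                flagged += 1

    print("Edges sharper than {:.0f} deg: {}".format(ANGLE_LIMIT_DEG, flagged))

if __name__ == '__main__':
    main()
```

A mesh that reports many sharp edges in the wrist region is likely to give the Object Tracker the angle-separation trouble described above; a smoother model, or your own hand-built proxy, would be the safer choice.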
You're going to have to use multiple perspectives to get that right because, again, the proxy is not meant to be perfectly accurate; it's meant to approximate the surface. When the Object Tracker takes the 2D tracking data from the Camera Tracker, projects it onto that surface, and calculates between frames how each point on the surface is deforming and transforming, a better surface gives it much better accuracy. So the more accurate the proxy geometry, and the more accurate the tracks on the wrist itself, the more accurate the transform is going to be at the end of this process. At the end here, you can see me subdivide the surface to make it a little smoother and give the Object Tracker a much higher degree of definition to project onto when it figures out how the surface is transforming in space.
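That final subdivide step can also be scripted. The sketch below, again for Cinema 4D's Script Manager, applies a smooth (hyper) subdivision to the selected editable proxy mesh via the modeling command API; the two iterations are an assumed value, not something prescribed by the tutorial.

```python
import c4d
from c4d import utils

def main():
    obj = doc.GetActiveObject()
    if not isinstance(obj, c4d.PolygonObject):
        raise RuntimeError("Select the editable proxy surface first.")

    # Smooth subdivision settings; 2 iterations is an assumed starting point
    bc = c4d.BaseContainer()
    bc[c4d.MDATA_SUBDIVIDE_HYPER] = True
    bc[c4d.MDATA_SUBDIVIDE_SUB] = 2

    res = utils.SendModelingCommand(command=c4d.MCOMMAND_SUBDIVIDE,
                                    list=[obj],
                                    mode=c4d.MODELINGCOMMANDMODE_ALL,
                                    bc=bc,
                                    doc=doc)
    if not res:
        raise RuntimeError("Subdivide command failed.")

    obj.Message(c4d.MSG_UPDATE)  # notify the object its geometry changed
    c4d.EventAdd()               # refresh the viewport

if __name__ == '__main__':
    main()
```

Whether you subdivide by hand or by script, the goal is the same: a denser, smoother surface for the Object Tracker to project its 2D tracks onto.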