Just getting to grips with CV-AR. The tutorial on Cineversity was fantastic, but I do have a query…
This primarily concerns using the ‘facial recording data’ to drive the animation of a separate 3D mesh.
I’ve watched this tutorial.
The tutorial makes perfect sense. It essentially draws a DIRECT link between the ‘jaw open’ and ‘eyebrows up’ parameters in the CV-AR object and matches each one to an equivalent pose morph on the robot head.
I am attempting to use the facial recording to drive a more complex animation. What I have is 12 ‘blend shapes’ of my 3D character's face, which are generally based on phonetic pronunciation poses (so poses for ‘A, I’… ‘C, D, G’… ‘M, B, P’… etc.). Essentially I have multiple blend shapes of my 3D character that look like this…
So my question is… How can I use a facial recording from CV-AR to drive an accurate transition between my various blend shapes for my 3D character?
I completely understand that in the tutorial there is a DIRECT link between the ‘jaw open’ parameter and the blend shape he created for his robot character with its mouth open. The problem with what I want to achieve is that it will clearly take a mixture of MANY CV-AR parameters, at many varying percentages, to achieve just ONE of my ‘phonetic pronunciation blend shapes’ for my 3D character (let alone transition from one state to the other). I’m at a loss as to how I would achieve it.
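To illustrate the kind of many-to-one mixing I mean, here's a rough Python sketch (Cinema 4D's scripting language). The parameter names, weights, and the `phoneme_strength` helper are all hypothetical, just to show the idea of blending several recorded parameters into one phoneme shape's strength:

```python
# Hypothetical weights: how much each recorded facial parameter
# might contribute to a single 'M, B, P' (lips pressed) phoneme shape.
# These names and numbers are made up for illustration only.
MBP_WEIGHTS = {
    "mouthClose": 0.6,
    "mouthPressLeft": 0.2,
    "mouthPressRight": 0.2,
}

def phoneme_strength(frame_values, weights):
    """Weighted sum of recorded parameter values, clamped to 0..1.

    frame_values: dict of parameter name -> value for one frame.
    weights: dict of parameter name -> contribution weight.
    """
    total = sum(weights[name] * frame_values.get(name, 0.0)
                for name in weights)
    return max(0.0, min(1.0, total))

# One frame of (made-up) recording data:
frame = {"mouthClose": 1.0, "mouthPressLeft": 0.5, "mouthPressRight": 0.5}
print(phoneme_strength(frame, MBP_WEIGHTS))  # 0.8
```

Something like this, evaluated per frame and per phoneme shape, is the sort of mapping I imagine would be needed, but I don't know how to set it up inside C4D with the CV-AR data.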
Ultimately, what I really want is the ability to import a CV-AR facial recording and have it accurately drive transitions between the 12 blend shapes I have for my 3D character's face.