8 of 8
8
CV-AR: Support
Posted: 21 March 2019 02:27 PM   [ # 106 ]  
Total Posts:  22
Joined  2018-08-20
Kent Barber - 20 March 2019 07:30 PM
polypoint - 17 March 2019 10:46 PM

I’m baking the face mesh to PLA for further FX treatment, which works well. As the proper UV mapping gets lost at this point, I want to bake the texture as well. But the result doesn’t match. Is there a way to get the right mapping of the texture sequence on a baked mesh?

You should try setting the option to use a specific frame for the texture and UVs. Then the UVs will not change during playback. You can also create these files directly from CV-AR with only a single image in them. Just change the option at the top of the App from All Frames to Single Image. Then when you first start recording, hold the camera close to your face in a neutral expression to get a good first frame for use as the texture. When you transfer this capture to C4D, it will have only one texture image and one set of UVs, and the UVs will not change during playback.

Is there a way to bake the image sequence? Using only one still image is not an option for what I want to do.

 
 
Posted: 21 March 2019 02:48 PM   [ # 107 ]  
Total Posts:  49
Joined  2009-02-17

There is no way to bake the images at present. I could write a plugin to do this, but that is out of scope at the moment. Effectively, what you want to do is keep the UVs fixed and rebake every image to map to this UV set.

Sorry, unfortunately there is no streamlined way to do this. You might be able to bake/remap each image using the BodyPaint/UV tools.

I have my own plugins/tools that I use to do this.
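The core of that per-image rebake can be sketched in plain NumPy. This is only an illustrative sketch, not my actual plugin code, and every name in it is hypothetical: for each pixel of the output texture, laid out on the fixed UV set, it finds the triangle covering that pixel and samples the source image at the matching position in that frame's dynamic UVs.

```python
import numpy as np

def rebake_frame(src_img, tris, uv_dynamic, uv_static, out_size=512):
    """Rebake one frame's texture from its dynamic UV layout onto a
    fixed UV layout, so a single UV set works for every frame.

    src_img    : (H, W, 3) array, the frame's original texture
    tris       : (T, 3) int array of vertex indices per triangle
    uv_dynamic : (V, 2) per-frame UVs the texture was authored against
    uv_static  : (V, 2) the fixed UV set to bake onto
    """
    H, W = src_img.shape[:2]
    out = np.zeros((out_size, out_size, 3), dtype=src_img.dtype)
    # UV-space centres of every output pixel
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    px = np.stack([(xs + 0.5) / out_size, (ys + 0.5) / out_size], axis=-1)
    for tri in tris:
        a, b, c = uv_static[tri]          # target triangle in static UVs
        # Barycentric coordinates of every output pixel w.r.t. (a, b, c)
        v0, v1 = b - a, c - a
        v2 = px - a
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        if abs(denom) < 1e-12:
            continue                      # degenerate triangle
        w1 = (d11 * d20 - d01 * d21) / denom
        w2 = (d00 * d21 - d01 * d20) / denom
        w0 = 1.0 - w1 - w2
        inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
        # Map each covered pixel to the *dynamic* UV layout and sample
        da, db, dc = uv_dynamic[tri]
        src_uv = w0[..., None] * da + w1[..., None] * db + w2[..., None] * dc
        sx = np.clip((src_uv[..., 0] * (W - 1)).astype(int), 0, W - 1)
        sy = np.clip((src_uv[..., 1] * (H - 1)).astype(int), 0, H - 1)
        out[inside] = src_img[sy[inside], sx[inside]]
    return out
```

Looping this over every frame of the sequence, with that frame's image and UVs but the same `uv_static`, would produce a texture sequence that plays back correctly on the baked mesh.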

 
 
Posted: 23 March 2019 01:08 PM   [ # 108 ]  
Total Posts:  22
Joined  2018-08-20
Kent Barber - 21 March 2019 02:48 PM

There is no way to bake the images at present. I could write a plugin to do this, but that is out of scope at the moment. Effectively, what you want to do is keep the UVs fixed and rebake every image to map to this UV set.

Sorry, unfortunately there is no streamlined way to do this. You might be able to bake/remap each image using the BodyPaint/UV tools.

I have my own plugins/tools that I use to do this.

Would you mind sharing a bit more about your baking process? It would really help me find a solution. It seems that the UVs change dynamically, effectively simulating a camera mapping from the position of the phone.

 
 
Posted: 27 March 2019 04:10 AM   [ # 109 ]  
Total Posts:  1
Joined  2019-03-20

I have downloaded the trial version of C4D, but the CV-AR plugin is not working. I contacted Maxon, and they requested that I contact the support team at Cineversity. When I contacted them, they informed me that the CV-AR plugin should work properly in the demo version and directed me here. Would you be able to help with that?

 
 
Posted: 08 April 2019 10:01 AM   [ # 110 ]  
Total Posts:  49
Joined  2009-02-17
polypoint - 23 March 2019 01:08 PM
Kent Barber - 21 March 2019 02:48 PM

There is no way to bake the images at present. I could write a plugin to do this, but that is out of scope at the moment. Effectively, what you want to do is keep the UVs fixed and rebake every image to map to this UV set.

Sorry, unfortunately there is no streamlined way to do this. You might be able to bake/remap each image using the BodyPaint/UV tools.

I have my own plugins/tools that I use to do this.

Would you mind sharing a bit more about your baking process? It would really help me find a solution. It seems that the UVs change dynamically, effectively simulating a camera mapping from the position of the phone.

I use my own software to remap textures.

https://vimeo.com/295097398

I haven’t automated the process yet, but it would be easy enough for me to do. Let me know if you think this would be useful, and I could add an option to convert all images. You could then use the image sequence on a material and the UVs would remain static throughout.

 
 
Posted: 26 May 2019 05:46 AM   [ # 111 ]  
Total Posts:  1
Joined  2017-11-05

Hi Kent!

Is it possible that the data in the plugin somehow has left and right reversed? See my screenshot: the Face Mesh shows that the left eye is closed, but in the blend shape data, Right Eye Blink is 81% while Left is just 14%.

Image Attachments
2019-05-26_16-36-33.png
 
 
Posted: 13 June 2019 02:43 PM   [ # 112 ]  
Total Posts:  49
Joined  2009-02-17
sh00rk - 26 May 2019 05:46 AM

Is it possible that the data in the plugin somehow has left and right reversed? See my screenshot: the Face Mesh shows that the left eye is closed, but in the blend shape data, Right Eye Blink is 81% while Left is just 14%.

Hi sh00rk,

Thanks for this. Yes, you are correct, these values do appear to be swapped. It may be all the eye values. I will look into this further to see exactly where the problem came from.
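Until a fix ships, a possible stop-gap is to mirror-swap the Left/Right channel values on the imported data. A minimal sketch, assuming ARKit-style channel names such as `eyeBlinkLeft` / `eyeBlinkRight` (the actual names stored by a capture may differ):

```python
def swap_left_right(shapes):
    """Return a copy of a {name: value} blend shape dict with every
    Left/Right channel pair exchanged. Channel names are assumed to
    end in "Left" or "Right" (ARKit-style)."""
    fixed = dict(shapes)
    for name, value in shapes.items():
        if name.endswith("Left"):
            partner = name[:-4] + "Right"
        elif name.endswith("Right"):
            partner = name[:-5] + "Left"
        else:
            continue  # channel without a mirrored counterpart
        if partner in shapes:
            fixed[name] = shapes[partner]
    return fixed

# Hypothetical frame matching the screenshot above
frame = {"eyeBlinkLeft": 0.14, "eyeBlinkRight": 0.81, "jawOpen": 0.30}
print(swap_left_right(frame))
# {'eyeBlinkLeft': 0.81, 'eyeBlinkRight': 0.14, 'jawOpen': 0.3}
```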

Thanks again for finding this one.

Best Regards,
Kent

 
 
Posted: 02 July 2019 07:05 PM   [ # 113 ]  
Total Posts:  22
Joined  2018-08-20

Using blend shapes from the CV-AR face mesh in an iPhone X app?

I am wondering if there is a workflow to extract the 52 blend shapes from the CV-AR face mesh so that they can be used in an iPhone X app. I understand that the plugin’s main purpose is to record and apply the pose morph data to other meshes inside C4D. However, if the blend shape mesh states are already stored somewhere, it would be useful to access them. Thanks for any hints!

 
 
Posted: 03 July 2019 08:53 PM   [ # 114 ]  
Total Posts:  49
Joined  2009-02-17
polypoint - 02 July 2019 07:05 PM

Using blend shapes from the CV-AR face mesh in an iPhone X app?

I am wondering if there is a workflow to extract the 52 blend shapes from the CV-AR face mesh so that they can be used in an iPhone X app. I understand that the plugin’s main purpose is to record and apply the pose morph data to other meshes inside C4D. However, if the blend shape mesh states are already stored somewhere, it would be useful to access them. Thanks for any hints!

Unfortunately, Apple doesn’t provide the individual blend shapes as meshes. They only provide the final vertex positions of the face for each frame. This is most likely due to the ML model they are using to generate the mesh itself. My guess would be that there are no actual target blend shape meshes internally at all, and the system just generates the mesh each frame, based on the user’s face and the blend shape values fed into the ML model.

With that being said, it may still be possible to reverse engineer the blend shapes from a recorded capture. If the actor moved their face through every extreme expression, it may be possible to analyse the vertex data along with the blend shape values to determine some base shapes. More research would have to be done to make this possible, however, and it is not currently planned.
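To make the idea concrete, that analysis can be posed as a linear least-squares problem: model each frame's vertices as a neutral mesh plus weighted per-shape deltas, then solve for the neutral and the deltas from the recorded weights and vertex positions. This is only an illustrative sketch under that assumption, not a planned feature, and all names are hypothetical:

```python
import numpy as np

def recover_blend_shapes(weights, verts):
    """Solve  verts[t] ~= neutral + sum_i weights[t, i] * deltas[i]
    for the neutral mesh and per-shape deltas.

    weights : (F, S) blend shape values per frame
    verts   : (F, V, 3) recorded vertex positions per frame
    returns : neutral (V, 3), deltas (S, V, 3)
    """
    F, S = weights.shape
    X = verts.reshape(F, -1)                    # flatten xyz per frame
    A = np.hstack([np.ones((F, 1)), weights])   # ones column -> neutral term
    sol, *_ = np.linalg.lstsq(A, X, rcond=None)
    neutral = sol[0].reshape(-1, 3)
    deltas = sol[1:].reshape(S, -1, 3)
    return neutral, deltas
```

This only works if the recorded frames actually span the shape space (at least S + 1 frames with well-conditioned weights), which is exactly why the actor would need to move through every extreme expression.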

 
 