Poll
Is this the right way to do it? Which method do you prefer?
- Full automation, OK if it does not look 100% natural: 2
- Create all the animation by hand, with pose morphs for A, I, U, O, L, etc.: 0
- Use some other (expensive) software than Cinema 4D: 0
- Let some other guys do it: 0
- Quit the job and work at McDonald's ;-): 0
Total Votes: 2
How can the MoGraph Sound Effector and the Delay Effector be used with an XPresso setup?
Posted: 06 August 2014 02:13 PM
Total Posts:  10
Joined  2014-02-04

Subject: LipSync, Automation, Soundfile, Sound Effector, Delay, Xpresso

We want to use a sound file to drive relatively simple lip movement of a character, using the Pose Morph tag.
With XPresso, we linked an output value from the MoGraph Sound Effector to the Pose Morph slider for the lip movement.

Simple, and working so far. The only problem is that the Sound Effector values are read out frame by frame, with nothing in between, which results in a relatively crude animation.
Using the Delay Effector might be the solution (we got it to work for an equalizer-style setup of boxes made with a Cloner object; see this tutorial, at minute 18: http://greyscalegorilla.com/blog/tutorials/animate-with-music-using-the-sound-effector-in-cinema-4d/ ),

but we did not get it to work with XPresso to control the Pose Morph slider.

Is that possible? Or is there some work-around to gain control over it?
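For illustration, the kind of smoothing we are after can be sketched in plain Python: each frame, the stored value moves only a fraction of the way toward the new Sound Effector sample, instead of jumping straight to it. The sample values and the strength factor here are made up for the example; this is only the arithmetic, not the actual XPresso or Delay Effector setup.

```python
def delay_blend(samples, strength=0.25):
    """Move a stored value a fixed fraction toward each new per-frame
    sample, softening sudden jumps between frames."""
    current = samples[0]
    smoothed = [current]
    for sample in samples[1:]:
        current += (sample - current) * strength
        smoothed.append(current)
    return smoothed

# Jumpy per-frame readings vs. the smoothed slider values:
raw = [0.0, 1.0, 0.1, 0.9, 0.2]
print(delay_blend(raw, strength=0.5))
```

A smaller strength gives a lazier, smoother mouth at the cost of lagging behind the audio.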

Any suggestions would be appreciated!

Thanks!

Posted: 06 August 2014 02:43 PM   [ # 1 ]
Administrator
Avatar
Total Posts:  12043
Joined  2011-03-04

Hi anyMOTION GRAPHICS,

Have a look at the file I have attached. I used a very fast beat, so you might need to tweak your setup accordingly.

A little more work would be to use the History node, lots of them, take the values from several different frames close to the current time ;o), and average those results.

What I have used in the past is to set up a plane (polygon/points) with as many points as there are frames, perhaps a few more…

Then I would use the Time > Frame node to store, for each frame, a value (e.g., P.Y) on the point of the plane whose index matches the frame in the timeline. With a little XPresso, you can then take the values from before and after (once they are set) and average a certain range. (The point-setting work is done beforehand, and that part of the XPresso is switched off afterwards.) In this way you can access each point and fine-tune your animation. It is similar to keyframes, yes, but you can manipulate the points, e.g., with deformers.
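The caching-and-averaging idea above can be sketched in plain Python; this is only the arithmetic, not the XPresso network, and the window radius is an assumption to be tuned per scene.

```python
def average_window(cached, frame, radius=2):
    """Average the cached per-frame values (e.g. the P.Y of each plane
    point) in a window around `frame`, clamping at both ends."""
    lo = max(0, frame - radius)
    hi = min(len(cached), frame + radius + 1)
    window = cached[lo:hi]
    return sum(window) / len(window)

# One cached value per frame, as if stored on the plane's points:
cached = [0.0, 1.0, 0.0, 1.0, 0.0]
print([round(average_window(cached, f, radius=1), 3) for f in range(len(cached))])
```

Because the whole timeline is cached first, the average can look at frames *after* the current one as well, which a live Delay Effector cannot do.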

All the best

Sassi

P.S.: One vote above is not real; I just wanted to check whether anyone had an opinion so far. Please ignore one vote (first position).

Please replace the audio file.

File Attachments
CV2_r15_drs_14_MGls_01.c4d.zip  (File Size: 36KB - Downloads: 256)
 Signature 

Dr. Sassi V. Sassmannshausen Ph.D.
Cinema 4D Mentor since 2004
Maxon Master Trainer, VES, DCS

Photography For C4D Artists: 200 Free Tutorials.
https://www.youtube.com/user/DrSassiLA/playlists

NEW: Cineversity [CV4]

Posted: 07 August 2014 05:37 AM   [ # 2 ]
Total Posts:  10
Joined  2014-02-04

Dear Dr. Sassi,

Thank you very much for your fast and superb response; it works very well and is *exactly* what we wanted to achieve!

Best regards,

Posted: 07 August 2014 12:01 PM   [ # 3 ]
Total Posts:  10
Joined  2014-02-04

Dear Dr. Sassi,

We went one step further in development and want to share this insight with you and the community:

Since it worked just perfectly to smooth the movement, as in your example file, we have now set up several Sound Effectors, each with a different frequency range for the different formants of the human voice. The idea is to create corresponding pose morphs for the vowels (on Wikipedia you will find a list of the average frequencies for each vowel; search for "formants").

Well, in the end, it worked better than expected! It looks nearly natural, since the movement of the lips is not the same all the time; moreover, the lips form the right shape for A, E, U, O, etc.
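To make the idea concrete, here is a small Python sketch of how per-band energies could be normalised into pose-morph weights. The frequency ranges are rough first-formant (F1) averages from the literature, not the values used in our scene, and real vowel detection would also need the second formant; treat it as a sketch of the normalisation step only.

```python
# Approximate first-formant (F1) ranges in Hz; rough textbook averages,
# not values taken from the actual Sound Effector setup.
VOWEL_BANDS = {
    "i": (200, 400),
    "u": (250, 450),
    "e": (400, 600),
    "o": (450, 700),
    "a": (700, 1000),
}

def pose_weights(band_energy):
    """Normalise each vowel band's energy into a 0..1 morph-slider weight."""
    total = sum(band_energy.values()) or 1.0
    return {vowel: band_energy.get(vowel, 0.0) / total for vowel in VOWEL_BANDS}

print(pose_weights({"a": 2.0, "i": 2.0}))
```

Normalising the weights keeps the morph sliders from all maxing out at once when the audio is loud across the whole spectrum.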

Not 100%, of course, but after some tweaking and experimenting, the result is really satisfying for a fully automated solution! Thanks again for your help; this was really, really useful to us (since we are better animators than coders). :-D

Best regards,

Posted: 07 August 2014 01:15 PM   [ # 4 ]
Administrator
Avatar
Total Posts:  12043
Joined  2011-03-04

Thanks a lot for the nice feedback, anyMOTION GRAPHICS, and for the nice insights into your findings. Yes, good filtering and preparation might be the key to getting relatively close.
My impression is that if the eyes are vivid and "speak" as well, the mouth is not observed so closely. (Especially after watching the show "The Glades", where 50% of the time the sync is heavily off, by up to a second or two.)

Perhaps you might introduce one or two (subtle) poses, adjusted only with sliders, to fine-tune the setting later on; just a thought. I am thinking here of the changes in our lips when expressing certainty/uncertainty, or other sub-layers of mood. ;o)

Thanks again, and my best wishes for the production!

Sassi
