The VERC Collective has a nice FAQ running at their site. While we compose our own version from interviews, official e-mails, and press releases, the Collective's version should serve you well in the meantime. Quote:

How are the proper mouth movements accomplished when a model speaks?
To start with, you'll need to author keyshapes, or morph targets, for each of your characters. Our facial animation system uses 34 keyshapes, 14 of which are required for proper lip-sync animation.
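For readers unfamiliar with the term, a morph target is just a set of per-vertex offsets from the neutral face, and playback is a weighted sum of those offsets. The following is a minimal sketch of that blending step, assuming nothing about the Source SDK itself; the names Vec3, Keyshape, and BlendKeyshapes are hypothetical, chosen for illustration:

```cpp
// Minimal, hypothetical sketch of morph-target (keyshape) blending.
// This is NOT Source SDK code; all names here are illustrative.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// A keyshape stores one offset per vertex, relative to the neutral face.
struct Keyshape {
    std::vector<Vec3> deltas;
};

// Blend the neutral mesh with weighted keyshapes. Assumes
// weights.size() == shapes.size(); each weight is typically in [0, 1],
// e.g. driven by the currently active phoneme.
std::vector<Vec3> BlendKeyshapes(const std::vector<Vec3>& neutral,
                                 const std::vector<Keyshape>& shapes,
                                 const std::vector<float>& weights) {
    std::vector<Vec3> result = neutral;
    for (std::size_t s = 0; s < shapes.size(); ++s) {
        const float w = weights[s];
        if (w == 0.0f) continue;  // skip inactive shapes
        for (std::size_t v = 0; v < result.size(); ++v) {
            result[v].x += w * shapes[s].deltas[v].x;
            result[v].y += w * shapes[s].deltas[v].y;
            result[v].z += w * shapes[s].deltas[v].z;
        }
    }
    return result;
}
```

Because the blend is additive, several keyshapes can be active at once, which is how a phoneme shape can layer on top of, say, an emotional expression.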
To create the lip-sync animation, you'll use our new character-acting tool, FacePoser. Load your character model into FacePoser, load the audio file into the phoneme editor window, and type or paste the dialog from the audio file into the text entry window; the phoneme data will then be extracted automatically. The automatic extraction does a good job on clearly enunciated dialog of up to around 5 seconds in duration, though our animators often like to fine-tune the performances by hand afterward. More manual work may be necessary on longer audio files, or on files with poor fidelity, less clear performances, or long gaps between lines of dialog.
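The output of that extraction step is essentially a timed track of phoneme events, which can then drive keyshape weights at playback time. Here is a hypothetical sketch of that idea, again not FacePoser's actual code: PhonemeEvent and EvaluateVisemes are invented names, and the triangular envelope is just one simple way to ramp a mouth shape in and out over a phoneme's duration.

```cpp
// Hypothetical sketch of driving lip-sync from extracted phoneme data.
// The structures and the envelope shape are illustrative, not FacePoser's.
#include <map>
#include <string>
#include <vector>

// One phoneme event as an extractor might emit it: which sound,
// and when it starts and ends in the audio file (in seconds).
struct PhonemeEvent {
    std::string phoneme;
    float start;
    float end;
};

// Evaluate keyshape weights at time t: each active phoneme's shape
// ramps up to full strength at its midpoint and back down to zero.
std::map<std::string, float> EvaluateVisemes(
        const std::vector<PhonemeEvent>& track, float t) {
    std::map<std::string, float> weights;
    for (const PhonemeEvent& p : track) {
        if (t < p.start || t > p.end) continue;
        const float half = 0.5f * (p.end - p.start);
        if (half <= 0.0f) continue;  // ignore zero-length events
        const float mid = 0.5f * (p.start + p.end);
        const float dist = (t < mid) ? (mid - t) : (t - mid);
        weights[p.phoneme] = 1.0f - dist / half;  // triangular envelope
    }
    return weights;
}
```

In a real pipeline the phoneme would first be mapped to one of the lip-sync keyshapes (several phonemes often share a mouth shape), and the resulting weights would feed a blend like the one sketched above.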
Read the rest at The Collective.