
A generic framework for editing and synthesizing multimodal data with relative emotion strength

Computer Animation and Virtual Worlds

J.C.P. Chan, H.P.H. Shum, H. Wang, L. Yi, W. Wei, and E.S.L. Ho


Emotion is considered a core element in performances. In computer animation, body motions and facial expressions are two popular mediums through which a character expresses emotion. However, there has been limited research on how to effectively synthesize these two types of character movement with intuitive control over different levels of emotion strength, which is difficult to model effectively. In this work, we explore a common model that can represent emotion for both body motion and facial expression synthesis. Unlike previous work that encodes emotions as discrete motion style descriptors, we propose a continuous control indicator called emotion strength, and present a data-driven approach that uses it to synthesize motions with fine control over emotion. Rather than interpolating motion features to synthesize new motion as in existing work, our method explicitly learns a model mapping low-level motion features to emotion strength. Because the motion synthesis model is learned in the training stage, the computation time required to synthesize motions at run time is very low. We further demonstrate the generality of the proposed framework by editing 2D face images using relative emotion strength. As a result, our method can be applied to interactive applications such as computer games, image editing tools, and virtual reality applications, as well as offline applications such as animation and movie production.
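
Given the "relative attribute" keyword, the continuous emotion-strength indicator can be read as a learned scalar score over low-level motion features. The sketch below is illustrative only and is not the authors' implementation: it assumes pairwise ordered training examples (clip i expresses the emotion more strongly than clip j) and fits a linear scoring function with a pairwise logistic loss; the data, feature dimensions, and hyperparameters are all invented for the example.

# Minimal sketch (not the paper's implementation): learn a scalar
# "emotion strength" score from low-level motion features via
# pairwise ranking, in the spirit of relative-attribute learning.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a low-level motion feature
# vector (e.g. joint velocities, posture descriptors) for one clip.
n_clips, n_features = 200, 32
X = rng.normal(size=(n_clips, n_features))

# Hypothetical ground-truth direction, used only to simulate ordered
# pairs: pair (i, j) means clip i shows a stronger emotion than clip j.
w_true = rng.normal(size=n_features)
pairs = []
for _ in range(500):
    i, j = rng.integers(0, n_clips, size=2)
    pairs.append((i, j) if X[i] @ w_true > X[j] @ w_true else (j, i))
pairs = np.array(pairs)

# Fit w so that score(X[i]) > score(X[j]) for every ordered pair,
# using a pairwise logistic loss with L2 regularisation.
w = np.zeros(n_features)
lr, reg = 0.1, 1e-3
for epoch in range(200):
    diff = X[pairs[:, 0]] - X[pairs[:, 1]]          # feature differences
    margins = diff @ w                               # current score gaps
    grad = -(diff * (1.0 / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * (grad + reg * w)

# The learned score acts as a continuous emotion-strength indicator:
# sliding a target value along this axis gives finer control than
# discrete style labels when selecting or editing motions.
scores = X @ w
print("score range:", scores.min(), "to", scores.max())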

Papers

J. Chan, H. Shum, H. Wang, L. Yi, W. Wei, and E. Ho, A generic framework for editing and synthesizing multimodal data with relative emotion strength, Computer Animation and Virtual Worlds, vol. 30, iss. 6, 2019.
@article{wrro144010,
volume = {30},
number = {6},
month = {November},
author = {JCP Chan and HPH Shum and H Wang and L Yi and W Wei and ESL Ho},
note = {{\copyright} 2019 John Wiley \& Sons, Ltd. This is the peer reviewed version of the following article: Chan, JCP, Shum, HPH, Wang, H et al. (3 more authors) (2019) A generic framework for editing and synthesizing multimodal data with relative emotion strength. Computer Animation and Virtual Worlds. e1871. ISSN 1546-4261, which has been published in final form at https://doi.org/10.1002/cav.1871. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.},
title = {A generic framework for editing and synthesizing multimodal data with relative emotion strength},
publisher = {Wiley},
year = {2019},
journal = {Computer Animation and Virtual Worlds},
keywords = {data-driven; emotion motion; facial expression; image editing; motion capture; motion synthesis; relative attribute},
url = {https://eprints.whiterose.ac.uk/144010/},
abstract = {Emotion is considered to be a core element in performances. In computer animation, both body motions and facial expressions are two popular mediums for a character to express the emotion. However, there has been limited research in studying how to effectively synthesize these two types of character movements using different levels of emotion strength with intuitive control, which is difficult to be modeled effectively. In this work, we explore a common model that can be used to represent the emotion for the applications of body motions and facial expressions synthesis. Unlike previous work that encode emotions into discrete motion style descriptors, we propose a continuous control indicator called emotion strength by controlling which a data-driven approach is presented to synthesize motions with fine control over emotions. Rather than interpolating motion features to synthesize new motion as in existing work, our method explicitly learns a model mapping low-level motion features to the emotion strength. Because the motion synthesis model is learned in the training stage, the computation time required for synthesizing motions at run time is very low. We further demonstrate the generality of our proposed framework by editing 2D face images using relative emotion strength. As a result, our method can be applied to interactive applications such as computer games, image editing tools, and virtual reality applications, as well as offline applications such as animation and movie production.}
}

Authors from VCG

He Wang