
Poster


- Mancini, Maurizio (LINC - Université Paris 8, Paris)
- Hartmann, Björn (Stanford University, Palo Alto)
- Pelachaud, Catherine (LINC - Université Paris 8, Paris)

Gesture Expressivity in Embodied Conversational Agent

Embodied Conversational Agents (ECAs) are a powerful user interface paradigm, aiming at transferring the inherent richness of human-human interaction to human-computer interaction. ECAs are virtual embodied representations of humans that communicate with the user (or other agents) through different communicative channels (called modalities): voice, facial expression, gaze, gesture, and body movement. The effectiveness of an agent depends on her ability to suspend the user's disbelief during an interaction. To increase the believability and life-likeness of an agent, we seek to move away from a generic acting agent model and, instead, to simulate individualized agents that portray idiosyncratic behaviors. Human individuals differ not only in their reasoning, their sets of beliefs, goals, and emotive states, but also in the way they express such information through the execution of specific behaviors. Based on what we shall call "behavioral influences", the same kind of information may be conveyed by the agent through one or more modalities, on which different kinds of signals are transmitted with the appropriate degree of intensity. We will use the word "expressivity" to refer to all of these kinds of behavioral differences. In this paper, we present our model of capturing expressivity in human gesturing and propose a set of parameters to characterize this individual variability in a conversational agent generation system. We then suggest a mapping of the identified dimensions of expressivity onto particular sets of animation parameters for gesture animation. We demonstrate synthesized behaviors with different expressivity settings in our existing ECA system and present an outlook on how to integrate our work with higher-level agent functions such as simulations of personality or emotion. We will also present some evaluation studies we conducted to validate our model and show how these results will direct our work on expressivity in the near future.
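
To make the idea of "expressivity parameters mapped onto animation parameters" concrete, here is a minimal Python sketch of how such a scheme could look. The abstract does not enumerate the dimensions, so the names used here (overall activation, spatial extent, temporal extent, fluidity, power, repetition), the normalization, and every numeric range are illustrative assumptions, not the parameters defined in the paper itself.

```python
from dataclasses import dataclass

@dataclass
class Expressivity:
    """Per-agent expressivity settings (hypothetical), normalized to [-1, 1].
    0.0 corresponds to a neutral, generic gesturing style."""
    overall_activation: float = 0.0  # how much the agent gestures at all
    spatial_extent: float = 0.0      # amplitude of arm/wrist trajectories
    temporal_extent: float = 0.0     # speed of gesture strokes
    fluidity: float = 0.0            # smoothness between consecutive gestures
    power: float = 0.0               # acceleration/tension of the movement
    repetition: float = 0.0          # tendency to repeat the stroke phase

def to_animation_params(e: Expressivity) -> dict:
    """Map abstract expressivity dimensions onto low-level animation
    parameters. All scaling factors below are invented for illustration."""
    return {
        # wider reach for larger spatial extent
        "reach_scale": 1.0 + 0.5 * e.spatial_extent,
        # faster strokes (shorter duration) for larger temporal extent
        "stroke_duration_s": 0.6 * (1.0 - 0.4 * e.temporal_extent),
        # longer blend window between gestures as fluidity increases
        "coarticulation_window_s": 0.2 * (1.0 + e.fluidity),
        # peak acceleration scales with power
        "accel_scale": 1.0 + 0.8 * e.power,
        # probability of repeating the stroke phase
        "stroke_repeat_prob": max(0.0, 0.5 * e.repetition),
    }

if __name__ == "__main__":
    # An "energetic" individualized profile versus the neutral default.
    energetic = Expressivity(overall_activation=0.8, spatial_extent=0.7,
                             temporal_extent=0.6, power=0.5)
    print(to_animation_params(energetic))
    print(to_animation_params(Expressivity()))
```

The point of the sketch is the architecture, not the numbers: a small, agent-level vector of behavioral tendencies is kept separate from the gesture generator and translated into per-gesture animation parameters, so the same communicative intent can be rendered with different idiosyncratic styles.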