Effects of gender mapping on the perception of emotion from upper body movement in virtual characters

Maurizio Mancini (1), Andrei Ermilov (2), Ginevra Castellano (3), Fotis Liarokapis (4), Giovanna Varni (1), and Christopher Peters (5)

1 InfoMus Lab, University of Genoa, Italy
  maurizio.mancini@unige.it, giovanna.varni@unige.it
2 Faculty of Engineering and Computing, University of Coventry, UK
  ermilova@uni.coventry.ac.uk
3 School of Electronic, Electrical and Computer Engineering, University of Birmingham, UK
  g.castellano@bham.ac.uk
4 Interactive Worlds Applied Research Group & Serious Games Institute, Coventry University, UK
  aa3235@coventry.ac.uk
5 School of Computer Science and Communication, Royal Institute of Technology (KTH), Sweden
  chpeters@kth.se

Abstract. Despite recent advancements in our understanding of the human perception of the emotional behaviour of embodied artificial entities in virtual reality environments, little remains known about the specific effects of gender mapping on the perception of emotion from body movement. In this paper, a pilot experiment is presented investigating the effects of gender congruency on the perception of emotion from upper body movements. Male and female actors were enrolled to perform a number of gestures within six general categories of emotion. These motions were mapped onto virtual characters with male and female embodiments. Depending on the gender congruency condition, the motions of male actors were mapped onto male characters (congruent) or onto female characters (incongruent), and vice versa. A significant effect of gender mapping was found in the ratings of perception of three emotions (anger, fear and happiness), suggesting that gender may be an important aspect to consider in the perception, and hence generation, of some emotional behaviours.

1 Introduction

Several studies have explored the perception of behaviour in virtual characters [1,2]. Results from these studies are significant, as they can contribute to the design and development of more efficient and plausible simulations of artificial entities, with applications in important and complex fields such as virtual and augmented reality. Perhaps of equal significance, these studies also deepen our understanding of how humans perceive other humans, and of biological versus artificially generated motion.

While studies have started to investigate the relationship between emotion and body movement using virtual characters [3], little is known about the effect of gender on the perception of emotion from body movement. This issue is important for creating expressive social entities that are able to communicate successfully with humans. For example, if humans are more sensitive to motions of anger from male embodiments, such behaviour may need to be moderated in order to create more desirable impressions.

In this paper, we present a pilot experiment investigating the perception of six basic emotions (anger, disgust, fear, happiness, sadness and surprise) from upper body movements that have been mapped onto virtual characters of the same and opposite gender as the actors who originally performed the movements. A corpus of emotional gestures was recorded from the performances of male and female actors using a Microsoft Kinect [4]. These movements were mapped onto the virtual characters as follows: movements generated by the Female Actor (FA) were mapped onto both the Female and Male Characters (FC and MC), and movements generated by the Male Actor (MA) were mapped onto both the Male and Female Characters (MC and FC).
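This yields a 2x2 design crossing actor gender with character gender. As a minimal illustration (our own sketch in Python, not code from the study), the four stimulus conditions and their congruency labels can be enumerated as follows:

    # Enumerate the four mapping conditions of the 2x2 design
    # (actor gender x character gender).
    from itertools import product

    ACTORS = ["FA", "MA"]      # Female Actor, Male Actor
    CHARACTERS = ["FC", "MC"]  # Female Character, Male Character

    def congruency(actor: str, character: str) -> str:
        # A condition is congruent when actor and character share a gender.
        return "congruent" if actor[0] == character[0] else "incongruent"

    for actor, character in product(ACTORS, CHARACTERS):
        print(f"{actor} -> {character}: {congruency(actor, character)}")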
We expected that when the gender of the virtual character was congruent with the gender of the original actor, the recognition rate of the expressed emotion would be higher. An online experiment was performed to test this hypothesis, in which videos of the virtual characters were shown to twenty-four subjects. The results indicated that when the virtual character's gender is congruent with the gender of the original actor, the recognition rate of the expressed emotion is higher for a subset of the six emotions considered here.

This paper is organised as follows. The next section (Section 2) provides a summary of relevant literature. Section 3 presents the corpus that was recorded with the Microsoft Kinect as part of this study, detailing the mapping process and the virtual stimuli used in the experiment. Section 4 describes a pre-experiment user study and Section 5 provides details of the mapping from actors to virtual characters. Section 6 describes the online perceptual experiment, summarising and discussing the main results. Finally, Section 7 summarises the contributions and limitations of the work in the context of future studies in this domain.

2 Related work

In the affective computing and computer animation communities, there has been growing interest in the study of the perception of emotion from body movement using virtual characters. McDonnell and colleagues [1], for example, investigated the perception of emotion expressed by virtual characters with different embodiments and, more recently, the ability of humans to determine the gender of conversing characters based on facial and body cues [5]. Ennis and Egges [6] explored the use of complex emotional body language for a virtual character and found that participants are better able to recognise complex emotions with negative connotations than those with positive ones. Castellano et al. presented an experiment investigating the perception of synthesised emotional gestures performed by an embodied virtual agent based on an actor's movements with manual modulations [3].

In previous work on copying behaviour of real motion in virtual agents [7], it was shown that movement expressivity can convey the emotional content of people's behaviour; e.g., if a virtual agent's expressivity is not altered, then emotional content cannot be conveyed effectively. Investigations in [8] considered whether and how the type of gesture performed by a virtual agent affects the perception of emotion, concluding that a combination of the type of movement performed and its quality is important for successfully communicating emotions.

In relation to gender perception from body movement, early studies by Kozlowski and Cutting [9] argued that the gender of walkers can be accurately recognised, without familiarity cues, from dynamic displays of point-light sources placed on upper- and lower-body joints. They also pointed out how changes in the degree of arm swing or in walking speed can interfere with this recognition. More recently, gender recognition has had an important impact in computer vision, where researchers have developed several approaches based on 2D and 3D facial or full-body analysis (see [10] for a survey). However, relatively little is currently known about the effect of gender on the perception of emotion from body movement; the majority of studies within psychology and neuroscience have focused on the perception of emotion from facial expressions, showing that females are better than males at perceiving emotion from facial expressions (e.g. [11]), and that different brain regions are activated in females and males when viewing facial expressions of sadness and happiness [12]. Further, Bartneck et al. [13] studied the effect of both culture and gender on emotion recognition from complex expressions of avatars.

Tracking movement is not a new domain of research and a number of approaches have been used in the past. One of the most common techniques is motion capture using body sensors or markers. These systems are typically expensive and usually require the sensors or markers to be positioned at selected places on the user. Although researchers have developed wearable systems that can capture movement for mapping onto virtual characters [14], there are other alternatives, including computer vision approaches. The advances of vision-based human motion capture have been well documented [15] and such methods are becoming more popular. For our experimental implementation we focused on the latter approach, utilising the Microsoft Kinect as a low-cost alternative for capturing motion. The maximum distance at which the Kinect can operate is 5 metres, and its random error ranges from a few millimetres at 0.5 m to around 4 cm at 5 m [16], providing reasonable accuracy for some categories of motion recording.
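To give a feel for these figures, the sketch below interpolates between the two quoted endpoints, assuming (consistent with depth-accuracy studies such as [16]) that the random error grows roughly quadratically with distance; the small constant floor, added so the curve also matches the "few millimetres at 0.5 m" figure, is our own assumption.

    # Approximate random depth error of the Kinect as a function of distance,
    # fitted to the two endpoints quoted above (assumed model: constant floor
    # plus a quadratic term).
    FLOOR_M = 0.002                        # assumed ~2 mm error floor
    COEFF = (0.04 - FLOOR_M) / (5.0 ** 2)  # fitted so that error(5.0 m) = 4 cm

    def kinect_random_error_m(distance_m: float) -> float:
        return FLOOR_M + COEFF * distance_m ** 2

    for d in (0.5, 1.0, 2.5, 5.0):
        print(f"{d} m -> ~{kinect_random_error_m(d) * 1000:.1f} mm")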
3 Data collection from Actors

A corpus of emotional gestures was recorded from the performances of male and female actors using a Microsoft Kinect, as follows. Two amateur actors (one male, one female) were recruited to act six basic emotional states: anger, disgust, fear, happiness, sadness and surprise. The actors were instructed about the final goals of the emotional gesture mapping experiment and provided their written consent for the use of the recorded video and numerical data for research purposes.

The actors were instructed to stand facing the camera and were asked not to move beyond the boundaries of a square area marked at the centre of the stage. This constraint was imposed to minimise potential tracking problems, such as occlusion, which may occur with a single Kinect if the actors are free to walk around the stage.

Each performed gesture involved the actor starting from and returning to a starting pose (i.e., standing still with arms along the body). Each gesture lasted up to 6 seconds. Actors could decide the gesture starting time by clapping their hands, after which the 6-second interval was measured. At the end of the recording interval, a sound informed the actor to return to the starting pose. For each of the six emotional states the actor performed 10 repetitions, with a break of approximately 15 seconds between consecutive repetitions. The emotional expressions were performed in the following order: anger, sadness, happiness, surprise, fear, disgust.

Before performing the 10 repetitions for each emotion, actors were given a short predefined scenario in order to support them. For example, for sadness: "You are listening to your iPod, when it drops; you pick it up, but it won't work."
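The timing of a recording session can be summarised concretely. The sketch below (ours, not the study's recording software) walks through the repetition and break structure described above:

    # Walk through the recording protocol: six emotions in the fixed order
    # used in the study, 10 repetitions each, a 6-second capture window per
    # gesture, and ~15-second breaks between repetitions.
    import time

    EMOTION_ORDER = ["anger", "sadness", "happiness", "surprise", "fear", "disgust"]
    REPETITIONS = 10
    GESTURE_SECONDS = 6
    BREAK_SECONDS = 15

    def wait_for_clap() -> None:
        # Placeholder: in the study, the actor's hand clap marked the start
        # of the capture interval.
        input("Clap (press Enter) to start the gesture...")

    for emotion in EMOTION_ORDER:
        for rep in range(1, REPETITIONS + 1):
            wait_for_clap()
            print(f"Recording {emotion}, repetition {rep}/{REPETITIONS}...")
            time.sleep(GESTURE_SECONDS)  # capture window
            print("Sound cue: return to the starting pose.")
            time.sleep(BREAK_SECONDS)    # break before the next repetition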
Kinect Studio (part of the Windows Developer Toolkit) was used to record the actors, saving the information required for the mapping of motions to the characters described in Section 5. After each gesture was performed by the actor, a Kinect data file was generated containing colour and depth information (Figures 3a and 3b). The actors were thus recorded in a single session, and the virtual characters were mapped at a later stage.

Fig. 1. The interface of Kinect Studio.

4 Pre-experiment user study

From the corpus of gestures recorded in the data collection phase (Section 3), three gestures per emotion per actor were selected to form a final set of gestures that were as diverse as possible for each emotion.

A categorisation task was performed in order to identify the gestures from the actors that were most recognisable for each emotion. Twenty participants took part in this pre-experiment user study, which utilised a six-alternative forced-choice paradigm: participants were asked to watch the video stimuli of the real actors and in each case indicate which one of the six basic emotions they thought was being expressed. While a forced-choice paradigm does not allow participants to deviate from the proposed alternatives, it forces them to select the single closest option matching their judgement, which was desirable for the purposes of this study.

The face and hands of the actors in the videos were blurred so as not to interfere with the focus of the study, which is upper body movement. For each actor and for each emotion, the gesture with the highest recognition rate was selected (Table 1). A few gestures had the same recognition rate; in these cases, the gesture whose expressed emotion was the most highly rated and least misclassified across all emotions was selected for the character mapping phase, described in the next section.

                   Anger  Disgust  Fear  Happiness  Sadness  Surprise
    Female actor     75%      35%   50%        85%      90%       55%
    Male actor       85%      75%   65%        85%      90%       70%

Table 1. Recognition rates of the body movements performed by the actors.
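Concretely, the recognition rate of a gesture is the fraction of participants who chose its intended emotion in the forced-choice task. A minimal sketch (ours; the response data below are hypothetical) is:

    # Compute a gesture's recognition rate from six-alternative forced-choice
    # responses: the fraction of participants choosing the intended emotion.
    from collections import Counter

    def recognition_rate(intended: str, responses: list[str]) -> float:
        return Counter(responses)[intended] / len(responses)

    # Hypothetical responses from 20 participants for one happiness gesture.
    responses = ["happiness"] * 17 + ["surprise"] * 2 + ["anger"]
    print(f"{recognition_rate('happiness', responses):.0%}")  # prints 85%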
5 Character mapping

Starting from the collected actors' data (Section 3) and the corresponding recognition rates (Section 4), the character mapping stage involved two free pre-made 3D character embodiments, one male and one female, chosen from Mixamo (1). The mapping between the actors' motions and the characters was conducted using a custom-made real-time skeleton motion tracking application developed in

(1) http://www.mixamo.com
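The mapping application itself is not detailed here. As a general illustration of the kind of retargeting involved, the sketch below copies per-joint orientations from a tracked Kinect skeleton onto correspondingly named bones of a rigged character; the joint names, bone names and data structures are all assumptions for illustration, not taken from the paper's application.

    # Illustrative only: copy per-joint orientations from a tracked Kinect
    # skeleton onto the corresponding bones of a rigged character, one frame
    # at a time.
    from dataclasses import dataclass

    @dataclass
    class Quaternion:
        w: float
        x: float
        y: float
        z: float

    # Assumed correspondence between Kinect upper-body joints and rig bones.
    KINECT_TO_RIG = {
        "ShoulderLeft": "LeftArm",
        "ElbowLeft": "LeftForeArm",
        "ShoulderRight": "RightArm",
        "ElbowRight": "RightForeArm",
    }

    def retarget(kinect_frame: dict[str, Quaternion]) -> dict[str, Quaternion]:
        # Produce a bone -> orientation map for the character for one frame.
        return {bone: kinect_frame[joint]
                for joint, bone in KINECT_TO_RIG.items()
                if joint in kinect_frame}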