Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction (MA3)



Saturday, September 15th, 2012

9:00 - 9:15     Opening remarks

Paper Session 1

9:15 - 9:45     The Influence of Context Knowledge for Multimodal Annotation on Natural Material

Authors: Ingo Siegert, Ronald Böck and Andreas Wendemuth

9:45 - 10:15    Static Human Gesture Grading Based on Kinect

Authors:  Linwan Liu, Xiaoyu Wu, Linglin Wu and Tianchu Guo

10:15 - 10:45   The TSB Technique vs 3D Technology: A study to determine the increases of interpretability for an avatar's gaze across both approaches

Authors: Mark Dunne, Brian Mac Namee and John Kelleher

10:45 - 11:15   Coffee break

Paper Session 2

11:15 - 11:45   Facial Expression as an Input Annotation Modality for Affective Speech-to-Speech Translation

Authors: Eva Szekely, Zeeshan Ahmed, Ingmar Steiner and Julie Carson-Berndsen

11:45 - 12:15   Incorporating Multi-Modal Evaluation into a Technology Enhanced Learning Experience

Authors: Lynne Hall, Ruth Aylett, Colette Hume, Eva Krumhuber and Nick Degens


12:15 - 12:45   Final Discussion


Each paper is allotted a 20-minute presentation plus 10 minutes for questions.

University of California, Santa Cruz

September 15th, 2012

Workshop at IVA 2012, the Twelfth International Conference on Intelligent Virtual Agents