Workshop Program
9:00-9:10 Welcome
9:10-10:10 Session 1: Perception of gaze
- Brain-Enhanced Synergistic Attention (BESA)
Deepak Khosla, Matthew Keegan, Lei Zhang, Kevin R. Martin, Darrel J. VanBuer, and David J. Huber
- Multi-Modal Object of Interest Detection Using Eye Gaze and RGB-D Cameras
Christopher McMurrough, Jonathan Rich, Christopher Conly, Vassilis Athitsos, and Fillia Makedon
- Perception of Gaze Direction for Situated Interaction
Samer Al Moubayed and Gabriel Skantze
10:10-10:40 Coffee break
10:40-12:00 Session 2: Functions of gaze
- A Head-Eye Coordination Model for Animating Gaze Shifts of Virtual Characters
Sean Andrist, Tomislav Pejsa, Bilge Mutlu, and Michael Gleicher
- From the Eye to the Heart: Eye Contact Triggers Emotion Simulation
Magdalena Rychlowska, Leah Zinner, Serban C. Musca, and Paula M. Niedenthal
- Addressee Identification for Human-Human-Agent Multiparty Conversations in Different Proxemics
Naoya Baba, Hung-Hsuan Huang, and Yukiko I. Nakano
- Hard lessons learned: Mobile eye-tracking in cockpits
Hana Vrzakova and Roman Bednarik
12:00-13:40 Lunch
13:40-15:00 Session 3: Empirical studies of gaze
- Analysis on Learners' Gaze Patterns and the Instructor's Reactions in Ballroom Dance Tutoring
Kosuke Kimura, Hung-Hsuan Huang, and Kyoji Kawagoe
- Multimodal Corpus of Conversations in Mother Tongue and Second Language by Same Interlocutors
Kosuke Kabashima, Masafumi Nishida, Kristiina Jokinen, and Seiichi Yamamoto
- Gaze and Conversational Engagement in Multiparty Video Conversation: An annotation scheme and classification of high and low levels of engagement
Roman Bednarik, Shahram Eivazi, and Michal Hradis
- Visual Interaction and Conversational Activity
Andres Levitski, Jenni Radun, and Kristiina Jokinen
15:00-15:30 Coffee break
15:30-16:30 Poster Session
- Move it there, or not? The design of voice commands for gaze with speech
Monika Elepfandt and Martin Grund
- Eye gaze assisted human-computer interaction in a hand gesture controlled multi-display environment
Tong Cha and Sebastian Maier
- A framework of personal assistant for computer users by analyzing video stream
Zixuan Wang, Jinyun Yan, and Hamid Aghajan
- Simple Multi-party Video Conversation System Focused on Participant Eye Gaze
Saori Yamamoto, Mayumi Bono, and Yugo Takeuchi
- Sensing Visual Attention Using an Interactive Bidirectional HMD
Tobias Schuchert, Sascha Voth, and Judith Baumgarten
- Semantic Interpretation of Eye Movements Using Designed Structures of Displayed Contents
Erina Ishikawa, Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, and Takashi Matsuyama
- A Communication Support Interface Based on Learning Awareness for Collaborative Learning
Yuki Hayashi, Tomoko Kojiri, and Toyohide Watanabe
16:30-17:30 Discussion
17:40 Closing