ICMI 2016: Proceedings of the 18th ACM International Conference on Multimodal Interaction

SESSION: Invited Talks

Understanding people by tracking their word use (keynote)

Learning to generate images and their descriptions (keynote)

Embodied media: expanding human capacity via virtual reality and telexistence (keynote)

Help me if you can: towards multiadaptive interaction platforms (ICMI awardee talk)

SESSION: Oral Session 1: Multimodal Social Agents

Trust me: multimodal signals of trustworthiness

Semi-situated learning of verbal and nonverbal content for repeated human-robot interaction

Towards building an attentive artificial listener: on the perception of attentiveness in audio-visual feedback tokens

Sequence-based multimodal behavior modeling for social agents

SESSION: Oral Session 2: Physiological and Tactile Modalities

Adaptive review for mobile MOOC learning via implicit physiological signal sensing

Visuotactile integration for depth perception in augmented reality

Exploring multimodal biosignal features for stress detection during indoor mobility

An IDE for multimodal controls in smart buildings

SESSION: Poster Session 1

Personalized unknown word detection in non-native language reading using eye gaze

Discovering facial expressions for states of amused, persuaded, informed, sentimental and inspired

Do speech features for detecting cognitive load depend on specific languages?

Training on the job: behavioral analysis of job interviews in hospitality

Emotion spotting: discovering regions of evidence in audio-visual emotion expressions

Semi-supervised model personalization for improved detection of learner's emotional engagement

Driving maneuver prediction using car sensor and driver physiological signals

On leveraging crowdsourced data for automatic perceived stress detection

Investigating the impact of automated transcripts on non-native speakers' listening comprehension

Speaker impact on audience comprehension for academic presentations

EmoReact: a multimodal approach and dataset for recognizing emotional responses in children

Bimanual input for multiscale navigation with pressure and touch gestures

Intervention-free selection using EEG and eye tracking

Automated scoring of interview videos using Doc2Vec multimodal feature extraction paradigm

Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets

Multi-sensor modeling of teacher instructional segments in live classrooms

SESSION: Oral Session 3: Groups, Teams, and Meetings

Meeting extracts for discussion summarization based on multimodal nonverbal information

Getting to know you: a multimodal investigation of team behavior and resilience to stress

Measuring the impact of multimodal behavioural feedback loops on social interactions

Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings

SESSION: Oral Session 4: Personality and Emotion

Automatic recognition of self-reported and perceived emotion: does joint modeling help?

Personality classification and behaviour interpretation: an approach based on feature categories

Multiscale kernel locally penalised discriminant analysis exemplified by emotion recognition in speech

Estimating self-assessed personality from body movements and proximity in crowded mingling scenarios

SESSION: Poster Session 2

Deep learning driven hypergraph representation for image-based emotion recognition

Towards a listening agent: a system generating audiovisual laughs and smiles to show interest

Sound emblems for affective multimodal output of a robotic tutor: a perception study

Automatic detection of very early stage of dementia through multimodal interaction with computer avatars

MobileSSI: asynchronous fusion for social signal interpretation in the wild

Language proficiency assessment of English L2 speakers based on joint analysis of prosody and native language

Training deep networks for facial expression recognition with crowd-sourced label distribution

Deep multimodal fusion for persuasiveness prediction

Comparison of three implementations of HeadTurn: a multimodal interaction technique with gaze and head turns

Effects of multimodal cues on children's perception of uncanniness in a social robot

Multimodal feedback for finger-based interaction in mobile augmented reality

Smooth eye movement interaction using EOG glasses

Active speaker detection with audio-visual co-training

Detecting emergent leader in a meeting environment using nonverbal visual features only

Stressful first impressions in job interviews

SESSION: Oral Session 5: Gesture, Touch, and Haptics

Analyzing the articulation features of children's touchscreen gestures

Reach out and touch me: effects of four distinct haptic technologies on affective touch in virtual reality

Using touchscreen interaction data to predict cognitive workload

Exploration of virtual environments on tablet: comparison between tactile and tangible interaction techniques

SESSION: Oral Session 6: Skill Training and Assessment

Understanding the impact of personal feedback on face-to-face interactions in the workplace

Asynchronous video interviews vs. face-to-face interviews for communication skill measurement: a systematic study

Context and cognitive state triggered interventions for mobile MOOC learning

Native vs. non-native language fluency implications on multimodal interaction for interpersonal skills training

SESSION: Demo Session 1

Social signal processing for dummies

Metering "black holes": networking stand-alone applications for distributed multimodal synchronization

Towards a multimodal adaptive lighting system for visually impaired children

Multimodal affective feedback: combining thermal, vibrotactile, audio and visual signals

Niki and Julie: a robot and virtual human for studying multimodal social interaction

A demonstration of multimodal debrief generation for AUVs, post-mission and in-mission

Laughter detection in the wild: demonstrating a tool for mobile social signal processing and visualization

SESSION: Demo Session 2

Multimodal system for public speaking with real time feedback: a positive computing perspective

Multimodal biofeedback system integrating low-cost easy sensing devices

A telepresence system using a flexible textile display

Large-scale multimodal movie dialogue corpus

Immersive virtual reality with multimodal interaction and streaming technology

Multimodal interaction with the autonomous Android ERICA

Ask Alice: an artificial retrieval of information agent

Design of multimodal instructional tutoring agents using augmented reality and smart learning objects

AttentiveVideo: quantifying emotional responses to mobile video advertisements

Young Merlin: an embodied conversational agent in virtual reality

SESSION: EmotiW Challenge

EmotiW 2016: video and group-level emotion recognition challenges

Emotion recognition in the wild from videos using images

A deep look into group happiness prediction from images

Video-based emotion recognition using CNN-RNN and C3D hybrid networks

LSTM for dynamic emotion and group emotion recognition in the wild

Multi-clue fusion for emotion recognition in the wild

Multi-view common space learning for emotion recognition in the wild

HoloNet: towards robust emotion recognition in the wild

Group happiness assessment using geometric features and dataset balancing

Happiness level prediction with sequential inputs via multiple regressions

Video emotion recognition in the wild based on fusion of multimodal features

Wild wild emotion: a multimodal ensemble approach

Audio and face video emotion recognition in the wild using deep neural networks and small datasets

Automatic emotion recognition in the wild using an ensemble of static and dynamic representations

SESSION: Doctoral Consortium

The influence of appearance and interaction strategy of a social robot on the feeling of uncanniness in humans

Viewing support system for multi-view videos

Engaging children with autism in a shape perception task using a haptic force feedback interface

Modeling user's decision process through gaze behavior

Multimodal positive computing system for public speaking with real-time feedback

Prediction/Assessment of communication skill using multimodal cues in social interactions

Player/Avatar body relations in multimodal augmented reality games

Computational model for interpersonal attitude expression

Assessing symptoms of excessive SNS usage based on user behavior and emotion

Kawaii feeling estimation by product attributes and biological signals

Multimodal sensing of affect intensity

Enriching student learning experience using augmented reality and smart learning objects

Automated recognition of facial expressions authenticity

Improving the generalizability of emotion recognition systems: towards emotion recognition in the wild

SESSION: Grand Challenge Summary

Emotion recognition in the wild challenge 2016

SESSION: Workshop Summaries

1st international workshop on embodied interaction with smart environments (workshop summary)

ASSP4MI2016: 2nd international workshop on advancements in social signal processing for multimodal interaction (workshop summary)

ERM4CT 2016: 2nd international workshop on emotion representations and modelling for companion systems (workshop summary)

International workshop on multimodal virtual and augmented reality (workshop summary)

International workshop on social learning and multimodal interaction for designing artificial agents (workshop summary)

1st international workshop on multi-sensorial approaches to human-food interaction (workshop summary)

International workshop on multimodal analyses enabling artificial agents in human-machine interaction (workshop summary)