UIST '18: The 31st Annual ACM Symposium on User Interface Software and Technology

SESSION: Keynote & Invited Talks

The Science and Practice of Transitions

We tend to set ourselves up to thrive in a particular state while ignoring the transitions between states. But there is magic in the transitions; they are where unexpected and interesting things happen. There is an opportunity for our user interfaces to better support the transitions we make. In this talk I will share some of what I have learned from years of productivity research about how to successfully transition between tasks over the course of a day, and reflect on how these findings might be extended to help us understand how we, as academics and practitioners, can successfully transition through the various contexts and roles that we hold in a lifetime.

Robots For Us: Organizational and Community Perspectives on the Collaborative Design of Ubiquitous Robots

Robots are expected to become ubiquitous in the near future, working with people in various environments, including homes, schools, hospitals, and offices. As physically and socially interactive technologies, robots present new opportunities for embodied interaction and active as well as passive sensing in these contexts. They have also been shown to psychologically impact individuals, affect group and organizational dynamics, and modify our concepts and experiences of work, care, and social relationships. Designing robots for increasingly ubiquitous everyday use requires understanding how robots are perceived, and can be adopted and supported in open-ended, natural social circumstances. This, in turn, calls for design and evaluation methodologies that go beyond the dyadic and small group interactions in laboratories that have largely been the focus of research in human-robot interaction. In this talk, I will present alternative perspectives on the design and evaluation of socially interactive robotic technologies in real-world contexts, focusing on several case studies of socially assistive robots in eldercare. I will first discuss how older adults make sense of robots for use in their homes, in relation to the broader social contexts in which they live, as part of collaborative design activities, and in the course of month-long implementations of robots in their homes. These in-home studies bring up various issues relating to the types of data older adults and the clinicians who work with them would like to collect, related privacy concerns, impacts on other people in the home, and how robot designs can support the relationships older adults hope to have with and through robots. Secondly, I will explore the institutional and community-based use and design of robots in different eldercare facilities, including a nursing home, a retirement community, and an intergenerational daycare. These studies bring out how robots fit into and affect the institutional and group dynamics of interaction, and also allow us to explore how robots might be envisioned as technologies that can support not only individual, but community-level goals. Through these case studies of robots, as emerging ubiquitous interactive technologies, I will bring out themes that can inform the design and study of pervasive systems more broadly, including collaborative design, the use of data collected during social interactions with and around technologies, related ethical concerns, and the need for incorporating the aims of groups, institutions, and communities in the design of intelligent interactive technologies.

SESSION: Session 1: Controlling and Collaborating in VR

Session details: Session 1: Controlling and Collaborating in VR

PuPoP: Pop-up Prop on Palm for Virtual Reality

The sensation of being able to feel the shape of an object when grasping it in Virtual Reality (VR) enhances a sense of presence and the ease of object manipulation. Though most prior works focus on force feedback on fingers, the haptic emulation of grasping a 3D shape requires the sensation of touch using the entire hand. Hence, we present Pop-up Prop on Palm (PuPoP), a lightweight pneumatic shape-proxy interface worn on the palm that pops several airbags up with predefined primitive shapes for grasping. When a user's hand encounters a virtual object, an airbag of appropriate shape, ready for grasping, is inflated using air pumps; the airbag then deflates when the object is no longer in play. Since PuPoP is a physical prop, it can provide the full sensation of touch to enhance the sense of realism for VR object manipulation. For this paper, we first explored the design and implementation of PuPoP with multiple shape structures. We then conducted two user studies to further understand its applicability. The first study shows that, when in conflict, visual sensation tends to dominate over touch sensation, allowing a prop with a fixed size to represent multiple virtual objects with similar sizes. The second study compares PuPoP with controllers and free-hand manipulation in two VR applications. The results suggest that utilization of dynamically changing PuPoP, when grasped by users in line with the shapes of virtual objects, enhances enjoyment and realism. We believe that PuPoP is a simple yet effective way to convey haptic shapes in VR.

SynchronizAR: Instant Synchronization for Spontaneous and Spatial Collaborations in Augmented Reality

We present SynchronizAR, an approach to spatially register multiple SLAM devices together without sharing maps or involving external tracking infrastructure. SynchronizAR employs a distance-based indirect registration that resolves the transformations between the separate SLAM coordinate systems. We attach an Ultra-Wideband (UWB) distance-measurement module to each of the mobile AR devices, each of which is capable of self-localization with respect to the environment. As users move along independent paths, we collect the positions of the AR devices in their local frames and the corresponding distance measurements. Based on the registration, we create a spontaneous collaborative AR environment that spatially coordinates users' interactions. We conduct both a technical evaluation and user studies to investigate registration accuracy and usability for spatial collaboration. Finally, we demonstrate various collaborative AR experiences using SynchronizAR.
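
To make the distance-based registration idea concrete, the sketch below shows one plausible least-squares formulation (in 2D, for brevity): given each device's positions in its own SLAM frame and the UWB-measured distances between the devices, solve for the rigid transform that best explains the measurements. This is an illustrative assumption about how such a registration could be set up, not the authors' implementation; the function names and synthetic data are hypothetical.

```python
# Hedged sketch: distance-based indirect registration between two SLAM frames (2D).
# Illustrative only; SynchronizAR's actual algorithm may differ.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, p_a, p_b, d):
    """params = [theta, tx, ty]: map frame B into frame A, then compare the
    predicted device-to-device distances with the UWB measurements d."""
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    p_b_in_a = p_b @ R.T + np.array([tx, ty])
    return np.linalg.norm(p_a - p_b_in_a, axis=1) - d

# Synthetic walk: device A's and device B's positions plus noisy UWB distances.
rng = np.random.default_rng(0)
theta_true, t_true = 0.7, np.array([2.0, -1.0])
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
p_a = rng.uniform(-3, 3, size=(50, 2))          # A's positions, frame A
p_b_world = rng.uniform(-3, 3, size=(50, 2))    # B's positions, expressed in frame A
p_b = (p_b_world - t_true) @ R_true             # ...and as B itself records them (frame B)
d = np.linalg.norm(p_a - p_b_world, axis=1) + rng.normal(0, 0.02, 50)

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0], args=(p_a, p_b, d))
print(fit.x)  # should approach [0.7, 2.0, -1.0]; a real system must handle local minima
```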

Ownershift: Facilitating Overhead Interaction in Virtual Reality with an Ownership-Preserving Hand Space Shift

We present Ownershift, an interaction technique for easing overhead manipulation in virtual reality, while preserving the illusion that the virtual hand is the user's own hand. In contrast to previous approaches, this technique does not alter the mapping of the virtual hand position for initial reaching movements towards the target. Instead, the virtual hand space is only shifted gradually if interaction with the overhead target requires an extended amount of time. While users perceive their virtual hand as operating overhead, their physical hand moves gradually to a less strained position at waist level. We evaluated the technique in a user study and show that Ownershift significantly reduces the physical strain of overhead interactions, while only slightly reducing task performance and the sense of body ownership of the virtual hand.

Wall-based Space Manipulation Technique for Efficient Placement of Distant Objects in Augmented Reality

We present a wall-based space manipulation (WSM) technique that enables users to efficiently select and move distant objects by dynamically squeezing their surrounding space in augmented reality. Users can bring a target object closer by dragging a solid plane behind the object and squeezing the space between them and the plane so that they can select and move the object more precisely and efficiently. We furthermore discuss the unique design challenges of WSM, including the dimension of space reduction and the recognition of the reduced space in relation to the real space. We conducted a user evaluation to verify how WSM improves the performance of the hand-centered object manipulation technique on the HoloLens for moving near objects far away and vice versa. The results indicate that WSM overall performed consistently well and significantly improved efficiency while alleviating arm fatigue.

SESSION: Session 2: Human-Robot Symbiosis

Session details: Session 2: Human-Robot Symbiosis

MobiLimb: Augmenting Mobile Devices with a Robotic Limb

In this paper, we explore the interaction space of MobiLimb, a small 5-DOF serial robotic manipulator attached to a mobile device. It (1) overcomes some limitations of mobile devices (static, passive, motionless); (2) preserves their form factor and I/O capabilities; (3) can be easily attached to or removed from the device; (4) offers additional I/O capabilities such as physical deformation; and (5) can support various modular elements such as sensors, lights, or shells. We illustrate its potential through three classes of applications: As a tool, MobiLimb offers tangible affordances and an expressive controller that can be manipulated to control virtual and physical objects. As a partner, it reacts expressively to users' actions to foster curiosity and engagement or assist users. As a medium, it provides rich haptic feedback such as strokes, pats, and other tactile stimuli on the hand or the wrist to convey emotions during mediated multimodal communications.

MetaArms: Body Remapping Using Feet-Controlled Artificial Arms

We introduce MetaArms, wearable anthropomorphic robotic arms and hands with six degrees of freedom operated by the user's legs and feet. Our overall research goal is to re-imagine what our bodies can do with the aid of wearable robotics using a body-remapping approach. To this end, we present an initial exploratory case study. MetaArms' two robotic arms are controlled by the motion of the user's feet, and the robotic hands can grip objects according to the bending of the user's toes. Haptic feedback that correlates with the objects touched by the robotic hands is also presented on the user's feet, creating a closed-loop system. We present formal and informal evaluations of the system, the former using a 2D pointing task according to Fitts' Law. The overall throughput for 12 users of the system is reported as 1.01 bits/s (std 0.39). We also present informal feedback from over 230 users. We find that MetaArms demonstrates the feasibility of the body-remapping approach in designing robotic limbs that may help us re-imagine what the human body could do.
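
For readers unfamiliar with the throughput metric quoted above, the sketch below shows one common way to compute Fitts' law throughput for a single pointing trial (Shannon formulation of the index of difficulty divided by movement time). The exact procedure and target parameters used in the paper may differ; the example trial values are hypothetical.

```python
# Hedged illustration of Fitts' law throughput, not the paper's exact analysis.
import math

def throughput(distance, width, movement_time_s):
    """Index of difficulty (bits) divided by movement time (seconds)."""
    index_of_difficulty = math.log2(distance / width + 1)  # Shannon formulation
    return index_of_difficulty / movement_time_s

# Hypothetical trial: a 0.30 m reach to a 0.05 m target completed in 2.5 s.
print(round(throughput(0.30, 0.05, 2.5), 2), "bits/s")  # ~1.12 bits/s
```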

Authoring and Verifying Human-Robot Interactions

As social agents, robots designed for human interaction must adhere to human social norms. How can we enable designers, engineers, and roboticists to design robot behaviors that adhere to human social norms and do not result in interaction breakdowns? In this paper, we use automated formal-verification methods to facilitate the encoding of appropriate social norms into the interaction design of social robots and the detection of breakdowns and norm violations in order to prevent them. We have developed an authoring environment that utilizes these methods to provide developers of social-robot applications with feedback at design time and evaluated the benefits of their use in reducing such breakdowns and violations in human-robot interactions. Our evaluation with application developers (N=9) shows that the use of formal-verification methods increases designers' ability to identify and contextualize social-norm violations. We discuss the implications of our approach for the future development of tools for effective design of social-robot applications.

GridDrones: A Self-Levitating Physical Voxel Lattice for Interactive 3D Surface Deformations

We present GridDrones, a self-levitating programmable matter platform that can be used for representing 2.5D voxel grid relief maps capable of rendering unsupported structures and 3D transformations. GridDrones consists of cube-shaped nanocopters that can be placed in a volumetric 1xnxn mid-air grid, which is demonstrated here with 15 voxels. The number of voxels and the scale are limited only by the size of the room and the budget. Grid deformations can be applied interactively to this voxel lattice by manually selecting a set of voxels, assigning a continuous topological relationship between voxel sets that determines how voxels move in relation to each other, and then manually drawing the selected voxels out of the lattice structure. Using this simple technique, it is possible to create unsupported structures that can be translated and oriented freely in 3D. Shape transformations can also be recorded to allow for simple physical shape morphing animations. This work extends previous work on selection and editing techniques for 3D user interfaces.

SESSION: Session 3: Fabrication

Session details: Session 3: Fabrication

Dynablock: Dynamic 3D Printing for Instant and Reconstructable Shape Formation

This paper introduces Dynamic 3D Printing, a fast and reconstructable shape formation system. Dynamic 3D Printing can assemble an arbitrary three-dimensional shape from a large number of small physical elements. Also, it can disassemble the shape back to elements and reconstruct a new shape. Dynamic 3D Printing combines the capabilities of 3D printers and shape displays: Like conventional 3D printing, it can generate arbitrary and graspable three-dimensional shapes, while allowing shapes to be rapidly formed and reformed as in a shape display. To demonstrate the idea, we describe the design and implementation of Dynablock, a working prototype of a dynamic 3D printer. Dynablock can form a three-dimensional shape in seconds by assembling 3,000 9 mm blocks, leveraging a 24 x 16 pin-based shape display as a parallel assembler. Dynamic 3D printing is a step toward achieving our long-term vision in which 3D printing becomes an interactive medium, rather than the means for fabrication that it is today. In this paper, we explore possibilities for this vision by illustrating application scenarios that are difficult to achieve with conventional 3D printing or shape display systems.

TrussFormer: 3D Printing Large Kinetic Structures

We present TrussFormer, an integrated end-to-end system that allows users to 3D print large-scale kinetic structures, i.e., structures that involve motion and deal with dynamic forces. TrussFormer builds on TrussFab, from which it inherits the ability to create static large-scale truss structures from 3D printed connectors and PET bottles. TrussFormer adds movement to these structures by placing linear actuators into them: either manually, wrapped in reusable components called assets, or by demonstrating the intended movement. TrussFormer verifies that the resulting structure is mechanically sound and will withstand the dynamic forces resulting from the motion. To fabricate the design, TrussFormer generates the underlying hinge system that can be printed on standard desktop 3D printers. We demonstrate TrussFormer with several example objects, including a six-legged walking robot and a 4 m tall animatronic dinosaur with 5 degrees of freedom.

Shape-Aware Material: Interactive Fabrication with ShapeMe

Makers often create both physical and digital prototypes to explore a design, taking advantage of the subtle feel of physical materials and the precision and power of digital models. We introduce ShapeMe, a novel smart material that captures its own geometry as it is physically cut by an artist or designer. ShapeMe includes a software toolkit that lets its users generate customized, embeddable sensors that can accommodate various object shapes. As the designer works on a physical prototype, the toolkit streams the artist's physical changes to its digital counterpart in a 3D CAD environment. We use a rapid, inexpensive and simple-to-manufacture inkjet printing technique to create embedded sensors. We successfully created a linear predictive model of the sensors' lengths, and our empirical tests of ShapeMe show an average accuracy of 2 to 3 mm. We present two application scenarios for modeling multi-object constructions, such as architectural models, and 3D models consisting of multiple layers stacked one on top of each other. ShapeMe demonstrates a novel technique for integrating digital and physical modeling, and suggests new possibilities for creating shape-aware materials.

Wireless Analytics for 3D Printed Objects

We present the first wireless physical analytics system for 3D printed objects using commonly available conductive plastic filaments. Our design can enable various data capture and wireless physical analytics capabilities for 3D printed objects, without the need for electronics. To achieve this goal, we make three key contributions: (1) demonstrate room-scale backscatter communication and sensing using conductive plastic filaments, (2) introduce the first backscatter designs that detect a variety of bi-directional motions and support linear and rotational movements, and (3) enable data capture and storage for later retrieval when outside the range of the wireless coverage, using a ratchet and gear system. We validate our approach by wirelessly detecting the opening and closing of a pill bottle, capturing the joint angles of a 3D printed e-NABLE prosthetic hand, and building an insulin pen that stores information to track its use outside the range of a wireless receiver.

SESSION: Session 4: Crowds and Human-AI Partnership

Session details: Session 4: Crowds and Human-AI Partnership

The Exploratory Labeling Assistant: Mixed-Initiative Label Curation with Large Document Collections

In this paper, we define the concept of exploratory labeling: the use of computational and interactive methods to help analysts categorize groups of documents into a set of unknown and evolving labels. While many computational methods exist to analyze data and build models once the data is organized around a set of predefined categories or labels, few methods address the problem of reliably discovering and curating such labels in the first place. As a first step towards bridging this gap, we propose an interactive visual data analysis method that integrates human-driven label ideation, specification, and refinement with machine-driven recommendations. The proposed method enables the user to progressively discover and ideate labels in an exploratory fashion and specify rules that can be used to automatically match sets of documents to labels. To support this process of ideation, specification, and evaluation of the labels, we use unsupervised machine learning methods that provide suggestions and data summaries. We evaluate our method by applying it to a real-world labeling problem as well as through controlled user studies to identify and reflect on patterns of interaction emerging from exploratory labeling activities.

Sprout: Crowd-Powered Task Design for Crowdsourcing

While crowdsourcing enables data collection at scale, ensuring high-quality data remains a challenge. In particular, effective task design underlies nearly every reported crowdsourcing success, yet remains difficult to accomplish. Task design is hard because it involves a costly iterative process: identifying the kind of work output one wants, conveying this information to workers, observing worker performance, understanding what remains ambiguous, revising the instructions, and repeating the process until the resulting output is satisfactory. To facilitate this process, we propose a novel meta-workflow that helps requesters optimize crowdsourcing task designs and Sprout, our open-source tool, which implements this workflow. Sprout improves task designs by (1) eliciting points of confusion from crowd workers, (2) enabling requesters to quickly understand these misconceptions and the overall space of questions, and (3) guiding requesters to improve the task design in response. We report the results of a user study with two labeling tasks demonstrating that requesters strongly prefer Sprout and produce higher-rated instructions compared to current best practices for creating gated instructions (instructions plus a workflow for training and testing workers). We also offer a set of design recommendations for future tools that support crowdsourcing task design.

Crowdsourcing Similarity Judgments for Agreement Analysis in End-User Elicitation Studies

End-user elicitation studies are a popular design method, but their data require substantial time and effort to analyze. In this paper, we present Crowdsensus, a crowd-powered tool that enables researchers to efficiently analyze the results of elicitation studies using subjective human judgment and automatic clustering algorithms. In addition to our own analysis, we asked six expert researchers with experience running and analyzing elicitation studies to analyze an end-user elicitation dataset of 10 functions for operating a web-browser, each with 43 voice commands elicited from end-users for a total of 430 voice commands. We used Crowdsensus to gather similarity judgments of these same 430 commands from 410 online crowd workers. The crowd outperformed the experts by arriving at the same results for seven of eight functions and resolving a function where the experts failed to agree. Also, using Crowdsensus was about four times faster than using experts.
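
As a rough illustration of how crowd similarity judgments can feed an automatic clustering step, the sketch below groups a handful of commands with agglomerative clustering over a pairwise similarity matrix. It is an assumption-laden toy example, not Crowdsensus's actual pipeline; the similarity values and threshold are invented.

```python
# Hedged sketch: cluster elicited commands from pairwise similarity judgments.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy similarity matrix over 4 commands (1.0 = always judged the same in meaning).
similarity = np.array([
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
])
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance)                       # condensed form expected by linkage()
labels = fcluster(linkage(condensed, method="average"),
                  t=0.5, criterion="distance")         # cut the dendrogram at distance 0.5
print(labels)                                          # e.g. [1 1 2 2]: two command groups
```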

Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Fact-checking, the task of assessing the veracity of claims, is an important, timely, and challenging problem. While many automated fact-checking systems have been recently proposed, the human side of the partnership has been largely neglected: how might people understand, interact with, and establish trust with an AI fact-checking system? Does such a system actually help people better assess the factuality of claims? In this paper, we present the design and evaluation of a mixed-initiative approach to fact-checking, blending human knowledge and experience with the efficiency and scalability of automated information retrieval and ML. In a user study in which participants used our system to aid their own assessment of claims, our results suggest that individuals tend to trust the system: participant accuracy in assessing claims improved when exposed to correct model predictions. However, this trust perhaps goes too far: when the model was wrong, exposure to its predictions often degraded human accuracy. Participants given the option to interact with these incorrect predictions were often able to improve their own performance. This suggests that transparent models are key to facilitating effective human interaction with fallible AI models.

Porta: Profiling Software Tutorials Using Operating-System-Wide Activity Tracing

It can be hard for tutorial creators to get fine-grained feedback about how learners are actually stepping through their tutorials and which parts lead to the most struggle. To provide such feedback for technical software tutorials, we introduce the idea of tutorial profiling, which is inspired by software code profiling. We prototyped this idea in a system called Porta that automatically tracks how users navigate through a tutorial webpage and what actions they take on their computer such as running shell commands, invoking compilers, and logging into remote servers. Porta surfaces this trace data in the form of profiling visualizations that augment the tutorial with heatmaps of activity hotspots and markers that expand to show event details, error messages, and embedded screencast videos of user actions. We found through a user study of 3 tutorial creators and 12 students who followed their tutorials that Porta enabled both the tutorial creators and the students to provide more specific, targeted, and actionable feedback about how to improve these tutorials. Porta opens up possibilities for performing user testing of technical documentation in a more systematic and scalable way.

SESSION: Session 5: Sensing and Acoustics

Session details: Session 5: Sensing and Acoustics

Ubicoustics: Plug-and-Play Acoustic Activity Recognition

Despite sound being a rich source of information, computing devices with microphones do not leverage audio to glean useful insights about their physical and social context. For example, a smart speaker sitting on a kitchen countertop cannot figure out if it is in a kitchen, let alone know what a user is doing in a kitchen - a missed opportunity. In this work, we describe a novel, real-time, sound-based activity recognition system. We start by taking an existing, state-of-the-art sound labeling model, which we then tune to classes of interest by drawing data from professional sound effect libraries traditionally used in the entertainment industry. These well-labeled and high-quality sounds are the perfect atomic unit for data augmentation, including amplitude, reverb, and mixing, allowing us to exponentially grow our tuning data in realistic ways. We quantify the performance of our approach across a range of environments and device categories and show that microphone-equipped computing devices already have the requisite capability to unlock real-time activity recognition comparable to human accuracy.
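
The sketch below illustrates the kinds of augmentations mentioned (amplitude scaling, mixing with background sound, and reverberation) on synthetic signals. It is a simplified stand-in under stated assumptions, not the paper's augmentation pipeline.

```python
# Hedged sketch of amplitude, mixing, and reverb augmentations for audio clips.
import numpy as np

def scale_amplitude(clip, gain_db):
    return clip * (10.0 ** (gain_db / 20.0))

def mix(clip, background, snr_db):
    """Mix a labeled sound effect with background audio at a target SNR."""
    clip_rms = np.sqrt(np.mean(clip ** 2))
    bg_rms = np.sqrt(np.mean(background ** 2)) + 1e-12
    target_bg_rms = clip_rms / (10.0 ** (snr_db / 20.0))
    return clip + background[: len(clip)] * (target_bg_rms / bg_rms)

def add_reverb(clip, impulse_response):
    """Very rough reverb: convolve with a synthetic (or measured) room impulse response."""
    return np.convolve(clip, impulse_response)[: len(clip)]

# Toy example at 16 kHz with synthetic stand-ins for a sound effect and a background.
sr = 16000
clip = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)        # 1 s tone as the "sound effect"
background = np.random.default_rng(0).normal(0, 0.1, sr)   # noise as the "background"
ir = np.exp(-np.linspace(0, 8, sr // 4))                    # decaying synthetic impulse response
augmented = add_reverb(mix(scale_amplitude(clip, -6), background, snr_db=10), ir)
```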

Vibrosight: Long-Range Vibrometry for Smart Environment Sensing

Smart and responsive environments rely on the ability to detect physical events, such as appliance use and human activities. Currently, to sense these types of events, one must either upgrade to "smart" appliances, or attach aftermarket sensors to existing objects. These approaches can be expensive, intrusive and inflexible. In this work, we present Vibrosight, a new approach to sense activities across entire rooms using long-range laser vibrometry. Unlike a microphone, our approach can sense physical vibrations at one specific point, making it robust to interference from other activities and noisy environments. This property enables detection of simultaneous activities, which has proven challenging in prior work. Through a series of evaluations, we show that Vibrosight can offer high accuracies at long range, allowing our sensor to be placed in an inconspicuous location. We also explore a range of additional uses, including data transmission, sensing user input and modes of appliance operation, and detecting human movement and activities on work surfaces.

SilentVoice: Unnoticeable Voice Input by Ingressive Speech

SilentVoice is a new voice input interface device that brings speech-based natural user interfaces (NUIs) into daily life. The proposed "ingressive speech" method enables placement of a microphone very close to the front of the mouth without suffering from pop noise, capturing very soft speech sounds with a good S/N ratio. It keeps voice leakage ultra-low (less than 39 dB(A)), allowing voice input without annoying surrounding people in public and mobile situations as well as in offices and homes. By measuring airflow direction, SilentVoice can easily be separated from normal utterances with 98.8% accuracy; no activation words are needed. It can be used with voice-activated systems via a specially trained voice recognizer; evaluation results yield word error rates (WERs) of 1.8% (speaker-dependent condition) and 7.0% (speaker-independent condition) with a limited dictionary of 85 command sentences. A whisper-like natural voice can also be used for real-time voice communication.

SoundBender: Dynamic Acoustic Control Behind Obstacles

Ultrasound manipulation is growing in popularity in the HCI community with applications in haptics, on-body interaction, and levitation-based displays. Most of these applications share two key limitations: a) the complexity of the sound fields that can be produced is limited by the physical size of the transducers, and b) no obstacles can be present between the transducers and the control point. We present SoundBender, a hybrid system that overcomes these limitations by combining the versatility of phased arrays of transducers (PATs) with the precision of acoustic metamaterials. In this paper, we explain our approach to design and implement such hybrid modulators (i.e., to create complex sound fields) and methods to manipulate the field dynamically (i.e., stretch, steer). We demonstrate our concept using self-bending beams enabling both levitation and tactile feedback around an obstacle and present example applications enabled by SoundBender.

SESSION: Session 6: Visualizations in 2D and 3D

Session details: Session 6: Visualizations in 2D and 3D

Vizir: A Domain-Specific Graphical Language for Authoring and Operating Airport Automations

Automation is one of the key solutions proposed and adopted by international Air Transport research programs to meet the challenges of increasing air traffic. For automation to be safe and usable, it needs to be suitable to the activity it supports, both when authoring it and when operating it. Here we present Vizir, a Domain-Specific Graphical Language and an Environment for authoring and operating airport automations. We used a participatory-design process with Air Traffic Controllers to gather requirements for Vizir and to design its features. Vizir combines visual interaction-oriented programming constructs with activity-related geographic areas and events. Vizir offers explicit human-control constructs, graphical substrates and means to scale-up with multiple automations. We propose a set of guidelines to inspire designers of similar usable hybrid human-automation systems.

MoSculp: Interactive Visualization of Shape and Time

We present a system that visualizes complex human motion via 3D motion sculptures: a representation that conveys the 3D structure swept by a human body as it moves through space. Our system computes a motion sculpture from an input video, and then embeds it back into the scene in a 3D-aware fashion. The user may also explore the sculpture directly in 3D or physically print it. Our interactive interface allows users to customize the sculpture design, for example, by selecting materials and lighting conditions. To provide this end-to-end workflow, we introduce an algorithm that estimates a human's 3D geometry over time from a set of 2D images, and develop a 3D-aware image-based rendering approach that inserts the sculpture back into the original video. By automating the process, our system takes motion sculpture creation out of the realm of professional artists, and makes it applicable to a wide range of existing video material. By conveying 3D information to users, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye, and allow viewers to interpret how different parts of the object interact over time. We validate the effectiveness of motion sculptures with user studies, finding that our visualizations are more informative about motion than existing stroboscopic and space-time visualization methods.

Maestro: Designing a System for Real-Time Orchestration of 3D Modeling Workshops

Instructors of 3D design workshops for children face many challenges, including maintaining awareness of students' progress, helping students who need additional attention, and creating a fun experience while still achieving learning goals. To help address these challenges, we developed Maestro, a workshop orchestration system that visualizes students' progress, automatically detects and draws attention to common challenges faced by students, and provides mechanisms to address common student challenges as they occur. We present the design of Maestro, and the results of a case-study evaluation with an experienced facilitator and 13 children. The facilitator appreciated Maestro's real-time indications of which students were successfully following her tutorial demonstration, and recognized the system's potential to "extend her reach" while helping struggling students. Participant interaction data from the study provided support for our follow-along detection algorithm, and the capability to remind students to use 3D navigation.

Turbulence Ahead - A 3D Web-Based Aviation Weather Visualizer

Although the number of severe aircraft accidents has decreased in recent decades, the number of injuries and fatalities caused by turbulence is still rising. Current aviation weather products are unable to provide a holistic and intuitive view of the overall weather situation, especially in terms of turbulence forecasts. This work introduces an interactive 3D prototype developed with a user-centered design approach. The prototype focuses on the visualization of significant weather charts, which are utilized during flight preparation. An online user study is conducted to compare the prototype with today's 2D paper maps. A total of 64 pilots from an internationally operating airline participated in the study. Among the major findings of the study is that the prototype significantly decreased cognitive load and enhanced spatial awareness and usability. To determine spatial awareness, a novel similarity measure for spatial configurations of aviation weather data is introduced.

SESSION: Session 7: Sensing in the Small Scale

Session details: Session 7: Sensing in the Small Scale

CamTrackPoint: Camera-Based Pointing Stick Using Transmitted Light through Finger

We present CamTrackPoint, a new input interface that can be controlled by finger gestures captured by the front or rear camera of a mobile device. CamTrackPoint mounts a 3D-printed ring on the camera's bezel and senses the movement of the user's finger by tracking the light passing through it. The proposed method provides mobile devices with a new input interface that offers physical force feedback like a pointing stick. The cost of our method is low, as it needs only a simple ring-shaped part on the camera bezel. Moreover, the ring does not interfere with the camera's normal functions when the interface is not in use. We implement a prototype for a smartphone, with two CamTrackPoint rings made for the front and rear cameras, and evaluate its performance and characteristics in an experiment. The proposed technique provides smooth scrolling and could improve gaming experiences on existing smartphones.

Indutivo: Contact-Based, Object-Driven Interactions with Inductive Sensing

We present Indutivo, a contact-based inductive sensing technique for contextual interactions. Our technique recognizes conductive objects (primarily metallic) that are commonly found in households and daily environments, as well as their individual movements when placed against the sensor. These movements include sliding, hinging, and rotation. We describe our sensing principle and how we designed the size, shape, and layout of our sensor coils to optimize sensitivity, sensing range, recognition, and tracking accuracy. Through several studies, we also demonstrate the performance of our proposed sensing technique in environments with varying levels of noise and interference conditions. We conclude by presenting demo applications on a smartwatch, as well as insights and lessons we learned from our experience.

Touch+Finger: Extending Touch-based User Interface Capabilities with "Idle" Finger Gestures in the Air

In this paper, we present Touch+Finger, a new interaction technique that augments touch input with multi-finger gestures for rich and expressive interaction. The main idea is that while one finger is engaged in a touch event, a user can leverage the remaining fingers, the "idle" fingers, to perform a variety of hand poses or in-air gestures to extend touch-based user interface capabilities. To fully understand the use of these idle fingers, we constructed a design space based on conventional touch gestures (i.e., single- and multi-touch gestures) and interaction period (i.e., before and during touch). Considering the design space, we investigated the possible movements of the idle fingers and developed a total of 20 Touch+Finger gestures. Using ring-like devices to track the motion of the idle fingers in the air, we evaluated the Touch+Finger gestures on both recognition accuracy and ease of use. The gestures were classified with a recognition accuracy of over 99%, and we collected both positive and negative comments from 8 participants. We also propose 8 interaction techniques with Touch+Finger gestures that demonstrate extended touch-based user interface capabilities.

FingerArc and FingerChord: Supporting Novice to Expert Transitions with Guided Finger-Aware Shortcuts

Keyboard shortcuts can be more efficient than graphical input, but they are underused by most users. To alleviate this, we present "Guided Finger-Aware Shortcuts" to reduce the gulf between graphical input and shortcut activation. The interaction technique works by recognising when a special hand posture is used to press a key, then allowing secondary finger movements to select among related shortcuts if desired. Novice users can learn the mappings through dynamic visual guidance revealed by holding a key down, but experts can trigger shortcuts directly without pausing. Two variations are described: FingerArc uses the angle of the thumb, and FingerChord uses a second key press. The techniques are motivated by an interview study identifying factors hindering the learning, use, and exploration of keyboard shortcuts. A controlled comparison with conventional keyboard shortcuts shows that the techniques encourage overall shortcut usage, make interaction faster and less error-prone, and provide advantages over simply adding visual guidance to standard shortcuts.

Tacttoo: A Thin and Feel-Through Tattoo for On-Skin Tactile Output

This paper introduces Tacttoo, a feel-through interface for electro-tactile output on the user's skin. Integrated in a temporary tattoo with a thin and conformal form factor, it can be applied on complex body geometries, including the fingertip, and is scalable to various body locations. At less than 35 µm in thickness, it is the thinnest tactile interface for wearable computing to date. Our results show that Tacttoo retains the natural tactile acuity similar to bare skin while delivering high-density tactile output. We present the fabrication of customized Tacttoo tattoos using DIY tools and contribute a mechanism for consistent electro-tactile operation on the skin. Moreover, we explore new interactive scenarios that are enabled by Tacttoo. Applications in tactile augmented reality and on-skin interaction benefit from a seamless augmentation of real-world tactile cues with computer-generated stimuli. Applications in virtual reality and private notifications benefit from high-density output in an ergonomic form factor. Results from two psychophysical studies and a technical evaluation demonstrate Tacttoo's functionality, feel-through properties and durability.

SESSION: Session 8: Authoring, Reading and Writing

Session details: Session 8: Authoring, Reading and Writing

Extending a Reactive Expression Language with Data Update Actions for End-User Application Authoring

Mavo is a small extension to the HTML language that empowers non-programmers to create simple web applications. Authors can mark up any normal HTML document with attributes that specify data elements that Mavo makes editable and persists. But while applications authored with Mavo allow users to edit individual data items, they do not offer any programmatic data actions that can act in customizable ways on large collections of data simultaneously or that modify data according to a computation. We explore an extension to the Mavo language that enables non-programmers to author these richer data update actions. We show that it lets authors create a more powerful set of applications than they could previously, while adding little additional complexity to the authoring process. Through user evaluations, we assess how closely our data update syntax matches how novice authors would instinctively express such actions, and how well they are able to use the syntax we provided.

Immersive Trip Reports

Since the advent of consumer photography, tourists and hikers have made photo records of their trips to share later. Aside from being kept as memories, photo presentations such as slideshows are also shown to others who have not visited the location to try to convey the experience. However, a slideshow alone is limited in conveying the broader spatial context, and thus the feeling of presence in beautiful natural scenery is lost. We address this by presenting the photographs as part of an immersive experience. We introduce an automated pipeline for aligning photographs with a digital terrain model. From this geographic registration, we produce immersive presentations which are viewed either passively as a video, or interactively in virtual reality. Our experimental evaluation verifies that this new mode of presentation successfully conveys the spatial context of the scene and is enjoyable to users.

Non-Linear Editing of Text-Based Screencasts

Screencasts, where recordings of a computer screen are broadcast to a large audience on the web, are becoming popular as an online educational tool. To provide rich interactions with the text within screencasts, there are emerging platforms that support text-based screencasts by recording every character insertion and deletion from the creator and reconstructing its playback on the viewer's screen. However, these platforms lack support for non-linear editing of screencasts, which involves manipulating a sequence of text editing operations. Since text editing operations are tightly coupled in sequence, modifying an arbitrary part of the sequence often creates ambiguity that yields multiple possible results requiring the user's choice for resolution. We present an editing tool with a non-linear editing algorithm for text-based screencasts. The tool allows users to edit any arbitrary part of a text-based screencast while preserving the overall consistency of the screencast. In an exploratory user study, all subjects successfully carried out a variety of screencast editing tasks using our prototype screencast editor.

Multitasking with Play Write, a Mobile Microproductivity Writing Tool

Mobile devices offer people the opportunity to get useful tasks done during time previously thought to be unusable. Because mobile devices have small screens and are often used in divided attention scenarios, people are limited to using them for short, simple tasks; complex tasks like editing a document present significant challenges in this environment. In this paper we demonstrate how a complex task requiring focused attention can be adapted to the fragmented way people work while mobile by decomposing the task into smaller, simpler microtasks. We introduce Play Write, a microproductivity tool that allows people to edit Word documents from their phones via such microtasks. When participants used Play Write while simultaneously watching a video, we found that they strongly preferred its microtask-based editing approach to the traditional editing experience offered by Mobile Word. Play Write made participants feel more productive and less stressed, and they completed more edits with it. Our findings suggest microproductivity tools like Play Write can help people be productive in divided attention scenarios.

Facilitating Document Reading by Linking Text and Tables

Document authors commonly use tables to support arguments presented in the text. But, because tables are usually separate from the main body text, readers must split their attention between different parts of the document. We present an interactive document reader that automatically links document text with corresponding table cells. Readers can select a sentence (or table cells) and our reader highlights the relevant table cells (or sentences). We provide an automatic pipeline for extracting such references between sentence text and table cells for existing PDF documents that combines structural analysis of tables with natural language processing and rule-based matching. On a test corpus of 330 (sentence, table) pairs, our pipeline correctly extracts 48.8% of the references. An additional 30.5% contain only false negative (FN) errors -- the reference is missing table cells. The remaining 20.7% contain false positive (FP) errors -- the reference includes extraneous table cells and could therefore mislead readers. A user study finds that despite such errors, our interactive document reader helps readers match sentences with corresponding table cells more accurately and quickly than a baseline document reader.
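
One simple instance of the rule-based matching component could be linking a sentence to any table cell whose numeric value it quotes verbatim, as sketched below. This is a hypothetical heuristic for illustration, not the paper's full pipeline; the table contents are invented.

```python
# Hedged sketch of a numeric-match rule for linking sentences to table cells.
import re

NUMBER = r"\d+(?:\.\d+)?"

def link_sentence_to_cells(sentence, table):
    """table: dict mapping (row, col) -> cell text. Returns coordinates of matching cells."""
    numbers_in_sentence = set(re.findall(NUMBER, sentence))
    return [coord for coord, text in table.items()
            if set(re.findall(NUMBER, text)) & numbers_in_sentence]

table = {(1, 1): "48.8%", (1, 2): "30.5%", (1, 3): "20.7%"}
print(link_sentence_to_cells("The pipeline correctly extracts 48.8% of the references.", table))
# -> [(1, 1)]
```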

SESSION: Session 9: Electronics

Session details: Session 9: Electronics

ElectroTutor: Test-Driven Physical Computing Tutorials

A wide variety of tools for creating physical computing systems have been developed, but getting started in this domain remains challenging for novices. In this paper, we introduce test-driven physical computing tutorials, a novel application of interactive tutorial systems to better support users in building and programming physical computing systems. These tutorials inject interactive tests into the tutorial process to help users verify and understand individual steps before proceeding. We begin by presenting a taxonomy of the types of tests that can be incorporated into physical computing tutorials. We then present ElectroTutor, a tutorial system that implements a range of tests for both the software and physical aspects of a physical computing system. A user study suggests that ElectroTutor can improve users' success and confidence when completing a tutorial, and save them time by reducing the need to backtrack and troubleshoot errors made on previous tutorial steps.

WiFröst: Bridging the Information Gap for Debugging of Networked Embedded Systems

The rise in prevalence of Internet of Things (IoT) technologies has encouraged more people to prototype and build custom internet connected devices based on low power microcontrollers. While well-developed tools exist for debugging network communication for desktop and web applications, it can be difficult for developers of networked embedded systems to figure out why their network code is failing due to the limited output affordances of embedded devices. This paper presents WiFröst, a new approach for debugging these systems using instrumentation that spans from the device itself, to its communication API, to the wireless router and back-end server. WiFröst automatically collects this data, displays it in a web-based visualization, and highlights likely issues with an extensible suite of checks based on analysis of recorded execution traces.

Assembly-aware Design of Printable Electromechanical Devices

From smart toys and household appliances to personal robots, electromechanical devices play an increasingly important role in our daily lives. Rather than relying on gadgets that are mass-produced, our goal is to enable casual users to custom-design such devices based on their own needs and preferences. To this end, we present a computational design system that leverages the power of digital fabrication and the emergence of affordable electronics such as sensors and microcontrollers. The input to our system consists of a 3D representation of the desired device's shape, and a set of user-preferred off-the-shelf components. Based on this input, our method generates an optimized, 3D printable enclosure that can house the required components. To create these designs automatically, we formalize a new spatio-temporal model that captures the entire assembly process, including the placement of the components within the device, mounting structures and attachment strategies, the order in which components must be inserted, and collision-free assembly paths. Using this model as a technical core, we then leverage engineering design guidelines and efficient numerical techniques to optimize device designs. In a user study, which also highlights the challenges of designing such devices, we find our system to be effective in reducing the entry barriers faced by casual users in creating such devices. We further demonstrate the versatility of our approach by designing and fabricating three devices with diverse functionalities.

RFIMatch: Distributed Batteryless Near-Field Identification Using RFID-Tagged Magnet-Biased Reed Switches

This paper presents a technique enabling distributed batteryless near-field identification (ID) between two passive radio frequency ID (RFID) tags. Each conventional ultra-high-frequency (UHF) RFID tag is modified by connecting its antenna and chip to a reed switch and then attaching a magnet to one of the reed switch's terminals, thus transforming it into an always-on switch. When the two modules approach each other, the magnets counteract each other and turn off both switches at the same time. The co-absence of IDs thus indicates a unique interaction event. In addition to sensing, the module also provides native haptic feedback through magnetic repulsion force, enabling users to perceive the system's state eyes-free, without physical constraints. Additional visual feedback can be provided through an energy-harvesting module and a light-emitting diode. This specific hardware design supports contactless, orientation-invariant sensing, with a form factor compact enough for embedded and wearable use in ubiquitous computing applications.

I/O Braid: Scalable Touch-Sensitive Lighted Cords Using Spiraling, Repeating Sensing Textiles and Fiber Optics

We introduce I/O Braid, an interactive textile cord with embedded sensing and visual feedback. I/O Braid senses proximity, touch, and twist through a spiraling, repeating braiding topology of touch matrices. This sensing topology is uniquely scalable, requiring only a few sensing lines to cover the whole length of a cord. The same topology allows us to embed fiber optic strands to integrate co-located visual feedback. We provide an overview of the enabling braiding techniques, design considerations, and approaches to gesture detection. These allow us to derive a set of interaction techniques, which we demonstrate with different form factors and capabilities. Our applications illustrate how I/O Braid can invisibly augment everyday objects, such as touch-sensitive headphones and interactive drawstrings on garments, while enabling discoverability and feedback through embedded light sources.

SESSION: Session 10: Navigation

Session details: Session 10: Navigation

ShareSpace: Facilitating Shared Use of the Physical Space by both VR Head-Mounted Display and External Users

Currently, "walkable" virtual reality (VR) is achieved by dedicating a room-sized space for VR activities, which is not shared with non-HMD users engaged in their own activities. To achieve the goal of allowing shared use of space for all users while overcoming the obvious difficulty of integrating use with those immersed in a VR experience, we present ShareSpace, a system that allows external users to communicate their needs for physical space to those wearing an HMD and immersed in their VR experience. ShareSpace works by allowing external users to place "shields" in the virtual environment by using a set of physical shield tools. A pad visualizer helps this process by allowing external users to examine the arrangement of virtual shields. We also discuss interaction techniques that minimize the interference between the respective activities of the HMD wearers and the other users of the same physical space. To evaluate our design, a user study was conducted to collect user feedback from participants in four trial scenarios. The results indicate that our ShareSpace system allows users to perform their respective activities with improved engagement and safety. In addition, this study shows that while the HMD users did perceive a considerable degree of interference due to the internal visual indications from the ShareSpace system, they were still more engaged in their VR experience than when interrupted by direct external physical interference initiated by external users.

Scenograph: Fitting Real-Walking VR Experiences into Various Tracking Volumes

When developing a real-walking virtual reality experience, designers generally create virtual locations to fit a specific tracking volume. Unfortunately, this prevents the resulting experience from running on a smaller or differently shaped tracking volume. To address this, we present a software system called Scenograph. The core of Scenograph is a tracking volume-independent representation of real-walking experiences. Scenograph instantiates the experience to a tracking volume of given size and shape by splitting the locations into smaller ones while maintaining narrative structure. In our user study, participants' ratings of realism decreased significantly when existing techniques were used to map a 25 m² experience to 9 m² and an L-shaped 8 m² tracking volume. In contrast, ratings did not differ when Scenograph was used to instantiate the experience.

Increasing Walking in VR using Redirected Teleportation

Teleportation is a popular locomotion technique that lets users safely navigate beyond the confines of available positional tracking space without inducing VR sickness. Because available walking space is limited and teleportation is faster than walking, a risk with using teleportation is that users might end up abandoning walking input and only relying on teleportation, which is considered detrimental to presence. We present redirected teleportation, an improved version of teleportation that uses iterative, non-obtrusive reorientation and repositioning via a portal to redirect the user back to the center of the tracking space, where available walking space is larger. A user study compares the effectiveness, accuracy, and usability of redirected teleportation with regular teleportation using a navigation task in three different environments. Results show that redirected teleportation allows for better utilization of available tracking space than regular teleportation, as it requires significantly fewer teleportations, while users walk more and use a larger portion of the available tracking space.

Adasa: A Conversational In-Vehicle Digital Assistant for Advanced Driver Assistance Features

Advanced Driver Assistance Systems (ADAS) come equipped on most modern vehicles and are intended to assist the driver and enhance the driving experience through features such as lane keeping and adaptive cruise control. However, recent studies show that few people utilize these features, for several reasons. First, ADAS features were not common until recently. Second, most users are unfamiliar with these features and do not know what to expect. Finally, the interface for operating these features is not intuitive. To help drivers understand ADAS features, we present a conversational in-vehicle digital assistant that responds to drivers' questions and commands in natural language. With the system prototyped herein, drivers can ask questions or issue commands using unconstrained natural language in the vehicle, and the assistant, trained using advanced machine learning techniques and coupled with access to vehicle signals, responds in real time based on the conversational context. Results of our system prototyped on a production vehicle are presented, demonstrating its effectiveness in improving driver understanding and usability of ADAS.

Effects of an Adaptive Modality Selection Algorithm for Navigation Systems

Portable electronic navigation systems are often used for directional guidance when humans need to navigate terrain quickly and accurately. Prior work in this field has focused on using either the visual or haptic sensory modality for providing such guidance, and results have indicated that either option may be preferable depending upon the user's specific needs. However, conventional methods involve selecting a single modality based on which will work best with the task the user is most likely to perform and using this modality throughout the duration of the navigation. In this paper, we describe the design and results of a study intended to evaluate the effectiveness of an adaptive modality selection algorithm that dynamically selects a navigation system's directional guidance modality while considering both task-specific benefits and the time-varying effects of switching cost, stimulus-specific adaptation, and habituation. Our findings indicate that use of this algorithm can improve user performance in the presence of multiple simultaneous tasks.

SESSION: Session 11: Mobile Interactions

Session details: Session 11: Mobile Interactions

Ultra-Low-Power Mode for Screenless Mobile Interaction

Smartphones are now a central technology in the daily lives of billions, but they rely on their batteries to perform. Battery optimization is therefore a crucial design constraint for any mobile OS and device. However, even with new low-power methods, the ever-growing touchscreen remains the most power-hungry component. We propose an Ultra-Low-Power Mode (ULPM) for mobile devices that allows touch interaction without visual feedback and yields significant power savings of up to 60% while still allowing interactive tasks to be completed. We demonstrate the effectiveness of the screenless ULPM in text-entry tasks, camera usage, and listening to videos, showing only a small decrease in usability for typical users.

Learning Design Semantics for Mobile Apps

Recently, researchers have developed black-box approaches to mine design and interaction data from mobile apps. Although the data captured during this interaction mining is descriptive, it does not expose the design semantics of UIs: what elements on the screen mean and how they are used. This paper introduces an automatic approach for generating semantic annotations for mobile app UIs. Through an iterative open coding of 73k UI elements and 720 screens, we contribute a lexical database of 25 types of UI components, 197 text button concepts, and 135 icon classes shared across apps. We use this labeled data to learn code-based patterns to detect UI components and to train a convolutional neural network that distinguishes between icon classes with 94% accuracy. To demonstrate the efficacy of our approach at scale, we compute semantic annotations for the 72k unique UIs in the Rico dataset, assigning labels for 78% of the total visible, non-redundant elements.
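The abstract above mentions training a convolutional neural network to distinguish icon classes. As a rough illustration only, the sketch below shows what such a classifier could look like; the 135-class output size comes from the abstract, while the input size, architecture, and framework (PyTorch) are assumptions, not the authors' implementation.

```python
# Minimal sketch of an icon-class CNN, assuming 32x32 grayscale icon crops
# and the 135 icon classes mentioned in the abstract; the actual architecture
# and training setup in the paper may differ.
import torch
import torch.nn as nn

class IconClassifier(nn.Module):
    def __init__(self, num_classes: int = 135):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example: classify a batch of four hypothetical icon crops.
model = IconClassifier()
logits = model(torch.randn(4, 1, 32, 32))
predicted_classes = logits.argmax(dim=1)
```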

Lip-Interact: Improving Mobile Device Interaction with Silent Speech Commands

We present Lip-Interact, an interaction technique that allows users to issue commands on their smartphone through silent speech. Lip-Interact repurposes the front camera to capture the user's mouth movements and recognizes the issued commands with an end-to-end deep learning model. Our system supports 44 commands covering both system-level functionality (launching apps, changing system settings, and handling pop-up windows) and application-level functionality (integrated operations for two apps). We verify the feasibility of Lip-Interact with three user experiments: evaluating the recognition accuracy, comparing it with touch on input efficiency, and comparing it with voiced commands with regard to personal privacy and social norms. We demonstrate that Lip-Interact helps users access functionality efficiently in one step, enables one-handed input when the other hand is occupied, and assists touch to make interactions more fluent.

Self-Powered Gesture Recognition with Ambient Light

We present a self-powered module for gesture recognition that uses small, low-cost photodiodes for both energy harvesting and gesture sensing. Operating in photovoltaic mode, the photodiodes harvest energy from ambient light. At the same time, the instantaneously harvested power of each photodiode is monitored and exploited as a clue for sensing finger gestures in proximity. The harvested power from all photodiodes is aggregated to drive the whole gesture-recognition module, including the micro-controller running the recognition algorithm. We design a robust, lightweight algorithm to recognize finger gestures in the presence of ambient light fluctuations. We fabricate two prototypes to facilitate users' interaction with smart glasses and a smartwatch. Results show 99.7%/98.3% overall precision/recall in recognizing five gestures on the glasses and 99.2%/97.5% precision/recall in recognizing seven gestures on the watch. The system consumes 34.6 µW/74.3 µW for the glasses/watch and can thus be powered by the energy harvested from ambient light. We also test the system's robustness under varying light intensities, light directions, and ambient light fluctuations; the system maintains high recognition accuracy (>96%) in all tested settings.
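To make the sensing idea above more concrete: the harvested-power trace of each photodiode varies as a finger passes over it, and those variations can be matched against per-gesture templates. The sketch below is purely illustrative and is not the paper's recognition algorithm; the photodiode count, window length, and similarity measure are all assumptions.

```python
# Illustrative sketch (not the paper's algorithm): classify a finger gesture
# by correlating per-photodiode harvested-power traces against stored templates.
import numpy as np

def normalize(trace: np.ndarray) -> np.ndarray:
    trace = trace - trace.mean(axis=-1, keepdims=True)
    norm = np.linalg.norm(trace, axis=-1, keepdims=True)
    return trace / np.maximum(norm, 1e-9)

def classify(power_trace: np.ndarray, templates: dict) -> str:
    """power_trace: (n_photodiodes, n_samples) harvested-power readings."""
    query = normalize(power_trace)
    scores = {
        name: float(np.sum(normalize(tmpl) * query))  # summed cosine similarity
        for name, tmpl in templates.items()
    }
    return max(scores, key=scores.get)

# Hypothetical usage with 4 photodiodes and 50 samples per gesture window.
templates = {g: np.random.rand(4, 50) for g in ["swipe_left", "swipe_right", "tap"]}
print(classify(np.random.rand(4, 50), templates))
```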

Robust Annotation of Mobile Application Interfaces in Methods for Accessibility Repair and Enhancement

Accessibility issues in mobile apps make those apps difficult or impossible to access for many people. Examples include elements that fail to provide alternative text for a screen reader, navigation orders that are difficult, or custom widgets that leave key functionality inaccessible. Social annotation techniques have demonstrated compelling approaches to such accessibility concerns on the web, but have been difficult to apply in mobile apps because of the challenges of robustly annotating interfaces. This research develops methods for robust annotation of mobile app interface elements. Designed for use in runtime interface modification, our methods are based on screen identifiers, element identifiers, and screen equivalence heuristics. We implement initial developer tools for annotating mobile app accessibility metadata, evaluate our current screen equivalence heuristics on a dataset of 2038 screens collected from 50 mobile apps, present three case studies implementing runtime repair of common accessibility issues, and examine repair of real-world accessibility issues in 26 apps. Together, these contributions demonstrate strong opportunities for social annotation in mobile accessibility.

SESSION: Session 12: Modeling and Animation

Session details: Session 12: Modeling and Animation

4DMesh: 4D Printing Morphing Non-Developable Mesh Surfaces

We present 4DMesh, a method that combines shrinking and bending thermoplastic actuators with customized geometric algorithms to 4D print and morph centimeter- to meter-sized functional non-developable surfaces. We share two end-to-end inverse design algorithms. With our tools, users can input CAD models of target surfaces and produce the corresponding printable files. The printed flat sheet morphs into the target surface when triggered by heat. This approach saves shipping and packaging costs and enables customizability in the design of relatively large non-developable structures. We designed several functional artifacts that leverage non-developable surfaces for their unique properties in aesthetics, mechanical strength, and geometric ergonomics. In addition, we demonstrate how this technique could be adapted in the future to customize molds for industrial parts (e.g., for cars and boats).

Blocks-to-CAD: A Cross-Application Bridge from Minecraft to 3D Modeling

Learning a new software application can be a challenge, requiring the user to enter a new environment where their existing knowledge and skills do not apply, or worse, work against them. To ease this transition, we propose the idea of cross-application bridges that start with the interface of a familiar application, and gradually change their interaction model, tools, conventions, and appearance to resemble that of an application to be learned. To investigate this idea, we developed Blocks-to-CAD, a cross-application bridge from Minecraft-style games to 3D solid modeling. A user study of our system demonstrated that our modifications to the game did not hurt enjoyment or increase cognitive load, and that players could successfully apply knowledge and skills learned in the game to tasks in a popular 3D solid modeling application. The process of developing Blocks-to-CAD also revealed eight design strategies that can be applied to design cross-application bridges for other applications and domains.

A Mixed-Initiative Interface for Animating Static Pictures

We present an interactive tool to animate the visual elements of a static picture, based on simple sketch-based markup. While animated images enhance websites, infographics, logos, e-books, and social media, creating such animations from still pictures is difficult for novices and tedious for experts. Creating automatic tools is challenging due to ambiguities in object segmentation, relative depth ordering, and non-existent temporal information. With a few user-drawn scribbles as input, our mixed-initiative creative interface extracts repetitive texture elements in an image and supports animating them. Our system also facilitates the creation of multiple layers to enhance depth cues in the animation. Finally, after analyzing the artwork during segmentation, several animation processes automatically generate kinetic textures that are spatio-temporally coherent with the source image. Our results, as well as feedback from our user evaluation, suggest that our system effectively allows illustrators and animators to add life to still images in a broad range of visual styles.

TakeToons: Script-driven Performance Animation

Performance animation is an expressive method for animating characters through human performance. However, character motion is only one part of creating animated stories. The typical workflow also involves writing a script, coordinating actors, and editing recorded performances. In most cases, these steps are done in isolation with separate tools, which introduces friction and hinders iteration. We propose TakeToons, a script-driven approach that allows authors to annotate standard scripts with relevant animation events such as character actions, camera positions, and scene backgrounds. We compile this script into a story model that persists throughout the production process and provides a consistent structure for organizing and assembling recorded performances and for propagating script or timing edits to existing recordings. TakeToons enables writing, performing, and editing to happen in an integrated and interleaved manner that streamlines production and facilitates iteration. Informal feedback from professional animators suggests that our approach can benefit many existing workflows, supporting both individual authors and production teams with many different contributors.

Montage: A Video Prototyping System to Reduce Re-Shooting and Increase Re-Usability

Video prototypes help capture and communicate interaction with paper prototypes in the early stages of design. However, designers sometimes find it tedious to create stop-motion videos for continuous interactions and to re-shoot clips as the design evolves. We introduce Montage, a proof-of-concept implementation of a computer-assisted process for video prototyping. Montage lets designers progressively augment video prototypes with digital sketches, facilitating the creation, reuse and exploration of dynamic interactions. Montage uses chroma keying to decouple the prototyped interface from its context of use, letting designers reuse or change them independently. We describe how Montage enhances video prototyping by combining video with digital animated sketches, encourages the exploration of different contexts of use, and supports prototyping of different interaction styles.

SESSION: Session 13: Bodies and Sensing

Session details: Session 13: Bodies and Sensing

Designing Groundless Body Channel Communication Systems: Performance and Implications

Novel interactions that capacitively couple electromagnetic (EM) fields between devices and the human body are gaining more attention in the human-computer interaction community. One class of these techniques is Body Channel Communication (BCC), a method that overlays physical touch with digital information. Despite the number of published capacitive sensing and communication prototypes, there are no guidelines on how to design such hardware or on its application limitations and possibilities. In particular, wearable (groundless) BCC has proven extremely challenging to implement. Additionally, the exact behavior of the human body as an EM-field medium is still not fully understood today. Consequently, the application domain of BCC technology has not been fully explored. This paper addresses this problem. Based on a recently published general-purpose wearable BCC system, we first present a thorough evaluation of the impact of various technical parameter choices and an exhaustive channel characterization of the human body as a host for BCC. Second, we discuss the implications of these results for the application design space and present guidelines for future wearable BCC systems and their applications. Third, we point out an important observation from the measurements: BCC can employ the whole body as a user interface, not just the hands or feet. We sketch several applications that use these novel interaction modalities.

Orecchio: Extending Body-Language through Actuated Static and Dynamic Auricular Postures

In this paper, we propose using the auricle, the visible part of the ear, as a means of expressive output to extend body language and convey emotional states. Through an exploratory study, we provide an initial set of dynamic and static auricular postures. Using these results, we examined the relationship between emotions and auricular postures, noting that dynamic postures that stretch the top helix at fast (e.g., 2 Hz) and slow (e.g., 1 Hz) speeds conveyed intense and mild pleasantness, while static postures that bend the side or top helix towards the center of the ear were associated with intense and mild unpleasantness. Based on these results, we developed a prototype, called Orecchio, with miniature motors, custom-made robotic arms, and other electronic components. A preliminary user evaluation showed that participants feel more comfortable using expressive auricular postures with people they are familiar with, and that the postures are a welcome addition to the vocabulary of human body language.

Designing Socially Acceptable Hand-to-Face Input

Wearable head-mounted displays combine rich graphical output with an impoverished input space. Hand-to-face gestures have been proposed as a way to add input expressivity while keeping control movements unobtrusive. To better understand how to design such techniques, we describe an elicitation study conducted in a busy public space in which pairs of users were asked to generate unobtrusive, socially acceptable hand-to-face input actions. Based on the results, we describe five design strategies: miniaturizing, obfuscating, screening, camouflaging and re-purposing. We instantiate these strategies in two hand-to-face input prototypes, one based on touches to the ear and the other based on touches of the thumbnail to the chin or cheek. Performance assessments characterize time and error rates with these devices. The paper closes with a validation study in which pairs of users experience the prototypes in a public setting and we gather data on the social acceptability of the designs and reflect on the effectiveness of the different strategies.

Asterisk and Obelisk: Motion Codes for Passive Tagging

Machine-readable passive tags for tagging physical objects are ubiquitous today. We propose Motion Codes, a passive tagging mechanism based on the kinesthetic motion of the user's hand. Here, the tag comprises a visual pattern displayed on a physical surface. To scan the tag and receive the encoded information, the user simply traces their finger over the pattern. The user wears an inertial motion sensing (IMU) ring on the finger that records the traced pattern. We design two motion code schemes, Asterisk and Obelisk, that rely on directional vector data processed from the IMU. We evaluate both schemes for the effects of orientation, size, and data density on their accuracy. We further conduct an in-depth analysis of the sources of motion deviations in the ring data compared to ground-truth finger movement data. Overall, Asterisk achieves 95% accuracy for an information capacity of 16.8 million possible sequences.
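One way to picture how a traced pattern becomes directional vector data is to quantize the successive movement directions of the reconstructed finger trace into a small symbol alphabet. The sketch below is only an illustration of that idea; the actual Asterisk and Obelisk encodings and decoding pipeline are defined in the paper, and the eight-direction alphabet and jitter threshold here are assumptions.

```python
# Illustrative sketch (the actual Asterisk/Obelisk encodings are defined in the
# paper): quantize successive movement direction vectors from the IMU trace
# into one of eight compass symbols, yielding a decodable symbol sequence.
import numpy as np

def direction_symbols(trace: np.ndarray, min_step: float = 0.5) -> list:
    """trace: (n, 2) finger positions reconstructed from the IMU ring."""
    symbols = []
    deltas = np.diff(trace, axis=0)
    for dx, dy in deltas:
        if np.hypot(dx, dy) < min_step:          # ignore jitter
            continue
        angle = np.arctan2(dy, dx)               # radians, -pi..pi
        symbols.append(int(np.round(angle / (np.pi / 4))) % 8)
    return symbols

# Hypothetical L-shaped stroke: right, then down.
trace = np.array([[0, 0], [1, 0], [2, 0], [2, -1], [2, -2]], dtype=float)
print(direction_symbols(trace))   # e.g. [0, 0, 6, 6]
```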

SESSION: Session 14: Novel Haptics

Session details: Session 14: Novel Haptics

Magneto-Haptics: Embedding Magnetic Force Feedback for Physical Interactions

We present magneto-haptics, a design approach for haptic sensations powered by the forces among permanent magnets during active touch. Magnetic force has not been effectively explored in haptic design because it is unintuitive and there is a lack of methods for associating or visualizing magnetic force with haptic sensations, especially for complex magnetic patterns. To represent the haptic sensations of magnetic force intuitively, magneto-haptics formulates a haptic potential from the distribution of magnetic force along the path of motion. This provides a rapid way to compute the relationship between the magnetic phenomena and the haptic mechanism. Thus, we can convert a magnetic force distribution into a haptic sensation model, making the design of magnet-embedded haptic sensations more efficient. We demonstrate three applications of magneto-haptics through interactive interfaces and devices. We further verify our theory by evaluating several magneto-haptic designs through experiments.

RESi: A Highly Flexible, Pressure-Sensitive, Imperceptible Textile Interface Based on Resistive Yarns

We present RESi (Resistive tExtile Sensor Interfaces), a novel sensing approach enabling a new kind of yarn-based, resistive pressure sensing. The core of RESi builds on a newly designed yarn, which features both conductive and resistive properties. We run a technical study to characterize the behaviour of the yarn and to determine the sensing principle. We demonstrate how the yarn can be used as a pressure sensor and discuss how specific issues, such as connecting the soft textile sensor to rigid electronics, can be solved. In addition, we present a platform-independent API that allows rapid prototyping. To show its versatility, we present applications developed with different textile manufacturing techniques, including hand sewing, machine sewing, and weaving. RESi is a novel technology that enables textile pressure sensing to augment everyday objects with interactive capabilities.

Haptic Feedback to the Palm and Fingers for Improved Tactile Perception of Large Objects

When one manipulates a large or bulky object, one uses tactile information at both the fingers and the palm. Our goal is to efficiently convey contact information to a user's hand during interaction with a virtual object. We propose a haptic system that provides haptic feedback to the thumb, middle finger, and index finger as well as to the palm. Our interface design uses a novel compact mechanism to provide haptic information to the palm. We also propose a haptic rendering strategy that calculates haptic feedback continuously. We demonstrate that cutaneous feedback on the palm improves the haptic perception of a large virtual object compared to kinesthetic feedback to the fingers alone.

ElectricItch: Skin Irritation as a Feedback Modality

Grabbing users' attention is a fundamental aspect of interactive systems. However, there is a disconnect between the ways our devices notify us and how our bodies do so naturally. In this paper, we explore the body's modality of itching as a way to provide such natural feedback. We create itching sensations via low-current electric stimulation, which allows us to quickly generate this sensation on demand. In a first study we explore the design space around itching and how changes in stimulation parameters influence the resulting sensation. In a second study we compare vibration feedback and itching integrated in a smartwatch form factor. We find that we can consistently induce itching sensations and that these are perceived as more activating and interrupting than vibrotactile stimuli.

SESSION: Session 15: Touch Interaction

Session details: Session 15: Touch Interaction

InfiniTouch: Finger-Aware Interaction on Fully Touch Sensitive Smartphones

Smartphones are the most successful mobile devices and offer intuitive interaction through touchscreens. Current devices treat all fingers equally and sense touch contacts only on the front of the device. In this paper, we present InfiniTouch, the first system that enables touch input on the whole device surface and identifies the fingers touching the device without external sensors, while keeping the form factor of a standard smartphone. We first developed a prototype with capacitive sensors on the front, the back, and three sides. We then conducted a study to train a convolutional neural network that identifies fingers with an accuracy of 95.78% while estimating their position with a mean absolute error of 0.74 cm. We demonstrate the usefulness of InfiniTouch through multiple use cases, including finger-aware gestures and the use of finger flexion state as an action modifier.

Next-Point Prediction for Direct Touch Using Finite-Time Derivative Estimation

End-to-end latency in interactive systems is detrimental to performance and usability, and comes from a combination of hardware and software delays. While these delays are steadily being addressed by hardware and software improvements, progress is at a decelerating pace. In parallel, short-term input prediction has shown promising results in recent years, in both research and industry, as a complement to these efforts. We describe a new prediction algorithm for direct touch devices based on (i) a state-of-the-art finite-time derivative estimator, (ii) a smoothing mechanism based on input speed, and (iii) a two-step post-filtering of the prediction. Using both a pre-existing dataset of touch input as a benchmark and subjective data from a new user study, we show that this new predictor outperforms the predictors currently available in the literature and industry, based on metrics that model user-defined negative side-effects caused by input prediction. In particular, we show that our predictor can predict 2 to 3 times further ahead than existing techniques with minimal negative side-effects.
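To give a sense of the overall structure of such a predictor: a finite-difference velocity estimate is extrapolated over the expected end-to-end latency, with some damping to limit side-effects at low speeds. The sketch below is a simplified stand-in, not the paper's estimator, smoothing, or post-filtering; the sample rate, latency, and damping rule are assumptions.

```python
# Minimal sketch of next-point prediction by extrapolating a finite-difference
# velocity estimate over the expected end-to-end latency. The paper's actual
# estimator, smoothing, and post-filtering are more sophisticated.
import numpy as np

def predict_next_point(points: np.ndarray, timestamps: np.ndarray,
                       latency: float, max_speed: float = 2000.0) -> np.ndarray:
    """points: (n, 2) recent touch positions in px; timestamps: (n,) seconds."""
    dt = np.diff(timestamps)
    velocities = np.diff(points, axis=0) / dt[:, None]   # finite differences
    velocity = velocities[-3:].mean(axis=0)              # crude smoothing
    speed = np.linalg.norm(velocity)
    # Damp the prediction at low speeds to limit visible side-effects.
    gain = min(speed / max_speed, 1.0)
    return points[-1] + gain * velocity * latency

# Hypothetical usage: 5 samples at ~120 Hz, predicting 50 ms ahead.
ts = np.arange(5) / 120.0
pts = np.column_stack([np.linspace(0, 20, 5), np.linspace(0, 10, 5)])
print(predict_next_point(pts, ts, latency=0.050))
```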

FDSense: Estimating Young's Modulus and Stiffness of End Effectors to Facilitate Kinetic Interaction on Touch Surfaces

Touch input is produced by physically colliding an end effector (e.g., a body part or a stylus) with a touch surface. Prior studies have examined the use of kinematic variables of such collisions, such as position, velocity, force, and impact. However, the nature of a collision can be understood more thoroughly by considering the known physical relationships among directly measurable variables (i.e., its kinetics). Based on collision kinetics, this study proposes a novel touch technique called FDSense. By simultaneously observing the force and contact area measured from a touchpad, FDSense estimates the Young's modulus and stiffness of the object making contact. Our technical evaluation showed that FDSense can effectively estimate the Young's modulus of end effectors made of various materials, and the stiffness of each part of the human hand. We demonstrate two applications of FDSense, digital painting and digital instruments, in which the resulting expression varies significantly depending on the elasticity of the end effector. In a follow-up informal study, participants assessed the technique positively.
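The abstract does not spell out the estimator, but one plausible reading (an assumption on our part, based on a Hertzian-style contact model) is that the elasticity of the end effector shows up as the slope of force against contact area raised to the 3/2 power. The sketch below illustrates that proxy; the model, units, and fitting procedure are assumptions rather than the paper's method.

```python
# Illustrative sketch only: the paper does not spell out its estimator here.
# Assuming a Hertzian-style contact model in which force grows roughly with
# contact_area**1.5, the slope of that relation serves as a proxy for the
# effective elastic modulus of the end effector.
import numpy as np

def elasticity_proxy(forces: np.ndarray, areas: np.ndarray) -> float:
    """forces in N, areas in mm^2, sampled during one touch landing."""
    x = areas ** 1.5
    slope, _ = np.polyfit(x, forces, deg=1)   # least-squares linear fit
    return float(slope)

# Hypothetical samples from a soft (rubber) vs. stiff (plastic) end effector.
areas = np.linspace(5, 40, 20)
soft = 0.002 * areas ** 1.5 + np.random.normal(0, 0.01, 20)
stiff = 0.010 * areas ** 1.5 + np.random.normal(0, 0.01, 20)
print(elasticity_proxy(soft, areas), elasticity_proxy(stiff, areas))
```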

Unimanual Pen+Touch Input Using Variations of Precision Grip Postures

We introduce a new pen input space by forming postures with the same hand that also grips the pen while writing, drawing, or selecting. The postures contact the multitouch surface around the pen to enable detection without special sensors. A formative study investigates the effectiveness, accuracy, and comfort of 33 candidate postures in controlled tasks. The results indicate a useful subset of postures. Using raw capacitive sensor data captured in the study, a convolutional neural network is trained to recognize 10 postures in real time. This recognizer is used to create application demonstrations for pen-based document annotation and vector drawing. A small usability study shows the approach is feasible.

SESSION: Session 16: VR Interaction Techniques

Session details: Session 16: VR Interaction Techniques

RollingStone: Using Single Slip Taxel for Enhancing Active Finger Exploration with a Virtual Reality Controller

We propose using a single slip tactile pixel on virtual reality controllers to produce sensations of finger sliding and textures. When a user moves the controller on a virtual surface, we add a slip opposite to the movement, creating the illusion of a finger sliding on the surface, while varying the slip feedback changes the lateral forces on the fingertip. When coupled with hand motion, the lateral forces can be used to create perceptions of artificial textures. RollingStone has been implemented as a prototype VR controller consisting of a ball-based slip display positioned under the user's fingertip. Within the slip display, a pair of motors actuates the ball, which is capable of generating both short- and long-term two-degree-of-freedom slip feedback. An exploratory study was conducted to confirm that changing the relative motion between the finger and the ball can alter perceptions of the properties of a texture. Two follow-up perception studies examined the minimum changes in slip speed and slip angle that are detectable by users. The results informed the design of our haptic patterns as well as our prototype applications. Finally, a preliminary user evaluation indicated that participants welcomed RollingStone as a useful addition to the range of VR controllers.

Spacetime: Enabling Fluid Individual and Collaborative Editing in Virtual Reality

Virtual Reality enables users to explore content whose physics are limited only by our creativity. Such limitless environments provide many opportunities to explore innovative ways to support productivity and collaboration. We present Spacetime, a scene editing tool built from the ground up to explore novel interaction techniques that empower single-user interaction while maintaining fluid multi-user collaboration in an immersive virtual environment. We achieve this by introducing three novel interaction concepts: the Container, a new interaction primitive that supports a rich set of object manipulation and environmental navigation techniques; Parallel Objects, which enable parallel manipulation of objects to resolve interaction conflicts and support design workflows; and Avatar Objects, which support interaction among multiple users while maintaining an individual user's agency. Evaluated by professional Virtual Reality designers, Spacetime supports powerful individual and fluid collaborative workflows.

Evaluation of Interaction Techniques for a Virtual Reality Reading Room in Diagnostic Radiology

Today, radiologists diagnose three-dimensional medical data using two-dimensional displays. When designing environments with optimal conditions for this process, various aspects such as contrast, screen reflection, and background light have to be considered. As shown in previous research, applying virtual environments in combination with a head-mounted display for diagnostic imaging provides potential benefits for reducing issues of bad posture and diagnostic mistakes. However, there is little research exploring the usability and user experience of such environments. In this work we designed and evaluated different means of interaction to increase radiologists' performance. We created a virtual reality radiology reading room and used it to evaluate three interaction techniques, which allow direct, semi-direct, and indirect manipulation for performing scrolling and windowing tasks, the tasks most important to a radiologist. A study with nine radiologists was conducted and evaluated using the User Experience Questionnaire. Results indicate that direct manipulation is the preferred interaction technique; it outscored the other two control possibilities in attractiveness and pragmatic quality.

SESSION: Session 17: Haptics and VR

Session details: Session 17: Haptics and VR

DualPanto: A Haptic Device that Enables Blind Users to Continuously Interact with Virtual Worlds

We present a new haptic device that enables blind users to continuously interact with spatial virtual environments that contain moving objects, as is the case in sports or shooter games. Users interact with DualPanto by operating the me handle with one hand and by holding on to the it handle with the other hand. Each handle is connected to a pantograph haptic input/output device. The key feature is that the two handles are spatially registered with respect to each other. When guiding their avatar through a virtual world using the me handle, spatial registration enables users to track moving objects by having the device guide the output hand. This allows blind players of a 1-on-1 soccer game to race for the ball or evade an opponent; it allows blind players of a shooter game to aim at an opponent and dodge shots. In our user study, blind participants reported very high enjoyment when using the device to play (6.5/7).

VR Grabbers: Ungrounded Haptic Retargeting for Precision Grabbing Tools

Haptic feedback is important for realistic simulation in virtual reality. However, recreating the haptic experience of hand tools in VR traditionally requires hardware with precise actuators, adding complexity to the system. We propose Ungrounded Haptic Retargeting, an interaction technique that provides a realistic haptic experience for grabbing tools using only passive mechanisms. The technique leverages the ungrounded feedback inherent in grabbing tools, combined with dynamic visual adjustments of their position in virtual reality, to create an illusion of physical presence for virtual objects. To demonstrate the capabilities of this technique, we created VR Grabbers, an exemplary passive VR controller, similar to training chopsticks, with haptic feedback for precise object selection and manipulation. We conducted two user studies based on VR Grabbers. The first study probed the perceptual limits of the illusion; we found that the maximum position difference between the virtual and physical worlds acceptable to users is (-1.48, 1.95) cm. The second study showed that the VR Grabbers controller with Ungrounded Haptic Retargeting enabled outperforms the same controller with it disabled in task performance.
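A common way to realize this kind of visual retargeting (an assumption here, not necessarily the paper's implementation) is to blend the offset between the physical prop and the virtual object into the rendered hand or tool position as the hand approaches the target, so the full offset is applied exactly at contact. A minimal sketch:

```python
# Sketch of a generic retargeting warp (an assumption, not necessarily the
# paper's implementation): blend an offset between the physical and virtual
# target into the rendered tool position as the hand approaches the target.
import numpy as np

def retargeted_position(physical_hand: np.ndarray, start: np.ndarray,
                        physical_target: np.ndarray,
                        virtual_target: np.ndarray) -> np.ndarray:
    total = np.linalg.norm(physical_target - start)
    remaining = np.linalg.norm(physical_target - physical_hand)
    progress = np.clip(1.0 - remaining / max(total, 1e-6), 0.0, 1.0)
    offset = virtual_target - physical_target
    return physical_hand + progress * offset   # full offset applied at contact

# Hypothetical usage: virtual object sits 1.5 cm to the right of the prop.
hand = np.array([0.10, 0.0, 0.0])
print(retargeted_position(hand, np.zeros(3),
                          np.array([0.30, 0.0, 0.0]),
                          np.array([0.30, 0.015, 0.0])))
```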

DextrES: Wearable Haptic Feedback for Grasping in VR via a Thin Form-Factor Electrostatic Brake

We introduce DextrES, a flexible and wearable haptic glove which integrates both kinesthetic and cutaneous feedback in a thin and light form factor (weighing less than 8 g). Our approach is based on an electrostatic clutch generating up to 20 N of holding force on each finger by modulating the electrostatic attraction between flexible elastic metal strips to generate an electrically controlled friction force. We harness the resulting braking force to rapidly render on-demand kinesthetic feedback. The electrostatic brake is mounted onto the index finger and thumb via modular 3D-printed articulated guides which allow the metal strips to glide smoothly. Cutaneous feedback is provided via piezo actuators at the fingertips. We demonstrate that our approach can provide rich haptic feedback under dexterous articulation of the user's hands and provides effective haptic feedback across a variety of different grasps. A controlled experiment indicates that DextrES improves grasping precision for different types of virtual objects. Finally, we report results of a psychophysical study which identifies discrimination thresholds for different levels of holding force.

HydroRing: Supporting Mixed Reality Haptics Using Liquid Flow

Current haptic devices are often bulky and rigid, making them unsuitable for ubiquitous interaction and for scenarios where the user must also interact with the real world. To address this gap, we propose HydroRing, an unobtrusive, finger-worn device that can provide the tactile sensations of pressure, vibration, and temperature on the fingertip, enabling mixed-reality haptic interactions. Unlike previous explorations, HydroRing delivers sensations in active mode using liquid travelling through a thin, flexible latex tube worn across the fingerpad, and has minimal impact on the user's dexterity and perception of stimuli in passive mode. Two studies evaluated participants' ability to perceive and recognize sensations generated by the device, as well as their ability to perceive physical stimuli while wearing it. We conclude by exploring several applications that leverage this mixed-reality haptics approach.

FacePush: Introducing Normal Force on Face with Head-Mounted Displays

This paper presents FacePush, a Head-Mounted Display (HMD) integrated with a pulley system to generate normal forces on a user's face in virtual reality (VR). FacePush works by shifting torques provided by two motors that press upon the user's face via the pulley system. It can generate normal forces of varying strengths and apply them to the surface of the face. To inform the design of FacePush for noticeable and discernible normal forces in VR applications, we conducted two studies to identify the absolute detection threshold and the discrimination threshold of users' perception. After further consideration of user comfort, we determined that two levels of force, 2.7 kPa and 3.375 kPa, are suitable for the FacePush experience, and we implemented three applications that demonstrate the use of discrete and continuous normal force for boxing, diving, and 360-degree guidance in virtual reality. In addition, for the virtual boxing application, we conducted a user study evaluating the user experience in terms of enjoyment and realism and collected users' feedback.

SESSION: Session 18: Web

Session details: Session 18: Web

Arboretum and Arbility: Improving Web Accessibility Through a Shared Browsing Architecture

Many web pages developed today require navigation through visual interaction: seeing, hovering, pointing, clicking, and dragging with the mouse over dynamic page content. These forms of interaction are increasingly popular as developer trends have moved from static, logically structured pages to dynamic, interactive pages. However, they are also often inaccessible to blind web users, who tend to rely on keyboard-based screen readers to navigate the web. Despite existing web accessibility standards, engineering web pages to be equally accessible via both keyboard and visuomotor mouse-based interaction is often not a priority for developers. Improving access to this kind of visual and interactive web content has been a long-standing goal of HCI researchers, but the barriers have proven too varied and unpredictable to be overcome by the proposed solutions: promoting guidelines and best practices, automatically generating accessible versions of pre-existing web pages, or developing human-assisted solutions, such as screen and cursor sharing, which tend to diminish an end user's agency. In this paper we present a real-time, collaborative approach to helping blind web users overcome inaccessible parts of existing web pages. We introduce Arboretum, a new architecture that enables any web user to seamlessly hand off controlled parts of their browsing session to remote users, while maintaining control over the interface via a "propose and accept/reject" mechanism. We illustrate the benefit of Arboretum by using it to implement Arbility, a browser that allows blind users to hand off targeted visual interaction tasks to remote crowd workers. We evaluate the entire system in a study with 9 blind web users, showing that Arbility allows them to interact with web content that was previously difficult to access via a screen reader alone.

Fusion: Opportunistic Web Prototyping with UI Mashups

Modern web development is rife with complexity at all layers, ranging from needing to configure backend services to grappling with frontend frameworks and dependencies. To lower these development barriers, we introduce a technique that enables people to prototype opportunistically by borrowing pieces of desired functionality from across the web without needing any access to their underlying codebases, build environments, or server backends. We implemented this technique in a browser extension called Fusion, which lets users create web UI mashups by extracting components from existing unmodified webpages and hooking them together using transclusion and JavaScript glue code. We demonstrate the generality and versatility of Fusion via a case study where we used it to create seven UI mashups in domains such as programming tools, data science, web design, and collaborative work. Our mashups include replicating portions of prior HCI systems (Blueprint for in-situ code search and DS.js for in-browser data science), extending the p5.js IDE for Processing with real-time collaborative editing, and integrating Python Tutor code visualizations into static tutorials. These UI mashups each took less than 15 lines of JavaScript glue code to create with Fusion.

Rousillon: Scraping Distributed Hierarchical Web Data

Programming by Demonstration (PBD) promises to enable data scientists to collect web data. However, in formative interviews with social scientists, we learned that current PBD tools are insufficient for many real-world web scraping tasks. The missing piece is the capability to collect hierarchically structured data from across many different webpages. We present Rousillon, a programming system for writing complex web automation scripts by demonstration. Users demonstrate how to collect the first row of a 'universal table' view of a hierarchical dataset to teach Rousillon how to collect all rows. To offer this new demonstration model, we developed novel relation selection and generalization algorithms. In a within-subject user study with 15 computer scientists, users wrote hierarchical web scrapers 8 times more quickly with Rousillon than with traditional programming.
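To make the 'universal table' idea concrete: a hierarchical dataset such as authors, their papers, and each paper's citations can be flattened so that every row joins one leaf value with all of its ancestors, and the user only demonstrates how to collect the first such row. The sketch below is our own illustration of that flattening, not Rousillon code; the field names are hypothetical.

```python
# Illustration (not Rousillon code) of the 'universal table' view: a
# hierarchical dataset flattened so that each row joins an ancestor record
# with one of its descendants, which is what the user demonstrates one row of.
def universal_table(authors: list) -> list:
    rows = []
    for author in authors:
        for paper in author["papers"]:
            for citation in paper["citations"]:
                rows.append((author["name"], paper["title"], citation))
    return rows

data = [{
    "name": "A. Author",
    "papers": [{"title": "Paper One", "citations": ["Cite X", "Cite Y"]}],
}]
for row in universal_table(data):
    print(row)
```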

Idyll: A Markup Language for Authoring and Publishing Interactive Articles on the Web

The web has matured as a publishing platform: news outlets regularly publish rich, interactive stories while technical writers use animation and interaction to communicate complex ideas. This style of interactive media has the potential to engage a large audience and more clearly explain concepts, but is expensive and time consuming to produce. Drawing on industry experience and interviews with domain experts, we contribute design tools to make it easier to author and publish interactive articles. We introduce Idyll, a novel "compile-to-the-web" language for web-based interactive narratives. Idyll implements a flexible article model, allowing authors control over document style and layout, reader-driven events (such as button clicks and scroll triggers), and a structured interface to JavaScript components. Through both examples and first-use results from undergraduate computer science students, we show how Idyll reduces the amount of effort and custom code required to create interactive articles.

Ply: A Visual Web Inspector for Learning from Professional Webpages

While many online resources teach basic web development, few are designed to help novices learn the CSS concepts and design patterns experts use to implement complex visual features. Professional webpages embed these design patterns and could serve as rich learning materials, but their stylesheets are complex and difficult for novices to understand. This paper presents Ply, a CSS inspection tool that helps novices use their visual intuition to make sense of professional webpages. We introduce a new visual relevance testing technique to identify properties that have visual effects on the page, which Ply uses to hide visually irrelevant code and surface unintuitive relationships between properties. In user studies, Ply helped novice developers replicate complex web features 50% faster than those using Chrome Developer Tools, and allowed novices to recognize and explain unfamiliar concepts. These results show that visual inspection tools can support learning from complex professional webpages, even for novice developers.