ISS '18- Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces

Full Citation in the ACM Digital Library

SESSION: Keynote Talks

The City Is My Homescreen

This talk will address how to take the city as inspiration for contemporary interaction design and for the architecture of tomorrow's hybrid digital-physical services, spaces and communities. Using numerous project examples, the talk will illustrate how using the city itself as an interface for new forms of infrastructure and services --- via a diverse range of affordances, intelligences, and surfaces --- might create and reinforce both social fabric and wellbeing, rather than diminish it. As opposed to the generally individuating nature of today's interfaces, I will draw inspiration from contemporary architecture, distributed integrated systems, and decentralised machine learning, to describe a richer potential understanding of interactions, spaces and architecture in the city. Ultimately, the key questions underpinning tomorrow's urban interactions --- such as application, ownership, and identity --- will be linked to the potential of augmented modes of legible urban interactions, enabling the co-design of a new kind of city, creating new forms of public value for a broader range of citizens.

City and Architecture as Media

The infusion of digital technology into our built environment is proceeding rapidly --- building facades, facilities, and city landmarks around the world are fast becoming integrated with digital media. The enduring popularity of large-scale projection mapping has opened the door to increased collaboration between architects and digital media professionals; in Japan, many city developers and even the central government have begun to show active interest in transforming our buildings, parks, and cities using digital media. The 2020 Tokyo Summer Olympics and the preceding construction boom ensure that this trend will intensify even further in this nation in the coming years. In this talk, I will provide an overview of cutting-edge practices integrating digital media with architecture / urban design using our own projects as case studies, and explore the future potential of "City and Architecture as Media''.

SESSION: Session 1: Wall Displays in our Daily Lives

Session details: Session 1: Wall Displays in our Daily Lives

Welcome OnBoard: An Interactive Large Surface Designed for Teamwork and Flexibility in Surgical Flow Management

Effective management of surgical suites requires teamwork and constant adaptation. Attempts to introduce computer support can degrade information accuracy, be inflexible, and turn staff collaboration tools into administrative tools. We present OnBoard, an 84" multitouch surface application to support surgical flow management. OnBoard supports multiple users, transposes existing coordination artifacts into interactive objects, and offers interactions such as free writing and the addition or rescheduling of cases. As part of a cyber-infrastructure, OnBoard enables users to benefit from real-time activity sensors in the surgical suite while offering ways to mitigate potential glitches. OnBoard was installed for 2 months in a surgical suite of 12 operating rooms that performed about 300 procedures. We observed its use, ran interviews and questionnaires, and tailored the interactions according to users' needs. OnBoard suggests that tailored, flexible tools might effectively support the surgical staff and play an important part in improving patients' health.

Increasing Passersby Engagement with Public Large Interactive Displays: A Study of Proxemics and Conation

Prior research has shown that large interactive displays deployed in public spaces are often underutilized, or even unnoticed, phenomena connected to 'interaction blindness' and 'display blindness', respectively. To better understand how designers can mitigate these issues, we conducted a field experiment that compared how different visual cues impacted engagement with a public display. The deployed interfaces were designed to progressively reveal more information about the display and entice interaction through the use of visual content designed to evoke direct or indirect conation (the mental faculty related to purpose or will to perform an action), and different animation triggers (random or proxemic). Our results show that random triggers were more effective than proxemic triggers at overcoming display and interaction blindness. Our study of conation - the first we are aware of - found that "conceptual" visuals designed to evoke indirect conation were also useful in attracting people's attention.

Sage River Disaster Information (SageRDI): Demonstrating Application Data Sharing In SAGE2

The Scalable Amplified Group Environment (SAGE2) is an open-source, web-based middleware for driving tiled display walls. SAGE2 allows running multiple applications at once within its workspace. On large display walls, users tend to collaborate using multiple applications in the same space for simultaneous interaction and review. Unfortunately, many of these applications were created independently by different developers and were never intended to interoperate, which greatly limits their potential reusability. SAGE2 developers face system limitations where applications are data-segregated and cannot easily communicate with one another. To counter this problem, we developed the SAGE2 data sharing components. We describe the Sage River Disaster Information (SageRDI) application and the SAGE2 architectural implementations necessary for its operation. SageRDI enables river.go.jp, an existing website that provides water sensor data for Japan, to interact with other SAGE2 applications without modifying the website's server, hosted files, or any of the default SAGE2 applications.

Post-meeting Curation of Whiteboard Content Captured with Mobile Devices

The traditional dry-erase whiteboard is a ubiquitous tool in the workplace, particularly in collaborative meeting spaces. Recent studies show that meeting participants commonly capture whiteboard content using integrated cameras on mobile devices such as smartphones or tablets. Yet, little is known about how people curate or use such whiteboard photographs after meetings, or how their curation practices relate to post-meeting actions. To better understand these post-meeting activities, we conducted a qualitative, interview-based study of 19 frequent whiteboard users to probe their post-meeting practices with whiteboard photos. The study identified a set of unmet design needs for the development of improved mobile-centric whiteboard capture systems. Design implications stemming from the study include the need for mobile devices to quickly capture and effortlessly transfer whiteboard photos to productivity-oriented devices and to shared-access tools, and the need to better support the extraction of whiteboard content directly into other productivity application tools.

SESSION: Session 2: Multi-camera Set-ups for Multi-user Spaces

Session details: Session 2: Multi-camera Set-ups for Multi-user Spaces

Browsing Group First-Person Videos with 3D Visualization

This work presents a novel user interface applying 3D visualization to understand complex group activities from multiple first-person videos. The proposed interface is designed to help video viewers easily understand the collaborative relationships of a group activity based on where each worker is located in a workspace and how multiple workers are positioned relative to one another during the group activity. More specifically, the interface not only shows all recorded first-person videos but also visualizes the 3D position and orientation of each viewpoint (i.e., the 3D position of each worker wearing a head-mounted camera) with a reconstructed 3D model of the workspace. Our user study confirms that the 3D visualization helps video viewers understand a worker's geometric information and the collaborative relationships of the group activity easily and accurately.

EagleView: A Video Analysis Tool for Visualising and Querying Spatial Interactions of People and Devices

To study and understand group collaborations involving multiple handheld devices and large interactive displays, researchers frequently analyse video recordings of interaction studies to interpret people's interactions with each other and/or devices. Advances in ubicomp technologies allow researchers to record spatial information through sensors in addition to video material. However, the volume of video data and high number of coding parameters involved in such an interaction analysis makes this a time-consuming and labour-intensive process. We designed EagleView, which provides analysts with real-time visualisations during playback of videos and an accompanying data-stream of tracked interactions. Real-time visualisations take into account key proxemic dimensions, such as distance and orientation. Overview visualisations show people's position and movement over longer periods of time. EagleView also allows the user to query people's interactions with an easy-to-use visual interface. Results are highlighted on the video player's timeline, enabling quick review of relevant instances. Our evaluation with expert users showed that EagleView is easy to learn and use, and the visualisations allow analysts to gain insights into collaborative activities.

Velt: A Framework for Multi RGB-D Camera Systems

We present Velt, a flexible framework for multi RGB-D camera systems. Velt supports modular real-time streaming and processing of multiple RGB, depth and skeleton streams in a camera network. RGB-D data from multiple devices can be combined into 3D data like point clouds. Furthermore, we present an integrated GUI, which enables viewing and controlling all streams, as well as debugging and profiling performance. The node-based GUI provides access to everything from high level parameters like frame rate to low level properties of each individual device. Velt supports modular preprocessing operations like downsampling and cropping of streaming data. Furthermore, streams can be recorded and played back. This paper presents the architecture and implementation of Velt.
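The abstract does not detail how Velt fuses its streams; as an illustration only, the sketch below shows the standard pinhole back-projection that turns one depth frame into a point cloud, the kind of per-camera step a multi RGB-D framework performs before merging devices. The function name and intrinsics are placeholders, not Velt's API.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project an H x W depth image into an N x 3 point cloud (metres).

    fx, fy, cx, cy are the depth camera's pinhole intrinsics;
    depth_scale converts raw units (e.g. millimetres) to metres.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) * depth_scale
    valid = z > 0                            # drop pixels with no depth reading
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Clouds from several cameras can then be merged by applying each camera's
# extrinsic pose (rotation + translation) before concatenating the arrays.
```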

Hands-Free Remote Collaboration Over Video: Exploring Viewer and Streamer Reactions

Video conferencing is often used to connect geographically distributed users and can help them engage in a shared activity in a mobile setting for remote collaboration. One key limitation of today's systems, however, is that they typically provide only one camera view, which often requires users to hold a mobile phone during the activity. In this paper, we explore the opportunities and challenges of utilizing multiple cameras to create a hands-free, mobile, remote collaboration experience. Results from a study with 54 participants (18 groups of three) revealed that remote viewers could actively participate in an activity with the help of our system and that both the local participant and remote viewers preferred hands-free mode over traditional video conferencing. Our study also revealed insights on how remote viewers manage multiple camera views and showed how automation can improve the experience.

SESSION: Session 3: Gestures and Selection for Interactive Surfaces

Session details: Session 3: Gestures and Selection for Interactive Surfaces

Replicating User-defined Gestures for Text Editing

Although initial ideas for building intuitive and usable handwriting applications originated nearly 30 years ago, recent advances in stylus technology and handwriting recognition are now making handwriting a viable text-entry option on touchscreen devices. In this paper, we use modern methods to replicate studies from the 1980s that elicited hand-drawn gestures from users for common text-editing tasks, in order to determine a "guessable" gesture set and to determine whether the early results still apply given the ubiquity of touchscreen devices today. We analyzed 360 gestures, performed with either the finger or a stylus, from 20 participants for 18 tasks on a modern tablet device. Our findings indicate that the mental model of "writing on paper" found in past literature largely holds even today, although today's users' mental model also appears to support manipulating the paper elements as opposed to annotating them. In addition, users prefer using the stylus over finger touch for text editing, and we found that manipulating "white space" is complex. We present our findings as well as a stylus-based, user-defined gesture set for text editing.

How Memorizing Positions or Directions Affects Gesture Learning?

Various techniques have been proposed to speed up command selection. Many of them rely either on directional gestures (e.g., Marking menus) or on pointing gestures using a spatially stable arrangement of items (e.g., FastTap). Both types of techniques are known to leverage memorization, but not necessarily for the same reasons. In this paper, we investigate whether using directions or positions affects gesture learning. Our study shows that, while recall rates are not significantly different, participants used the novice mode more often and spent more time while learning commands with directional gestures, and they also reported more physical and mental effort. Moreover, this study highlights the importance of semantic relationships between gestural commands and reports on the memorization strategies that participants developed.

How to Hold Your Phone When Tapping: A Comparative Study of Performance, Precision, and Errors

We argue that future mobile interfaces should differentiate between various contextual factors like grip and active fingers, adjusting screen elements and behaviors automatically, thus moving from merely responsive design to responsive interaction. Toward this end, we conducted a systematic study of screen taps on a mobile device to find out how the way you hold your device impacts performance, precision, and error rate. In our study, we compared three commonly used grips and found that the popular one-handed grip, tapping with the thumb, yields the worst performance. The two-handed grip, tapping with the index finger, is the most precise and least error-prone method, especially in the upper and left halves of the screen. In landscape orientation (two-handed, tapping with both thumbs) we found the best overall performance, with a drop in performance in the middle of the screen. Additionally, we found differentiated trade-off relationships and directional effects. From our findings we derive design recommendations for interface designers and give an example of how to make interactions truly responsive to the context-of-use.

Risk Effects of Surrounding Distractors Imposing Time Penalty in Touch-Pointing Tasks

Optimal target size has been studied for touch-GUI design. In addition, because the degree of risk of tapping unintended targets significantly affects users' strategy, some researchers have investigated the effects of margins (or gaps) between GUI items and the risk level (here, a penalty time) on user performance. From our touch-pointing tasks on grid-arranged icons, we found that a small gap and a long penalty time did not significantly change the task completion time, but they did negatively affect the error rates. As a design implication, we recommend using 1-mm gaps to balance space occupation and user performance. We also found that we could not estimate user performance using Fitts' and FFitts' laws, probably because participants had to focus their attention on avoiding distractors while aiming for the target.
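For readers unfamiliar with the models named above, the sketch below gives the standard Shannon formulation of Fitts' law; FFitts' law replaces the nominal width W with an effective width derived from a finger-touch distribution model. The coefficients here are placeholders that would normally be fitted to the measured data.

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.15):
    """Predicted movement time (s) under Fitts' law, Shannon formulation.

    distance and width share the same unit; a and b are empirically fitted
    intercept and slope coefficients (placeholder values here).
    """
    index_of_difficulty = math.log2(distance / width + 1.0)   # in bits
    return a + b * index_of_difficulty

# Example: a 60 mm reach to a 5 mm icon -> ID = log2(13), roughly 3.7 bits, about 0.76 s
print(fitts_movement_time(60, 5))
```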

SESSION: Session 4: Mobile and Wearable Text Entry

Session details: Session 4: Mobile and Wearable Text Entry

Optimal-T9: An Optimized T9-like Keyboard for Small Touchscreen Devices

T9-like keyboards (i.e., 3×3 layouts) have been commonly used on small touchscreen devices to mitigate the problem of tapping tiny keys with imprecise finger touch (e.g., T9 is the default keyboard on the Samsung Gear 2). In this paper, we propose a computational approach to designing optimal T9-like layouts by considering three key factors: clarity, speed, and learnability. In particular, we devised a clarity metric to model word collisions (i.e., words with identical tapping sequences), used the Fitts-Digraph model to predict speed, and introduced a Qwerty-bounded constraint to ensure high learnability. Founded upon rigorous mathematical optimization, our investigation led to Optimal-T9, an optimized T9-like layout which outperformed the original T9 and other T9-like layouts. A user study showed that its average input speed was 17% faster than T9 and 26% faster than a T9-like layout from the literature. Optimal-T9 also drastically reduced the error rate, by 72% over a regular Qwerty keyboard. Subjective ratings were in favor of Optimal-T9: it had the lowest physical and mental demands and the best perceived performance among all the tested keyboards. Overall, our investigation has led to a more efficient and more accurate T9-like layout than the original T9. Such a layout would immediately benefit both T9-like keyboard users and small touchscreen device users.
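The clarity metric itself is not spelled out in the abstract; as an illustration of the underlying idea only, the hypothetical sketch below scores a letter-to-key mapping by the frequency mass of words that collide (share a tap sequence), which is the quantity a T9-like layout optimizer would try to minimize.

```python
from collections import defaultdict

def collision_weight(layout, word_freq):
    """Fraction of word-frequency mass involved in tap-sequence collisions.

    layout    : dict mapping each letter to a key id (9 keys for a T9-like design)
    word_freq : dict mapping words to relative frequencies (summing to 1)
    """
    groups = defaultdict(list)
    for word, freq in word_freq.items():
        seq = tuple(layout[ch] for ch in word)       # tap sequence of the word
        groups[seq].append(freq)
    # total frequency of words whose tap sequence is shared with another word
    return sum(sum(fs) for fs in groups.values() if len(fs) > 1)

# Hypothetical toy example with a 2-key "layout" and three words:
layout = {'o': 0, 'n': 1, 'i': 0}
print(collision_weight(layout, {'on': 0.5, 'in': 0.3, 'no': 0.2}))  # 'on' and 'in' collide -> 0.8
```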

DiaQwerty: QWERTY Variants to Better Utilize the Screen Area of a Round or Square Smartwatch

The QWERTY keyboard has a wide form factor, whereas smartwatches frequently have round or square form factors. Even if extra space for an input field is considered, the aspect ratio of the QWERTY keyboard may not be optimal for a round or square smartwatch. We surmised that a narrower keyboard would better utilize the screen area of a round or square smartwatch, thereby enabling larger keys and improved performance. Larger keys, however, may not necessarily result in better performance because of the reduced familiarity of a modified keyboard layout. To investigate this, we designed DiaQwerty keyboards, which are QWERTY variants with an aspect ratio of 10:7, and compared them with a plain QWERTY keyboard and a SplitBoard on round and square smartwatches. For a 33 mm round watch, DiaQwerty was comparable to QWERTY and was faster than SplitBoard. For a 24 mm × 30 mm square watch, DiaQwerty was faster than both QWERTY and SplitBoard. In the latter case, DiaQwerty achieved a 15% improvement over QWERTY. We concluded that the advantages of the enlarged keys outweighed the disadvantages of the reduced familiarity of the modified layout on a square watch.

Design and Evaluation of Semi-Transparent Keyboards on a Touchscreen Tablet

As tablet computers host more productivity applications, efficient text entry is becoming more important. A soft keyboard, the primary text entry interface for tablets, however, often competes with applications for the limited screen space. A promising solution to this problem may be a semi-transparent soft keyboard (STK), which can share the screen with an application. A few commercial STK products are already available, but research questions about STK design have not yet been explored in depth. Therefore, we conducted three experiments to investigate 1) the effect of the transparency level on usability, 2) diverse design options for an STK, and 3) the effect of an STK at different text caret positions. The results indicate that STKs with 50% opacity showed balanced performance; that well-designed STKs were acceptable in both content-reading and typing situations, reaching 90-100% of an opaque keyboard's overall performance; and that the text caret could intrude into the STK down to the number row.

A Small Virtual Keyboard is Better for Intermittent Text Entry on a Pen-Equipped Tablet

Tap-typing with a pen on a virtual keyboard is often used for intermittent text entry on a pen-equipped tablet; however, this has usability issues owing to the large size of the virtual keyboard. We speculated that tap-typing with a pen on a tablet may not require such a large keyboard, but there was no empirical study to help us to determine the best virtual keyboard size for it. Therefore, we conducted two experiments. In the first experiment, we compared virtual keyboards with different key sizes (3, 4, 5, 6, 8, 10, and 13 mm) using a text entry task. The results showed that keyboards with 6-, 8-, and 10-mm keys were the best when text entry speed, task load, and preference were considered. In the second experiment, a keyboard with 6-mm keys was compared with other conventional pen text entry options using a presentation slide editing task with a pen. The results showed that the keyboard with 6-mm keys performed better than the others in terms of preference, System Usability Scale (SUS) scores, and ease-of-learning scores in a Usefulness, Satisfaction, and Ease of use (USE) questionnaire.

SESSION: Session 5: Interactive Spaces: Moving and on the Move

Session details: Session 5: Interactive Spaces: Moving and on the Move

FingerInput: Capturing Expressive Single-Hand Thumb-to-Finger Microgestures

Single-hand thumb-to-finger microgestures have shown great promise for expressive, fast and direct interactions. However, pioneering gesture recognition systems each focused on a particular subset of gestures. We still lack systems that can detect the set of possible gestures to a fuller extent. In this paper, we present a consolidated design space for thumb-to-finger microgestures. Based on this design space, we present a thumb-to-finger gesture recognition system using depth sensing and convolutional neural networks. It is the first system that accurately detects the touch points between fingers as well as finger flexion. As a result, it can detect a broader set of gestures than the existing alternatives, while also providing high-resolution information about the contact points. The system shows an average accuracy of 91% for the real-time detection of 8 demanding thumb-to-finger gesture classes. We demonstrate the potential of this technology via a set of example applications.

On the Effects of a Nomadic Multisensory Solution for Children's Playful Learning

The paper describes the design of Ahù, a nomadic smart multisensory solution for children's playful learning. Ahù has a combination of features that make it unique among currently available multisensory technologies for learning: it has a totem shape, delivers multimedia stimuli through voice, lights and multimedia content projected onto two different fields on the floor, and allows cooperative and competitive games. The system is nomadic, so it can be moved around, and can be controlled with easy input methods. An exploratory study investigates Ahù's potential to promote engagement, collaboration, socialization and learning in classrooms.

The Haptic Video Player: Using Mobile Robots to Create Tangible Video Annotations

Video and animation are common ways of delivering concepts that cannot be easily communicated through text. This visual information is often inaccessible to blind and visually impaired people, and alternative representations such as Braille and audio may leave out important details. Audio-haptic displays, along with supplemental descriptions, allow for the presentation of complex spatial information. We introduce the Haptic Video Player, a system for authoring and presenting audio-haptic content from videos. The Haptic Video Player presents video using mobile robots that can be touched as they move over a touch screen. We describe the design of the Haptic Video Player system, and present user studies with educators and blind individuals that demonstrate the ability of this system to render dynamic visual content non-visually.

AdapTable: Extending Reach over Large Tabletops through Flexible Multi-Display Configuration

Large interactive tabletops are beneficial for various tasks involving exploration and visualization, but regions of the screen far from users can be difficult to reach. We propose AdapTable, a concept and prototype of a flexible multi-display tabletop that can physically reconfigure its layout, allowing for interaction with difficult-to-reach regions. We conducted a design study in which we found that users preferred to change screen layouts for full-screen interaction, motivated by reduced physical demands and frustration. We then prototyped AdapTable using four actuated tabletop displays, each propelled by a mobile robot, with a touch menu chosen to control the layout. Finally, we conducted a user study to evaluate how well AdapTable addresses the reaching problem compared with a conventional panning technique. Our findings show that AdapTable provides a more efficient method for complex full-screen interaction.

SESSION: Session 6: Novel Modalities: Sand, Air and Water

Session details: Session 6: Novel Modalities: Sand, Air and Water

Scoopirit: A Method of Scooping Mid-Air Images on Water Surface

We propose Scoopirit, a system that displays an image standing vertically at an arbitrary three-dimensional position under and on a water surface. Users can scoop the image from an arbitrary horizontal position on the water surface with their bare hands. So that it can be installed in public spaces containing water surfaces visited by an unspecified number of people, Scoopirit displays images without adding anything to the water and enables users to interact with the image without wearing a device. Because of the configuration of the system, users can easily understand the relationship between the real space and the image. This study makes three contributions. First, we propose a method to scoop the mid-air image at an arbitrary horizontal position. Second, we design an offset that is useful for measuring, with an RGB-D camera, the water level of the scooped water surface. Third, we propose a method to design interaction areas.

Fade-in Pixel: A Selective and Gradual Real-World Pixelization Filter

Our goal is to explore novel techniques for blurring real-world objects with a physical filter that changes their appearance and naturally guides the viewer's line of sight. To reach this goal, we have developed a system, called "Fade-in Pixel", that enables real-world objects to be pixelized gradually and selectively. We realized this gradual and selective pixelization by 3D printing a transparent lens-array filter and controlling the filling and discharging of a transparent liquid having the same refractive index as the transparent resin. The result is that objects in the real world appear pixelized when the liquid is discharged and appear clear when the liquid is filled. This paper describes the motivation for this study, along with the design, implementation, and interactive applications of Fade-in Pixel.

Sand to Water: Manipulation of Liquidness Perception with Fluidized Sand and Spatial Augmented Reality

Recently, there has been renewed interest in the Fluidized Bed Interface. It can reversibly provide the haptic feedback of sand and fluid, but its visual appearance is difficult to change. We therefore propose a novel spatial augmented reality technique to provide a visual impression of liquidness. Our system gives visual feedback to users by projecting liquid images on the surface of the interface. The image is generated on the basis of human visual cognition and differs from a mere fluid simulation. As a result, the visual liquidness of our system was emphasized compared to the plain Fluidized Bed Interface. In addition, the emphasizing effect was confirmed during the fluidized bed phenomenon, showing that our system is superior in displaying a liquid image. Our system is expected not only to be applied in entertainment but also to contribute to the elucidation of human liquid-cognition mechanisms.

LeviCursor: Dexterous Interaction with a Levitating Object

We present LeviCursor, a method for interactively moving a physical, levitating particle in 3D with high agility. The levitating object can move continuously and smoothly in any direction. We optimize the transducer phases for each possible levitation point independently. Using precomputation, our system can determine the optimal transducer phases within a few microseconds and achieves round-trip latencies of 15 ms. Due to our interpolation scheme, the levitated object can be controlled almost instantaneously with sub-millimeter accuracy. We present a particle stabilization mechanism which ensures that the levitating particle is always in the main levitation trap. Lastly, we conduct the first Fitts' law-type pointing study with a real 3D cursor, where participants control the movement of the levitated cursor between two physical targets. The results of the user study demonstrate that using LeviCursor, users reach performance comparable to that of a mouse pointer.
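The abstract does not reproduce the phase optimization itself; the sketch below only illustrates the textbook starting point, computing focusing phases for an ultrasonic transducer array toward a target point with an optional twin-trap signature. LeviCursor's per-point optimization, precomputation, and interpolation go beyond this, and the parameter values here are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0          # m/s in air
FREQ = 40_000.0                 # typical 40 kHz ultrasonic transducers
WAVELENGTH = SPEED_OF_SOUND / FREQ

def focusing_phases(transducer_positions, target, twin_trap=True):
    """Phase (radians) per transducer so the emitted waves align at `target`.

    transducer_positions : (N, 3) array of emitter positions in metres
    target               : (3,) levitation point in metres
    twin_trap            : add a pi offset to half of the array to create
                           a pressure-minimum trap that holds the particle
    """
    d = np.linalg.norm(transducer_positions - target, axis=1)
    phases = (-2.0 * np.pi * d / WAVELENGTH) % (2.0 * np.pi)
    if twin_trap:
        # flip one half of the array (split along x) by pi
        half = transducer_positions[:, 0] < np.median(transducer_positions[:, 0])
        phases[half] = (phases[half] + np.pi) % (2.0 * np.pi)
    return phases
```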

SESSION: Session 7: Interactive and Augmented Spaces

Session details: Session 7: Interactive and Augmented Spaces

MlioLight: Projector-camera Based Multi-layered Image Overlay System for Multiple Flashlights Interaction

We propose MlioLight, a projector-camera unit-based projection-mapping system for overlaying multiple images on a screen or on real world objects using multiple flashlight-type devices. We focus on detecting the areas of overlapping lights in a multiple light source scenario and overlaying multi-layered information on real world objects in these areas. To blend multiple images, we developed methods for light identification and overlapping area detection using wireless synchronization between a high-speed camera and multiple flashlight devices. In this study, we describe the concept of MlioLight as well as its prototype implementation and applications. In addition, we present an evaluation of the proposed prototype.

ColourAIze: AI-Driven Colourisation of Paper Drawings with Interactive Projection System

ColourAIze is an interactive system that analyses black and white drawings on paper, automatically determines realistic colour fills using artificial intelligence (AI) and projects those colours onto the paper within the line art. In addition to selecting between multiple colouring styles, the user can specify local colour preferences to the AI via simple stylus strokes in desired areas of the drawing. This allows users to immediately and directly view potential colour fills for paper sketches or published black and white artwork such as comics. ColourAIze was demonstrated at the Winter 2017 Comic Market in Tokyo, where it was used by more than a thousand visitors. This short paper describes the design of the system and reports on usability observations gathered from demonstrators at the fair.

Floor-Projected Guidance Cues for Collaborative Exploration of Spatial Augmented Reality Setups

In this paper we present a floor-based user interface (UI) that allows multiple users to explore a spatial augmented reality (SAR) environment with both monoscopic and stereoscopic projections. Such environments are characterized by a low level of user instrumentation and the capability of providing a shared interaction space for multiple users. However, projector-based systems using stereoscopic display are usually single-user setups, since they can provide the correct perspective for only one tracked person. To address this problem, we developed a set of guidance cues, which are projected onto the floor in order to assist multiple users regarding (i) the interaction with the SAR system, (ii) the identification of regions of interest and ideal viewpoints, and (iii) the collaboration with each other. In a user study with 40 participants, all cues were evaluated and a set of feedback elements, which are essential to guarantee an intuitive, self-explaining interaction, was identified. The results of the study also indicate that the developed UI guides users to more favorable viewpoints and is therefore able to improve the experience in a multi-user SAR environment.

Awareness Techniques to Aid Transitions between Personal and Shared Workspaces in Multi-Display Environments

In multi-display environments (MDEs) that include large shared displays and desktops, users can engage in both close collaboration and parallel or personal work. Transitioning between the displays can be challenging in complex settings, such as crisis management rooms. To provide workspace awareness and to facilitate these transitions, we design and implement three interaction techniques that display users' activities. We explore how and where to display this activity: briefly on the shared display, or more persistently on a peripheral floor display. In a user study, motivated by the context of a crisis room where multiple operators with different roles need to cooperate, we tested the usability of the techniques and gained insights on such transitions in systems running on MDEs.

SESSION: Posters

A Study of Material Sonification in Touchscreen Devices

Even in the digital age, designers largely rely on physical material samples to illustrate their products, as existing visual representations fail to sufficiently reproduce the look and feel of real-world materials. Here, we investigate the use of interactive material sonification as an additional sensory modality for communicating well-established material qualities like softness, pleasantness or value. We developed a custom application for touchscreen devices that receives tactile input and translates it into material rubbing sound using granular synthesis. We used this system to perform a psychophysical study, in which the ability of the user to rate subjective material qualities is evaluated, with the actual material samples serving as reference stimuli.
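Granular synthesis is named but not described in the abstract; the sketch below is a minimal grain scheduler over a recorded rubbing sample, with grain density driven by a touch-speed parameter. The actual touch-to-sound mapping used in the study's application is an assumption here.

```python
import numpy as np

def granular_rub(sample, touch_speed, sr=44100, duration=1.0,
                 grain_ms=40, base_grains_per_s=20):
    """Render `duration` seconds of rubbing sound from `sample` (mono float array).

    touch_speed (0..1, e.g. normalized finger velocity from the touchscreen)
    scales how densely grains are scheduled, so faster rubbing sounds denser.
    Assumes len(sample) is longer than one grain.
    """
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)                 # smooth grain envelope
    out = np.zeros(int(sr * duration))
    n_grains = int(base_grains_per_s * duration * (0.5 + touch_speed))
    rng = np.random.default_rng(0)
    for _ in range(n_grains):
        src = rng.integers(0, len(sample) - grain_len)   # random grain source
        dst = rng.integers(0, len(out) - grain_len)      # random placement in output
        out[dst:dst + grain_len] += sample[src:src + grain_len] * window
    peak = float(np.max(np.abs(out)))
    return out / peak if peak > 0 else out
```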

A Tabletop System Using an OmniDirectional Projector-Camera

We propose an omnidirectional projection system that embeds a projector with an ultra-wide-angle lens in the table. AR markers are attached to the target surfaces so that the system can track them with an omnidirectional camera. Finally, we developed a prototype of the system and introduced some applications to show the effectiveness of our method.

Aligned Functional Paste: Enhancing Handwriting with Retrieval Results on a Large Surface

This paper proposes a new interaction method called aligned functional paste. The method allows users to select handwritten keywords, apply a mapping function to the keywords, and align the results of the mapping function, all in a few steps.

An Encounter Type VR System Aimed at Exhibiting Wall Material Samples for Show House

In this research, we propose a system that can change the tactile material of the wall surface in a VR show house. To present multiple types of wall materials, an encounter-type tactile presentation unit, with several wall materials mounted on a uniaxial robot, presents a specific wall material according to the movement of the user's hand. With this encounter-type approach, users can experience the tactile sensations of multiple kinds of realistic wall materials. We examined the specifications necessary for such presentation, constructed the system, and conducted a user study to examine the effect of the proposed system, comparing it with visual-only and visual + force-only conditions.

Body-Prop Interaction: Augmented Open Discs and Egocentric Body-Based Interaction for Exploring Immersive Visualizations

Immersive visualizations of three-dimensional scientific data pose unique interaction challenges. One challenge is to interact with the visual volumetric representations of data using direct manipulation in a way that facilitates exploration and discovery, yet maintains data relationships. In this paper, we present Body-Prop Interaction, a novel tangible multimodal interface for immersive visualizations. Our interaction technique combines multiple input and output modalities with 3D-printed open discs that are tracked and augmented with virtual information. To demonstrate our technique, we implemented interactive tasks for a volume visualization of graphite billet data. While demonstrated for this type of data visualization, our novel interaction technique has the potential to be used to interact with other types of augmented and virtual reality content.

Cross-Ratio Based Gaze Estimation using Polarization Camera System

Eye-based interaction is one solution for achieving intuitive interfaces on surfaces such as large displays, and thus various eye-tracking methods have been studied. Cross-ratio based gaze estimation, which determines the point-of-gaze on a screen, has been studied actively as a novel eye-tracking method because it does not require a hardware calibration defining the relationship between the camera and the monitor. We expect that the cross-ratio method will be a breakthrough for eye-based interaction under various circumstances, such as tabletop devices and digital whiteboards. In eye tracking, near-infrared light is often emitted, and in the cross-ratio based method at least four LEDs are placed at the display corners to detect the screen plane. However, long-time exposure to near-infrared light can fatigue the user. Therefore, in this study, we attempted to extract the screen area correctly without emitting near-infrared radiation. A polarizing filter is included in the display, and thus the visibility of the screen in the camera image can be controlled by the polarization direction of an external polarizing filter. We propose gaze estimation based on the cross-ratio method using a developed polarization camera system, which can capture two polarized images at different angles simultaneously. Further, we confirmed that the point-of-gaze could be estimated using the screen reflection detected by computing the differences between the two images, without near-infrared emission.
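The cross-ratio itself is not defined in the abstract; for reference, the sketch below computes the projective cross-ratio of four collinear points, the invariant that cross-ratio based gaze estimation exploits to relate the screen corners observed in the corneal reflection to the physical display corners without hardware calibration.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (A,B;C,D) of four collinear 2D points.

    This quantity is invariant under projective transformations, which is
    what allows corner features seen in the eye image to be related to the
    actual display corners.
    """
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    direction = (b - a) / np.linalg.norm(b - a)
    # signed 1D coordinates of the four points along their shared line
    ta, tb, tc, td = (np.dot(p - a, direction) for p in (a, b, c, d))
    return ((tc - ta) * (td - tb)) / ((tc - tb) * (td - ta))

# Collinear example: points at 0, 1, 2, 3 along the x-axis
print(cross_ratio((0, 0), (1, 0), (2, 0), (3, 0)))  # (2*2)/(1*3) = 4/3
```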

CrosSI: A Novel Workspace with Connected Horizontal and Vertical Interactive Surfaces

A typical desk workspace includes both horizontal and vertical areas, wherein users can freely move items between the horizontal and vertical surfaces and edit objects; for example, a user can place a paper on the desk, write a memo, and affix it to the wall. However, in workspaces designed to handle digital information, the usable area is limited to specific devices such as displays and tablets, and the user interface is discontinuous in terms of both software and hardware. Therefore, we propose CrosSI, a multi-surface information display system that includes a horizontal surface and multiple cubic structures with vertical surfaces placed on the horizontal surface. Using this system, we investigate the ability to visualize and move information on a continuous system of horizontal and vertical surfaces.

Define, Refine, and Identify Events in a Bendable Interface

As we aspire to bring bendable interfaces closer to mainstream use, we need to resolve practical issues such as the event model for bend input. This work takes steps towards understanding the bend events space for a simplified 1DOF (Degree of Freedom) device. We suggest that a rich set of informative bend events can elevate application development for bendable devices. In this work, we describe the current state of event models for bendable devices, our suggested refined model, and the steps we are taking toward implementing an event system.

Designing for Ambient UX: Case Study of a Dynamic Lighting System for a Work Space

This research aims to propose and validate a framework for supporting the design for user experience in interactive spaces (Ambient UX). It suggests that dynamic changes in interactive spaces should be designed focusing on their effects on three levels of the user experience: physical wellbeing, meanings, and social relations. Validation occurred through a field study performed in a work environment, where a dynamic lighting system was designed and installed. Preliminary results validate the relevance of the three levels, thus laying the ground for further research and discussion.

Here's looking at you: A Spherical FTVR Display for Realistic Eye-Contact

In this work we describe the design, implementation and initial evaluation of a spherical Fish Tank Virtual Reality (FTVR) display for realistic eye-contact. We identify display shape, size, and depth cues as well as model fidelity as important considerations and challenges for setting up realistic eye-contact, and package them into a reproducible framework. Based on the design, we implemented and evaluated the system to assess the effectiveness of the eye-contact. In our initial evaluation, participants were able to identify eye-contact with an accuracy of 89.6%. Moreover, eye-contact with a virtual character triggered changes in participants' social behavior that are in line with real-world eye-contact scenarios. Taken together, these results provide practical guidelines for building displays for realistic eye-contact and can be applied to applications such as teleconferencing and VR treatment in psychology.

Practice System for Controlling Cutting Pressure for Paper-cutting

We describe a practice system for paper-cutting in which a cutting blade is attached to the tip of a stylus on a drawing display. The purpose of this research is to support novices in controlling cutting pressure. Novices tend to cut paper with an unstable pressure that is stronger than necessary, so some instructors have novices practice controlling the pressure. We have developed a device to support cutting practice using a stylus and a drawing display. We measured the difference between the pressures of novices and experts. In addition, we developed a system that encourages appropriate pressure based on expert pressure; the system shows the pressure difference between the user and the experts using color and sound. We conducted an experiment to compare the effectiveness of the system. As a result, for novices who practiced with the system, the range and variation of pressure improved more than with the existing practice method.

Recognizing Gestures on Projected Button Widgets with an RGB-D Camera Using a CNN

Projector-camera systems can turn any surface such as tabletops and walls into an interactive display. A basic problem is to recognize the gesture actions on the projected UI widgets. Previous approaches using finger template matching or occlusion patterns have issues with environmental lighting conditions, artifacts and noise in the video images of a projection, and inaccuracies of depth cameras. In this work, we propose a new recognizer that employs a deep neural net with an RGB-D camera; specifically, we use a CNN (Convolutional Neural Network) with optical flow computed from the color and depth channels. We evaluated our method on a new dataset of RGB-D videos of 12 users interacting with buttons projected on a tabletop surface.
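The abstract specifies a CNN over optical flow from the color and depth channels but not the architecture; the sketch below is a minimal PyTorch stand-in, assuming the flow fields from both streams are stacked into a 4-channel input (x/y flow from color plus x/y flow from depth) and classified into gesture classes. It is not the authors' network; layer sizes and class count are placeholders.

```python
import torch
import torch.nn as nn

class FlowGestureCNN(nn.Module):
    """Toy classifier over stacked optical-flow fields (not the paper's exact net).

    Input: (batch, 4, H, W) tensors, i.e. x/y flow from the color stream plus
    x/y flow from the depth stream for one video snippet around a widget.
    """
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),        # global pooling to a 32-d descriptor
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# e.g. classify a 64x64 flow patch cropped around a projected button
logits = FlowGestureCNN()(torch.randn(1, 4, 64, 64))
```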

Simplicity is Not Always Best: Image Preferences in Patients with Mild Cognitive Impairment

Mild cognitive impairment (MCI) is one of the most prevalent chronic conditions related to the decline of cognitive ability in elderly people. This decline in cognitive function can cause older adults to experience difficulty in interpreting the meanings of function icons on the interfaces of household appliances. The aim of this study was to explore how individuals with MCI recognize the meanings of different types of product function icons. Icons for four common functions were redesigned in two directions as test samples: from complex to simplified form and from flat to stereoscopic form. In this study, 31 elderly people with MCI and their caregivers were invited to participate. The tests revealed the following results: 1) participants reported that stereoscopic images were more recognizable than flat images; 2) images with more details were easier to recognize than those that were abstract. These results can be applied to the interface design of household appliances and can also provide more data for designing product interfaces for elderly users and users with mild cognitive disorders.

SmartSurveys: Does Context Influence Whether We'll Share Healthcare Experience Data with our Smartphone?

Consumer feedback is collected in many industries, including healthcare, where patient feedback contributes to a higher quality of care. Current collection methods include complaints, local surveys, and patient stories, but these methods yield low participation at high cost. Providers need affordable and effective ways to collect feedback, and smartphone applications appear to be suitable solutions. However, previous research shows that patients are hesitant to provide smartphone-based feedback in a care setting due to perceived risks and the apparent futility of expecting change as a result. We will conduct a study to observe consumer behaviour when using smartphones to provide service feedback in healthcare spaces versus non-healthcare spaces. We will identify addressable barriers that impact the adoption of smartphone technology to gather patient experience data in healthcare spaces.

Traditionally Crafted Digital Interfaces

Digital textiles merged with traditional designs could create culture-specific products for users. The present paper elaborates on the design of three LED-embedded scarves - Aster, Iris and Nymphaea - that merge traditional embroidery and hand-painting techniques. The essence of traditional craft making and digital crafts has been enriched with design, texture, pattern and material exploration. Further, the scarves have been applied to a scenario of non-verbal communication in the social space of the user, since limited research is available on non-verbal communication with respect to textiles in India. The experiment conducted reveals high perceived usefulness and perceived ease of use among 60 young female respondents for using digitally crafted textiles for non-verbal expression/communication in the social space. The research aims to bring together traditional craftsmen and designers to create digital crafts for a new era of crafting traditions with context-specific applications.

SESSION: Demonstrations

An Integral Illumination Device Using Heat-Resistant Hybrid Optics

We present a high-output integral illumination device, capable of generating a wide array of programmable lighting effects with sufficient luminous intensity to provide ambient lighting for small indoor environments. The technical foundations of such devices have been laid out in prior work, but past implementations suffered from relatively low ceilings with regard to light output, which has so far limited their range of potential applications. The defining feature of our new setup is a custom, heat-resistant optical instrument that can easily be built using a combination of manual and digital fabrication.

AURA: Urban Personal Projection to Initiate the Communication

We present the concept of AURA, an urban personal projection to initiate communication. In this work, we focus on how to break the ice with strangers through technology in urban public space. The Aura, an enlivened spiritual pet, floats around the user's feet. We introduce a simple interaction scenario: attracting a person who comes within personal distance of the user who carries the AURA. To break the ice between strangers, a butterfly projected as the Aura moves toward a person who comes within 2 m of the user, and then back and forth to attract that person. We believe that this externalized interactive representation of the user in the form of a spiritual pet can ease and facilitate communication, serve as a conversation starter, and make the interactions between people more fun.

Crafting Textiles of the Digital Era

Traditional methods of crafting with precious and semi-precious metals and embellishments have been selectively replaced to embed electronics that can make textile crafts dynamic. The idea is to merge smart materials seamlessly as part of the design elements, so that they can be used for dynamic behaviour when needed and otherwise remain neatly embedded as part of the textile motifs.

Dynamic, Flexible and Multi-dimensional Visualization of Digital Photos and their Metadata

Digital photos and their metadata are growing explosively, requiring better support for browsing them. We propose a dynamic and flexible visualization of digital photos and their metadata. A prototype is designed and implemented based on the algorithm of D-FLIP, our previous work for flexibly displaying a large photo collection. We design various dynamic photo visualizations with up to four-dimensional meta-information by using Bertin's visual variables and three-dimensional representation. This allows users to dynamically and effectively manage photo visualizations by selecting the meta-information that matches their current needs and interests.

FlexFace: A Head Gesture Motion Display with Flexible Screen for Telecommunication

We propose a new telecommunication system that uses a flexible screen to display head motion gestures. Previous video-based telecommunication systems with a movable display used a flat, solid screen, and their presence was abiotic. On the other hand, traditional animation techniques provide the art of making solid objects appear to "animate". One of the most important principles of these animation techniques is "squash and stretch", which deforms the shape of the object. This technique inspired the proposed system: we employed a flexible screen and deformed it using four servo motors to represent biotic motion.

Handheld Haptic Interface for Rendering Size, Shape, and Stiffness of Virtual Objects

PaCaPa is a handheld device that presents haptic stimuli on a user's palm when the user interacts with virtual objects using virtual tools. It is a box-shaped device with two wings. The device can present the contact force and contact angle by opening and closing the wings based on the angle between the direction of the virtual tool and the hand. By changing these haptic sensations on the palm and fingers, it enables users to perceive different sizes, shapes, and stiffnesses of virtual objects.

JackIn Neck: A Neckband Wearable Telepresence System Designed for High Comfortability

We present a wearable telepresence system, JackIn Neck, that can be worn on the neck and supports joint activities among a local user, a conversation partner who meets the local user, and a remote user of the system. While previous work demonstrated clear advantages of the neck as a location for wearable devices, a telepresence system tailored for such use had not been developed. JackIn Neck realizes this form factor by combining a camera with a fisheye lens, speakers, and microphones. Because our device is easy to put on and take off, as well as comfortable to wear, we see the potential for our system to be adopted in the wild, for example for remote sightseeing and event participation.

MagicPAPER: Tabletop Interactive Projection Device Based on Kraft Paper

As the most common writing material in our daily life, paper is an important carrier of traditional painting, and it also has a more comfortable physical touch than electronic screens. In this study, we designed a shadow-art device for human-computer interaction called MagicPAPER, which is based on physical touch detection, gesture recognition, and reality projection. MagicPAPER consists of a pen, kraft paper, and several detection devices, such as AirBar, Kinect, LeapMotion, and WebCam. To make MagicPAPER more interesting, we developed more than a dozen applications that allow users to experience and explore creative interactions on a desktop with a pen and a piece of paper. Results of our user study showed that MagicPAPER received positive feedback from many different types of users, particularly children.

NiwViw: Immersive Analytics Authoring Tool

Immersive analytics uses augmented and virtual reality technologies to better understand and interact with multi-dimensional data within a physical space. NiwViw is an application that allows non-technical users to create their own immersive visualizations. NiwViw currently supports the iPad, Microsoft HoloLens, and the HTC Vive.

OmniEyeball: An Interactive I/O Device For 360-Degree Video Communication

We propose OmniEyeball (OEB), a novel interactive 360° image I/O system that combines a spherical display with an omnidirectional camera. We also present our experimental design of a user interface on the OEB, including a vision-based touch detection technique as well as several visual and interactive features. Our proposed techniques may help address users' weak awareness of the opposite side of the spherical display, as well as the workload caused by walking around it, in 360° symmetric video communication.

Printable Hydroponics: Digital Fabrication of Ecological Systems

We demonstrate a technique to 3D print hydroponic systems that support the growth of various plant species. Our technique fabricates a landscape made entirely out of plastic and automatically attaches plant seeds to predesignated positions on its surface; the end result is that by subjecting the printed, seed-attached landscape to water, light, and nutrient solutions, plants will eventually grow, creating a lush hydroponic "garden". Though currently tested only at modest scales, the technique is theoretically scalable, possibly to architectural and environmental scales. We expect the technique to provide a foundation on which a new field of 3D printing research/practice can be built: printable ecological systems.

Proposal of a Method of Directing by Pseudo-hologram Using Motion Body and Body Information

In recent years, digital content has been attracting attention again, but many exhibits remain passive, and we consider it important to make them interactive. In this research, we propose a presentation method intended for uses such as exhibitions. Using the aquarium-type device we produced, two types of images are projected from different directions onto an underwater film and combined: one shows a moving object detected by a camera with an effect applied in real time, and the other applies an effect based on the user's physical information. With this method, the two types of images convey the depth of the space, making a new dynamic presentation possible. We also conducted an evaluation study on the color of the projected images and examined which images to use.

Real Time Animation of 3D Models with Finger Plays and Hand Shadow

In this paper the authors report a method for animating 3D models with finger plays and hand shadows. In preparation, the motion of the models is associated with a motion of the hands. Appropriate associations based on finger plays and hand shadows provide intuitive operation. For example, the user makes a hermit crab's claw pinch by pinching with the index and middle fingers. When using the system, the user does not need to wear sensors or gloves. Operating the method is so easy that children can animate 3D models. In the evaluation, participants were mostly positive about the method.

Real-time Visual Feedback for Golf Training Using Virtual Shadow

In this work, we propose a golf training system using real-time visual feedback. The system projects a virtual shadow of the user on the ground in front of the user; this shadow provides feedback without causing the user's form to collapse. Additionally, an expert's contour is overlaid on the virtual shadow of the user to make users aware of the difference between their form and that of the expert. With this system, the user can receive feedback while keeping the face oriented as in an actual golf swing; such feedback cannot be realized by systems that project feedback on a wall. Moreover, by imitating the shadows already used in conventional golf training, the cost of adapting to this training system can be reduced.

Real-Virtual Bridge: Operating Real Smartphones from the Virtual World

Modern industrial products should incorporate inclusive designs, be sustainable, and be capable of addressing emerging usage. The adaptation interface is a supplementary approach to meet these targets. To make capacitive touchscreens adaptive, a method to simulate finger touch is needed. Takashina and others have proposed quasi-touch to allow a touch panel to recognize touches electrically without using fingers. After its success in entertainment, virtual reality is now expected to find practical applications. In such practical applications, a mechanism to connect real objects and virtual objects is needed. We call this mechanism a real-virtual bridge, which is a form of adaptation interface. As a simple example, a user might want to operate their smartphone while doing a job in the virtual world. Therefore, we applied quasi-touch to allow users to operate real smartphones from virtual worlds and constructed a proof-of-concept prototype. In a preliminary evaluation, a participant played a real smartphone game in the virtual space and reported that he felt as if he were playing the game on a real smartphone.

Scalable Autostereoscopic Display for Interaction with Floating Images

This paper introduces a scalable autostereoscopic display that allows users to interact with floating images in public settings, such as digital signage. By tiling multiple display modules, larger autostereoscopic images can be projected in the air than with a single display module. Autostereoscopic images allow the user to understand content in greater detail than standard 2D public displays. Furthermore, projecting the images so that they float reduces the spatial divergence between the user's body and the autostereoscopic images, so the user can interact with them intuitively. Moreover, because the system is scalable, the screen size can be expanded simply by adding display modules, which makes the system easy to move and arrange in public spaces. In this paper, we propose an autostereoscopic display module that can enlarge the screen size in the horizontal direction as a first step toward realizing our concept.

SNS Door Phone as Robotic Process Automation

We developed SNS Door Phone by turning an intercom system into an IoT device, integrating it with SNS and a QR-code recognition function. Thanks to the SNS connection, users can learn of a visit from a parcel delivery service through SNS at any time, even while they are out. Thanks to the QR-code recognition function, when a delivery person simply shows the parcel's QR code to SNS Door Phone, the re-delivery information is sent to the user automatically through SNS. The user can then call or arrange re-delivery from a smartphone without entering any additional data. We consider this kind of seamless re-delivery workflow a good example of Robotic Process Automation.

SoundPond: Making Sound Visible and Intuitively Manipulable

We present an interactive sound handling system called SoundPond, in which a unit of sound is treated as a virtual sound object. A short recorded sound is visualized as an oval shape that the user can reshape, thereby modifying the characteristics of the sound. Interactions with other sound objects and with drawn objects are implemented, and each sound object is also affected by the pond in which it is placed. The proposed system aims to provide intuitive and versatile interactions for rich sound handling. In the prototype, basic functions are implemented using a multi-touch display, a microphone, and a pair of stereo speakers. Possible future application areas for SoundPond include graphical sound composition, sound performance, and a sound playground. We believe SoundPond has the potential to support sound expression driven by the user's creativity.

Touchable Wall: Easy-to-Install Touch-Operated Large-Screen Projection System

Recently, small, inexpensive portable projectors have become common for presentations. For convenience, it is desirable to point directly on the screen and control the pointer intuitively without gripping or wearing any device. This study therefore proposes a new touch-operated large-screen projection system using acoustic vibration sensing and a projector-camera system. We apply a continuous signal in the inaudible range to a surface with an actuator and capture changes in the surface's elastic compliance as acoustic resonance. Touch detection, position estimation, and pressure estimation are performed using machine learning techniques. Our actuator and sensor units are small and easy to install on a surface. Furthermore, our system detects "true" touch without requiring devices such as pens or pointers. We demonstrate a series of novel interactions for a large-screen projection system through three applications of the proposed technology.

Ultrasound from Cyber Map: Intuitive Guidance Method using Binaural Parametric Loudspeaker

People with visual impairment face difficulty in reaching their desired destinations. To stay informed about the locations they are traversing and to secure safe, independent mobility, they must rely on non-visual information. This paper proposes a system that automatically presents a sound image as if it came directly from the destination, independent of where the sound source is installed. The system presents audible cues in the direction of the user's destination without depending on the loudspeaker location. The success rate of guiding users toward a fixed destination with the proposed method was measured and found to be about 90%. The system thus presents the direction of the destination intuitively and easily, delivering the guidance sound image only to the person who needs the information.

Generating Spherical Hyperlapse Videos via Recursive Intelligent Sampling for StratoJump

StratoJump is an installation that allows participants to explore the experience of jumping from high altitude toward the edge of space. In this paper, we present a vision-based algorithm for generating hyperlapses of spherical videos captured at high altitude. Unlike previous work, we consider high-altitude videos that cannot be perfectly stabilized by 3D reconstruction-based algorithms. We propose recursive intelligent sampling to simultaneously stabilize and shorten spherical videos. Our preliminary results show that the proposed method reduces cumulative stabilization errors compared with a frame-by-frame method, and does so in reasonable time.

SESSION: Workshops & Tutorials

Computational Augmented Reality Displays

Interactive surfaces and spaces (ISS) research has been advanced by augmented reality (AR) display technologies. Recent computational displays overcome the limitations of existing display technologies by optimizing both hardware and software while considering human perception. We plan to hold a workshop on computational AR displays (CARD) to explore emerging ISS research issues by promoting communication and interaction between the ISS and CARD communities.

Approaching Aesthetics on User Interface and Interaction Design

Although the HCI community inevitably contributes to engagement through beauty, by attending to known and yet-to-be-discovered principles of aesthetics for digital interface design, it lacks an epistemological corpus covering the notion, human factors, and quantification of aesthetic aspects. The aim of the proposed workshop is to discuss these issues in order to strengthen aesthetic studies specifically for HCI and related fields. We want to create a forum for discussing, drafting, and promoting the foundations of disciplined aesthetic design within the HCI community. We therefore welcome contributions such as theories, methodologies, evaluation methods, and potential applications regarding effective aesthetics for HCI and related fields. Concretely, we aim to (i) map the present state of the art of aesthetics research in HCI, (ii) build a multidisciplinary community of experts, and (iii) raise the profile of this aesthetics research area within the HCI community.