Welcome to this issue of the Proceedings of the ACM on Human-Computer Interaction, which focuses on contributions from the Engineering Interactive Computing Systems (EICS) research community. This diverse community explores the methods, processes, techniques, and tools that support specifying, designing, developing, deploying, and verifying interactive systems. Building interactive systems is a multifaceted and challenging activity involving a plethora of actors and roles. This is particularly true in HCI, where we continuously push the edge of what is possible and where there is a crucial need for adequate processes, tools, and methods to build reliable, useful, and usable systems that help people cope with the ever-increasing complexity of work and life.
This issue on EICS is the result of four separate rounds of submissions, evenly spaced from July 2017 through May 2018. In total, the rounds attracted 81 submissions from Asia, Canada, Australia, Europe, Africa, and the United States. Authors of promising submissions that were not accepted in a round were invited to resubmit to a subsequent round, and six of the papers appearing in this issue were accepted after at least one round of resubmission. In each round, papers were subject to a rigorous reviewing process in which they were reviewed by two EICS senior editors as well as external reviewers. At the conclusion of each round, a Virtual Committee meeting was held to discuss all of the papers and arrive at final decisions. Ultimately, 14 papers were accepted across all rounds.
This issue exists because of the dedicated volunteer effort of 20 senior editors, who each handled two to four papers per round, and 115 expert reviewers, who ensured high-quality and insightful reviews for all papers in all rounds. Reviewers and committee members were kept constant as much as possible for papers that were submitted to multiple rounds. Senior members of the editorial group also helped shepherd some papers, reflecting the deep commitment of this research community.
We are excited by the detailed and insightful work that resulted in this PACMHCI EICS issue and look forward to equally high-quality submissions in subsequent submission cycles over the coming year. For those interested in this area, the community holds its next annual conference on June 19-22, 2018, in Paris, France. That conference will provide many opportunities to share ideas with other researchers and practitioners from institutions around the world.
In our daily lives we are witnessing a proliferation of digital devices, including tablets, smartphones, digital cameras, and wearable appliances. A major effort has been made to enable these devices to exchange information in intelligent spaces and collaborative settings. However, the resulting technical challenges often manifest themselves to end users as limitations, inconsistencies, or added complexity. A wide range of existing and emerging devices cannot be used with existing solutions for cross-device information exchange due to restrictions in the supported communication protocols, hardware, or media types. We present INFEX, a general and extensible framework for cross-device information exploration and exchange. While existing solutions often support only a restricted set of devices and networking protocols, our unifying and extensible INFEX framework enables information exchange and exploration across arbitrary devices, including devices that cannot run custom software or do not offer their own I/O modalities. The plug-in-based INFEX architecture allows developers to provide custom but consistent user interfaces for information exchange and exploration across a heterogeneous set of devices.
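As a rough illustration of the kind of plug-in architecture described above, the following Python sketch shows how device-specific connectors might be registered behind one uniform exchange interface; all class and method names here are illustrative assumptions rather than INFEX's actual API.

```python
# Minimal sketch of a plug-in registry for cross-device information exchange.
# All names (DeviceConnector, InfoItem, Registry) are hypothetical and only
# illustrate hiding protocol differences behind one interface.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class InfoItem:
    media_type: str   # e.g. "image/png"
    payload: bytes


class DeviceConnector(ABC):
    """One plug-in per device class or transport (e.g. Bluetooth, HTTP)."""

    @abstractmethod
    def supports(self, media_type: str) -> bool: ...

    @abstractmethod
    def send(self, item: InfoItem) -> None: ...


class Registry:
    def __init__(self) -> None:
        self._connectors: dict[str, DeviceConnector] = {}

    def register(self, device_id: str, connector: DeviceConnector) -> None:
        self._connectors[device_id] = connector

    def exchange(self, device_id: str, item: InfoItem) -> None:
        connector = self._connectors[device_id]
        if not connector.supports(item.media_type):
            raise ValueError(f"{device_id} cannot handle {item.media_type}")
        connector.send(item)


class PrintConnector(DeviceConnector):
    """Toy connector that just logs what would be sent."""

    def supports(self, media_type: str) -> bool:
        return media_type.startswith("text/")

    def send(self, item: InfoItem) -> None:
        print("sending", item.payload.decode())


registry = Registry()
registry.register("tablet-1", PrintConnector())
registry.exchange("tablet-1", InfoItem("text/plain", b"hello"))
```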
Researchers who perform Ecological Momentary Assessment (EMA) studies tend to rely on informatics experts to set up and administer their data collection protocols with digital media. Unlike standard surveys and questionnaires, which are supported by widely available tools, setting up an EMA protocol is a substantial programming task. Apart from constructing the survey items themselves, researchers also need to design, implement, and test the timing and the contingencies by which these items are presented to respondents. Furthermore, given the wide availability of smartphones, it is becoming increasingly important to execute EMA studies on user-owned devices, which presents a number of software engineering challenges pertaining to connectivity, platform independence, persistent storage, and back-end control. We discuss TEMPEST, a web-based platform designed to support non-programmers in specifying and executing EMA studies. We discuss the conceptual model it presents to end users through an example of use, and its evaluation by 18 researchers who have put it to real-life use in 13 distinct research studies.
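To make the notion of timing and contingencies concrete, here is a minimal Python sketch of one possible EMA scheduling rule (prompt at a random time inside a daily window, but only if the previous prompt was answered); the rule format is a hypothetical example, not TEMPEST's actual specification model.

```python
# Illustrative sketch of an EMA scheduling rule. The window, contingency,
# and function name are placeholders for how such a rule might be expressed.
import random
from datetime import datetime, time, timedelta
from typing import Optional


def next_prompt(day: datetime, window_start: time, window_end: time,
                previous_answered: bool) -> Optional[datetime]:
    """Return a prompt time inside [window_start, window_end], or None."""
    if not previous_answered:          # contingency: skip if last prompt ignored
        return None
    start = day.replace(hour=window_start.hour, minute=window_start.minute)
    end = day.replace(hour=window_end.hour, minute=window_end.minute)
    offset = random.uniform(0, (end - start).total_seconds())
    return start + timedelta(seconds=offset)


# Example: schedule an evening prompt if the previous prompt was answered.
print(next_prompt(datetime(2018, 5, 2), time(18, 0), time(21, 0), True))
```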
The availability of consumer-level devices for both visualising and interacting with Virtual Reality (VR) environments opens the opportunity to introduce more immersive content and experiences, even on the web. To reach a wider audience, developing VR applications in a web environment requires flexible adaptation to the different input and output devices that are currently available. This paper examines the required support and explores how to develop VR applications based on web technologies that can adapt to different VR devices. We summarize the main engineering challenges and describe a flexible framework for integrating and exploiting various VR devices for both input and output. Using this framework, we describe how we re-implemented four manipulation techniques from the literature to make them available within the same application, providing details on how we adapted their parts to different input and output devices such as the Kinect and Leap Motion. Finally, we briefly examine the usability of the final application built with our framework.
This paper proposes and evaluates an approach for building models of installed industrial Cyber-Physical Systems using augmented reality on smartphones. It proposes a visual language for annotating devices, containers, flows of liquids, and networking connections in augmented reality. Compared to related work, it provides a more lightweight and flexible approach for building 3D models of industrial systems. The models are further used to automatically infer the software configuration of controllable industrial products. This addresses a common problem of error-prone and time-consuming configuration of industrial systems in current practice. The proposed approach is evaluated in a study with 16 domain experts, in which the participants create a model of an industrial system for water treatment. Their comments show that the approach can enable less error-prone configuration of more complex systems. Opportunities for improvement in usability and reflections on the potential of the approach are discussed.
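The step of deriving a device configuration from an annotated model could, in spirit, look like the following Python sketch, which walks a small graph of devices, liquid flows, and network links; the model format and configuration keys are assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch of inferring a per-device configuration from an
# annotated plant model: nodes are devices/containers, edges are flows or
# network links captured by AR annotations. All names are illustrative.
from collections import defaultdict

edges = [
    ("tank_1", "pump_1", "liquid"),
    ("pump_1", "filter_1", "liquid"),
    ("plc_1", "pump_1", "network"),
]

downstream = defaultdict(list)   # where each device's liquid flow goes
controllers = defaultdict(list)  # which controllers talk to each device
for src, dst, kind in edges:
    if kind == "liquid":
        downstream[src].append(dst)
    elif kind == "network":
        controllers[dst].append(src)

# Configuration inferred from the model for one controllable device.
print({
    "device": "pump_1",
    "controlled_by": controllers["pump_1"],
    "feeds": downstream["pump_1"],
})
```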
Gesture-based interfaces are becoming a widely used interaction modality in many industrial applications. It is therefore important to guarantee usable and ergonomic interfaces for workers. The purpose of this study was to investigate whether the use of digital human models (DHMs) by human factors/ergonomics (HFE) experts can complement the user evaluation of gesture interface prototypes. Two case studies were conducted in which gesture-based systems for remote robot control were evaluated. The results indicate that the use of DHMs supports the findings from self-reported HFE evaluations. However, digital human modeling still has limitations; in this study, for example, it was not possible to evaluate small muscle groups (e.g., fingers). We argue that adapting the DHMs could be a rapid and simple alternative for supporting the HFE design of gestures.
Augmented Reality (AR) developers face a proliferation of new platforms, devices, and frameworks. This often leads to applications being limited to a single platform and makes it hard to support collaborative AR scenarios involving multiple different devices. This paper presents XD-AR, a cross-device AR application development framework designed to unify input and output across hand-held, head-worn, and projective AR displays. XD-AR's design was informed by challenging scenarios for AR applications, a technical review of existing AR platforms, and a survey of 30 AR designers, developers, and users. Based on the results, we developed a taxonomy of AR system components and identified key challenges and opportunities in making them work together. We discuss how our taxonomy can guide the design of future AR platforms and applications and how cross-device interaction challenges could be addressed. We illustrate this by using XD-AR to implement two challenging AR applications from the literature in a device-agnostic way.
Since its introduction in 2005, the TUIO protocol has been widely employed in a multitude of usage contexts in tangible and multi-touch interaction. While its simple and versatile design still covers the core functionality of interactive tabletop systems, the conceptual and technical developments of the past decade have also led to a variety of ad-hoc extensions and modifications for specific scenarios. In this paper, we present an analysis of the strengths and shortcomings of TUIO 1.1, leading to an extended abstraction model for tangible interactive surfaces and the specification of the second-generation TUIO 2.0 protocol, along with several example encodings of existing tangible interaction concepts.
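For readers unfamiliar with TUIO, the following Python sketch illustrates how TUIO 1.1 cursor messages are typically handled once the underlying OSC layer has decoded them into address and argument lists; the session handling is deliberately simplified and the OSC transport itself is omitted.

```python
# Simplified handling of TUIO 1.1 cursor ("2Dcur") messages. A real client
# would receive these over OSC/UDP; here they arrive as decoded tuples.
cursors: dict[int, tuple[float, float]] = {}


def handle(address: str, args: list) -> None:
    if address != "/tuio/2Dcur":
        return
    kind, *rest = args
    if kind == "set":
        # set: session_id, x, y, x_velocity, y_velocity, motion_acceleration
        session_id, x, y, _vx, _vy, _m = rest
        cursors[session_id] = (x, y)
    elif kind == "alive":
        # drop sessions that are no longer reported as alive
        alive = set(rest)
        for sid in list(cursors):
            if sid not in alive:
                del cursors[sid]
    elif kind == "fseq":
        pass  # frame sequence number marks the end of one update bundle


handle("/tuio/2Dcur", ["set", 7, 0.25, 0.75, 0.0, 0.0, 0.0])
print(cursors)
```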
360-degree video is increasingly used to create immersive user experiences; however, it is typically limited to a single user and is not interactive. Recent studies have explored the potential of 360 video to support multi-user collaboration in remote settings. These studies identified several challenges with respect to 360 live streams, such as the lack of gaze awareness, out-of-sync views, and missed gestures. To address these challenges, we created 360Anywhere, a framework for 360 video-based multi-user collaboration that, in addition to allowing collaborators to view and annotate a 360 live stream, also supports projection of annotations in the 360 stream back into the real-world environment in real time. This enables a range of collaborative augmented reality applications not supported by existing tools. We present the 360Anywhere framework and tools that allow users to generate applications tailored to specific collaboration and augmentation needs, with support for remote collaboration. In a series of exploratory design sessions with users, we assess 360Anywhere's power and flexibility for three mobile ad-hoc scenarios. Using 360Anywhere, participants were able to set up and use fairly complex remote collaboration systems involving projective augmented reality in less than 10 minutes.
While CAVE Automatic Virtual Environments (CAVEs) have been around for over two decades, they remain complex to set up, unaffordable to most, and generally limited to data and model visualization applications in academia and industry. In this paper, we present a solution for creating a monocular CAVE using the Unity 3D game engine, adding motion parallax and full-body interaction support via a low-cost Kinect v2 sensor. More importantly, we provide a functional and easy-to-use plugin for that purpose, KAVE, and its configuration tool. We describe our own low-cost CAVE setup, a range of alternative configurations of CAVE systems using this technology, and example applications. Finally, we discuss the potential of such an approach considering current advancements in VR and gaming.
Manual assembly in production is a mentally demanding task. With rapid prototyping and smaller production lot sizes, assembly instructions change frequently and have to be memorized by workers. Assistive systems compensate for this increase in mental workload by providing "just-in-time" assembly instructions through in-situ projections. The implementation of such systems and their benefits in reducing mental workload have previously been justified with self-perceived ratings. However, there is no objective evidence that in-situ assistance reduces mental workload. In our work, we showcase electroencephalography (EEG) as a complementary evaluation tool for assessing the cognitive workload imposed by two different assistive systems in an assembly task, namely paper instructions and in-situ projections. We identified the individual EEG bandwidth that varied with changes in working memory load. We show that changes in this EEG bandwidth are found between paper instructions and in-situ projections, indicating that in-situ projections reduce working memory load compared to paper instructions. Our work contributes by demonstrating how design claims about cognitive demand can be validated. Moreover, it directly evaluates the use of assistive systems for delivering context-aware information. We analyze the characteristics of EEG as a real-time assessment of cognitive workload to provide insights regarding the mental demand imposed by assistive systems.
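As an indication of the kind of signal processing involved, the following numpy sketch estimates spectral power in a participant-specific frequency band from a single EEG channel; the band limits, sampling rate, and synthetic signal are placeholders, and the paper's actual pipeline (artifact handling, channel selection, statistics) is not reproduced here.

```python
# Toy estimate of EEG power in an individually determined frequency band.
import numpy as np


def band_power(signal: np.ndarray, fs: float, low: float, high: float) -> float:
    """Mean spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return float(psd[mask].mean())


fs = 250.0                              # assumed sampling rate in Hz
eeg = np.random.randn(int(10 * fs))     # stand-in for a 10 s recording
print(band_power(eeg, fs, low=8.0, high=12.0))
```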
The persistent difficulty of developing and maintaining interactive software has revealed the inadequacy of traditional imperative programming languages. In recent years, several solutions have been proposed to enrich existing languages with constructs dedicated to interaction. In this paper, we propose a different approach that takes interaction as the primary concern in building a new programming language. We present Djnn, a conceptual framework based on the concepts of process and process activation, and then introduce Smala, a programming language derived from this framework. We propose a solution for the unification of the concepts of event and data-flow, and for the derivation of complex control structures from a small set of basic ones. We detail the syntax and the semantics of Smala. Finally, we illustrate through a real-size application how it enables building all parts of interactive software. Djnn and Smala may offer designers and programmers usable means to think about interactions and translate them into running code.
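The unification of events and data-flow under process activation can be conveyed with a short conceptual sketch; the Python below only mimics the idea (activating a source activates everything bound to it, and a property update is itself an activation) and does not reflect Smala's syntax or Djnn's actual semantics.

```python
# Conceptual sketch only: events and data-flow expressed as one mechanism,
# process activation propagating along bindings.
class Process:
    def __init__(self) -> None:
        self.bindings: list["Process"] = []

    def bind(self, target: "Process") -> None:
        self.bindings.append(target)

    def activate(self) -> None:
        self.run()
        for target in self.bindings:
            target.activate()

    def run(self) -> None:   # overridden by concrete processes
        pass


class Property(Process):
    def __init__(self, value=0):
        super().__init__()
        self.value = value

    def set(self, value) -> None:
        self.value = value
        self.activate()      # a data change is an activation


class Print(Process):
    def __init__(self, prop: Property):
        super().__init__()
        self.prop = prop

    def run(self) -> None:
        print("value is now", self.prop.value)


x = Property()
x.bind(Print(x))   # data-flow: whenever x changes, the printer runs
x.set(42)          # an "event" and a data update use the same mechanism
```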
The wide availability of touch-sensitive screens has fostered research in gesture recognition. The Machine Learning community has focused mainly on accuracy and robustness to noise, creating classifiers that precisely recognize gestures after they have been performed. The User Interface Engineering community, instead, has developed compositional gesture descriptions that model gestures and their sub-parts. These are suitable for building guidance systems, but they lack robust and accurate recognition support. In this paper, we establish a compromise between accuracy and the information provided by introducing G-Gene, a method for transforming compositional stroke gesture definitions into profile Hidden Markov Models (HMMs) that provide both good accuracy and information on gesture sub-parts. It supports online recognition without using any global feature, and it updates this information while receiving the input stream, with an accuracy useful for prototyping the interaction. We evaluated the approach in a user interface development task, showing that it requires less time and effort to create guidance systems compared with common gesture classification approaches.
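The core idea of recognizing gesture sub-parts while input arrives can be illustrated with a toy left-to-right HMM and the standard forward (filtering) update; the two-state model and quantized directions below are illustrative assumptions, not G-Gene's actual construction from compositional definitions.

```python
# Online recognition sketch: each incoming quantized stroke direction updates
# the forward probabilities, so the most likely sub-part is known mid-stroke.
import numpy as np

# Two sub-parts ("move right" then "move up"); observations quantized to
# 4 directions: 0=right, 1=up, 2=left, 3=down.
A = np.array([[0.8, 0.2],      # left-to-right transitions
              [0.0, 1.0]])
B = np.array([[0.85, 0.05, 0.05, 0.05],   # emissions for sub-part 0
              [0.05, 0.85, 0.05, 0.05]])  # emissions for sub-part 1
alpha = np.array([1.0, 0.0])              # start in the first sub-part

for direction in [0, 0, 0, 1, 1]:          # a stream of quantized samples
    alpha = (alpha @ A) * B[:, direction]  # predict, then weight by emission
    alpha /= alpha.sum()                   # normalize to avoid underflow
    print("current sub-part:", int(alpha.argmax()), alpha.round(3))
```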
Visualization systems such as dashboards are commonly used to analyze data and support users in their decision making, in communities as different as medical care, transport, and software engineering. The increasing amount of data produced and the continuous development of new visualizations exacerbate the difficulty of designing such dashboards, while the need for visualization broadens to both specialist and non-specialist end users. In this context, we offer a multi-user approach based on Model-Driven Engineering (MDE). The idea is for the designer to express the visualization need by characterizing it according to a given taxonomy. We provide a Domain-Specific Language (DSL) to design the system and a Software Product Line (SPL) to capture the technological variability of visualization widgets. We performed a user study, using a software project management use case, to validate whether dashboard users and designers are able to use a taxonomy to express their visualization needs.
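The idea of expressing a visualization need by characterization can be sketched as a small mapping from taxonomy terms to widgets; the terms and the widget catalogue below are hypothetical and merely stand in for the paper's DSL and product line.

```python
# Hypothetical characterization of a visualization need and its resolution
# to a widget; taxonomy terms and catalogue entries are illustrative only.
need = {
    "data": "time-series",      # nature of the data to display
    "task": "monitor-trend",    # what the user needs to do with it
    "audience": "non-specialist",
}

catalogue = {
    ("time-series", "monitor-trend"): "line chart with threshold bands",
    ("categorical", "compare"): "bar chart",
}

widget = catalogue.get((need["data"], need["task"]), "data table (fallback)")
print(f"For a {need['audience']} audience, render a {widget}.")
```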
With today's technology, older adults can be supported in living independently in their own homes for a prolonged period of time. Monitoring and analyzing their behavior in order to detect possible unusual situations helps provide them with health warnings at the proper time. Current studies focus on older adults' daily activities and the detection of anomalous behavior, aiming to provide remote support. To this aim, we propose a real-time solution that models the user's daily routine using a task model specification and detects relevant contextual events occurring in their life through a context manager. In addition, through a systematic validation using a system that automatically generates incorrect task sequences, we show that our algorithm is able to detect deviations from the expected behavior at different times, classifying them with good accuracy according to an extended classification of possible deviations.
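A deliberately simplified Python sketch of the underlying idea, checking an observed task sequence against the expected routine and labelling basic deviation types, is shown below; the task names and the three deviation labels are illustrative and much coarser than the paper's extended classification of deviations.

```python
# Toy deviation check: compare an observed task sequence with the expected
# routine and report missing, extra, and out-of-order tasks.
expected = ["wake_up", "take_medication", "breakfast", "morning_walk"]


def deviations(observed: list[str]) -> list[str]:
    issues = []
    for task in expected:
        if task not in observed:
            issues.append(f"missing: {task}")
    for task in observed:
        if task not in expected:
            issues.append(f"extra: {task}")
    common = [t for t in observed if t in expected]
    if common != [t for t in expected if t in common]:
        issues.append("out-of-order tasks")
    return issues


print(deviations(["wake_up", "breakfast", "take_medication", "watch_tv"]))
# -> ['missing: morning_walk', 'extra: watch_tv', 'out-of-order tasks']
```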