Designing foldable user interfaces ("foldinterfaces") is complex and time-consuming because of the multiple technologies involved in the process, from the hardware details of folding pixels to design requirements regarding efficiency and ease of use. To assist this process, we introduce a visual editor that lets designers specify foldinterfaces; it extends the Yoshizawa-Randlett diagramming system for origami with Event-Condition-Action rules from event-driven software architecture. The outcome is rendered as a 3-D foldable surface in a virtual environment.
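Event-Condition-Action rules lend themselves to a compact representation. The following is a minimal sketch in Python under assumed names (`FoldRule`, the surface state dictionary, the crease identifier); the editor's actual rule format is not described in the abstract:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of an Event-Condition-Action rule for a fold gesture.
# All names and the state representation are illustrative, not the paper's API.
@dataclass
class FoldRule:
    event: str                        # e.g. "valley-fold" from the diagram
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

def dispatch(rules, event, state):
    """Fire every rule matching the event whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(state):
            rule.action(state)

# A valley fold along crease "c1" is applied only while the surface is flat.
state = {"flat": True, "folds": []}
rules = [FoldRule("valley-fold",
                  lambda s: s["flat"],
                  lambda s: s["folds"].append("c1"))]
dispatch(rules, "valley-fold", state)
print(state["folds"])  # -> ['c1']
```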
Decision support systems (DSS) help users make more informed and more effective decisions. In recent years, many intelligent DSS (IDSS) in business contexts involve machine learning (ML) methods, which make them inherently hard to explain and comprehend logically. Incomprehensible predictions, however, might violate the users' expectations. While explanations can help with this, prior research also shows that providing explanations in all situations may negatively impact trust and adherence, especially for users experienced in the decision task at hand. We used a human-centered design approach with domain experts to design a DSS for funds management in the construction industry and identified a strong need for control, personal involvement, and adequate data. To create an adequate level of trust and reliance, we contrasted the system's predictions with the values derived from an analytic hierarchy process (AHP), which makes the relative importance of our users' decision-making criteria explicit. We developed a prototype and evaluated its acceptance with 7 construction industry experts. By identifying situations in which the ML prediction and the domain expert potentially disagree, the DSS can identify a persuasion gap and use explanations more selectively. Our evaluation showed promising results, and we plan to generalize our approach to a wider range of explainable artificial intelligence (XAI) problems, e.g., to provide explanations with arguments tailored to the user.
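The AHP step can be illustrated with a short sketch. The criteria and pairwise judgments below are invented for illustration; only the column-normalization approximation of the AHP priority vector is standard:

```python
# Illustrative sketch of the AHP step mentioned above: deriving criterion
# weights from a pairwise comparison matrix using the common approximate
# column-normalization method. The criteria and judgments are made up.

def ahp_weights(matrix):
    """Approximate AHP priority vector: normalize each column, average rows."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Hypothetical judgments on Saaty's 1-9 scale: cost is 3x as important as
# risk and 5x as important as schedule; risk is 2x as important as schedule.
comparisons = [
    [1,   3,   5],    # cost
    [1/3, 1,   2],    # risk
    [1/5, 1/2, 1],    # schedule
]
weights = ahp_weights(comparisons)
print([round(w, 2) for w in weights])  # -> [0.65, 0.23, 0.12]
```

Contrasting such an explicit weight vector with the ML model's implicit ranking is one way to surface the persuasion gap described above.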
This paper presents the Model Voyager, a web-based application for visualizing user interface models structured according to the four abstraction levels of the Cameleon Reference Framework: tasks and concepts, abstract user interface, concrete user interface, and final user interface. The application enables the designer to collect, edit, and manage collections of user interface models for a project, or to maintain an accessible catalogue of models along with their illustrations. It also introduces three deployment mechanisms: multi-reification, multi-abstraction, and multi-translation. The paper demonstrates the applicability of the Model Voyager to the "car rental" case study, a reference example chosen by the W3C group on model-based user interfaces.
Since the aesthetics of a graphical user interface have become a sub-factor of the ISO 25010 standard on software quality, organisations have started to wonder how to evaluate this property consistently in practice. To address this challenge, we present a process for computing aesthetics at the level of a Concrete User Interface instead of the Final User Interface, making the evaluation platform-independent. The process consists of the following steps: reverse engineer a final user interface into its concrete equivalent, optionally edit it, and automatically compute ten aesthetic metrics at the concrete level.
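As an illustration of the kind of metric involved, here is a sketch of one classic layout-aesthetics measure (Ngo et al.'s balance), computed from element bounding boxes as they would appear in a concrete user interface. The abstract does not list the ten metrics, so this is an assumed example, not the paper's implementation:

```python
# Hedged example: Ngo et al.'s balance metric over element bounding boxes.
# The element format (x, y, w, h) and frame size are illustrative.

def balance(elements, frame_w, frame_h):
    """Balance in [0, 1]: 1 = perfectly balanced around both centre axes."""
    cx, cy = frame_w / 2, frame_h / 2
    left = right = top = bottom = 0.0
    for x, y, w, h in elements:            # (x, y) = top-left corner
        area = w * h
        ex, ey = x + w / 2, y + h / 2      # element centre
        if ex < cx: left += area * (cx - ex)
        else:       right += area * (ex - cx)
        if ey < cy: top += area * (cy - ey)
        else:       bottom += area * (ey - cy)
    bh = (left - right) / max(left, right) if max(left, right) else 0.0
    bv = (top - bottom) / max(top, bottom) if max(top, bottom) else 0.0
    return 1 - (abs(bh) + abs(bv)) / 2

# Two equal buttons mirrored around the centre are perfectly balanced.
print(balance([(10, 40, 20, 20), (70, 40, 20, 20)], 100, 100))  # -> 1.0
```

Because the metric reads only element geometry, it can be computed on the concrete model before any platform-specific rendering, which is the point of working at the CUI level.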
taskSketch is an Eclipse-based task model editor that enables designers to sketch a task model and have it recognized through multi-stroke gesture recognition of pre-defined shapes and relations. The tool supports three levels of definition, for which we introduce three concepts: the level of fidelity (LoF), the level of detail (LoD), and the level of criticism (LoC). It is linked to other models by model mapping. taskSketch is flexible enough to accommodate variations of the task model notation by defining new shapes and relations and by searching for template and pattern matches. This approach is generalizable to other similar models involved in model-based user interface design.
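Multi-stroke shape recognition by template matching can be sketched generically, in the spirit of the $-family of recognizers; taskSketch's actual recognizer and shape vocabulary are not specified here. Strokes are simplified to equal-length point lists; a real recognizer would also resample and rescale:

```python
# Generic template-matching sketch, not taskSketch's implementation.
import math

def normalize(points):
    """Translate a stroke so its centroid is at the origin."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [(x - cx, y - cy) for x, y in points]

def recognize(stroke, templates):
    """Return the template name with the smallest mean point distance."""
    s = normalize(stroke)
    best, best_d = None, float("inf")
    for name, tpl in templates.items():
        t = normalize(tpl)
        d = sum(math.dist(p, q) for p, q in zip(s, t)) / len(s)
        if d < best_d:
            best, best_d = name, d
    return best

templates = {
    "task-box": [(0, 0), (4, 0), (4, 2), (0, 2)],   # rectangle corners
    "relation": [(0, 0), (2, 0), (4, 0), (6, 0)],   # straight connector
}
print(recognize([(10, 10), (14, 10), (14, 12), (10, 12)], templates))
# -> task-box
```

Adding a new notation element then amounts to registering one more template, which matches the flexibility claim above.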
We present an application and corresponding user interface where the user, a psychologist, controls a virtual puppet (a cartoon-like character in Virtual Reality) that displays human-like emotions for the benefit of hospitalized children. A crossing-based UI enables specification of emotional states for the virtual puppet, which are chosen from a widget UI control showing several categories of body poses, movements, and emotions. Input is performed with pen gestures that trigger animated transitions between the states of the virtual puppet. Gesture articulation parameters, such as gesture speed and the pressure applied on the touchscreen, control the rendering of the various animations. We discuss a practical application for the pediatric departments of hospitals.
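The mapping from gesture articulation to animation rendering can be sketched as follows; the ranges and formulas are assumptions for illustration, not the application's actual parameters:

```python
# Hypothetical mapping: gesture articulation (speed, pen pressure) modulates
# how a transition animation between puppet states is rendered.

def animation_params(gesture_speed, pressure,
                     min_speed=50.0, max_speed=800.0):  # px/s, assumed range
    """Map articulation to (playback_rate, intensity), both clamped."""
    t = (gesture_speed - min_speed) / (max_speed - min_speed)
    playback_rate = 0.5 + 1.5 * max(0.0, min(1.0, t))   # 0.5x .. 2.0x
    intensity = max(0.0, min(1.0, pressure))            # pressure in [0, 1]
    return playback_rate, intensity

# A brisk, firm pen stroke plays the transition faster and more intensely.
rate, intensity = animation_params(gesture_speed=400.0, pressure=0.8)
print(round(rate, 2), intensity)  # -> 1.2 0.8
```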
In this paper we describe SeqCheck, a model checking tool that allows us to investigate whether certain properties hold for an interactive system. These properties allow us to determine whether the interaction sequence model of an interactive system's overlap component behaves as expected. We describe the properties we have defined for this overlap component and then demonstrate the use of SeqCheck to identify when these properties do not hold.
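A toy example conveys the kind of check such a tool performs: exhaustively enumerate the interaction sequences of a small finite-state model and test a property over all of them. The model and property below are invented for illustration and are not SeqCheck's notation:

```python
# Toy sequence-property check, illustrative only.
transitions = {  # state -> {event: next_state}
    "idle":  {"action": "acted"},
    "acted": {"undo": "idle", "action": "acted"},
}

def sequences(start, depth):
    """Enumerate all event sequences of length <= depth, level by level."""
    frontier = [(start, [])]
    for _ in range(depth):
        nxt = []
        for state, seq in frontier:
            for event, target in transitions[state].items():
                nxt.append((target, seq + [event]))
                yield seq + [event]
        frontier = nxt

# Property: "undo" never occurs before the first "action".
violations = [s for s in sequences("idle", 4)
              if "undo" in s
              and ("action" not in s or s.index("undo") < s.index("action"))]
print(violations)  # -> []
```

An empty violation list means the property holds up to the explored depth; a counterexample sequence, when one exists, is exactly the diagnostic a model checker reports.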
Assessing and reproducing a user's mobility serves multiple purposes for interactive systems. In particular, the quantification of gait parameters has been used for user modelling, virtual environments, and augmented reality. While many technologies can be used to assess gait by measuring spatio-temporal parameters and their fluctuations, it is important to evaluate how many steps are necessary to represent an individual's gait pattern, in order to provide better feedback to the user and improve the user experience. In this preliminary study, we evaluate the intra-session reliability of spatio-temporal gait parameters for 24 healthy adults walking two trials of 15 m in a corridor. Angular velocity data were acquired from body-worn inertial measurement units attached to the participants' right and left shanks. An adaptive algorithm was applied for gait event detection, and gait parameters were analyzed according to pre-defined numbers of steps extracted from the full length of the trial. The main contribution of the present analysis is a method of gait event detection, segmentation, and analysis that can be used to adjust interactive systems to individual users.
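A simplified sketch (not the paper's adaptive algorithm) shows the principle behind such detection: mid-swing appears as a prominent peak in the shank's sagittal-plane angular velocity, and intervals between successive peaks approximate stride times. The threshold, minimum gap, and synthetic signal below are assumptions:

```python
# Illustrative peak-based gait event detection on a gyroscope signal.

def detect_strides(gyro, fs, threshold=1.0, min_gap_s=0.5):
    """Return peak indices (mid-swing candidates) spaced >= min_gap_s apart."""
    min_gap = int(min_gap_s * fs)
    peaks, last = [], -min_gap
    for i in range(1, len(gyro) - 1):
        if (gyro[i] > threshold and gyro[i] >= gyro[i - 1]
                and gyro[i] > gyro[i + 1] and i - last >= min_gap):
            peaks.append(i)
            last = i
    return peaks

def stride_times(peaks, fs):
    """Stride durations in seconds from successive peak indices."""
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

# Synthetic 100 Hz signal with two triangular "swings" one second apart.
fs = 100
gyro = [0.0] * 300
for centre in (100, 200):
    for k in range(-5, 6):
        gyro[centre + k] = 2.0 - 0.2 * abs(k)
peaks = detect_strides(gyro, fs)
print(peaks, stride_times(peaks, fs))  # -> [100, 200] [1.0]
```

Accumulating such stride times over a growing number of steps is what allows asking how many steps are needed before the per-user estimates stabilize.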
Augmented Reality (AR) is a technology on the rise. Due to its growing popularity and application in various domains, ways of increasing the user-friendliness and usability of AR are becoming more important. Context-awareness is one such way, as it can make an AR application adjust to the user, their situation, and their needs, making the application more ergonomic and easier to work with. Especially in mobile AR, context changes happen often, as users move around and use their devices in different environments and situations. To ease the development of context-aware applications for mobile AR, the modular development framework AARCon is proposed. AARCon consists of context monitoring features that keep track of the context of use and adaptation features that react to context changes. To show the potential of AARCon, a case study is presented in which AARCon is used to create a context-aware AR printer maintenance application.
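The monitor/adapt split can be sketched with a simple observer pattern; class and method names below are invented, as AARCon's actual API is not given in the abstract:

```python
# Minimal sketch of context monitoring feeding adaptation features.

class ContextMonitor:
    """Tracks named context values and notifies adaptations on change."""
    def __init__(self):
        self.context, self.listeners = {}, []
    def subscribe(self, listener):
        self.listeners.append(listener)
    def update(self, key, value):
        if self.context.get(key) != value:
            self.context[key] = value
            for listener in self.listeners:
                listener(key, value, self.context)

adaptations = []

def brightness_adaptation(key, value, ctx):
    # React to ambient-light changes, e.g. for legibility of AR overlays.
    if key == "ambient_light" and value == "bright":
        adaptations.append("increase overlay contrast")

monitor = ContextMonitor()
monitor.subscribe(brightness_adaptation)
monitor.update("ambient_light", "bright")
print(adaptations)  # -> ['increase overlay contrast']
```

Keeping sensing (the monitor) separate from reactions (the listeners) is what makes such a framework modular: new context sources and adaptations can be added independently.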
Complex websites need to evolve continuously as they contribute to shaping the relationships between the stakeholders involved. This paper presents conceptual ideas and a prototypical tool to facilitate participation in web redesign activities. The proposed meta-level support allows dedicated parts of web pages to be augmented with sets of design options. End users are provided with means to explore and assess the alternative design ideas in a parallel-prototyping manner and to create and share annotations. A real-world example is used to motivate and illustrate the suggested approach. The contribution to end-user design and cultures of participation is discussed.
Technology-mediated nudges to alter and aid decision making have been explored using various modalities in HCI. In this paper we propose user interfaces that use an electrical nerve stimulation mechanism as a 'closed-loop' nudging aid for navigation, enhanced nudge-feedback-based gaming, and human-to-human interaction. We demonstrate three major application interfaces: a) using nerve nudges together with a navigation API to assist walking on a predefined route, tested on people without visual impairments and extensible to application scenarios for people with visual impairments; b) improving gaming performance using forced motor events from nerve stimulation in a bidirectional car racing game; and c) enabling human-to-human control, which could further be used in neuro-feedback mechanisms. We also discuss future applications of our nudging mechanism in accident-avoidance systems and human-to-human learning.
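The closed-loop navigation nudge can be sketched as a control step: compare the user's heading with the bearing to the next waypoint and emit a left or right cue. The dead-zone value is an assumption, and the cue here is a plain string standing in for the stimulation hardware interface, which the abstract does not describe:

```python
# Hypothetical steering step of the closed-loop navigation nudge.

def steering_nudge(heading_deg, bearing_deg, dead_zone=15.0):
    """Return 'left', 'right', or None when within the dead zone."""
    # Signed heading error wrapped to [-180, 180).
    error = (bearing_deg - heading_deg + 180) % 360 - 180
    if abs(error) <= dead_zone:
        return None
    return "right" if error > 0 else "left"

print(steering_nudge(heading_deg=350, bearing_deg=20))  # -> right
print(steering_nudge(heading_deg=90, bearing_deg=85))   # -> None
```

Run each time the navigation API reports a position fix, this loop closes: heading error in, stimulation cue out, corrected heading back in.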
Smart homes (SmHs) integrate a number of user interaction technologies, which have the potential to cause privacy violations between occupants of the home, either through inappropriate information disclosures or through disturbances. One approach to protecting SmH occupants' privacy is to develop techniques for dynamically adapting the UI layer according to the context, as well as each occupant's preferences and capabilities. Therefore, the focus of this research is to develop a framework and a set of algorithms that can adapt the configuration and behavior of the UI layer in order to preserve the privacy of SmH occupants while maintaining usability.
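A toy example illustrates the kind of adaptation rule such a framework might apply: suppress or redirect an output when its content is private and other occupants are co-present. The policy, fields, and decisions below are invented:

```python
# Illustrative privacy-aware routing of a smart-home notification.

def route_notification(notification, occupants_present, owner):
    """Return the rendering decision for a smart-home notification."""
    others = [o for o in occupants_present if o != owner]
    if notification["private"] and others:
        # Avoid disclosing private content on a shared modality.
        return "deliver to owner's personal device"
    return "announce on shared display"

decision = route_notification({"text": "Lab results ready", "private": True},
                              occupants_present=["alice", "bob"],
                              owner="alice")
print(decision)  # -> deliver to owner's personal device
```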
Graphical User Interfaces (GUIs) are the most common way of interacting with the devices we use in everyday life. Given the important and long-lasting impact that the visual design of GUIs has on User Experience, its experimental study is of high importance. However, this activity suffers from a lack of reproducibility of experimental results, due to the significant amount of time and resources needed to conduct such experiments and create datasets. To address this problem, this thesis aims at developing an application whose purpose is to facilitate the construction of datasets in the context of experimental studies of GUIs. The application parameterizes the design of experimental studies related to GUIs and automates various steps in order to facilitate their deployment and to foster their reproducibility. We explain the research approach and the workflows and features underlying the application. Finally, we discuss the current state of the thesis and the future work to be achieved.
Virtual Reality applications have become popular over the last decade, gaining significant interest in industry and research. While the technology has its roots in the entertainment industry, there is an increase in research experiments and pilot studies in which the technology is applied in professional contexts for behaviour and skill training. However, implementing effective training systems is a complex task; the development process remains largely ad hoc because existing tools do not sufficiently support it. This thesis proposes a method for developing head-mounted-display-based virtual reality training applications for cognitively intensive training tasks. The method extends the concept of developing a parametrized system with run-time modifiable attributes and provides the necessary toolkit for implementing such applications. The primary target user group is developers and, by extension, designers and training administrators.
Crowdsourcing has great potential in supporting humans to be more creative. This doctoral dissertation explores crowd-powered creativity support systems and covers a research arc from the fundamental prerequisites of leveraging crowds for creativity support to an accompanying set of case studies to clarify how complex creative work can be supported in practice.
This tutorial provides an overview of and practical experience with task-model-based engineering of interactive systems. It highlights how recent advances in task modelling techniques can be exploited to design, develop, and assess interactive systems. We present, from an engineering perspective and using examples from industrial case studies, how task models can support the design, development, and evaluation of interactive systems. Furthermore, half of the tutorial focuses on the process of customizing task modelling notations for specific engineering needs and on applying it using the tool-supported HAMSTERS-XL notation.
We aim to explore the different approaches to designing interactive applications for groups of users who use a set of interacting surfaces to perform their tasks with an optimal user experience. Participants are invited to present the models and/or design methods, as well as the case studies and applications, they are studying in this context. We would like to set up a discussion group in order to put each person's work in perspective with the notion of territoriality applied to ambient computing and multiple devices. Territoriality theory may serve as a basis for the design of high-quality complex interactive applications. From these discussions, a mapping of models and design methods that could be shared and combined will emerge.