With the increasing power and flexibility of technologies available to consumers, we are seeing a revolution in how assistive technology (AT) is being created and by whom. This talk will highlight the potential of these technologies for people with disabilities, as well as the challenges that end users face in leveraging them effectively to address AT issues.
In the literature, automation is usually addressed as a goal (producing systems that are as autonomous as possible), as a process (producing systems with autonomous behaviors), or as a state (a system performing in an autonomous way). These uses suggest that automation is a global concept that does not need decomposing. However, when designing systems (including interactive systems), automation can only be incorporated at a very low level of detail, when some functions (previously performed by humans) are migrated to the system. There is a similarity between this global vision of automation and the global vision of the human body in biology before the advent of anatomy (which aims at decomposing organisms into parts) and physiology (which aims at understanding the functions of organisms and their parts). This presentation will follow the path of anatomy and physiology to better understand what automation is, how automation can be designed, how partly autonomous systems can better support users, and why full automation is a desirable but unwise target.
Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically reverse engineer user interfaces and generate code from a single input image, with over 77% accuracy for three different platforms (i.e., iOS, Android, and web-based technologies).
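To make the approach concrete, here is a minimal, hypothetical sketch of an image-to-code model of the kind the paper describes: a convolutional encoder summarizes the screenshot and a recurrent decoder emits code tokens. Layer sizes, the vocabulary, and all names are illustrative assumptions, not the architecture reported in the paper.

```python
# Illustrative sketch only: a CNN screenshot encoder feeding an LSTM decoder
# that emits code/DSL tokens. Sizes and vocabulary are hypothetical.
import torch
import torch.nn as nn

class Screenshot2Code(nn.Module):
    def __init__(self, vocab_size=100, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(            # encode the input image
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden),
        )
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.LSTM(hidden * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)            # next-token scores

    def forward(self, image, tokens):
        ctx = self.encoder(image)                            # (B, hidden)
        ctx = ctx.unsqueeze(1).expand(-1, tokens.size(1), -1)
        x = torch.cat([self.embed(tokens), ctx], dim=-1)     # token + image context
        out, _ = self.decoder(x)
        return self.head(out)                                # (B, T, vocab)

model = Screenshot2Code()
logits = model(torch.randn(1, 3, 128, 128), torch.zeros(1, 12, dtype=torch.long))
```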
The TOUCAN IDE (Integrated Development Environment) provides support for building effective interactive applications programmed in Java Swing or in JavaFX. Taking into account the effectiveness factor of usability requires developers to guarantee that the application allows users to reach their goals and to complete their tasks. This means that users' goals and tasks have been analyzed and that an explicit description of them is available. By providing support for mapping and co-execution of task descriptions with interactive application software, the TOUCAN IDE enables developers to program an interactive application and, at the same time, to support its effectiveness. In this article we highlight the main features of TOUCAN, as well as its underlying principles for enabling the mapping and co-execution of an interactive application with its associated task models.
Task modeling is a fundamental activity in the model-based design of user interfaces (MB-UID). It is supported by various task notations and tools which allow users, for example, to edit and to animate task models. Most of the tools offer graphical editors. While it is commonly believed that graphical notations and interactive specification of task models are to be preferred to textual specifications, there are also drawbacks to this approach. For instance, it is more difficult to sketch first ideas or to switch between different modeling tools, although existing task notations for MB-UID share many concepts. The paper presents a text-based domain-specific language called DSL-CoTaL for writing task specifications. It integrates essential concepts from existing approaches such as hierarchical task decomposition, temporal constraints between subtasks, collaborative tasks, and generic task components. DSL-CoTaL comes with a syntax-driven editor and can easily provide code generation for graphical editors, as shown by the examples of CoTaSE, CTTE, and HAMSTERS.
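To illustrate the concepts such a textual notation captures, the following Python sketch models hierarchical task decomposition with temporal operators between subtasks. The class, the operator symbols, and the example tasks are hypothetical; this is not DSL-CoTaL's actual syntax.

```python
# Minimal sketch of hierarchical task decomposition with temporal operators.
# Operator symbols are hypothetical illustrations, not DSL-CoTaL syntax.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    operator: str = ">>"          # ">>" enabling, "[]" choice, "|||" interleaving
    subtasks: list["Task"] = field(default_factory=list)

    def pretty(self, depth=0):
        pad = "  " * depth
        print(f"{pad}{self.name}" + (f"  ({self.operator})" if self.subtasks else ""))
        for t in self.subtasks:
            t.pretty(depth + 1)

withdraw = Task("WithdrawCash", ">>", [
    Task("InsertCard"),
    Task("EnterPIN"),
    Task("SelectAmount", "[]", [Task("Preset"), Task("Custom")]),
    Task("TakeCashAndCard"),
])
withdraw.pretty()
```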
Many requirements for quality in use are elicited late in development. When requirements emerge this late, development may have to return to an earlier phase, or some requirements cannot be realized due to cost and schedule constraints. To reduce such cases, we propose a method to elicit these requirements during the requirements analysis phase. First, software developers analyze the user characteristics (UCs) of the target users and specify important quality characteristics (QCs) for quality in use and UI design items, based on the relationships among UCs, QCs, and UI design items. Because UI design items are considerations for developing UIs, the specified UI design items are elicited as UI requirements. Thus, once the important QCs are specified, UI requirements can be easily elicited by tracing the relationships from QCs to UI design items.
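As an illustration of this tracing step, the following toy sketch encodes hypothetical UC-to-QC and QC-to-UI-design-item relationships and elicits UI requirements by following the links; all mappings are invented for illustration.

```python
# Toy traceability sketch: UC -> QC -> UI design item. Mappings are hypothetical.
UC_TO_QC = {
    "low vision": ["accessibility", "effectiveness"],
    "novice user": ["learnability"],
}
QC_TO_UI_ITEM = {
    "accessibility": ["scalable fonts", "high-contrast theme"],
    "effectiveness": ["undo support"],
    "learnability": ["guided tour", "consistent labels"],
}

def elicit_ui_requirements(user_characteristics):
    """Trace UC -> QC -> UI design items and return them as UI requirements."""
    requirements = []
    for uc in user_characteristics:
        for qc in UC_TO_QC.get(uc, []):
            for item in QC_TO_UI_ITEM.get(qc, []):
                if item not in requirements:
                    requirements.append(item)
    return requirements

print(elicit_ui_requirements(["low vision", "novice user"]))
```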
Testing strategies for interactive systems require that we find suitable ways of combining tests of functionality with tests of interfaces and interactivity. For safety-critical interactive systems the problem is harder because higher testing thresholds are needed to ensure that safety is preserved. Conversely, for safety-critical systems we are more likely to have a larger set of artefacts, such as formal models and specifications, which can be used as the basis for test generation. The challenge is in finding ways of making use of what may be a diverse set of such models and specifications to assist with the testing process. In this paper we describe how we incorporate interactive system models with behavioural specifications to automatically generate test stubs. We show how the declarative test language Gherkin and its associated Cucumber tool can be integrated into an interactive system modelling environment to achieve this. The tool can either convert interaction models into behavioural models or vice versa, and both sets of models are then used to generate test stubs.
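Gherkin and Cucumber are named by the paper; the sketch below uses behave, a Python counterpart of Cucumber, to suggest what generated step stubs could look like. The scenario wording and the `app` driver object are hypothetical.

```python
# Sketch of behave-style step stubs for a Gherkin scenario (hypothetical).
#
# features/infusion.feature (Gherkin):
#   Scenario: Start an infusion
#     Given the pump is in the settings screen
#     When the operator confirms the dose
#     Then the infusion starts
from behave import given, when, then

@given("the pump is in the settings screen")
def step_settings(context):
    context.app.goto("settings")        # drive the system under test

@when("the operator confirms the dose")
def step_confirm(context):
    context.app.press("confirm")

@then("the infusion starts")
def step_check(context):
    assert context.app.state == "infusing"
```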
Modelling interactive systems is an important part of a sound software engineering process. To support this and reduce the time it takes to create and update the models, we typically need appropriate tools. Interaction sequences are a particular type of model for interactive systems which allow us to reason about their behaviour when a user interacts with the system. In this paper we describe the 'Sequence Simulator' tool, which allows us to build, modify, and manipulate interaction sequence models. The tool also supports model abstraction using the 'self-containment' property, and we show how this automatically abstracts and expands the state space of the models as required.
This paper presents ongoing work on a formal framework for the model-based analysis of human-machine interaction across multiple critical systems. The framework builds on classical results from applied psychology on selective attention and working memory. It is intended to help developers of interactive critical systems identify plausible multitasking strategies that operators are likely to adopt when using multiple interactive systems at the same time, and to estimate the memory load necessary to complete concurrent tasks. This type of analysis is especially useful at the early stages of system design, to better understand the effort needed to operate the system when an implementation or prototype is unavailable. The analysis can also be used retrospectively, to analyse already implemented systems and complement results from user studies. An example based on infusion pumps, used in chemotherapy to infuse doses over a period of time, demonstrates the utility of the framework. The framework makes it possible to model the interactive tasks necessary to configure the pumps and start the infusion. The results of the analysis identify situations where the operator is unable to carry out the task because of omission errors. These results are in line with experimental results reported in the literature, and may provide more detailed hypotheses that can be validated experimentally.
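As a toy illustration of the kind of estimate involved (not the paper's formal framework), the following sketch interleaves the steps of two hypothetical pump-configuration tasks and tracks the peak number of items that must be held in working memory.

```python
# Toy working-memory load estimate for two interleaved tasks (hypothetical
# task steps and memory demands, not the paper's psychological model).
from itertools import zip_longest

# Each step is (description, items_added_to_memory, items_released).
PUMP_A = [("read dose", 1, 0), ("enter dose", 0, 1), ("start", 0, 0)]
PUMP_B = [("read rate", 1, 0), ("enter rate", 0, 1), ("start", 0, 0)]

def peak_memory_load(task_a, task_b):
    """Strictly alternate between the two tasks and report the peak load."""
    load, peak = 0, 0
    for pair in zip_longest(task_a, task_b):
        for step in pair:
            if step is None:
                continue
            _, added, released = step
            load += added - released
            peak = max(peak, load)
    return peak

print(peak_memory_load(PUMP_A, PUMP_B))  # -> 2 items held concurrently
```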
Accessibility of Web pages and of graphical user interfaces (GUIs) in general is a major concern for their creation and, in particular, their automated generation. While automated runtime generation and adaptation addressed such problems for native GUIs some time ago, Web pages generated at design time are not flexible enough.
In this paper, we specifically address low-vision accessibility of Web pages. Recent advances in responsive design help in this regard, but responsive designs usually have to be created manually. Instead, we study combining automated design-time generation of Web pages with responsive design to improve accessibility. We built a prototype using Bootstrap as a potential Final User Interface of automated design-time generation of Web pages. It demonstrates the feasibility of such a combination and even reveals new insights.
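A minimal sketch of the idea, under the assumption that design-time generation renders an abstract field list into Bootstrap's responsive grid (full-width columns on small screens, half-width on large ones); the field model and the column split are illustrative choices, not the prototype's actual implementation.

```python
# Hypothetical design-time generation targeting Bootstrap as the Final UI.
FIELDS = [("name", "Name"), ("email", "Email"), ("phone", "Phone")]

def render_form(fields):
    """Render an abstract field list into responsive Bootstrap markup."""
    cols = "\n".join(
        f'    <div class="col-12 col-lg-6">\n'
        f'      <label class="form-label" for="{fid}">{label}</label>\n'
        f'      <input class="form-control" id="{fid}" name="{fid}">\n'
        f'    </div>'
        for fid, label in fields
    )
    return f'<form>\n  <div class="row g-3">\n{cols}\n  </div>\n</form>'

print(render_form(FIELDS))
```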
The IVY workbench is a model-checking-based tool for the analysis of interactive system designs. Experience shows that there is a need to complement the analytic power of model checking with support for model validation and analysis of verification results. Animation of the model provides this support by allowing iterative exploration of its behaviour. This paper introduces a new model animation plugin for the IVY workbench. The plugin (AniMAL) complements the modelling and verification capabilities of IVY by allowing users to interact directly with the model.
This paper describes the design and implementation of Alice, an online collaboration system that integrates several commercial cloud storage services and reconfigures team members' interactions into a time-oriented visualization. Designers use social media and other general-purpose tools to collaborate across organizations because of their accessibility and ease of use and learning. However, such tools are cumbersome for covering the specific collaboration needs of design activities. Alice elicits ad hoc collaboration in design teams by enabling team members to use their own set of tools in a seamless way. Participants highlighted that Alice serves as a central reference for the coordination of teamwork by providing an overview of the rhythms and temporal relationships of design activities. Our work suggests that collaboration tools can be portrayed as technologies of time production. This reveals opportunities for future research on how the temporal agency embodied by collaboration tools affects collaborative activities in design.
Persuasive technology aims to support people in changing their attitudes and/or behaviors in a sustainable way. Application areas include energy saving, green mobility, medical adherence, addiction, and more. The urgency of these societal challenges has made the field highly successful. We propose UP!, a problem space to structure the exploration of the design space, as an increment to the SEPIA framework. UP! combines two perspectives, the User and the Phenomenon under study, in order to create the right system: one that helps the user understand the problematic phenomenon and act appropriately to change sustainably.
Context-oriented programs take detected contexts into account to alter an application so that it exhibits the most appropriate behaviour for each particular context. However, Context-Oriented Programming (COP) focuses mostly on adapting behavioural aspects while largely ignoring adaptation of the user interface. In the HCI community, on the other hand, much research has been done on user interface adaptation, but without paying much attention to the behavioural aspect. This PhD work seeks to reconcile both communities to allow programmers to build realistic applications that are more sensitive to their surrounding environment, and to evaluate the user acceptance of such systems.
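To make the COP idea concrete, here is a hand-rolled Python sketch (no particular COP language or library is assumed) in which activating a context switches both the behaviour and the rendered user interface; all names are hypothetical.

```python
# Hand-rolled sketch of context-oriented dispatch (hypothetical names).
class Contexts:
    active = set()

def contextual(variants):
    """Dispatch to the variant whose context is active, else the default."""
    def call(*args, **kwargs):
        for ctx, fn in variants.items():
            if ctx in Contexts.active:
                return fn(*args, **kwargs)
        return variants["default"](*args, **kwargs)
    return call

render_map = contextual({
    "default": lambda: "full map with satellite tiles",
    "low_battery": lambda: "grayscale vector map",   # adapted behaviour + UI
})

print(render_map())                 # full map with satellite tiles
Contexts.active.add("low_battery")
print(render_map())                 # grayscale vector map
```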
Continuous Software Engineering (CSE) activities, i.e., rapidly delivering new software functionality to users and implementing received feedback, have become an established development practice for creating interactive systems. The frequency of software changes turns the feedback loop with users into a critical element of CSE that has not been addressed sufficiently; as a result, it may be challenging for developers to understand users' software usage. This research project aims to enable a better understanding of users during CSE. We investigate relevant usage knowledge needs, the unobtrusive collection of usage data by software and hardware sensors, how usage data can be related to feature increments, and ways to externalize tacit usage knowledge. Leveraging these insights, we develop a platform to monitor, visualize, and understand usage knowledge to support developers during the design and development of interactive systems. The overall goal is to accurately fit the functionality of interactive systems to user needs.
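One way to picture relating usage data to feature increments is sketched below: a hypothetical software 'sensor' tags each usage event with the increment that introduced the feature, so usage can later be traced back to changes. Event names and increment identifiers are invented for illustration.

```python
# Hypothetical usage 'sensor': tag events with the owning feature increment.
import time
from collections import defaultdict

usage_log = defaultdict(list)

def record_usage(feature_increment, event):
    """Attach a timestamped usage event to a feature increment."""
    usage_log[feature_increment].append((time.time(), event))

record_usage("increment-dark-mode", "toggle_clicked")
record_usage("increment-dark-mode", "toggle_clicked")
record_usage("increment-export-csv", "export_started")

for increment, events in usage_log.items():
    print(f"{increment}: {len(events)} events")
```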
Interaction sequences (ISeqs) are an abstraction of interactive systems which allows us to inspect interactive system behaviour. In this research, ISeqs are used to support interactive system testing. In interactive system testing, the functional and interactive components of a system are often tested separately. However, errors can still occur where these components overlap. Therefore, we must investigate ways to test this overlap as part of a more comprehensive testing approach. ISeqs provide a clear view of this overlap, which we use to inform a model-based testing approach. By testing not only the functional and interactive components of a system but also their overlap, ISeqs give us a way to better cover the interactive system state space, improving system reliability and safety.
Large companies rely on recommender systems to support users in making decisions and analysing large datasets of items. In critical contexts, such as civil aircraft cockpits, recommender systems can be a powerful tool to support operators in their tasks. Operators may have to choose the right option from a set of alternatives, depending on the current alerts and context. The main goal of this PhD is to propose a model-based approach for the design and development of dependable and usable recommender systems. This paper elaborates on the challenges of, and approaches for, engineering dependable and usable recommender systems.
The increasing variety of input/output devices and functionalities in interactive systems raises concerns regarding the way they are tested. Indeed, while most existing testing techniques are suitable for interactive systems supporting WIMP interactions, only a few of them partially support the testing of highly interactive systems (e.g. smart speakers or smartwatches). We claim that techniques and tools that provide support for the testing of highly interactive systems must take into account the entire architecture of interactive systems. This includes the variety of I/O devices and their drivers, permissions at the operating system level, applications running on the system, etc. In this doctoral consortium paper, we present an ongoing PhD in which we propose to define an approach and tools to support the testing of all the elements of the architecture of highly interactive systems.
The rapid growth of the aging population and the increasing cost of hospitalization are creating an urgent need for remote health monitoring systems. Physiological sensing devices enable early detection of health issues and allow for prompt treatment, helping elderly people correct anomalous behavior and maintain a healthy lifestyle. Our approach exploits task models to produce scenarios (the expected user behavior) and a middleware component, the Context Manager, to detect the events that happen in the real context. A real-time algorithm then compares the expected user behavior to the behavior actually detected in the user's context to find anomalies, if any. Finally, we validated our approach via a simulator which automatically generates anomalous sequences of user activities. The experimental results show that our system can detect abnormal user behavior precisely and effectively. In addition, the system should be able to generate appropriate actions based on the detected deviations to motivate older people towards a healthy lifestyle.
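A toy version of the comparison step is sketched below: the expected scenario stands in for a task-model trace, the observed events for the Context Manager's output, and a simple sequence alignment flags deviations. Activity names and the alignment method are illustrative assumptions, not the paper's algorithm.

```python
# Toy expected-vs-observed comparison using a standard sequence alignment.
import difflib

expected = ["wake_up", "take_medication", "breakfast", "walk"]
observed = ["wake_up", "breakfast", "walk", "take_medication"]

def find_anomalies(expected, observed):
    """Report activities that are missing, unexpected, or out of order."""
    anomalies = []
    matcher = difflib.SequenceMatcher(None, expected, observed)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "delete":
            anomalies.append(f"missing: {expected[i1:i2]}")
        elif op == "insert":
            anomalies.append(f"unexpected: {observed[j1:j2]}")
        elif op == "replace":
            anomalies.append(f"deviation: expected {expected[i1:i2]}, got {observed[j1:j2]}")
    return anomalies

print(find_anomalies(expected, observed))
```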