Automatic analysis of human affective and social signals has brought computer science closer to the social sciences and, in particular, has enabled collaborations between computer scientists and behavioral scientists. In this talk, I highlight the main research areas in this burgeoning interdisciplinary field and provide an overview of its opportunities and challenges. Drawing on examples from our recent research, such as the automatic analysis of interactive play therapy sessions with children and the diagnosis of bipolar disorder from multimodal cues, as well as on examples from the growing literature, I explore the potential of human-AI collaboration, in which AI systems do not replace but rather support monitoring and human decision making in the behavioral and clinical sciences.
In this talk I will discuss a common and pernicious form of technical debt called design debt, or architecture debt. I will briefly present the theoretical foundation behind this form of debt and a broad set of evidence demonstrating its dramatic effects on project outcomes. That is the bad news. The good news is that we can automatically pinpoint the causes and scope of such debt. I will describe how we can automatically locate it, measure it, and create the business case for removing it. Finally, I will explain how we can remove, or pay down, this debt via refactoring. I will also sketch some of my experiences in transitioning the tooling to industrial partners, and describe some of their experiences and outcomes.
Mental health conditions pose a major challenge for individuals, healthcare systems and society, and the COVID-19 pandemic has likely worsened this issue. According to the Mental Health Foundation of New Zealand, one in five people will develop a serious mood disorder, including depression, at some time in their life. Co-designed solutions to increase resilience and well-being in young people have specifically been recognised as part of the National Suicide Prevention Strategy and the New Zealand Health Strategy. Virtual Reality (VR) in mental health is an innovative field. Recent studies support the use of VR technology in the treatment of anxiety, phobias, and pain management. However, there is little research on using VR to support, treat and prevent depression, and very little work on offering an individualised VR experience to improve mental health. In our earlier work, we presented iVR, a novel individualised VR experience for enhancing people's self-compassion and, in the long run, their mental health, and described its design and architecture. In this paper, we outline the results of a recently conducted feasibility study. Most participants believed that introducing elements of choice within iVR enhanced their user experience and that iVR had the potential to enhance people's self-compassion. We also approached seven mental health professionals for feedback; they felt that introducing elements of choice within iVR would increase their knowledge of clients. Our contribution can pave the way for large-scale efficacy testing, clinical use, and cost-effective delivery of intelligent individualised VR technology for mental health therapy in the future.
In this paper, we describe the design, development and preliminary evaluation of a novel animation tool for smartphones and tablets, aimed at improving creativity and problem-solving ability among novice users, mainly young children. The user first draws objects with markers and crayons on paper and takes a photo with their phone or tablet to import the drawings into the app as game sprites. They then design their animation scene, point to a part of a character, and select its animation type from existing buttons. The selected part is then animated in a vertex shader while ensuring a smooth transition with the non-pointed part. The results of a preliminary evaluation showed that young children enjoyed interacting with our tool and would recommend it to their friends.
Keywords and Phrases: Creativity; video games; visual animation; ease of use.
With the advance of new technologies there is a growing interest in augmenting human senses with sensory information. This paper aims at using electrical muscle stimulation (EMS) to support sensing the proximity of objects. We propose a concept that relates distance to properties of electrical muscle stimulation. Our study provides first evidence that our approach can convey information on the proximity of objects to users of the EMS system. We report on the correlations between a simulated proximity and the proximity felt based on EMS. Based on this, we propose fields of application that can benefit from such a system. Our approach can be used where people need to be notified of spatial information, e.g., dangerous objects approaching from behind.
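The abstract does not specify how distance is mapped to stimulation properties; one simple way to realize such a concept is a linear mapping from object proximity to stimulation intensity. The sketch below is purely illustrative, with the function name, sensing range and intensity bounds invented for the example, not taken from the study.

```python
# Illustrative sketch (not the study's actual mapping): relate object
# distance to an EMS intensity percentage that grows as the object
# approaches. Range limits and percentages are invented placeholders.

def ems_intensity(distance_m, max_distance_m=2.0, min_pct=0.0, max_pct=100.0):
    """Map a proximity reading in meters to a stimulation intensity percentage."""
    d = max(0.0, min(distance_m, max_distance_m))   # clamp to sensing range
    closeness = 1.0 - d / max_distance_m            # 1.0 when touching, 0.0 at range limit
    return min_pct + closeness * (max_pct - min_pct)

print(ems_intensity(2.0))  # 0.0  (object at the edge of the sensing range)
print(ems_intensity(0.0))  # 100.0 (object immediately behind the user)
```

A non-linear mapping (e.g., exponential growth near the body) could equally be plugged in; the paper's reported correlations would guide which curve feels most natural.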
The ever-increasing availability and variety of resources for creating physical computing systems keep attracting electronics hobbyists and do-it-yourself enthusiasts. Nevertheless, prototyping and developing these systems remain challenging for novices. In this paper, we propose a tool, built on top of the Jupyter computational notebook, to support step-by-step assisted learning and knowledge sharing. We extended the Jupyter notebook functionalities and implemented a custom-tailored kernel that seamlessly enables interaction between the end-user web interface and Arduino boards. We consider that this approach can effectively support physical computing novices in understanding, writing, and executing code while empowering them to document and share the development steps they followed.
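The abstract does not detail how the custom kernel decides what to do with a cell; a common pattern in such kernels is to route cells by a leading magic command, sending board-targeted cells to the Arduino and running the rest on the host. The sketch below illustrates only that routing idea; the class, magic names, and labels are invented and do not reflect the tool's actual API.

```python
# Hypothetical sketch of cell routing inside a custom Jupyter kernel:
# cells beginning with a board-related magic (e.g. %upload) would be
# forwarded to the Arduino toolchain, everything else runs on the host.
# All names here are illustrative assumptions.

class ArduinoCellRouter:
    """Decides whether a notebook cell targets the board or the host."""

    BOARD_MAGICS = ("%upload", "%serial")

    def route(self, cell_source: str) -> str:
        stripped = cell_source.strip()
        first_line = stripped.splitlines()[0] if stripped else ""
        for magic in self.BOARD_MAGICS:
            if first_line.startswith(magic):
                return "board"
        return "host"

router = ArduinoCellRouter()
print(router.route("%upload\nvoid setup() {}"))  # board
print(router.route("x = 1 + 1"))                 # host
```

In a real kernel, the "board" branch would invoke the Arduino CLI to compile and flash the sketch, while the "host" branch would fall back to normal Python execution.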
We present MR4ISL (Mixed Reality for Implicit Social Learning), a HoloLens application designed to examine the psychological aspects involved in implicit social learning, a key process responsible for information acquisition at an unconscious level that influences humans’ behavioral, cognitive, and emotional functioning. We describe the engineering details of MR4ISL, present our roadmap for identifying technical solutions for believable animations of virtual avatars relevant for implicit social learning, and exemplify use cases of MR4ISL that emerged from discussions with three researchers from psychology. To date, MR4ISL is the only tool that uses mixed reality simulations to increase the external validity of psychological research in the study of implicit social learning.
Taking measurements of scaffolds is a key process in the scaffolding business for resource planning and invoicing. Nevertheless, mostly analog methods are used for this purpose; they are cumbersome and time-consuming and often lead to errors and uncertainties in post-processing due to missing context information or imprecise measures. To overcome these issues, we evaluate Augmented Reality (AR) as an alternative for improving the current process of planning and measurement. To this end, the ARMS framework for measurement and planning of scaffolds was developed and implemented as an Android AR application. As first evaluation results based on expert interviews indicate, the implemented AR application has the potential to improve the current analog process of taking measurements in the scaffolding business.
Engineering the dialog of a humanoid robot like Pepper with human beings is challenging. The software from SoftBank allows the programming of spoken sentences and the understanding of specific words for apps on the robot. However, we recognized that the flexibility of the software is limited in this way. Therefore, we developed tools based on state machines that run on a server and communicate with the robot. In this way, Pepper becomes a kind of thin robot. The paper discusses the architecture of the software, its advantages, and the tools used within our project E-BRAiN (Evidence-Based Robot Assistance in Neurorehabilitation). Additionally, one of our identified interaction patterns is discussed.
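The server-side state-machine idea can be sketched in a few lines: the machine holds the current dialog state, advances on words recognized by the robot, and returns the next sentence for Pepper to speak. The states, trigger words, and utterances below are invented for illustration and are not taken from the E-BRAiN tools.

```python
# Illustrative sketch (not the E-BRAiN implementation) of a server-side
# dialog state machine: recognized words drive transitions, and each
# transition yields the sentence the robot should speak next.

class DialogStateMachine:
    def __init__(self):
        self.state = "greeting"
        # (current state, recognized word) -> (next state, robot utterance)
        self.transitions = {
            ("greeting", "hello"): ("exercise_intro", "Let us start the exercise."),
            ("exercise_intro", "ready"): ("exercise", "Please lift your arm."),
            ("exercise", "done"): ("farewell", "Well done, goodbye!"),
        }

    def on_word(self, word: str) -> str:
        """Advance on a recognized word; return the robot's next sentence."""
        key = (self.state, word)
        if key in self.transitions:
            self.state, utterance = self.transitions[key]
            return utterance
        return "Sorry, I did not understand."

sm = DialogStateMachine()
print(sm.on_word("hello"))  # Let us start the exercise.
print(sm.on_word("ready"))  # Please lift your arm.
```

Keeping this logic on the server is what makes the robot "thin": Pepper only speaks the returned sentences and reports recognized words, while all dialog flow lives in easily modifiable server code.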
We describe the software architecture of a toolbox of reusable components for the configuration of convolutional neural networks (CNNs) for classification and labeling problems. The toolbox architecture has been designed to maximize the reuse of established algorithms and to include domain experts in the development and evaluation process across different projects and challenges. In addition, we implemented easy-to-edit input formats and modules for XAI (eXplainable AI) through visual inspection capabilities. The toolbox is available for the research community to implement applied artificial intelligence projects.
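A toolbox of reusable, configurable components is often organized around a registry that maps component names (as they appear in an easy-to-edit config) to factories. The abstract does not describe the toolbox's actual API, so the registry, decorator, and pipeline builder below are purely hypothetical illustrations of that pattern.

```python
# Hypothetical sketch of a component-registry pattern, one common way to
# make building blocks reusable and configurable across projects. The
# names here are invented; the toolbox's real API may differ entirely.

REGISTRY = {}

def register(name):
    """Decorator that adds a component factory to the shared registry."""
    def wrap(factory):
        REGISTRY[name] = factory
        return factory
    return wrap

@register("normalize")
def make_normalizer(scale=255.0):
    # Toy preprocessing component: scale pixel values into [0, 1].
    return lambda values: [v / scale for v in values]

def build_pipeline(config):
    """Instantiate components from an easy-to-edit config list of steps."""
    return [REGISTRY[step["name"]](**step.get("args", {})) for step in config]

pipeline = build_pipeline([{"name": "normalize", "args": {"scale": 255.0}}])
print(pipeline[0]([255.0, 127.5]))  # [1.0, 0.5]
```

With such a registry, domain experts can change a project's configuration file, swapping preprocessing, model, or XAI visualization components, without touching the underlying code.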
This note discusses how to build a real-time recognizer for stroke gestures based on Long Short-Term Memory (LSTM) recurrent neural networks. The recognizer provides both the gesture class prediction and the completion percentage estimation for each point in the stroke while the user is performing it. We considered the stroke vocabulary of the $1 and $N datasets and defined four different architectures. We trained them using synthetic data and assessed the recognition accuracy on the original $1 and $N datasets. The results show an accuracy comparable with state-of-the-art approaches when classifying the completed stroke, and good precision in the completion percentage estimation.
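Training a network to estimate per-point completion requires a ground-truth completion value for every point; a natural choice, assumed here since the abstract does not specify it, is the fraction of cumulative arc length traversed up to each point. The sketch below shows that label computation only, not the LSTM itself.

```python
import math

# Illustrative sketch of deriving per-point completion targets for
# training: completion at point i = cumulative arc length up to i
# divided by total stroke length. This label scheme is an assumption,
# not necessarily the one used in the note.

def completion_targets(points):
    """points: list of (x, y); returns a completion value in [0, 1] per point."""
    if len(points) < 2:
        return [1.0] * len(points)
    segments = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    total = sum(segments) or 1.0          # guard against degenerate zero-length strokes
    targets, travelled = [0.0], 0.0
    for length in segments:
        travelled += length
        targets.append(travelled / total)
    return targets

print(completion_targets([(0, 0), (1, 0), (2, 0)]))  # [0.0, 0.5, 1.0]
```

At inference time, the LSTM would emit, for each incoming point, both a class distribution over the $1/$N vocabulary and a regression output trained against targets like these.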
Cyber-Physical System (CPS) development becomes challenging when digital representations, so-called Digital Twin (DT) models, are synchronized with physical counterparts for real-time design and control. Learner support requires transparent and intelligible capacity-building tools to grasp the idea of such complex systems and to develop integrated design and engineering skills. Tangible components can help as physical cognitive artifacts for Internet-of-Things (IoT) applications that are part of a CPS. However, the intertwining of physical and digital elements requires a didactically grounded vocabulary and development concept for learning with tangible artifacts as mental representations. In this paper we introduce an integrated tangible design and engineering learning support system. It synchronizes physical IoT components with their DT representation, composed by means of M5Stack (m5stack.com), UIFlow© (flow.m5stack.com) using Blockly for visual programming, and the diagrammatic D3j-editor (D3js.org) for editing DTs and monitoring/controlling CPS behavior. Learners can design and engineer a CPS either via tangible IoT (control) elements or digitally. Prepared use cases from smart home management and home healthcare serve as teasers for actively engaging learners in CPS development tasks.
We designed a framework to generalize the development of applications with UI elements distributed across co-located devices. To deal with the complexity of such a task, the framework comprises diverse components, including: authentication and authorization services; a broker to sync information across multiple application instances; background services that gather the capabilities of the devices; an indoor positioning system to determine when devices are close to each other; and a library that helps integrate Web applications with the broker, determines which components to show based on UI requirements and device capabilities, and provides custom elements to manage the distribution of the UI components and the multiple UI application states. Collaboration is supported by sharing UI states with other users.
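At the heart of the component list above is the broker that keeps UI state synchronized across application instances; its essence is a publish-subscribe store. The minimal sketch below illustrates that idea only, with invented names, and is not the framework's actual implementation.

```python
# Minimal illustrative sketch of a state-sync broker: instances
# subscribe with a callback, and every published state change is stored
# and pushed to all subscribers. Names are invented for the example.

class Broker:
    def __init__(self):
        self.subscribers = []
        self.state = {}       # last known value per UI-state key

    def subscribe(self, callback):
        """Register an application instance's update callback."""
        self.subscribers.append(callback)

    def publish(self, key, value):
        """Record a UI-state change and fan it out to all instances."""
        self.state[key] = value
        for callback in self.subscribers:
            callback(key, value)

received = []
broker = Broker()
broker.subscribe(lambda k, v: received.append((k, v)))
broker.publish("volume", 7)
print(received)       # [('volume', 7)]
print(broker.state)   # {'volume': 7}
```

In the distributed setting, each device's library component would play the subscriber role, and keeping the last value per key lets late-joining devices catch up on the current UI state.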
Prototyping is an essential step in developing tangible experiences and novel devices, ranging from haptic feedback to wearables. However, prototyping actuated devices nowadays often requires repetitive and time-consuming steps, such as wiring, soldering, and programming basic communication, before HCI researchers and designers can focus on their primary interest: designing interaction. In this paper, we present ActuBoard, a prototyping platform to support 1) quick assembly, 2) less preparation work, and 3) the inclusion of non-tech-savvy users. With ActuBoard, users are not required to create complex circuitry, write a single line of firmware, or implement communication protocols. Acknowledging existing systems, our platform combines the flexibility of low-level microcontrollers with the ease of use of abstracted tinkering platforms to control actuators from separate applications. As a further contribution, we highlight the technical specifications and have published the ActuBoard platform as open source.
The CHIIoT workshop series brings together researchers and practitioners from human-computer interaction (HCI) design, computer science, and electrical engineering working on new challenges in industry and academia. At EICS 2021, this workshop will provide a platform for participants to review and discuss challenges and opportunities at the intersection of computer-human interaction and the Internet of Things, focusing on human-centered applications using emerging connectivity and sensing technologies. We aim to jointly develop a design space and identify opportunities for future research.
Traditionally, most UX designers, computer scientists and software engineers have not had to consider risks to the public from the use of their systems. However, the current evolution of digital systems, in terms of the increasing number of users, their growing complexity and the pervasiveness of Artificial Intelligence techniques, allows ordinary HCI designers and engineers to build systems that create risks for individuals, groups of people, or even society as a whole.
In this workshop, we aim to collect views and current practice in the management of risks and benefits in the engineering of interactive digital systems. Such a view will pave the way for new research, methods, and tools to incorporate risk analysis into current engineering and design practices.
The workshop is proposed on behalf of the IFIP Working Groups 2.7/13.4 on User Interface Engineering.