There are many opinions on how to write an influential CHI paper, ranging from writing in an active voice to including colons in the title. However, little is known about how we actually write, and how writing influences impact. To investigate, we conducted quantitative analyses of the full text of all 6578 CHI papers published since 1982. We looked at readability, titles, novelty, and name-dropping, and related these measures to the papers' citation counts, both overall and for different subcommittees. We found that CHI papers are more readable than papers from other fields. Furthermore, readability, title length, and novelty markers all influence citation counts.
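As a minimal sketch of how such a readability-versus-citations analysis can be set up, assuming the standard Flesch Reading Ease formula, a crude vowel-group syllable counter, and made-up placeholder data (this is illustrative, not the authors' actual pipeline):

```python
# Illustrative sketch: score texts with Flesch Reading Ease and
# rank-correlate readability with citation counts (placeholder data).
import re
from scipy.stats import spearmanr

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, with a floor of one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syll = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)

# Hypothetical (full_text, citation_count) pairs standing in for the corpus.
papers = [
    ("We present a simple, clear method. It works well.", 120),
    ("Notwithstanding heterogeneous epistemological commitments, we problematize it.", 8),
    ("Users liked the tool. We explain why.", 65),
    ("This paper interrogates multidimensional sociotechnical imaginaries.", 15),
]
readability = [flesch_reading_ease(text) for text, _ in papers]
citations = [c for _, c in papers]
rho, p = spearmanr(readability, citations)  # rank correlation
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```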
In August 2018, a student protest in Bangladesh sought justice after two school students were run over by a public bus. Student protesters demonstrated on the street for days until they were physically attacked. Concurrent with the physical attacks, the country experienced a disconnected Internet, restrictions on social media usage, and several high-profile arrests of people speaking about the incidents. These suppressive encounters created what we call a "digital silence." In response, we collected stories from people depicting their efforts to seek out information about the events unfolding and to share their perspectives on what happened. Through these in-the-moment stories, we glimpse how the information suppression affected people with varying proximity to the events, including protesters, bystanders, and family members. We also reflect on the benefit of the subtle defiance of storytelling for storytellers in the midst of this social justice effort.
In this paper, I use 'science fiction autoethnography' to reflect on conducting an imaginary, quantitative study. My fictional study is intended to evaluate a real-life artefact: the 'umamimi' robotic horse ears. This physical device provides a backdrop against which my experiences and self-reflections are used to critique quantitative 'hard science'. My own cognitive bias, rigid attachment to a viewpoint, and presumptions (concerning anticipated results) all provide the real story. When I conduct an imaginary study, what do the process and its speculative results say about my autobiographical story and both the object's and subject's broader societal and cultural meanings?
Drawing on philosophies of embodied, distributed, and extended cognition, this paper argues that the mind is readable from sensors worn on the body and embedded in the environment. It contends that past work in HCI has already begun such work, introducing the term models of minds to describe it. To those who wish to develop the capacity to build models of minds, we argue that notions of the mind are entangled with the technologies that seek to sense it. Drawing on the racial and gendered history of surveillance, we advocate for future work on how models of minds may reinforce existing vulnerabilities, and create new ones.
Non-binary people are rarely considered by technologies or technologists, and are often subsumed under binary trans experiences on the rare occasions when we are discussed. In this paper we share our own experiences and explore potential alternatives - utopias, impossible places - as our lived experience of technologies' obsessive gender binarism seems near-insurmountable. Our suggestions on how to patch these gender bugs appear trivial while at the same time revealing seemingly insurmountable barriers. We illustrate the casual violence technologies present to non-binary people, as well as the ongoing marginalisations we experience as HCI researchers. We write this paper primarily as an expression of self-empowerment that can function as a first step towards raising awareness of the complexities at stake.
The ethical implications of algorithmic systems have been much discussed in both HCI and the broader community of those interested in technology design, development and policy. In this paper, we explore the application of one prominent ethical framework, Fairness, Accountability, and Transparency (FAT), to a proposed algorithm that resolves various societal issues around food security and population ageing. Using various standardised forms of algorithmic audit and evaluation, we drastically increase the algorithm's adherence to the FAT framework, resulting in a more ethical and beneficent system. We discuss how this might serve as a guide to other researchers or practitioners looking to ensure better ethical outcomes from algorithmic systems in their line of work.
HCI interventions often fall short of delivering lasting impact in resource-constrained contexts. We reflect on a project in which we followed the "right" steps of needs-based, human-centered design, yet failed to deliver impact to the community. We introduce a framework that evaluates an intervention's potential for sustainable impact in terms of how far it maximizes use of assets in the community and minimizes novelty. We propose assets-based design as an approach that starts with what a community has, leveraging those assets in a design, as opposed to a needs-based approach that focuses on adding what a community lacks.
Don't ignore this because it's about speech technology. VUIs (voice user interfaces) won a best paper in CHI 2018. Did that get your attention? Good. Siri, Ivona, Google Home, and most speech synthesis systems have voices which are based on imitating a neutral citation style of speech and making it sound natural. But, in the real world, darling, people have to act, to perform! In this paper we will talk about speech synthesis as performance, why the uncanny valley is a bankrupt concept, and how academics can escape from studying corporate speech technology as if it's been bestowed by God.
Ubiquitous computing is leading to ubiquitous sensing. Sensor components such as motion, proximity, and biometric sensors are increasingly common features in everyday objects. However, the presence and full capabilities of these components are often not clear to users. Sensor-enhanced objects have the ability to perceive without being perceived, which reduces users' ability to control how and when they are being sensed. To address this imbalance, this project identifies the need to be able to deceive 'smart' objects, and proposes a number of practical interventions to increase user awareness of sensors and encourage agency over digital sensing through acts of dishonesty to objects.
This paper challenges the position that design is a future-oriented discipline, and instead turns an eye to the past as potential material for re-design. We claim that what we call 'the past' is far from static, monolithic, and immutable; rather, it is subjective, fluid, and constantly renegotiated. People constantly engage in re-designing the past by re-elaborating, reckoning, and plainly forgetting. The rewriting of the past, such as in historical revisionism, is often seen as an attempt to wipe out, and hence re-inscribe and perpetuate, injustice, oppression, and even genocide. With this paper we call for more courage to take ownership of the past as something malleable, to take responsibility for it, and in so doing to open up design opportunities to a plurality of voices.
The past decade has seen a welcome rise in critical reflection in HCI [29,13,3,19,20,21]. But the use of manifestos - not to promote but to provoke - is still rare in comparison to more established disciplines. Digital activism has given new life to the manifesto, and the manifesto may in turn give new life to CHI - prompting new ideas by temporarily liberating scholars from the confines of careful speech and rational argument. We present a manifesto for manifestos; a chance for the CHI community to question its status quo and dream of its possible futures using our purpose-built authoring tools.
This consolidation of 18 stories from students and researchers of human-centered computing (HCC) represents some of the diverse shades of feminism present in our field. These stories, our stories, reflect how we see the world and why, also articulating the change we wish to bring.
Recent exposures of extant and potentially discriminatory impacts of technological advancement have prompted members of the computing research field to reflect on their duty to actively predict and mitigate negative consequences of their work. In 2018, Hecht et al. proposed changes to the peer-review process attending to the computing research community's responsibility for impacts on society. In requiring researchers and reviewers to expressly consider the positive and negative consequences of each study, the hope is that our community can earnestly shape more ethical innovation and inquiry. We question whether most researchers have sufficient historical context and awareness of activist movements to recognize crucial impacts on marginalized populations. Drawing from the work of feminist theorists and critical disability scholars, we present case studies in leveraging "situated knowledges" in the analysis of research ethics.
Dichotomous inference is the classification of statistical evidence as either sufficient or insufficient. It is most commonly done through null hypothesis significance testing (NHST). Although predominant, dichotomous inferences have proven to cause countless problems. Thus, an increasing number of methodologists have been urging researchers to recognize the continuous nature of statistical evidence and to ban dichotomous inferences. We wanted to see whether they have had any influence on CHI. Our analysis of CHI proceedings from the past nine years suggests that they have not.
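To make the paper's distinction concrete, here is an illustrative Python sketch (not from the paper) that reports the same simulated comparison twice: once as a dichotomous NHST verdict, and once as the continuous evidence the methodologists advocate, an interval estimate of the effect:

```python
# Illustrative sketch: dichotomous inference (NHST) versus interval
# estimation on the same simulated two-condition comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(5.0, 1.0, 30)  # e.g., task times (s), condition A
b = rng.normal(4.6, 1.0, 30)  # condition B

# Dichotomous inference: collapse the evidence to a binary verdict.
t, p = stats.ttest_ind(a, b)
print("NHST verdict:", "significant" if p < 0.05 else "not significant")

# Continuous alternative: estimate the mean difference with a 95% CI.
diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
lo, hi = stats.t.interval(0.95, df=len(a) + len(b) - 2, loc=diff, scale=se)
print(f"Estimate: diff = {diff:.2f}s, 95% CI [{lo:.2f}, {hi:.2f}]")
```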
Within the fields of HCI and game design, conventional design practices have been criticised for perpetuating the status quo and marginalising users beyond the norm [11], [1], e.g. through genderized assumptions about user interaction [13]. To solve this problem of conservatism in HCI, one recommended strategy has been queering: the use of mischievous, spaceful, and oblique design principles [13]. This contribution focuses on the conventional computer mouse within videogames as an example of a conventional input device optimised for a limited set of interactions. The article first reviews HCI discourses on the mouse within technology studies, game culture, and queer game studies. In these three domains, the mouse has been consistently reduced to its functionality as a high-precision point-and-click device, constructing it as conservative and seemingly hard-wired to cater to male-centred pleasures. We then discuss three experimental game design strategies to queer the mouse controller in The Undie Game, a cooperative wearable mouse-based installation game by the Copenhagen Game Collective. The Undie Game speculates about ways to confront and disrupt conventional expectations about gaming by fa-"silly"-tating interaction for two players who wear a mouse controller in their panties and collectively steer a 3D high-definition tongue on screen to achieve a mutual high score. By creating a social, silly, and potentially daunting play experience, The Undie Game reinterprets the affordances of the computer mouse to bring subjects like consent, failure, and ambiguity into the picture.
Recent years have seen a dramatic increase in HCI research on the use of technology in spiritual practices and environments. Some of these works cover spiritual/transcendent experiences associated with these contexts, but strikingly few of them describe in any way the experiences they studied or aimed to support, let alone give definitions of the terms they use for those experiences. Even fewer papers cite any literature on the relevant experiences. We have to ask: How do the authors understand the experiences their work is aiming to observe, invite, or support? How do they know when and whether they have observed, invited, or supported the kinds of experiences they target? How do they know what they are studying? This paper discusses the presence and absence of definitions of terms for spiritual/transcendent experiences in HCI research, and of citations of relevant literature. It speculates about possible reasons for the oversight, proposes some definitions aimed at filling the gap, and suggests an approach to operationalizing some of the proposed definitions.
This reflective essay documents an attempt to design self-tracking technologies for menopause. The process culminated in the decision not to design. The contribution of this essay is the knowledge produced through reflecting on inaction. From an investigation into current examples, it became clear that applying self-tracking to menopause was fundamentally inappropriate. These technologies were also found to risk doing more harm than good, both in essentializing and medicalizing a non-medical process, and in perpetuating notions of the menopausal transition as a negative bodily experience.
Today's mainstream Human-Computer Interaction (HCI) research primarily addresses functional concerns - the needs of users, practical applications, and usability evaluation. Tangible Bits and Radical Atoms are driven by vision and carried out with an artistic approach. While today's technologies will become obsolete in one year, and today's applications will be replaced in 10 years, true visions - we believe - can last longer than 100 years. Tangible Bits [3, 4] seeks to realize seamless interfaces between humans, digital information, and the physical environment by giving physical form to digital information and computation, making bits directly manipulatable and perceptible both in the foreground and background of our consciousness (peripheral awareness). Our goal is to invent new design media for artistic expression as well as for scientific analysis, taking advantage of the richness of human senses and skills we develop throughout our lifetime interacting with the physical world, as well as the computational reflection enabled by real-time sensing and digital feedback. Radical Atoms [5] leaps beyond Tangible Bits by assuming a hypothetical generation of materials that can change form and properties dynamically, becoming as reconfigurable as pixels on a screen. Radical Atoms is the future material that can transform its shape, conform to constraints, and inform the users of their affordances. Radical Atoms is a vision for the future of Human-Material Interaction, in which all digital information has a physical manifestation, thus enabling us to interact directly with it.
After four decades of practice, User Experience design has reached a maturity level integral to the success of every business venture. Whether the product or service provided competes in the consumer, enterprise or medical sector, UX quality is known to directly impact effectiveness, efficiency and satisfaction, the combination of which determines consumer acceptance. However, great design alone is not sufficient to achieve meaningful impact. Products with high usability lab ratings have been rejected in the crucible of real-life usage because they don't add sufficient value for either the consumer or the company that delivers them to market. The failure of these so-called "great designs" reduces them at best to museum or portfolio pieces. True impact is only achieved when the designed artifact reaches a critical level of market adoption. The service benchmarks today are Facebook with over two billion active users and Google with 1.2 trillion searches a year. Achieving significant market adoption is difficult. It requires not only delightfully fulfilling users' needs but also a UX strategy and design optimization to fit corporate business models and marketing channels, both characterized by substantial financial risk. If there is no ROI for the product, then by association there is no ROI for design or the UX team itself. UX earns a "seat at the table" by simultaneously delivering value for both the business and the user. Owning the business of UX brings strategy and management challenges. Mastering them can bring UX to corporate parity with the more established engineering and marketing professions.
Science and design should be relevant and accessible to everyone. HCI has a long history of service, engagement, and connection with people at higher risk for educational, physical, and social challenges. Previous winners of this award are at the forefront of those efforts towards support for these diverse and often vulnerable populations. On this 15th anniversary of the award, we have a moment to reflect on these advances. Increasingly, the CHI community is connected to our worlds outside of research and scholarship. However, we see that connection is not enough. We must instead seek true engagement. What does it mean to be a true partner, to take small steps to increase engagement in projects, in designs, and in scholarly work? Building on an existing ethos of service, three years ago the CHI community undertook an effort to positively impact the cities we visit: Day of Service. This step is just one in a long line of efforts on the part of a responsible, committed group of scholars to leave this world better than we found it. However, these efforts also represent some of the challenges of our own privilege. How can we go beyond service to true collaboration? How can we bring to bear our vast resources while listening to the community and valuing their expertise and lived experiences? Finally, the CHI community has always been a place of greater diversity than some similar and surrounding academic communities. Recent efforts have focused on expanding that diversity further still: diversity of thought, diversity of experience and physical bodies, and diversity across racial, ethnic, and gender boundaries. What happens once we have recruited this diverse community? How do we ensure long-term inclusion in all activities and in the highest levels of leadership? The CHI Social Impact Award is an incredible honor and the talk an excellent platform. In this talk, I will reflect alongside the community. I will describe research focused on empowering people who are not typically represented in the design process, as well as the requisite inclusive and democratic approaches to design. In particular, I will focus on the ways in which thought and action are deeply intertwined [2] and the generation of knowledge through participatory cycles of action and reflection [1]. I will also go beyond these specific research projects and practices to discuss the progress of the CHI community as a whole and our work to create an environment that is engaged, collaborative, and inclusive.
A 'data-driven life' has become an established feature of present and future technological visions. This dissertation interrogates the human experience of a data-driven life by conceptualising, investigating, and speculating about personal informatics tools as new technologies of memory. I argue that the prevalence of quantified data and metrics is creating fundamentally new and distinct records of everyday life: a 'quantified past'. To address this, I conducted qualitative and idiographic fieldwork - with long-term self-trackers, and subsequently with users of 'smart journals' - to investigate how this data-driven record mediates the experience of remembering. Further, I undertook a speculative and design-led inquiry to explore the context of a 'quantified wedding'. Adopting a context where remembering is centrally valued, this Research through Design project demonstrated opportunities for the design of data-driven tools for remembering. Crucially, while speculative, this project maintained a central focus on individual experience, and introduced an innovative methodological approach, 'Speculative Enactments', for engaging participants meaningfully in speculative inquiry.
Text input methods are an integral part of our daily interaction with digital devices. However, their design poses a complex problem: for any method, we must decide which input action (a button press, a hand gesture, etc.) produces which symbol (e.g., a character or word). With only 26 symbols and input actions, there are already more than 10^26 distinct solutions, making it impossible to find the best one through manual design. Prior work has shown that we can use optimization methods to search such large design spaces efficiently and automatically find a good user interface with respect to the given objectives [6]. However, work in the text entry domain has been limited mostly to the performance optimization of (soft-)keyboards (see [2] for an overview). The Ph.D. thesis [2] advances the field of text-entry optimization by enlarging the space of optimizable text-input methods and proposing new criteria for assessing their optimality. Firstly, the design problem is formulated as an assignment problem for integer programming. This enables the use of standard mathematical solvers and algorithms for efficiently finding good solutions. Then, objective functions are developed for assessing optimality with respect to motor performance, ergonomics, and learnability. The corresponding models extend beyond interaction with soft keyboards, to consider multi-finger input, novel sensors, and alternative form factors. In addition, the thesis illustrates how to formulate models from prior work in terms of an assignment problem, providing a coherent theoretical basis for text entry optimization. The proposed objectives are applied in the optimization of three assignment problems: text input with multi-finger gestures in mid-air [8], text input on a long piano keyboard [4], and - for a contribution to the official French keyboard standard - input of special characters via a physical keyboard [3]. Combining the proposed models offers a multi-objective optimization approach able to capture the complex cognitive and motor processes during typing.
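As a rough illustration of the assignment-problem formulation (with made-up letter frequencies and slot times, and a plain linear objective rather than the thesis's integer programs for motor performance, ergonomics, and learnability), one can sketch it as follows:

```python
# Illustrative sketch: assign 26 letters to 26 key slots so that the
# expected time cost (letter frequency x slot movement time) is minimal.
import numpy as np
from scipy.optimize import linear_sum_assignment

letters = [chr(ord("a") + i) for i in range(26)]
freq = np.random.default_rng(1).dirichlet(np.ones(26))  # placeholder letter frequencies
slot_time = np.linspace(0.20, 0.60, 26)                 # placeholder time per slot (s)

cost = np.outer(freq, slot_time)          # cost[i, j]: letter i placed on slot j
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
layout = {letters[i]: int(j) for i, j in zip(rows, cols)}
print("expected cost per character:", cost[rows, cols].sum())
```

The solver pairs frequent letters with fast slots, which is exactly the intuition behind performance-optimized layouts; the thesis's richer objectives add further terms to the same assignment structure.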
Computers are now ubiquitous. However, computers and digital content have remained largely separate from the physical world - users explicitly interact with computers through small screens and input devices, and the "virtual world" of digital content has had very little overlap with the practical, physical world. My thesis work is concerned with helping computing escape the confines of screens and devices, spilling digital content out onto the physical world around us. In this way, I aim to help bridge the gap between the information-rich digital world and the familiar environment of the physical world and allow users to interact with digital content as they would ordinary physical content. I approach this problem from many angles: from the low-level work of providing high-fidelity touch interaction on everyday surfaces, easily transforming these surfaces into enormous touchscreens, to high-level questions surrounding the interaction design between physical and virtual realms. To achieve this end, building on my prior work, I developed two physical embodiments of this new mixed-reality design: a tiny, miniaturized projector and camera system providing the hardware basis for a projected on-world interface, and an augmented-reality head-mounted display modified to support touch interaction on arbitrary surfaces.
The way people participate in decision making has radically changed over the last few decades. Technology has facilitated the sharing of knowledge, ideas and opinions across social structures and has allowed grass-roots initiatives to flourish. Participatory civic technology has helped local communities to embrace civic action on matters of shared concern. In this case study, we describe SENSEI, a year-long participatory sensing movement. Local community organisations, decision makers, families, individuals and researchers worked together to co-create civic technologies to help them address environmental issues of shared interest, such as invasive plant species, abandoned items in the forests and nice places. Over 240 local participants took part in the different stages of this year-long process, which included ten community events and workshops. As a result, over a hundred concrete ideas about issues of common interest were generated and nearly thirty civic tech prototypes were designed and developed, along with hundreds of environmental observations. In this paper, we describe the process of orchestrating this initiative and present key reflections from it.
In previous work, we developed the theoretical concept of Critical Experience and the Participatory Evaluation with Autistic ChildrEn (PEACE) method. We grounded both in a series of separate case studies, which allowed us to understand how to gather more and richer insights from the children than previously. This is crucial for child-led research projects. In this paper, we present additional cases in more detail which demonstrate the applicability of our concept of Critical Experience to cases in which PEACE was used. This provides new insights into how Critical Experience handles child-led evaluation strategies and how it can be applied and potentially transferred to different contexts, guiding other researchers and practitioners in evaluating participatory processes.
HCI and the tech industry are increasingly interested in designing products that afford meaningful user experiences. Yet while several metrics of meaningfulness have been suggested, their utility and relevance for industry are unclear. We conducted workshops with 9 welfare technology companies, presenting them with different metrics from existing literature in HCI, psychology, and industry so they could evaluate their products and consider how relevant designing for meaningfulness is to their practice. We point to four metrics which companies considered particularly relevant, and suggest that further defining metrics of meaningfulness in HCI would benefit both academia and industry.
As research methods evolve to provide a voice to understudied, distributed communities, we explore our experiences running and analyzing Asynchronous Remote Communities (ARC) studies. Our experiences stem from four separate Facebook-based ARC studies with people who experience: rare disease, pregnancy, miscarriage, or HIV. We delve into these studies' methods, and present updated guidelines focused on improved study design, data collection, and analysis plans for ARC studies.
This case study describes a game designed to serve as a news literacy education tool and a playful polling system for researching audience perceptions. The game underwent two primary design iterations. As a result of design changes and renewed political chatter about fake news, the game's second iteration gathered more than 500,000 plays. The data collected reveals useful patterns in understanding news literacy and the perception of play experiences. Data from more than 45,000 players indicates that the older the person, the better they are at identifying fake news, until the approximate age of 70. It also indicates that higher education correlates with better performance at distinguishing real news from fake, although the time it takes to do so varies. This case study demonstrates the potential for such game designs to collect data useful to non-game contexts.
With the rise of chronic diseases as the number one cause of death and disability among urban populations, it has become increasingly important to design for healthy environments. There is, however, a lack of interdisciplinary approaches and solutions to improve health and well-being through urban planning and design. This case study offers an HCI solution and approach to design for healthy urban structures and dynamics in existing neighborhoods. We discuss the design process and design of ROOT, an interactive lighting system that aims to stimulate walking and running through supportive, collaborative and social interaction. We exemplify how multidisciplinary HCI approaches in a hackathon setting can contribute to real-life urban health challenges. This case study concludes that the experimental and collaborative nature of a hackathon facilitates the rapid exchange of perspectives and fosters interdisciplinary research and practice in urban planning and design.
Individuals may use their wheelchair to play VR games, explore three-dimensional visual worlds and take part in virtual social events, even if they do not master the hand or head inputs that are common for VR. We present the development of a low-cost, do-it-yourself wheelchair locomotion device, which allows navigation in VR. More than 50 people, including 9 wheelchair users, participated in the evaluations of three prototypes and a number of games developed for them. Initially, cybersickness turned out to be a problem, but when we changed from a manual to an electric wheelchair and fine-tuned the controls, this discomfort was markedly reduced. We suggest using this device for gaming, training, interaction design, accessibility studies and the operation of robots.
We conducted an in situ study of six households in domestic and driving situations in order to better understand how voice assistants (VA) are used and evaluate the efficiency of vocal interactions in natural contexts. The filmed observations and interviews revealed activities of supervision, verification, diagnosis and problem-solving. These activities were not only costly in time, but they also interrupted the flow in the inhabitants' other activities. Although the VAs were expected to facilitate the accomplishment of a second, simultaneous task, they in fact were a hindrance. Such failures can cause abandonment, but the results nevertheless revealed a paradox of use: the inhabitants forgave and accepted these errors, while continuing to appropriate the vocal system.
Creative generative machine learning interfaces are stronger when multiple actors bearing different points of view actively contribute to them. User experience (UX) research and design involvement in the creation of machine learning (ML) models helps ML research scientists to more effectively identify human needs that ML models will fulfill. The People and AI Research (PAIR) group within Google developed a novel program method in which UXers are embedded into an ML research group for three months to provide a human-centered perspective on the creation of ML models. The first full-time cohort of UXers was embedded in a team of ML research scientists focused on deep generative models to assist in music composition. Here, we discuss the structure and goals of the program, challenges we faced during execution, and insights gained as a result of the process. We offer practical suggestions for how to foster communication between UX and ML research teams and recommended UX design processes for building creative generative machine learning interfaces.
Young children increasingly interact with voice-driven interfaces, such as conversational agents (CAs). The social nature of CAs makes them good learning partners for children. We have designed a storytelling CA to engage children in book reading activities. This case study presents the design of this CA and investigates children's interactions with and perception of the CA. Through observation, we found that children actively responded to the CA's prompts, reacted to the CA's feedback with great affect, and quickly learned the schema of interacting with a digital interlocutor. We also discovered that the availability of scaffolding appeared to facilitate child-CA conversation and learning. A brief post-reading interview suggested that children enjoyed their interaction with the CA. Design implications for dialogic systems for young children's informal learning are discussed.
The Tesserae project investigates how a suite of sensors can measure workplace performance (e.g., organizational citizenship behavior), psychological traits (e.g., personality, affect), and physical characteristics (e.g., sleep, activity) over one year. We enrolled 757 information workers across the U.S. and measured heart rate, physical activity, sleep, social context, and other aspects through smartwatches, a phone agent, beacons, and social media. We report challenges that we faced with enrollment, privacy, and incentive structures while setting up such a long-term multimodal large-scale sensor study. We discuss the tradeoffs of remote versus in-person enrollment, and show that directly paid, in-person enrolled participants are more compliant overall than remotely enrolled participants. We find that providing detailed information regarding privacy concerns up-front is highly beneficial. We believe that our experiences can benefit other large sensor projects as this field grows.
Social media serves as a platform to share thoughts and connect with others. The ubiquitous use of social media also enables researchers to study human behavior, as the data can be collected in an inexpensive and unobtrusive way. Not only does social media provide a passive means to collect historical data at scale, it also functions as a "verbal" sensor, providing rich signals about an individual's social ecological context. This case study introduces an infrastructural framework to illustrate the feasibility of passively collecting social media data at scale in the context of an ongoing multimodal sensing study of workplace performance (N=757). We study our dataset in its relationship with demographic, personality, and wellbeing attributes of individuals. Importantly, as a means to study selection bias, we examine what characterizes individuals who choose to consent to social media data sharing vs. those who do not. Our work provides practical experiences and implications for researchers in the HCI field who seek to conduct similar longitudinal studies that harness the potential of social media data.
We present a system-initiative multimodal speech-based dialogue system for the Mini-Mental State Examination (MMSE). The MMSE is a questionnaire-based cognitive test, which is traditionally administered by a trained expert using pen and paper and afterwards scored manually to measure cognitive impairment. By using a digital pen and speech dialogue, we implement a multimodal system for the automatic execution and evaluation of the MMSE. User input is evaluated and scored in real-time. We present a user experience study with 15 participants and compare the usability of the proposed system with the traditional approach. Our experiment suggests that both modes perform equally well in terms of usability, but the proposed system has higher novelty ratings. We compare assessment scorings produced by our system with manual scorings made by domain experts.
This study examines the participatory design process of a virtual reality (VR) reentry training program with a women's prison. Conceptually drawing on previous work in VR exposure training, this prototype consists of guided, first-person 3D-360° video episodes that depict psychologically stressful situations that women commonly face when returning home. Critical story and production elements, including screenplay, acting, and narration, were created with incarcerated and formerly incarcerated women. The institutional, technological, and cultural restrictions of prison, combined with the tensions of making media with often exploited groups, forced adaptations of participatory design methods. The inclusion of incarcerated female voices resulted in an immersive narrative that reflects this group's specific challenges. The next phase is to evaluate its efficacy against non-immersive comparative trainings for reentry-related anxieties.
Online discussion platforms can face multiple challenges of abusive behaviour. In order to understand why such behaviour persists, we need to understand how users behave inside and outside a community. In this paper, we propose a novel methodology to generate a dataset from offline and online group discussion conversations. We advocate an empirical approach to exploring the space of abusive behaviour. We conducted a user study (N = 15) to understand what factors facilitate or amplify forms of behaviour in online conversation that are less likely to be tolerated face-to-face. The preliminary analysis validates our approach to analysing large-scale conversation datasets.
While traditional live broadcasting is typically comprised of a handful of well-defined workflows, these become insufficient when targeting multiple screens and interactive companion devices on the viewer side. In this case study, we describe the development of an end-to-end system enabling immersive and interactive experiences using an object-based broadcasting approach. We detail the deployment of this system during the live broadcast of the FA Cup Final at Wembley Stadium in London in May 2018. We also describe the trials and interviews we ran in the run-up to this event, the infrastructure we used, the final software developed for controlling and rendering on-screen graphics, and the system for generating and configuring the live broadcast objects. In this process, we learned through an ethnographic study about the workflows inside an outside-broadcast (OB) truck during live productions, and about the challenges involved in running an object-based broadcast over the Internet, which we discuss alongside other gained insights.
We report on the design and implementation of a 3-week long summer academy introducing high school students to 3D modeling and 3D printing experiences. Supporting youth in developing 3D modeling knowledge can enhance their capacity to effectively use an array of emerging technologies such as Virtual Reality, Augmented Reality, and digital fabrication. We used tools and practices from both formal and informal education, such as storylining, to inform the design of the curriculum. We collected data through surveys, artifacts, observations, screen recordings, and group videos. Our findings suggest that (1) emphasizing curricular coherence as a design goal and (2) providing youth with multiple avenues for engaging in 3D modeling can help to: spark youth interest in 3D printing/modeling, maintain youth engagement in learning activities over the course of several weeks, and provide youth with opportunities to develop their spatial thinking skills.
With the rise of automated vehicles, a new road user had to be designed: an autonomous system that needs to integrate into an ecosystem of human-human interaction. Traditionally, automotive UX has focused on the interaction between the driver and the vehicle. This new design challenge, however, comprised a change of perspective from driver/inside to road user/outside, and from a system that is steered by a human being to an intelligent system that proactively makes decisions in a public space. A new approach was necessary to handle this change of perspective in the design process and to instill it into the heads of the stakeholders. We modified a user-centered process to meet the challenge of designing the automated vehicle as a social actor. For example, we designed for acceptance by defining a character based on the hopes and concerns of the public. The flow of communication was analyzed, and intent-based visual and acoustic signals were designed and evaluated in a purpose-built simulator. The lessons we learned from this process might also be applicable to the design of other autonomous, public-facing systems.
Equipment shortages in Africa undermine Science, Technology, Engineering and Mathematics (STEM) Education. We have pioneered the LabHackathon (LabHack): a novel initiative that adapts the conventional hackathon and draws on insights from the Open Hardware movement and Responsible Research and Innovation (RRI). LabHacks are fun, educational events that challenge student participants to build frugal and reproducible pieces of laboratory equipment. Completed designs are then made available to others. LabHacks can therefore facilitate the open and sustainable design of laboratory equipment, in situ, in Africa. In this case study we describe the LabHackathon model, discuss its application in a pilot event held in Zimbabwe and outline the opportunities and challenges it presents.
Artificial Intelligence (AI) assistants have been a hot topic for a few years. Popular solutions - such as Google Assistant, Microsoft's Cortana, Apple's Siri, and Amazon Alexa - are becoming resourceful AI assistants for general users. Apart from some mishaps, those assistants have a successful history of supporting people's everyday tasks. The same cannot be said of industry-specific scenarios, in which AI assistants are still a bet. Companies combining AI with human expertise and experience can stand out in their industry. This is particularly important for industries that base their strategic decision-making processes on knowledge workers' actions. More than just another system, AI assistants are new players in human-computer interaction. But how and when should an AI assistant interfere in a knowledge worker's task? In this paper, we present findings from a case study using the Wizard of Oz approach in an oil and gas company. Our findings begin to answer that question by showing what kind of interference knowledge workers in that domain would accept from an AI assistant.
Potential negative outcomes of machine learning and algorithmic bias have gained deserved attention. However, there are still relatively few standard processes to assess and address algorithmic biases in industry practice. Practical tools that integrate into engineers' workflows are needed. As a case study, we present two efforts to create tools that teams can use in practice to address algorithmic bias. Both intend to increase understanding of data, models, and outcome measurement decisions. We describe the development of 1) a prototype checklist based on existing literature frameworks; and 2) dashboarding for quantitatively assessing outcomes at scale. We share both technical and organizational lessons learned on checklist perceptions, data challenges and interpretation pitfalls.
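As a hypothetical illustration of the kind of quantitative check such a dashboard might surface (not the authors' tooling), the sketch below computes per-group outcome rates and their largest gap, a simple demographic-parity style measure:

```python
# Hypothetical sketch: per-group outcome rates and the largest gap
# between groups (a simple demographic-parity style check).
from collections import defaultdict

records = [  # placeholder (group, positive_outcome) pairs
    ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]
totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
```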
In late 2017, Uber was nearly a year into a complete redesign of its driver-facing mobile app. This case study describes the research program we executed to support the app's global beta launch, which aimed to "Build Together" with drivers across different geographies. With the goal of minimizing the time-space-cognitive distance between beta drivers and the product team, we deployed researchers in 7 cities for a 3-week research sprint, combining four high-touch ethnographic methods to understand drivers' reaction to the product. Unusually, we used an internal Google+ social media site to post a continual stream of raw, unsynthesized "atomic evidence" from research activities. The G+ unexpectedly went viral, creating extremely high engagement, impact, and stakeholder sentiment. Here we discuss the pros, cons, and impact of our approach, and also how success came from creating space for others to create, engage with, and act on raw evidence from the field.
With growing concern for the intimate dimensions of technology development, HCI scholars have begun to grapple with who wields power in design around the body. However, beyond menstruation, few studies have sought to examine the role of technology design in the later stages of life for menstruating people. This paper considers menopause as a central but overlooked life phase informing the design of future intimate technologies. We review empirical analyses of menopausal experiences and design provocations that emerged from our work. We end with a reflection on the opportunities and pitfalls around menopause design.
Measuring user experience (UX) is an important part of the design process, yet there are few methods to evaluate UX in the early phases of product development. We introduce Triptech, a method used to quickly explore novel product ideas. We present how it was used to gauge the frequency and importance of user needs, to assess the desirability and perceived usefulness of design concepts, and to draft UX requirements for Now Playing-an on-device music recognition system for the Pixel 2. We discuss the merits and limitations of the Triptech method and its applicability to tech-driven innovation practices.
Understandings of user-centered design incorporate the need to include users and stakeholders in the design process from early on, employing visual and 'enactment' principles and approaches. Virtual Reality (VR) and 3D visualizations offer such opportunities for enhanced 'enactments' of proposed designs through immersion. Within the PASSME H2020 European project, 3D design visualizations of novel concepts for an airport interior were developed and tested early on with users to identify the best interior design principles among the alternatives considered, with the aim of reducing passenger stress and waiting times and improving overall Passenger Experience (PAX). Using the potential of VR, concepts were tested with users, and we identified passengers' emotional and design-driven responses to boarding gate and lounge visualizations to inform the iterative development of in-situ passenger-centric interventions. We elicited emotional, practical and operational needs and requirements for improving PAX within airports, and found that users can use VR to imagine interaction scenarios with the proposed designs.
This paper presents a method for the hands-on creation of data comics in a workshop context and includes a description of the results, lessons learned and future improvements. Data comics is a promising format for data-driven storytelling, leveraging the power of data visualization and visual storytelling with comics. However, authoring data comics requires a diverse range of skills that are both creative and analytical. Our workshop is aimed at developing a blueprint for future workshops and reflecting on challenges and potential improvements. Within a 3-week assignment for an illustration class, we ran three 3-hour sessions. Our design was informed by the experiences of previous data-comics workshops. Results show the creative potential of data comics. Challenges to learn from the workshop include when to introduce data visualizations and journalistic narratives, how to structure stories, and how to develop iterations of comic drafts. We close by reflecting on these challenges and how they can inform future improvements and adaptations.
The present case study describes and comments on an experimental activity with 9-11-year-old children of a public school in Lecce (Italy) in August 2018. The pupils were asked to create computational tools using materials recycled from their own homes. We adopted a constructionist perspective: we wanted to foster reflection and discussion among the young participants on the amount of household waste produced and how it can be repurposed for creating novel objects. To achieve this aim, we were guided by collapse informatics theory and research through design.
A long-term summative evaluation program was undertaken at Microsoft. This program focused on generating a Single Usability Metric (SUM) score across products over time but encountered a number of issues including error rate reliability, challenges establishing objective time-on-task targets, and scale anchoring. These issues contributed to making SUM difficult to communicate, prompting exploration of an alternative single usability metric using simple thresholds developed from anchor text and inter-metric correlations.
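For orientation, a SUM-style composite can be sketched as below. This is a simplified, hypothetical rendering of the general idea behind SUM (standardizing completion, errors, time, and satisfaction into one score), not Microsoft's implementation; all targets and spreads are invented:

```python
# Hypothetical sketch of a SUM-style composite: standardize each
# component against a specification target, then average (higher = better).
import statistics

def z_below_spec(value, spec, sd):
    # For "lower is better" metrics (time, errors): distance under the limit.
    return (spec - value) / sd

time_z = z_below_spec(value=62.0, spec=75.0, sd=18.0)  # task time (s)
error_z = z_below_spec(value=0.7, spec=1.0, sd=0.5)    # errors per task
completion_z = (0.85 - 0.78) / 0.10                    # rate vs. benchmark
satisfaction_z = (5.6 - 4.0) / 1.0                     # rating vs. midpoint

sum_score = statistics.mean([time_z, error_z, completion_z, satisfaction_z])
print(f"SUM-style score (z units): {sum_score:.2f}")
```

Even this toy version shows where the reported difficulties arise: every component needs a defensible target and spread, which is exactly the problem the program encountered with error-rate reliability and objective time-on-task targets.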
Data on how humans perceive the attractiveness of their (urban) environments has mainly been gathered with qualitative methods, including workshops, interviews and group discussions. Qualitative methods help us to understand the phenomenon, albeit at the cost of detailed information. We may end up confirming something that we as researchers have been 'programmed' to get as a result. Here we take a complementary approach, having collected eye-tracking data from two case experiments. The participants in these experiments were professional urban planners and non-professionals, respectively. We asked them to view planning-related artefacts comprising architectural illustrations, photographed landscapes and planning sketches. After analysing the findings from these experiments, we draw guidelines for using eye tracking in urban planning processes to gather human perceptions of attractive urban environments.
This case study presents "TechShops", a collaborative workshop-based approach to learning about technologies with Young Adults with Intellectual Disability (YAID) in exploratory design research. The "TechShops" approach emerged because we found it difficult to engage YAID in traditional contextual interviews. Hence, we offered a series of "TechShops", which we found useful in: enabling engagement with participants, their families and support staff; fostering relationships; and gaining research access. We explain the context of "TechShops", and reflect upon the opportunities and challenges that the approach offers for both researchers and YAID in exploratory design research.
Usability tests help us obtain quantitative and qualitative data from real users who perform actual tasks with a product. Usability tests were carried out to evaluate a product designed for a Student Design Competition (SDC). The following document relates the process of adapting usability tests to visually impaired children, who were the target audience of the project. Through interaction with the children, we learned how to help them understand some of the concepts involved in the product faster. This interaction resulted in a reliable device whose characteristics fit directly with users' needs.
This course is for students, practitioners, and academics who are interested in planning their careers. The rapid pace of change leaves some tools and technologies behind. We must focus our attention primarily on current developments. Why look in the rear-view mirror? Some topics that are being actively explored now will soon be of less interest, so career planning benefits from thoughtfully identifying trajectories. To make use of relevant information in other fields you must understand how their terminologies, priorities, and methods evolved. We will cover the history of several HCI fields and discuss opportunities and challenges that lie ahead. Software evolved from passively reacting to human input to today's dynamic partnership. In some areas HCI advanced steadily, elsewhere it reached dead ends or seemed to go in circles. Understanding these patterns will prepare you to respond to unexpected developments in years to come. The forces that shaped HCI in computer science, human factors, information systems, information science, and design are covered, with examples and implications for our new era.
Wearable technologies afford more pervasive access to the human user: higher bandwidth for communication in multiple modalities, and better context-awareness through sensing the user's body and environment. However, stand-alone devices like wristbands and clip-ons are limited in the body areas they can simultaneously access. Clothing and textiles provide a useful platform for distributed systems, but present unique challenges in design and fabrication. This course provides an introduction to the tools, methods, and techniques of designing and fabricating with soft goods, including patternmaking and construction techniques for different material types at multiple scales, and e-textile methods and materials.
Why do some interfaces allow users to find what they need easily while others do not? What information can the visual system effortlessly extract, and what requires slower, more cognitive processes? What does eye-tracking data tell us about what users perceive? Vision scientists have recently made ground-breaking progress in understanding many aspects of vision that are key to good design. A critical determining factor is what information is available at a glance. This course reviews state-of-the-art vision science, including a computational model that visualizes the available information. We will demonstrate use of this model in evaluating and guiding visual designs.
Freehand sketching is a valuable process, input, output, and tool, often used by people to communicate and express ideas, as well as to document, explore and describe concepts among researchers, users, or clients. Sketches are fast, easy to create, and - by varying their fidelity - can be used in all areas of HCI. Sketching in HCI will explore and demonstrate themes around sketching in HCI with the aim of producing tangible outputs. Attendees will leave the course with the confidence to engage actively with sketching on an everyday basis in their research practice.
We base everything that we do as researchers on what we write. For graduate students and young researchers especially, it is hard to turn a research project into a successful CHI publication. This struggle continues for postdocs and young professors trying to author excellent reviews for the CHI community that pinpoint flaws and improvements in research papers. This third edition of the successful CHI paper writing course offers hands-on advice and more in-depth tutorials on how to write papers with clarity, substance, and style. It is structured into three 80-minute units with a focus on writing CHI papers.
The objective of this course is to provide newcomers to Human-Computer Interaction (HCI) with an introduction and overview of the field. Attendees often include practitioners without a formal education in HCI, and those teaching HCI for the first time. This course includes content on theory, cognition, design, evaluation, and user diversity.
In this two-session course, attendees learn how to conduct empirical research in human-computer interaction (HCI). This course delivers an A-to-Z tutorial on designing a user study and demonstrates how to write a successful CHI paper. It would benefit anyone interested in conducting a user study or writing a CHI paper. Only general HCI knowledge is required.
Come learn from Bloomberg UX designers how to apply professional design and presentation skills to your CHI presentation to ensure you make the biggest impact on your audience in the limited time and space you have. In part 1 you will learn how to convey your information and message visually: first by finding the key story you are trying to tell and then using principles of visual hierarchy to make that story pop! In part 2 you will learn how to convey your information orally: effectively getting and keeping your audience's attention so they remember your message. http://www.beproatchi.com/
It is not unusual for empirical scientists, who are often not specialists in statistics, to have only limited trust in the statistical analyses that they apply to their data. The claim of this course is that improved human-computer interaction with statistical methods can be accomplished by providing a simple mental model of what statistics does, and by supporting this model through well-chosen visualizations and interactive exploration. To support this proposed approach, an entirely new program for performing interactive statistics, called ILLMO, was developed. This course will use examples of frequent statistical tasks such as hypothesis testing, linear regression and clustering to introduce the key concepts underlying intuitive and interactive statistics.
This course offers an introduction to ethnography for Human Factors Research. It covers relevant topics along the research process, from arguing for the method and designing the study to data collection, analysis and interpretation. Ethical questions will include the researcher's role(s) in the field and modes of data presentation. The collection of multi-dimensional sets of data - a trademark of high-quality ethnographic work - enables inter-weaving threads of HCI perspectives in complex human factors and user research contexts. To achieve this, a comprehensive toolbox of ethnographic methods is introduced, along with practical hands-on sessions to familiarize participants with these methodological instruments.
This course is a hands-on introduction to interactive electronics prototyping for people with a variety of backgrounds, including those with no prior experience in electronics. Familiarity with programming is helpful, but not required. Participants learn basic electronics, microcontroller programming and physical prototyping using the Arduino platform, then use digital and analog sensors, LED lights and motors to build, program and customize a small paper robot.
This course introduces participants to rapid prototyping techniques for augmented reality and virtual reality interfaces. Participants will learn about both physical prototyping with paper and Play-Doh as well as digital prototyping via new visual authoring tools for AR/VR. The course is structured into four sessions. After an introduction to AR/VR prototyping principles and materials, the next two sessions are hands-on, allowing participants to practice new physical and digital prototyping techniques. These techniques use a combination of new paper-based AR/VR design templates and smartphone-based capture and replay tools, adapting Wizard of Oz for AR/VR design. The fourth and final session will allow participants to test and critique each other's prototypes while checking against emerging design principles and guidelines. The instructor has previously taught the techniques to broad student audiences with a wide variety of non-technical backgrounds, including design, architecture, business, medicine, education, and psychology, who shared a common interest in user experience and interaction design. The course is targeted at non-technical audiences including HCI practitioners, user experience researchers, and interaction design professionals and students. A useful byproduct of the course will be a small portfolio piece of a first AR/VR interface designed iteratively and collaboratively in teams.
Over the last two decades, creative, lean, and strategic design approaches have become increasingly prevalent in the development of interactive technologies, but tensions exist with longer-established approaches such as human factors engineering and user-centered design. These tensions can be harnessed productively by first giving equal status in principle to creative, business, and agile engineering practices, and then supporting this with flexible critical approaches and resources that can balance and integrate across a range of multidisciplinary design practices.
A crucial step in designing a user interface for a software application is to design a coherent, task-focused conceptual model (CM). With a CM, designers design better, developers develop better, and users learn and use better. Unfortunately, this step is often skipped, resulting in incoherent, arbitrary, inconsistent, overly-complex applications that impede design, development, learning, understanding, and use. This course covers what CMs are, how they help, how to develop them, and provides hands-on experience.
Any move towards more ethical design and technologies that genuinely improve our lives requires that those technologies respect our psychological needs. Currently, there is no systematic integration of wellbeing science into tech development, and the many technology-induced harms to mental health, reported in the media daily, attest to this deficit. But the status quo is changing. A demand for more "Humane Technologies" [12] is forcing companies to rethink digital business as usual. Fortunately, recent research has uncovered new ways to make psychologically respectful technologies possible. Just as we can design ergonomically to support physical wellness, we can design psycho-ergonomically to support psychological health. By integrating well-evidenced theory and methods from multiple disciplines, we can design and develop new technologies to "do no harm" and even increase psychological wellbeing [1]. In this course we will introduce frameworks for designing technologies that respect human values and wellbeing [6,7,8,9,10] together with an established ethical framework within which to situate this design for flourishing [11]. We also provide practical tools for ideation, design, and the evaluation of the psychological impact of products.
This course introduces computational methods in human-computer interaction. Computational interaction methods use computational thinking (abstraction, automation, and analysis) to explain and enhance interaction. The course introduces the theory and practice of computational interaction by teaching Bayesian methods for interaction across four areas of wide interest when designing computationally-driven user interfaces: decoding, adaptation, learning, and optimization. The lectures center on hands-on Python programming interleaved with theory and practical examples grounded in problems of wide interest in human-computer interaction.
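As a minimal sketch of the decoding theme (our illustration, not the course's own materials), Bayes' rule can infer which on-screen target a noisy touch was intended for:

```python
# Minimal Bayesian decoding sketch (illustrative; not the course materials):
# infer the intended target from a noisy 1-D touch location via Bayes' rule.
import numpy as np

targets = np.array([100.0, 180.0, 260.0])  # target centers (pixels)
prior = np.array([0.5, 0.3, 0.2])          # e.g., from past selection frequency
sigma = 20.0                               # assumed touch noise (pixels)

def decode(touch_x):
    likelihood = np.exp(-0.5 * ((touch_x - targets) / sigma) ** 2)
    posterior = likelihood * prior
    return posterior / posterior.sum()

print(decode(170.0))  # most posterior mass lands on the second target
```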
UI design rules and guidelines are not simple recipes. Applying them effectively requires determining rule applicability and precedence and balancing trade-offs when rules compete. By understanding the underlying psychology, designers and evaluators enhance their ability to apply design rules. This two-part (160-minute) course explains that psychology.
We are witnessing an increase in fieldwork within HCI, particularly involving marginalized or under-represented populations. This has posed ethical challenges for researchers during such field studies, with "ethical traps" not always identified during planning stages, often aggravated by inconsistent policy guidelines, training, and application of ethical principles. We ground this course in our collective experiences with ethically difficult research, and frame it within principles that are common across many disciplines and policy guidelines, representative of the instructors' diverse and international backgrounds.
Economics provides an intuitive and natural way to formally represent the costs and benefits of interacting with applications, interfaces, and devices. By using economic models, it is possible to reason about interaction and make predictions about how changes to the system will affect performance and behavior. In this course, we provide an overview of relevant economic concepts and then show how economics can be used to model human-computer interaction and generate hypotheses about interaction that can inform design and guide experimentation. As a case study, we demonstrate how various interactions with search and recommender applications can be modeled, before concluding the day with a hands-on modeling session using example and participant problems.
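As a toy example of the kind of model involved (ours for illustration, not the course's own code), an expected-utility rule can decide how far down a ranked result list a searcher should keep inspecting:

```python
# Toy economic model of interaction (illustrative, not the course's code):
# keep inspecting ranked search results while expected benefit exceeds cost.

def should_continue(p_relevant, value_of_hit, cost_per_item):
    return p_relevant * value_of_hit > cost_per_item

for rank in range(1, 11):
    p = 0.8 / rank  # assumed decaying probability that the item is relevant
    if not should_continue(p, value_of_hit=10.0, cost_per_item=2.0):
        print(f"stop after rank {rank - 1}")  # here: stop after rank 3
        break
```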
The epoch of the platform economy has arrived. Companies face the question of how to build disruptive digital services and business models and establish a digital ecosystem. Traditional UCD methods concentrate on the conception of particular services, yet most of these are isolated solutions from single companies. Building services for the digital transformation era requires additional methods for devising end-to-end consumer experiences and sustainable business models that cross the boundaries of single companies to benefit consumers. In this course, participants learn the "Tangible Ecosystem Design" (TED) method, which supports the conception of digital ecosystems using tangible elements.
The objective of this course is to provide an overview of legal issues in HCI. The course will focus on five different areas: accessibility, privacy, intellectual property, telecommunications, and requirements in using human participants in research.
Being able to visualize data in consistent high-quality ways is a useful skill for HCI researchers and practitioners. In this course, attendees will learn how to produce high quality plots and visualizations using the ggplot2 library for the R statistical computing language. There are no prerequisites and attendees will leave with scripts to get them started as well as foundational knowledge of free open-source tools that they can build on to produce complex, even interactive, visualizations.
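The course itself is R-based; as an analogous taste of the grammar-of-graphics style it teaches (our illustration, not course material), here is the same idiom in Python via plotnine, a port of ggplot2:

```python
# Grammar-of-graphics sketch via plotnine, a Python port of ggplot2
# (the course uses R's ggplot2; this is only an analogous illustration).
import pandas as pd
from plotnine import ggplot, aes, geom_point, geom_smooth, labs

df = pd.DataFrame({
    "task_time": [12.1, 9.8, 11.4, 14.0, 12.7, 15.1],
    "errors":    [1, 0, 1, 3, 2, 4],
    "condition": ["A", "A", "A", "B", "B", "B"],
})

plot = (
    ggplot(df, aes(x="task_time", y="errors", color="condition"))
    + geom_point(size=3)
    + geom_smooth(method="lm", se=False)  # linear trend per condition
    + labs(x="Task time (s)", y="Error count")
)
plot.save("errors_vs_time.png", width=5, height=4, dpi=150)
```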
Intelligence is now a widely accepted part of the systems we interact with every day, but it comes in many forms as far as the user is concerned. Interaction designers need an understanding of the nature of intelligent systems and the ability to categorise the different types of intelligence as they appear to the user. This course will give attendees an appreciation of what 'intelligence' or 'smartness' within computer systems is, discuss how it is currently perceived, and describe the enablers of and barriers to its effective use. The course will categorise different kinds of intelligence capability. It will offer interaction design guidelines or heuristics for the design of user interfaces with intelligent features, enabling them to be more effective partners with humans. It will use a mixture of teaching techniques, combining presentation, discussion, and class exercises.
Eye tracking is an important tool in usability testing of screen-based user interfaces. Though eye tracking has long been used in usability testing, challenges remain: for example, how can gaze points be accurately calibrated, and how should a scan pattern be interpreted? In this tutorial, we will introduce the basics of the human oculomotor system, the role of eye movements in cognition, eye tracking recording techniques, and data analysis methods. Upon completing this tutorial, students will have a basic understanding of the physiological and psychological mechanisms underlying eye tracking, data collection techniques, and data analysis methods.
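One standard analysis step in eye-tracking pipelines is fixation detection; here is a minimal dispersion-threshold (I-DT-style) sketch, ours for illustration rather than the tutorial's materials:

```python
# Minimal I-DT-style fixation detection (illustrative, not tutorial code):
# a fixation is a run of gaze samples whose spatial dispersion stays small.

def detect_fixations(gaze, max_dispersion=25.0, min_samples=6):
    """gaze: list of (x, y) pixel samples at a fixed rate; returns index ranges."""
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, start = [], 0
    while start <= len(gaze) - min_samples:
        end = start + min_samples
        if dispersion(gaze[start:end]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while end < len(gaze) and dispersion(gaze[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))  # fixation spans samples [start, end)
            start = end
        else:
            start += 1
    return fixations

print(detect_fixations([(512, 384)] * 10 + [(800, 100)] * 3))  # -> [(0, 10)]
```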
This course will help participants understand the complexities of games user research methods for evaluating user experience in games. We have put together three 80-minute interactive, face-to-face course sessions at CHI 2019 on applying different user research methods in games evaluation, with playtesting exercises that help participants turn player feedback into actionable design recommendations.
With the rise of digital assistants, chatbots, and other conversational interfaces, there is a huge demand for detailed instruction in Conversation Design. This course provides a focused walkthrough of the principles, strategies, and process of Conversation Design. Topics include understanding users, defining persona, analyzing conversation components, dialog writing strategies, and the detailed process of creating natural dialog. Interactive components at each stage engage participants with individual worksheets, small group exercises and reviews, and a team project. Participants will gain an understanding of the complexity and challenges of Conversation Design, and learn about resources and tools for doing it well.
This course is a hands-on introduction to the fabrication of flexible, transparent, free-form displays based on electrochromism, for an audience with a variety of backgrounds, including artists and designers with no prior knowledge of physical prototyping. Through prototyping with screen-printed or ink-jet-printed electrochromic ink and an easy assembly process, participants will learn the essentials of designing and prototyping electrochromic displays.
The traditional "human" model in human-computer interaction prioritizes the human brain, with physical and sensory interaction as secondary emphases. As wearable technologies proliferate and mature, the user experience and human factors of the rest of the body become increasingly important. This course will provide an overview of the basic foundations of wearability and human factors of wearable systems, from anatomy and physiology to body schema, physiological experience of on-body artifacts, and the ways in which dress affects and communicates identity and social relationships.
Teaching the harmonic foundations of music is a common learning objective in many education systems. However, music theory is often considered a non-interactive subject that requires great effort to understand. With this work, we contribute a novel tangible device, called ScaleDial, that makes use of the relations between geometry and music theory to provide interactive, graspable, and playful learning experiences. We introduce an innovative tangible cylinder and demonstrate how harmonic relationships can be explored through a physical set of digital manipulatives that can be arranged and stacked on top of an interactive chromatic circle. Based on tangible interaction and rich visual and auditory output capabilities, ScaleDial enables a better understanding of scales, pitch constellations, triads, and intervals. Further, we describe the technical realization of our advanced prototype and show how we fabricated its magnetic, capacitive, and mechanical sensing.
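The geometry-music relations such a device builds on reduce to arithmetic modulo 12 on the chromatic circle, as this small sketch shows (ours for illustration, not the device's firmware):

```python
# The chromatic circle as arithmetic modulo 12 (illustration only; not
# ScaleDial's firmware): scales, triads, and intervals from one formula.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # the major-scale pitch constellation

def scale(root):
    return [NOTES[(root + s) % 12] for s in MAJOR_STEPS]

def triad(root):
    return [NOTES[(root + s) % 12] for s in (0, 4, 7)]  # major triad

def interval(a, b):
    return (b - a) % 12  # semitones between two circle positions

print(scale(7))        # G major: ['G', 'A', 'B', 'C', 'D', 'E', 'F#']
print(triad(7))        # ['G', 'B', 'D']
print(interval(0, 7))  # 7 semitones: a perfect fifth
```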
Reflection-in-action (RiA) refers to teachers' reflections on their teaching performance during busy classroom routines. RiA is a demanding competence for teachers, but little is known about how HCI systems could support teachers' RiA during busy and intensive teaching. To bridge this gap, we design and evaluate an ambient information system named ClassBeacons, which aims to help teachers intuitively reflect in action on how to divide time and attention over pupils throughout a lesson. ClassBeacons subtly depicts teachers' division of time and attention through multiple light-objects distributed over students' desks; each light-object indicates how long the teacher has cumulatively been around it (helping an adjacent student) by shifting color. A field evaluation with eleven teachers showed that ClassBeacons enhanced teachers' RiA by supporting their sensemaking of ongoing performance and modification of upcoming actions. Furthermore, teachers experienced ClassBeacons as fitting unobtrusively into their routines without overburdening teaching in progress.
There is a dearth of appropriate tools for young learners with mixed visual abilities to engage with computational learning. Addressing this gap, we present Project Torino, a physical programming language for teaching computational learning to children ages 7-11 regardless of level of vision. To create code, children connect and manipulate tactile objects to create music, audio stories, or poetry. Designed to be made and deployed at scale, Project Torino (along with a scheme of work) has been successfully used by 30 non-specialist teachers with 75 children across the UK over three months.
This paper describes a demo prototype of a tangible user interface (TUI) concept that is derived from the expressive play of musical string instruments. We translated this interaction paradigm to an interactive demo which offers a novel gesture vocabulary (strumming, picking, etc.). In this work we present our interaction concepts, prototype description, technical details and insights on the rapid and low-cost manufacturing and design process. (Video demonstration: https://vimeo.com/309265370)
Acoustic levitation can hold millimetric objects in mid-air without any physical contact. This capability has been exploited to create displays, since being able to position physical voxels in mid-air enables rich data representations. However, interesting features of acoustic levitation often go unexploited. Acoustic levitation is harmless, and sound diffracts around objects, so we can insert a hand inside the levitator and touch the levitated particles without harm. In this demo, we showcase more tangible interactions with acoustically levitated particles: passing acoustically transparent structures through the particles, manipulating particles in mid-air with wearable levitators, and moving multiple particles through direct manipulation. We hope that this demo provides a more tangible experience of acoustic levitation. Since all the presented devices are do-it-yourself, we encourage visitors to experiment further with acoustic levitation.
Ultrasound enables new types of human-computer interfaces, ranging from auditory and haptic displays to levitation-based visual displays. We demonstrate these capabilities with an ultrasonic phased array that allows users to interactively manipulate levitating objects with mid-air hand gestures while receiving auditory feedback via highly directional parametric audio and haptic feedback via ultrasound focused onto their bare hands. This demo thus presents the first ultrasound rig that conveys information to three different sensory channels while levitating small objects simultaneously.
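The core trick behind such phased arrays is emitting each transducer's wave with a phase offset so that all wavefronts arrive in phase at a chosen focal point; a minimal sketch of that computation follows (our illustration, with assumed array geometry, not the demo's control code):

```python
# Phase-delay computation for focusing an ultrasonic phased array
# (illustrative sketch with assumed geometry; not the demo's control code).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
FREQ = 40_000.0         # 40 kHz, a typical ultrasonic transducer frequency

def focus_phases(transducer_positions, focal_point, freq=FREQ, c=SPEED_OF_SOUND):
    """Per-transducer phase so all waves arrive in phase at the focal point."""
    positions = np.asarray(transducer_positions)  # shape (N, 3), meters
    distances = np.linalg.norm(positions - np.asarray(focal_point), axis=1)
    wavelength = c / freq
    return (-2 * np.pi * distances / wavelength) % (2 * np.pi)

# Example: an 8x8 grid with 10 mm pitch, focusing 10 cm above its center.
grid = np.array([[x * 0.01, y * 0.01, 0.0] for x in range(8) for y in range(8)])
grid -= grid.mean(axis=0)
phases = focus_phases(grid, [0.0, 0.0, 0.10])
```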
This demo presents Refinity, an interactive holographic signage display for a new retail shopping experience. We show a concept of a futuristic shopping experience with a tangible 3D mid-air interface that allows customers to directly select and explore realistic virtual products, using an autostereoscopic 3D display combined with mid-air haptics and finger tracking. We also present an example in-store shopping scenario for natural 3D interactions. By merging digital and physical interactions, this experience aims to engage users and produce a memorable in-store experience.
For blind users, spatial information is often presented in non-spatial form, such as electronic speech. We explore the possibility of representing spatial data on refreshable tactile graphic displays, in combination with audio feedback, utilizing both static and dynamic tactile information. We demonstrate an implementation of a New York Times-style crossword puzzle, providing interactions to query location and stored data, ask for clues in the across and down directions, edit and fill in the puzzle using a Perkins-style braille keyboard or a typewriter-style keyboard, and verify answers. Through our demonstration, we explore tradeoffs related to the available tactile real estate and overcrowding of the tactile image, with a view toward reducing both the cognitive workload involved in retaining a working mental model of the active grid and the time to complete a letter placement task.
Multiplayer augmented reality (AR) games allow players to inhabit a shared physical environment populated with interactive digital objects. However, currently available games fall short because of either limited synchronicity or limited opportunities for player movement. Here, we present Brick, a synchronous multiplayer AR game at the room scale. Brick's players collaborate to fill in a pattern of empty slots using digital "bricks" scattered about the room. This paper provides an overview of Brick from a design and technical perspective. It also discusses how Brick extends the current scope of AR games to include collaborative gameplay.
This demonstration presents the design principles of the Navigo games for reading. By reflecting on our design tools and processes, we explore the way theory, empirical evidence, best practice, and expertise have informed our design. We look into the reciprocal roles of theory and design and provide transferable lessons for the design of educational technologies in the context of HCI.
Social closeness is important for an individual's health and well-being, yet it is especially difficult to maintain over a distance. Games can help: they connect and strengthen relationships, or create new ones, by enabling shared playful experiences. We propose a demo of 'In the Same Boat', a two-player game we designed to foster social closeness between players over a distance. We leverage the synchronization of both players' physiological data (heart rate, breathing, facial expressions), mapped to an input scheme, to control the movement of a canoe down a river.
In this demo, we present Slackliner 2.0, an interactive slackline training assistant that features head and skeleton tracking and real-time feedback through life-size projection. As in other sports, proper training leads to faster skill acquisition and lowers the risk of injury. We chose a set of exercises from the slackline literature and implemented an interactive trainer that guides the user through the exercises, giving feedback on whether they were executed correctly. Based on lessons learned from our study and prior demonstrations, we present a revised version of Slackliner that uses head tracking to better guide the user's attention and movements. Additionally, a new visual indicator informs the trainee about her arm posture during the performance, and an updated post-analysis view provides more detailed feedback about her performance. The present demo showcases an interactive sports training system that provides in-situ feedback while following a well-guided learning procedure.
Ola De La Vida is a three-player cooperative game installation designed to harness the qualities of a social play environment such as an arcade or a play party (an event that mixes games, music, dance, and socializing). Ola De La Vida uses physical and digital design techniques that consider the unique qualities of a social play space while being sensitive to the complex personal, social, and interpersonal aspects each player may experience. The installation uses its scale to heighten the visibility of the game in a crowded play space, custom control methods to lower barriers to entry, and costume to promote teamwork and collaboration and to lower social anxieties. These techniques, together with the digital-physical nature of the gameplay, aim to entice players and spectators in a social environment to participate. Through this installation we hope to encourage discussion around designing for participation and the challenges of social play.
We introduce Crushed It!, an interactive game on a sensor floor. The floor is combined with a multi-projector system to reduce occlusions from players' interactions with it. Individual displays, an HTC Vive to track player position, and smartwatches add an extra layer of interactivity. We created this interactive experience to explore collaboration between people interacting with large displays. We contribute a novel combination of technologies for this game system, and our studies showed that the game is both entertaining and motivates players to stay physically active. We believe presenting at Interactivity would benefit both our research and the attendees of CHI 2019.
We present FoldTronics, a 2D-cutting-based fabrication technique for integrating electronics into 3D folded objects. The key idea is to cut and perforate a 2D sheet using a cutting plotter to make it foldable into a 3D honeycomb structure; before folding, users place the electronic components and circuitry onto the sheet. The fabrication process takes only a few minutes, allowing users to rapidly prototype functional interactive devices. The resulting objects are lightweight and rigid, thus allowing for weight-sensitive and force-sensitive applications. Due to the nature of honeycombs, the created objects can be folded flat along one axis and thus can be efficiently transported in this compact form factor.
We present an interactive editing system for laser cutting called kyub. Kyub allows users to create models efficiently in 3D, which it then unfolds into the 2D plates laser cutters expect. Unlike earlier systems, such as FlatFitFab, kyub affords construction based on closed box structures, which allows users to turn very thin material, such as 4mm plywood, into objects capable of withstanding large forces, such as chairs users can actually sit on. To afford such sturdy construction, every kyub project begins with a simple finger-joint "boxel", a structure we found to be capable of withstanding over 500kg of load. Users then extend their model by attaching additional boxels. Boxels merge automatically, resulting in larger, yet equally strong structures. While the concept of stacking boxels allows kyub to offer the strong affordance and ease of use of a voxel-based editor, boxels are not confined to a grid and readily combine with kyub's various geometry deformation tools. In our technical evaluation, objects built with kyub withstood hundreds of kilograms of load. We demonstrate the kyub software to the CHI audience and allow them to experience the resulting models first hand.
With recent interest in shape-changing interfaces, material-driven design, wearable technologies, and soft robotics, digital fabrication of soft actuatable material is increasingly in demand. Much of this research focuses on elastomers or non-stretchy air bladders. In this work, we explore a series of design strategies for machine knitting actuated soft objects by integrating tendons with shaping and anisotropic texture design.
We present CATS, a digital painting system that synthesizes textures from live video in real time, short-cutting the typical brush- and texture-gathering workflow. Through the use of boundary-aware texture synthesis, CATS produces strokes that are non-repeating and blend smoothly with each other. This allows CATS to produce paintings that would be difficult to create with traditional art supplies or existing software. We evaluated the effectiveness of CATS by asking artists to integrate the tool into their creative practice for two weeks; their paintings and feedback demonstrate that CATS is an expressive tool which can be used to create richly textured paintings.
With Maker-friendly environments like the Arduino IDE, embedded programming has become an important part of STEM education. But learning embedded programming is still hard, requiring both coding and basic electronics skills. To understand if a different programming paradigm can help, we developed Flowboard, which uses Flow-Based Programming (FBP) rather than the usual imperative programming paradigm. Instead of command sequences, learners assemble processing nodes into a graph through which signals and data flow. Flowboard consists of a visual flow-based editor on an iPad, a hardware frame integrating the iPad, an Arduino board and two breadboards next to the iPad, letting learners connect their visual graphs seamlessly to the input and output electronics. Graph edits take effect immediately, making Flowboard a live coding environment.
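The flow-based paradigm itself is compact enough to sketch in a few lines (a toy illustration in Python, not Flowboard's implementation): data flows through a graph of processing nodes rather than a sequence of commands.

```python
# Toy flow-based programming sketch (not Flowboard's implementation):
# nodes transform values and forward results to downstream nodes.

class Node:
    def __init__(self, fn):
        self.fn = fn
        self.downstream = []

    def to(self, other):
        """Wire this node's output to another node's input; returns `other`."""
        self.downstream.append(other)
        return other

    def send(self, value):
        result = self.fn(value)
        for node in self.downstream:
            node.send(result)

# Example graph: raw analog reading -> normalize -> threshold -> LED command.
scale = Node(lambda raw: raw / 1023.0)      # normalize a 10-bit reading
threshold = Node(lambda v: v > 0.5)
led = Node(lambda on: print("LED on" if on else "LED off"))

scale.to(threshold).to(led)
scale.send(700)  # prints "LED on"
```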
Live programming is an approach in which programmers receive real-time feedback about a program's behavior while writing code, traditionally via a graphical user interface. Despite its practical value, such as providing an easier overview of code and a better understanding of its structure, live programming is not yet widely used. In this work, we extend live programming to general-purpose code editors, providing new interfaces for understanding and changing the functionality of code. To achieve this, we extended a fully featured IDE with the ability to show input/output examples of code execution as the programmer is writing code. Furthermore, we integrate programming-by-example (PBE) synthesis into our tool by allowing the user to change the shown output and have the code update automatically. Our goal is to use live programming to give novice programmers a new way to interact with and understand programming, as well as a useful development tool for more advanced programmers.
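A toy sketch of the underlying idea (assumed for illustration; not the authors' tool): record input/output examples as code runs, so an editor could display them next to each definition.

```python
# Toy live-programming sketch (illustrative; not the authors' tool):
# capture input/output examples of functions as the program executes.
import functools

EXAMPLES = {}  # function name -> list of (args, result) pairs

def live(fn):
    @functools.wraps(fn)
    def wrapper(*args):
        result = fn(*args)
        EXAMPLES.setdefault(fn.__name__, []).append((args, result))
        return result
    return wrapper

@live
def slugify(title):
    return title.lower().replace(" ", "-")

slugify("Live Programming Demo")
print(EXAMPLES["slugify"])  # [(('Live Programming Demo',), 'live-programming-demo')]
```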
We demonstrate dynamic depth-of-field projection mapping on a moving 3D object. Conventional projection mapping has been limited to 2D surfaces because of projectors' narrow depth of field. Our system includes a high-speed projector, a high-speed variable-focus lens, and a stereo-camera depth sensor, so that depth information is detected and fed back to correct the focal length of the projection. As a result, the projection remains well focused on a dynamically moving 3D object.
We present Springlets, expressive, non-vibrating mechanotactile interfaces on the skin. Embedded with shape memory alloy springs, we implement Springlets as thin and flexible stickers to be worn on various body locations, thanks to their silent operation even on the neck and head. We present a technically simple and rapid technique for fabricating a wide range of Springlet interfaces. We developed six modular Springlets: a pincher, a directional stretcher, a presser, a puller, a dragger, and an expander (Fig. 1). In our hands-on demonstration, we show our modular Springlets and several Springlet interfaces for tactile social communication, physical guidance, health interfaces, navigation, and virtual reality gaming. Attendees can wear the interfaces and explore their expressive variable force profiles and spatiotemporal patterns.
We demonstrate a sensing technique for data gloves using conductive fiber. The technique enables us to estimate hand shape (the bend of each finger and contact between fingers) and to differentiate which tag is being grabbed. To estimate how far each finger bends, the electrical resistance of the conductive fiber is measured; this resistance decreases as the finger bends because the surface of the glove short-circuits. To detect contact between fingers, we apply alternating currents at different frequencies to each finger and measure the signal propagation between the fingers. The same principle differentiates a grabbed tag: each tag emits an alternating current at a unique frequency. We developed a prototype data glove based on this technique.
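A sketch of the frequency-division idea (our illustration; not the prototype's firmware): each source is identified by detecting which carrier frequency dominates the picked-up signal.

```python
# Illustrative frequency-division sensing sketch (not the glove's firmware):
# identify the contacting finger or grabbed tag by its carrier frequency.
import numpy as np

SAMPLE_RATE = 10_000  # Hz
CARRIERS = {"thumb": 500.0, "index": 800.0, "tag_A": 1200.0}  # Hz, hypothetical

def dominant_source(signal):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    power = {name: spectrum[np.argmin(np.abs(freqs - f))]
             for name, f in CARRIERS.items()}
    return max(power, key=power.get)

# Simulated pickup: the index finger's 800 Hz carrier plus noise.
t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
signal = np.sin(2 * np.pi * 800.0 * t) + 0.1 * np.random.randn(len(t))
print(dominant_source(signal))  # -> "index"
```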
LUNE is a lighting object that represents time. It visualizes the moon phase in real time with light, allowing people to recognize the day of the month through an abstract image of the moon. The product was developed to investigate the use of lighting as a metaphor for representing time. A diverse set of research and design initiatives related to time, temporality, and slowness has emerged in the DIS and HCI communities, and representing time is an important area of this work. The primary objective of this research is to suggest a new perspective on time, which we call a sense of time. First, we examine how people perceive time and trace recent research on perspectives of time in HCI. Second, we designed artifacts and investigated the use of the moon phase as a material for representing time in a new way.
Bear & Co. is a fictitious immersion into the world of being part of an IoT start-up. We invite visitors to join the company and facilitate their journey through various ethical conundrums as they become part of it. First, they must state their values: what they will bring to the company and care most about. Then, we test those values through unexpected scenarios and problems that do not have easy answers. Finally, we debrief our visitors and invite them to peruse explanations of various ethical approaches, presented as maps and diagrams, where they can interrogate their own decisions against three different philosophical viewpoints.
Responsive and emotive wearables are concerned with the visualisation of environmental and physiological data. Four research prototypes were created for doctoral research to investigate the possibility that wearable technology could create new forms of nonverbal communication using physiological data from the wearer's body. The research investigated who the audience for these prototypes might be, along with their concerns and requirements. Through the lens of how these artifacts might be used in social or formal contexts, the research gathered data from fifty potential users of emotive wearables and examined the usage and user preferences of such devices. Findings reflected the concerns of potential users, ranging from aesthetics and functionality to ethical and privacy issues.
Being-in-the-Gallery is an immersive experience which explores the embodied nature of virtual reality and the implications this has for contemporary sculptural practice and our encounters with both. This interactive artwork is experienced through the HTC Vive headset, with movement and touch as key elements of its aesthetic. It combines a physical and a digital sculpture, and in doing so creates a mixed reality that plays on a disconnect between what we can see and what we can feel.
Human perception has long been influenced by technological breakthroughs. An intimate mediation of technology lies between our direct perceptions and the environment we perceive. Through three extreme ideal types of perceptual machines, this project defamiliarizes and questions the habitual ways in which we interpret, operate in, and understand the visual world as intervened by digital media. The three machines create: hyper-sensitive vision, a speculation on social media's amplification effect and our filtered communication landscape; hyper-focused vision, an analogue version of searching behavior on the Internet; and hyper-commoditized vision, a monetized vision that meditates on the omnipresent advertisements targeted across our visual field. The site of intervention is the visual field in a technologically augmented society. All three machines have both an internal state and an external signal.
The Mixed Reality Lab has now been a staple of the CHI community for twenty years. From its founding in 1999 through to today, we have placed our relationship with art and artists at the forefront of our research methods. In this retrospective exhibition, we present some of our most recent and exciting work alongside some of our archived works, and ask viewers to consider twenty years of CHI research and innovation - not just from our lab, but from the whole CHI community. Back in 1999 when we started, virtual reality was the exciting new technology. A lot has changed since then.
Come Hither to Me is an interactive robotic performance, which examines the emotive social interaction between an audience and a robot. Our interactive robot attempts to communicate and flirt with audience members in the gallery. The robot uses feedback from sensors, auditory data, and computer vision techniques to learn about the participants and inform its conversation. The female robot approaches the audience, picks her favorites, and starts charming them with seductive comments, funny remarks, backhanded compliments, and personal questions. We are interested in evoking emotions in the participating audience through their interactions with the robot. Come Hither to Me strives to invert gender roles and stereotypical expectations in flirtatious interactions. This performative piece explores the dynamics of social communication, objectification of women, and the gamification of seduction. The robot reduces flirtation to an algorithm, codifying pick-up lines and sexting paradigms.
"Eyes" is an interactive biometric data art that transforms human's Iris data into musical sound and 3D animated image. The idea is to allow the audience to explore their own identities through unique visual and sound generated by their iris patterns based on iris recognition and image processing techniques. Selected iris images are printed in 3D sculptures, and it replays the sound and animated images on the sculptures. This research-based artwork has an experimental system generating distinct sounds for each different iris data using visual features such as colors, patterns, brightness and size of the iris. It has potentials to lead the new way of interpreting complicated dataset with the audiovisual output. Moreover, aesthetically beautiful, mesmerizing and uncanny valley-effected artwork can create personalized art experience and multimodal interaction. Multisensory interpretations of this data art can lead a new opportunity to reveal users' narratives and create their own "sonic signature."
Hybrid Dandelion is an interactive real-time animation art installation that uses bionic mechanisms of algorithmic design to study generative rules for mimicking biological form. It combines algorithmic data structures from distributed computing, fractal trees, and L-systems to grow a dandelion-like morphology. The work creates an interaction scenario in which facial recognition scans the audience's biometrics, as if decoding their genetic data, and inserts the traits into the dandelion model to modify its generative rules, a metaphor for genetic hybridization. The audience thus experiences a unique data-driven artificial life form: a dandelion embodied and possessed through their facial features, heartbeat signal, and emotional expression.
Death Ground is a competitive musical installation-game for two players. The work provides a framework within which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons spawn in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all with different properties, such as speed boosts, additional damage, and ground traps. All of these weapons affect the sound and sonic textures that each avatar produces. Additionally, the players can use elements of the environment such as platforms, obstructions, and elevation to gain competitive advantage, or position themselves strategically to reach spawned power-ups first.
Alternate realities delivered through headsets, such as augmented, mixed, and virtual reality, are becoming part of people's everyday lives. Except in some limited contexts, conventional keyboards are inappropriate for such technological media, and alternative text-entry interfaces must be explored. In this paper we present the keycube, a general-purpose cubic handheld device that goes beyond a text-entry interface by including multiple keys, a touchscreen, an inertial unit with six degrees of freedom, and vibrotactile feedback. Owing to its form factor and affordance, the keycube offers advantages with regard to mobility, comfort, learnability, privacy, and playfulness. The combination creates a novel text-entry interface convenient for many use cases across the whole reality-virtuality continuum.
We demonstrate Transcalibur, a hand-held VR controller that can render a 2D shape by changing its mass properties on a 2D planar area. We built a computational perception model using a data-driven approach, from collected data pairs of mass properties and perceived shapes. This enables Transcalibur to easily and effectively provide convincing shape perception based on complex illusory effects. Our user study showed that the system succeeded in providing the perception of various desired shapes in a virtual environment. In the demonstration, users can explore a VR application in which they feel the sensation of wielding a sword, a shield, and a crossbow, and fight a dragon with them.
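In spirit, such a data-driven perception model is a learned mapping from mass properties to perceived shape; a hedged sketch of what it might look like follows (the authors' actual model, features, and data surely differ):

```python
# Assumed sketch of a data-driven perception model in the spirit of
# Transcalibur; the authors' actual model, features, and data differ.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical collected pairs: (mass kg, center-of-mass offset m,
# moment of inertia kg*m^2) -> perceived (length m, width m).
X = np.array([[0.30, 0.05, 0.002],
              [0.35, 0.10, 0.006],
              [0.40, 0.15, 0.012],
              [0.45, 0.20, 0.020]])
y = np.array([[0.30, 0.04],
              [0.45, 0.05],
              [0.60, 0.06],
              [0.75, 0.07]])

model = LinearRegression().fit(X, y)
print(model.predict([[0.38, 0.12, 0.008]]))  # predicted perceived (length, width)
```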
Locomotion in virtual reality (VR) is an important topic, as there is a mismatch between the size of a virtual environment and the physically available tracking space. Although many locomotion techniques have been proposed, research on VR locomotion is not yet concluded. In this demonstration, we contribute to the area of VR locomotion by introducing VRChairRacer, which presents a novel mapping of a racing cart's velocity onto the backrest of an office chair and maps the user's rotation onto the steering of a virtual racing cart. VRChairRacer demonstrates this locomotion technique to the community through an immersive multiplayer racing demo.
We present VRBox, an interactive sandbox for playful and immersive terraforming that combines the approach of augmented sandboxes with virtual reality technology and mid-air gestures. Our interactive demonstration offers a virtual reality (VR) environment containing a landscape, which the user designs by interacting with real sand while wearing a VR head-mounted display (HMD). Whereas real sandboxes have been combined with augmented reality before, our approach of using sand in VR offers novel interactive features, such as exploring the sand landscape from a first-person perspective. In this demo, users can experience our VR-sandbox system, consisting of a box with sand, multiple Kinect depth sensors, an HMD, and hand tracking, as well as an interactive world simulation.
This demo showcases an interactive virtual reality experience for language learning that allows users to enter a virtual world to explore and interact with their surroundings while learning Spanish. Through immersive gameplay on the Oculus Rift, users explore Spanish translations of everyday household items in a search-and-find format, scoring points when they correctly identify and select objects. Users are able to put what they learn into practice in real time. Study participants who tried the experience found this method of language learning more enjoyable than traditional methods of studying, due to its gamification and because it did not "feel like studying". As virtual reality headsets become more accessible to the public, this approach addresses the cost limitations of traveling overseas to achieve immersion in a foreign language. The application can be expanded to most real-world scenarios and locations, and it can be applied to any language.
We demonstrate the online deployment of Geollery, a mixed reality social media platform. We introduce an interactive pipeline to reconstruct a mirrored world at two levels of detail: the street level and the bird's-eye view. Instead of using offline 3D reconstruction approaches, our system streams and renders a mirrored world in real time, while depicting geotagged social media as billboards, balloons, framed photos, and virtual gifts. Geollery allows multiple users to see, chat, and collaboratively sketch with spatial context in this mirrored world. We demonstrate a wide range of use cases, including crowdsourced tourism, an interactive audio guide with immersive spatial context, and meeting remote friends in mixed reality. We envision Geollery will be inspiring and useful as a standalone social media platform for those looking to explore new areas or share their experiences. Please refer to https://geollery.com for the paper and live demos.
This paper explores how human perceptions, actions, and interactions can be changed through an embodied and active experience of being a smaller person in a real-world environment, which we call an egocentric smaller-person experience. We developed a wearable visual translator that provides the perspective of a smaller person by shifting the wearer's eyesight level down to their waist using a head-mounted display and a stereo camera module, while allowing field-of-view control through head movements. We investigated how the device can modify the wearer's body representation and experiences through a field study conducted at a nursing school and museums, and through lab studies. Using this device, designers and teachers can understand the perspective of a smaller person, including a child, in an existing environment.
Current virtual reality applications do not support people who have low vision. We present SeeingVR, a set of 14 tools that enhance a VR app for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. We demonstrate the design of SeeingVR in this paper.
This paper describes a virtual reality (VR) exergame platform used to investigate player motivation and the experience of physical activity through variations of game design. The VR-Rides platform combines a desk cycle, real-world imagery, an HTC Vive head-mounted display (HMD) with controllers, and a Microsoft wristband, such that the player can navigate locations in a safe, immersive virtual environment. Panorama images come from Google Street View. Two games were developed to explore motivation, enjoyment, and their link to players' perceived experiences in an immersive VR exergaming setup. The platform acts as a test-bed to iteratively design and evaluate theory-driven immersive VR game designs that support activities pertaining to players' health and wellbeing goals.
Growing organizational safety awareness is propelling interest in the development of novel solutions for fire training. Significant effort in this area has previously gone into designing immersive virtual reality (VR) systems, in hopes of enabling safe exploration of, and experience of the consequences of one's actions in, scenarios that would be too hazardous in real life. Yet the fact that VR generally lacks the sensory feedback of a real fire situation has been found to impede the sense of realism and the validity of such experiences. As part of our efforts to mitigate this issue, we present a new prototype fire evacuation simulator designed to deliver not just audiovisual feedback but also real-time heat and scent stimulation.
This paper describes progress in the design and development of a new digital musical instrument (MIDI controller), MoveMIDI, and highlights its 3D positional-movement interaction design, which differs from recent orientational and gestural approaches. A user constructs and interacts with MoveMIDI's virtual 3D interface using handheld position-tracked controllers to control music software as well as non-musical technology such as stage lighting. MoveMIDI's virtual interface helps solve problems that are difficult to solve with hardware MIDI controller interfaces, such as customized positioning and instantiation of interface elements, and accurate, simultaneous control of independent parameters. MoveMIDI's positional interaction mirrors interaction with some physical acoustic instruments and provides visualization for an audience. Beta testers of MoveMIDI have created emergent use cases for the instrument.
Meditative movement involves regulating attention to the body whilst moving, to create a state of meditation. This can be difficult for beginners; we propose that drones can facilitate it, as they can move with, and give feedback on, whole-body movements. We present a demonstration that explores various ways drones could facilitate meditative movement by drawing attention to the body. We designed a two-handed control mapping for the drone that engages multiple parts of the body, a light foam casing that gives the impression that the drone is floating, and an onboard light that gives feedback on the speed of the movement. The user experiences both leading and following the drone, exploring the interplay between mapping, form, feedback, and instruction. The demonstration relates to an expansion of the attention regulation framework, which is used to inform the design of interactive meditative experiences and human-drone interactions.
In this demonstration, we present iScream!, a novel gustosonic experience that generates unique digital sounds as a result of eating ice cream. The system uses capacitive sensing to detect eating actions and based on these actions, it plays out six different playful sounds to facilitate a playful eating experience. Our aim is to support a playful way of eating because we believe that interactive technology offers unique opportunities to facilitate novel engaging eating experiences. Ultimately, with this work, we aim to inspire and guide designers working with interactive playful gustosonic experiences, which open up new interaction possibilities to experience eating as play.
Pusing Tiang (Malay; loosely translated as Round-a-Pole) is an installation that augments circle dance by turning it into a game. It was pilot-tested in a school among 16 primary schoolchildren. The results show that the installation encourages schoolchildren to synchronise their movement through play. The installation matches the CHI 2019 theme of weaving the threads of CHI, and playing the game at CHI symbolises the strength and friendship of the Celtic knot logo.
Collaborative movie viewing with loved ones increases connectedness and social bonds among family members and friends. With the rapid adoption of personal mobile devices, people often engage in this activity while geographically separated; however, conveying our feelings and emotions about a recently watched movie or video clip is often limited to a post on social media or a short blurb on an instant messaging app. Drawing on the popular interest in the quantified self, which envisions collecting and sharing biophysical information from everyday routines (e.g., workouts), we have designed and developed Movie+, a mobile application that utilizes personal biophysical data to construct an individual's "emotional fingerprint" while viewing a video clip. Movie+ allows the selective sharing of this information through different visualization options, as well as rendering others' emotional fingerprints over the same clip. In this submission, we outline the design rationale and briefly describe our application prototype.
Family connections are maintained through sharing reminiscences, often supported by family photographs which easily prompt memories. This is increasingly important as we age, as picture-based reminiscence has been shown to reduce older adults' social isolation. However, there is a gap between sharing memories from physical pictures and the limited support for oral social reminiscence afforded by digital tools. PhotoFlow supports older adults' picture-mediated social storytelling of family memories using an intuitive metaphor mirroring sharing physical family pictures on a table top. The app uses the speech of oral storytelling to automatically organize pictures based only on what has been said. This simplifies the overall process of family picture interactions by leveraging one enjoyable aspect to ease a more effortful one. In particular, the familiar table top interaction metaphor has the potential to bridge the gap between physical picture reminiscence and managing digital picture collections.
I investigate how individuals can employ human centered design techniques to adopt positive behaviors in their everyday life. I have designed and built a tool that supports individuals in planning physical activity using design processes such as understanding other people's behavioral needs, ideating behavioral interventions and prototyping behavioral solutions using evidence-based techniques. In my dissertation I will demonstrate how tools can support iterative behavioral design by evaluating behavioral solutions, drawing insights from testing, and making iterations to a behavioral solution. This work will result in new tools and design methods that enable people to adapt evidence-based techniques in a way that fits with their everyday needs.
This project aims to develop a voice training application for transgender people. Voice training is typically conducted by a speech therapist and consists of personalized sessions that support individuals in changing their voices (such as modifying pitch, resonance, or speech patterns). The reasons why people pursue voice training are varied, but often include discomfort with a voice that is misaligned with gender identity. Training with a speech therapist may be inaccessible due to health disparities; thus, a technological solution, as I propose in my research, is necessary. This project will address existing constraints to design a novel voice training application in partnership with community members, using a participatory research methodology and combining the fields of speech science, feminist and queer theory, and HCI.
This is practice-based PhD research that explores inter-affective, movement-based communication as an approach to mediated connectedness (an immediate, felt experience of feeling close to another person) over distance in dyads. Inspired by recent socio-cognitive theories (e.g., enactive intersubjectivity [8] and the synergistic approach [16]), this thesis views communication as a dynamic coordination between two holistic living bodies, rather than two abstract minds transmitting information from sender to receiver. The main hypothesis is that coordinated inter-affective movement can facilitate the feeling of connectedness in mediated settings. I will creatively explore this assumption by developing a sequence of experimental design artifacts.
Current food consumption patterns are unsustainable. The food system is globalized and dominated by a few large organisations, which disempowers people from making changes to it. However, grassroots communities are important in engendering positive change from the bottom up, and long-term thinking is key to empowering these communities in transitioning towards sustainable food systems. This research is concerned with practices of "futuring" in grassroots communities and with how HCI can facilitate openness, participation, and coordination in constructing visions of the future and reconciling them with the communities' everyday practices.
Mindfulness meditation has the potential to help practitioners cope with their stress. Currently, projects often use corrective feedback models to help people understand when they are out of a mindfulness state. My dissertation uses research by design to build a technology intervention for mindfulness meditation that adopts a strategy of gently guiding and supporting the user's in-the-moment experience of practising meditation through a natural soundscape that responds to the user's brainwave activity collected from the Muse EEG Headset.
Social platforms present a challenge for self-presentation and identity management by obscuring audiences behind algorithmic mechanisms. Users are increasingly aware of this and actively adapting through folk theorization, but we do not know how users are coping with the constant change endemic to these platforms, or how we can assist users in this coping process. My dissertation will examine how users perceive and adapt to the constantly-changing platform space using self-presentation and audience management as an illustrative case.
My research explores how individuals with mental illness express themselves online and off. Through digital ethnography, including interviews with Instagram users and manual collection of public content on Instagram, I have holistically examined the experience of mental illness as expressed through social media. This user-centric approach reveals the limitations of computational techniques, which my dissertation will address by combining qualitative methods with generative algorithms to explore new 'ways of seeing' mental illness. I will create new tools enabling users to generate representations from their own posts, in support of creating new representations of mental illness, advancing algorithmic fairness, and confronting technological forms of oppression online.
Programming requires expertise to employ effectively. My research aims to help end user programmers more effectively author, understand, and reuse code and data through the design of new languages and program visualization tools. New programming languages can raise the level of abstraction to focus on relevant domain-specific details. Improved tools can better align with and enrich end user programmers' mental models. Visualizing program state and behavior promotes program understanding, and can proactively surface surprising or incorrect results. My future work proposes to explore new visualization techniques and languages to facilitate understanding of constraint programming systems.
The following extended abstract describes a research plan for, and preliminary findings of, a dissertation thesis on player types. The four studies described aim to answer the questions of what player types are, how such categorizations can be improved, and whether player typologies based on self-reports can be validated by emotional responses during play.
In complex chronic care, patients' ongoing awareness of their health status and ability to articulate health needs are vital to active participation in care, yet patients face various challenges that can thwart their potential to engage in such participation. My research explores how design methods in HCI can evolve to meet these challenges by engaging both adolescents and family caregivers throughout the process of tracking the patients' illness experiences and co-designing rich representations that are expected to support adolescents' communication of these experiences in care. This thesis will contribute 1) a critical understanding of the ways in which human-centered design can address the primary challenges that adolescents face when engaging in care, 2) a novel method for conducting co-design research with chronically ill patient families, and 3) a family-centered mobile health technology that demonstrates the feasibility of engaging pediatric patient families.
Multi-device environments have enormous potential to enable more flexible workflows during our daily work. At the same time, visual data exploration is a fragmented sensemaking process requiring a high degree of flexibility. In my thesis, I aim to bring these two worlds into symbiosis, specifically for sensemaking with multivariate data visualizations and graph visualizations. This involves three main objectives: (i) understanding the devices' roles in dynamic device ensembles and their relations to exploration patterns, (ii) identifying mechanisms for adapting visualizations to different devices while preserving consistent perception and interaction, and (iii) supporting users and developers in designing such distributed visualization interfaces, e.g., through specific guidelines. As specific contributions, objectives (i) and (ii) are planned to emerge into a design space, while (iii) leads to a set of heuristics. So far, I have worked extensively on the first objective and touched on the other two.
My research introduces expressive biosignals as a novel social cue to improve interpersonal communication. Expressive biosignals are sensed physiological data revealed between people to provide a deeper understanding of each other's psychological states. My prior work has shown the potential for these cues to provide authentic and validating emotional expression, while fostering awareness and social connection between people. In my proposed research, I expand on this work by exploring how social responses to biosignals can benefit communication through empathy-building and social support. This work will scope the design space for expressive biosignals and inform future interventions for a variety of social contexts, including interpersonal relationships and mental health.
Creative problem-solving requires both exploratory and evaluative thinking skills. The contextual, open-ended nature of creative tasks makes them uniquely challenging to teach and learn. People tend to under-explore in problem-solving, settling on the most available representation of a problem and hindering potentially more creative solutions. My dissertation examines how inventive scaffolds provide feedback between the exploration and evaluation processes of creative problem-solving, potentially amplifying the creativity of solutions. I investigate this through two interventions: first, interactive guidance and adaptive suggestions, embodied in the CritiqueKit system, to improve critique and evaluation of creative work; second, problem-framing scaffolds to reduce fixation and enhance exploration. My research demonstrates methods for increasing human inventiveness, with relevance to creative education and the design of creativity support interfaces.
My research examines new ways of persuading citizens to change their waste management behavior and protect the environment. As a first step, I conducted a user-based study to find what strategies could motivate citizens to adopt positive waste disposal behaviors. Mapping the results to matching persuasive technology techniques and operationalizing them in a mobile web platform, I show how mobile persuasive system interventions could be designed to promote positive, environmentally responsible behaviors and protect the environment from pollution.
The introduction of technology into the worlds of fashion and haute couture has made it possible for fashion designers and technologists to create and experiment with garments and wearables in a variety of novel and expressive forms. Several of these technology-infused haute couture garments are shown on international runways and can ultimately influence the design of consumer fashion and wearable products. Within this context, I describe my dissertation, which aims to explore and understand the role of technology throughout the process of design and fabrication in the haute tech couture domain and to uncover broader implications for the design of wearables.
This overview describes my ongoing dissertation research on diversity within collaborative video game design. First, I explain why research into daily work within this field is needed, especially with a focus on diversity. Next, I briefly review previous research and identify three key areas for considering diversity in the field: participation of underrepresented and marginalized groups, the structure of organizations, and collaborative work tool selection and use. I then outline my qualitative research approach of conducting semi-structured interviews with video game designers. Finally, I present some preliminary results and expected contributions for this research.
Industry and research increasingly explore opportunities to make our homes smart, e.g. through the Internet of Things (IoT). Technological developments nurture this rise of smart products, seemingly corresponding to households' needs. Yet, these domestic environments remain a complex domain to study or design for. This work explores the understudied complexity of families' needs and values in relation to connected and smart technology, in particular as a multi-user group. By leveraging participatory and do-it-yourself practices, I aim to engage families in discussion - and empower them to externalize and reflect upon their views. As such, I can study their reflective practices to reveal (tacit) understandings and (latent) needs that inform future developments in smart home technologies.
Digital data is a pervasive component of modern society, with people managing a growing number of data types across many devices. My research explores people's choices about what to keep over the long term and aims to design personalized data management tools. In a first study, I characterized individual differences in data preservation behaviors. I plan to use interviews, a survey, and probing methods to further extend this characterization and define a design space for long-term data management. Then, I plan to build and evaluate a prototype that synthesizes findings from all my studies.
This abstract describes the background and motivation for dedicating my PhD to the exploration of socially-focused technologies for childhood cancer patients. Very little work has been done, especially in the fields of human and child computer interaction, to explore the ways in which the hospital context, in conjunction with the cancer experience, impacts children's social and emotional well-being during middle childhood (ages 6-12), and in turn how technology could improve their experience. My research seeks to (1) empower children with cancer by providing a platform for them to voice their own experiences with isolation, loneliness, and loss of a normal childhood, as well as how technology may better support their needs, (2) contribute design knowledge about how to support meaningful social interaction and play that is age and 'ability' appropriate, and (3) provide insight for future design and evaluation studies by better understanding constraints/opportunities for socially-focused technologies intended for use in a real-world pediatric hospital environment.
This doctoral work considers how to best co-design with minimally-verbal children on the autism spectrum in classroom contexts. It focuses on 1) leveraging personal interests and individual strengths to foster engagement, social interaction and self-expression through novel technologies and 2) child-centred, holistic methodological approaches to co-design work. This research questions how integrating these may better engage and include minimally-verbal children on the spectrum in the co-design of digital technologies.
Despite pervasive messaging about the dangers of "screen time," children and families remain avid consumers of digital media and other technologies. Given competing narratives heralding the promise or the peril of children's technology, how can designers best serve this audience? In this panel, we bring together world experts from: children's media and communications, pediatrics and human development, HCI and design, and industry product development to debate the validity of pundits' concerns and discuss designers' opportunities and obligations with respect to creating products for this user group. Panelists bring diverse--and sometimes conflicting--perspectives on the conceptual frameworks that are most appropriate for understanding family technology use, the ethical considerations designers should bring to this space, and the most pressing needs for future research. Grounding the conversation in guidance from the audience, panelists will share their visions for a research agenda that separates moral panics from credible concerns and promotes the design of positive digital experiences for children and families.
Tangible user interfaces have a rich history in HCI research ever since their introduction two decades ago. But what are the practical implications, the commercial potential, and the future of this influential paradigm? This panel starts by looking into the importance of tangible interaction and its current role. It will then draw on the expertise of both the panelists and the audience to speculate about its future and new opportunities for the field. The panelists represent a variety of perspectives from both industry and academia, and include some of the most well-known innovators in the field. The format builds on the CHI 2006 panel The state of tangible interfaces: projects, studies, and open issues, which shared some of the same organizers.
Since its inception in the early 1980s, HCI and UX have sought wider recognition and influence. Now digital transformation, a pervasive shift in the role of information technology, will offer both practitioners and researchers far more influential roles in organizations. This shift, which is taking place in industry, education and government, is part of a larger shift to a global, digitally connected society.
This panel builds on the theme of CHI'19 - Weaving the Threads - through its focus on how HCI/UX, research and practice will be integrated as organizations transform into digitally-driven entities.
The panel brings together thought leaders with backgrounds in both academia and industry. With extensive audience participation, we will explore the implications of digital transformation on the roles of HCI/UX, the challenges, and the new skills needed to support a culture change and collaborate with a wider range of stakeholders.
It is our hope that this panel can become a jumping-off point for future work, built on our belief that HCI/UX are vital drivers of new technologies and of beneficial societal transformations. Working through national agencies and, perhaps in time, becoming part of the UN's Sustainable Development Agenda would position HCI/UX to fulfill its role as a key driver of both business and social transformation.
As an interdisciplinary field, CHI embraces diverse research practices ranging from controlled lab experiments to field studies in homes and communities. While quantitative research in the lab emphasizes scientific rigor in testing hypotheses, qualitative research in the wild maximizes the understanding of the contexts in which technologies are used. Furthermore, each type of research makes its impact in different ways. This panel invites researchers with varied backgrounds to discuss the tensions and trade-offs between research in the lab and in the wild, in aspects of scientific rigor, real-world relevance, and impact. The goal is to enhance mutual understanding between researchers with divergent values and practices within the CHI community.
An ongoing challenge within the diverse HCI and social computing research communities is understanding research ethics in the face of evolving technology and methods. Building upon successful town hall meetings at CHI 2018, GROUP 2018 and CSCW 2018, this panel will be structured to facilitate audience discussion and to collect input about current challenges and processes. It will be led by members of the ACM SIGCHI Research Ethics Committee. We will pose open questions and invite audience discussion of practices centered on recent "hot topic" issues. For this year's town hall, the primary focus will be on paths to balancing the often-competing regulatory frameworks under which we operate (some of which have recently undergone significant revisions) with our community's efforts to reveal ethical challenges posed by new interactive technologies and new contexts of use. We will engage the audience in discussions on whether there is a non-colonial role for ethics education within the broad HCI community, how that may capture the cultural and disciplinary differences that are woven into CHI's fabric, and how research ethics issues should be handled in the SIGCHI paper submission and review process.
As a scholarly field, the ACM SIGCHI community maintains a strong focus on conferences as its main outlet for scholarly publication. Historically, this originates in how the field of computer science adopted a conference-centric publication model as well as in the organizational focus of ACM. Lately, this model has become increasingly challenged for a number of reasons, and multiple alternatives are emerging within the SIGCHI community as well as in adjacent communities. Through revisiting examples from other conferences and neighboring communities, this panel explores alternative publication paths and their opportunities and risks.
Some life experiences can generate profound and long-lasting shifts in core beliefs and attitudes, including subjective transformation. These experiences can change what individuals know and value, as well as their perspective on the world and life, helping them grow as persons. Because of these characteristics, transformative experiences are gaining increasing attention in psychology, neuroscience, and philosophy. One potentially interesting question is how transformative experiences can be invited by means of interactive technologies. This question lies at the center of a new research program, transformative experience design, which has two aims: (1) to investigate phenomenological and neurocognitive aspects of transformative experiences, as well as their implications for individual growth and psychological well-being; and (2) to translate such knowledge into tentative design principles for developing experiences that aim to support meaning in life and personal growth. Our goal for this SIG is to discuss challenges and opportunities for transformative experiences in the context of interactive technologies.
Augmented reality and spatial information manipulation are increasingly used in environment-integrated form factors and wearable devices such as head-mounted displays. The integration of this exciting technology into many aspects of people's lives is transforming the way we understand computing, pushing the boundaries of Spatial Interfaces into virtual but embedded environments. We think the time is ripe for a renewed discussion about the role of Augmented Reality within Spatial Interfaces. With this SIG we want to expand the discussion of Spatial Interfaces and the way they impact interaction with the world in two areas. First, we aim to critically discuss the definition of Spatial Interfaces and outline the common components that build such interfaces in today's world. Second, we would like the community to reflect on the path ahead and focus on the kinds of experiences Spatial Interfaces can achieve today.
This SIG meeting will examine the domestic technologies and routines of diverse households as well as the role of gender in the use and maintenance of these technologies. Our aim is to bring together domestic technology experts and social scientists who study the domestic environment across a range of socio-economic groups to discuss the present and the future of domestic technologies, including their impacts on the lives of those who are often unvoiced, such as paid domestic workers.
The proliferation of digital technologies has facilitated the adoption of innovative approaches to addressing global maternal health challenges. Worldwide, HCI researchers - from both resource-constrained and resource-rich countries - are actively engaged in developing novel responses to an ever-evolving maternal health landscape. However, opportunities for these researchers to interact and engage in sustained dialogue and collaboration are limited. The purpose of this Special Interest Group (SIG) is to bring these professionals together to support an active global network of maternal health researchers and facilitate collaboration across borders.
If social, economic and environmental sustainability are linked, then support for the increasing number of non-profit groups and member-owned organizations offering what Trebor Scholz has called "platform cooperativism" [17] has never been more important. Together, these organizations not only tackle issues their members identify in the world of work, but also provide network-driven collections of shared things (e.g., books, tools) and resources (e.g., woodworking spaces, fab labs) that benefit local communities, potentially changing not just the use of resources at the community level, but socio-economic structures on the ground (e.g., [15]). In contrast to the for-profit services often associated with the sharing economy (e.g., Uber, Airbnb), platform co-ops advocate ecological, economic and social sustainability, with the goal of promoting a fairer distribution of goods and labor and ultimately creating a stronger sense of community. While some HCI sub-communities (e.g., CSCW) have started to explore this emergent phenomenon, especially leveraging ethnographic research methods, researchers have called for more diverse HCI approaches to address the growing scope of challenges within platform co-ops, member-driven exchange systems, and cooperativism more broadly. This SIG aims to bring together researchers from different HCI sub-communities to identify future research directions in HCI around cooperativism and platforms.
Asynchronous Remote Communities (ARC) methodology has been used to explore HCI topics in a range of contexts. This innovative methodology takes advantage of the technological tools and platforms that are often the subject of HCI research to extend existing methods of data collection, pushing methodology beyond historical modes and allowing better connection with populations who have previously been left out of the research process. This SIG will make space for researchers and practitioners who are interested in using ARC methodology to connect with people who have already used ARC, discuss challenges and opportunities with this methodology, and consider how to extend similar innovative, distributed-computing-based methods into new contexts.
Participatory Design (PD) provides unique benefits in designing technology with and for specific target audiences. However, it can also be an intensive and difficult process, with unexpected situations that can arise at any stage. In this Special Interest Group (SIG), we propose that PD researchers exchange "war stories" about their unexpected and difficult experiences with PD. This will facilitate reflective discussions and the identification of possible solutions, and enable future PD research to plan for similar situations, thereby making difficulties a little less unexpected.
Due to policies supporting the inclusion of disabled children in mainstream schools and the use of technologies to enable personalized schooling, there are broad research incentives and opportunities to design technologies for disabled children in educational contexts. A workshop at CHI 2018 with researchers and practitioners working on accessible and assistive technologies for children in educational settings raised two on-going challenges in this area: (1) Very few assistive technologies proposed in research are evaluated in context, notably because of the many practical constraints on evaluation when working with these small communities of diverse individuals; (2) The scholars turning their attention to context raise new design preoccupations, such as interdependence, for which we do not yet have a community consensus regarding the suitable approaches to evaluation. Although this workshop was conducted with researchers working with children with visual impairments, these challenges apply more widely to the field of technologies for children with disabilities in educational settings. The purpose of this SIG is to bring together researchers and practitioners to encourage the homogenization of evaluation approaches for accessible and assistive technologies in schools.
In this SIG, we propose a gathering of researchers and practitioners thinking about HCI in learning and educational contexts to foster an ongoing Learning and Education community at CHI. With the recent increase in CHI submissions relating to learning (40% more submissions than the previous CHI), this SIG is an opportunity to foster an inclusive dialogue on designing and studying phenomena, tools, and processes related to learning and education. This SIG will bring together researchers, educators, and practitioners with three goals in mind: (1) discussing more inclusive cross-disciplinary perspectives on learning; (2) defining future directions and standards for learning and education contributions in CHI; and (3) building community across research/practice boundaries.
The global refugee crisis is a significant current challenge affecting millions of children. The process of refugee migration comes with major immediate as well as long-term risks to children's physical and mental health, education, and prospects. Despite the multiple dangers and challenges during migration, most refugee families have access to and make use of interactive technologies, prior to, during, and after migration. This SIG meeting is an opportunity to discuss novel potential roles for technologies to alleviate some of the challenges faced by child refugees.
The growing corpus of queer research within HCI, which started by focusing on sites such as location-based dating apps, has begun to expand to other topics such as identity formation, mental health and physical well-being. This Special Interest Group (SIG) aims to create a space for discussion, connection and camaraderie for researchers working with queer populations, queer people in research, and those using queer theory to inform their work. We aim to facilitate a broad-ranging, inclusive discussion of where queer HCI research goes next.
Sketching is universal. It enables us to work through problems, communicate complexity, work with people who have diverse needs, and document the work processes we employ within Human-Computer Interaction. Increased interest in sketching as a methodology within HCI has led to increased attendance at interactive courses, meet-ups, and discussion groups, from complete beginners to seasoned researchers with the skills and knowledge to support others. By bringing together these individuals, we are able to advance the understanding of how sketching underpins research, and how we might work with sketching as technology advances. SketCHI 2.0 aims to support ongoing discussions and collaborations around sketching in HCI, and to further build the Sketching HCI community. As well as drawing on location, gathering feedback, and holding discussions, we will form collaborative working groups to further our collective interest in this area and conduct high-level discussions about the practical applications and outputs of sketching in HCI.
Gender disparity in high tech is a long-standing challenge. The number of women in tech is lower than it was 30 years ago; women leave the field 50% more often than men; attrition costs companies money and talent. This SIG addresses the issue of gender and retention by changing key work practices. In a maker community, critique - giving and receiving feedback on one's work - is a necessary, everyday experience. Yet women especially lose self-esteem when criticized. Since HCI professionals are key participants in critique, it is a good place to start improving interactions within diverse teams. In this SIG we engage the community in a critical examination and reinvention of the critique process. Using a Mini Living Lab format, participants share experiences of their critique practices, brainstorm improvements, and try them out with a practice problem. The organizers share insights from their Living Lab company-research partnerships addressing gender and retention.
Currently, the United Nations High Commissioner for Refugees estimates that there are around 65.8 million forcibly displaced people worldwide [16]. As digital technologies have become more available, humanitarian researchers and organizations have begun to explore how technologies may be used to address refugee needs under the umbrella of Digital Humanitarianism. Interest in refugee and humanitarian contexts has also been expressed within the HCI community through the organization of workshops at conferences. While previous engagements within the HCI community have focused on our experiences of working within refugee contexts as well as developing a common research agenda, we have yet to explore how HCI research fits within wider humanitarian research and in relation to digital humanitarianism. This SIG invites HCI researchers to engage in discussions on situating HCI research within humanitarian research and response.
Augmenting grassroots community policing (CP) efforts with technologies that assist citizens is a promising strategy for reducing real and perceived fear of crime. We used a human-centered design approach, working with residents of the St. Paul Summit-University neighborhood, to discern abstract functionalities for developing new CP technology. We then created and evaluated NeighBoard (Figure 1), which aims to enhance the social fabric of communities by letting citizens implement their own strategies for preventing crime and maintaining safety in their neighborhood.
The current Indian education system promotes competition among students, placing heavy emphasis on ranks and marks. As a consequence, students tend to drift away from a collaborative mindset toward a competitive one. The learning gap is even larger in schools catering to lower income groups due to the absence of digital infrastructure and digital knowledge. As a result, a large section of Indian students is cut off from a major source of knowledge, the internet, widening the gap between students in society. With co ed, we attempt to bridge these gaps by providing a platform that fosters mutual learning and seamlessly combines digital mediums with long-established pen and paper.
Visual impairment can profoundly impact well-being and social advancement. Current solutions for accessing graphical information fail to provide an affordable, user-friendly collaborative platform for visually impaired and sighted people to work together. Therefore, sighted users tend to have low expectations of visually impaired people while working in a team. Hence, visually impaired people feel discouraged from participating in a mixed-population collaborative environment, and their generative capabilities remain devalued. In this paper, we propose an audio-haptic enabled tool (Drawxi) for free-form sketching and sharing simple diagrams (processes, workflows, ideas, perspectives, etc.). It provides a common platform for visually impaired and sighted people to work together by communicating each other's ideas visually, enabling the discovery of generative capabilities in a hands-on way. We relied upon participatory research methods (contextual inquiry, co-design) involving visually impaired participants throughout the design process. We evaluated our proposed design through usability testing, which revealed that collaboration between visually impaired and sighted people benefits from the use of common tools and platforms, enhancing the degree of their participation in a collaborative environment and the quality of co-creation activities.
Body shape anxiety is a common problem distressing people around the world. While people constantly suffer from body shape anxiety, they do not realize that the beauty standard is not an absolute truth: it changes and depends on time and place. To help free people from the constraints imposed by social beauty standards and to connect those who have concerns about their body shape, we devised a technological solution named "Shadoji", which brings people a new perspective on their body shapes, gives them a chance to create their unique body shape emoji, and lets them explore diverse body shapes from users around the world.
This study focuses on solutions to issues that arise from gaps in communication between primary family caregivers of older adults and respite caregivers. We collected data through 18 semi-structured interviews with primary family and respite caregivers and qualitatively analyzed the interviews to extract common needs. Participants identified three main needs that our designs address: building trust through status updates, learning routines & care management, and accessing technology. Based on those needs, we designed a prototype of an application which connects primary family caregivers with respite caregivers and facilitates communication between the involved parties. This design can serve as a framework for future work designed to improve elder care in general, the well-being of caregivers, and the effectiveness of respite care.
A healthy romantic relationship is one of the life goals of many couples. Sharing feelings, emotions, thoughts, activities, and even finances enables couples to spend quality time with each other. In particular, financial management plays an important role in the relationship quality of emerging adult couples. We present Amor, a mobile application that helps emerging adult couples pool and spend their money together for a common goal. Amor also allows couples to explore activities that harmonize with their characteristics and encourages the two to achieve their desired goals.
Occidental societies are becoming increasingly demanding. Citizens often feel overwhelmed and feel the need to rest, which they do by isolating themselves from others. Individuals who use this time alone to reflect on their situation find that this reflection helps create deeper connections with others afterwards. With this in mind, we developed Sentiō, a solution that aims to encourage people to take time for themselves in the hope that it will lead them to be more open-minded when interacting with others. The solution helps people focus on bodily sensations and generates a comfortable and effortless time for introspection. Sentiō uses the subconscious benefits of perceiving one's heartbeat to foster personal reflection.
While having strong social networks is essential for good quality of life, socializing gets more difficult as we age, even if we are surrounded by people in a nursing home setting. Care staff and family members want to be able to connect with residents/loved ones but face limitations (time constraints, uncertainty about how), and it can be challenging to feel a connection to another resident who has differing abilities. We propose that through sharing strong positive memories, members of a nursing home community (residents, care staff, and families) will be able to build empathy for one another and thus strengthen their community bonds. By applying multi-modal technologies to a familiar medium, the book, we provided a means by which intergenerational users, from various backgrounds, could interact with and gain meaning from the content.
Children's ability to walk to school or play around their neighborhoods without parental supervision has severely declined in the past decade. Loss of local activities and mobility may leave children unprepared for transitions to adulthood, make them less independent, and have negative health effects. In this paper, we discuss the findings of our research on children's unsupervised play in their neighborhoods. We propose HelloBox, a system including an app, wearable RFID tags, and check-in stations to keep track of children when they are outside without an adult. HelloBox lets parents know where their children are without restricting their independence and builds community between local families. The project focuses on improving children's independent mobility and encouraging social interactions between families within neighborhoods.
Hearing loss is a prevalent public health crisis that often goes unnoticed in the public eye due to its invisibility. People who are hard of hearing (HH) often experience social isolation caused by the lack of understanding from the hearing community. Our research revealed that there is a communication gap between the hearing and HH communities. The current state of the art tends to focus on assisting HH people to participate in conversations, which places more responsibility on the HH community and little on the hearing community. However, communication works in two directions; it requires both parties to take responsibility. In this paper, we present HearU - an integrated, multichannel system that includes public interaction. Through the creation of an immersive experience that increases people's awareness of hearing loss, we aim to use HearU as a medium to weave these two communities together as a whole.
Amyotrophic Lateral Sclerosis (ALS) is a serious and poorly understood disease, impacting 50,000 people a year globally. Our research found that people with ALS express a lack of connection with other people with the disease, and that the general public lacks awareness about ALS. We also identified an engagement problem with the currently available resources to connect and support people with ALS. To address these issues, we introduce 'PALS' - an accessible crowdsourcing and connection quilt, first hung like a tapestry in the ALS clinic, then later used as an interactive public display. The quilt offers the opportunity to access crowdsourced information concerning individual experiences of ALS. Our work offers three primary contributions: 1) adding to limited HCI research concerning the ALS community by establishing the needs, 2) applying the 'PALS' quilt design solution to these needs, and 3) combining three modalities: crowdsourcing, tangible tapestry displays, and interactive waiting education in a unique way.
Retirees suffer from impaired mobility, loss of friends and family, and loneliness. Although these problems could be mitigated through the use of digital communication devices and the internet, many retirees lack the skill and confidence to use them effectively. The existing community initiatives that aim to help retirees understand technology often lack volunteers to teach them. This is why we developed TechBuddies, a two-component system aimed at engaging more students as volunteers. The first component consists of an interactive display that raises awareness about volunteering opportunities by simulating an interaction between a retiree and the potential volunteer. The second component involves an app-based platform that facilitates communication about events and encourages long-term engagement. TechBuddies raises awareness about and provides a platform for inter-generational interactions between students and retirees who are a part of the same local community.
Heart rate variability (HRV) has become a widespread means of investigating the health and stress states of individuals. This paper explores the effectiveness of representing HRV measures with modalities other than visual displays, such as audio or haptics. We undertook a preliminary study in which we applied a parameter mapping sonification approach to transform the HRV signal into an audible form, and sought to evaluate human perception of the developed auditory interface. A dataset of interbeat interval measurements from individuals experiencing changes in mental state through meditation was selected as the basis of the study. The HRV parameters of the dataset were mapped to acoustic features using a linear mapping technique. The feasibility of the system was assessed by measuring learnability, performance, latency, and confidence. The results suggest great potential for incorporating auditory displays in the analysis of HRV. Participants were able to distinguish the different meditation states and types with minimal training time. However, further studies should be conducted on a larger population to verify the findings of this preliminary study.
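As a minimal sketch of the kind of linear parameter mapping described above (the parameter ranges, the choice of interbeat intervals as input, and the pitch target are illustrative assumptions, not the study's implementation):

```python
def linear_map(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi], clamping first."""
    x = max(in_lo, min(in_hi, x))
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def sonify_ibi(ibi_ms):
    """Map one interbeat interval (ms) to a pitch; shorter intervals sound higher.

    The 600-1200 ms and 220-880 Hz ranges are placeholders for the sketch.
    """
    return linear_map(ibi_ms, 600, 1200, 880, 220)

# Example: sonify a short series of interbeat intervals (ms).
ibis = [850, 820, 900, 780]
print([round(sonify_ibi(i), 1) for i in ibis])  # [605.0, 638.0, 550.0, 682.0]
```

In a real system, the resulting frequencies would drive a synthesizer in real time; additional HRV parameters (e.g., variability over a window) could be mapped to loudness or timbre in the same way.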
While learning science research has explored approaches to improving students' problem-solving skills by introducing tools that support metacognitive reflection, this work has focused on problems with clear solutions rather than on how metacognitive reflection can help students develop their self-regulation skills, which help them understand and control their learning environment through planning, practice, and self-evaluation. This paper presents Muse, an in-action chatbot interface that prompts students to reflect metacognitively on their self-direction process in the midst of working on independent research projects. Students participate in the Design, Technology, and Research (DTR) program, which gives several undergraduates the opportunity to self-direct an independent research project using the socio-technological model Agile Research Studios (ARS). Results from a case study suggest that Muse helps students identify time-consuming habits and set aside less important tasks by giving them the opportunity to act on their reflections and adjust aspects of their process that they deem less effective.
While museums are often designed to engage and interest a wide variety of audiences, teenagers are a neglected segment. This PhD research in Digital Media explores how digital technologies can help natural history museums create immersive museum experiences for teenagers (15-18 years old), especially through digital storytelling and gamification frameworks. The contribution will be a set of guidelines to aid in designing interactive experiences inside these museums. So far, we have involved a total of 155 teens through co-design sessions, 130 in focus groups, and 98 in usability studies, as well as 3 museums, 12 curators, and 17 master students. Through qualitative analysis, our preliminary findings suggest that teenagers value gamification and storytelling elements when thinking about enjoyable museum tours, while curators value story-based narratives as the most prominent method of providing enjoyable museum experiences for teens. Based on these findings, and in collaboration with the Madeira-ITI, two interactive mobile experiences targeted at teenagers were developed for the Natural History Museum of Funchal, Portugal.
This paper describes a PhD research project involving people with dementia and practitioners who work primarily with people with dementia to support engagement in meaningful activities and activities of everyday living. The aim of this work is to develop a technology that adapts to the changing cognitive demands of people with dementia in order to facilitate continuous engagement in meaningful activities. In-depth, semi-structured interviews were conducted with practitioners to understand their methods for personalizing activities and the implications for the design of future adaptive technologies. Preliminary results from interviews with Occupational Therapists are presented.
Mobile app designers aim to develop the best mobile software interfaces in the least amount of time, and rely on testing ideas with prototypes in lieu of building costly, fully functioning applications. Yet, designers cannot effectively prototype some complex app experiences, including augmented reality applications like Pokémon GO, because existing tools lack the needed features, or because prototyping in them is too time intensive to be feasible. To solve this problem, we introduce Lake, a mobile application prototyping tool that enables the creation of complex mobile applications with the same ease as paper prototyping. By leveraging the Wizard of Oz technique used in paper prototyping in our digital medium, we enable designers to prototype at the same low cost as paper, but at a much higher fidelity. Through a pilot study (N=6), we find that designers are able to gather organic in-context feedback from complex prototypes made with Lake.
Nowadays, speech is becoming a more common, if not standard, interface to technology, as can be seen in the trend of technology changes over the years. Increasingly, voice is used to control programs, appliances and personal devices within homes, cars, workplaces, and public spaces through smartphones and home assistant devices using Amazon's Alexa, Google's Assistant, Apple's Siri, and other proliferating technologies. However, most speech interfaces are not accessible to Deaf and Hard-of-Hearing (DHH) people. In this paper, the performance of current Automatic Speech Recognition (ASR) on the voices of DHH speakers is evaluated. ASR has improved over the years and can reach Word Error Rates (WER) as low as 5-6% [1][2][3], with the help of cloud computing and machine learning algorithms that take in custom vocabulary models. In this paper, a custom vocabulary model is used, and the significance of the improvement is evaluated when using DHH speech.
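For reference, the Word Error Rate cited above is conventionally computed from the minimum number of word substitutions S, deletions D, and insertions I needed to align an ASR hypothesis with a reference transcript of N words:

\[
\mathrm{WER} = \frac{S + D + I}{N}
\]

For example, a hypothesis that differs from a 20-word reference by one substitution and one insertion has a WER of 2/20 = 10%; note that WER can exceed 100% when insertions are numerous.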
Power wheelchairs (PW) are an example of an assistive technology in that they are used to increase, maintain, or improve the functional capabilities of persons with disabilities [6]. As seen in [5] [3], commercially available products do not provide any assistance beyond enhanced mobility. Furthermore, existing PW research fails to comprehensively document an individual's challenges, such as navigating through narrow passages or fixing a broken wheelchair; instead, it focuses mostly on novel interaction methods such as BCI and head and gaze control [9] [15]. In this paper, we explore these individual needs and show that PWs have the potential to become smart wheelchairs at an affordable price. Our research follows the double diamond (DD) process [2] and relies on the participatory design (PD) methodology [7], which addresses all stakeholders involved. Namely, we consider individuals with wheelchairs, the assistive technology research community, and the PW industry. For further insight we also contacted medical doctors, healthcare professionals, and non-profit organizations. We spent time getting to know these communities through interviews, surveys, demonstrations, and continuous user input, aligning our work with PD tools. We found that individuals using wheelchairs desire safety, accessibility, and a durable design. Guided by these results, we designed a proof-of-concept (POC) system called the Affordable Smart Wheelchair (ASW) for indoor use. This kit implements full autonomy in the form of indoor navigation from one room to another and to predetermined docking locations through voice control. It also has semi-autonomous functions in the form of manual joystick control augmented with real-time collision avoidance and staircase detection.
The sense of sight takes a dominating role in learning mathematical graphs. Most visually impaired students drop out of mathematics because necessary content is inaccessible. Sonification and auditory graphs have been the primary methods of representing data through sound. However, the representation of mathematical elements of graphs is still unexplored. The experiments in this paper investigate optimal methods for representing mathematical elements of graphs with sound. The results indicate that the methods of design in this study are effective for describing mathematical elements of graphs, such as axes, quadrants and differentiability. These findings can help visually impaired learners to be more independent, and also facilitate further studies on assistive technology.
The availability of mHealth technologies has increased exponentially, particularly fitness and calorie tracking applications. Recent studies and anecdotal evidence have highlighted the potential of these technologies to serve as tools of unhealthy eating behavior due to their focus on self-monitoring and calorie counting. The current research investigates the potential of using social-orienting features of technology, specifically bandwagon and identity cues, to incentivize food-based nutrition (FBN) rather than a calorie-only approach. For this purpose, a 2 x 2 mixed factorial online experiment was conducted with bandwagon cue as a within-subject factor and identity cue as a between-subject factor. Results reveal that 67.6% of participants selected high bandwagon cue meals, regardless of their nutritional value. Bandwagon perception was the only significant predictor of meal selection, indicating that an increase of one unit improved the odds of an individual choosing a high bandwagon meal by 69%.
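As a reading aid (and an assumption on my part, since the abstract reports only odds), the 69% figure matches the standard odds-ratio interpretation of a logistic regression coefficient \(\beta\) for bandwagon perception:

\[
\frac{\mathrm{odds}(x + 1)}{\mathrm{odds}(x)} = e^{\beta} \approx 1.69
\]

i.e., each one-unit increase in bandwagon perception multiplies the odds of choosing a high bandwagon meal by roughly 1.69.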
This paper describes a research project that aims to examine an unexplored design space: compassion cultivation for wellness. The project seeks to understand the components of compassionate interactions between new mothers and their proximate and primary supporters - partners - to inform the design of compassion-cultivating interventions for maternal wellness. A discussion of research activities undertaken thus far and preliminary results are presented.
Parents receive conflicting information on the benefits and burdens of children's technology use, especially novel technologies such as digital home assistants. To understand parents' views, we analyzed relevant Amazon Echo device product reviews posted to Amazon.com, and deployed a User Benefit and Burden Survey to 131 parents on Amazon Mechanical Turk to explore their perspectives of Amazon Echo digital home assistants. Our work explores parents' perceptions of the devices with regards to their children and families, in terms of attributes such as benefits and burdens. This study contributes an empirical, family-centered understanding of and design opportunities for whole home personal assistants in support of a diversity of families.
The rapid adoption of smart home devices has brought with it a widespread lack of understanding amongst users over where their devices send data. Smart home ecosystems represent complex additions to existing wicked problems around network privacy and security in the home. This work presents the Aretha project, a device which combines the functionality of a firewall with the position of voice assistants as the hub of the smart home, and the sophistication of modern conversational voice interfaces. The result is a device which can engage users in conversation about network privacy and security, allowing for the forming and development of complex preferences that Aretha is then able to act upon.
Generative design tools together with large screen displays provide designers an opportunity to explore large numbers of design alternatives. There are numerous design studies on exploring multiple simultaneous designs, but few present interface solutions and system features for such exploration. To the best of my knowledge, no study probes exploring a large design space with multiple simultaneous states. The premise of my research is that, if designers can work directly with large numbers of designs with new representations and tools as part of the design workflow, we should expect new patterns and strategies to emerge and change the design process. Such task environments may present novel actions, task sequences, methods and techniques. What are the new actions and techniques that would enable working seamlessly with multiple designs? My research aims to answer this (and similar) questions and, more specifically, to uncover how designers augment their work through spatial structuring of the task environment to reduce the cognitive cost of working with multiple simultaneous designs on a large work-surface. I conducted a lab experiment with nine designers. The results suggest design features for new front-end gallery interfaces for managing a large set of design variations while enabling simultaneous editing.
Exergames have proven effective in helping older adults improve their physical and mental capabilities. However, older adults may fail to adapt to exergames due to the complexity of game tasks and interfaces. In this work, we show that familiarity can improve older adults' adaptation to exergames. The results of our first study indicate that older adults with a higher level of familiarity with exergames exhibit higher levels of motivation and ability. To maximize the effectiveness of exergames, it is helpful to provide older adults with exergames with which they are more familiar. To evaluate the familiarity level of exergames, we propose a novel familiarity model with five factors, namely prior experience, positive emotion, repeated time, level of processing, and retention rate. Results from our second study show that the five identified factors have significant positive correlations with familiarity and that there is a high positive correlation between familiarity levels and participants' satisfaction with the exergames.
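The abstract does not report a scoring formula or factor weights, but a hypothetical operationalization of the five-factor model as a weighted combination of normalized factor ratings could look like this sketch:

```python
# Hypothetical familiarity scoring; equal weights are placeholders, not study results.
FACTORS = ["prior_experience", "positive_emotion", "repeated_time",
           "level_of_processing", "retention_rate"]

def familiarity_score(ratings, weights=None):
    """Combine five factor ratings (each normalized to 0..1) into one score."""
    if weights is None:
        weights = {f: 1.0 / len(FACTORS) for f in FACTORS}
    return sum(weights[f] * ratings[f] for f in FACTORS)

print(familiarity_score({"prior_experience": 0.8, "positive_emotion": 0.6,
                         "repeated_time": 0.7, "level_of_processing": 0.5,
                         "retention_rate": 0.9}))  # ~0.7
```

In practice, the weights would presumably be fitted against the study's measured familiarity ratings rather than set by hand.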
We introduce a novel typing technique for special symbols in keyboard-only environments. The technique, called GPK (Gliding on Physical Keyboard), consists of two steps for entering special symbols: first, the user draws the special symbol on the keyboard by gliding over keys; second, the user selects the desired symbol from the candidates predicted by GPK. Users can also switch from this mode to normal typing mode. We also present an application of this input technique based on web browsers. A user study with nine participants who are familiar with keyboard input showed the input efficiency of GPK, and we compared its typing efficiency with other special symbol typing methods. This method could be deployed in office environments where users have desktop computers with only a keyboard. It could also inspire future work integrating the method into word processors, document preparation systems and web environments.
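The abstract does not specify the prediction mechanism, so the following is a sketch only: one plausible way to rank candidate symbols is to treat a glide as a path over key centers and compare it against stored per-symbol template paths. The key layout, template strings, and symbol names below are all illustrative assumptions.

```python
# Hypothetical glide-to-symbol matcher; layout and templates are illustrative.
KEY_POS = {"q": (0.0, 0), "w": (1.0, 0), "e": (2.0, 0),
           "a": (0.3, 1), "s": (1.3, 1), "d": (2.3, 1),
           "z": (0.6, 2), "x": (1.6, 2)}

TEMPLATES = {"alpha": "qsed", "arrow_right": "asd", "arrow_down": "wsx"}

def resample(keys, n=16):
    """Turn a key sequence into n points sampled along the visited key centers."""
    pts = [KEY_POS[k] for k in keys]
    return [pts[int(i * (len(pts) - 1) / (n - 1))] for i in range(n)]

def glide_distance(a, b):
    """Sum of pointwise distances between two resampled key paths."""
    return sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(resample(a), resample(b)))

def predict(glide, k=3):
    """Return the k symbols whose templates lie closest to the glide."""
    return sorted(TEMPLATES, key=lambda s: glide_distance(glide, TEMPLATES[s]))[:k]

print(predict("qsd"))  # ranked candidate symbols for the glide
```

A production system would likely use a more robust path matcher (e.g., dynamic time warping) and a language-model-style prior over symbols, but the two-step structure - glide, then pick from ranked candidates - is the same.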
With recent interest in shape-changing interfaces, material-driven design, wearable technologies, and soft robotics, digital fabrication of soft actuatable material is increasingly in demand. Much of this research focuses on elastomers or non-stretchy air bladders. Computationally-controlled machine knitting offers an alternative fabrication technology which can rapidly produce soft textile objects that have a very different character: breathable, lightweight, and pleasant to the touch. These machines are well established and optimized for the mass production of garments, but have received much less attention as general-purpose fabrication devices compared to other digital fabrication techniques such as CNC machining or 3D printing. In this work, we explore a series of design strategies for machine-knitting actuated soft objects by integrating tendons with shaping and anisotropic texture design.
Often, online ads are disruptive and annoying. As a consequence, ad blockers are used to prevent ads from appearing on a website. However, web service providers lose more than 35 billion dollars per year because of this development. As an alternative, we investigate the user enjoyment and the advertising effectiveness of playfully deactivating online ads. This video showcase illustrates the research method and the most interesting results of our previous research. Here, we assessed the perception of eight game concepts allowing users to playfully deactivate ads and implemented three well-perceived ones. These were found to be more enjoyable than deactivating ads without game elements, with one game concept being even preferred over using an ad blocker. We also found positive effects on ad effectiveness as compared to the baseline.
We present an interactive Virtual Reality (VR) experience that uses biometric information for reflection and relaxation. We monitor brain activity in real time using a modified version of the Muse EEG and track heart rate (HR) and electrodermal activity (EDA) using an Empatica E4 wristband. We use this data to procedurally generate 3D creatures and change the lighting of the environment to reflect the internal state of the viewer in a set of visuals depicting an underwater audiovisual composition. These 3D creatures are created to unconsciously influence the body signals of the observer via subtle pulses of light, movement and sound. We aim to decrease heart rate and respiration through subtle, almost imperceptible light flickering, sound pulsations and slow movements of these creatures to increase relaxation.
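One hedged reading of such a biosignal-to-scene mapping, as a sketch only (the channel assignments, ranges, and parameter names below are assumptions, not the authors' implementation):

```python
# Illustrative mapping from normalized biosignals to scene parameters.
def norm(x, lo, hi):
    """Clamp and rescale x from [lo, hi] to [0, 1]."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def scene_params(hr_bpm, eda_us, eeg_alpha):
    """Derive example scene parameters from heart rate, EDA, and EEG alpha power."""
    arousal = 0.5 * norm(hr_bpm, 50, 110) + 0.5 * norm(eda_us, 0.1, 10.0)
    calm = norm(eeg_alpha, 0.0, 1.0)
    return {
        "light_intensity": 0.3 + 0.7 * (1.0 - arousal),  # calmer viewer -> brighter scene
        "creature_speed": 0.2 + 0.8 * arousal,           # higher arousal -> faster motion
        "pulse_rate_hz": 0.1 + 0.15 * (1.0 - calm),      # slow pulses intended to calm
    }

print(scene_params(hr_bpm=72, eda_us=2.5, eeg_alpha=0.6))
```

The interesting design property is the closed loop: scene parameters derived from the body are chosen to nudge the body back toward relaxation, which in turn changes the scene.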
In this video artwork, the author looks at the interaction between neurodiversity and media design, focusing on his acoustic condition of hyperacusis as well as his neurodivergent condition of Asperger's. The hypersensorial acoustic disorder brings many inner thoughts, and the author creates poems as an output, interrelating art and technology in the cognitive process and underlining a cross-connection between data and emotions.
Project ComunicArte is a virtual reality videogame for training public speaking skills. It is built as an environment where the speaker confronts a virtual audience that reacts in real time to the speaker's features, such as voice, gestures and biometric parameters (heart rate or skin conductivity, among others). The novelty of this videogame is its focus on the audience, since in real life the only feedback we receive when we speak in public is that of our listeners; by their reactions we can determine whether our communication is being effective. For that purpose, we included in the game an agent-based virtual audience able to provide feedback to speakers in real time, so that they can react and adapt their speech accordingly.
With this video, we showcase the possibilities of the Internet of Things and machine learning to support improvisation in older people. Available smart products for older people tend to be developed with narrowly predefined use scenarios, based on stereotypes of the elderly as passive, immobile, and technologically incompetent. This type of product may restrict older people's existing capabilities of resourcefulness and autonomy [1]. As an alternative approach, we created Connected Resources, a family of combinable sensors and actuators for older people that encourages them to negotiate and situate use according to their personal and changing circumstances with a high level of freedom. The objects learn older people's unique ways of using these artifacts and share them through an online platform to encourage the development of new strategies. In this way, Connected Resources celebrates older people's creativity, instead of providing normative solutions or enforcing compliance with certain behaviors.
It is increasingly hard for adults and children alike to be attentive given the growing amounts of information and distractions surrounding us. We have developed AttentivU: a device, in the socially acceptable form factor of a pair of glasses, that a person can put on in moments when they want or need to be attentive. The AttentivU glasses use electroencephalography (EEG) and electrooculography (EOG) sensors to measure the attention of a person in real time and provide either audio or haptic feedback when their attention is low, thereby nudging them to become engaged again. We have tested this device in workplace and classroom settings with over 80 subjects. We have performed experiments with people studying or working by themselves, viewing online lectures, and listening to classroom lectures. The results show that our device makes a person more attentive and produces improved learning and work performance outcomes.
TrussFormer is an integrated end-to-end system that allows users to 3D print large-scale kinetic structures, i.e., structures that involve motion and deal with inertial forces. TrussFormer builds on TrussFab, from which it inherits the ability to create static large-scale truss structures from 3D printed connectors and PET bottles. TrussFormer adds movement to these structures by placing linear actuators into them: either manually, wrapped in reusable components called assets, or by demonstrating the intended movement. TrussFormer verifies that the resulting structure is mechanically sound and will withstand the dynamic forces resulting from the motion. To fabricate the design, TrussFormer generates the underlying hinge system that can be printed on standard desktop 3D printers. We demonstrate TrussFormer with several example objects, including a six-legged walking robot and a 4 m tall animatronic dinosaur with 5 degrees of freedom.
Childhood obesity is one of the greatest public health challenges of the 21st century, although it could easily be prevented by regular physical activity. Exergames have been applauded for their potential to counteract this tendency in a playful and motivating manner. However, current solutions often lack a user-centered, interdisciplinary design approach covering the physical and virtual design levels. Consequently, the motivational factors, attractiveness and effectiveness of these fitness games remain limited. To contribute to this topic, we - a team of sports scientists and game designers - developed the adaptive exergame Plunder Planet (Fig. 1) with and for children and young adolescents [1]. It can be played as a single-player or cooperative multiplayer game [2] with two different motion-based controllers that require either haptic or gesture-based input movements (Fig. 2) and trigger different (social) gameplay experiences. Based on the player's heart rate and in-game performance, the game's difficulty and complexity are continuously adjusted to provide a maximally attractive and effective experience.
Despite emerging trends of notifying people through ubiquitous technologies such as ambient light, vibrotactile, or auditory cues, none of these channels is truly ubiquitous, and all have proven easy to miss or ignore. In this work, we propose Slappyfications, a novel way of sending unmissable, embodied, and ubiquitous notifications through a palm-based interface. Our prototype enables users to send three types of Slappyfications: poke, slap, and the STEAM-HAMMER. Through a Wizard-of-Oz study, we show the applicability of our system in real-world scenarios. The results reveal a promising trend, as none of the participants missed a single Slappyfication.
Research indicates that personal adoption of emerging ubicomp technologies is notoriously hampered by a variety of critical issues, including trust, privacy, and security. Issues such as these cannot be studied and understood by evaluating computer systems in isolation, but rather by taking a 'big picture' approach and examining their synergy with the broader social context. Traditional low-fidelity prototyping methods, such as interface mockups, are however poorly equipped to convey such broader settings. Video-based scenarios, on the other hand, are uniquely qualified to portray rich socio-technical ecosystems. By creating a set of provocative video scenarios that contextualize and provide a backdrop for prospective technologies, we seek to draw attention to the potentially important role that worldbuilding strategies might play in the future of low-fidelity prototyping.
Fireworks are enjoyed throughout the world, but are primarily a visual experience. To make fireworks more inclusive for Blind and Low-Vision (BLV) persons, we have developed a large-scale interactive tactile display that produces tactile fireworks. Fast dynamic tactile effects are created at high spatial resolution, using directable nozzles that spray water jets onto the rear of a flexible screen. The screen also features back-projected visual content and supports touch interaction.
A BLV focus group provided input during the development process, and a user study with BLV users showed that Feeling Fireworks is an enjoyable and meaningful experience. Quotes from blind users include "First time to get the feeling of what's happening in the sky. Fountain - awesome, first time had a feeling of that is what is a fountain.", "My mom always told me about fireworks but now I understand it." and "Now I know why people like fireworks."
Beyond the Feeling Fireworks application, this is a novel approach to scalable tactile displays, with potential for broader use.
Nature hosts myriad plant organisms, many of them carrying unique sensing and expression abilities. Plants can sense the environment and other living entities, and can regenerate, actuate, or grow in response. Our interaction mechanisms and communication channels with such organisms are subtle, unlike our interaction with digital devices. We propose a new convergent view of interaction design in nature: merging and powering our electronic functionalities with the existing biological functions of plants.
Cyborg Botany is a design exploration of deep technological integration within plants. Each desired synthetic function is grown, injected, or carefully placed in conjunction with a plant's natural functions. With a nanowire grown inside the xylem of a plant [1, 2, 3, 4], we demonstrate its use as a touch sensor, motion sensor, antenna, and more. We also demonstrate software through which a user clicks on a plant's leaves to individually control their movement [6], and explore the use of plants as a display [5]. Our goal is to harness a plant's own sensing and expressive abilities for our interaction devices. Merging synthetic circuitry with a plant's own physiology could pave the way to making these lifeforms responsive to our interactions and to their ubiquitous, sustainable deployment.
We present CATS, a digital painting system that synthesizes textures from live video in real-time, short-cutting the typical brush- and texture-gathering workflow. Through the use of boundary-aware texture synthesis, CATS produces strokes that are non-repeating and blend smoothly with each other. This allows CATS to produce paintings that would be difficult to create with traditional art supplies or existing software. We evaluated the effectiveness of CATS by asking artists to integrate the tool into their creative practice for two weeks; their paintings and feedback demonstrate that CATS is an expressive tool which can be used to create richly textured paintings.
Mobeybou is a digital manipulative (DM) that uses physical blocks to interact with digital content. It aims to create an environment that promotes the development of language and narrative competences, as well as digital literacy, among preschool and primary school children. It offers a variety of characters, objects, and landscapes from cultures around the world and can be used to create multicultural narratives. An interactive app developed for each country provides additional cultural and geographical information about each represented culture.
In this video, we showcased "A-line", a 4D printing system for designing and fabricating morphing three-dimensional shapes out of simple linear elements. A-line integrates a method of bending angle control in up to eight directions for one printed line segment, using a single type of thermoplastic material. A software platform to support the design, simulation and tool path generation is developed to support the design and manufacturing of various A-line structures. The video showcases the design space of A-line, including the unique properties of line sculpting, the suitability for compliant mechanisms and the ability to travel through narrow spaces and self-deploy or self-lock on site.
In this video, we showcased "ElectroDermis", a fabrication approach that simplifies the creation of highly-functional and stretchable wearable electronics that are conformal and fully untethered by discretizing rigid circuit boards into individual components. These individual components are wired together using stretchable electrical wiring and assembled on a spandex blend fabric, to provide high functionality in a robust form-factor that is reusable. We describe a series of example applications that illustrate the feasibility and utility of our system.
The Workgroup on Interactive Systems in Healthcare (WISH) brings together industry and academic researchers in human-computer interaction, biomedical informatics, health informatics, mobile health, and other disciplines to develop a cross-disciplinary research agenda that will drive future innovations in health care. We propose a symposium at CHI 2019 to host WISH, with the goal of providing a common space to share and discuss methods, study designs, and dissemination in a collaborative fashion. The symposium also aims to actively provide mentoring opportunities to junior and new health technology researchers, from undergraduates to mid-career researchers, who want to focus on interactive systems in healthcare. This will be the eighth WISH meeting in a series of successful workshops bringing together different research communities and practitioners around the challenges of designing, implementing, and evaluating interactive health technologies.
The HCI Across Borders (HCIxB) community has been growing in recent years, thanks in particular to the Development Consortium at CHI 2016 and the HCIxB Symposia at CHI 2017 and 2018. This year, we propose an HCIxB symposium that continues to build the scholarship potential of early-career HCIxB researchers, strengthening ties between more and less experienced members of the community. We especially invite scholarship with a focus on intersections, examining and/or addressing multiple forms of marginalization (e.g., race, gender, and class, among others).
At CHI 2018, a workshop on developing a community of practice to support global HCI education was held, building on six years of research and collaboration in HCI education. Many themes emerged from the workshop activities and discussions; two stood out in particular: creating channels for discussions related to HCI education, and providing a platform for sharing HCI curricula and teaching experiences. To that end, we are organizing a CHI 2019 symposium dedicated exclusively to HCI education: EduCHI 2019: Global Perspectives on HCI Education. The symposium will focus on the canons of HCI education in 2019 and beyond, offering a venue for HCI educators across disciplinary and geographical borders to discuss, dissect, and debate HCI teaching and learning. Through keynote addresses, paper presentations, and a panel discussion, we aim to discuss current and future HCI education trends, curricula, pedagogies, teaching practices, and diverse and inclusive HCI education. Post-symposium initiatives will aim to document and publish the discussions from the symposium.
The first few years after completing a PhD can be challenging to navigate. Job hunting, interviewing, navigating new contexts such as a junior academic position, applying for funding as a first-time project investigator, learning to adapt to the culture of an industry-based workplace, supervising graduate students or full-time employees - these are just a few of the scenarios recent PhD graduates find themselves in. Within HCI, one may encounter more discipline-specific challenges, such as keeping up with the CHI publication cycles while taking on new administrative duties. The CHI community, however, strives to be collectively supportive and inclusive of researchers at all stages of their careers - this is even more important as many of our design approaches are rooted in empathy for and empowerment of participants. By more actively supporting each other as researchers in our career paths, we can better grow as a community and reflect that growth back into our collective body of practice. The Early Career Development Symposium has been proposed (and held yearly since 2016) to provide a more formal mentoring venue that reflects our aims as a community to more meaningfully support each other.
With psychiatric conditions like depression now the leading global cause of disability, the need for innovative solutions is apparent. Mental health care delivered through technology (eMentalHealth) offers a promising route to personalized care and has galvanized interest worldwide. However, to ensure that eMentalHealth is scalable and sustainable, service delivery and health and social care policies need to be integrated into the design of technology interventions. This will require new forms of interdisciplinary collaboration that we hope to foster in this symposium. Thus, in this fourth in our series of Symposia on Computing and Mental Health, the focus will be on the intersection of the communities innovating in this space: patients, designers, data scientists, clinicians, researchers, computer scientists, developers, and entrepreneurs, guided by core medical ethical principles including respect for persons, beneficence, and justice. Our aim is thus to enable the vision of better mental health through ethical innovation powered by the right collaborations.
This symposium showcases the latest work from Asia on interactive systems and user interfaces that address under-explored problems and demonstrate unique approaches. In addition to circulating ideas and sharing a vision of future research in human-computer interaction, the symposium aims to foster social networks among academics (researchers and students) and practitioners, and to create a fresh research community in the Asian region.
Commercial drones have recently been developed to encompass use cases beyond aerial photography and videography. Researchers have explored wider applications of drones, including drones as social companions, as key components in virtual environments, as assistive tools for people with disabilities, and even as sport companions. However, the uptake of research in Human-Drone Interaction (HDI) has also brought forth a plethora of challenges unique to this platform. While drones were initially considered flying robots, recent work has shown that traditional Human-Robot Interaction (HRI) methodologies cannot simply be applied to HDI. For example, how do we deal with the privacy and safety concerns associated with drones in public space? What is the appropriate methodology to evaluate HDI applications? How do the size, altitude, and speed of drones influence how they are perceived? The aim of this workshop is to bring together researchers and practitioners from both academia and industry to identify: 1) novel HDI applications, and 2) key challenges in this area to drive research in the coming decade. The long-term goal is to create a strong interdisciplinary research community that includes researchers and practitioners from HCI, HRI, Ubiquitous Computing, Interaction Techniques, User Privacy, and Design.
The ArabHCI initiative was inaugurated at a CHI'17 SIG meeting that brought together 45+ Arab and non-Arab HCI researchers and practitioners who conduct, or are interested in, HCI research within Arab communities. The meeting started an ongoing dialogue recognizing that HCI is still in its infancy in the Arab world, and explored challenges and opportunities for shaping the future of HCI in the region. It was followed by three successful meetings at SIGCHI-sponsored events that included general discussions about the state of HCI research in Arab countries and a thematic discussion on exploring participatory design practices with Arab communities. In this workshop, we build on the momentum generated by our previous meetings and attempt to draw a roadmap for HCI research and practice in the Arab world. Our goal is to bring together researchers and practitioners to discuss case studies from their own work, share experiences and lessons learned, and envision the future of the field in this area. We plan to share the results of our discussions and research agenda with the wider CHI community through various social and scholarly channels.
Many healthcare systems around the world are fragmented, complex, and low-quality, leaving patients and caregivers with no choice but to engage in "infrastructuring work" to make them function. However, the work patients do to construct functioning health service systems often remains invisible, with very few resources and technologies available to assist them. This workshop aims to bring together researchers, health practitioners, and patients to examine, discuss, and brainstorm ways to re-envision our healthcare service systems from the perspective of patients and caregivers. We aim to unpack the types of work that patients and caregivers do to reconfigure, reconstruct, and adapt the healthcare infrastructure, and to brainstorm design solutions that can provide them with better infrastructuring assistance.
Human-Computer Interaction (HCI) is experiencing explosive growth in both Chinese industry and academia. We propose an international workshop to coordinate and unite these ongoing efforts and to facilitate collaboration between the local community and the global HCI community. This extended abstract describes the background, goals, organizers, themes, and plans of the proposed workshop.
Today, new mothers are experiencing parenthood differently. Digital resources can provide a wealth of information, present opportunities for socialising, and even assist in tracking a baby's development. However, women are often juggling the role of motherhood with other commitments, such as work. The aim of this workshop is to understand the digital support needs and practices during parenthood from the perspective of employed mothers. We are interested in exploring the ways that women utilise the technologies designed to support mothers, and specifically the importance of work-life balance and the various roles that mothers play. There is a need to better understand and identify which technologies are being used to support working women through their motherhood journey, and to ensure a healthy transition that supports women's changing identities.
Supporting creativity has been considered one of the grand challenges of Human-Computer Interaction. Creativity lies within people, and crowdsourcing is a powerful approach for tapping into the collective insights of diverse crowds; crowdsourcing thus has great potential for supporting creativity. In this full-day workshop, we brainstorm new crowdsourcing systems and concepts for supporting creativity, bringing together researchers and industry professionals. The workshop consists of discussions of ideas contributed by the participants and hands-on brainstorming sessions in groups to ideate new crowd-powered systems that support creativity. We center the workshop on two themes: supporting the individual and facilitating creativity in groups.
Despite improvements in the accessibility of digital technologies and growing numbers of tools designed specifically for older adults, adoption of such tools remains low for this demographic. This workshop aims to explore the contextual factors that contribute to reduced uptake among older adults in order to understand how to design digital technologies that will be appealing to and work for them, fitting with recent calls for more holistic approaches to designing for older adults. Going beyond standard accessibility considerations, and aiming to inform design of technologies for the general population rather than the design of senior-friendly variants of such tools, we will generate a set of principles for developing tools that older adults can and will use.
Automated driving is one of the most discussed disruptive technologies of this decade. It promises to increase drivers' safety and comfort, improve traffic flow, and lower fuel consumption. This has a significant impact on our everyday life and mobility behavior. Beyond the passengers of the vehicle, it also impacts others, for example by lowering the barriers to visit distant relatives. In line with the CHI2019 conference theme, our aim is to weave the threads of vehicle automation by gathering people from different disciplines, cultures, sectors, communities, and backgrounds (designers, researchers, and practitioners) in one community to look into concrete future scenarios of driving automation and its impact on HCI research and practice. Using design fiction, we will look into the future and use this fiction to guide discussions on how automated driving can be made a technology that works for people and society.
In recent years, AI systems have become both more powerful and increasingly promising for integration in a variety of application areas. Attention has also been called to the social challenges these systems bring, particularly in how they might fail or even actively disadvantage marginalised social groups, or how their opacity might make them difficult to oversee and challenge. In the context of these and other challenges, the roles of humans working in tandem with these systems will be important, yet the HCI community has been only a quiet voice in these debates to date. This workshop aims to catalyse and crystallise an agenda around HCI's engagement with AI systems. Topics of interest include explainable and explorable AI; documentation and review; integrating artificial and human intelligence; collaborative decision making; AI/ML in HCI Design; diverse human roles and relationships in AI systems; and critical views of AI.
Independent navigation in unfamiliar and complex environments is a major challenge for blind people. This challenge motivates a multi-disciplinary effort in the CHI community and related disciplines such as accessible computing, cognitive science, computer vision, and ubiquitous computing, aimed at developing assistive technologies to support the orientation and mobility of blind people. This workshop intends to bring these communities together to raise awareness of recent advances in blind navigation assistive technologies, benefit from diverse perspectives and expertise, discuss open research challenges, and explore avenues for multi-disciplinary collaboration. Interaction is fostered through a panel on Open Challenges and Avenues for Interdisciplinary Collaboration, Minute-Madness presentations, and a Hands-On Session where workshop participants can hack (design or prototype) new solutions to open research challenges. An expected outcome is the emergence of new collaborations and research directions that can result in novel assistive technologies to support independent blind navigation.
Current Machine Learning (ML) models can make predictions that are as good as or better than those made by people. The rapid adoption of this technology puts it at the forefront of systems that impact the lives of many, yet the consequences of this adoption are not fully understood. Therefore, work at the intersection of people's needs and ML systems is more relevant than ever. This area of work, dubbed Human-Centered Machine Learning (HCML), re-thinks ML research and systems in terms of human goals. HCML gathers an interdisciplinary group of HCI and ML practitioners, each bringing their unique, yet related, perspectives. This one-day workshop is a successor to Gillies et al.'s 2016 CHI workshop and focuses on recent advancements and emerging areas in HCML. We aim to discuss different perspectives on these areas and articulate a coordinated research agenda for the 21st century.
Advances in robotics offer exciting opportunities for robots to become socially collaborative technologies. But are we ready, and are robots capable of enabling a good level of interaction and user experience? How can the CHI community work with the Human-Robot Interaction (HRI) community to share best practices and methods, and continue to advance research that crosses methodological and cultural boundaries between HRI and HCI? This workshop will bring together key researchers working in and across HCI and HRI to share existing challenges and opportunities for advancing the field of Socially Collaborative Robotics. We will share our recent research experiences and practices in order to build capacity at the crossings between HCI and HRI.
This workshop provides a venue within CHI for research through design (RtD) practitioners to present their work and discuss how, with whom, and why it is used. Building on the success of prior RtD and design research workshops at CHI, this workshop will focus on how RtD artifacts are used, with the goal of connecting diverse works with broader methodologies in HCI and Design.
The aim of this one-day workshop is to provide a forum for HCI researchers to discuss a wide range of issues at the intersection of philosophy and HCI. The participants will reflect on how philosophy influenced the development of HCI in the past, how philosophical insights are being utilized in current HCI research, and how philosophy can help HCI identify and address the emerging challenges facing the field. The main objectives of the workshop are to bring together HCI researchers interested in philosophy and produce an agenda for future research bringing HCI and philosophy closer together.
With the rise of big data, there has been an increasing need to understand who is working in data science and how they are doing their work. HCI and CSCW researchers have begun to examine these questions. In this workshop, we invite researchers to share their observations, experiences, hypotheses, and insights, in the hopes of developing a taxonomy of work practices and open issues in the behavioral and social study of data science and data science workers.
Craft practices such as needlework, ceramics, and woodworking have long informed and broadened the scope of HCI research. Whether through sewable microcontrollers or programs of small-scale production, they have helped widen the range of people and work recognised as technological and innovative. However, despite this promise, few organisational resources have successfully drawn together the disparate threads of scholarship and practice attending to HCI craft. In this workshop, we propose to gather a globally distributed group of craft contributors whose work reflects crucial but under-valued HCI positions, practices, and pedagogies. Through historically and politically engaged work, we seek to build community across boundaries and meaningfully broaden what has constituted innovation in HCI to date.
Traditionally, many consumer-focused technologies have been designed to maximize user engagement with their products and services. More recently, many technology companies have begun to introduce digital wellbeing features, such as for managing time spent and for encouraging breaks in use. These are in the context of, and likely in response to, renewed concerns in the media about technology dependency and even addiction. The promotion of technology abstinence is also increasingly widespread, e.g., via digital detoxes. Given that digital technologies are an important and valuable feature of many people's lives, digital wellbeing features are arguably preferable to abstinence. However, how these are defined and designed is something that needs to be explored further. In this one-day workshop we welcome both industry and academic participants to discuss what digital wellbeing means, who is responsible for it, and whether and how we should design for it going forward.
There is widespread societal concern regarding the reduction in the amount of time that we all spend playing outdoors. Outdoor play can be important for our social and physical well-being and moreover helps us to connect to space, place, and environment. Of course, the CHI community continues to explore play across many contexts; however, specifically designing for outdoor play remains underexplored. This workshop aims to bring together those who are interested in technological, social, and design aspects of outdoor play for all ages. We will use participants' insights, energies, and expertise to explore the challenges and focus on how we can build a community to share innovative designs, generate knowledge, and produce actionable research in this context.
Everyday mobile usage of AR and VR Head-Mounted Displays (HMDs) is becoming a feasible consumer reality. The current research agenda for HMDs has a strong focus on technological impediments (e.g., latency, field of view, locomotion, tracking, input) as well as perceptual aspects (e.g., distance compression, vergence-accommodation). However, this focus ignores significant challenges in the usage and acceptability of HMDs in shared, social, and public spaces. This workshop will explore these key challenges of HMD usage in shared, social contexts: methods for tackling the virtual isolation of the VR/AR user and the exclusion of collocated others; the design of shared experiences in shared spaces; and the ethical implications of appropriating the environment and those within it.
The use of speech as an interaction modality has grown considerably through the integration of Intelligent Personal Assistants (IPAs, e.g., Siri, Google Assistant) into smartphones and voice-based devices (e.g., Amazon Echo). However, there remain significant gaps in using theoretical frameworks to understand user behaviours and choices and how they may be applied to specific speech interface interactions. This part-day multidisciplinary workshop aims to critically map out and evaluate theoretical frameworks and methodological approaches across a number of disciplines and establish directions for new paradigms in understanding speech interface user behaviour. In doing so, we will bring together participants from HCI and other speech-related domains to establish a cohesive, diverse, and collaborative community of researchers from academia and industry with an interest in exploring theoretical and methodological issues in the field.
Immersive Analytics is concerned with the design and evaluation of interactive next-generation interfaces that support human understanding, data analysis, and decision making. New immersive technologies present many opportunities for enhancing humans' experiences with data interaction, but also present many challenges, a subset of which are specific to the analytics domain. This workshop is centered on a set of group prototyping sessions aimed at identifying new approaches to existing design challenges. In addition to giving perspective on the opportunities and difficulties faced by future designers, these exercises will explore new prototyping methods and tools for the design of interactive data-centric interfaces. This part-day workshop aims to build new ties between the existing immersive analytics community and researchers across the many disciplines of the CHI community.
While new media technologies hold the potential to serve journalism's dual goals of informing and engaging the public, these technologies also challenge the journalistic norms of accuracy, impartiality and transparency. The key question in this workshop is: How can HCI support accurate, impartial and transparent journalism? This question is ever more timely as the need for accurate and credible journalism is growing amid the proliferation of disinformation and opinion manipulation. In this workshop, we will identify challenges and solutions in the design of user interfaces, user experiences and production processes in journalism. We bring together researchers and practitioners designing, deploying and studying new technologies in journalism. The goal of the workshop is to harness the potential of HCI for supporting accurate, impartial and transparent journalism.
With the rise of live streaming and esports in recent years, it is increasingly important for the HCI community to understand this phenomenon. The organizers encourage people to submit papers on novel interfaces they wish to explore in a live-streaming context. In this workshop, participants will discuss different facets of the live-streaming experience and gain a greater understanding of the culture that exists on streaming platforms like Twitch. They will then take part in a design exercise, forming groups and iterating on a design together. Discussion and design topics will include encouraging audience participation, moderating toxicity, and more. Participants will leave the workshop with ideas for designing games and other experiences that take the live-streaming ecosystem into consideration.
In recent years, responsible innovation has gained significant traction and now adorns a myriad of research platforms, education programs, and policy frameworks. In this workshop, we invite HCI researchers to discuss the relations between the CHI community and responsible innovation, and to build provocations and principles for and with HCI researchers who are, or wish to become, responsible innovators. We will do this by asking attendees to think about the social, environmental, and economic impacts of ICT and HCI, and to explore how research innovation frameworks speak to responsible HCI innovation. Through the workshop we will examine five questions to develop a set of provocations and principles that encourage HCI and computer science researchers, educators, and innovators to reflect on the impact of their research and innovation.
Automated systems and their interfaces are increasingly merging with our ambient environment, leading to a heightened impact on our everyday leisure and work experiences. While automation systems have until recently been a realm for highly specialized tasks and trained experts, more and more non-expert users now encounter automated systems in their everyday life. The deployment of these systems fundamentally changes practices and experiences in various domains. The overall goal of this workshop is to investigate the requirements and design criteria for automation that is experienced in everyday situations. In particular, we will strive to come up with a set of principles for three key areas of everyday automation experience: intelligibility, experienced control, and capturing automation experience. This way, the workshop provides a first forum for knowledge exchange and networking across usage domains and contexts.
We propose a workshop on the rapidly emerging topic of Computational Modeling in HCI, addressing the challenges posed by the increasing complexity of the human behaviors we are able to track and collect today. The goal of this workshop is to reconcile two seemingly competing approaches to computational modeling: theoretical modeling, which seeks to explain behaviors, vs. algorithmic modeling, which seeks to predict behaviors. The workshop will address: 1) the convergence of the two approaches within HCI, 2) updates to theoretical and methodological foundations, 3) bringing disparate modeling communities to CHI, and 4) sharing datasets, code, and best practices. This workshop seeks to establish Computational Modeling as a theoretical foundation for work in Human-Computer Interaction (HCI): modeling the human accurately across domains and supporting the design, optimization, and evaluation of user interfaces to solve a variety of human-centered problems.
HCI has a growing body of work regarding important social and community issues, as well as various grassroots communities working to make CHI more international and inclusive. In this workshop, we will build on this work: first reflecting on the contemporary CHI climate, and then developing an actionable plan towards making CHI2019 and subsequent SIGCHI events and sister conferences more inclusive for all.
We invite scholars, designers, developers, policymakers and provocateurs to explore non-standard, global and virtual work futures, to reflect on the impact of new sites and temporal patterns of work, and to consider emerging interpersonal and person-machine dynamics within work. We will frame these discussions with a consideration of the relationship between the future of work and existing modes of labor and political economy, with a view to identifying possibilities for both technological innovation and systemic change.
The HCI community is experiencing a resurgence of interest in the ethical, social, and political dimensions of HCI research and practice. Despite increased attention to these issues, it is not always clear that our community has the tools or training to adequately think through some of the complex issues that these commitments raise. In this workshop, we will explore the creative use of HCI methods and concepts, such as design fiction and speculative design, to help anticipate and reflect on the potential downsides of our technology design, research, and implementation. How can these tools help us critique some of the assumptions, metaphors, and patterns that drive our field forward? Can we, by intentionally adopting the personas of would-be evil-doers, learn something about how better to accomplish HCI for Good?
Situationally-induced impairments and disabilities (SIIDs) make it difficult for users of interactive computing systems to perform tasks due to the context they are in (e.g., listening to a phone call in a noisy crowd) rather than due to a congenital or acquired impairment (e.g., hearing damage). SIIDs are a great concern given the ubiquity of technology across a wide range of contexts. Considering our daily reliance on technology, and on mobile technology in particular, it is increasingly important that we fully understand and model how SIIDs occur. Similarly, we must identify appropriate methods for sensing SIIDs and adapting technology to reduce their effects. In this workshop, we will bring together researchers working on understanding, sensing, modelling, and adapting technologies to ameliorate the effects of SIIDs. The workshop will provide a venue to identify existing research gaps, new directions for future research, and opportunities for future collaboration.
Digital signage systems are transitioning from static displays to rich, dynamic interactive experiences, while the enabling technologies that support these interactions are also evolving. For instance, advances in computer vision and face, gaze, facial expression, body, and hand-gesture recognition have enabled new ways of distal interactivity with digital content. Such possibilities are only just being adopted by advertisers and retailers, yet they face important challenges, e.g., the lack of a commonly accepted set of gestures, facial expressions, or calls-to-action. Another common issue is the absence of active tactile stimuli. Mid-air haptic interfaces can help alleviate these problems by aiding in defining a gesture set, informing users about their interaction via haptic feedback loops, and enhancing the overall user experience. This workshop aims to examine the possibilities opened up by these technologies and discuss opportunities in designing the next generation of interactive digital signage kiosks.
Inbodied design is an emerging area in HCI that focuses on using knowledge of the body's internal systems and processes to better inform em-bodied and circum-bodied design spaces. The current challenge in developing an inbodied approach to HCI research/design is domain expertise: accessing sufficient and appropriate information about how the body itself works and how the body's different systems interact dynamically. In this workshop, we review and build on last year's introduction to inbodied foundations, focusing on applying inbodied knowledge to design challenges to explore (1) the foundational pillars of the inbodied design approach, and (2) how inbodied knowledge can affect and alter our understanding of em-bodied and circum-bodied design challenges and better inform design decisions. Our aim with this hands-on, cross-domain workshop is for HCI researchers to create innovative designs that take the body as a starting point.
New research in HCI examines complex and sensitive topics, such as crisis, life transitions, and mental health. Further, research on complex topics such as harassment and graphic content can leave researchers vulnerable to emotional and physical harm. There is a need to bring researchers together to discuss challenges across sensitive research spaces and environments. We propose a workshop to explore the methodological, ethical, and emotional challenges of sensitive research in HCI. We will actively recruit from diverse research environments (industry, academia, government, etc.) and methods areas (qualitative, quantitative, design practices, etc.), identify commonalities, and encourage relationship-building between these areas. This one-day workshop will be led by academic and industry researchers with diverse methodological, topical, and employment experiences.
In the last five years, work on software that interacts with people via typed or spoken natural language, variously called chatbots, intelligent assistants, social bots, virtual companions, non-human players, and so on, has increased dramatically. Chatbots burst into prominence in 2016; then came a wave of research, more development, and some use. The time is right to assess what we have learned from endeavouring to build conversational user interfaces that simulate quasi-human partners engaged in real conversations with real people. This workshop brings together people who have developed or studied various conversational agents to explore themes that include what has worked (and what hasn't) in home, education, healthcare, and work settings; what we have learned about people and their activities; and social or ethical possibilities for good or risk.
As the IoT takes hold in the home, healthcare, factories, and industry, new challenges and approaches arise for HCI research and design. For example, HCI is exploring agency delegation and automation to help the user manage the deluge of IoT data, make decisions, or even take actions on the user's behalf, while economic models are being proposed to drive sharing-economy services. This creates new problems, including how to design appropriate solutions for uncertain and dynamic human behaviour, how to ensure resources are distributed fairly, and how to ensure that the user can understand system actions and ultimately remains in control. These issues are becoming more pertinent as the IoT diversifies into safety-critical domains such as manufacturing and healthcare. This one-day workshop intends to bring together the CHI community to explore the interactional, socio-cultural, ethical, and practical challenges and approaches that these new domains raise for the IoT, and to consider how such approaches could be integrated to achieve more sustainable, inclusive, and effective interactions.
Children are growing up in a world infused with machine learning, and it is imperative to provide them with opportunities to develop an accurate understanding of basic machine learning concepts. Physical gesture recognition is a typical application of machine learning, and physical gestures are an integral part of children's lives, including sports and play. We present Scratch Nodes ML, a system enabling children to create personalized gesture recognizers by: (1) creating their own gesture classes; (2) collecting gesture data for each class; (3) evaluating the classifier they created on new gesture data; and (4) integrating their classifiers into the Scratch environment as new Scratch blocks, empowering other children to use these blocks as gesture classifiers in their own Scratch creations.
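The four-step workflow maps naturally onto a standard supervised-learning pipeline. The sketch below mirrors it with scikit-learn on synthetic motion features; the feature format and classifier choice are assumptions, not the system's actual implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# (1) gesture classes named by the child; (2) recorded motion examples
X = np.random.randn(60, 30)                 # 60 recordings x 30 motion features
y = np.repeat(["wave", "punch", "spin"], 20)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)  # train
print("held-out accuracy:", clf.score(X_test, y_test))           # (3) evaluate
# (4) in Scratch, clf.predict(new_window) would back a new reporter block
```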
Social connections and cultural aspects play important roles in shaping an individual's preferences. For instance, people tend to select friends with similar music preferences. Furthermore, preferences and friending are influenced by cultural aspects. Recommender systems may benefit from these phenomena by using knowledge about the nature of social ties to better tailor recommendations to an individual. Focusing on the specificities of music preferences, we study user connections on Last.fm, an online social network for music. We identify those countries whose users are mainly connected within the same country, and those countries that are characterized by cross-country user connections. Strong cross-country connection pairs are typically characterized by similar cultural, historic, or linguistic backgrounds, or geographic proximity. The United States, the United Kingdom, and Russia are identified as countries having a large relative amount of user connections from other countries. Our results contribute to understanding the complexity of social ties and how they are reflected in connection behavior, and are a promising source for advancements of personalized systems.
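The core within- versus cross-country measure implied here can be computed directly from an edge list. A toy sketch, with made-up field names and data:

```python
from collections import Counter

friendships = [("u1", "u2"), ("u1", "u3"), ("u2", "u4")]   # toy friendship edges
country = {"u1": "US", "u2": "US", "u3": "RU", "u4": "UK"}

within, total = Counter(), Counter()
for a, b in friendships:
    for user in (a, b):                  # count each edge once per endpoint
        total[country[user]] += 1
        within[country[user]] += country[a] == country[b]

for c in total:
    print(c, "within-country share:", within[c] / total[c])
```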
This paper presents Character Alive, a tangible system designed to improve early Chinese literacy acquisition for Mandarin-speaking children at-risk for dyslexia by enabling high-level interaction. Character Alive uses a multisensory training method to teach children the reading and writing of Chinese characters and words. The core design features of our system are augmented dynamic color cues, 2D radical cards and handwriting cards with tactile cues, and multimedia content such as character animations. Character Alive builds on our previous work on designing tangible and augmented reality reading and writing systems for children at-risk for dyslexia in English. That work demonstrated that dynamic color cues can draw children's attention to key characteristics of letter-sound correspondences and that two-handed actions with tangible letters help children better solve spelling tasks. We present the design rationale, the design and implementation of the Character Alive system, and our plans for evaluating it.
With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.
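One plausible way to realize such verification is to treat a password as an ordered sequence of gazed-at targets and compare a keyed digest of the entered sequence against a stored one. The sketch below is our reading, not the paper's implementation; all identifiers and the key handling are made up.

```python
import hmac, hashlib

SECRET_KEY = b"device-local-key"                 # hypothetical per-device key

def digest(target_sequence):
    """Hash an ordered sequence of spatial/virtual target IDs."""
    msg = "|".join(target_sequence).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

stored = digest(["door", "virtual:cube", "window", "virtual:sphere"])

def unlock(gazed_targets):
    # constant-time comparison avoids leaking how much of a guess matched
    return hmac.compare_digest(digest(gazed_targets), stored)

print(unlock(["door", "virtual:cube", "window", "virtual:sphere"]))  # True
print(unlock(["door", "window"]))                                    # False
```

Because the targets are anchored in the wearer's physical and virtual surroundings, an onlooker sees only head and gaze movement, which is what makes the scheme resistant to shoulder surfing.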
This paper provides a comprehensive and novel analysis of the annual conference proceedings of CHI to explore the structure and evolution of the community. Self-awareness is healthy for a diverse and dynamic community, allowing it to anticipate and respond to emerging themes. Instead of using a traditional topic modelling approach to analyze the text of the papers, we adopt a social network analysis approach by analyzing the citation network of papers. After constructing such a citation network, community detection is applied in order to cluster papers into different groups. The keywords of these groups are found to represent different research themes in human-computer interaction, while the growth or decline of these groups is visualized by their paper shares over time. Lastly, we contribute a visualization tool for exploring emerging trends within our community, which can be used to predict likely topics at future CHI conferences.
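The pipeline the paper describes (build a citation network, detect communities, track each community's paper share over time) can be prototyped in a few lines. A toy sketch with networkx; the graph, years, and choice of clustering algorithm are stand-ins, not the authors' exact method.

```python
from collections import Counter
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()                                            # citation links
G.add_edges_from([("p1", "p2"), ("p2", "p3"), ("p4", "p5")])
year = {"p1": 1995, "p2": 1996, "p3": 2001, "p4": 2001, "p5": 2002}

for i, community in enumerate(greedy_modularity_communities(G)):
    share = Counter(year[p] for p in community)           # papers per year
    print(f"theme {i}: {sorted(community)} -> {dict(share)}")
# keywords per theme would come from, e.g., tf-idf over member papers' titles
```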
This paper describes two studies of people with sensory, cognitive, and motor impairments flying drones as a leisure activity. We found that, despite several adaptations of existing technologies to match their abilities, flying remains very difficult due to the required perceptual and motor skills. We propose an adaptation space at the hardware, software, and automation levels to facilitate drone piloting. Based on this adaptation space, we designed HandiFly, drone-piloting software that can be adapted to users' abilities. Participants were able to fly and emphasized its ability to be tailored to specific needs. We conclude with future directions to make drone piloting a more inclusive activity.
Biology labs routinely conduct direct experimentation with living organisms. However, most high schools are unable to engage students in such experimentation due to multiple factors: sterility, cost of equipment, cost of skilled lab assistants, and the difficulty of measuring micro-scale processes. We present the design and implementation of My First Biolab (MFB), a lab-in-a-box with a novel disposable fluidic vessel (an experiment in a bag) made of two sheets of polyacrylamide-polyethylene that channel liquids via paths created with a laser cutter. The system includes a 2D magnetic peristaltic pump, a spectral sensor, and a heat-transfer plate. MFB is an affordable, safe, and sterile system for hands-on experimentation with live microorganisms. It supports temperature control, liquid circulation, measurement of optical density, and a web interface for remote control and monitoring. Our first experiment demonstrates the three phases of bacterial growth: the initial lag phase, the rapid-growth log phase, and the stationary phase.
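For illustration, the three growth phases can be recovered from a series of optical-density readings by looking at the specific growth rate (the slope of log OD). A sketch under assumed sampling interval, threshold, and toy data; MFB's actual analysis may differ.

```python
import numpy as np

def growth_phases(od, dt_min=10.0, flat=0.01):
    """Label each reading lag/log/stationary from the slope of log(OD)."""
    rate = np.gradient(np.log(od), dt_min)   # specific growth rate per minute
    phases, seen_log = [], False
    for r in rate:
        if r >= flat:
            seen_log = True
            phases.append("log")
        else:                                # flat before growth = lag, after = stationary
            phases.append("stationary" if seen_log else "lag")
    return phases

od = np.array([0.05, 0.05, 0.06, 0.12, 0.25, 0.50, 0.90, 1.00, 1.01])
print(growth_phases(od))
```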
This paper presents Bricoleur, a new tool designed primarily for children to create rich, dynamic audiovisual projects on mobile devices. Building on technology developed for the Scratch programming language, Bricoleur allows users to capture video and audio media, then use visual programming blocks to quickly construct complex, interactive, and multilayered projects. The programmability of video and audio assets enables new creative possibilities that exist neither in traditional timeline-based video editing software nor in existing programming platforms. Drawing from Seymour Papert's assertion that all learning is an act of bricolage - i.e., a process of constructing knowledge while dialoguing back and forth with creative materials - we describe the design of an initial prototype of Bricoleur, which emphasizes tinkerability as a primary design goal.
To improve the design of a person-following robot, this preliminary study evaluates the influence of user tasks on human preferences for the robot's following angle and human perceptions of the robot's behavior. 32 participants were each followed by a robot at three different following angles, twice each: once with an auditory task and once with a visual task, for a total of six walking trials. Results indicate that the type of user task influences participant preferences and perceptions. For the auditory task, as the following angle increased, participants were more satisfied with the robot's following behavior. For the visual task, as the following angle increased, participants were less satisfied with the robot's following behavior. In addition, participants were more perceptive of the robot's following behavior during the auditory task than during the visual task. Additional research is required to better understand whether human preferences and perceptions depend on task modality or task complexity.
Referees' decisions in sports, which must be made in less than a second, affect the outcome of games. Hardware and/or software solutions could contribute to increasing the accuracy of referees' decisions, but applying such solutions can be expensive, especially in less popular sports. We therefore propose and evaluate a video-based system for helping referees in powerlifting make better decisions. Results reveal promising accuracy rates for the proposed system. This is a first step towards supporting referees in the powerlifting domain; further elaboration of the proposed system is required to achieve higher decision-making accuracy.
We investigate the existence of demographic bias in automatically generated personas by producing personas from YouTube Analytics data. Despite the intended objectivity of the methodology, we find elements of bias in the data-driven personas. The bias is highest when doing an exact-match comparison, and decreases when comparing at the age or gender level. The bias also decreases when increasing the number of generated personas. For example, smaller numbers of personas resulted in underrepresentation of female personas. This suggests that a higher number of personas gives a more balanced representation of the user population, while a smaller number increases bias. Researchers and practitioners developing data-driven personas should consider the possibility of algorithmic bias, even unintentional, in their personas by comparing the personas against the underlying raw data.
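The recommended check (comparing personas against the underlying raw data) can be as simple as differencing demographic proportions. A toy sketch with invented numbers that echo the underrepresentation pattern described above:

```python
from collections import Counter

def representation_gap(personas, audience):
    """Share of each group among personas minus its share in the raw data."""
    p, a = Counter(personas), Counter(audience)
    n_p, n_a = sum(p.values()), sum(a.values())
    return {g: p[g] / n_p - a[g] / n_a for g in set(p) | set(a)}

audience = ["F"] * 46 + ["M"] * 54             # raw viewer demographics
personas = ["M", "M", "M", "M", "F"]           # a small persona set
print(representation_gap(personas, audience))  # {'M': +0.26, 'F': -0.26}
```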
Identifying how geoscientists interact with seismic images allows a deeper understanding of their rationale and of the seismic interpretation process itself. Moreover, identifying nuances among seismic interpreters performing slightly different roles opens new possibilities for how decision support systems could become part of seismic interpretation. In this work, we detail an eye-tracking study involving 11 seismic interpreters interacting with two seismic images. The results show that seismic interpreters with different specialties interact with seismic images differently. From the results, it is possible to better characterize the gaze strategies of different seismic interpreters, which could also serve as input for decision support systems.
Many young people experience difficulty finding and keeping their first independent home, which can lead to homelessness or the risk of homelessness. To help address this challenge, a young people's service in Scotland (Calman Trust) is developing a digital tool called HasAnswers. This paper provides a brief description of HasAnswers, the results of iterative testing with 69 young people (40 male, 29 female) using paper and digital prototypes, and feedback from other services responsible for supporting young people to achieve an independent adulthood, as a potential customer base for the future scaling up of HasAnswers to new geographical locations and organizations. While preliminary, the results and feedback have consistently confirmed the potential usefulness and acceptability of HasAnswers. Next steps include applying the results of the latest user testing, followed by pilot testing. The research contributes to the body of work within HCI on design for homelessness by providing a new digital tool with a greater emphasis on prevention and early intervention, informed by an iterative user-centred design process.
According to the UN Refugee Agency, there are about 22 million people considered refugees around the world. Since the beginning of the Syrian crisis in 2011 alone, approximately 11 million Syrians have left their homes and sought refuge in other countries, mostly nearby ones such as Lebanon and Turkey. There are more than 3.7 million Syrian refugees in Turkey. Non-governmental organizations (NGOs) have a critical role in refugees' lives, their access to services, and their integration into communities. In this study, we investigated the barriers NGOs encounter in their work with refugees and the types of information and communication technologies they use in the process. Semi-structured interviews with participants from eight Turkish NGOs revealed that digital technologies play a fundamental role in their activities. NGOs use them in three main ways, including as tools that help them become knowledge brokers between Turkish communities and refugee communities. Our findings provide insights into NGOs' digital technology use and pave a path for future studies.
Driverless passenger shuttles have been operating as a public transport alternative in the town of Sion, Switzerland since June 2016, traversing the populated commercial and residential zones of the city center. The absence of a human driver and the lack of a dedicated AV-pedestrian interface make it challenging for road users (pedestrians, cyclists, etc.) to understand the intent or operational state of the vehicle and to negotiate road usage. In this article, we present a co-design study aimed at informing the design of interactive communication means between pedestrians and autonomous vehicles (AVs). Conducted in two stages with the local community, which is accustomed to the AV ecosystem and has interacted with it on a daily basis, the study highlights the interactive experiences of road users and furnishes contextualized design guidelines for bridging communication with pedestrians.
The notion of relevance is often used as a concept to be considered for making a museum matter to its visitors. The term, however, is rarely operationalized for use by designers, practitioners, or scientists in their work on museum experiences. We propose an integrated framework for designing relevant museum experiences, in which we distinguish between four stages of seeding and growing relevance in new audiences, called "trigger", "engage", "consolidate" and "relate". The framework proposes to see designing for relevance as developing ways of integrating meaning-making, play and acceptable visitor effort across all these stages. It is intended to provide sensitizing concepts for use in further research on designing for relevance, as well as in design-related activities such as crafting requirements for new museum experiences, analyzing existing museum experiences and developing new museum experiences.
In many domains, real-world processes are traditionally communicated to users through abstract graph-based models like event-driven process chains (EPCs), i.e. 2D representations on paper or desktop monitors. We propose an alternative interface to explore EPCs, called immersive process models, which aims to transform the exploration of EPCs into a multisensory virtual reality journey. To make EPC exploration more enjoyable, interactive and memorable, we propose a concept that spatializes EPCs by mapping traditional 2D graphs to 3D virtual environments. EPC graph nodes are represented by room-scale floating platforms and explored by users through natural walking. Our concept additionally enables users to experience important node types and the information flow through passive haptic interactions. Complementarily, gamification aspects aim to support the communication of logical dependencies within explored processes. This paper presents the concept of immersive process models and discusses future research directions.
Monitoring drivers' emotions can play a critical role in reducing road accidents and enabling novel driver-car interactions. To help understand the possibilities, this work systematically studies the in-road triggers that may lead to different emotional states. In particular, we monitored the experience of 33 drivers during 50 minutes of naturalistic driving each. From a total of 531 voice self-reports, we identified four main groups of emotional triggers based on their originating source, with those related to human-machine interaction and navigation most commonly eliciting negative emotions. Based on the findings, this work provides recommendations for potential future emotion-enabled interventions.
Authoritative online information is especially important in disasters, when social media users seek out time- and safety-critical information. In this work we investigate how people engage with posts by authoritative accounts that fall into five social roles: politicians, government agencies, media, weather experts, and humanitarian organizations. More specifically, we explore whether, in their disaster-time sensemaking activities, social media users engage differently with content from different types of authoritative sources, and why. We find that tweets by politicians garner the most replies and shares, but not because of a higher prevalence of tweet features that facilitate visibility and engagement (hashtags, URLs, and images). We find that while the higher popularity of political accounts plays a role in higher engagement, it does not fully explain the differences. Preliminary qualitative analysis suggests that politicized event-related posts by politicians, and politicized public responses to even their innocuous tweets, may affect engagement.
Crowdsourcing and active learning (AL) have been combined in the past with the goal of reducing annotation costs. However, two issues arise when using AL and crowdsourcing: the quality of the labels and user engagement. In this work, we propose a novel machine -> crowd <- expert loop model in which the forward connections of the loop aim to improve the quality of the labels and the backward connections aim to increase user engagement. In addition, we propose a research agenda for evaluating the model.
This paper presents an exploratory study of a social exergame, called Step-by-Step, designed to help office workers initiate physical movement and social interaction in their work routine. In this project, we developed a mobile system for exploring a new mechanism of office vitality, through which a fitness task can be relayed from one co-worker to another in a workplace. Based on our prototypes, we evaluated the feasibility of Step-by-Step through a user study with five office workers and an expert interview with three senior designers. We discuss implications for the future development of the Step-by-Step system based on our qualitative findings.
This work presents ways into a design space of butterflies in the stomach: a quale of belly-tingling sensation that can carry pleasure, discomfort, and a heightening of presence. Three design instances are presented, and from and within them three conceptual directions are drawn and exemplified. Conditional availability involves tuning the availability of an interaction to certain geographic locations, environmental conditions, and times of day, striving for particular aesthetics. Erratic and dubious presence is about making interactions unpredictable and/or feeding a doubt about whether the user is engaged in an interaction at all, striving for confusion and startle. Sensorial evidence of interaction is a way of thinking about narratives within an interaction through elements of planning, exploration, and suddenness, striving for experiential qualities like anticipation, surprise, and the fascination of discovery. My felt experiences from a two-day camping trip were used as a design resource. Reflections on these experiences informed the design and concept development through visualizations, textual narratives, technical implementation detailing, and thematic analysis. This work is a provocative step in expanding what human-computer interaction can be in the outdoors.
Tangible interaction design students often find it difficult to generate ideas for tangible manipulation, and often restrict their explorations to a few familiar possibilities. To our knowledge, there is no design tool that focuses on facilitating the exploration of a variety of manipulations and aiding the generation of ideas for manipulation. To address this gap, we designed Idea Bits, a tangible design tool consisting of interactive physical artifacts that let users experience a set of manipulations. These artifacts are coupled with digital examples of tangible systems and technical implementation guidance to help users understand how to implement the manipulations. Our work contributes knowledge about the generation of ideas for manipulation and will be useful to tangible interaction design students, instructors, practitioners, and researchers.
Prolonged sitting at the workplace is a growing public health concern. In this paper, we propose the activity-focused design framework, which provides an overview of recent work in HCI on stimulating physical activity or reducing sedentary behavior. Next, based on this framework, we present Stimulight, an intelligent system designed to explore the effect of providing personal and/or social feedback on the activity patterns of office workers. To test the intuitiveness of the feedback modalities of our design, three different feedback conditions were explored in a lab study with 61 participants. Our results show a positive effect of visualizing and sharing physical activity patterns with co-workers. Based on our findings, we present design implications and offer perspectives for future work on using social feedback mechanisms to encourage social interaction in the workplace and enhance physically active behavior among office workers.
In this work, we address the challenges of designing interactive technologies that approach menstruation in a positive way. Building on a Research through Design approach, we underline the tensions emerging from first-hand experiences and reflections in a design workshop. To maintain a positive approach, rather than asking participants what problems they encountered while on their period, we asked them what desires they had and what experiences might help them cope with menstruation. The results of the workshop emphasized the need to reflect critically on how we perceive menstruation when designing, and on how viewing menstruation as a problem might perpetuate taboos and distance women's experiences from their bodies. We aim to contribute to the ongoing discussion on designing for women's health in HCI by suggesting implications for researchers and practitioners.
Voice Conversational Agents (VCAs) are increasingly becoming part of our daily life. Calling them by name does more than improve the efficiency of interaction between users and VCAs; it also casts them as actors capable of playing a role in a relationship. However, due to the immature state of the technology and its related services, such relationships between users and VCAs remain limited in practice. To broaden the scope of designing different relationships, this study explores VCAs depicted in sci-fi movies. Sci-fi is not purely fantasy but also a social reflection on technology, which can inspire design researchers to understand and speculate about the complexity of VCAs. Through community sourcing with sci-fi enthusiasts, 43 sci-fi VCAs were identified. A movie event was also organized to discuss the roles of VCAs. Finally, this paper presents several possible insights for designing the role of VCAs.
Visual quotes, the communication of motivational text messages in a visual format, are increasingly used across social media and online communities. While physical activity trackers could leverage visual quotes, empirical studies of activity tracking in HCI research have paid little attention to this phenomenon and its potential effects on motivation. In this work, we conducted an online experiment (129 participants) to evaluate the impact of aesthetic appeal in motivational text messages as it relates to extrinsic identified behavior regulation, the type of motivation linked to the initial adoption of exercise behavior. The results of our study demonstrate that a perceived motivating text message presented at different levels of aesthetic appeal (ugly, neutral, beautiful) has the same impact on the motivation linked to short-term exercise (extrinsic identified behavior regulation). In other words, perceived aesthetic appeal did not influence the motivating capability of textual messages for encouraging physical activity.
Material speculation through counterfactual artifacts has been an alternative approach to envisioning possible worlds in HCI. This paper further explores how false memory, or confabulation, can be a resource for enriching this constructive design approach. We conducted an exploratory study to understand how the personal experience of confabulation can contribute to the constitution of possible worlds, and to explore the potential of mixed everyday soundscapes as a medium to trigger possible-world experiences. The results show that these "counterfactual soundscapes" can create self-convinced experiences of a counterpart self, indicating a reflexive but fictional self across possible worlds. Based on these findings, we propose a model of the counterpart self that accounts for reflexive speculation in possible worlds, and we made a research artifact, the Confabulation Radio, as our attempt to inquire into this phenomenon.
Starting from breakfast, this research explores varied morning experiences and expectations, aiming to reframe the morning experience through diverse possibilities. A good morning keeps people in good psychological and physical condition, improving efficiency at work. In Taiwan, multiple ethnic groups and long working hours are common, which is also embodied in the choice of local breakfast. However, according to 2018 Herbalife research [4], around 25% of Taiwanese lack the habit of eating breakfast every day. Our research explores the role breakfast plays in people's morning routines, as well as the morning expectations and experiences of different lifestyles. The end goal is to create designs that give people a different perspective on engaging with the beginning of their day, and to wonder how their morning could be, not just in terms of breakfast but of the morning experience as a whole.
Conversational agents (CAs) are becoming more popular and useful at home. Creating a persona is an important part of designing a conversational user interface (CUI). Since the CUI is a voice-mediated interface, users naturally form an image of the CA's persona through its voice. Because that image affects users' interactions with CAs, we sought to understand users' perceptions via a drawing method. We asked 31 users to draw an image of the CA that communicates with them. Through a qualitative analysis of the collected drawings and interviews, we identified the various types of CA personas perceived by users and found design factors that influenced users' perceptions. Our findings advance the understanding of persona perception and provide designers with implications for creating an appropriate persona.
Sketching is known to support divergent thinking during conceptual ideation. Yet, in HCI teams, non-designers are known to be reluctant to sketch. Looking for a tool that could support non-designers' divergent thinking to creatively offset familiar solutions while providing starter suggestions, we hypothesized that LEGO pieces could replace sketching. In a comparative lab experiment, 36 participants performed two conceptual ideations of Web interfaces, one using paper and pen, the other LEGO, in random sequence. The 72 resulting interfaces were assessed on their fluency, flexibility, elaboration, and originality according to Guilford [6] and Torrance's [9] divergent thinking framework. Our main finding is that LEGO could substitute for sketching for non-designers; the 3D figurative, constructive pieces provide a stimulating visual representation that supports divergent thinking by offering alternate meanings and generating a greater number of elements to react to, thus enhancing the use of analogies.
'Energy' is an abstract concept, invisible except through its effects, yet with vast geopolitical and environmental consequences, while driving many everyday practices. It is a curious 'material' for designers to work with, with experiential properties that are underexplored. In Electric Acoustic, we explore both sonification and vibration (cymatic displays) as media for experiencing energy, specifically electricity use. These materializations potentially enable deeper engagement with the invisible systems and infrastructures of everyday life. This short paper reports on our preliminary experiments and some of the issues and considerations arising during this initial exploration.
If chocolate and broccoli sound like a strange pairing, can you imagine a broccoli chocolate bar that combines them? As a matter of fact, the two ingredients share the highest number of flavour molecules, so their combination might not be as weird as it sounds. We applied computational creativity, that is, AI systems that enhance human creativity, to the food domain, with the main goal of feeding the mind of the creative professional in the food business with new, unexpected combinations.
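To make the flavour-pairing heuristic behind such systems concrete, here is a minimal sketch that ranks ingredient pairs by the number of flavour compounds they share. The compound sets are hypothetical placeholders, not real chemistry data, and this is only one of several strategies such a system might use.

```python
# Sketch: rank ingredient pairs by shared flavour compounds, the core
# heuristic of food-pairing approaches. Compound sets are hypothetical.
from itertools import combinations

compounds = {
    "chocolate":  {"pyrazine_a", "aldehyde_b", "ester_c", "ketone_d"},
    "broccoli":   {"pyrazine_a", "aldehyde_b", "ester_c", "sulfide_e"},
    "strawberry": {"ester_c", "furanone_f"},
}

pairs = sorted(
    ((len(compounds[a] & compounds[b]), a, b)
     for a, b in combinations(compounds, 2)),
    reverse=True)

for shared, a, b in pairs:
    print(f"{a} + {b}: {shared} shared compounds")
# chocolate + broccoli tops this toy ranking with 3 shared compounds
```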
Digital nudges hold enormous potential to change behavior. Despite calls to consider timing as a critical factor in the success of digital nudges, a comprehensive organizing framework to guide the design of digital nudges around the nudge moment is yet to be provided. In this paper, we advance a theoretical model for designing digital nudges by incorporating three key components: (1) identifying the optimal digital nudge moment, (2) inferring this optimal moment, and (3) delivering the digital nudge at that moment. We further discuss existing work and open research avenues.
Designing technology for problem-free operation is vital, but equally important is considering how a user may understand or act upon errors and various other 'stuck' situations if and when they occur. Little is currently known about what children think and want when it comes to overcoming errors. In this paper we report on design-for-error workshops with children (aged 5-10) in which we staged three simulated errors with a health assessment technology. In our developmentally sensitive study, children witnessed the errors via a puppet show and created low-fidelity models of recovery mechanisms using familiar 'play-things'. We found the children were able to grasp the representational nature of the task. Their ideas were playful and inspired by magical thinking, and their work forced us to reflect on and revisit our own design assumptions. The tasks have had a direct impact on the design of the assessment.
This paper explores Augmented Reality (AR) applications for Participatory Marketing and overviews initial findings in this area of research. Participatory Marketing is the concept of marketing with customers rather than at them, and can potentially turn AR users from passive consumers into (pro-)active co-creators of this future medium. We conducted a preliminary investigation focusing on possible challenges and opportunities.
We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents that link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning (human, robot, and IoT) with a single AR-SLAM device. Users author tasks with the Augmented Reality (AR) interface in a manner analogous to performing them; then, placing the device onto the mobile robot directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and IoT-oriented tasks, and guides the path planning execution with its SLAM capability.
The increasing availability and diversity of virtual reality (VR) applications has highlighted the importance of their usability. Function-oriented VR applications pose new challenges that are not well studied in the literature. Moreover, user feedback has become readily available thanks to modern software engineering tools such as app stores and open source platforms. Using Firefox Reality as a case study, we explored the major types of VR usability issues raised on these platforms. We found that 77% of usability feedback items could be mapped to Nielsen's heuristics, while few were mappable to VR-specific heuristics. This result indicates that Nielsen's heuristics could help developers address the usability of this VR application in its early development stages. This work paves the way for exploring tools that leverage community effort to promote the usability of function-oriented VR applications.
Augmenting arbitrary physical objects with digital content leads to the missing interface problem: those objects were never designed to incorporate digital content and so lack a user interface. A review of related work reveals that current approaches fail due to limited detection fidelity and spatial resolution. Our proposal, based on Google Soli's radar sensing technology, is designed to detect micro-gestures on objects with sub-millimeter precision. Preliminary results with a custom gesture set show that Soli's core features and traditional machine learning models (Random Forest and Support Vector Machine) do not yield robust recognition accuracy, so more advanced techniques should be used instead, possibly incorporating additional sensor features.
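To make the reported baseline concrete, here is a minimal sketch of the kind of classical pipeline the abstract describes as insufficient: off-the-shelf Random Forest and SVM classifiers over per-window radar features. The feature matrix and labels below are synthetic placeholders; only the choice of models mirrors the study.

```python
# Sketch: classifying micro-gestures from radar-derived feature windows.
# X would hold per-window features (e.g. energy, range/velocity centroids);
# here both X and y are random stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))          # 600 windows x 8 radar features
y = rng.integers(0, 4, size=600)       # 4 hypothetical micro-gestures

for name, clf in [("RandomForest", RandomForestClassifier(n_estimators=100)),
                  ("SVM (RBF)", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```

With real radar data, chance-level scores from such a loop are exactly the signal that motivates moving to richer features or deep models, as the authors suggest.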
Internet of Things (IoT) systems are complex to develop. They are required to exhibit various features and run across several environments. Software developers have to deal with this heterogeneity both when configuring the development and execution environments and when writing the code. Meanwhile, computational notebooks have been gaining prominence due to their capability to consolidate text, executable code, and visualizations in a single document. Although they are mainly used in the field of data science, the characteristics of such notebooks could make them suitable for supporting the development of IoT systems as well. This work proposes an IoT-tailored literate computing approach in the form of a computational notebook. We present a use case of a typical IoT system involving several interconnected components and describe the implementation of a computational notebook as a tool to support its development. Finally, we point out the opportunities and limitations of this approach.
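To illustrate what a literate-computing cell for IoT might contain, here is a minimal sketch that wires a sensor topic to an actuator topic over MQTT. The broker address, topic names, and threshold are hypothetical; the abstract does not specify a protocol, and MQTT is simply a common IoT choice.

```python
# Sketch of a notebook cell connecting two IoT components over MQTT.
# In a literate notebook, this code would sit next to prose explaining
# the system's topology. All names below are hypothetical.
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"          # hypothetical broker
SENSOR_TOPIC = "home/livingroom/temp"  # hypothetical topics
ACTUATOR_TOPIC = "home/livingroom/fan"

def on_message(client, userdata, msg):
    temp = float(msg.payload.decode())
    # Turn the fan on above 28 degrees C, off otherwise.
    client.publish(ACTUATOR_TOPIC, "on" if temp > 28.0 else "off")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)
client.subscribe(SENSOR_TOPIC)
client.loop_forever()
```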
Locked-in syndrome (LIS) patients are partially or entirely paralyzed but fully conscious. These patients report a high quality of life and a desire to remain active in society and in their families. We propose a system for enhancing the social interactions of LIS patients with their families and friends, with the goal of improving their overall quality of life. Our system comprises a Brain-Computer Interface (BCI), augmented-reality glasses, and a screen that shares the caretaker's view with the patient. This setting targets both patients and caretakers: (1) it allows the patient to experience the outside world through the eyes of the caretaker, and (2) it creates a channel of active communication between patient and caretaker for conveying needs and advice. To validate our approach, we showcased our prototype and conducted interviews that demonstrate the potential benefit for affected patients.
We investigate enhancing virtual haptic experiences by applying stochastic resonance (SR) noise to the user's hands. Specifically, we focus on improving users' ability to discriminate between virtual textures modelled from nine grits of real sandpaper in a virtual texture discrimination task. We applied mechanical SR noise to the participant's skin by attaching five flat actuators to different points on the hand. By fastening a linear voice-coil actuator and a 6-DOF haptic device to participants' index fingers, we enabled them to interact with and feel virtual sandpaper while inducing different levels of SR noise. We hypothesize that SR will improve their discrimination performance.
The use of Augmented Reality (AR) systems has been shown to be beneficial in guiding users through structured tasks when compared to traditional 2D instructions. In this work, we begin to examine the potential of such systems for home improvement tasks, which present specific challenges (e.g., operating at both large and small scales, and coping with the diversity of home environments). Specifically, we investigate user performance on a common low-level task in this domain. We conducted a user study in which participants marked points on a planar surface (as if to place a nail, or to measure from) guided only by virtual cues. We observed that participants position themselves so as to minimize parallax by kneeling, leaning, or side-stepping, and when doing so, they are able to mark points with a high degree of accuracy. In cases where the targets fall above their line of sight, participants are less able to compensate and make larger errors. We discuss initial insights from these observations and participant feedback, and present a first set of challenges that we believe designers and developers will face in this domain.
Although almost all software development involves application programming interfaces (APIs), there is surprisingly little work on how people use APIs and how to evaluate and improve an API's usability. One possible way of investigating the usability of APIs is through the user's mental model of the API. Through discussions with developers and UX practitioners at Google, along with our own evaluations, a distributed data processing API called Apache Beam was identified as difficult to use and learn. In our ongoing study, we investigate methods for understanding users' mental models of distributed data processing and how this understanding can lead to design insights for Beam and its documentation. We present our novel approach, which combines a background interview with two natural programming elicitation segments: the first is designed for participants to express a high-level mental model of a data processing API, while the second asks questions contextualized to a data processing task to see how participants apply their conceptual understanding to a more specific situation. Our method shows promise, as pilot participants expressed a "dataflow" mental model that matched one way Beam has been described, resulting in a potential design modification.
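For readers unfamiliar with Beam, a minimal word-count pipeline illustrates the "dataflow" style that pilot participants articulated: data flows through a chain of named transforms rather than being manipulated imperatively. This example uses the public Beam Python SDK and is not drawn from the study's materials.

```python
# A minimal Apache Beam pipeline in the dataflow style: each step is a
# transform applied to the collection flowing through the pipe operator.
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "Create" >> beam.Create(["a b", "a c"])
     | "Split" >> beam.FlatMap(str.split)          # words
     | "PairWithOne" >> beam.Map(lambda w: (w, 1)) # (word, 1) pairs
     | "CountPerWord" >> beam.CombinePerKey(sum)   # (word, count)
     | "Print" >> beam.Map(print))
```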
Speech-to-text (STT) plays a significant role in voice user interfaces (VUIs). While preserving the necessary semantic information in converted text, STT generally captures little or no emotional information. In this paper, we present an emojilization tool that automatically attaches related emojis to STT-generated texts by analyzing both textual and acoustic features of the speech signal. For a given voice message, the tool selects the most representative emoji from the 64 most commonly used emojis. We conducted a pilot study with 34 participants in which 159 utterances were labeled with emojis by our tool, and we evaluated the emotion restoration effect. The results indicate that the proposed tool effectively compensates for the emotion loss.
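One way such a selection step could work, sketched below under stated assumptions: score each candidate emoji by how close its affect coordinates lie to the utterance's estimated valence (from text) and arousal (from acoustics). The feature extractors and the tiny emoji lexicon here are hypothetical stand-ins for the paper's actual models.

```python
# Sketch: pick the emoji whose (valence, arousal) point is nearest to the
# utterance's estimated affect. Lexicon values are hypothetical.
EMOJI_AFFECT = {"😄": (0.8, 0.6), "😢": (-0.7, 0.3),
                "😡": (-0.8, 0.9), "😌": (0.5, 0.1)}

def emoji_score(text_sentiment, acoustic_arousal, emoji):
    valence, arousal = EMOJI_AFFECT[emoji]
    # Negative squared distance: larger is a better match.
    return -((text_sentiment - valence) ** 2
             + (acoustic_arousal - arousal) ** 2)

def pick_emoji(text_sentiment, acoustic_arousal):
    return max(EMOJI_AFFECT,
               key=lambda e: emoji_score(text_sentiment, acoustic_arousal, e))

print(pick_emoji(text_sentiment=-0.6, acoustic_arousal=0.85))  # -> "😡"
```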
As the electrification of the transportation sector grows, the electric grid must handle the new load resulting from electric vehicle (EV) charging. The integration of this new load into the grid has been the subject of work in the smart-charging research field. However, while normal-sized EVs often offer chargers or other functions that support smart charging, smaller EVs do not, which can be problematic, especially considering that the consumption of small EVs, when aggregated, can be significant. This article presents the motivation behind and development of MyTukxi, a hardware and software system that implements smart-charging algorithms for low-consumption EVs, interacting with drivers to compensate for the lack of charging control in such vehicles.
Bouldering is an urban form of rock climbing that requires precise and complex movement. As in other sports, the simplest way to learn bouldering skills is to mimic a professional's motion. However, ordinary beginner boulderers rarely have access to coaches, so they learn by themselves or from tutorial videos. Even with a trainer, bouldering poses a communication difficulty between trainee and trainer: climbers cannot mimic the trainer's movement in parallel. Accordingly, we considered that a video feedback system would be useful for beginners and propose InterPoser, a novel system that visualizes intermediate motion between a beginner climber and a more experienced one. InterPoser receives two videos of different subjects climbing the same problem and generates an intermediate movement; this motion is then transferred into realistic images of the climber. The proposed system is expected to support beginners in developing more detailed observation and understanding of the motion.
Driving in autonomous cars requires trust, especially in the case of unexpected driving behavior by the vehicle. This work evaluates the mental models that experts and non-expert users have of autonomous driving, in order to provide an explanation of the vehicle's past driving behavior. We identified a target mental model that enhances the user's mental model by adding key components from the experts' mental model. To construct this target mental model and to evaluate a prototype explanation visualization, we conducted interviews (N=8) and a user study (N=16). The explanation consists of abstract visualizations of different elements representing the autonomous system's components. We explore the relevance of the explanation's individual elements and their influence on the user's situation awareness. The results show that displaying the detected objects and their predicted motion was most important for understanding a situation. After seeing the explanation, the user's level of situation awareness increased significantly.
We propose an interactive system that allows novice users to build large-scale balloon art, based on a spatial augmented reality approach. The proposed system provides fabrication guidance by illustrating the differences between the depth maps of the target three-dimensional shape and the current work in progress. In consideration of the balloon texture, we adopt a high-contrast black-and-white projection of numbers instead of color gradients to convey the depth difference. To increase user immersion, we add a shaking animation to each projected number. Using the proposed system, the unskilled users in our case study were able to build large-scale balloon art.
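The core guidance computation can be sketched in a few lines: subtract the current depth map from the target and quantize the difference into small integers for projection. The array shapes, step size, and sign convention below are hypothetical, not taken from the paper.

```python
# Sketch of depth-difference guidance: how many "steps" of balloon
# material to add (positive) or remove (negative) per projector pixel.
import numpy as np

def guidance(target_depth: np.ndarray, current_depth: np.ndarray,
             step: float = 0.05) -> np.ndarray:
    diff = target_depth - current_depth          # metres, same shape
    return np.round(diff / step).astype(int)     # quantize to step units

target = np.full((4, 4), 0.60)                   # hypothetical depth maps
current = np.full((4, 4), 0.50)
print(guidance(target, current))                 # -> 2 everywhere
```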
Speech is a direct and intuitive method for controlling a robot. While natural speech can capture a rich variety of commands, verbal input is poorly suited to finer-grained, real-time control of continuous actions such as short and precise motion commands. For these types of operations, continuous non-verbal speech is more suitable, but it lacks the naturalness and vocabulary breadth of verbal speech. In this work, we propose to combine the two types of vocal input by extending the last vowel of a verbal command to support real-time, smooth control of robot actions. We demonstrate the effectiveness of this novel hybrid speech input method in a beverage-pouring task, where users instruct a robot arm to pour specific quantities of liquid into a cup. A user evaluation reveals that hybrid speech improves on simple verbal-only commands.
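A minimal sketch of the hybrid idea, under stated assumptions: once the command word is recognized ("pouuur..."), keep streaming a control tick for as long as the final vowel is sustained. The voicing detector below is a naive energy threshold; a real system would use pitch or formant tracking, and all thresholds are hypothetical.

```python
# Sketch: stream control ticks while the command's final vowel is held.
import numpy as np

def control_stream(frames, energy_threshold=0.01):
    """frames: iterable of audio frames (np.ndarray) after the command
    word. Yields a tick while the vowel is sustained; stops on release."""
    for frame in frames:
        energy = float(np.mean(frame ** 2))
        if energy < energy_threshold:   # vowel released: stop the action
            break
        yield 1.0

# Usage idea: call e.g. robot.pour_step() once per yielded tick, so the
# pour continues exactly as long as the user holds the vowel.
```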
Operators of military unmanned aerial vehicles (UAVs) work in highly dynamic environments. They have to complete numerous tasks, sometimes simultaneously, while maintaining high situational awareness (SA) and making rapid decisions. Their main focus is on mission management via the UAV's payload, yet they continuously interact with the command and control (C2) map to obtain SA and make decisions. C2 maps, shared among forces in the environment, are cluttered and overloaded with information. We aim to develop a machine-learning-based spatial-temporal algorithm for the map display that identifies the information items most relevant to the UAV operator and delivers the right visualized information on the C2 map at the right time. Towards this algorithm, we conducted and analysed experiments for collecting user-based importance data, using a designated UAV C2 Experimental System (UCES) developed for this purpose. Results show high feasibility for the prediction model, allowing us to move forward with the next steps of the algorithm's development.
Online ads are often annoying, and ad blockers are a way to prevent ads from appearing on a web page. As a result, web service providers lose more than 35 billion dollars per year, and freely available content on the web is at risk. Taking the interests of both web service providers and users into account, we present a gamified ad blocker that allows users to drag a virtual monster over ads to eat them and make them disappear. For each deactivated ad, users receive ad-free time that they can take whenever they want. We report findings from a pre-study establishing requirements for the implementation of the ad blocker, as well as the results of a usability test of our prototype. As a next step, we will release the extension in the Chrome Web Store for upcoming in-the-wild studies.
Co-located games that bring players together have strong potential for supporting children's collaborative competencies. However, it is a challenge to make results from related research within the Child-Computer Interaction (CCI) field easily transferable to future CCI research. Pursuing this challenge, we combined levels of Collaborative Activity (CA) with the design tool of gameplay design patterns (GDPs). This combination was used to support comparative play tests of a co-located game with children who have learning difficulties. We report our observations on using this approach, arguing that the possibility of making patterns based on CA concepts such as Reflective Communication points towards collaborative GDPs. Furthermore, this study presents an exemplar showing that, as a flexible and extensible tool, GDPs can be used with different theories and models in the CCI field.
There is a need to re-design entertainment systems for older adults, incorporating this age group into digital culture. With this aim in mind, this work presents an intergenerational experience carried out in an Interactive Space where tangible and gesture interaction are used to participate in pervasive gaming experiences. The experience makes use of a game initially designed just for children, but built in a flexible way so that it can be tailored to different players' characteristics. Family groups made up of one or two grandparents and one or two grandchildren played The Fantastic Journey together, fulfilling all the missions either on tangible tabletops, by moving around the space, or by interacting through gestures. The experience was positively valued by both age groups; they were indeed happy with the opportunity to play together in a challenging game. Nevertheless, the difficulty of designing engaging experiences for both age groups points to a challenging research area.
Finding and maintaining the right level of challenge with respect to the individual abilities of players has long been a focus of game user research (GUR) and game development (GD). The right difficulty balance is usually considered a prerequisite for motivation and a good player experience. Dynamic difficulty adjustment (DDA) aims to tailor difficulty to individual players, but most deployments are limited to heuristically adjusting a small number of high-level difficulty parameters and require manual tuning over iterative development steps. Informing both GUR and GD, we compare an approach based on deep player behavior models, which are trained automatically to match a given player and can encode complex behaviors, with more traditional strategies for determining non-player character actions. Our findings indicate that deep learning has great potential in DDA.
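As a rough illustration of what a deep player behavior model can look like, here is a minimal sketch: a small network that predicts a player's next action from recent state features, which a DDA controller could query when choosing non-player character responses. The architecture, dimensions, and usage are hypothetical, not the paper's actual model.

```python
# Sketch: a toy deep player behavior model for DDA. A controller could
# pick NPC actions that counter the predicted player action, or soften
# difficulty when the prediction is uncertain. Dimensions are made up.
import torch
import torch.nn as nn

class PlayerBehaviorModel(nn.Module):
    def __init__(self, state_dim=16, n_actions=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, state):              # state: (batch, state_dim)
        return self.net(state)             # logits over player actions

model = PlayerBehaviorModel()
state = torch.randn(1, 16)                 # hypothetical game-state features
probs = torch.softmax(model(state), dim=-1)
print(probs)                               # predicted action distribution
```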
Scratch users often struggle to detect and correct 'code smells' (bad programming practices) such as duplicated blocks and large scripts, which can make programs difficult to understand and debug. These 'smells' can be caused by a lack of abstraction, a skill that plays a key role in computer science and computational thinking. We created Pirate Plunder, a novel educational block-based programming game that aims to teach children to reduce smells by reusing code in Scratch. This work describes an experimental study designed to measure the efficacy of Pirate Plunder with children aged 10 and 11. We found that children who played the game were subsequently able to use custom blocks (procedures) to reuse code in Scratch, in contrast to non-programming and programming control groups.
We analyze hate speech toward MENA (Middle East and North Africa) players as a form of toxic behavior in League of Legends in-game and forum chats. We find that this kind of toxicity: (1) is initiated by one or two players; (2) is sparked by criticism of team members' skills; (3) can be elevated by frustration with game elements and hardware; and (4) can turn into personal clashes. There is also non-toxic use of abusive language, which stresses the importance of context-aware analysis (i.e., interpreting what is actually toxic). Finally, we find evidence that the type of toxicity varies by server location, advising gaming companies to consider the location of players when setting up policies to mitigate hate speech.
The growing maker movement has created a number of hardware and construction toolkits that lower the barriers to entry into programming for youth and others, using a variety of approaches such as gaming or robotics. Many constructionist-like kits that use gaming focus on designing and programming single-player games, and few explore physical, craft-like approaches that move beyond the screen and single-player experiences. Moving beyond the screen to incorporate physical sensors into the creation of gaming experiences provides new opportunities for learning about concepts across computer science and making. In this early work, we elucidate our design goals and prototype for a mini-arcade system that builds upon principles in constructionist gaming (making games to learn programming) as well as physical computing.
This paper presents a game-like experience that translates tactile input into text capturing the emotional qualities of that touch. We describe the experience and the system that generates it: a plush toy instrumented with pressure sensors, a machine learning method that acquires a mapping from touch data to a feature vector of affect values, and a mechanism that transcribes that feature vector into text. We conclude by discussing the range of novel interactions that such a nuanced tactile interface can support.
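The three-stage pipeline can be sketched end to end: sensor frame to affect vector to text. The toy regressor, the synthetic training labels, and the phrase lookup below are hypothetical stand-ins for the paper's trained models; only the pipeline shape mirrors the description.

```python
# Sketch: pressure frame -> (valence, arousal) -> text. Training data,
# model choice, and phrases are all hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 8))     # 8 pressure sensors per frame
y = np.c_[X.mean(axis=1) * 2 - 1,        # fake valence label in [-1, 1]
          X.max(axis=1)]                 # fake arousal label in [0, 1]
affect_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

def transcribe(touch_frame):
    valence, arousal = affect_model.predict(touch_frame.reshape(1, -1))[0]
    if arousal > 0.8:
        return "You squeeze me tight!" if valence > 0 else "Ouch, too rough!"
    return "That feels gentle." if valence > 0 else "A hesitant pat..."

print(transcribe(rng.uniform(0, 1, 8)))
```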
User testing is commonly employed in games user research (GUR) to understand the experience of players interacting with digital games. However, recruitment and testing with human users can be laborious and resource-intensive, particularly for independent developers. To help mitigate these obstacles, we are developing a framework for simulated testing sessions with agents driven by artificial intelligence (AI). Specifically, we aim to imitate the navigation of human players in a virtual world. By mimicking the tendency of users to wander, explore, become lost, and so on, these agents may be used to identify basic issues with a game's world and level design, enabling informed iteration earlier in the development process. Here, we detail our progress in developing a framework for configurable agent navigation and simple visualization of simulated data. Ultimately, we hope to provide a basis for a tool for simulation-driven usability testing in games.
The design of Voice User Interfaces (VUIs) has mostly focused on applications for adults, but VUIs offer potential advantages to young children by enabling concurrent interactions with the physical and social world. Current applications for young children focus on media playing, answering questions, and highly structured activities. There is an opportunity to go beyond these applications by using VUIs to support high-quality, creative social play. In this paper, we describe our first step in pursuing this opportunity through 24 design sessions guided by a partnership with eight 3- to 4-year-old children. In a social play setting, we learned that children wanted to physically interact with the voice agents and that VUIs could redirect behaviors and promote social interactions.
Game narrative and theming are ways in which game designers can affect player experience. In this work, we incorporate theories of emotion into game design to explore the relationship between aesthetic elements and player experience. We designed and playtested two differently themed variants of the game Outbreak: a 'Horror' and a 'Sanitized' version. We present preliminary findings about playing differently themed versions of the same game, which suggest that scary content can sustain interest throughout play and transform players' emotional response to uncertainty.
Envisioning cultural institutions as "social places" where visitors can "create, share, and connect to each other around the cultural heritage content" (as defined by Nina Simon), we explore how to design cultural group experiences that combine personal moments of reflection with social encounter. In previous work we proposed a storytelling game in which visitors conceive and narrate stories about the artworks, orchestrating group interactions according to the game phases. Playtesting with physical materials revealed promising potential, cultivating theatrical narrations, lively discussions, and fruitful social interactions. Here we present a mobile-based group experience design for gamified cultural visits with performative elements, leveraging the trajectories HCI framework. We highlight the key role, interface, and space transitions encountered in the experience, elaborate on the adopted design choices, and reflect on main challenges and future directions.
Lack of adequate support for navigation and object detection can limit the independence of visually impaired (VI) people in their daily routines. Common solutions include white canes and guide dogs. White canes are useful for object detection, but require physically touching objects with the cane, which may be undesired. Guide dogs allow navigation without touching objects in the vicinity, but cannot help with object detection. Addressing this gap with a user-centric research approach, we aim to find a solution that improves the independence of VI people. We began by gathering requirements through online questionnaires. Working from these, we built a prototype of a glove that alerts its user when an obstacle is detected at the pointed position; we call this EyeR. Lastly, we evaluated EyeR with VI users and found that, in use, our prototype provides real-time feedback and is helpful for navigation. We also report our participants' recommendations for future VI prototypes; they would additionally like the device to recognise objects.
We present a smartphone application for at-home participation in large-scale neuroscientific studies. Our goal is to establish user-experience design as a paradigm in basic neuroscientific research to overcome the limits of current studies, especially for rare neurological disorders. The presented application guides users through the fitting procedure of the EEG headset and automatically encrypts and uploads recorded data to a remote server. User feedback and neurophysiological data from a pilot study with eighteen subjects indicate that the application can be used outside a laboratory without the need for external guidance. We hope to inspire future work at the intersection of basic neuroscience and human-computer interaction as a promising paradigm to accelerate research on rare neurological diseases and assistive neurotechnology.
This paper introduces ReMiND, a wearable for emotional awareness and mindfulness that targets individuals in recovery. Through a human-centered design process, working in conjunction with twelve-step fellowship groups, we conceived ReMiND as a tool to help those in recovery improve emotional awareness, reduce isolation, form new habits, and positively cope with new challenges.
Despite the increased demand, popularity, and cultural significance of streaming media and digital entertainment, many individuals with disabilities are unable to experience this content. Specifically, many video streaming technologies require input devices and content browsers that are inaccessible to individuals with sensory and physical impairments and that do not work with their current assistive technologies. Our team of engineers, designers, and clinicians took an inclusive approach to assessing and redesigning these streaming service products, with the aim of creating more universally accessible experiences. We recruited 9 participants with diverse abilities to evaluate the accessibility of a large telecommunication company's commercially available video streaming products. This evaluation revealed significant accessibility barriers and informed a participatory design activity to create accessible remote controls, an onboarding assistance prototype, and a content browsing prototype that is screen reader accessible and supports audio descriptions. We evaluated these accessible prototypes with 11 additional participants and found they were more accessible, flexible, and enjoyable to use compared to the off-the-shelf products. In this paper we summarize these findings and discuss how future streaming technology must support customization and follow established accessibility guidelines and standards.
In eHealth, engagement is viewed as an important factor in explaining why interventions are beneficial to some and not to others. However, a shared understanding of what engagement is and how to measure it is missing. This paper presents the set-up of the development and initial validation of a new scale to measure engagement with eHealth interventions, based on scientific literature and interviews with users and experts. Furthermore, it presents the preliminary results of a systematic review, 11 interviews with engaged users, and the first version of the new engagement scale. We expect that the final scale, grounded in theoretical and empirical research and focused on the different components of engagement, will enable researchers to investigate which features and forms of technology influence individuals' engagement, thereby paving the way for more engaging technologies.
While clothing is one of the requisites of human life, visually impaired people face limitations when shopping and need to rely on acquaintances or shop assistants to choose and purchase clothes. Given that fashion is a means of self-expression, their independence in expressing themselves has been limited. In this work, we sought to provide technological solutions that help visually impaired people distinguish the color of clothes while shopping. After conducting exploratory user studies, we derived design requirements and developed a voice-guided mobile application that supports them in recognizing and choosing the colors of clothing.
In this paper we present the results of an exploratory study examining the potential of voice assistants (VAs) for certain groups of older adults in the context of Smart Home Technology (SHT). To study older adults' interaction with voice user interfaces (VUIs), we organized two workshops and gathered insights concerning possible benefits of, and barriers to, older adults' use of VAs combined with SHT. Apart from evaluating the participants' interaction with the devices during the two workshops, we also discuss some improvements to the VA interaction paradigm.
How can people learn advanced motor skills such as front flips and tennis swings without starting from a young age? The answer, following the work of Masters et al., we believe, is implicitly. Implicit learning is associated with higher retention and knowledge transfer, yet it yields knowledge that cannot be explicitly articulated as a set of rules. Implicit learning is difficult to achieve, but may be induced using obscured feedback: feedback that provides so little information that a user does not overfit a mental model of their target action. We have designed an auditory feedback system, AUFLIP, that describes high-level properties of an advanced movement using a simplified and validated physics model of the flip. We further detail the implementation of a wearable system, an optimized placement procedure, and a takeoff capture strategy to realize this model. With an audio cue pattern that conveys this high-level, obscured objective, the system is integrated into a gymnastics training environment where professional coaches teach novice adults to perform front flips. We performed a pilot user study training front flips, evaluated using a matched-pair comparison.
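To give a flavour of what a simplified flip physics model can look like, here is a worked sketch: treat the athlete as a projectile who must complete one full rotation while airborne, so the takeoff vertical velocity fixes the minimum mean angular velocity. The formulation and numbers are illustrative; AUFLIP's actual model is not reproduced here.

```python
# Worked sketch of a minimal flip model: flight time from projectile
# motion, then the mean angular velocity needed for one 2*pi rotation.
import math

G = 9.81  # gravitational acceleration, m/s^2

def required_angular_velocity(takeoff_vertical_velocity: float) -> float:
    flight_time = 2 * takeoff_vertical_velocity / G   # up and back down
    return 2 * math.pi / flight_time                  # rad/s for one flip

v_up = 3.0                                            # m/s, hypothetical
print(f"{required_angular_velocity(v_up):.1f} rad/s") # ~10.3 rad/s
```

An obscured audio cue could then convey only whether this high-level quantity was roughly met, rather than prescribing limb positions.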
This extended abstract describes DryNights, a bedwetting behaviour change support system concept consisting of a self-powered sensor and a mobile application, developed in collaboration with LifeSense Group (a spinoff of Imec, Holst Centre) and Eindhoven University of Technology. The sensor uses the principle of an electrochemical cell to generate its own electricity and to power a harmless, wireless signal transmission from the sensor to a mobile device. The mobile application has been designed in collaboration with children (N=75) from the target group. The reliability of the signal transmission and the range of the sensor have been successfully evaluated in a small-scale experiment. Trials with children wearing the sensor to sleep are currently underway, and suggest that DryNights is comfortable and that children experience it positively.
Involving end-users is considered key to the successful design of technology. It can be challenging, however, to involve end-users when designing healthcare technology, due to the limited availability of patients because of their condition or treatment. This is especially difficult when co-designing healthcare technology, which often requires several end-users to collaborate in group activities such as ideation exercises and brainstorms. In an exploration of co-design methods that do not require participants to be co-located, this paper describes initial results from a small-scale ideation workshop in the context of fertility treatment. Preliminary data analysis suggests that user-generated card-based ideation can inspire ideation while transferring knowledge and ideas between participants who are not co-located. This approach could benefit researchers in healthcare technology settings that use co-design, and in other domains where the availability of participants is limited.
Alzheimer's disease (AD), the most common type of dementia, is characterised by gradual memory loss. There is an increasing global research effort into strategies for early clinic-based diagnosis at the stage where patients present with mild memory problems; initiating treatment at this stage would slow the progression of the condition and enable more years of good quality life. This paper presents the ongoing development of an augmented reality system using HoloLens that is designed to test for early onset of Alzheimer's disease. The most important aspects of early AD diagnostics are the symptoms connected with early memory loss, in particular spatial memory. The ability to store and retrieve the memory of a particular event involving an association between items, such as place and object properties, is exercised within a game environment.
Prolonged sedentary behavior contributes to many chronic diseases, and an appropriate reminder could help screen-based workers reduce their prolonged sedentary behavior. The fixed-duration point-of-choice prompt has been frequently used in related work, but this prompting approach has several drawbacks. In this paper, we propose SedentaryBar, a context-aware reminding system that uses an always-on progress bar showing the duration of a working session, as an alternative to the prompt. The system combines users' keyboard/mouse events with a state-of-the-art computer vision algorithm running on the webcam to detect the user's presence, making it more accurate and intelligent. Our evaluation study compared SedentaryBar and the prompt using subjective and objective measurements. After using each method for a week, more participants preferred SedentaryBar, and participants' perceived interruption and usefulness also favored SedentaryBar. However, the logged data on participants' working durations indicated the prompt was more effective in reducing sedentary behavior.
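A minimal sketch of the presence-detection idea follows. The paper does not specify its vision algorithm, so a Haar-cascade face detector stands in here as a common OpenCV baseline; the fusion rule, polling interval, and session logic are all hypothetical.

```python
# Sketch: track a working session by fusing webcam presence with (in the
# full system) keyboard/mouse events. Haar cascade is a stand-in model.
import time
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
session_start = time.time()

def face_present() -> bool:
    ok, frame = cap.read()
    if not ok:
        return False
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(face_cascade.detectMultiScale(gray, 1.3, 5)) > 0

while True:
    if face_present():                      # user still at the desk
        minutes = (time.time() - session_start) / 60
        print(f"Sitting for {minutes:.1f} min")  # drives the progress bar
    else:                                   # user left: reset the session
        session_start = time.time()
    time.sleep(30)                          # poll every 30 s
```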
A significant number of young Americans are vulnerable to excess weight gain, especially during the college years. While technology-based weight loss interventions have the potential to be very engaging, short-term approaches have shown limited success. In our work, we aim to better understand the impact of long-term, multimodal, technology-based weight loss interventions and to study their potential for greater effect among college students. In this paper we lay the basis for our approach towards a multimodal health intervention for young adults: we present formative work based on interviews and a design workshop with 26 young adults. We discuss our intervention at the intersection of user feedback, empirical evidence from previous work, and behavior change theory.
Family member involvement has been shown to be key to the well-being and recovery of patients in an Intensive Care Unit (ICU), but family members often find themselves overwhelmed and in an emotionally heightened state. ICU care teams, especially nurses, are typically considered to be in the best position to support family members of patients. However, heavy workloads, lack of time, and personal interaction styles can make it difficult for them to be receptive to family members' needs. To understand how current aids in the ICU are used and the challenges associated with them, we conducted 22 interviews with both family members and the care team. We also created prototypes of family-centered aids through a co-design session, revealing opportunities for technology to facilitate family member support in the ICU without adding burdens on the care team.
Researchers use Asynchronous Remote Communities (ARC) to reach target populations who may find it hard to meet in person or to make time for telephone interviews. So far, ARC studies have been conducted using closed, secure groups on Facebook, because most participants are active members of this social network. However, it is not clear how participants' Facebook usage might affect their engagement with an ARC study. In this paper, we report a secondary analysis of a recent ARC study, focused on information and social support needs, of women who had experienced at least one miscarriage. We find that participants tend to be comfortable seeking emotional support on Facebook, and that even those who say they rarely post to Facebook engage with most group activities. We discuss implications for choosing platforms for ARC studies.
Smart textiles, a subset of wearable technologies, inspire novel interactions that leverage people's physical and cognitive actions. However, people with intellectual and developmental disabilities (IDD) have different communication, social, and sensory experiences than non-disabled people. We empirically investigate how adults with IDD determine the functionality of smart textiles and which smart textile qualities impact comfort. We demonstrate a research design method that elicits social interaction among adults with IDD. We found that smart textiles facilitate embodied and social interactions among adults with IDD. We also present an emerging design space based on smart textile capabilities and user needs.
Inserting emojis can be cumbersome when users must swipe through panels. From our survey, we learned that users often use a series of consecutive emojis to convey rich, nuanced non-verbal expressions such as emphasis, changes of expression, or micro-stories. We introduce MojiBoard, an emoji entry technique that enables users to generate dynamic parametric emojis from a gesture keyboard. With MojiBoard, users can switch seamlessly between typing and parameterizing emojis.
This paper describes progress in the design and development of a new digital musical instrument (MIDI controller), MoveMIDI, and highlights its unique 3D positional movement interaction design, which differs from recent orientational and gestural approaches. A user constructs and interacts with MoveMIDI's virtual 3D interface using handheld position-tracked controllers to control music software, as well as non-musical technology such as stage lighting. MoveMIDI's virtual interface helps solve problems that are difficult to address with hardware MIDI controller interfaces, such as customized positioning and instantiation of interface elements, and accurate, simultaneous control of independent parameters. MoveMIDI's positional interaction mirrors interaction with some physical acoustic instruments and provides visualization for an audience. Beta testers of MoveMIDI have created emergent use cases for the instrument.
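The core mapping, tracked 3D position to MIDI parameters, can be sketched in a few lines. The sketch below uses the mido Python library; the port, controller numbers, and normalization are hypothetical and not taken from MoveMIDI itself.

```python
# Sketch: map a normalized 3D hand position to three MIDI control-change
# messages, one per axis. Requires a MIDI backend (e.g. python-rtmidi).
import mido

out = mido.open_output()                 # default MIDI output port

def send_position(x: float, y: float, z: float):
    """x, y, z in 0..1; each axis drives one hypothetical CC parameter."""
    for cc, value in ((20, x), (21, y), (22, z)):
        out.send(mido.Message("control_change",
                              control=cc, value=int(value * 127)))

send_position(0.5, 0.9, 0.1)             # e.g. filter, volume, reverb depth
```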
We are surrounded by an increasing number of smart and networked devices. Today much of this technology is enjoyed by gadget enthusiasts and early adopters, but in the foreseeable future many people will become dependent on smart devices and Internet of Things (IoT) applications, desired or not. To support people with various levels of computer skill in mastering smart appliances, as found, e.g., in smart homes, we propose the 'magic paradigm' for programming networked devices. Our work can be regarded as a playful 'experiment' in democratizing IoT technology. It explores how we can program interactive behavior through simple pointing gestures using a tangible 'magic wand'. While the 'magic paradigm' removes barriers to programming by forgoing conventional coding, it simultaneously raises questions about complexity: what kinds of tasks can be addressed by this kind of 'tangible programming', and can people handle it as tasks become complex? We report the design rationale of a prototypical instantiation of the 'magic paradigm', including preliminary findings from a first user trial.
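One plausible reading of what a pointing sequence could produce is a trigger-action rule: point first at a sensing device, then at an acting device, and the wand records a binding. The sketch below is a hypothetical illustration of that idea; the device names, defaults, and event model are not from the paper.

```python
# Sketch: the rule a two-point wand gesture might record. Everything
# here (names, defaults, event vocabulary) is hypothetical.
from dataclasses import dataclass

@dataclass
class Rule:
    trigger_device: str
    trigger_event: str
    action_device: str
    action: str

def wand_sequence(first_pointed: str, second_pointed: str) -> Rule:
    # Default semantics: "activated" on the trigger toggles the target.
    return Rule(first_pointed, "activated", second_pointed, "toggle")

rule = wand_sequence("door_sensor", "hallway_lamp")
print(rule)   # Rule(trigger_device='door_sensor', ..., action='toggle')
```

The complexity question the abstract raises shows up immediately in such a sketch: conditions, delays, and multi-device rules have no obvious two-point gesture.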
Mixed Reality (MR) enables users to explore scenarios not realizable in the physical world, allowing them to communicate with the help of digital content. We investigate how different configurations of participants and content affect communication in a shared immersive environment. We designed and implemented side-by-side, mirrored face-to-face, and eyes-free configurations in our multi-user MR environment, and conducted a preliminary user study of the mirrored face-to-face configuration, evaluating it with respect to one-to-one interaction, smooth focus shifts, and eye contact within a 3D presentation using the interactive Chalktalk system. We provide experimental results and interview responses.
We propose "Ohmic-Sticker", a novel force-to-motion input device that extends capacitive touch surfaces. It realizes various types of force-sensitive input when attached to commercial capacitive touch surfaces. A simple force-sensitive-resistor (FSR)-based structure enables thin (less than 2 mm) form factors and battery-less operation. The applied force vector is detected as leakage current from the corresponding touch surface electrodes using "Ohmic-Touch" technology. Ohmic-Sticker can be used to add force-sensitive interactions to touch surfaces, such as analog push buttons, TrackPoint-like pointing devices, and full 6-DoF controllers for navigating virtual spaces.
We present a one-handed rapid text selection and command execution method for smartphones, which we term Press & Slide. The user performs caret navigation or text selection by sliding a finger on the software keyboard after pressing a key; then, releasing the key executes a command such as "copy the selected text", where the command is specified by the key that was pressed. The user therefore never needs to touch the text, so the fat-finger problem does not arise and the user need not change their grip on the smartphone.
We describe an experiment conducted with three domain experts to understand how well they can recognize types and performance stages of activities using speech data transcribed from verbal communications during dynamic medical teamwork. The insights gained from this experiment will inform the design of an automatic activity recognition system to alert medical teams to process deviations in real time. We contribute to the literature by (1) characterizing how domain experts perceive the dynamics of activity-related speech, and (2) identifying the challenges associated with system design for speech-based activity recognition in complex team-based work settings.
Delays in response to mobile messages can cause negative emotions in message senders and can affect an individual's social relationships. Recipients, too, feel a pressure to respond even during inopportune moments. A messaging assistant which could respond with relevant contextual information on behalf of individuals while they are unavailable might reduce the pressure to respond immediately and help put the sender at ease. By modelling attentiveness to messaging, we aim to (1) predict instances when a user is not able to attend to an incoming message within reasonable time and (2) identify what contextual factors can explain the user's attentiveness---or lack thereof---to messaging. In this work, we investigate two approaches to modelling attentiveness: a general approach, in which data from a group of users is combined to form a single model for all users; and a personalized approach, in which an individual model is created for each user. Evaluating both models, we observed that on average, with just seven days of training data, the personalized model can outperform the generalized model in terms of both accuracy and F-measure for predicting inattentiveness. Further, we observed that in the majority of cases, the messaging patterns identified by the attentiveness models varied widely across users. For example, the top feature in the generalized model appeared in the top five features for only 41% of the individual personalized models.
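To make the two modelling approaches concrete, here is a minimal Python/scikit-learn sketch of the generalized-versus-personalized setup described above. The feature names, the classifier choice, and the seven-day split are illustrative assumptions, not the paper's implementation.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# Hypothetical contextual features logged per incoming message.
FEATURES = ["hour_of_day", "screen_on", "ringer_mode", "motion"]

def train_eval(train, test):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(train[FEATURES], train["inattentive"])
    return f1_score(test["inattentive"], clf.predict(test[FEATURES]))

def compare(df: pd.DataFrame):
    # Generalized: pool all users' data into one model.
    train, test = df[df["day"] < 7], df[df["day"] >= 7]
    print("generalized F1:", train_eval(train, test))
    # Personalized: one model per user, trained on that user's data only.
    for uid, u in df.groupby("user_id"):
        print(f"user {uid} F1:",
              train_eval(u[u["day"] < 7], u[u["day"] >= 7]))
```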
Researchers have designed technologies for and with older adults to help them age in place, but there is an opportunity to support older adults in creating customized smart devices for themselves through electronic toolkits. We developed a plan for iterating on Craftec - one of the first electronic toolkits designed for older adults - informed by the results of a participatory design workshop and user evaluation. We focused on supporting older adults to create exemplar artifacts, such as medication adherence systems. We contribute the exemplars and the current plan for components of the Craftec system as a way to support older adults to design technology for themselves.
Since non-speech sounds can convey urgency well, they have been used as alerts in the vehicle context, including control transitions (handover and takeover) in automated vehicles. However, their potential for use in international standards has not been fully investigated. To contribute to the development of such standards, the present paper investigated the effects of various non-speech displays to further refine auditory variables. Twenty-four young drivers drove in a driving simulator featuring both handover and takeover transitions between manual and automated modes while performing a secondary task. We report reaction times for handover and takeover, along with results from a sound user-experience questionnaire, and discuss implications and future work.
Take-over is one of the most crucial user interactions in semi-automated vehicles. To improve communication between driver and vehicle, research has been conducted on various take-over request displays, yet their potential has not been fully investigated. The present paper explored the effects of adding auditory displays to visual text. Earcon and speech displays showed the best performance and acceptance, and spearcon the least. This study is expected to provide basic data and guidelines for future research and design practice.
This work investigates classification of emotions from MoCap full-body data using Convolutional Neural Networks (CNNs). Rather than addressing regular day-to-day activities, we focus on a more complex type of full-body movement: dance. For this purpose, a new dataset was created which contains short excerpts of the performances of professional dancers who interpreted four emotional states: anger, happiness, sadness, and insecurity. Fourteen minutes of motion capture data are used to explore different CNN architectures and data representations. The results of the four-class classification task reach up to 0.79 (F1 score) on test data from other performances by the same dancers. Hence, through deep learning, this paper proposes a novel and effective method of emotion classification, which can be exploited in affective interfaces.
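As a rough illustration of the kind of model involved, the following PyTorch sketch shows a small 1D CNN over MoCap sequences for a four-class task. The architecture, channel count (21 joints x 3 coordinates), and hyperparameters are assumptions for illustration and do not reproduce the architectures explored in the paper.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_channels=63, n_classes=4):  # e.g. 21 joints x 3 coords
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time dimension
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):              # x: (batch, channels, frames)
        return self.classifier(self.features(x).squeeze(-1))

model = EmotionCNN()
logits = model(torch.randn(8, 63, 120))  # 8 clips of 120 frames each
```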
Training employees on workplace procedures in virtual environments (VEs) is becoming popular since it reduces cost and risks. Although haptic enhancements with force feedback make such VEs more realistic and increase performance, such enhancements are only available for 'spatial' scenarios. One potential enhancement for low-cost VEs is electrical muscle stimulation (EMS), but it remains open how EMS can be used to support trainees. Therefore, we present WONDER: a virtual training environment with an EMS feedback layer. In an initial study, we show the feasibility of the approach and that it can successfully support trainees in remembering workflows. We test feedback that supports participants by pushing their hand towards a button or pulling their hand away from it. Participants preferred a combination of both feedback types.
Efficient text entry is essential to any computing system. However, text entry methods in virtual reality (VR) currently lack the predictive aid and physical feedback that allow users to type efficiently. State-of-the-art methods, such as physical keyboards with tracked hand avatars, require a complex setup that might not be accessible to the majority of VR users. In this paper, we propose two novel ways to enter text in VR: 1) word-gesture typing using six-degrees-of-freedom (6DOF) VR controllers, and 2) word-gesture typing using pressure-sensitive touchscreen devices. Our early-stage pilot experiment shows that users were able to type at 16.4 WPM and 9.6 WPM with the two techniques, respectively, without any training, while an expert's typing speeds reached up to 34.2 WPM and 22.4 WPM. Users subjectively preferred the VR controller method over the touchscreen one in terms of usability and task load. We conclude that both techniques are practical and deserve further study.
Advertisers have optimized the periphery of our attention to drive complex purchasing behavior, typically using persuasive or rhetorical techniques to promote decisions that are agnostic to our best interests. Instead of serving the ambitions of companies with large marketing budgets, what if these techniques were used to reinforce the behaviors and attachments we choose for ourselves? YourAd is an open-source browser extension and design tool that allows users to supplant their internet ads with custom replacements, designed by and for themselves. YourAd incorporates industry best practices into a platform to facilitate experimentation with user-aligned advertisement ecosystems, probe the limits of their influence, and optimize their design in support of an end user's personal aspirations.
In the last few years, interest in virtual reality has been increasing, partly due to the emergence of cheaper and more accessible hardware and the increase in available content. One possible application of virtual reality is to lead people into seeing situations from a different perspective, which can help change beliefs. The work reported in this paper uses virtual reality to help people better understand paralympic sports by allowing them to experience the sports' world from the athletes' perspective. For the creation of the virtual environment, both computer-generated elements and 360 video are used. This work focused on wheelchair basketball, and a simulator of this sport was created using a game engine (Unity 3D). For the development of this simulator, computer-generated elements were built, and the interaction with them implemented. User studies were conducted to evaluate the sense of presence, motion sickness, and usability of the system developed. The results were positive, although some aspects still need improvement.
Programming has the potential to bring to life that which is most minute in man's imagination. Imagination, however, it will all remain if no appropriate intervention is made to facilitate the learning of programming. Research studies show that the traditional, face-to-face method of teaching does not provide an enabling environment for learning programming. Hence, outside-classroom intervention is called for. Certain previous studies have tried to build new tools to support outside-classroom intervention. However, there is a need to study the use of existing, familiar, and relaxing environments, such as social media, for this intervention. In this paper, we investigate the capability of a social media platform to support the learning of programming among learners in the developing world. We chose the WhatsApp platform as the starting point to uncover these design needs. We also reflect on the lessons learnt using this intervention.
Our traditional interaction possibilities have centered around our electronic devices. In recent years, progress in electronics and material science has enabled us to go beyond the chip layer and work at the substrate level. This has helped us rethink form, sources of power, hosts, and in turn new interaction possibilities. However, the design of such devices has mostly been ground-up and fully synthetic. In this paper, we discuss the analogy between artificial functions and natural capabilities in plants. Through two case studies, we demonstrate bridging unique natural operations of plants with the digital world. Each desired synthetic function is grown, injected carefully, or placed in conjunction with a plant's natural functions. Our goal is to make use of the sensing and expressive abilities of nature for our interaction devices. Merging synthetic circuitry with a plant's own physiology could pave the way to making these lifeforms responsive to our interactions and to their ubiquitous, sustainable deployment.
Interactive spherical displays offer unique educational and entertainment opportunities for both children and adults in public spaces. However, designing interfaces for spherical displays remains difficult because we do not yet fully understand how users naturally interact with and collaborate around spherical displays. This paper reports current progress on a project to understand how children (ages 8 to 13) and adults interact with spherical displays in a real-world setting. Our initial data gathering includes an exploratory study in which children and adults interacted with a prototype application on a spherical display in small groups in a public setting. We observed that child groups tended to interact more independently around the spherical display, whereas adult groups interacted with the sphere in a driver-navigator mode and did not tend to walk around the sphere. This work will lay the groundwork for future research into designing interactive applications for spherical displays tailored towards users of all age groups.
In this paper, we discuss concepts for improving the beverage experience using electrical stimulation in terms of the requirements for the procedure (design space), previous study (completion), and the limitations of conventional technologies. To improve the beverage experience, electrical stimulation has been indicated as a method for changing a beverage's taste. However, the taste of a beverage changes temporally: its taste during drinking differs from its taste after swallowing. There are methods of evaluating the taste of a beverage in a time-series manner, such as the Time-Intensity (TI) method and the Temporal Dominance of Sensations (TDS) method. Therefore, it is important to focus on the temporal change in a beverage's taste when seeking to improve the beverage experience. Thus, we focused on the taste before and after swallowing as a first step. Based on this review, we propose the concept of controlling the temporal change of a beverage's taste using electrical stimulation.
The process of qualitative coding often involves multiple coders coding the same data to ensure reliable codes and a consistent understanding of the codebook. One aspect of qualitative coding includes resolving disagreements, where coders discuss differences in coding to reach a consensus. We conduct a case study to evaluate four strategies of disagreement resolution and understand their impact on the coding process. We find that an open discussion and the n-ary tree metric lead coders to focus more on the disagreement of a particular data instance, whereas kappa values and Code Wizard direct coders to compare code definitions. We discuss opportunities for using different strategies at different stages of the coding process for more effective disagreement resolution.
Can interactions between automated vehicles and pedestrians be evaluated in a quantifiable and standardized way? In order to answer this, we designed an input device in the form of a continuous slider that enables pedestrians to indicate their willingness to cross a road and their feeling of safety in real time in response to an approaching vehicle. In an initial field study, 71% of the participants reported that they were able to use the device naturally and indicate their feeling of safety satisfactorily. The feeling-of-safety slider can consequently be used to evaluate and benchmark interactions between pedestrians and vehicles, and compare communication interfaces for automated vehicles.
Work in social psychology on interpersonal interaction has demonstrated that people are more likely to comply with a request if they are presented with a justification - even if this justification conveys no information. In light of the many calls for explaining the reasoning of interactive intelligent systems to users, we investigate whether this effect holds true for human-computer interaction. Using a prototype of a nutrition recommender, we conducted a lab study (N=30) between three groups (no explanation, placebic explanation, and real explanation). Our results indicate that placebic explanations for algorithmic decision-making may indeed invoke perceived levels of trust similar to real explanations. We discuss how placebic explanations could be considered in future work.
The abundance of automatically-triggered lifelogging cameras is a privacy threat to bystanders. Countering this by deleting photos limits relevant memory cues and the informative content of lifelogs. An alternative is to obfuscate bystanders, but it is not clear how this impacts the lifelogger's recall of memories. We report on a study in which we compare the effects on memory recall of viewing 1) unaltered photos, 2) photos with blurred people, and 3) a subset of the photos after deleting private ones. Findings show that obfuscated content helps users recall a large amount of content, but it also results in recalling less accurate details, which can sometimes mislead the user. Our work informs the design of privacy-aware lifelogging systems that maximize recall, and steers discussion about ubiquitous technologies that could alter human memories.
Physical movement is a dominant element in robot behavior. We evaluate whether robotic movements are automatically interpreted as social cues, even if the robot has no social role. 24 participants performed the Implicit Associations Test, classifying robotic gestures into direction categories ("to-front" or "to-back") and words into social categories (willingness or unwillingness for interaction). Our findings show that social interpretation of the robot's gestures is an automatic process. The implicit social interpretation influenced both classification tasks and could not be avoided even when it decreased participants' performance. This effect is of importance for the HCI community, as designers should consider that even if a robot is not intended for social interaction (e.g. a factory robot), people will not be able to avoid interpreting its movement as social cues. Interaction designers should leverage this phenomenon and consider the social interpretation that will be automatically associated with their robots' movement.
The reproducibility crisis refers to the inability to reproduce scientific experiments and is one of science's great challenges. Alarming reports and growing public attention are leading to the development of services and tools that aim to support key reproducible practices. In the face of this rapid evolution, we envision the unique opportunity for Human-Computer Interaction to impact scientific practice through the systematic study of requirements and moderating effects of technology on research reproducibility. In this paper, we report on the current state of technological and human factors in reproducible science and present challenges and opportunities for both HCI researchers and practitioners to understand, support and motivate core practices.
Disabled people face many barriers to access in all areas of life, including creative expression. With music making, a lack of accessible instruments can be a major barrier, as well as environmental factors. The Strummi is an accessible instrument based on the guitar, designed as a technology probe to explore the technical and cultural role of guitar-like design and interaction modality. We have been collaborating with Heart n Soul, an arts charity that works with young people and adults with learning disabilities. In this paper, we share findings from the first year of this collaboration, and reflect on the implications for doing HCI research with learning-disabled communities. We took a longitudinal, situated approach with an intentionally simple technology, inspired by in-the-wild and technology probe methodologies, allowing interest in the Strummi to grow organically.
Wearable sport technologies and activity trackers help sportspeople by providing physiological information on their performance. However, professional sportspeople find this information irrelevant due to their high-performance training. They want these devices to provide real-time assistive feedback on their performance, despite the formidable limitations suggested by previous research on giving such feedback. On the other hand, sport coaches already give performance feedback to their sportspeople during their performance. We speculated that some of their approaches might give clues for designing activity trackers with useful real-time performance feedback. Consequently, we interviewed six elite tennis coaches to explore their approaches to communicating performance information to their players during tennis games. In this paper, we discuss the findings by comparing them with related work and derive two design insights for giving real-time performance feedback that might lead to novel approaches for activity trackers.
This paper presents a first version of a set of insights developed collaboratively by researchers during a three-year participatory design project spread across four European locations. The MAZI project explored potential uses of a "Do-It-Yourself" WiFi networking technology platform. Built using low-cost Raspberry Pi computer hardware and specially developed, open-source software, this toolkit has the potential to enable hyper-local applications and services to be developed and maintained within a host community for its own use. The nine insights are a distillation and articulation of the collective reflections of the project partners gained from their experiences of working in diverse settings with varied communities and stakeholders. In this paper, we discuss the reflective process, we present the insights to the CHI community in order to gain feedback, and we situate our findings within previous literature.
This paper describes a process of collaborative sense-making through co-designing a reflective poster. We used this method in the 'Empowering Hacks' project, which brought together two non-academic individuals with disabilities (authors 2 & 3) and a non-disabled HCI researcher (author 1) around DIY/Making by, for, and with people with disabilities. To collectively review the achievements and challenges we experienced in this project, we designed a timeline which allowed us to engage equally in reflective thinking and curatorial discussions on how to present and explain identified key moments. We see this co-created poster as an opportunity to discuss with the CHI community the potential and relevance of including research participants in analysis processes.
Advances in AI are paving the way towards more natural interactions, blurring the line between bot and human. We present findings from a two-week diary study exploring users' interactions with the chatbot Replika. In particular, we focus on how users anthropomorphize chatbots and how this influences their engagement. We find that failing to adhere to social norms and glaring signs of humanity lead to decreased engagement unless balanced appropriately.
Popular alarm apps offer task-based alarms that do not allow the user to dismiss an alarm unless they complete a specific task (e.g., solving math problems). Because such wake-up tasks cause discomfort, their usefulness and necessity could vary among individuals and their contexts. In this work, we aimed to understand the characteristics of Alarmy (a task-based alarm app) users who (dis)like wake-up tasks in terms of alarm set usage. We grouped 8,500 US users into three groups according to the proportion of alarms set with a task and investigated group-wise usage differences. We found significant usage differences among the groups in terms of (1) set frequency, (2) set time, and (3) set consistency, possibly caused by consistent needs and task difficulty. The results suggest promising directions for inconvenient interaction and behavior change research.
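A minimal pandas sketch of the grouping step, assuming a hypothetical log schema with one row per alarm and a boolean has_task column; the bin thresholds are illustrative, not the study's actual cut-offs.

```python
import pandas as pd

# Hypothetical schema: one row per alarm set, with a boolean has_task flag.
alarms = pd.read_csv("alarm_logs.csv")
task_ratio = alarms.groupby("user_id")["has_task"].mean()

# Three groups by proportion of task-based alarms; thresholds illustrative.
group = pd.cut(task_ratio, bins=[0, 1/3, 2/3, 1],
               labels=["rarely", "mixed", "mostly"], include_lowest=True)

# Compare a usage measure (here: set frequency) across the groups.
usage = alarms.groupby("user_id").agg(set_frequency=("alarm_id", "count"))
print(usage.join(group.rename("group")).groupby("group").mean())
```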
There is a growing need to support people in countering problematic smartphone use. We analyse related research on methods to address problematic usage and identify a research gap in off-device retraining. We ran a pilot to address this gap, targeting automatic approach biases for smartphones, delivered on a tabletop surface. Our quantitative analysis (n=40) shows that self-report and response-time based measures of problematic smartphone usage diverge. We found no evidence that our intervention altered reaction-time-based measures. We outline areas of discussion for further research in the field.
Listening and speaking are tied to human experiences of closeness and trust. As voice interfaces gain mainstream popularity, we ask: is our relationship with technology that speaks with us fundamentally different from technology we use to read and type? In particular, will we disclose more about ourselves to computers that speak to us and listen to our answers? We examine this question through a controlled experiment in which a conversational agent asked participants closeness-generating questions common in social psychology through either text-based or voice-based interfaces. We found that people skipped more invasive questions when reading-typing compared to speaking-listening. Surprisingly, typing in their answers seemed to increase the propensity for self-disclosure. This research has implications for the future design of voice-based conversational agents and deepens concerns about user privacy.
Building adaptive support systems requires a deep understanding of why users get stuck or face problems during a goal-oriented task and how they perceive such situations. To investigate this, we first chart a problem space comprising different problem characteristics (complexity, time, available means, and consequences). Second, we map them to LEGO assembly tasks. We apply these in a lab study (N=22) equipped with several tracking technologies (i.e., smartwatch sensors and an OptiTrack setup) to assess which problem characteristics lead to measurable consequences in user behaviour. After each task, participants rated the problems that had occurred. With this work, we suggest first steps towards a) understanding user behaviour in problem situations and b) building upon this knowledge to inform the design of adaptive support systems. As a result, we provide the GOLD dataset (Goal-Oriented Lego Dataset) for further analysis.
The replication crisis---a failure to replicate foundational studies---has sparked a conversation in psychology, HCI, and beyond about scientific reliability. To address the crisis, researchers increasingly adopt preregistration: the practice of documenting research plans before conducting a study. Done properly, preregistration should reduce bias from taking exploratory findings as confirmatory. It is crucial to treat preregistration, often an online form/template, as a user-centered design problem to ensure preregistration achieves its intended goal. To understand preregistration in practice, we conducted 14 semi-structured interviews with preregistration users (researchers) who ranged in seniority and experience. We identified two main purposes researchers have for using preregistration, in addition to different user roles and adoption barriers. With the ultimate goal of improving the reliability of scientific findings, we suggest opportunities to explicitly support the different aspects of preregistration use based on our findings.
Travel planning is increasingly done using assistive travel planning technologies. These technologies, however, tend to focus on the traveller as an individual, while travelling can often be a social endeavour involving other people. In order to explore the influence of other people on travelling behaviour, nineteen participants from the city of Ghent, Belgium, took part in a diary study and a subsequent interview. Our results show that the social context of certain travelling behaviours can influence the three main components that make up a displacement (i.e. the route, the departure time and the mode of transportation). Additionally, other aspects of the displacement, such as activities during the displacement, can also be influenced by a social travelling context. We propose that travel planning and travel assistance software could benefit from efforts to incorporate the social aspects of travelling into their systems and offer some suggestions.
In this paper we introduce Hyper Typer, a serious Android game for collecting text entry performance data on a large scale in an unsupervised manner. Publishing the game on the Google Play Store resulted in a total of 1,917 usable transcribed phrases with 58,829 keystrokes over an eleven-week period. By analyzing the data, we demonstrate the feasibility of the method and give preliminary results regarding the overall performance and error rate of players. Moreover, the collected data allows us to compare two of the most commonly used Android soft-keyboards.
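For context, text-entry studies like this one typically report words per minute and an error rate based on the minimum string distance (MSD) between presented and transcribed phrases. The sketch below implements these standard metrics from the text-entry literature; it is a plausible reading of the analysis, not Hyper Typer's actual code.

```python
def wpm(transcribed: str, seconds: float) -> float:
    # One "word" = 5 characters; the first character carries no timing info.
    return (len(transcribed) - 1) / seconds * 60 / 5

def msd(a: str, b: str) -> int:
    # Levenshtein distance via dynamic programming (single rolling row).
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def error_rate(presented: str, transcribed: str) -> float:
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

print(wpm("the quick brown fox", 12.0))          # ~18 WPM
print(error_rate("the quick", "the quack"))      # 1 substitution in 9 chars
```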
The way information systems are designed has a crucial effect on users' privacy, but users are rarely involved in Privacy-by-Design processes. To bridge this gap, we investigate how User-Centered Design (UCD) methods can be used to improve the privacy of systems' designs. We present the process of developing A/P(rivacy) Testing, a platform that allows designers to compare several privacy design alternatives, eliciting end-users' privacy perceptions of a tested system or feature. We describe three online experiments, with 959 participants, in which we created and validated the reliability of a scale for Users' Perceived Systems' Privacy (UPSP), and used it to compare privacy design alternatives using scenarios and different variants. We show that A/B testing is applicable for privacy purposes and that our scale differentiates between designs that are perceived as legitimate and designs that may violate users' expectations.
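As an illustration of how such a scale-based comparison between design variants might be analyzed, the sketch below averages Likert items into a per-participant UPSP score and compares two variants with Welch's t-test. The item count, the placeholder data, and the test choice are assumptions, not the authors' reported analysis.

```python
import numpy as np
from scipy import stats

def upsp_score(item_responses):
    """Average Likert items (participants x items) into one score each."""
    return np.asarray(item_responses).mean(axis=1)

rng = np.random.default_rng(0)                   # placeholder data
variant_a = upsp_score(rng.integers(1, 8, size=(100, 6)))
variant_b = upsp_score(rng.integers(1, 8, size=(100, 6)))

t, p = stats.ttest_ind(variant_a, variant_b, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.3f}")
```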
Malicious Android applications can obtain a user's private data and silently send it to a server. Android permissions are currently not sufficient to ensure the security of users' sensitive information. For a sufficient permission model, it is important to account for the target of the outgoing data flow. On the other hand, permission dialogues often contain relevant information, but most users generally do not understand the implications, or the visualization fails to guide the user's attention to it. It is important to empower users by providing applications that show them who can access their private data and who might send this data to the outside. In order to raise user awareness of Android permissions, we developed HappyPermi, an application that visualizes which user information is accessible via the granted permissions. Our evaluation (n=20) shows that most users are not aware of the sensitive data that their installed applications have access to. Our results suggest how different users feel about access to their sensitive data when they are aware of its outgoing destinations.
We present HoloPass, a mixed reality application for the HoloLens wearable device, which allows users to perform user authentication tasks through gesture-based interaction. In particular, this paper reports the implementation of picture passwords for mixed reality environments, and highlights the development procedure, lessons learned from common design and development issues, and how they were addressed. It further reports a between-subjects study (N=30) which compared usability, security, and likeability aspects of picture passwords in mixed reality vs. traditional desktop contexts, aiming to investigate and reason about the viability of picture passwords as an alternative user authentication approach for mixed reality. This work can be of value for enhancing and driving future implementations of picture passwords in mixed reality, since initial results are promising for pursuing this line of research.
A number of large technology companies, or so-called "tech giants", such as Alphabet/Google, Amazon, Apple, Facebook, and Microsoft, are increasingly dominant in people's daily lives, and critically studied in fields such as Science, Technology and Society (STS) studies, with an emphasis on technology, data, and privacy. This project aims to contribute to research at the intersection of technology and society with a prototype visualization tool that shows the vast spread and scope of these large technology companies. In this paper, a prototype graph visualization of notable American technology companies, their acquisitions, and services is presented. The potential applications and limitations of the visualization tool for research are explored. This is followed by a discussion of applying the visualization tool to research on personal data and privacy concerns and possible extensions. In particular, difficulties of data collection and representation are emphasized.
Phishing continues to be a difficult problem for individuals and organisations. Educational games and simulations have been increasingly acknowledged as versatile and powerful teaching tools, yet little work has examined how to engage users with these games. We explore this problem by conducting workshops with 9 younger adults and reporting on their expectations for cybersecurity educational games. We find a disconnect between casual and serious gamers, where casual gamers prefer simple games incorporating humour while serious gamers demand a congruent narrative or storyline. Importantly, both demographics agree that educational games should prioritise gameplay over information provision -- i.e. the game should be a game with educational content. We discuss the implications for educational game developers.
Millions of time-based data streams (a.k.a., time series) are being recorded every day in a wide-range of industrial and scientific domains, from healthcare and finance to autonomous driving. Detecting anomalous behavior in such streams has become a common analysis task for which data scientists employ complex machine learning models. Analyzing the behavior and performance of these models is a challenge on its own. While traditional accuracy metrics (e.g., precision/recall) are often used in practice to measure and compare the performance of different anomaly detectors, such statistics alone are insufficient to characterize and compare the algorithms in a systematic, human-interpretable way. In this extended abstract, we present Metro-Viz, a visual analysis tool to help data scientists and domain experts reason about commonalities and differences among anomaly detectors, and to identify their strengths and weaknesses.
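For reference, the point-wise precision/recall comparison that the abstract argues is insufficient on its own can be computed in a few lines; detector outputs are assumed here to be boolean per-timestep anomaly flags.

```python
from sklearn.metrics import precision_score, recall_score

def compare_detectors(y_true, detectors):
    """y_true and each detector output: per-timestep 0/1 anomaly flags."""
    for name, y_pred in detectors.items():
        p = precision_score(y_true, y_pred, zero_division=0)
        r = recall_score(y_true, y_pred, zero_division=0)
        print(f"{name}: precision={p:.2f} recall={r:.2f}")

compare_detectors(
    y_true=[0, 0, 1, 1, 0, 1],
    detectors={"detector_a": [0, 1, 1, 0, 0, 1],
               "detector_b": [0, 0, 1, 1, 0, 0]},
)
```

Identical summary numbers can hide very different failure modes (e.g. missing an entire anomalous region versus clipping its edges), which is the gap a visual tool like Metro-Viz is meant to close.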
Improving the cyber literacy of employees reduces a company's risk of cyber security breaches. Game-based methods have been found to be more effective in teaching users how to avoid fraudulent phishing links than traditional learning material such as videos and text. This paper reports on the development of a mobile app designed to improve cyber literacy and provoke users' perceptions of who is responsible for cyber security in organisations. Based on a preliminary trial with 17 participants, we investigated users' perceptions of a tongue-in-cheek, provocative cyber security awareness game in which users' jobs depend on their aptitude for protecting their organisations' cyber security. Findings suggest that users accepted the high responsibility levelled upon them in the game and that ludic elements hold promise for engagement and for increasing users' cyber awareness.
Peace is a universal concern involving a complex process of negotiations between select groups (i.e. policy makers, mediators, scholars and civil society groups). In this paper we present PaxVis, a platform of two interactive data visualizations for a large database of peace agreements (PA-X). We developed PaxVis to support comparative analysis of peace processes and to improve understandings of the complex dynamics behind the establishment of peace.
People experience and understand cyber security differently. Our ongoing work aims to address the fundamental challenge of how we can understand a diverse range of cyber security experiences, attitudes and behaviours in order to design better, more effective cyber security services and educational materials. In this paper, we take a lifespan approach to study the language of cyber security across three main life stages - young people, working age, and older people. By applying text feature extraction and analysis techniques to lists of cyber security features generated by each age group, we illustrate the differential language of cyber security across the lifespan and discuss the implications for design and research in HCI.
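One plausible way to surface such differential language is TF-IDF over per-group term collections, as sketched below; the corpus here is a placeholder, and the paper's exact feature-extraction pipeline is not specified in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder documents: one concatenated list of generated terms per group.
groups = {
    "young":   "passwords hacking phishing gaming accounts",
    "working": "passwords banking scams email policy",
    "older":   "scams telephone banking trust fraud",
}

vec = TfidfVectorizer()
X = vec.fit_transform(groups.values()).toarray()
terms = vec.get_feature_names_out()

# Print each age group's most distinctive terms.
for name, row in zip(groups, X):
    top = sorted(zip(row, terms), reverse=True)[:3]
    print(name, [term for _, term in top])
```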
A growing number of conversational agents are being embedded into larger systems such as smart homes. However, little attention has been paid to user interactions with conversational agents in the multi-device collaboration context (MDCC), where multiple devices are connected to accomplish a common mission. The objective of this study is to identify the roles of conversational agents in the MDCC. Toward this goal, we conducted semi-structured interviews with nine participants who are heavy users of smart speakers connected with home IoT devices. We collected 107 rules (usage instances) and asked about the benefits and limitations of using those rules. Our thematic analysis found that, while smart speakers perform the role of voice controller in the single-device context, their role extends to automation hub, reporter, and companion in the MDCC. Based on the findings, we provide design implications for smart speakers in the MDCC.
We present the results of a preliminary study into the usability of troubleshooting terminology around home computer networks. Forty-seven participants classified 29 terms, selected from interview transcripts and online help forums, in an open card sort. We analyzed words participants explicitly indicated as unfamiliar as well as words that participants misclassified. The study serves as a proof of concept for a broader study to determine whether certain technical terms and/or their colloquial counterparts are understandable by technical novices and intermediates. Our findings indicate that participants found technical and colloquial terms equally problematic. These findings have implications for the design of troubleshooting tools and systems as well as the design of technical support scripts and training.
Programmatic errors are often difficult to resolve due to the poor usability of error messages. Applying theories of visual perception and techniques in visual design, we created three visual variants of a representative error message in a modern UI framework. In an online experiment, we found that the visual variants led to substantial improvements over the original error message in both error comprehension and resolution. Our results demonstrate that seemingly cosmetic changes to the presentation of an error message can have an outsized impact on its usability.
The integration of User-Centered Design with Agile practices studies the interactions between designers and developers and the alignment of the design and development processes. However, beyond the interactions with the development team, designers are often required to operate within a wider business context, driven by goals set on high-level metrics, like Monthly Active Users, and to show how design-led initiatives and improvements address those metrics. In this paper we generalize learnings from prior work on applying usability improvements to Jira, a project tracking software tool created by Atlassian, and we describe a structured approach to bridging the gap between feature work and business metrics.
In recent years, museums have embraced the use of digital technologies to add interactivity to exhibits. New tools such as wireless beacons, QR codes and markerless trackers paired with powerful smartphones are used to implement applications ranging from guides that provide supplementary material as web pages or audio to spatially precise augmented reality (AR). In this work we explore the use of head-mounted (rather than the more common hand-held) AR in a museum space. Our goal is to explore visitor behaviour when using such technology, to inform the design of a future longitudinal study. We found that visitors enjoyed their experience with head-mounted AR and learned fairly quickly how to navigate and interact with virtual content.
In recent years, many mobile apps have started to incorporate social functions into their design, even the task-oriented ones, i.e., those designed mainly to help users complete certain tasks. For instance, Taobao, a shopping app, includes online-sharing and instant-messaging functions. However, there is still a lack of research on how users accept and use these social functions. This paper aims to unveil user requirements for the social functions in task-oriented apps, and accordingly provide design suggestions for app developers. To this end, we conduct semi-structured interviews with 16 participants on how they use instant-messaging functions in three widely-used task-oriented apps: the shopping app Taobao, the online payment app Alipay, and the entertainment app NetEase Cloud Music. Our findings demonstrate that the instant-messaging functions in these apps are not widely accepted, although they benefit user experience and facilitate users' online social activities. We show that both the design and users' stereotypes of the apps are important reasons. Finally, we suggest several design guidelines.
Magika is an interactive Multisensory Environment that enables new forms of playful interventions for children, especially those with Special Education Needs. Designed in cooperation with more than 30 specialists at local care centers and primary schools, Magika integrates digital worlds projected on the wall and the floor with a gamut of "smart" physical objects (toys, ambient lights, materials, and various connected appliances) to enable tactile, auditory, visual, and olfactory stimuli. The room is connected with an interface for educators that enables them to: control the level of stimuli and their progression; define and share a countless number of game-based learning activities; customize such activities to the evolving needs of each child. This paper describes Magika and discusses its potential benefits for play, education and inclusion.
Automating usability diagnosis and repair can be a powerful assistance to usability experts and even less knowledgeable developers. To accomplish this goal, evaluating user interaction automatically is crucial, and it has been broadly explored. However, most works focus on long interaction sessions, which makes it difficult to tell how individual interface components influence usability. In contrast, this work aims to compare how different widgets perform for the same task, in the context of evaluating alternative designs for small components, implemented as refactorings. For this purpose, we propose a unified score to compare the widgets involved in each refactoring by the level of effort required by users to interact with them. This score is based on micro-measures automatically captured from interaction logs, so it can be automatically predicted. We show the results of predicting this score using decision trees.
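A minimal sketch of that prediction step with scikit-learn, assuming hypothetical micro-measure features and an effort_score column in the interaction-log data; the tree depth and feature names are illustrative, not the paper's configuration.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Hypothetical per-widget micro-measures captured from interaction logs.
FEATURES = ["time_on_widget", "hover_count", "corrections", "misclicks"]

def fit_effort_model(df):
    X_train, X_test, y_train, y_test = train_test_split(
        df[FEATURES], df["effort_score"], random_state=0)
    tree = DecisionTreeRegressor(max_depth=4, random_state=0)
    tree.fit(X_train, y_train)
    print("R^2 on held-out logs:", tree.score(X_test, y_test))
    return tree
```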
Teachable interfaces can enable end-users to personalize machine learning applications by explicitly providing a few training examples. They promise higher robustness in the real world by significantly constraining conditions of the learning task to a specific user and their environment. While facilitating user control, their effectiveness can be hindered by lack of expertise or misconceptions. Through a mobile teachable testbed in Amazon Mechanical Turk, we explore how non-experts conceptualize, experience, and reflect on their engagement with machine teaching in the context of object recognition.
Since its conception more than 20 years ago, much research has been carried out in the User eXperience (UX) field, with several evaluation methods being proposed. However, studies have pointed out conflicting results when evaluating UX. Users frequently evaluate their UX as positive, even when experiencing many negative emotions while interacting with a product. Moreover, variables such as the peak-end effect and the memory-experience gap may also have been influencing the results, leading to misinterpretations about a product's quality. In this context, this paper presents our work-in-progress research on the following research question: "What are we doing wrong when evaluating the UX?". We discuss different variables that may have been influencing users' perceptions of their experiences in previous studies and highlight research opportunities. With this work, we expect to shed light on and bring reflection to current practices in UX evaluation in order to progress research in the UX field.
Amazon's Echo and Apple's Siri have drawn attention from different user groups; however, these existing commercial VUIs support limited language options for users, including native and non-native English speakers. Also, the existing literature on usability differences between these two distinct groups is limited. Thus, in this study, we conducted a usability study of the Google Home Smart Speaker with 20 participants, including native and non-native English speakers, to understand their differences in using the device. The findings show that, compared with their counterparts, the native English speakers had better and more positive user experiences in interacting with the device. They also show that users' English language proficiency plays an important role in interacting with VUIs. The findings from this study can provide insights for VUI designers and developers for implementing multiple language options and better voice recognition algorithms in VUIs for different user groups across the world.
Mid-air haptic feedback constitutes a new means of system feedback in which tactile sensations are created without contact with an actuator. Though earlier research has already focused on its ability to enhance our experiences, e.g. by increasing a sense of immersion during art exhibitions, an elaborate study investigating people's ability to identify different mid-air haptic shapes has not yet been conducted. In this paper, we describe a user study involving 50 participants, aged between 19 and 77 years, who completed a mid-air haptic learning experiment involving eight different mid-air haptic shapes. Preliminary results showed no learning effect throughout the task. Age was found to be strongly related to a decline in performance, and interestingly, significant differences in accuracy rates were found for different types of mid-air haptic shapes.
Gamification is increasingly being applied in education to engage and motivate learners. Yet the application of gaming elements can be problematic because it can have a negative effect on cognitive load (CL) and on working memory (WM). This is a particular issue for children with learning disabilities, who suffer from deficits in working memory. While studies have explored the relationship between gamification and cognitive load, there is little research addressing the management of cognitive load in gamified learning applications for children with learning disabilities. This study suggests a framework, based on existing guidelines derived from HCI concepts and cognitive load theories, for designing user-centered gamified applications for children with learning disabilities that accommodate their limited WM capacity and manage cognitive load.
Virtual Reality (VR) is more accessible than ever these days. While topics like performance, motion sickness, and presence are well investigated, basic topics such as VR User Interfaces (UIs) for menu control lag far behind. A major issue is the absence of haptic feedback and naturalness, especially for mid-air finger-based interaction in VR when "grabbable" controllers are not available. In this work, we present and compare two visual approaches to mid-air finger-based menu control in VR environments: a planar UI similar to common 2D desktop UIs, and a pseudo-haptic UI based on physical metaphors. The results show that the pseudo-haptic UI performs better in terms of all tested aspects, including workload, user experience, motion sickness, and immersion.
This paper presents work-in-progress aiming to develop an actively adapting virtual reality (VR) relaxation application. Due to the immersive nature of VR technologies, people can escape from their real environment and get into a relaxed state. The goal of the application is to adapt to users' physiological signals to foster this positive effect. So far, a first version of the VR application has been constructed and is currently being evaluated in an experiment. Preliminary results of this study demonstrate that people appreciate the immersion into the virtual environment and the escape from reality. Moreover, participants highlighted the option to adapt to users' needs and preferences. Based on the final study data, the application will be enhanced with regard to adaptation and surrounding factors.
Advances in artificial intelligence offer the promise of accessibility, precision, and personalized care in health settings. However, growth in technology has not translated to commensurate growth in automation of healthcare facilities. To gain a better understanding of user psychology behind the acceptance of automation in clinics, a 3 (Role: Receptionist, Nurse, Doctor) x 3 (Digital Agent Representation: Human, Avatar, Robot) factorial experiment (N = 283) was conducted. Results suggest that the digital nature of the interaction overpowers any individual role effects, with acceptance depending upon individual traits (belief in machine heuristic; power usage). Implications for theory and the design of digital healthcare facilities are discussed.
Using real-world logs from 6,866 users who received relevant smartphone notifications, we show that visual elements in a notification influence its receptivity. Users responded significantly more to notifications that included an image or an icon compared to standard notifications, and to notifications including an action button compared to those without such a button. In addition, the timing of notifications also had a significant effect on receptivity, with lower click rates during the morning hours and higher rates during the afternoon and evening hours.
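The underlying analysis presumably reduces to click rates grouped by notification style and delivery hour. The pandas sketch below illustrates this kind of breakdown; the log schema (style, clicked, delivered_at columns) is an assumption, not the study's actual data format.

```python
import pandas as pd

logs = pd.read_csv("notification_logs.csv")  # hypothetical export

# Click rate per visual style (standard / icon / image / action button).
print(logs.groupby("style")["clicked"].mean())

# Click rate by hour of delivery, to expose the morning vs. afternoon gap.
logs["hour"] = pd.to_datetime(logs["delivered_at"]).dt.hour
print(logs.groupby("hour")["clicked"].mean())
```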
In the emerging field of automated vehicles (AVs), the many recent advancements coincide with different areas of system limitations. The recognition of objects like traffic signs or traffic lights is still challenging, especially under bad weather conditions or when traffic signs are partially occluded. A common approach to deal with system boundaries of AVs is to shift to manual driving, accepting human factor issues like post-automation effects. We present CooperationCaptcha, a system that asks drivers to label unrecognized objects on the fly, and consequently maintain automated driving mode. We implemented two different interaction variants to work with object recognition algorithms of varying sophistication. Our findings suggest that this concept of driver-vehicle cooperation is feasible, provides good usability, and causes little cognitive load. We present insights and considerations for future research and implementations.
Dyscalculia affects comprehension of numerical mathematical problems, working with numbers, and arithmetic. We describe our work on a training system for an exercise that trains connections between verbal and numerical representations of numbers and finger counting. Fingers support embodied cognition and constitute a natural numerical representation. We describe the design rationale, the iterative development process, and first evaluation results for our system, which enables children to train without guidance and feedback from a trainer.
STEAM education fuses arts with traditional STEM fields so that the diverse disciplines can broaden and inform each other. Our eight-week STEAM after-school program exposed elementary school children to social robotics and musical theater. Approximately 25 children in grades K-5 participated over the course of the program, with an average of 12 children attending each week. The program covered acting, dancing, music, and drawing with the robots in two-week modules based around the fairy tale "Beauty and the Beast". The modular design enabled children who could come to only a few sessions to participate actively. The children demonstrated enthusiasm for both the robots and the musical theater activities and were engaged in the program. Efforts such as this can provide meaningful opportunities for children to explore a variety of arts and STEM fields in an enjoyable manner. The program components and lessons learned are discussed with recommendations for future research.
Learner-driven subgoal labeling helps learners form a hierarchical structure of solutions with subgoals, which are conceptual units of procedural problem solving. While learning with such hierarchical structure of a solution in mind is effective in learning problem solving strategies, the development of an interactive feedback system to support subgoal labeling tasks at scale requires significant expert efforts, making learner-driven subgoal labeling difficult to be applied in online learning environments. We propose SolveDeep, a system that provides feedback on learner solutions with peer-generated subgoals. SolveDeep utilizes a learnersourcing workflow to generate the hierarchical representation of possible solutions, and uses a graph-alignment algorithm to generate a solution graph by merging the populated solution structures, which are then used to generate feedback on future learners' solutions. We conducted a user study with 7 participants to evaluate the efficacy of our system. Participants did subgoal learning with two math problems and rated the usefulness of system feedback. The average rating was 4.86 out of 7 (1: Not useful, 7: Useful), and the system could successfully construct a hierarchical structure of solutions with learnersourced subgoal labels.
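As a loose illustration of the merge step, the sketch below collapses learnersourced subgoal sequences into a weighted solution graph with networkx; SolveDeep's actual graph-alignment algorithm is more involved and is not reproduced here.

```python
import networkx as nx

def merge_solutions(solutions):
    """Each solution is an ordered list of subgoal labels."""
    g = nx.DiGraph()
    for steps in solutions:
        for a, b in zip(steps, steps[1:]):
            # Identical labels collapse into one node; edge weights count
            # how many learners took that transition.
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g

graph = merge_solutions([
    ["isolate x", "divide both sides", "simplify"],   # learner 1
    ["isolate x", "simplify"],                        # learner 2
])
```

Feedback on a new learner's solution can then be framed as locating their steps in this graph and pointing out missing or divergent subgoals.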
In the contemporary practice of participatory neighborhood planning, planners leverage digital support tools with realistic, interactive 3D visualization to support perception processing and to increase engagement among diverse public stakeholders. However, capturing the aspirations of a community lacking design and planning expertise requires a more thorough evaluation and considered design of support tools. We present Land.Info, a proof-of-concept software that allows users to design open spaces with 3D visualization and see the subsequent costs and environmental consequences. To assess how the public engages in design discussion with 3D visualization, we organized three community design workshops for developing a vacant lot. We found that 3D visualization 1) promotes public ideation of user stories around objects, and 2) inhibits ideas beyond spatial design elements. Future research will investigate whether it is possible to aggregate more diverse public aspirations, whether visual realism sets expectations for designs, and the potential impacts of expanding the software user base for neighborhood planning cases.
In this paper, we introduce SociaBowl, a dynamic table centerpiece to promote positive social dynamics in 2-way cooperative conversations. A centerpiece such as a bowl of food, a decorative flower arrangement, or a container of writing tools is commonly placed on a table around which people have conversations. We explore the design space for an augmented table and centerpiece to influence how people may interact with one another. We present an initial functional prototype to explore different choices in materiality of feedback, interaction styles, and animation and motion patterns. These aspects are discussed with respect to how they may impact people's awareness of their turn-taking dynamics as well as provide an additional channel for expression. Potential enhancements for future iterations of its design are then outlined based on these findings.
Online learning platforms such as MOOCs have become prevalent sources of self-paced learning. However, the lack of peer accompaniment and social interaction may increase learners' sense of isolation. Prior studies have shown the positive effects of visualizing peer students' appearances in VR learning environments. In this work, we propose to build virtual classmates, constructed by synthesizing previous learners' time-anchored messages. Configurations of virtual classmates and their behavioral features can be adjusted. To build the characteristics of virtual classmates, we developed a technique called comment mapping to aggregate prior online learners' comments and shape virtual classmates' behaviors. We evaluated the effects of virtual classmates built with and without comment mapping, and of the number of virtual classmates rendered in VR.
Note-taking activities are ubiquitous in physical classrooms and have been emerging in online learning. To investigate how to better support online learners in taking notes while learning with videos, we compared free-form note-taking with a prototype system, NoteStruct, which prompts learners to perform a series of note-taking activities. NoteStruct enables learners to insert annotations on transcripts of video lectures and then engages learners in reinterpreting and synthesizing their notes after watching a video. In a study with a sample of Mechanical Turk workers (N=80), learners took longer and more extensive notes with NoteStruct, although using NoteStruct versus free-form note-taking did not impact short-term learning outcomes. These longer notes were also less likely to include verbatim copies of video transcripts, but more likely to include elaboration and interpretation. We demonstrate how NoteStruct influences note-taking during online video learning.
With the ubiquity of turn-by-turn navigation on today's smartphones, personal exploration of the unseen has drastically diminished. Such services make it less likely that users venture into the less frequented parts of their urban environment. In this paper we present DetourNavigator, a navigation service that uses Google Location History to create routes through areas that are unfamiliar to the user. Our preliminary user study indicates that these personalized graphs are well suited to generating routes that might lead to a more holistic knowledge of the built environment.
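To make the routing idea concrete, here is a minimal sketch of familiarity-weighted path finding; the graph, the visit counts derived from a Location History export, and the penalty factor are illustrative assumptions, not DetourNavigator's actual implementation.

```python
# Sketch: inflate edge costs near frequently visited places so routes are
# nudged through unfamiliar parts of the city. Names are hypothetical.
import networkx as nx

def detour_route(G, visits, src, dst, familiarity_penalty=0.5):
    def weight(u, v, data):
        base = data.get("length", 1.0)
        familiarity = visits.get(u, 0) + visits.get(v, 0)  # from Location History
        return base * (1.0 + familiarity_penalty * familiarity)
    return nx.dijkstra_path(G, src, dst, weight=weight)

# Toy usage: a small grid where node "b" is heavily visited.
G = nx.Graph()
G.add_edge("a", "b", length=1); G.add_edge("b", "d", length=1)
G.add_edge("a", "c", length=1); G.add_edge("c", "d", length=1)
print(detour_route(G, visits={"b": 5}, src="a", dst="d"))  # -> ['a', 'c', 'd']
```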
Prior work has demonstrated that energy education programs designed for young children can influence the adoption of energy efficiency measures in the home. Here, we introduce the Know Your Energy Numbers (KYEN) program, an energy education program designed to teach an older audience of pre-teens, or tweens, about: (i) their energy consumption lifestyles, (ii) available residential energy tools, and (iii) methods to extract insights from their energy data. We also describe results from two pilots with 18 tweens from Girl Scout and Boy Scout troops living in Northern California. We report on how participants and their families reacted to our energy-based curricula, the benefits and challenges they perceived about using energy tools, and their preferences regarding the display of home energy data. We conclude with a brief discussion of the outcomes and limitations of this work before describing next steps for the program.
Brain Painting is a brain-computer interface (BCI) application that gives users the ability to paint on a virtual canvas without requiring physical movement [1-2]. Brain Painting has been shown to improve the Quality of Life (QOL) of patients with Amyotrophic Lateral Sclerosis (ALS) by giving patients a way to express themselves and affect society through their art [1]. Although there is currently no known cure for ALS, such outlets can help mitigate the physical and psychological impairments of those living with ALS. This paper discusses the development and testing of an immersive Brain Painting application for able-bodied users, using a Google Tilt Brush-like tool in a 3D environment. It also discusses how the application can provide a more immersive medium for users to express themselves creatively. In addition, we discuss feedback from a preliminary study on how the brush and application can be improved to better allow users to paint in VR using their brain.
We present a review of persuasive systems in vehicles based on the Persuasion Interface Design in the Automotive context Framework (PIDAF). It integrates intents, cues, persuasive principles and design options for automotive persuasion. Our results show that most systems target safety and eco-driving using conscious cues to alert the driver. Most systems use self-monitoring, tailoring or suggestion as persuasive principles. Visual modalities are still much more popular than auditory or haptic ones. We identified blind spots to support designers and researchers in developing systems addressing areas which are less explored in automotive persuasion.
Previsualization (previs) is an essential phase in the visual design process of narrative media such as film, animation, and stage plays. Digital previs relies on complex 3D tools that were not specifically designed for the previs process, making them hard to use for creative professionals without much technical knowledge. To enable building dedicated previs software, we analyze the tasks performed in digital previs based on interviews with domain experts. To support creative professionals in their previs work, we propose the use of natural user interfaces and discuss which are suited to the specific previs tasks.
Though some work has looked at the implementation of personal informatics tools with youth and in schools, the approach has been prescriptive: students are pushed toward behaviour-change interventions or otherwise use the data for prescribed learning in a particular curriculum area. This has left a gap around how young people may themselves choose to use personal informatics tools in ways relevant to their own concerns. We gave workshops on personal informatics to 13 adolescents at two secondary schools in London, UK. We asked them to use a commercial personal informatics app to track something of their choosing that they thought might impact their learning. Our participants proved competent and versatile users of personal informatics tools. They tracked their feelings, tech activity, physical activity, and sleep, with many using the process as a system for understanding and validating aspects of their own lives, rather than changing them.
Examinations and scores currently serve as the main criteria for a student's academic performance. However, students use guessing strategies to improve their chances of choosing the right answer on a test; scores therefore do not reflect the student's actual knowledge and skills. In this paper, we propose a brain-computer interface (BCI) to estimate whether a student guessed on a logic-reasoning test question or mastered it when choosing the right answer. To build this BCI, we first define "Guessing" and employ Raven's Progressive Matrices as logic tests in an experiment to collect EEG signals; we then propose a sliding time-window with quorum-based voting (STQV) approach to recognize the state of "Guessing" or "Understanding", together with FBCSP and end-to-end ConvNet classification algorithms. Results show that this BCI yields an accuracy of 83.71% and performs well in distinguishing "Guessing" from "Understanding".
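The voting scheme can be illustrated with a short sketch; the window classifier (in the paper, FBCSP features or an end-to-end ConvNet) is abstracted into a stand-in function, and all parameters are illustrative assumptions.

```python
# Sketch of sliding time-window with quorum-based voting (STQV): each window
# casts a binary vote (1 = 'Guessing', 0 = 'Understanding'); the quorum decides.
import numpy as np

def stqv(eeg, classify_window, win_len, step, quorum=0.5):
    votes = [classify_window(eeg[..., s:s + win_len])
             for s in range(0, eeg.shape[-1] - win_len + 1, step)]
    return "Guessing" if np.mean(votes) >= quorum else "Understanding"

# Toy usage with a stand-in window classifier.
trial = np.random.randn(8, 1000)                  # 8 channels, 1000 samples
fake_clf = lambda w: int(np.abs(w).mean() > 0.8)  # placeholder, not FBCSP/ConvNet
print(stqv(trial, fake_clf, win_len=250, step=125))
```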
Twitch, a live video-streaming platform, provides strong financial and social incentives to develop a follower base. While streamers benefit from Twitch's own features for forming a wide community of engaged viewers, many streamers look to external social media platforms to increase their reach and build their following. We collect a corpus of Twitch streamer popularity measures and their behavior data on Twitch and third-party platforms. We test the community-proposed relationship between social media behavior and popularity by examining the timing of creation and use of social media accounts. We conduct these experiments by studying the correlations between streamer behaviors and two popularity measures used by Twitch: followers and average concurrent viewers. We find that we cannot yet determine which behaviors correlate significantly with popularity, and propose future directions for this research.
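The core of such an analysis can be sketched as rank correlations between behavior features and the two popularity measures; the column names and values below are made-up stand-ins for the paper's corpus.

```python
# Sketch: Spearman correlations (with p-values) between streamer behaviors
# and popularity measures. Data and feature names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "tweets_per_week":  [3, 10, 1, 25, 7],
    "account_age_days": [900, 120, 2000, 60, 450],
    "followers":        [1500, 300, 5000, 200, 800],
    "avg_concurrent":   [40, 10, 120, 8, 25],
})

for behavior in ["tweets_per_week", "account_age_days"]:
    for popularity in ["followers", "avg_concurrent"]:
        rho, p = spearmanr(df[behavior], df[popularity])
        print(f"{behavior} vs {popularity}: rho={rho:.2f}, p={p:.3f}")
```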
This paper presents preliminary results of a study designed to quantify users' engagement with interactive media content through self-reported measures and interaction data. The broad hypothesis of the study is that interaction data can be used to predict the level of engagement felt by the user. The challenge addressed in this work is to explore how effectively interaction data can act as a proxy for engagement levels and what that data reveals about engagement with media content. Preliminary results suggest several interesting insights about participants' engagement and behaviour. Crucially, temporal statistics support the hypothesis that participants' use of the controls in the interactive, video-based experience positively correlates with higher engagement.
Social media allows us to connect and maintain relationships in spite of physical distance and barriers; as computers and the internet become more accessible, hard-to-reach populations are finding a voice on these platforms. One such group is those who are or have been homeless. Through a computational linguistic analysis of a large corpus of Tumblr blog posts, this paper provides preliminary insights to understand the unique ways homeless bloggers express their needs, frustrations and financial/social distress, connect with others, and seek emotional and practical support from others. We highlight future investigations, building upon this research, that can be pursued in HCI to assist an underserved population with the difficult life experience of homelessness.
Finsta is a "fake" Instagram account that some people maintain in addition to their real Instagram account (rinsta) for a more authentic performance. We draw on Goffman's theatrical metaphor and use a mixed-methods approach to explore how and why people do the work of performing their identity across these distinct presentations of the self. We found that finsta users deliberately partition their audience and mostly maintain a small audience of close friends to avoid context collapse. Additionally, we discovered that finsta is a space where distinct norms shape performance: humor, authenticity, and "unfiltered'' self-expression. Given that finsta users are mostly teenagers and young adults, we ask how an expectation for authentic performance by peers might itself increase pressure on users.
As technology becomes more powerful and widespread, it is used in multiple areas and for diverse purposes, making us available to others anytime and anyplace. Boundary management focuses on the organisation of domains in life and their borders (e.g., between work and non-work). Working parents of young children face particular challenges in orchestrating their life domains. We present the results of an interview study with parents of young children on their boundary management and availability across domains. The paper contributes an identification of life domains; a classification of availability statuses; and details on the status we call ad-hoc availability, a melange of a priori rules and spontaneous behaviours. Ad-hoc availability is determined not only by a general personal preference for connection, but, very importantly, by a practical information need from the parent towards the person wanting to connect.
When non-native speakers talk with native speakers, they sometimes find it hard to take speaking turns due to language proficiency, and the resulting conversation is not always productive. In this paper, we propose a conversational agent to support a non-native speaker in second-language conversation. The agent joins the conversation and intervenes using a simple script based on turn-taking rules: it takes a turn and then gives the next turn to the non-native speaker, prompting them to speak. An evaluation of the proposed agent suggested that it successfully facilitated the non-native speaker's participation in over 30% of its interventions and significantly increased the frequency of turn-taking.
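A turn-taking rule of this kind might look like the following sketch; the thresholds, speaker labels, and trigger conditions are illustrative assumptions rather than the paper's actual script.

```python
# Sketch: intervene after a run of native-speaker (NS) turns followed by a
# silence gap, then hand the turn to the non-native speaker (NNS).
def agent_should_intervene(history, silence_ms,
                           min_ns_run=3, silence_threshold_ms=1500):
    ns_run = 0
    for speaker in reversed(history):   # count trailing NS turns
        if speaker != "NS":
            break
        ns_run += 1
    return ns_run >= min_ns_run and silence_ms >= silence_threshold_ms

if agent_should_intervene(["NNS", "NS", "NS", "NS"], silence_ms=1800):
    print("Agent takes the turn, then prompts the NNS to speak next.")
```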
Prior research reported that workers on Amazon Mechanical Turk (AMT) are underpaid, earning about $2/h. However, that research did not investigate differences in wages across worker characteristics (e.g., country of residence). We present the first data-driven analysis of the wage gap on AMT. Using work-log data and demographic data collected via an online survey, we analyse the wage gap across different factors. We show that there is indeed a wage gap; for example, workers in the U.S. earn $3.01/h while those in India earn $1.41/h.
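The underlying computation is straightforward: derive an hourly wage per worker from the work log and compare distributions across a demographic attribute. The field names and numbers below are placeholders, not the study's data.

```python
# Sketch: hourly wage per worker from work-log data, grouped by country.
import pandas as pd

logs = pd.DataFrame({
    "worker_id":      [1, 1, 2, 3],
    "country":        ["US", "US", "IN", "US"],
    "reward_usd":     [0.50, 1.20, 0.40, 2.00],
    "seconds_worked": [300, 900, 1200, 1800],
})

per_worker = logs.groupby(["worker_id", "country"]).sum(numeric_only=True)
per_worker["hourly_wage"] = per_worker["reward_usd"] / (per_worker["seconds_worked"] / 3600)
print(per_worker.groupby("country")["hourly_wage"].median())
```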
As the K-pop industry has rapidly expanded, the strength of K-pop fandoms has come under the spotlight. In particular, collaborations among fandoms to mutually support their artists have contributed to the success of K-pop artists. This paper investigates the current practice of fandom collaboration in K-pop. To this end, we first introduce the notion of the 'fandom collaboration network', which represents the collaborations among K-pop fandoms. By collecting and analyzing large-scale fandom activity data from DCinside, we investigate (i) to what extent fandom collaboration is prevalent in K-pop, (ii) how fandoms collaborate with other fandoms, and (iii) which fandoms play greater roles in fandom collaboration than others. We find that K-pop fandoms actively collaborate with other fandoms to mutually support their artists. By analyzing the structural properties of the fandom collaboration network, we show that fandom collaboration is fundamentally based on reciprocity. However, we also show that the amount of collaboration between two fandoms is often unbalanced. Among all the active fandoms in our data, we find a small number of fandoms that play significant roles in fandom collaboration in K-pop. We believe our work can provide important insights for K-pop stakeholders such as fans, agencies, artists, marketers, and broadcasting companies.
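A minimal sketch of the network representation: a directed graph whose edge weights count supportive actions between fandoms, from which overall reciprocity and per-pair balance can be read off. The fandom names and weights are invented, not DCinside measurements.

```python
# Sketch: fandom collaboration network with reciprocity and pair balance.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("FandomA", "FandomB", 30), ("FandomB", "FandomA", 28),  # balanced pair
    ("FandomC", "FandomA", 12), ("FandomA", "FandomC", 2),   # unbalanced pair
])

print("reciprocity:", nx.reciprocity(G))  # share of edges that are mutual
for u, v in G.edges():
    if G.has_edge(v, u) and u < v:        # visit each mutual pair once
        w_uv, w_vu = G[u][v]["weight"], G[v][u]["weight"]
        print(f"{u} <-> {v}: balance = {min(w_uv, w_vu) / max(w_uv, w_vu):.2f}")
```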
Lithopia is a prototype of a blockchain-managed fictional village that uses satellite and drone data to trigger smart contracts on the open source blockchain platform, Hyperledger. The project is testing the possibility of anticipatory governance of emerging blockchain and distributed ledger technologies (DLTs) by involving stakeholders in the design process over templates. The goal is to question the promises of blockchain governance happening over automation and smart contracts and to offer an alternative to the misuses of emerging technologies in the so-called predictive and anticipatory design. The prototype consists of a functional Node-RED dashboard used as an interface for the Hyperledger smart contracts and a design fiction movie about the lives of the Lithopians.
Personal deliberation, the process through which people form an informed opinion on social issues, plays an important role in helping citizens construct rational arguments in public deliberation. However, existing information channels for public policies deliver the voices of only a few stakeholders, thus failing to provide a diverse knowledge base for personal deliberation. This paper presents an initial design of PolicyScape, an online system that supports personal deliberation on public policies by helping citizens explore diverse stakeholders and their perspectives on a policy's effects. Building on the literature on crowdsourced policymaking and policy stakeholders, we present several design choices for crowdsourcing stakeholder perspectives. We introduce perspective-taking as an approach to personal deliberation that helps users consider stakeholder perspectives on policy issues. Our initial results suggest that PolicyScape can collect diverse sets of perspectives from the stakeholders of public policies and help participants discover unexpected viewpoints of various stakeholder groups.
After the meteoric rise in price, and subsequent public interest, of the cryptocurrency Bitcoin, a developing body of work has begun examining its impact on society. In recent months, as Bitcoin's price has rapidly declined, uncertainty and distrust have begun to overshadow early enthusiasm. In this late-breaking work, we investigated one of the largest and most important Bitcoin online communities, the r/Bitcoin Reddit forum. A vocal subgroup of users identify themselves as "true Bitcoiners", and justify their continued devotion to Bitcoin. These subreddit participants explained and justified their trust in Bitcoin in three primary ways: identifying characteristics of beneficial versus harmful Bitcoin users, diminishing the importance of problems, and describing themselves as loyal to Bitcoin over time.
With increasing ubiquity, wearable technologies are becoming part of everyday life, where they may cause controversy, discomfort, and social tension. In particular, body-worn "always-on" cameras raise social acceptability concerns, as their form factors hinder bystanders from inferring whether they are "in the frame". Screen-based status indicators have been suggested as a remedy but have not been evaluated in the wild. At the same time, best practices for evaluating social acceptability in field studies are rare. This work contributes to closing both gaps. First, we contribute results of an in-the-wild evaluation of a screen-based status indicator, testing the suitability of the "displayed camera image" design strategy. Second, we discuss methodological implications for evaluating social acceptability in the field and cover lessons learned from collecting hypersubjective self-reports. We provide a self-critical, in-depth discussion of our field experiment, including study-related behavior patterns and prototype fidelity. Our work may serve as a reference for field studies evaluating social acceptability.
An autonomous car, also known as a robot car or self-driving car, is a vehicle capable of sensing its environment and moving with little or no human control. In most cases today, a driver can switch the car's "driving mode" between manual and autonomous. However, while the mode can be changed smartly, the car cannot show its current driving mode to nearby pedestrians. This can become a source of anxiety for many ordinary people living in the era of autonomous cars. To overcome this issue, we propose to create a car with new expressive functions, making it communicable and interactive. Unlike conventional approaches that use displays and LEDs, our approach is to develop a 3D shape-transforming, morphable car body using multi-material 3D printing. Our first trial with printed soft membranes successfully represented three different modes of a car seamlessly. In this paper, we introduce our concept, core technologies, and implementations, and discuss further possibilities and future work.
In this paper, I speculate on a future where our need to socialize physically exists solely to exchange bacteria. With our biological data in the hands of private companies and governments, and our environments' microbiomes becoming less diverse, our social systems, social identities, and social interactions are redefined and reinvented to adapt to this new reality. In this world, everyone wears a "social microbial prosthesis" that analyzes their microbial composition from their breath and reveals sensitive information on their chest. The microbial prosthesis gives off information not only on microbial composition but also on a person's mental and physical health. This second skin plays a role in controlling communication and interaction between people: by inspecting the prostheses of surrounding people, one can carefully consider whom to interact with and whom to avoid. Social Microbial Prosthesis is a critique of the race among private companies and governments to collect our biological data and of the role of commercializing such data in shaping and changing our social identities, as well as a response to the loss of microbial diversity in our environments due to our modern lifestyles and surroundings.
In-situ exploration of heart rate (HR) zones during cardio training (CT) is important for training efficiency. However, approaches for monitoring HR either depend on complex visualizations on small screens (e.g., smartwatches) or on intrusive modalities (e.g., haptic, auditory) that can force attention to the information. We developed an early prototype, Howel, a novel wrist-worn soft wearable that displays HR-zone information during CT. Our concept maps information onto dynamic patterns (color-changing stripes) to form an easy-to-understand ambient display. To preserve non-intrusiveness, it uses a non-emissive modality, heating thermochromic paints on its textile surface. Early feedback from three participants suggests that soft wearables with non-emissive dynamic patterns have the potential (1) to embed information organically on the body, (2) to give easy-to-understand in-situ intensity information, and (3) to keep attention on the exercise instead of on performance measures.
Current research in wearable technologies has shown that real-time tactile instructions delivered through vibrotactile stimulation can support the learning of physical activities. While vibration-based tactile cues can draw attention to a location on the body, they struggle to convey the direction of a movement. We propose inflatables as an alternative form of actuation that expresses such information through pressure. Inspired by notions from embodied interaction and somaesthetic design, we present in this paper a research through design (RtD) project that substitutes directional metaphors with push against the body. The result, Flow, is a wearable designed to cue six movements of the wrist/forearm to support the training of elementary sensory-motor skills in physical activities such as foil fencing. We contribute a description of the design process and reflections on how to design tactile motion instructions with inflatables.
Researchers in HCI and STS are increasingly interested in describing ethics and values relevant to design practice, including the formulation of methods to guide value application. However, little work has addressed ethical considerations as they emerge in everyday conversations about ethics in venues such as social media. In this late-breaking work, we describe online conversations about a concept known as "asshole design" on Reddit, and the relationship of this concept to another practitioner-focused concept known as "dark patterns." We analyzed 1002 posts from the subreddit /r/assholedesign to identify the types of artifacts being shared and the interaction purposes that were perceived to be manipulative or unethical as a form of "asshole design." We identified a subset of these posts relating to dark patterns, quantifying their occurrences using an existing dark patterns typology.
We introduce a novel video chat interface that uses a 360° photo as the background in order to offer richer contextual and background information. We conducted a preliminary user evaluation in a lab setting. Paired participants were randomly assigned to one of two conditions, using either a regular interface or the 360° photo interface. Each pair video chatted, then completed a post-study survey and answered several questions about their experience. Participants reported less behavioral interdependence and more inclusion when using the 360° photo video chat interface. They also reported strong interest in employing it in long-distance intimate relationships and made suggestions for design iterations.
In this paper, we introduce a novel system of password generation, MementoKey, consisting of private words that exist only in a user's memory and a corresponding set of public (non-secret) words that will facilitate users' recall of the private words, which they are associated with. We will demonstrate how MementoKey offers a useful alternative to existing options for storing passwords in password managers, or to using cryptographically weak, but memorable, passwords. We have conducted a user study to evaluate the word-association technique for recalling passwords and the effectiveness of our prototype software training and checking system to guide the user successfully through the memorization process. Our study involving 60 diverse participants indicates that our prototype can effectively lead users through a visualization and memorization technique to create a strong word-association memory between pairs of adjectives and nouns.
Participatory design prescribes intensive stakeholder involvement in the design process. One challenge in such projects is to enable stakeholders to develop an open mind for novel solutions of the design problem at hand. When designing social technology, this is further complicated by prejudices about technology as being too blunt and inadequate to interfere with the sensitivity of social context. In this paper, we present a novel approach that supports a more neutral and open discussion on the benefits and pitfalls of social technology. The approach helps stakeholders see social technology in a broader perspective, which in turn enables the design of solutions with improved social sensitivity.
Resocialisation is a guided process by which ex-convicts are introduced back into society. An issue that arises in this process is that ex-convicts are behind on technological developments when they return to society. Here, we present work on how quantified-self technology, as an alternative to the present-day ankle monitor, can be a helpful tool for gaining an overview of and insight into their progress. In particular, we present a prototype that physically monitors stress levels as an indicator of behavioural patterns. Results from research with former convicts show how giving ownership over tracking data can help this user group understand their societal status and become more sovereign during the resocialisation process. Finally, we reflect on ethical questions regarding data gathering, the Quantified Other, and privacy for ex-convicts.
As the lines between the digital and analog worlds become increasingly blurred, it is nearly impossible to traverse modern life without creating a digital footprint. This integration is so deep-rooted in the fabric of society that anyone attempting to disconnect from today's hyperconnected world would have to move away from civilization. Weiser's vision of the omnipresent, ubiquitous computer of the 21st century [21] has been realized, but at a cost. With invisible interfaces, we forego the ability to recognize when we are being watched, heard, or influenced by external actors. This paper takes a bottom-up approach, using design fiction narratives to explore how to design mechanisms of control (MoC) that may help reinstate human control and agency over our data. Preliminary results show emergent themes pertaining to data access, governance, and sharing; the forms of MoC; as well as methodological lessons.
In the future, emergency services will increasingly use technology to assist dispatchers and call takers with information during an emergency. One example is the use of drones to survey an emergency situation and provide contextual knowledge to call takers and first responders. The challenge is that drones can be difficult for users to maneuver in order to see specific items. In this paper, we explore the idea of a drone being controlled by an emergency call taker using embodied interaction on a tangible chair. The interactive chair, called Flight Chair, allows call takers to perform hands-free control of a drone through body movements on the chair, including tilting and turning one's body.
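One plausible mapping from chair movements to drone commands is sketched below; the axes, dead zone, and gains are invented for illustration and are not the Flight Chair's actual control scheme.

```python
# Sketch: map chair tilt (pitch) to forward speed and chair rotation to yaw.
def chair_to_drone(pitch_deg, yaw_rate_dps,
                   pitch_deadzone=5.0, gain_fwd=0.02, gain_yaw=0.5):
    forward = 0.0
    if abs(pitch_deg) > pitch_deadzone:  # ignore small, unintentional leans
        sign = 1 if pitch_deg > 0 else -1
        forward = gain_fwd * (pitch_deg - sign * pitch_deadzone)
    return {"forward_mps": forward, "yaw_dps": gain_yaw * yaw_rate_dps}

print(chair_to_drone(pitch_deg=15.0, yaw_rate_dps=10.0))
# -> a 15 degree lean gives 0.2 m/s forward; turning the chair yaws the drone
```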
Recent years have seen numerous attempts to imbue conversational agents with marked identities by crafting their personalities. However, the question remains as to how such personalities can be systematically designed. To address this problem, this paper proposes a conceptual framework for designing and communicating agent personalities. We conducted two design workshops with 12 designers, discovering three dimensions of an agent personality and three channels to express it. The study results revealed that an agent personality can be crafted by designing common traits shared within a service domain, distinctive traits added for a unique identity, and neutral traits left intentionally undecided or user-driven. Such a personality can be expressed through how an agent performs services, what content it provides, and how it speaks and appears. Our results suggest a renewed view of the dimensions of conversational agent personalities.
Lucid dreaming, knowing one is dreaming while dreaming, is an important tool for exploring consciousness and bringing awareness to different aspects of life. We present a proof-of-concept system called Lucid Loop: a virtual reality experience where one can practice lucid awareness via biofeedback. Visuals are creatively generated before the participant's eyes using a deep-learning Artificial Intelligence algorithm to emulate the unstable and ambiguous nature of dreams. The virtual environment becomes more lucid, or "clear", when the participant's physiological signals, including brain waves, respiration, and heart rate, indicate focused attention. Lucid Loop enables the virtual embodied experience of practicing lucid dreaming where written descriptions fail. It offers a valuable and novel technique for simulating lucid dreaming without having to be asleep. Future developments will validate the system and evaluate its ability to improve lucidity by detecting and adapting to a participant's awareness.
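The biofeedback loop can be caricatured as follows: fuse the physiological signals into a focus index and map it to a render-clarity parameter. The normalization ranges, weights, and thresholds below are loose assumptions for illustration only, not Lucid Loop's actual signal processing.

```python
# Sketch: physiological signals -> focus index -> visual 'lucidity' (0..1).
def focus_index(alpha_power, breath_rate_bpm, heart_rate_bpm, w=(0.5, 0.25, 0.25)):
    alpha = min(alpha_power / 10.0, 1.0)                      # calm focus (assumed scale)
    breath = max(0.0, 1.0 - abs(breath_rate_bpm - 6) / 10.0)  # slow breathing
    heart = max(0.0, 1.0 - abs(heart_rate_bpm - 60) / 40.0)   # low arousal
    return w[0] * alpha + w[1] * breath + w[2] * heart

def lucidity(focus, lo=0.2, hi=0.8):
    return min(max((focus - lo) / (hi - lo), 0.0), 1.0)       # clamp to [0, 1]

print(lucidity(focus_index(alpha_power=8.0, breath_rate_bpm=6, heart_rate_bpm=62)))
```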
Prototyping involves evolving an idea through various stages of design until it reaches a certain level of maturity; these stages include low-, medium-, and high-fidelity prototypes. A workload analysis of prototyping using NASA-TLX showed increases in frustration, temporal demand, and effort, and a decline in performance, as participants progressed from low to high fidelity. Upon reviewing numerous commercial and academic tools that directly or indirectly support software prototyping in one aspect or another, we identified the need for a comprehensive solution supporting the entire software prototyping process. In this paper, we introduce Eve, a prototyping workbench that enables users to sketch their concept as a low-fidelity prototype and generates the subsequent medium- and high-fidelity prototypes by means of UI-element detection and code generation. We evaluated Eve with 15 UI/UX designers using the System Usability Scale (SUS); the results indicate good usability and high learnability (SUS score: 78.5). In future work, we aim to study the impact of Eve on the subjective workload experienced by users during software prototyping.
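Since the usability score comes from the System Usability Scale, its standard scoring can be shown in a few lines; the response vector is made up for the example.

```python
# Worked example of standard SUS scoring: odd items contribute (rating - 1),
# even items contribute (5 - rating); the sum is scaled by 2.5 to 0-100.
def sus_score(responses):
    assert len(responses) == 10            # ten Likert ratings, 1-5, item 1 first
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 4, 1]))  # -> 85.0
```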
This research utilizes chemical inflation as an output method for zero-energy pop-up fabrication, a technique for instant, hardware-free shape change. By applying state-changing techniques as a medium for material activation, we provide a framework for a two-part assembly process: on the manufacturing side, a rigid structural body is given its form; on the user side, the form potential of a soft structure is activated and the structure becomes complete. To demonstrate this technique, we created two use cases: firstly, a compression material for emergency response, and secondly, a self-inflating packaging system. This paper provides details on the auto-inflation process as well as the corresponding digital tool for the design of pneumatic materials. The results show the efficiency of zero-energy auto-inflatable structures for both medical applications and packaging. This rapidly deployable inflatable kit starts from the assumption that every product can contribute by responding in the best way to a specific application.
This paper presents a first prototype of Mobeybou, a Digital Manipulative that uses physical blocks to interact with digital content. It aims to create an environment for promoting the development of language and narrative competences, as well as digital literacy, among pre- and primary-school children. Mobeybou offers a variety of characters, objects, and landscapes from various cultures around the world and can be used to create multicultural narratives. An interactive app developed for each country provides additional cultural and geographical information. A pilot study carried out with a group of 3rd graders showed that Mobeybou motivated and inspired them to actively and collaboratively create narratives integrating elements from the different countries. This may indicate Mobeybou's potential to promote multiculturalism.
Designing for digital or robotic fabrication typically involves a virtual model used to determine and coordinate the required construction operations. As a result, the creative design space becomes constrained to material expressions that can be predicted through digital modeling. This paper describes our preliminary thinking and first empirical results when this digital modeling phase is skipped and the designing occurs interactively 'with' the fabrication operations themselves. By analyzing the material responses of corrugated cardboard to simple linear cutting operations executed by a robotic arm, we demonstrate how emergent material effects can be discovered improvisationally. Such material effects cannot be virtually modeled; however, they can be recreated and controlled by the robotic manipulations. We believe this form of 'material sketching' broadens the advances in 'human-fabrication interaction' towards novel and unforeseeable expressions of physical form that require a much more direct, yet still digital, relationship with materiality.
Balance is an essential physical skill to master, but a challenging one, given that it requires heightened body awareness to control, maintain, and develop. In HCI physical training research, the design space of technology support for developing such body awareness remains narrow. Here, we introduce BalBoa, a balancing board to support balance training during handstands. We describe key highlights of the design process behind BalBoa and present a work-in-progress prototype, which we tested with handstand beginners and experts. We discuss feedback from our users and preliminary insights, and sketch future steps towards a fully developed prototype.
Mid-air haptics is an emerging technology that can produce a sense of touch in mid-air using ultrasound. While the use of mid-air haptics has a lot of potential in various domains such as automotive, virtual reality or professional healthcare, we suggest that the home is an equally promising domain for such applications. We organized an ideation workshop with 15 participants preceded by a sensitizing phase to identify possible applications for mid-air haptics within the home. From the extensive set of ideas that resulted from this, five themes emerged: guidance, confirmation, information, warning and changing status. As general 'application categories', we propose that they can provide a useful basis for the future design and development of mid-air haptic applications in the home, and possibly also beyond.
Augmented Reality (AR) tools are currently targeted primarily at programmers, making designing for AR challenging and time-consuming. We developed an interactive prototype, PintAR, that enables the authoring and rapid prototyping of situated experiences by allowing designers to bring their ideas to life using a digital pen for sketching and a head-mounted display for visualizing and interacting with virtual content. In this paper, we explore the versatility such a tool could provide through case studies of a researcher, an artist, a ballerina, and a clinician.
Writing on walls in public spaces has been a way for people to communicate and express themselves. In this paper, we present the design of a participatory feedback gathering system inspired by this practice. By engaging the campus community in sharing their feedback on the use of spaces and facilities, we aim to encourage them to participate in co-creating the campus space. Our prototype combines a physical object's affordance to attract attention with an internet forum to gather feedback. We document some key findings from our exploratory study and share ideas about future work.
The role of libraries is rapidly shifting, in large part as a consequence of digitization. In addition to providing access to collections of books and other physical media, public libraries today are embracing a new role as urban hubs in which a wide range of activities take place. In these activities, local knowledge is developed, exchanged, and disseminated. However, there are still very few digital services that support this new role. Here, we explore how to develop digital services for supporting and leveraging user-generated video content in library activities. Based on interviews and design scenarios as probes, we describe the potentials and challenges of designing such services, as seen from the perspective of library staff. Our insights will inform the design of a new digital service for publics to participate in the collaborative production of videos to document, exchange, and disseminate local knowledge generated in library activities.
This paper articulates the design, manufacturing, and deployment of transTexture, a digital lamp featuring dynamic kinetic textures and changeable lighting effects. We deployed transTexture in the homes of two domain-expert participants in a preliminary study to explore how the computational materiality of interaction can fit a changing everyday environment. We conducted two semi-structured interviews with the participants, at the beginning and end of the field study. Based on the lived experiences uncovered in our initial findings, we elaborate on two notions that describe types of user engagement with computational materiality: transformation and integration. We conclude with lessons learned from our study that will guide our future research on computational materiality.
Machine learning (ML) is now widely used to empower products and services, but there is a lack of research on tools that involve designers in the entire ML process. Thus, designers who are new to ML technology may struggle to fully understand the capabilities of ML, users, and scenarios when designing ML-empowered products. This paper describes a design tool, ML-Process Canvas, which assists designers in considering the specific factors of the user, ML system, and scenario throughout the whole ML process. The Canvas was applied to a design project and was observed to contribute to the conceptual phase of UX design practice. In the future, we hope that the ML-Process Canvas will become more practical through continued use in design practice.
The rapid growth of urban populations creates challenges for food production. One solution that is potentially more sustainable than current methods is localized production, in particular food production by individuals at home. Growing food at home is possible, but it is a process that requires motivation, knowledge and skills. Here, we present the design of a sensor platform aimed at helping individuals in urban environments grow food at home by informing them about the needs of their plants and, based on urban farming practices, by connecting them with a network of growers to share knowledge and produce.
In this paper, we propose an approach for sign language recognition that uses a virtual reality headset to create an immersive environment. We show how features from data acquired by the Leap Motion controller, using an egocentric view, can be used to automatically recognize a user's signed gesture. The Leap features are fed to a random forest for real-time classification of the user's gesture. We further analyze which of these features are most important for gesture recognition in an egocentric view. To test the efficacy of our approach, we evaluate it on the 26 letters of the American Sign Language alphabet in a virtual environment with an application for learning sign language.
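The recognition pipeline can be sketched with scikit-learn; the feature layout (fingertip positions and joint angles per frame) and the random data below are placeholders for the actual Leap Motion features.

```python
# Sketch: per-frame Leap Motion features -> random forest letter classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_frames, n_features = 500, 30      # e.g., 5 fingertips x 3D position + angles
X = np.random.rand(n_frames, n_features)               # placeholder features
y = np.random.choice(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"), size=n_frames)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:1]))                         # real-time per-frame prediction
print(np.argsort(clf.feature_importances_)[-5:])  # indices of top-5 features
```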
In this paper we present ShadowLamp, a lamp concept supporting controllable shadow casting for displaying ambient information. The concept uses electrochromic displays to mask light and thereby switch projected shadows. We implemented a prototype in a hexagonal frame with six separately controlled LEDs, compartmentalized to cast shadows at 60° angles. Alongside the LEDs, each compartment contains an electrochromic display for shadow control. As a use case, we fabricated displays for a children's book and used them to change the shadows as the story progresses. The displays and LEDs are controlled by a Bluetooth-connected Android application.
The development of personal fabrication technologies has enabled end users to model and prototype desired objects. 3D printing technologies have eased our access to solid models; however, rapidly producing thin fibers at a personal level, which could help enrich the textures of models, remains a challenge. We propose a system and method inspired by cotton candy making, which uses rotary jet-spinning to extract thin plastic fibers at high speed. We report our exploration of the proposed method, studying various plastic materials, the effects of rotation speed, and the hole size of the fiber exit. The method allows plastic fibers to be extracted at the micro-scale, and we propose various example use cases. Our approach can be combined with traditional 3D printing techniques where soft and/or hairy models are required to design the texture of a 3D model.
Users bundle the consumption of their favorite content in temporal proximity, according to their preferences and tastes; thus, the underlying attributes of items consumed close together implicitly match user preferences. However, current recommender systems largely ignore this fundamental driver when identifying matching items. In this work, we introduce a novel temporal-proximity filtering method to enable item matching. First, we demonstrate that proximity preferences exist. Second, we present a temporal-proximity-induced similarity metric driven by user tastes, and third, we show that this induced similarity can be used to learn items' pairwise similarity in attribute space. The proposed model relies on no knowledge outside users' consumption and provides a novel way to build a recommender for new items driven by user preferences and tastes.
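One way to realize such an induced similarity is sketched below: count co-consumptions of item pairs by the same user within a time window and normalize by item popularity. The window size, normalization, and data are illustrative assumptions, not the paper's model.

```python
# Sketch: temporal-proximity co-consumption counts -> item-item similarity.
from collections import Counter
from itertools import combinations

events = [("u1", "a", 0), ("u1", "b", 1), ("u1", "c", 50),
          ("u2", "a", 10), ("u2", "b", 10.5)]   # (user, item, time in hours)

def proximity_similarity(events, window=2.0):
    co, solo, by_user = Counter(), Counter(), {}
    for u, i, t in events:
        by_user.setdefault(u, []).append((t, i))
        solo[i] += 1
    for items in by_user.values():
        for (t1, i1), (t2, i2) in combinations(sorted(items), 2):
            if i1 != i2 and abs(t1 - t2) <= window:
                co[frozenset((i1, i2))] += 1
    return {tuple(sorted(p)): n / (solo[min(p)] * solo[max(p)]) ** 0.5
            for p, n in co.items()}

print(proximity_similarity(events))  # {('a', 'b'): 1.0}
```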
There is a significant gap between the high-level, semantic manner in which we reason about image edits and the low-level, pixel-oriented way in which we execute these edits. While existing image-editing tools provide a great deal of flexibility for professionals, they can be disorienting to novice editors because of the gap between a user's goals and the unfamiliar operations needed to actualize them. We present Eevee, an image-editing system that empowers users to transform images by specifying intents in terms of high-level themes. Based on a provided theme and an understanding of the objects and relationships in the original image, we introduce an optimization function that balances semantic plausibility, visual plausibility, and theme relevance to surface possible image edits. A formative evaluation finds that we are able to guide users to meet their goals while helping them to explore novel, creative ideas for their image edit.
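The shape of such an objective can be sketched as a weighted sum over candidate edits; the component scorers and weights below are stand-ins, since the paper's actual scoring model is not reproduced here.

```python
# Sketch: score candidate edits by semantic plausibility, visual plausibility,
# and theme relevance, then surface the best one. Scorers are placeholders.
semantic_plausibility = lambda e: e["sem"]    # e.g., object relations still sensible?
visual_plausibility = lambda e: e["vis"]      # e.g., lighting/scale consistency
theme_relevance = lambda e: e["theme"]        # e.g., similarity to the chosen theme

def edit_score(edit, weights=(0.4, 0.3, 0.3)):
    return (weights[0] * semantic_plausibility(edit)
            + weights[1] * visual_plausibility(edit)
            + weights[2] * theme_relevance(edit))

candidates = [{"sem": 0.9, "vis": 0.7, "theme": 0.4},
              {"sem": 0.6, "vis": 0.8, "theme": 0.9}]
print(max(candidates, key=edit_score))  # -> the second candidate (score 0.75)
```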
Touchscreens combine input and output in a single interface. While this enables an intuitive interaction and dynamic user interfaces, the fat-finger problem and the resulting occlusions still impact the input accuracy. Previous work presented approaches to improve the touch accuracy by involving visual features on the top side of fingers, as well as static compensation functions. While the former is not applicable on recent mobile devices as the top side of a finger cannot be tracked, compensation functions do not take properties such as finger angle into account. In this work, we present a data-driven approach to estimate the 2D touch position on commodity mutual capacitive touchscreens which increases the touch accuracy by 23.0% over recently implemented approaches.
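The data-driven estimation can be sketched as a regression from the raw capacitive image around a contact to the ground-truth 2D position; the shapes, model choice, and data below are placeholders rather than the paper's method.

```python
# Sketch: regress (x, y) touch position from flattened capacitance patches.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

n, patch = 1000, 7                      # 7x7 capacitance patch per touch (assumed)
X = np.random.rand(n, patch * patch)    # placeholder capacitive images
y = np.random.rand(n, 2)                # placeholder ground-truth positions

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict(X[:1]))             # estimated 2D touch position
```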
In this paper, we propose the integration of audience atmosphere generation techniques into Interactive Storytelling (IS) engines to obtain more realistic and variable Virtual Reality (VR) training systems. We outline a number of advantages of this novel combination compared to current atmosphere generation techniques. The features of recent IS engines can be extended to automatically adapt the atmosphere produced by a group of virtual humans in response to user intervention while staying coherent with the unfolding story of the training scenario. This work is currently being developed in the context of a VR training for teachers, in which they learn to manage a difficult classroom under the guidance of an instructor.
Machine Learning (ML) is a useful tool for modern game designers but often requires a technical background to understand. This gap of knowledge can intimidate less technical game designers from employing ML techniques to evaluate designs or incorporate ML into game mechanics. Our research aims to bridge this gap by exploring interactive visualizations as a way to introduce ML principles to game designers. We have developed QUBE, an interactive level designer that shifts ML education into the context of game design. We present QUBE's interactive visualization techniques and evaluation through two expert panels (n=4, n=6) with game design, ML, and user experience experts.
Today's small group interactions often occur in multi-device, multi-artifact ecosystems. CHI researchers studying these group interactions may adopt a socio-behavioral approach, a sensing/data-mining approach, or both. A mixed-methodological approach for studying group interactions in collocated settings requires collecting data from a range of sources, such as audio, video, multiple sensor streams, and multiple software logs. Analyzing these disparate data sources systematically, with opportunities to rapidly form and correct research insights, can help researchers who study group interactions. But engineering solutions that assimilate multiple data sources to support different methodologies, ranging from grounded theory to log analysis, are rarely found. To address this frequent and tedious problem of data collection and processing in mixed-methodology studies of group interactions, we introduce a workbench tool: Interaction Proxemics in Multi-Device Ecologies (IPME). The IPME workbench synchronizes multiple data sources, provides data visualization, and offers opportunities for data correction and annotation.
The evolution of the Internet lets us listen to music from many different countries. However, understanding lyrics written in foreign languages is still difficult, even though many international songs have been translated. In this paper, we propose an interactive lyric translation system and describe its implementation. Users can modify lyrics by selecting a sample lyric from candidate translations and then freely edit the lyric with the proposed system. The system also allows users to listen to their translation through a singing-voice synthesizer and to search for related words. We conducted experiments with 12 participants to compare lyric translation with the proposed system to manual lyric translation. Translations produced with the proposed system received better evaluations than manual translations.
Citizen Science projects ask their participants to contribute work to pre-defined topics, thereby typically rendering the participants as mere consumers of often narrowly defined tasks. In this work-in-progress paper, we present our work on an interactive experimentation platform that allows anybody - researchers as well as members of the crowd - to run experiments and test scientific hypotheses with a local crowd of volunteers. The platform also enforces a lightweight review process for teaching its users how to formulate valid scientific hypotheses and experimental designs.
There is much work in progress by the W3C and others on a Web-standards-compliant Web of Things (WoT), which it is hoped will unify the current Internet of Things infrastructure. Our contribution uses the Document Object Model (DOM) to represent complex physical environments, with a CSS-like syntax for storing and controlling the state of 'things' within them. We describe how JavaScript can be used in conjunction with these to create an approach that is familiar to Web developers and may help them transition more smoothly into WoT development. We share our implementation and explore some of the many potential avenues for future research, including rich WoT development tools and the possibility of content production for physical environments.
The goal of our research has been to create software that extends the benefits of virtual reality (VR) to mathematics education. We report on the design and evaluation of a VR application meant to support students' reasoning about objects in three-dimensional (3D) coordinate systems and to explore the possibilities of the application for mathematics education in high school classrooms.
This paper presents the results of an empirical exploration of 10 cursor movement metrics designed to measure respondent hesitation in online surveys. As a use case, this work considers an online survey exploring how people gauge the electricity consumption of domestic appliances. The cursor metrics were derived computationally from the mouse trajectories recorded while rating the consumption of each appliance and were analyzed using Multidimensional Scaling, Jenks Natural Breaks, and the Jaccard Similarity Index. The results show that, although the metrics measure different aspects of the mouse trajectories, they agree on which appliances generated higher levels of hesitation. The paper concludes with an outline of future work to further understand how cursor trajectories can be used to measure respondent hesitation.
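A few such metrics can be computed directly from timestamped cursor samples, as in this sketch; the pause and movement thresholds are illustrative, and these are generic hesitation measures rather than the paper's exact ten metrics.

```python
# Sketch: simple hesitation metrics from a mouse trajectory.
import numpy as np

def hesitation_metrics(xs, ys, ts, pause_ms=200, move_eps=2.0):
    xs, ys, ts = map(np.asarray, (xs, ys, ts))
    d = np.hypot(np.diff(xs), np.diff(ys))          # step distances (px)
    dt = np.diff(ts)                                # step durations (ms)
    paused = (d < move_eps) & (dt >= pause_ms)      # barely moving for a while
    x_flips = np.sum(np.diff(np.sign(np.diff(xs))) != 0)  # direction changes
    return {"pauses": int(paused.sum()),
            "pause_time_ms": float(dt[paused].sum()),
            "x_reversals": int(x_flips)}

print(hesitation_metrics([0, 1, 1, 40, 20], [0, 0, 0, 5, 5],
                         [0, 100, 400, 500, 700]))
```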
Eliciting cybersecurity behavior change in users has been a difficult task. Although most users have concerns about their safety online, few take precautions. Transformational games offer a promising avenue for cybersecurity behavior change. To date, however, studies typically focus on entertainment value instead of investigating the effectiveness and design potential of games in cybersecurity. As a first step to filling this gap, we present the design of Hacked Time, a desktop game that aims to encourage cybersecurity behavior change by translating self-efficacy theory into the game's design. As cybersecurity games are a relatively novel area, our design aims to serve as a prototype for mapping specific behavior change principles relevant to this area onto game design practice.
In this paper, we introduce the concept of a hybrid product that combines a digital game with physical experiences, and we discuss practical recommendations for developing hybrid products in the domain of wildlife conservation for children. IPANDA, comprising hardware and software applications with sensing technology, gathers real-time environmental data and connects it to a virtual wildlife animal experiencing environmental challenges related to its living habits. Children who play with the product can learn about the environment around them and develop wildlife protection awareness. To evaluate our conceptual system, we created a preliminary prototype and conducted a user study using semi-structured interviews and the Smileyometer. Our findings reveal IPANDA as a promising tool and groundwork for encouraging children to explore the physical environment and gain wildlife protection education.
Room-scale virtual reality games allow players to experience an unmatched level of presence, in large part because of the natural navigation provided by physical walking. However, the tracking space is still limited, and viable alternatives or extensions are required to reach further virtual destinations. Our current work focuses on traveling over (very) large distances, an area where approaches such as teleportation are too exhausting and WIM teleportations potentially reduce presence. Our idea is to equip players with the ability to switch on demand from a first-person to a third-person, god-mode perspective. From above, players can command their avatar, similar to a real-time strategy game, and initiate travel over large distances. In our first exploratory evaluation, we learned that the proposed dynamic switching is intuitive, increases spatial orientation, and allows players to maintain a high degree of presence throughout the game. Based on the outcomes of a participatory design workshop, we also propose a set of extensions to our technique to be considered in the future.
This paper analyses the impact that introducing game elements can have on players' artistic valuation of video games. We put forth the hypothesis that aesthetic experiences are incompatible with game elements (challenges and rewards/penalties). We tested it by letting subjects (n=76) experience two variants of the same artistic video game, one with game elements and one without. Using a mixed-methods approach, we studied results from self-reports and open-ended questionnaires. These indicate that, in the game version, subjects reported being less focused on understanding the experience's meaning and found it less meaningful, to a statistically significant degree. We therefore conclude that game designers seeking to mediate artistic experiences should be cautious about introducing game elements, as they can negatively impact the experience's value.
Public transport can be a place where commuters feel rushed or stressed. Missing your train, a delayed bus, or crowdedness at the station does not induce happiness in most people. As a consequence, prosocial behaviour like offering someone a seat is displayed less often. We discuss the design and design process of MirrorMe, a simple communal game to induce positive mood in commuters. MirrorMe aims to increase prosocial behaviour through mimicry: commuters are challenged to "make a face" and thereby connect to other commuters. MirrorMe will be installed on a public display close to a train station, accessible to all commuters and passers-by. This work addresses the need for games and play in public settings to stimulate prosocial behaviour. It exemplifies how multidisciplinary HCI approaches in a gamejam setting can contribute to real-life challenges. We conclude with open questions for impact evaluation in future work.
Physical play opportunities for people with motor disabilities typically do not include co-located play with peers without disabilities in traditional sport settings. In this paper, we present a prototype of a wheelchair-accessible interactive floor projection system, iGYM, designed to enable people with motor disabilities to compete on par with, and in the same environment as, peers without disabilities. iGYM provides two key system features, peripersonal circle interaction and an adjustable game mechanic (physics), that enable individualized game calibration and wheelchair-accessible manipulation of virtual targets on the floor. Preliminary findings from our pilot study with people using power wheelchairs, people using manual wheelchairs, and people without disabilities showed that the prototype was accessible to all participants at higher than anticipated target speeds. Our work has implications for designing novel physical play opportunities in inclusive traditional sport settings.
Digital gameplay experience depends not only on the type of challenge a game provides, but also on how the challenge is presented. With the introduction of emotional challenge as a novel challenge type and the increasing popularity of virtual reality (VR), there is a need to explore the player experience invoked by emotional challenge in VR games. We selected two games that provide emotional challenge and conducted a 24-subject experiment to compare the impact of a VR and a monitor-display version of each game on multiple aspects of player experience. Preliminary results show that many positive emotional experiences were enhanced significantly in VR while negative emotional experiences such as horror and fear were less affected; participants' perceived immersion and presence were higher with VR than with a monitor display. Our finding of VR's expressive capability for emotional experiences may encourage more design and research on emotional challenge in VR games.
Despite the known benefits of commensal eating, eating alone is becoming increasingly common as people struggle to find time and manage geographical boundaries to enjoy a meal together. Eating alone, however, can be boring and less motivating, and has been shown to negatively impact a person's health and wellbeing. To remedy such situations, we take a celebratory view of robotic technology, which offers unique opportunities for solo diners to feel engaged and indulged in dining. We present Fobo, a speculative design prototype of a mischievous robotic dining companion that acts and behaves like a human co-diner. Besides tackling solo dining, this work also aims to reorient the perception that robots must always be infallible: they can be erroneous and clumsy, as we humans are.
Escape the Smart City is a critical pervasive game that uses an escape room format to help players develop an understanding of the implications of urban surveillance technologies. Set in downtown Amsterdam, players work together as a team of hackers to stop the mass deployment of an all-seeing AI-enhanced surveillance system. In order to defeat the system players need to understand its attributes and exploit its weaknesses. Novel gameplay elements include locating hidden surveillance cameras in the city, discovering and exploiting algorithmic biases in computer vision, and exploring new techniques to avoid facial recognition systems. This work makes two distinct contributions to the CHI community: first, it introduces critical pervasive games as an approach to engage the public in complex sociotechnical issues, and second, it experiments with the escape room format as a platform for critical play.
Hybrid bio-digital games integrate real, biological materials into computer systems. They offer a rich, playful space in which interactions between humans, computers, and non-human organisms can be explored. However, the concept of video game 'glitching' in hybrid bio-digital games, specifically glitches that result from interactions between the biology and the computer hardware and/or software, has not been explored in great detail. We report two incidents of glitches observed during Mold Rush, a hybrid bio-digital game based on the growth patterns of living mold: the creation of an additional game character (Moldy Ghosts) and a gameplay freeze (a Yeasty Invasion). In interpreting our observations, we question the potential for glitches to become valuable tools in framing HCI investigations into designing productive and meaningful biological-digital interactions. The goal of this paper is to propose three testable routes by which glitches could be implemented: 1) glitch as a tool for learning, 2) glitch as a precursor for an experience-enhancing game component, and 3) glitch as an instigator of discourse on the ethical implications of bio-digital games.
Shared game control (SGC) is a multiplayer context considered within games user research. With the popular Twitch Plays Pokémon, settings of this type have also received broad media attention. In this paper, we introduce and describe HedgewarsSGC, our modifications to the open-source game Hedgewars for investigating different player roles in this shared game control context: besides considering competing groups who share control over their units via input aggregators, it also provides options for spectators who do not want to give up individual control. Thus, HedgewarsSGC is an approach to investigate SGC in such a scenario and, additionally, allows further reasoning about input aggregation.
Gamification can change how and why people interact with software. A common approach is to use quantitative feedback to give users a feeling of progress or achievement. There are, however, other ways to provide users with motivation or meaning during normal computer interactions, such as emotional reinforcement, which could provide a powerful new tool to allow the positive effects of gamification to reach wider contexts. This paper investigates the design and evaluation of a mobile to-do list application, 'Tamu To-Do', which utilises gamified emotional reinforcement, as seen in Figure 1. A week-long field study (N=9) recorded user activity and impressions of the application. The results supported emotional reinforcement's potential as a gamification strategy for improving user motivation and engagement.
Nowadays video games are more inclusive: children, disabled people, and seniors are considered. However, there are still players who require a special configuration to enjoy playing on equal terms. One such group is left-handed players. To determine whether a configuration designed for the needs of a left-handed audience is warranted, we carried out a study in which left-handed players played with two types of control: a standard right-handed configuration and a customized left-handed configuration. We found a significant main effect of the left-handed control configuration on player experience. The study reveals the importance of a catered control configuration in creating a fair, non-stressful, and user-friendly environment for players.
Virtual reality (VR) games continue to grow in popularity with the advancement of commercial VR capabilities such as the inclusion of full-body tracking. This means game developers can design for novel interactions involving a player's full body rather than relying solely on controller input. However, existing research on evaluating player interaction in VR games primarily focuses on game content and inputs from game controllers or player hands, and current approaches for evaluating full-body interactions are limited to simple qualitative observation, which makes evaluation difficult and time-consuming. We present a Full Room Virtual Reality Investigation Tool (FRVRIT), which combines data recording and visualization to provide a quantitative solution for evaluating player movement and interaction in VR games. The tool facilitates objective data observation through multiple manipulable visualization methods, allowing developers to better observe and record player movements in the VR space and iterate on the desired interactions in their games.
Research shows that exposure to nature has benefits for people's mental and physical health and that ubiquitous and mobile technologies encourage engagement with nature. However, existing research in this area is primarily focused on people without visual impairments and is not inclusive of blind and partially sighted individuals. To address this gap in research, we interviewed seven blind people (without remaining vision) about their experiences when exploring and experiencing the outdoor natural environment to gain an understanding of their needs and barriers and how these needs can be addressed by technology. In this paper, we present the three themes identified from the interview data: independence, knowledge of the environment, and sensory experiences.
Several studies have investigated the clinical efficacy of remote, internet-based, and chatbot-based therapy, but other factors, such as enjoyment and smoothness, are also important to a good therapy session. We piloted a comparative study in which 10 participants interacted with human therapists and with a chatbot (simulated using a Wizard of Oz protocol). We found evidence to suggest that, compared against a human therapist control, participants find chatbot-provided therapy less useful and less enjoyable, and their conversations less smooth (a key dimension of a positively regarded therapy session). Our findings suggest that research into chatbots for cognitive behavioural therapy would be more effective if it directly addressed these drawbacks.
As the accuracy of Automatic Speech Recognition (ASR) nears human-level quality, it may become feasible as an accessibility tool that transcribes spoken language to text for people who are Deaf and Hard of Hearing (DHH). We conducted an in-person laboratory study to investigate requirements and preferences for new ASR-based captioning services used in a small-group meeting context. Our 105 DHH participants provided valuable feedback on a variety of caption-appearance parameters (strongly preferring familiar styles such as closed captions), and their open-ended comments reveal an interesting tension between caption readability (visibility of text) and occlusion (captions blocking the video contents). In this paper we start a discussion on how ASR captioning could be visually styled to improve text readability for DHH viewers.
We co-designed paper prototype dashboards for virtual environments for three children with diverse sensory needs. Our goal was to determine individual interaction styles in order to enable comfortable and inclusive play. As a first step towards an inclusive virtual world, we began by designing for three sensory-diverse children with labels of neurotypical, ADHD, and autism, respectively. We focused on their leisure interests and their individual sensory profiles. We present the results of co-design with family members and of paper prototyping sessions that family members conducted with the children. The results contribute preliminary empirical findings for accommodating different levels of engagement and for empowering users to adjust environmental thresholds through interaction design.
Parkinson's disease is a progressive neurodegenerative disorder characterized in part by motor fluctuations throughout the day. These fluctuations make clinical assessment hard to accomplish in a single appointment, as the patient's status at that moment may differ greatly from their condition two hours earlier. Clinicians can only evaluate patients from time to time, making symptom fluctuations difficult to discern. The emergence of wearable sensors has enabled continuous monitoring of patients outside the clinic, in a free-living environment. Although these sensors exist and are being explored in research settings, there has been limited effort to understand which information should be presented to non-technical audiences, clinicians (and patients), and how. To fill this gap, we began with a focus group with clinicians to capture the information they would like to see derived from free-living sensors, and the different levels of detail they envision. Building on the insights collected, we developed a data-driven platform, DataPark, that presents usable visualizations of data collected from a wearable tri-axial accelerometer. It enables report parameterization and includes a battery of state-of-the-art algorithms for quantifying physical activity, sleep, and clinical evaluations. A two-month preliminary deployment in a rehabilitation clinic showed that patients feel rewarded and included by receiving a report, and that the change in paradigm is not burdensome and adds information that supports clinicians' decisions.
This late-breaking work presents initial results regarding a novel mobile-projection system aimed at helping people learn how to walk with crutches. Existing projection-based solutions for gait training are based on walking over a fixed surface (usually a treadmill). In contrast, our solution projects visual cues (footprints and crutch icons) directly onto the floor, augmenting the physical space surrounding the crutches in a portable way. Walking with crutches is a learned skill that requires continuous repetition and constant attention to detail to ensure the crutches are used correctly, avoiding negative consequences such as falls or injuries. We conducted expert consultation sessions and identified the main issues that patients face when walking with crutches, which informed the design of Augmented Crutches. We performed a qualitative evaluation and conclude with design implications: the importance of timing, self-assurance, and awareness.
Virtual humans are computer-generated characters designed to simulate key properties of human face-to-face conversation, both verbal and nonverbal. Their human-like physical appearance and nonverbal behavior set them apart from chatbot-type embodied conversational agents, and they have recently received significant interest as a potential tool for health-related interventions. As healthcare providers deliberate whether to adopt this new technology, it is crucial to examine the empirical evidence about their effectiveness. We systematically evaluated evidence from controlled studies of interventions using virtual humans regarding their effectiveness in health-related outcomes. Nineteen studies were included from a total of 3354 unique records. Although study objectives varied greatly, most targeted psychological conditions, such as mood, anxiety, and autism spectrum disorders (ASD). Virtual humans demonstrated effectiveness in improving health-related outcomes, more strongly when targeting clinical conditions, such as ASD or pain management, than general wellness, such as weight loss. We discuss the emerging differences when designing for clinical interventions versus wellness.
Sleep is a critical component of overall wellness, and pervasive and ubiquitous computing technologies have shown promise for allowing individuals to track and manage their sleep quality. However, sleep quality is also affected by interpersonal factors, especially for families with young children. In this study, we adopted a family informatics approach to understand opportunities and challenges for sleep technologies at the family level. We conducted home-based interviews with 10 families with young children, asking them about their current practices, values, and perceived role for technology. We describe challenges across three phases: bedtime, nighttime, and waking. We show that family-based sleep technologies may have the greatest impact by supporting family activities and rituals, encouraging children's independence, and providing comfort.
Shared decision making (SDM) is increasingly considered the best way to reach a treatment decision in a clinical environment. However, the use of SDM in practice can be obstructed by a number of factors, such as time constraints or lack of applicability due to patient characteristics. Our project, PrepDoc, explores how a Virtual Training Doctor (VTD) can help patients overcome some of these obstacles to experiencing effective SDM during doctor's visits. In this paper, we report on user studies conducted with 19 participants in Scotland aged 65+. The goal of these studies was to identify the reactions of this audience to the PrepDoc system, evaluate its suitability within Scotland, and elicit suggestions to improve it. Our findings revealed that the idea of empowering people to participate in SDM using a virtual agent was positively received by all participants. However, reactions to how this idea was implemented in the PrepDoc system varied greatly across participants. Based on this, our paper outlines recommendations for enhancing the user experience with VTDs, accommodating individual differences of older adults, and accounting for the national context.
Orientation and mobility (O&M) training is essential for improving the independence and wellbeing of people with visual impairments. However, the shortage of qualified trainers and unengaging training contents limit the number of O&M training recipients. In this paper, we propose HeliCoach, a drone-based intelligent tutoring system that provides cost-effective and personalized O&M training. We first elaborate on the system design and potential usage scenarios of HeliCoach. We then demonstrate the effectiveness of this concept with a preliminary user study. Finally, we discuss the implications and challenges of this system.
We investigate how technology can be used to support people with dementia to engage in Reminiscence Therapy. We used a participatory design approach carried out over three stages: scope, design and evaluation, involving five participants with dementia. We also engaged professionals and caregivers through a survey. We provide initial recommendations for engaging participants with dementia on how they wish to reminisce and what technology may support this.
Caring for a person with Alzheimer's can be a very demanding task, both physically and psychologically, and technological responses that support caregiving and improve the quality of life of people with Alzheimer's and their caregivers are lacking. Using a research through design approach, we devised a robot focused on empowering people with Alzheimer's and fostering their autonomy, taking it from initial sketch to working prototype. MATY is a robot that encourages communication with relatives and promotes routines by prompting the person to take action through a multisensorial approach (e.g., projecting biographical images, playing suggestive sounds, or emitting soothing aromas). The paper reports the iterative, incremental design process performed together with stakeholders. We share the first lessons learned in this process with HCI researchers and practitioners designing solutions, particularly robots, to assist people with dementia and their caregivers.
Mobile behaviour change applications should be evaluated for their effectiveness in promoting the intended behaviour changes. In this paper we argue that the 'gold standard' form of effectiveness evaluation, the randomised controlled trial, has shortcomings when applied to mobile applications, and we propose that N-of-1 (also known as single case design) approaches have advantages. There is currently a lack of guidance for researchers and developers on how to take this approach. We present a framework encompassing three phases and two related checklists for performing N-of-1 evaluations. We also present our analysis of using this framework in the development and deployment of an app that encourages people to walk more. Our key findings are that there are challenges in designing engaging apps that automate N-of-1 procedures, and in collecting sufficient data of good quality. Further research should address these challenges.
Ergotact introduces the possibility of including force-based rehabilitation activities for the upper limb of post-stroke survivors. These activities are integrated into a dedicated game deployed on a tabletop. The patient interacts with the game through a tangible object that has to be moved, rotated, tightened/untightened, and lifted according to the gameplay. The surface of the object is equipped with a matrix of force sensors, which allows force-based activities to be introduced into the game; for the purposes of the game, the object also includes an accelerometer and a gyroscope. The paper presents the concept and first feedback from therapists.
Current research on technology for fitness is often focused on tracking and encouraging healthy lifestyles. In contrast, we adopt an approach based on improving consumer knowledge of food energy. An interactive survey was distributed on Amazon Mechanical Turk to assess how well crowdworkers can judge the calories in a series of foods. Our subjects yielded results comparable to traditional participants, exhibiting well-known phenomena such as underestimating the energy contained in foods perceived to be healthy. Several techniques from the online education literature, such as prompts for reflection, were also investigated for their efficacy at increasing estimation accuracy. Although calories were judged more accurately in aggregate after applying these methods, the effects of individual techniques on our participants were inconclusive. A more thorough investigation is thus needed into effective educational methods for correcting calorie estimations on the Web.
Helping patients reach optimal health entails a holistic approach of complex interventions, including clinical decision support systems, patient decision aids, and self-management tools. Understanding the human factors of technological interventions in real-world settings is at the core of HCI research; however, it requires a considerable amount of time to run experimental procedures, especially with patients with mental disorders. We conducted a two-week roleplay simulation comprising observations and semi-structured interviews with eight healthcare professionals who participated in the simulated use of a health optimization system. The study revealed the SWING model of enabling interventions towards optimal health: i) Sharing feelings, ii) Weaving of information, iii) Improving awareness, iv) Nurturing trust, and v) Giving support. This model establishes a common path from research to practice for researchers and practitioners in eHealth and HCI.
The use of Clinical Practice Guidelines (CPGs) is known to enable better care outcomes by promoting a consistent way of treating patients. This paper describes a user-centered design approach, involving nurses, to develop a prototype expert system for modelling CPGs for pressure ulcer management. The system was developed using Visirule, a software tool that uses a graphical approach to modelling knowledge. Five staff nurses evaluated the system, and we compared nurses' time and accuracy in assessing a wound using CPGs accessed via the intranet of an NHS Trust versus the expert system. A post-task qualitative evaluation revealed that nurses found the system usable and systematically designed, that it increased access to CPGs by reducing the time and effort required by other usual methods of access, that it provided opportunities for learning due to its interactive nature, and that its recommendations were more actionable than those provided by the usual static CPG documents.
Technology companies are increasingly acknowledging the need to make their products usable for diverse users, including people with disabilities and aging populations; as a result, educators need to consider how to include accessibility-related topics in college-level technology-based courses. In this paper, we present a study of the efficacy of short (10-minute) documentaries, created by student filmmakers, that portrayed three people with different disabilities. We evaluated the films with undergraduate and graduate students who were enrolled in technology-related courses to explore the films' abilities to raise awareness of concerns related to accessibility. We found that the films were effective at changing some incorrect assumptions about designing for diverse users and at increasing recognition of the importance of designing for diversity.
Medical images contain important information for diagnosis and preoperative planning in modern medicine, yet interacting with these images still happens mostly with a mouse, abstract gestures, or handles. In a focus group with five surgeons, we evaluated the possibilities of 3D-printed organ models for interaction in VR for the use case of surgery planning. The surgeons rated the approach as highly useful and highlighted the advantage of more easily grasping spatial relations, which would greatly improve the planning phase of surgery.
This paper introduces the concept of wellbeing-as-interaction. Instead of designing and evaluating technologies that locate wellbeing in the individual, this paper presents early-stage work on designing technologies for people to collaboratively express, interpret, discuss and enact wellbeing. To explore this concept, we examined the wellbeing of six pairs of university students through a 7-day deployment of a technology probe 'MoodCloud'. MoodCloud consisted of a mobile app and an ambient display to share wellbeing updates through colour. We observed three patterns of wellbeing interactions: updates, follow-ups, and message chains. Wellbeing interactions benefitted from the ambiguity of colour and a clearly defined target audience, but students also communicated through other channels to make sense of updates and to enact support. The concept of wellbeing-as-interaction seeks to offer an analytic lens for the CHI community as well as inspiration for novel wellbeing technologies that emphasise meaningful interactions with friends.
The human body is a complex system itself composed of complex systems; its state influences all aspects of our health and wellbeing, from our cognitive to our physical performance. In HCI, most of us are not well versed in how this complex system works. This paper proposes in5, a model to help make that physiology accessible for design. The model has two parts: (1) the MEECS dichotomies: five fundamental-to-life, volitional processes (move, eat, engage, cogitate, sleep) that are affected by parameters of quality, quantity, time, and context; and (2) tuning: an approach to adjusting the parameters of these dichotomies toward "dialing in" health, wellbeing, and performance. The paper also offers examples of how this model can be explored for design research.
This paper presents a step-by-step process developed primarily to extract design prerequisites for personalized mobile technologies that assist anxiety self-regulation. This process, which we present as a preliminary framework, was developed, refined, and tested based on a multidisciplinary literature review and an exploratory study conducted with mental health professionals who treat anxiety disorders. The step-by-step nature of this framework draws from multiple disciplinary and stakeholder perspectives, integrates knowledge about efficacious psychological interventions, considers individual differences and the specific challenges faced by patients, and addresses contextual needs. It also includes incremental and iterative refinements based on multidisciplinary sources to foster more evidence-based interface designs. Once mature, this framework can potentially be applied to designing efficacious technologies for a range of mental health conditions. The expected future contributions and limitations of the framework are also discussed.
Autonomous virtual agents (VAs) are increasingly used commercially in critical information spaces such as healthcare. Existing VA research has focused on microscale interaction patterns such as usability and artificial intelligence. However, the macroscale patterns of users' information practices and their relationship with the design and adoption of VAs have been largely understudied, especially when it comes to older adults (OAs), who stand to benefit greatly from VAs. We conducted a preliminary investigation to understand the role design elements, such as anthropomorphic aspects of VAs, play in OAs' perception of VAs and in OAs' preferences for VAs' participation within their health information practices. Some unexpected findings indicate that the fidelity of anthropomorphic features influences perception in ways that depend on the context of the information tasks. This suggests that research on improving the design and increasing the adoption of VAs should factor in the interplay between the fidelity of VA representation and the information context.
Screen readers have become a core assistive technology for blind web users to browse web pages. Although screen readers can convey the textual information and structural properties of web pages, they cannot deliver an overall impression of a page. This limitation hinders blind web users from obtaining an overview of a website, which sighted people can do in a short time. We therefore present SoundGlance, a novel application that briefly delivers an auditory summary of web pages. SoundGlance supports screen reader users by converting important glanceable cues of the pages into sound. The feasibility of the prototype was examined in a pilot study with fourteen blind people, and several practical insights were derived from the experiment.
This paper focuses on the larger question of when to administer in-car just-in-time stress management interventions. We look at the influence of driving-related stress to find the right time to provide personalized and contextually aware interventions. We address this challenge with a data-driven approach that takes into consideration driving-induced stress, driver (cognitive) availability, and indicators of risky driving behavior such as lane departures and high steering reversal rates. We ran a study with sixteen commuters during morning and evening traffic while applying in-situ experience sampling. During 45 minutes of driving through various scenarios, including city, highway, and neighborhood roads, we captured physiological measurements, video of participants and surroundings, and CAN-bus driving data. An initial review of the data shows that stress levels varied greatly, between 2 and 9 on a 0 (min) to 10 (max) scale. We conclude with a discussion of how to prepare the data to train supervised algorithms that find the right time to deliver stress interventions while driving.
This is the first on-road study testing the efficacy and safety of guided slow-breathing interventions in a car. This paper presents design and experimental implications when evolving from prior simulator scenarios to on-road scenarios. We ran a controlled study (N=40) testing a haptic guided-breathing system on a closed circuit under stressed and non-stressed driving conditions. Preliminary results validate prior findings about the efficacy and safety of the intervention. Initial qualitative analysis shows an overall positive acceptance and no safety-critical incidents (e.g., hard brakes or severe lane departures); all participants graded the intervention as safe for real-traffic applications. Going forward, additional analysis is needed before exposing commuters to the intervention on public roads.
Even when in a static position, data acquired from 6-Degrees-of-Freedom (DoF) trackers is affected by noise, typically called jitter. In this study, we analyzed the effects of 3D rotational jitter on Virtual Reality (VR) controllers in a 3D Fitts' law experiment, which explored how such jitter affects user performance. Eight subjects performed a Fitts' law task with or without additional jitter on the cursor. Results show that while the error rate significantly increased above ±0.5° jitter and subjects' effective throughput started to decrease significantly above ±1° jitter, there was no significant effect on movement time. Further, the Fitts' law movement time model was affected when ±2° jitter was applied to the tracker. According to these results, ±0.5° jitter on the controller does not significantly affect user performance for the tasks explored here. The results of our study can guide the design of 3D controllers and tracking systems for 3D user interfaces.
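For readers unfamiliar with the model referenced here: experiments of this kind conventionally use the Shannon formulation of Fitts' law and the ISO 9241-9 effective throughput measure. The abstract does not state which exact variant the authors fit, so the equations below are the standard forms rather than the paper's own notation.

```latex
% Fitts' law (Shannon formulation) and effective throughput;
% standard forms, not necessarily the paper's exact variant.
MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{D}{W} + 1\right)
TP = \frac{ID_e}{MT}, \qquad ID_e = \log_2\!\left(\frac{D}{W_e} + 1\right),
\quad W_e = 4.133 \cdot SD_x
```

Here MT is movement time, D the target distance, W the target width, and SD_x the standard deviation of selection endpoints. Added rotational jitter scatters endpoints and inflates W_e, which is consistent with the reported pattern of throughput dropping while raw movement time stays flat.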
We propose a DIY process for producing customized paper keyboards with kinesthetic feedback that interact with touchscreens. The process is built on two techniques: kirigami and printable double-layered circuits. Our goal is to improve the extensibility and usability of various interfaces made with 2D paper substrates. First, our kirigami structures provide kinesthetic sensations whose z-directional key stroke is comparable to that of traditional keyboards; to design keys with appropriate stroke and reaction force, we adopted the Rotational Erection System (RES). Second, printable double-layered circuits allow users to easily adjust input layouts. This easy-to-customize keyboard can be especially useful for those who have specific requirements for input devices.
Touchless gesture is a common input type when interacting with large displays or virtual and augmented reality applications. In touchless input, users may alternate between hands or use bimanual gestures, but touchless performance with the nondominant hand is little explored, even though cognitive science and neuroscience studies show that cerebral hemispheric specialization causes performance differences between dominant and nondominant hands in lateralized individuals. Drawing on theories that account for between-hand differences in rapid aimed movements, we characterize motor asymmetry in touchless input. Results from a controlled study (n = 20, right-handed) show that freehand touchless input produces significantly smaller between-hand performance differences than a mouse in pointing and dragging. We briefly discuss the HCI implications of motor asymmetry in this input type.
Tangibles on interactive tabletops are tracked by the surface they are placed on and have been shown to benefit the interaction. However, they are tied to the surface. When picked up, they are no longer recognized and lose any connection to virtual objects shown by the table. We introduce the interaction concept of Off-Surface Tangibles that are tracked by the surface but continue to support meaningful interactions when lifted off the surface. We present a design space for Off-Surface Tangibles, and design considerations when employing them. We populate the design space with prior work and introduce possible interaction designs for further research.
Ambient notifications are an essential element in supporting users in their daily activities. Designing effective and aesthetic notifications that balance the alert level while maintaining an unobtrusive dialog requires them to be seamlessly integrated into the user's environment. In an attempt to employ the living environment around us, we designed Overgrown, an actuated robotic structure capable of supporting a plant that grows over it. As a plant endoskeleton, Overgrown aims to engage human empathy towards living creatures to increase the effectiveness of ambient notifications while ensuring better integration with the environment. In a focus group, Overgrown was described as having personality, showed potential as a user's ambient avatar, and was deemed suited for social experiments.
We present a data-driven animated character capable of blocking attacks from a user in a VR sword fighting experience. The system uses motion capture data and a machine learning model to recreate a believable blocking behaviour, suggesting the viability of full-featured data-driven interactive characters in VR. Our work is part of a larger vision of VR interaction as a two-level problem, separating spatial details from design concerns. In this context, here we provide the designers of the experience with a character from which a "blocking" behaviour can be requested without further spatial specifications. This puts down a first building block in the construction of a controllable data-driven VR sword fighter capable of multiple behaviours.
We present JeL, a bio-responsive immersive installation for interpersonal synchronization through breathing. In JeL, two users are immersed in a virtual underwater environment where each user's breathing controls the movement of a jellyfish. As users synchronize their breathing, a virtual glass-sponge-like structure starts to grow, representing the users' physiological synchrony. JeL explores a novel form of interpersonal interaction in virtual reality that aims to connect users to their physiological state through biofeedback, to each other through physiological synchronization, and to nature through connecting with a jellyfish and collaboratively growing a glass-sponge-inspired sculpture. This form of immersive, bio-responsive interaction could ultimately be used to encourage self-awareness, a feeling of connectedness, and consequently pro-social and pro-environmental attitudes. Here, we describe the motivation, inspiration, design elements, and future work involved in bringing this system to fruition.
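The abstract does not specify how physiological synchrony is quantified or how it drives growth; the sketch below shows one plausible reading, windowed correlation of the two breathing signals gating the sponge's growth. The function names and the growth rule are illustrative assumptions, not JeL's implementation.

```python
import numpy as np

def breathing_synchrony(breath_a, breath_b):
    """Pearson correlation between two equal-length breathing-signal windows."""
    a = breath_a - breath_a.mean()
    b = breath_b - breath_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def update_growth(growth, breath_a, breath_b, rate=0.01):
    """Advance the virtual sponge: it grows only while the pair is in sync."""
    r = breathing_synchrony(breath_a, breath_b)
    return growth + rate * max(r, 0.0)  # anti-phase breathing adds no growth
```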
Olfactory notifications have been shown to have a positive impact on drivers, which has motivated the use of scents to convey driving-relevant information. Research has proposed scents such as lemon, peppermint, lavender, and rose for in-car notifications. However, there is no framework for identifying which scent is the most suitable for each application scenario. In this paper, we propose an approach for validating a matching between scents and driving-relevant notifications. We suggest a study in which the olfactory modality is compared with a puff of clean air and with visual, auditory, and tactile stimuli while performing the same driving task. For the data analysis, we suggest recording lane deviation, speed, and the time required to recover from an error, as well as perceived liking and comfort ratings. Our approach aims to help automotive UI designers make better decisions about choosing the most suitable scent, as well as possible alternative modalities.
Augmented visual-audio feedback supports rhythmic motor performance in both sports training and sensorimotor synchronization practice. In home-based rehabilitation for minor stroke patients, rhythm-based training of fine motor skills not only helps recover sophisticated motion ability but also increases patients' confidence and supports mental health recovery. Auditory information has been shown to have advantages for improving rhythmic motion performance, but it can be masked by environmental noise and may be intrusive to non-stakeholders. Under these circumstances, patients may be reluctant to practice actively due to difficulty hearing the auditory stimuli or concern about disturbing others. To address this issue, we explored an inconspicuous way of providing vibrotactile feedback through a wristband. To investigate the general feasibility of a sensorimotor synchronization task, we conducted a preliminary user study with 16 healthy participants, comparing visual-tactile feedback with visual-audio, visual-audio-tactile, and visual-only feedback. Results showed that visual-tactile feedback facilitates rhythmic motion accuracy to a degree equivalent to visual-audio feedback. In addition, visual-tactile feedback supports smoother movements than visual-audio feedback. In the future, after refinement with stroke patients, the system could support customization for different levels of sensorimotor synchronization training.
A long-standing problem in the design of auditory displays is how to design audio feedback that is aesthetically appealing and comfortable to listen to. Many systems focus solely on function and do not consider these other factors, which can lead to annoyance for users or, more extremely, abandonment of the system entirely. Instead of communicating information through sound that is built into the system, an alternative method is to modulate acoustic parameters of a user's own music to encode information. This method, music modulation, has successfully been used in systems that convey navigational data while the user walks and listens to music. This paper discusses the potential of applying this method to other contexts and types of data. We present a number of acoustic parameters of music that could be used to encode information and discuss factors affecting the design of sonification systems employing them, with the goal of prompting discussion and further research into this potentially effective method of conveying information through sound.
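As an illustration of music modulation, the sketch below maps hypothetical navigation data onto two acoustic parameters of the user's own music; the specific parameters, mapping, and ranges are our assumptions, not those of any cited system.

```python
def modulation_params(bearing_deg, distance_m, max_distance_m=100.0):
    """Encode a navigation target in the user's music: bearing drives
    stereo pan, distance scales playback tempo (illustrative mapping)."""
    pan = max(-1.0, min(1.0, bearing_deg / 90.0))  # -1 hard left .. +1 hard right
    # Up to +20% tempo when the target is far, normal speed on arrival.
    tempo = 1.0 + 0.2 * min(distance_m, max_distance_m) / max_distance_m
    return {"pan": pan, "tempo_factor": tempo}

# Example: target 30 degrees to the right, 50 m away.
print(modulation_params(30.0, 50.0))  # {'pan': 0.333..., 'tempo_factor': 1.1}
```

A real system would apply these parameters through a pitch-preserving, time-stretching audio player, so the music itself, rather than an added sound layer, carries the information.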
Communities in Indian Country experience severe behavioral health inequities [11, 12]. Based on recent research investigating scalable behavioral health interventions and therapeutic best practices for Native American (NA) communities, we propose ARORA, a social and emotional learning intervention delivered over a networked mobile game that uses geosocial gaming mechanisms enhanced with augmented reality technology. Focusing on the Navajo community, we take a community-based participatory research approach that includes NA psychologists, community health workers, and educators as co-designers of the intervention activities and gaming mechanisms. Critical questions involve the operation of the application across low-infrastructure landscapes as well as the scalability of design practices to be inclusive of the many diverse NA cultural communities in Indian Country.
Interest in programming education for children is growing. This research explores the possibilities of utilizing a voice user interface (VUI) in children's programming education. We designed an interactive educational programming game called TurtleTalk, which converts children's various utterances into code using a neural network and displays the results on a screen (Figure 1). Through the VUI, children move the turtle, the voice agent of the game, to a target location and learn the basic programming concepts of "sequencing" and "iteration." We conducted a preliminary user study in which eight children played the game and took part in a post-hoc interview. The results showed that voice interaction with TurtleTalk led children to be more immersed in the game and to understand the elements of programming with ease and confidence.
In this study, we present OrigamiSpeaker, a speaker that can be handcrafted with silver nanoparticle ink on a paper substrate. OrigamiSpeaker is based on the electrostatic loudspeaker technique: the audio signal is amplified to a high voltage and applied to an electrode, which vibrates to generate sound. By using origami techniques, users are able to design speakers in various shapes.
Creating tactile patterns for a grid or a 3D arrangement of a large number of actuators presents a challenge, as the design space is huge. This paper explores two different possibilities for implementing an easy-to-use interface for tactile pattern design on a large number of actuators around the head. Two user studies were conducted to iteratively improve the prototype to fit user needs.
Navigating a space populated by fenceless industrial robots while carrying out other tasks can be stressful, as the worker is unsure about when she is invading the area of influence of a robot, which is a hazard zone. Such areas are difficult to estimate, and standing in one may have consequences for worker safety and for the productivity of the robot. We investigate the use of multimodal (auditory and/or visual) head-mounted AR displays to warn about entering hazard zones while performing an independent navigation task. As a first step in this research, we report a design-research study (including a user study) conducted to obtain a visual and an auditory AR display subjectively judged to approach equivalence. The goal is that these designs can serve as the basis for a future modality comparison study.
This paper presents the first results of a user study in which people with visual impairments (PVI) explored a virtual environment (VE) by walking on a virtual reality (VR) treadmill. As recently suggested, we have now acquired first results from our feasibility study investigating this walk-in-place interaction, which represents a new, more intuitive way of, for example, virtually exploring unknown spaces in advance. Our prototype consists of off-the-shelf VR components (i.e., treadmill, headphones, glasses, and controller) providing a simplified white-cane simulation and was tested by six visually impaired subjects. Our results indicate that this interaction is still difficult, but promising, and an important step towards making VR more usable for PVI. To make this research field known to a wider audience in the CHI community, we share our intermediate results and suggestions for improvements, some of which we are already working on.
We present FabAuth, a method for identifying 3D-printed objects that utilizes differences in the objects' resonant properties. We change the internal structure of each object during the 3D printing process to assign it a unique resonant property, even when multiple objects have the same appearance. To identify an object, the method measures resonant property differences using vibration passed through the object; it can be applied even to low-filled 3D-printed objects, as long as an acoustic wave can travel through the object from one sensor to another. To validate the method's feasibility, we conducted a preliminary experiment to confirm whether it can be applied to low-filled 3D-printed objects and found that its average classification accuracy reached 92.2%.
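The abstract describes the sensing principle but not the classification pipeline; a minimal sketch of one plausible pipeline, spectral features of the received vibration fed to an off-the-shelf classifier, follows. The feature design and the SVM choice are our assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def spectral_features(received, n_bins=64):
    """Pool the log-magnitude spectrum of the vibration recorded after it
    has travelled through the printed object into coarse frequency bins;
    objects with different internal structures resonate differently."""
    windowed = received * np.hanning(len(received))
    log_mag = np.log1p(np.abs(np.fft.rfft(windowed)))
    return np.array([b.mean() for b in np.array_split(log_mag, n_bins)])

# X: one feature row per recorded sweep, y: the object each sweep came from.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(np.stack([spectral_features(r) for r in recordings]), object_labels)
```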
Interacting with a smartwatch is difficult owing to its small touchscreen. A general strategy for overcoming the limitations of the small screen is to increase the input vocabulary, and a popular approach to doing so is to distinguish fingers and assign different functions to them. As a finger identification method for a smartwatch, we propose FingMag, a machine-learning-based method that identifies the finger on the screen with the help of a ring. For this identification, the finger's touch position and the magnetic field from a magnet embedded in the ring are used. In an offline evaluation using data collected from 12 participants, we show that FingMag can identify the finger with an accuracy of 96.21% in stationary geomagnetic conditions.
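The abstract names the two inputs (touch position and the ring's magnetic field) but not the model; the sketch below shows how such a sample might be assembled. The baseline subtraction and the random-forest choice are our assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fingmag_sample(touch_x, touch_y, mag_xyz, geomagnetic_baseline):
    """One sample: screen position plus the magnetic-field deviation caused
    by the ring magnet (stationary geomagnetic baseline subtracted)."""
    delta = np.asarray(mag_xyz, float) - np.asarray(geomagnetic_baseline, float)
    return np.array([touch_x, touch_y, *delta])

# Illustrative model choice; the abstract does not name the classifier.
clf = RandomForestClassifier(n_estimators=100)
# clf.fit(np.stack(samples), finger_labels)  # labels: which finger touched
```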
Two-Thumb Touchpad Typing (4T) using hand-held controllers is a common text entry technique in Virtual Reality (VR). However, its performance is far below that of two-thumb typing on a smartphone. We explored the possibility of improving its performance, focusing on two factors: visual feedback for hovering thumbs and the grip stability of the controllers. We examined the effects of these two factors on 4T performance in user experiments, whose results show that hover feedback had a significant main effect on 4T performance while grip stability did not. We then investigated the achievable performance of the final 4T design in a longitudinal study; its results show that users could achieve a typing speed of over 30 words per minute after two hours of practice.
Because Chinese characters are structured differently from Western scripts, learners must invest extra effort to grasp correct formation structure and pronunciation when studying them. Various Chinese character learning systems have been proposed to help learners recognize characters, but they ignore handwriting and cultural background. We present a learning system that helps learners study Chinese characters more effectively. To increase learning effectiveness, we combine pronunciation and character writing through the integrated use of a computer, projector, and camera. The system helps learners understand the meaning, cultural background, and formation structure of a character through morphological and phonetic animation projection and handwriting instruction. Compared with screen-writing systems, it provides a more authentic learning experience by simulating the process of writing on real paper. We anticipate our system to be a starting point for exploring instruction in the formation structure, pronunciation, and handwriting of Chinese characters, for future use in the classroom or at home.
This paper introduces a biosensing prototype that transforms emotions into music, helping people recognize and understand their own feelings and actions and those of other people. This study presents a series of three experiments with 20 participants in four emotional states: happiness, sadness, anger, and a neutral state. Their real-time emotions were captured through a wearable probe, Audiolize Emotion, which detects users' EEG signals and composes the data into audio that is played back to the users themselves and to others. Finally, we conducted observations and interviews with participants to explore factors linked to social interaction, users' perceptions of the music, and reflections on the use of audio for self-expression and communication. We found that the Audiolize Emotion prototype triggers communication and self-expression in two ways: by building curiosity and by supporting communication through an extended form of expression. Based on the results, we provide future directions for further exploring emotion and communication, and we plan to apply this knowledge to fields such as VR games and accessibility.
In this paper, we introduce a vibrotactile wristband for warning and guiding the driver based on road conditions in automated vehicles. The wristband receives commands from the host computer in the vehicle via Bluetooth and generates the corresponding vibration patterns with six vibration motors. Three vibration patterns are designed to guide the driver in the right direction in the manual driving state, and eight vibration patterns are designed to warn the driver about problems that the driving support system cannot solve in the automatic driving state. Based on tactile illusions, we convert graphical markers into vibration patterns to reduce the driver's memory burden and improve the accuracy of recognizing the patterns. To evaluate the performance of the vibrotactile wristband, we developed a virtual driving environment in which subjects can experience the vibration patterns while driving the virtual vehicle.
Internet voting has promising benefits, such as cost reduction, but it also introduces drawbacks: the computer used for voting learns the voter's choice. Code voting aims to protect the voter's choice by introducing voting codes that are listed on paper: to cast a vote, the voter enters the voting code belonging to their choice. This additional step influences usability. We investigate three modalities for entering voting codes: manual entry, QR codes, and tangibles. The results show that QR codes offer the best usability, while tangibles are perceived as the most novel and fun.
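To make the indirection concrete, here is a minimal sketch of the code-voting idea under the assumption of a per-voter code sheet; the names and data structures are illustrative, not the studied system.

```python
import secrets

CHOICES = ["Alice", "Bob", "Carol"]  # illustrative ballot

def make_code_sheet():
    """Per-voter paper sheet mapping each choice to a one-time random code."""
    return {choice: secrets.token_hex(4) for choice in CHOICES}

def cast_vote(code):
    """Runs on the voter's (untrusted) computer: it only ever sees the code."""
    return {"ballot_code": code}

def resolve(ballot, sheet):
    """Runs at the election authority, which holds the code sheet."""
    inverse = {code: choice for choice, code in sheet.items()}
    return inverse.get(ballot["ballot_code"])

sheet = make_code_sheet()
ballot = cast_vote(sheet["Bob"])   # the voter copies Bob's code from paper
assert resolve(ballot, sheet) == "Bob"
```

Manual entry, QR codes, and tangibles then differ only in how the code travels from the paper sheet into `cast_vote`.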
Immersive Virtual Reality (IVR) does not afford social cues for communication, such as sweaty palms indicating stress, as users can only see an avatar of their collaborator. Prior work has shown that such data is necessary for successful collaboration, which is why we propose to augment IVR communication by (1) capturing physiological signals in real time and (2) leveraging the unlimited virtual space to display them. We present the results of a focus group (N=7) and a preliminary study (N=32) that investigate how this data may be visualized in a playful interaction and the effects such visualizations have on collaborators' performance.
Byte.it explores the feasibility of using miniaturized, discreet hardware to sense teeth-clicking as hands-free input for wearable computing. Prior work has been able to identify teeth-clicking of different teeth groups. Byte.it expands on this work by exploring the use of a smaller and more discreetly positioned sensor suite (accelerometer and gyroscope) for detecting four different teeth-clicks for everyday human-computer interaction. Initial results show that an unobtrusive position on the lower mastoid and mandibular condyle can be used to classify teeth-clicking of four different teeth groups with an accuracy of 89%.
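The abstract reports the sensor suite and accuracy but not the detection logic; one plausible first stage, flagging click events as short bursts in the accelerometer magnitude, is sketched below. The threshold and refractory period are illustrative, not taken from Byte.it.

```python
import numpy as np

def detect_clicks(acc_magnitude, fs=100, z_thresh=2.5, refractory_s=0.15):
    """Return sample indices of candidate teeth-click events: moments where
    the accelerometer magnitude rises well above its baseline, with a
    refractory gap so one click is not counted twice."""
    z = (acc_magnitude - acc_magnitude.mean()) / acc_magnitude.std()
    gap = int(refractory_s * fs)
    events, last = [], -gap
    for i, v in enumerate(z):
        if v > z_thresh and i - last >= gap:
            events.append(i)
            last = i
    return events
```

A classifier over short accelerometer and gyroscope windows around each detected event would then distinguish the four teeth groups.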
Despite a growing body of research about the design and use of Smart Personal Assistants, existing work has mainly focused on their use as task support for individual users in rather simple problem scenarios. Less is known about their ability to improve collaboration among multiple users in more complex problem settings. In our study, we directly compare 21 groups who either use a Smart Personal Assistant tutor or a human tutor when solving a problem task. The results indicate that groups interacting with Smart Personal Assistant tutors show significantly higher task outcomes and higher degrees of collaboration quality compared to groups interacting with human tutors. The results are used to suggest areas for future research in the field of computer-supported collaboration.
Semi-autonomous vehicles are gradually appearing on our roads, and have already been involved in several high-profile accidents. These accidents usually occurred because the driver did not intervene in time when the automated system failed. An important issue for the design of future semi-autonomous vehicles is identifying effective methods for alerting drivers to critical events that require their intervention. To investigate this, we report the results of a lab-based simulator study in which participants had to respond to driving events while also playing an immersive mobile game on a phone. Results show that a more assertive voice alerting the driver to driving events resulted in faster reaction times and was perceived as more urgent than a less assertive voice, regardless of how immersed the driver was in the mobile game. These results suggest that the designers of future semi-autonomous vehicles should use assertive voice commands to alert drivers to critical events that require their intervention.
We present Bubble, a pneumatically actuated wearable device that enables people with hand disabilities to use their own hands to grasp objects without fully bending their fingers. Bubble offers a novel approach to grasping, where slim, ultra-lightweight silicone actuators are attached to the fingers. When the user wishes to grasp an object, the silicone units inflate pneumatically to fill the available space around the object. The inflatable units are interchangeable, can be independently inflated, and can be positioned anywhere on the fingers in any orientation, thereby enabling a wide variety of grasping gestures including the palmar grasp, pinch, etc. In this paper, we describe the implementation of our current prototype, the fabrication process of the soft inflatable units, as well as our preliminary study to evaluate our system's grasping capability.
Experiencing a sense of touching displayed objects is a common goal of researchers and users. In this paper, a new texture rendering method for electrostatic tactile displays is proposed, in which the lateral force on the moving finger is calculated from recorded data, such as acceleration signals and friction properties of actual materials, and generated by electrostatic attraction force. The electrostatic attraction force is adjusted according to the exploration speed of the user's finger. User studies of adjective and similarity ratings on the roughness and stickiness of virtual materials show that the sense of touching the rendered materials is similar to that of touching the real materials, indicating that the proposed texture rendering method can be applied to display tactile information on an electrostatic tactile display.
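The abstract gives the rendering principle (lateral force generated via electrostatic attraction, adjusted to finger speed) but no equations. The sketch below is a minimal reading of that principle; the quadratic force law, the constant k, and the nearest-entry speed lookup are our assumptions, not the paper's stated method.

```python
import numpy as np

def drive_voltage(target_lateral_force, mu, k=1e-7):
    """Voltage for the desired voltage-controlled part of the lateral
    friction force, assuming F_lateral = mu * F_e with electrostatic
    attraction F_e = k * V**2; k is a device-specific constant and the
    default here is only illustrative."""
    f_e = target_lateral_force / mu
    return float(np.sqrt(f_e / k))

def friction_at_speed(friction_by_speed, finger_speed):
    """Nearest-entry lookup in a friction table recorded from the real
    material at several exploring speeds; the paper's exact
    speed-adjustment rule is not stated."""
    speeds = np.array(sorted(friction_by_speed))
    nearest = speeds[np.argmin(np.abs(speeds - finger_speed))]
    return friction_by_speed[float(nearest)]
```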
In this work, we evaluate the potential of wearable non-contact (infrared) thermal sensors for measuring mental workload through a user study (N=12). Our results indicate the possibility of estimating mental workload from the temperature changes detected by the prototype as participants perform two task variants with increasing difficulty levels. While the sensor accuracy and the design of the prototype can be further improved, the prototype shows the potential of building AR-based systems with cognitive-aid technology that adapt ubiquitous task assistance to changes in mental workload demands. As a next step, we are integrating our prototype into an existing AR headset (i.e., the Microsoft HoloLens).
Different environmental cues can influence our spatial behaviour when we explore unfamiliar spaces, and research shows that the presence of other people affects our navigation decisions. To investigate the use of this environmental cue as a navigation aid in novel environments, we first explored visualisations that represent the historical presence of people. We carried out an exploratory study (n=12) to examine whether and how people understand and use floor visualisations to make navigational choices. Results suggest that floor visualisations influenced participants' navigation decisions, and that implicit visualisations were difficult to interpret compared to explicit ones. Thematic analysis of participants' interpretations revealed a contextual interpretation of explicit visualisations and a non-contextual interpretation of implicit visualisations; it also revealed that spatial behaviour is influenced by several factors, including self-centredness, environmental features, and the presence of others. These insights will inform the design of history-enriched floor interfaces that direct people in the built environment.
Creating content for distribution via social media platforms is not trivial for social media editors: creating content that is both serious and engaging is challenging, and guidelines and rules differ, or are missing, across and between platforms. For creators of serious content, such as news organizations, advertisers, or educational institutions, engagement has a deeper meaning beyond likes and shares: it is aimed at the audience actually processing the underlying content associated with a social media post. In this research, we report findings from a group study that aimed to understand the process and challenges of creating engaging content across three social media platforms in a major news organization. The findings indicate that creating engaging content is effort- and time-consuming, and they highlight the need to support this process across multiple social media platforms. Our longer-term goal is to develop a system design that supports social media editors in selecting engaging passages from news articles and choosing the platforms on which to publish the content.
This paper presents a new user authentication paradigm based on a flexible user authentication method, FlexPass. FlexPass relies on a single, user-selected secret that can be reflected in both textual and graphical authentication secrets. Such an approach facilitates adaptability in today's ubiquitous interaction contexts within the Internet of Things (IoT), in which end-users authenticate multiple times per day through a variety of device types. We present an initial evaluation of the new authentication method based on an in-lab experiment with 32 participants. Analysis of the results reveals that the FlexPass paradigm is memorable and that users like the adaptable perspective of the new approach. The findings are expected to scaffold the design of more user-centric knowledge-based authentication mechanisms for today's ubiquitous computing realms.
Online freelancer marketplaces offer workers the flexibility and control they desire. However, workers also struggle with the uncertainty that accompanies these benefits. In traditional brick-and-mortar workplaces, workers who experience uncertainty during specific phases of their assimilation into a new role or organization engage in information-seeking behaviors. Understanding these phases of heightened uncertainty helps organizations better cater to workers' informational needs, e.g., through mentorship programs. While understanding the uncertainty that online workers experience as they assimilate into their careers is critical to understanding their needs, such an understanding is currently severely limited. We therefore conducted semi-structured interviews with 29 online freelancers to investigate critical events that contribute to uncertainty early in their online careers. We situate these critical events within the context of organizational assimilation and describe how participants employ diverse information-seeking tactics in response.
By storing data about citizens for the purposes of service provision, private and public organizations have disempowered the people they serve, shifting the balance of power toward themselves as data holders. Through three co-production engagements involving families receiving "early help" support from their local authority and the support workers involved in supplying this care, we identified existing data usage practices, explored the impact of those practices upon the supported families, and co-designed new and improved approaches - both technological and practice-based - that are perceived to offer families fairer treatment, greater influence, and better decision-making. Our findings show that by applying Human-Data Interaction and giving supported families direct access to see and manipulate their own data, both during and outside of the support engagement, the locus of decision-making could be shifted towards the data subject.
This work presents a model for the development of affective assistants based on the pillars of user states and traits. Traits are defined as long-term qualities such as personality, personal experiences, preferences, and demographics, while the user state comprises cognitive load, emotional states, and physiological parameters. We discuss useful input values and the developments necessary for advancing affective assistants, using an affective in-car voice assistant as an example. With our work, we help to shape the vision of our community regarding affective interfaces, not just in the automotive domain but also in other application areas.
In recent studies of gaze tracking systems using 3D model-based methods, the optical axis of the eye is estimated without user calibration. The remaining problem for achieving implicit user calibration is estimating the difference between the optical axis and the visual axis of the eye (angle kappa). In this paper, we propose an implicit user calibration method that uses face detection around the optical axis of the eye. We assume that the peak of the average of face region images indicates the visual axis of the eye in the eye coordinate system. The angle kappa is then estimated as the difference between the optical axis of the eye and this peak. We developed a prototype system with two cameras and two IR-LEDs. The experimental results showed that the proposed method estimates the angle kappa more accurately than a method that uses Itti's saliency map instead of face detection.
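A minimal sketch of the final estimation step described above, assuming the optical axis and the peak direction of the averaged face-region images are already available as unit vectors in the eye coordinate system (the example vectors below are hypothetical):

```python
import numpy as np

def angle_kappa(optical_axis, visual_axis):
    """Angular difference (degrees) between two 3D gaze axes.

    Both arguments are unit vectors in the eye coordinate system;
    `visual_axis` would come from the peak of the averaged face-region
    images, `optical_axis` from the 3D eye model.
    """
    cos_theta = np.clip(np.dot(optical_axis, visual_axis), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Example: a typical kappa of about 5 degrees horizontally.
optical = np.array([0.0, 0.0, 1.0])
visual = np.array([np.sin(np.radians(5)), 0.0, np.cos(np.radians(5))])
print(angle_kappa(optical, visual))  # ~5.0
```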
Over the past decade, educational software use within classrooms has increased, as have the demands on K-12 teachers that extend beyond in-class activities. Yet we still do not have a deep understanding of current teacher behaviors outside the classroom. Our paper presents insights on how to better design for technology use in this space by reporting on key themes such as communication, privacy, and student technology at home. These findings translate into design implications: increasing transparency with student data, designing first for the technology students have access to at home (e.g., mobile), and designing for teachers' need to set personal boundaries within communication tools.
When we try to acquire a moving target, such as hitting a virtual tennis ball in a computer game, we must hit the target in the instant it passes through our hitting range. In other words, we have to acquire the target in the spatial and temporal domains simultaneously. We call this type of task spatiotemporal moving target selection, which we find is common yet understudied in HCI. This paper presents a tentative model for predicting error rates in spatiotemporal moving target selection. Our model integrates two recent models, the Ternary-Gaussian model and the Temporal Pointing model, to explain the influence of spatial and temporal constraints on pointing errors. In a 12-subject pointing experiment with a computer mouse, our model fits well (R^2 = 0.904). We discuss future research directions on this topic and how it could help the design of dynamic user interfaces.
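The integration idea can be illustrated with a toy calculation. The sketch below is not the paper's model; it merely assumes Gaussian endpoint and timing distributions and treats spatial and temporal successes as independent, multiplying the probabilities of landing within the target width and within the temporal window:

```python
from math import erf, sqrt

def gauss_window_prob(width, mu, sigma):
    """P(|X - center| <= width/2) for X ~ N(mu, sigma^2),
    with mu measured relative to the window center."""
    def cdf(x):
        return 0.5 * (1 + erf(x / (sigma * sqrt(2))))
    return cdf(width / 2 - mu) - cdf(-width / 2 - mu)

def error_rate(w_spatial, sigma_spatial, w_temporal, sigma_temporal,
               bias_s=0.0, bias_t=0.0):
    """Error rate if spatial and temporal successes are independent."""
    p_hit = (gauss_window_prob(w_spatial, bias_s, sigma_spatial) *
             gauss_window_prob(w_temporal, bias_t, sigma_temporal))
    return 1 - p_hit

# A 40 px target that stays inside the hit zone for 150 ms:
print(error_rate(40, 12, 0.150, 0.05))
```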
In some cases, crossing-based target selection can yield lower error rates and higher interaction speed. Most research on target selection has focused on analyzing interaction outcomes. Yet trajectories play a much more important role in crossing-based target selection than in other interaction techniques, and a good trajectory model could help designers predict interaction outcomes during the selection process rather than only at its end. We propose a trajectory prediction model for crossing-based target selection tasks based on dynamic model theory. Simulation results show that our model performs well in predicting trajectories, endpoints, and hitting times for target-selection motion, with average errors of 17.28% for trajectories, 18.17 pixels for endpoints, and 11.50% for hitting times.
For the past few years, voice assistants (VAs) have been widely used around the world. Current voice assistants provide a gendered voice to sound more natural and life-like, but most of them have a female voice as the default setting. Our study explored how gender stereotypes of women are reflected in voice assistants with female voices by analyzing five South Korean VAs. We collected 1,602 responses from the VAs and conducted a thematic analysis to examine patterns of gender stereotyping. We identified three distinct characteristics: (1) bodily display, (2) subordinate attitude, and (3) sexualization. We suggest that these stereotypical traits could create a power dynamic between users and female agents.
Mediated social touch (MST) provides physical contact over a distance for geographically separated individuals. Despite advances in actuator technologies, it remains difficult to recreate the feel and sensation of natural human touch. Combining touch with morphologically congruent visual feedback may overcome limitations related to the low fidelity of current-day tactile displays. Being able to both feel and see the touch act being initiated on an input device could enhance the perceived realism of the touches. In two studies, we test the effects of such visual feedback on self-reported naturalness of touch, social presence, and emotional experiences.
We present the results of our study of people's responses to unsafe scenarios with personal safety apps. Several such apps have been developed, offering features such as a location-sharing panic button. However, there is little research into how people might respond in different personal safety situations, and how such apps might contribute to their response. We performed a lab study with 30 participants, using semi-structured interviews to gather responses to a set of three increasingly risky scenarios, both before and after the installation of a personal safety app. Participants stated that they would use mobile phones and personal safety apps most often to support "collective" responses, with calls to others for assistance. Further, while collective responses were often combined with "avoidance" or "protective" responses, when using a personal safety app, collective responses were less often combined with other reaction types. Overall, our results suggest some potential benefit from personal safety apps, though more study is required.
Developing affectively aware technologies is a growing industry. To build them effectively, it is important to understand the features involved in discriminating between emotions. While many technologies focus on facial expressions, studies have highlighted the influence of body expressions over other modalities for perceiving some emotions. Eye tracking studies have evaluated the combination of face and body to investigate the influence of each modality; however, few, if any, have investigated the perception of emotion from body expressions alone. This exploratory study aimed to evaluate the discriminative importance of dynamic body features for decoding emotion. Eye tracking was used to monitor participants' gaze behavior while they viewed clips of non-acted body movements and associated an emotion with each clip. Preliminary results indicate that the two regions attended to most often and longest were the torso and the arms. Further analysis is ongoing; however, initial results independently confirm prior studies conducted without eye tracking.
Public speaking is recognized as an important skill for success in learning and education. However, the mere thought of public speaking elicits anxiety in many people. This anxiety may manifest in a student's nonverbal behaviors and physiological responses, which can negatively affect both performance and evaluation. While public speaking training systems have employed a variety of speaker cues to automatically evaluate and score public speaking performance, many are built on data collected in lab settings, where it is difficult to elicit the same level of anxiety. We posit that students would benefit from a system that lets them reflect on and practice public speaking presentation skills. This preliminary study explores public speaking anxiety through the physiological responses and nonverbal behaviors of English language learners in situ, as a first step toward the design and development of a public speaking practice and reflection system.
Both overtrust in technology and drowsy driving are safety-critical issues. Monitoring a system is a tedious task, and overtrust in technology might also reduce drivers' vigilance, which in turn could compound the negative impact of both issues. The aim of this study was to investigate whether trust in automation affects drowsiness. Thirty participants in two age groups took a 45-minute ride in a level-2 vehicle on a real test track. Trust was assessed before and after the ride with a subjective trust scale. Drowsiness was captured during the experiment using the Karolinska Sleepiness Scale. Results show that even a short initial system exposure significantly increases trust in automated driving. Drivers who trust the automated vehicle more show stronger signs of drowsiness, which may negatively impact their monitoring behavior. Drowsiness detection is important for automated vehicles, and the behavior of drowsy drivers might help to infer trust unobtrusively.
Co-working and co-living companies are rising globally, and increasing participation in the gig economy has extended the range of users of community-based spaces (co-spaces) and given rise to different community models for supporting them. In this paper, we focus specifically on the needs of digital nomads in co-spaces who struggle to pursue their personal and professional freedom. In so doing, we raise awareness of existing tensions that currently hinder the social engagement of these individuals in co-space settings.
Writing is a fundamental task in our daily lives. Existing writing improvement tools mostly focus on low-level grammar error correction rather than enhancing users' writing styles at the cognitive level. In this work, we present a computational approach that gives learners a fast but effective learning experience with the help of automatic style transfer, visual stylometry analytics, machine teaching, and practice. Our system combines vividly visualized style features and principles with informative examples, which together shape and drive a personalized cognitive learning experience. We demonstrate the effectiveness of our system in a scenario of learning from William Shakespeare.
Soft robots have a set of unique traits, such as excelling at grasping fragile objects, which stem from the highly compliant materials used to produce them. However, very little research has so far focused on the interplay of the interaction partners in a collaboration between humans and soft robots. In this paper, we present the results of our investigation of the influence of two movement patterns on the willingness of passersby to assist a soft robot in completing a task.
Verbal abuse is a hostile form of communication intended to harm the other person. With a plethora of AI solutions around, the person being targeted may be a conversational agent. In this study, involving three verbal abuse types (Insult, Threat, Swearing) and three response styles (Avoidance, Empathy, Counterattacking), we examine whether a conversational agent's response style under varying abuse types influences the emotions found to mitigate people's aggressive behaviors. Sixty-four participants, each assigned to one of the abuse type conditions, interacted with the three conversational agents in turn and reported their feelings of guilt, anger, and shame after each session. Our results show that, regardless of the abuse type, the agent's response style has a significant effect on user emotions: participants felt less anger and more guilt with the empathetic agent than with the other two agents. Our findings have direct implications for the design of conversational agents.
Software developers use Stack Overflow on a daily basis to search for solutions to problems they encounter during bug fixing and feature enhancement. Prior work has mined Stack Overflow data, for example to predict unanswered questions or to study how and why people post. However, no work exists on how developers actually use, or more importantly read, the information presented to them on Stack Overflow. To better understand this behavior, we conducted an eye tracking study on how developers seek information on Stack Overflow while tasked with creating human-readable summaries of methods and classes in large Java projects. Eye gaze data was collected on both the source code elements and the Stack Overflow document elements at a fine token-level granularity using iTrace, our eye tracking infrastructure. We found that developers look at the text of posts more often than the title. Code snippets were the second most viewed element, while tags and votes were rarely looked at. When switching between Stack Overflow and the Eclipse Integrated Development Environment (IDE), developers often looked at method signatures and then switched to code and text elements on Stack Overflow. Such heuristics provide insight for automated code summarization tools as they decide what to weight more heavily while generating summaries.
Data science is an open-ended task in which exploratory programming is a common practice. Data workers often need faster and easier ways to explore alternative approaches to obtain insights from data, which frequently compromises code quality. To understand how well current IDEs support this exploratory workflow, we conducted an observational study with 19 data workers. In this paper, we present two significant findings from our analysis that highlight issues faced by data workers: (a) code hoarding and (b) excessive task switching and code cloning. To mitigate these issues, we provide design recommendations based on existing work, and propose to augment IDEs with an interactive visual plugin. This plugin parses source code to identify and visualize high-level task details. Data workers can use the resulting visualization to better understand and navigate the source code. As a realization of this idea, we present AddIn, an add-in for RStudio that identifies and visualizes the hypotheses that a data worker is testing for statistical significance through her source code.
We all hold stereotypes about different locations across the world. Do these stereotypes affect our attitudes toward cloud services when we are told the location of the servers storing our data? And does it matter if the cloud services are provided by a well-known brand? To answer these questions, we conducted a 2 x 11 experiment examining the effects of location and brand cues on users' reactions to cloud services. Brand authority of the hosting company had a positive effect. More importantly, the location of the cloud servers also affected outcomes, in that users tended to prefer some locations (e.g., US, Europe, Oceania, China) over others (e.g., Russia) for storing their cloud data. These findings have theoretical implications as well as design suggestions.
The arrival of self-driving cars and smart technologies is fraught with controversy, as users hesitate to cede control to machines for vital tasks. While advances in engineering have made such autonomous technology a reality, considerable design work is needed to motivate their mass adoption. What are the key predictors of people's acceptance of self-driving cars? Is it the ease of use or coolness aspect? Is it the degree of perceived control for users? We decided to find out with a survey (N = 404) assessing acceptance of self-driving cars, and discovered that the strongest predictor is "posthuman ability," suggesting that individuals are much more accepting of technology that can clearly outclass human abilities.
An audience exists for personal information, including quantified-self data, beyond an individual's social network and social communities. In the era of big data, the research and policy arenas are two areas where up-to-date assemblages of personal information have market value. In this ongoing study, we examine the long-term value of small data [2], acknowledging that there is also a societal need and an audience for rich, personalized collections of digital self-tracking records. Using qualitative research methods, we interviewed 18 people to explore the nature of self-tracking data that exists as a byproduct of daily life, and their sense of why and how their data could be archived for posterity. In the process, the intellectual and design challenges of a digital quantified-self archive are explored.
This paper introduces an emerging typology of the 'absences' that confound the study of self-tracking. A review of the literature, and the ongoing work of the authors on the long-term value of self-tracking data, is used as a resource to develop descriptions of the levels and types of 'gaps' that emerge as part of the activities, behaviors, technologies, and data practices of self-tracking. Such gaps are shown to be both common and insightful, highlighting the economic, social, behavioral, and psychological layers that undergird self-tracking.
We propose a novel, biologically plausible cost/fitness function for sensorimotor control, formalized with the information-theoretic principle of empowerment, a task-independent universal utility. Empowerment captures uncertainty of different natures in the perception-action loop (e.g., noise, delays) in a single quantity. We present the formalism in a Fitts' law type goal-directed arm movement task and suggest that empowerment is one potential underlying determinant of movement trajectory planning in the presence of signal-dependent sensorimotor noise. Simulation results demonstrate the temporal relation of empowerment and various plausible control strategies for this specific task.
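For reference, empowerment is commonly formalized (following Klyubin, Polani, and Nehaniv) as the channel capacity of the perception-action loop; the paper's exact notation may differ:

```latex
% Empowerment at state s_t: channel capacity between an n-step
% action sequence A_t^n and the resulting sensor state S_{t+n}.
\mathfrak{E}(s_t) = \max_{p(a_t^n)} I\left(A_t^n;\, S_{t+n} \mid s_t\right)
```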
Every year, Human-Computer Interaction (HCI) researchers create new technical innovations. Unfortunately, the User-Centered Design (UCD) processes used by most designers in HCI do not help much when the challenge is to find the best users for these innovations. We augmented the matchmaking design method, making it more systematic in considering potential users by using a list of 399 occupation groups and by incorporating customer discovery interviews from the Lean Startup. We then assessed our new design method by searching for users who might benefit from two different technical innovations: ViBand and PaperID. We found that matchmaking with the list of occupation groups helped surface users we would likely not have considered. In addition, the customer discovery interviews helped to generate better applications and additional target users for the innovations. This paper documents our process, the design method, and the insights we gained from using it.
Despite promising results, the psychological approach of implementation intentions remains underused in 'in-the-wild' habit formation apps. The majority of existing apps rely on self-tracking and reminders, but these hinder the development of habits. This study proposes a new mechanism to support habit formation using reinforced implementation intentions. Our findings suggest that adding reinforcement is indeed useful for maintaining compliance, but not necessarily for improving automaticity. We also discuss how the use of reinforcement could be improved in the future.
Although numerous studies have focused on interfaces for maneuvering drones, a method for evaluating these interfaces has not been established. In this study, we carried out a pointing experiment with a drone. The results indicate that target distance and target width affect movement time and error rate while maneuvering, which is consistent with the results of previous pointing studies. Fitts' law was not a good fit (R^2 = 0.672), while the data fit well to a two-part model (R^2 = 0.993). Based on these results, we propose future experimental work that could contribute to improving drone interfaces.
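This kind of model comparison can be reproduced on any movement-time dataset with ordinary least squares. The sketch below uses hypothetical data and a Welford-style two-part formulation, which may differ from the paper's exact model:

```python
import numpy as np

# Hypothetical movement-time data: target distance D, width W (same
# units), and mean movement time MT in seconds per condition.
D  = np.array([100, 200, 400, 100, 200, 400], dtype=float)
W  = np.array([ 20,  20,  20,  40,  40,  40], dtype=float)
MT = np.array([0.9, 1.1, 1.4, 0.7, 0.9, 1.2])

# Fitts' law: MT = a + b * log2(D/W + 1)
ID = np.log2(D / W + 1)
X1 = np.column_stack([np.ones_like(ID), ID])
coef1, *_ = np.linalg.lstsq(X1, MT, rcond=None)

# Welford-style two-part model: MT = a + b1*log2(D) + b2*log2(1/W)
X2 = np.column_stack([np.ones_like(D), np.log2(D), np.log2(1 / W)])
coef2, *_ = np.linalg.lstsq(X2, MT, rcond=None)

def r2(X, coef):
    pred = X @ coef
    ss_res = np.sum((MT - pred) ** 2)
    ss_tot = np.sum((MT - MT.mean()) ** 2)
    return 1 - ss_res / ss_tot

print(f"Fitts    R^2 = {r2(X1, coef1):.3f}")
print(f"Two-part R^2 = {r2(X2, coef2):.3f}")
```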
Mobile phone use is pervasive, yet little is known about task switching on digital platforms and applications. We propose an unobtrusive experience sampling method that observes how individuals use their smartphones by taking screenshots every 5 seconds while the device is on. The purpose of this paper is to incorporate psychological processes into feature extraction and to use these features to predict task switching behavior on smartphones. Features are extracted from the sequence of screenshots, gauging visual stimulation, cognitive load, velocity and accumulation, sentiment, and time-related factors. Labels of task switching behavior were manually tagged for 87,182 screenshots from 60 subjects. Using a random forest, we correctly infer a user's task switching behavior from the unstructured screenshot data with up to 77% accuracy, demonstrating that screenshot features are a viable basis for predicting task switching.
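A minimal sketch of this prediction pipeline, with random stand-in features and labels in place of the actual screenshot features and manual tags:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# One row per screenshot; columns stand in for features such as
# visual stimulation, cognitive-load proxy, scrolling velocity,
# sentiment, and time-of-day.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))       # stand-in for extracted features
y = rng.integers(0, 2, size=1000)    # stand-in for manual switch tags

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.2f}")
```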
Various social media sites and online communities provide new channels for people in need to ask questions and seek help. However, individuals may still encounter psychological barriers that deter solicitation for assistance, more formally described as "social costs". For example, people seeking help may be concerned about burdening others or about the obligation to reciprocate. To understand what could reduce social costs, we conducted a study in the context of Question-Answering (QA) and investigated three factors inspired by the literature: anonymity (posting a question anonymously), recommendation (having the system handle question routing), and ephemerality (allowing questions to be visible only for a short period). We built a QA platform supporting these three features and conducted a randomized within-subject experiment to test their effects on the social costs of posting questions. Results suggest that anonymity, recommendation, and ephemerality all reduce social costs, which provides design implications for future community building.
In the overlap between Human-Computer Interaction (HCI) and Cinematics sits an interest in physiological responses to experiences. Focusing particularly on brain data, Neurocinematics has emerged as a research field using Brain-Computer Interface (BCI) sensors. Where previous work found inter-subject correlations (ISC) between brain measurements of people watching movies in constrained conditions using functional magnetic resonance imaging (fMRI), we seek to examine similar responses in more naturalistic settings using functional Near Infrared Spectroscopy (fNIRS). fNIRS has been shown to be highly suitable for HCI studies, being more portable than fMRI and more tolerant of many natural movements than Electroencephalography (EEG). Early results found significant ISC, which is promising for the use of fNIRS in Neurocinematics.
The GDPR has a significant impact on the way users interact with technologies, especially the everyday platforms used to personalize news and related forms of information. This paper presents the initial results from a study whose primary objective is to empirically test those platforms' level of compliance with the so-called 'right to explanation'. The project's initial phase identified four research topics that constitute gaps in existing legal and HCI scholarship: (1) GDPR compliance through user-centered design; (2) the inclusion of values in the system; (3) design considerations regarding interaction strategies, algorithmic experience, transparency, and explanations; and (4) technical challenges. The second phase, currently ongoing, allows us to make some observations regarding the registration process and the privacy policies of three categories of news actors: first-party content providers, news aggregators, and social media platforms.
We conducted a study of n = 20 older adults to better understand their mental models of what the term 'privacy' means to them in both digital and non-digital contexts. Participants were asked to represent this information diagrammatically and to describe their drawings in a semi-structured interview. Preliminary coding analysis revealed participants' frustrations with the available methods for addressing privacy violations. While some asserted that there are both good and bad uses of private data, others avoided technology altogether out of privacy fears or ambivalence toward web-based banking and social media services. Some participants described fighting back against privacy attacks, while others felt resigned altogether. Our study provides initial steps toward illuminating the privacy perceptions of older adults and can inform training and tailored design for this important demographic.
In today's age, a wide range of individuals create their own web presence. Thanks to modern tools, creating a website is easier than ever. In order to make sure that this increased accessibility does not come at the cost of decreased security, the respective web design knowledge should become more accessible as well. We created 16 security patterns for web design based on expert knowledge. We present the solution hierarchy of these patterns and how they might be applied by non-expert users.
Ordinary users are usually not good at making decisions about cybersecurity, leaving them vulnerable to attackers. Quite a few tools have been devised to help, but they cannot balance security and usability well. To address this problem, this paper explores the application of prospect theory to security recommendations. We conducted online surveys (n=61) and a between-subjects experiment (n=106) with six conditions. In the experiment, we provided different security recommendations about two-factor authentication (2FA) to participants in different conditions and recorded their decisions about enabling it. Results show that participants in the "Disadvantage" condition were the most willing to adopt 2FA. The findings indicate that showing disadvantages can be useful for persuading users toward better security decisions.
Two-factor authentication (2FA) provides an extra layer of security, as it requires two pieces of evidence to be provided to an authentication mechanism before granting access to a user. However, many people avoid 2FA; one reason is that it requires performing extra actions. In this late-breaking work, we present UltraSonic Watch, which provides seamless 2FA through ultrasound. We report a small-scale within-subjects study (N=30) that investigates the performance of UltraSonic Watch and participants' experience (in terms of perception, preference, and willingness to adopt). The results are promising: they reveal that ultrasound can provide an efficient 2FA mechanism that is transparent to users, who were positive about adopting such an approach to increase the security of the authentication process.
We report on the design, implementation and evaluation of
Do we disclose more information online when we access Wi-Fi from home compared to the university, an Airbnb rental, or a coffee shop? Does it matter if we are shown terms and conditions (T&C) before getting online? Will signing into a virtual private network (VPN) affect our disclosure? We conducted an experiment (N = 276) to find out. Our results suggest that while a VPN promotes disclosure of personal information and unethical behaviors in an Airbnb network, the provision of T&C inhibits this disclosure. Conversely, in a university network, the provision of T&C encourages disclosure of unethical behavior, but the presence of a VPN cue inhibits it. Further, users' belief in the publicness heuristic (public networks are risky) dictates how much they reveal in various locations, based on their perceptions of the relative security of accessing Wi-Fi from those locations.
Parallel Coordinates Plots (PCP) are a widely used approach to interactively visualize and analyze multidimensional scientific data in a 2D environment. In this paper, we explore the use of Parallel Coordinates in an immersive Virtual Reality (VR) 3D visualization environment as a means to support decision-making in engineering design processes. We evaluate the potential of VR PCP through a formative qualitative study with seven participants. In a task involving 54 points with 29 dimensions per point, participants were able to detect patterns in the dataset comparable to those identified in a previously published study in which two expert users used traditional 2D PCP, which serves as the gold standard for the dataset. The dataset describes the Pareto front for a three-objective aerodynamic design optimization study in turbomachinery.
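For readers unfamiliar with the 2D baseline, a standard desktop PCP can be drawn in a few lines; the data here is a random stand-in for the 54-point, 29-dimension design dataset, and the cluster labels used for coloring are invented:

```python
import numpy as np
import pandas as pd
from pandas.plotting import parallel_coordinates
import matplotlib.pyplot as plt

# Stand-in for the design dataset: 54 points x 29 dimensions,
# labeled by a hypothetical cluster id for coloring.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(54, 29)),
                  columns=[f"d{i}" for i in range(29)])
df["cluster"] = rng.integers(0, 3, size=54)

parallel_coordinates(df, class_column="cluster", alpha=0.5)
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()
```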
Reverse engineering is a complex task essential to several software security jobs like vulnerability discovery and malware analysis. While traditional program comprehension tasks (e.g., program maintenance or debugging) have been thoroughly studied, reverse engineering diverges from these tasks as reverse engineers do not have access to developers, source code, comments, or internal documentation. Further, reverse engineers often have to overcome countermeasures employed by the developer to make the task harder (e.g., symbol stripping, packing, obfuscation). Significant research effort has gone into providing program analysis tools to support reverse engineers. However, little work has been done to understand the way they think and analyze programs, potentially leading to the lack of adoption of these tools among practitioners. This paper reports on a first step toward better understanding the reverse engineer's process and mental models and provides directions for improving program analysis tools to better fit their users. We present the initial results of a semi-structured, observational interview study of reverse engineers (n=15). Each interview investigated the questions they asked while probing the program, how they answered these questions, and the decisions made throughout. Our initial observations suggest that reverse engineers rely on a variety of reference points in both the program text and structure as well as its dynamic behavior to build hypotheses about the program's function and identify points of interest for future exploration. In most cases, our reverse engineers used a mix of static and dynamic analysis---mostly manually---to verify these hypotheses. Throughout, they rely on intuition built up over past experience. From these observations, we provide recommendations for user interface and program analysis improvements to support the reverse engineer.
Our research explores the impact of network impairments on remote augmented reality (AR) collaborative tasks, and possible strategies to improve user experience in these scenarios. Using a simple AR task, under a controlled network environment, our preliminary user study highlights the impact of network outages on user workload and experience, and how user roles and learning styles play a role in this regard.
Conversational agents such as those hosted by smart speakers have become popular over the last few years. Although users can accomplish tasks as if they were asking a person, they still have problems using conversational agents effectively. To address this problem, some proposals explain how to phrase requests by using visual information such as instruction manuals and displays. However, such instructions create new problems, such as occupying the hands and eyes. The purpose of this study is to support request entry by giving usage instructions in an easy-to-understand manner without visual information. Our proposal uses a pair of conversational agents with different voice types: a main agent and a sub-agent. Experiments show that agent pairing yields easier-to-understand instructions than using the main agent alone. Furthermore, usage instructions are easier to understand when the sub-agent reads aloud specific usage examples.
Machine learning is being adopted in a wide range of products and services. Despite its adoption, design and research processes for machine learning experiences have yet to be cemented in the user experience community. Prototyping machine learning experiences is noted to be particularly challenging. This paper suggests Wizard of Oz prototyping to help designers incorporate human-centered design processes into the development of machine learning experiences. This paper also surfaces a set of topics to consider in evaluating Wizard of Oz machine learning prototypes.
It is common for individuals with diverse demographic backgrounds to collaborate through computer-mediated communication (CMC) technologies. Internally diverse groups are typically considered advantageous for group performance due to the presence of different perspectives and the potential to stimulate new ideas. However, intergroup conflicts can also occur in diverse groups, especially those with imbalanced composition. Previous studies have pointed out that minority members often suffer from unequal participation and performance pressure, which may further decrease group outcomes. Since CMC tools facilitate online collaboration, it is necessary to understand how group composition interacts with the affordances of CMC tools to shape group collaboration. In this study, we tested three gender compositions (female-majority, equal-gender-composition, male-majority) with two communication contexts (video-text, text-only) and found that both gender composition and communication medium influenced group collaboration. Design implications for online collaboration are provided based on our findings.
Unlike conventional desktop simulations, which have constrained interaction, immersive Virtual Reality (VR) allows users to move freely and interact with objects. In this paper we discuss a work-in-progress system that 'virtually' records participants' movements and actions within a simulation. The system recovers and rebuilds recorded data on request, accurately replaying individual participants' motions and actions in the simulation. Observers can review this reconstruction using an unrestricted virtual camera and, if necessary, observe changes from the recorded input devices. Each participant's skeleton structure is reconstructed from tracked input devices. We conclude that our system offers a detailed recreation of high-level knowledge and visual information about participant actions during simulations.
A significant body of research in Artificial Intelligence (AI) has focused on generating stories automatically, based either on prior story plots or on input images. However, the literature has little to say about how users would receive and use these stories. Given the quality of stories generated by modern AI algorithms, users will nearly inevitably have to edit these stories before putting them to real use. In this paper, we present the first analysis of how human users edit machine-generated stories. We obtained 962 short stories generated by one of the state-of-the-art visual storytelling models. For each story, we recruited five crowd workers from Amazon Mechanical Turk to edit it. Our analysis of these edits shows that, on average, users (i) slightly shortened machine-generated stories, (ii) increased lexical diversity in these stories, and (iii) often replaced nouns and their determiners/articles with pronouns. Our study provides a better understanding of how users receive and edit machine-generated stories, informing the design of more usable and helpful story generation systems.
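Lexical diversity in (ii) can be quantified in several ways; below is a simple type-token ratio sketch (the example strings are invented, and the paper may use a different measure):

```python
def type_token_ratio(text: str) -> float:
    """Lexical diversity as unique tokens / total tokens."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

machine = "the dog ran to the park and the dog played"
edited  = "the dog raced to the park and then happily played"
print(type_token_ratio(machine), type_token_ratio(edited))
```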
Most user experience (UX) evaluation tools require users to self-reflect and communicate their thoughts (e.g., thinking aloud, retrospective interviews, questionnaires). In the context of designing for people with dementia, however, conditions like aphasia and general cognitive decline restrict the applicability of these methods. In this paper, we report on the iterative design of Proxemo, a smartwatch app for documenting observed emotions in people with dementia. Evaluations of Proxemo in dementia care facilities showed that observers considered Proxemo easy to use and preferred it over note-taking on paper. The agreement between different coders was substantial (κ = .71). We conclude that Proxemo is a promising tool for UX evaluations in the dementia context - and possibly beyond - but further research on the analysis of the data it generates is required.
Creating personas from actual online user information is an advantage of the data-driven persona approach. However, modern online systems often provide big data from millions of users displaying vastly different behaviors, resulting in possibly thousands of personas representing the entire user population. We present a technique for reducing the number of personas to a smaller set that efficiently represents the complete user population while being more manageable for the end users of personas. We first isolate the key user behaviors and demographic attributes, creating thin personas, and then apply an algorithmic cost function to collapse the set to the minimum needed to represent the whole population. We evaluate our approach on 26 million user records of a major international airline, isolating 1593 personas. Applying our approach, we collapse this number to 493, a 69% decrease. Our findings have implications for organizations that have a large user population and wish to employ personas.
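The collapse step might look roughly like the following greedy sketch; the actual cost function is not specified here, so Euclidean distance with an arbitrary threshold stands in for it, and the persona vectors are random placeholders:

```python
import numpy as np

def collapse_personas(personas: np.ndarray, threshold: float):
    """Greedily keep one representative among personas whose
    attribute vectors are closer than `threshold`.

    `personas` is an (n, d) array of behavioral/demographic vectors.
    The distance-based cost and threshold are illustrative stand-ins.
    """
    kept = []
    for p in personas:
        if all(np.linalg.norm(p - q) >= threshold for q in kept):
            kept.append(p)
    return np.array(kept)

rng = np.random.default_rng(1)
thin = rng.normal(size=(1593, 8))   # stand-in for 1593 thin personas
reduced = collapse_personas(thin, threshold=3.0)
print(len(reduced), "personas retained")
```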
One critique of personas is that the underlying data on which they are based may become stale, requiring further rounds of data collection. However, we could find no empirical evidence for this criticism. In this research, we collected monthly demographic data over a two-year period for a large online content publisher and generated fifteen personas each month following an identical algorithmic approach. We then compared the sets of personas month-over-month, year-over-year, and over the whole two-year period. Findings show an average 18.7% change in personas monthly, a 23.3% change yearly, and a 47% change over the entire period. The findings support the critique that personas change over time and highlight that changes in the underlying data can occur within a relatively short period. The implication is that organizations using personas should employ ongoing data collection to detect possible persona changes.
Through technological advancements in various areas of our lives, Conversational Agents have progressed in their human-likeness. In the field of HCI, however, the use of conversational fillers (e.g., "um," "uh") by Conversational Agents has not been fully explored in an experimental setting. Using a 2 x 2 experimental design, we observed the effects of fillers on user perceptions of the intelligence, human-likeness, and likability of Conversational Agents. From the results of 26 participants, we conclude that 1) agents using fillers are perceived as less intelligent and less likable in task-oriented conversations, and 2) fillers did not produce any statistically significant change in perceived human-likeness. However, further examination showed that users reported filler-speaking agents as more entertaining in social-oriented conversations. With these findings, we discuss design implications for voice-based Conversational Agents.
Persuasive technologies and nudging are increasingly used to shape user behaviors in applications ranging from health and the environment to business. However, a thorough understanding of the effectiveness of nudges across different contexts, and of whether they affect user perception of a system, is still lacking. We report the results of a controlled, quantitative study with 20 participants that tested the effectiveness of three different nudges in an e-commerce environment and whether their use affects participants' trust. We found that products nudged via an anchoring effect were more frequently "bought" by participants, and that while participants deemed a store version implementing nudges and one which did not to be equally trustworthy, they perceived the former as technically inferior. Overall, we found the effects of nudging to be less dominant than reported in previous studies.
In this study, we apply "conversational search" to the design of a news service for smart speakers so that users can actively get richer information while listening to the news. We designed a research prototype called "Anchor," in which a smart speaker news assistant provides users with news about specific keywords and responds to users' questions. We recruited 21 participants and conducted a user study in which they consumed news with Anchor, followed by post hoc interviews. The qualitative analysis revealed the following: (1) people preferred interactive news to news briefings; (2) people found it useful to get answers to their questions by talking with the assistant; (3) although users were allowed to ask questions at any time, they often hesitated, as they did not want to miss the flow of the news; and (4) they had difficulty recalling the questions they had not asked. Based on these findings, we discuss the implications for news design in a voice-only user interface.
Information architecture forms the foundation of users' navigation experience. Open card sorting is a widely used method for creating information architectures based on users' groupings of content. However, little is known about the method's cross-study reliability: does it produce consistent content groupings for similar-profile participants involved in different card sort studies? This paper presents an empirical evaluation of the method's cross-study reliability. Six card sorts involving 140 participants were conducted: three open sorts for a travel website and three for an eshop. Results showed that participants provided highly similar card sorting data for the same content. A rather high agreement between the produced navigation schemes was also found. These findings support the cross-study reliability of the open card sorting method.
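Agreement between two card sorts can be quantified by treating each sort as a partition of the cards. The sketch below uses the adjusted Rand index, though the paper's similarity measure may differ; the pile assignments are invented:

```python
from sklearn.metrics import adjusted_rand_score

# Two participants' groupings of the same 8 cards, encoded as the
# index of the pile each card was placed in (hypothetical data).
sort_a = [0, 0, 1, 1, 2, 2, 2, 3]
sort_b = [0, 0, 1, 1, 1, 2, 2, 3]

# 1.0 would mean identical partitions; ~0 means chance-level agreement.
print(adjusted_rand_score(sort_a, sort_b))
```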
While the subject of synaesthesia has inspired various practitioners and has been utilized as a design material in different formats, research has not so far presented a way to apply this captivating phenomenon as a source of design material in HCI. The purpose of this paper is to explore the translative property of synaesthesia and introduce a tangible way to use this intangible phenomenon as an interactive design material source in HCI and design. The paper shares a card-based tool that enables practitioners to use the translative property of synaesthesia for ideation, and introduces a potential area where this tool may be utilized for exploring user experiences. This work has implications for the CHI community as it attempts to share a practical way of using the intangible property of synaesthesia to explore potential user experiences.
The analysis of tasks and workflows is a longstanding tradition in Human-Computer Interaction (HCI). In many cases, it provides a crucial basis for the usable design of interactive systems. However, established tools almost exclusively focus on task content and structure, thereby ignoring the more "experiential" aspects of task performance. To fill this gap, we combined Hierarchical Task Analysis (HTA) with the analysis of subjective accounts of meaning. Our explorative study (N=4) suggests that objective descriptions resulting from HTA and subjective experience of one and the same activity differ. People tend to subsume experientially unimportant sequences or even ignore these within their subjective experience. Furthermore, people are able to name experientially important sequences and to relate these to feelings and thoughts (i.e., meaning). In the future, more refined versions of our approach may support practitioners with the design of meaningful interaction and activities.
Personality affects the way people feel and act. This paper examines the effect of personality traits, as operationalized by the Big Five questionnaire, on the number, type, and severity of identified usability issues, on physiological signals (skin conductance), and on subjective emotional ratings (valence-arousal). Twenty-four users interacted with a web service and then participated in a retrospective thinking aloud session. Results revealed that the number of usability issues is significantly affected by the Openness trait, and that Emotional Stability significantly affects the type of reported usability issues; problem severity is not affected by any trait. Valence ratings are significantly affected by Conscientiousness, whereas Agreeableness, Emotional Stability, and Openness significantly affect arousal ratings. Finally, Openness has a significant effect on the number of detected peaks in users' skin conductance.
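Counting skin-conductance peaks of the kind reported above is commonly done with simple peak detection. A sketch on a synthetic trace follows, with heuristic (assumed) prominence and spacing thresholds rather than the paper's actual parameters:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic skin-conductance trace sampled at 4 Hz for 5 minutes.
rng = np.random.default_rng(2)
t = np.arange(0, 300, 0.25)
eda = 2 + 0.05 * rng.normal(size=t.size)      # tonic level + noise
eda += 0.5 * np.exp(-((t - 60) % 90) / 5)     # a few phasic responses

# Count skin-conductance responses: peaks rising at least 0.1 muS
# above their surroundings and at least 5 s apart.
peaks, _ = find_peaks(eda, prominence=0.1, distance=int(5 / 0.25))
print(len(peaks), "SCR peaks detected")
```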
Over the last decade, advances in machine learning have multiplied the possibilities for applications of artificial intelligence. One such application is digital companions that assist their users in tasks and activities. In this study, we used a Wizard-of-Oz prototype of a companion that supports workshop planning to evaluate whether digital companions can be designed to create possibilities for positive experiences in work contexts, and whether they are perceived as such. We find that, compared to a neutral companion with the same functionality, interacting with a companion designed for positive experience makes the work feel more positive and natural, and the presented content appear more positive.
The use of complex machine learning models can make systems opaque to users. Machine learning research proposes the use of post-hoc explanations, but it is unclear whether these give users insight into otherwise uninterpretable models. One minimalistic way of explaining image classifications by a deep neural network is to show only the areas that were decisive for the assignment of a label. In a pilot study, 20 participants looked at 14 such explanations generated either by a human or by the LIME algorithm. For explanations of correct decisions, they identified the explained object with significantly higher accuracy (75.64% vs. 18.52%). We argue that this shows explanations can be very minimalistic while retaining the essence of a decision, though the range of decision-making contexts that can be conveyed in this manner is limited. Finally, we found that explanations are unique to the explainer: human-generated explanations were assigned 79% higher trust ratings. As a starting point for further studies, this work shares our first insights into quality criteria for post-hoc explanations.
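Masked-image explanations of the kind described above can be generated with the LIME image explainer roughly as follows; the image and classifier here are toy stand-ins for the study's deep network:

```python
import numpy as np
from lime import lime_image

# Stand-in image and classifier: a random RGB image and a dummy
# two-class probability function (in practice, a deep network).
image = np.random.default_rng(0).uniform(size=(64, 64, 3))

def predict_fn(images):
    # Toy classifier: "class 1" probability grows with mean brightness.
    p1 = images.mean(axis=(1, 2, 3))
    return np.column_stack([1 - p1, p1])

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, num_samples=200)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,   # keep only regions supporting the label
    num_features=5,
    hide_rest=True)       # mask out everything else, as in the study
print(mask.sum(), "pixels kept in the explanation")
```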
The HCI community has extensively studied location-based systems that applied various location sensing technologies. However, there is a lack of user experience (UX) studies of systems where hybrid location sensing approaches are provided. In this work, we designed, implemented and studied a hybrid location sharing system that offers automatic location sensing through Bluetooth Low Energy (BLE) beacons and participatory location sharing using GPS. Our findings provide design implications to future location-based systems that apply such a hybrid location sensing design.
This paper introduces fact-checking into Machine Learning (ML) explanation by presenting influential training data points to users as facts, with the goal of boosting user trust. We investigate which training data points influence a prediction and how presenting them affects user trust, in order to enhance ML explanation. We tackle this question by allowing users to check the training data points with the highest and lowest influence on the prediction. A user study found that the presentation of influences significantly increases user trust in predictions, but only for training data points with higher influence values under the high model performance condition, where users can justify their actions with more similar facts.
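The abstract does not specify how influence is computed; one brute-force way to obtain per-point influences on a small model is leave-one-out retraining, sketched below on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Leave-one-out influence: how much does removing one training point
# change the predicted probability for a held-out test point?
X, y = make_classification(n_samples=60, n_features=5, random_state=0)
X_train, y_train, x_test = X[:-1], y[:-1], X[-1:]

base = LogisticRegression().fit(X_train, y_train)
p_base = base.predict_proba(x_test)[0, 1]

influences = []
for i in range(len(X_train)):
    keep = np.arange(len(X_train)) != i
    m = LogisticRegression().fit(X_train[keep], y_train[keep])
    influences.append(p_base - m.predict_proba(x_test)[0, 1])

top = np.argsort(np.abs(influences))[::-1][:3]
print("most influential training points:", top)
```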
Natural language interfaces (NLIs) that aim to understand arbitrary language are not only difficult to engineer; they can also create unrealistic expectations of the system's capabilities, resulting in user confusion and disappointment. We use an interactive language learning game in a 3D blocks world to examine whether limiting a user's communication to a small set of artificial utterances is an acceptable alternative to the much harder task of accepting unrestricted language. We find that such a restricted language interface provides the same or better performance on this task while improving user experience indices. This suggests that some NLIs can restrict user language without sacrificing user experience, and highlights the importance of conveying NLI limitations to users.