Our everyday technologies evidence clear examples of racial bias. Rather than attempt to eliminate bias through seeking fairness in algorithms, regulatory intervention, or a race-blind stance, this paper seeks to correct the balance by adopting an explicitly anti-racist approach to the design of sociotechnical systems. As a research-through-design initiative, we bring techniques from critical technical practice to bear on revealing and inverting assumptions in HCI, attempting to produce alternative sociotechnical systems that aim not merely to reveal or correct but to destabilize or dismantle systems of oppression. We articulate core principles to guide such work and present four system prototypes to interrogate anti-racist HCI as a potential form of critical technical practice. We conclude with a discussion of the challenges that face anti-racist HCI in terms of timing, reflection, and failure, addressing what an anti-racist critical technical practice reveals about the enduring structural sources of inequity in the products and practices of HCI.
This submission is intended to start a critical conversation about the “Future of Work.” I expose the ideological commitments of major funders of research into this area, such as the NSF and private think tanks, because they shape narratives on work and worker organizing as well as the course of technology development more broadly. In the research itself, workers may be invited to participate in computing research, but they seldom determine the goals of projects, nor do they own the results. Ultimately, while government agencies and nonprofit organizations claim to seek improved conditions for workers, the knowledge and technologies produced through this research serve primarily to undermine the power of working-class people and their unions.
“Art is not Research. Research is not Art.” is a multimedia, multi-site participatory installation by a collective of artists and researchers from Calgary, Toronto, and Lancaster; it is informed by these contexts. It reflects the tensions between how “participants” are treated in participatory art and in interaction research. It offers a framework through which we can explore how epistemologies might evolve in a blending between Art and Research. Visitors download the paper to read, critically reflect on the relationship between art and research, and experientially engage with the material through a series of creative prompts. A performance variation of the piece will be presented in person and online through the alt.chi track of the ACM SIGCHI Conference on Human Factors in Computing Systems.
We endure decades of preparation for, performance of, and recovery from work. It consumes and corrupts most of our lives. It doesn't have to be like this. We can envision and design towards a future that gives life back to us: a world without work as we know it. HCI as a field should be oriented towards abolishing work for everyone rather than designing a future of work, often depicted as a metaverse of three-dimensional avatars, virtual workplaces, and automated oppression. In this paper, we detail what work is and what the abolition of work means, and we issue a call to action for HCI to abolish work following a transformative justice framework.
Is our future heading towards enhancing the human experience with computer-mediated reality? Immersive technology is unique, existing between the world and our senses, letting users traverse wholly virtual environments (e.g., distant places or fantasy worlds), augment the real world with virtual objects, or experience any mix of virtuality and reality in between. This paper explores the philosophical and social ramifications of ubiquitous immersive technology, envisioning a relatively near future where mainstream technology has been replaced and a dystopian far future wherein individuals may choose to abandon reality in favour of virtual worlds. Creating design fictions as thought experiments, we explore the open challenges of possible futures in XR, researching tomorrow’s technologies today.
It is often assumed that imagery provides an easy or universal mode of communication, but when we imagine our worlds through sketches, visualizations, and comics, are we leaving anyone behind? The visual world is rich; it transcends boundaries and connects us... but not all of us. To be able to create an image or visualize something is a skill, but to be able to view and interpret that image is a privilege: how can we bridge the gap between visual and textual interpretation? We propose that alt-narrative could bridge the gap between visual communication and imagery to connect everyone in storytelling, visualization, and thought. This alt.chi paper puts forward an exploration of the alt-narrative method in HCI imagery and comics. To engage with this research, the reader should either activate their computer or smart device screen reader, or open the paper in Adobe Acrobat on Mac or Windows and then activate Read Out Loud.
Confronted with dulled perception and challenges to creativity when working from home, I imagined my home office sensing my state of arousal and searching together with me for inspirational moments by infusing its familiar appearance with distortion.
In two prototypical human-home office interaction systems, we use Virtual Reality (VR), Deep Reinforcement Learning (DRL), and Galvanic Skin Response (GSR) to enhance a home office’s sensitivity to its user’s level of arousal and to enlarge its textural action space.
Although physiological feedback in machine learning suffers from low learning rates, the resulting interaction offers a fresh perspective on the human-home office relationship.
We introduce “Calmbots”, insect-based interfaces comprising multiple functions, e.g., drawing, display, transportation, or haptics. We explore the possibilities of multiple insects as co-creation interfaces that make artworks indirectly through insects, in contrast with BioArt, where artworks are made with biological technology. Considering the scalability, sustainability, and promotability of applying insect-based devices to creative activities in daily life, we utilize a readily available and efficient control system based on AR markers and a radio base station, propose efficient ways of controlling multiple insects to reach goals and transport objects, and customize flexible option parts. In experimental trials, our results showed effective control of a group of three or five cockroaches with at least 60% reaching accuracy, mobility on carpeted or cable-lined floors, and possible continuous control over a certain time duration. Participants in a user study felt positive about Calmbots’ functions and expected further improvements in their performance and appearance for daily-life activities and creation.
We present a new approach to visualizing data that is well suited for personal and casual applications. The idea is to map the data to another dataset that is already familiar to the user, and then rely on their existing knowledge to illustrate relationships in the data. We construct the map by preserving pairwise distances or by maintaining relative values of specific data attributes. This metaphorical mapping is very flexible and allows us to adapt the visualization to its application and target audience. We present several examples where we map data to different domains and representations, including mapping data to cat images, encoding research interests with neural style transfer, and representing movies as stars in the night sky. Overall, we find that although metaphors are not as accurate as traditional techniques, they can help design engaging and personalized visualizations.
Grounded Theory Methodology (GTM) is a powerful way to develop theories in areas where there is little existing research, using a flexible but rigorous, empirically based approach. Although it originates from the fields of social and health sciences, it is a field-agnostic methodology that can be used in any discipline. However, it tends to be misunderstood by researchers within HCI. This paper sets out to explain what GTM is, how it can be useful to HCI researchers, and how it has been misapplied. There is an overview of the decades of methodological debate that surrounds GTM, why it is important to be aware of this debate, and how GTM differs from other, better-understood qualitative methodologies. It is hoped that the reader is left with a greater understanding of GTM and is better able to judge the results of research that claims to use GTM, but often does not.
Ethically defensible research requires wide-ranging, holistic, and deep consideration. It is often overseen by Research Ethics Committees, Institutional Review Boards, or equivalents, but not all organisations have these, and where they do, their degree of independence from organisational priorities varies (perhaps leading to research that would create reputational or other difficulties for organisations being left unpublished or unacknowledged). Conflicts of interest can therefore be left unmanaged, participants may be exploited, and society may not benefit. In this paper, we claim that publishing communities (e.g. scholarly conferences) can play a larger role in supporting improved ethical practice by defining and communicating the ethical values of their community’s collective identity and aspirations. This approach is not prescriptive like procedural ethics, nor as broad as general research ethics codes (both are important), but offers a tangible way to unify ethics concerns across research contexts.
This essay performs a speculative ethics of designing with a researcher’s own bodily fluids. It does so through the creation of “performative texts”: autoethnographic accounts of past experiences in which written words perform through visual and spatial compositions alongside verbal readings aloud. I present three performative texts about moments of discomfort in designing with milk from my own breastfeeding relationship. These texts reflect upon felt experiences of potential harm and help to understand social and material relations of care. From these, I offer three possibilities for how HCI might consider the ethics of first-person research in attending to more-than-human entanglements: unsafe spaces, situated escapes, and censored inclusion. These possibilities and the approach of performative texts contribute to research for more sustainable futures by exploring the decentering of humans through an intimate engagement with the self.
Emotional self-reliance is a key factor in psychological well-being. Individuals with Allism Spectrum Disorder often face challenges in interpersonal communication and social settings. In particular, experiences of emotion echolalia, the mirroring of others’ emotions, can cause not only impairment in communication and relationships with others but also significant emotional distress, putting the mental well-being of allistic people at risk. In this paper, we report on the design of FaceSavr™, a digital textile protective system against emotion echolalia, and describe our participatory design process with seven allistic adults.
Human relationships, intimacy, and the role of technology within them constantly change, catapulted in 2020 by COVID-19. We take this social rupture as an opportunity to reimagine possible futures for love, friendship, and kinship. Through design futuring and related approaches, we offer five prompts we developed for imagining alternative futures that explore a diverse range of intimacies. Through generating responses to the prompts, we offer alternative intimate futures as well as reflections on how such 'prompts for futuring' can be generative for design research. Our work extends calls for diversifying design futuring, imploring design researchers to consider diverse and inclusive ways of designing for futures, especially for human relationships and intimacy.
In the 2010s, machine learning (ML) became a key driver of a quickly growing number of apps and services. How to teach ML and other data-driven approaches in K–12 education has become a focus of intensive research efforts, with many recent advances in technology, pedagogy, and classroom integration. What to teach learners, at what age, and how are some of the open questions being explored.
This study explored children’s interactions with a simple image classification tool, which used only two features to classify images. The results offer a proof of concept of how to teach 1) the principles of the ML workflow and 2) some central ML concepts, including image recognition, supervised learning, training data, models, features, classification, and accuracy. The results show how learning the principles of the technology facilitates a shift in the locus of explanation from what one does oneself to what the computer does. The results also provide examples of how to support children’s developing data agency.
More than half of cancer patients deal with moderately severe pain on a monthly basis, and most of them report a breakthrough pain experience at least once. Despite the importance of pain management for cancer patients, cancer pain remains undertreated. With computer technology, and especially Virtual Reality, offering endless opportunities for pain management, we must consider how low-cost, home-based Virtual Reality for cancer patients can be sensitively designed to provide comfortable, enriching pain management experiences. Working closely with 51 cancer patients and with medical and paramedical personnel, we co-designed an intelligent, personalized mobile application to first collect ecological momentary assessment data on symptoms such as pain and fatigue and on Health-Related Quality of Life, and subsequently enhance symptom management for cancer patients at home. In this paper, we thoroughly explain the screening process and the quantitative analysis we ran to identify which environments patients would like to receive as a Virtual Reality intervention, which can facilitate the design of Virtual Reality interventions for cancer patients.
The global agenda in education foresees the integration of social and emotional learning to equip students to succeed in our evolving digital society. In this paper, we focus on deepening research on AR/MR technologies to foster students’ comprehension of socio-cultural values in historical contexts. We address this challenge by exploring the potential of the interaction paradigm called World-as-Support (WaS). We present a case study of an educational Virtual Heritage experience with primary students at a bomb shelter built during the Spanish Civil War. Our findings showed that the experience enhanced students’ capabilities to reflect upon high-level issues related to the value of human dignity, to grasp some of the essential qualities of the value of solidarity, and to connect historical events with present political situations in Spain. Finally, we discuss four design recommendations for learning activities based on WaS concerning (1) the enhancement of students’ competences in collaboration and communication; (2) critical thinking; (3) the contextualization of historical contents; and (4) moral and ethical considerations for digital augmentation.
Multimodal Interfaces (MMIs) supporting the synergistic use of natural modalities like speech and gesture have been conceived as promising for spatial or 3D interactions, e.g., in Virtual, Augmented, and Mixed Reality (XR for short). Yet, the currently prevailing user interfaces are unimodal. Commercially available software platforms like the Unity or Unreal game engines simplify the complexity of developing XR applications through appropriate tool support. They provide ready-to-use device integration, e.g., for 3D controllers or motion tracking, and corresponding interaction techniques such as menus, (3D) point-and-click, or even simple symbolic gestures to rapidly develop unimodal interfaces. Comparable tool support is still missing for multimodal solutions in this and similar areas. We believe that this hinders user-centered research based on rapid prototyping of MMIs, the identification and formulation of practical design guidelines, the development of killer applications highlighting the power of MMIs, and ultimately a widespread adoption of MMIs. This article investigates potential reasons for the ongoing uncommonness of MMIs. Our case study illustrates and analyzes lessons learned during the development and application of a toolchain that supports rapid development of natural and synergistic MMIs for XR use cases. We analyze the toolchain in terms of developer usability, development time, and MMI customization. This analysis is based on the knowledge gained over years of research and academic education. Specifically, it reflects on the development of appropriate MMI tools and their application in various demo use cases, in user-centered research, and in the lab work of a mandatory MMI course of an HCI master’s program. The derived insights highlight successful choices made as well as potential areas for improvement.
Augmented Reality Smart Glasses (ARSG) are a recent development in consumer-level personal computing technology. Research on ARSGs has largely focused on new forms of etiquette for these personal computing devices, but little else has been examined, due in part to limited consumer availability. The most well-known example of ARSGs is Google Glass, which is no longer available for consumer purchase due to privacy concerns. Google has more recently transitioned to industry-focused applications with the Glass Enterprise Edition. Recent consumer-facing iterations of the technology include Snapchat Spectacles and Ray-Ban Stories, which reignite some of the anxieties surrounding wearable cameras. Focals by North, the ARSG product studied in this project, do not have the capacity to record video or audio, thus mitigating the risk of privacy breaches. This study examines how users of Focals employ the device, successfully or not, to facilitate daily activities such as scheduling, communication, and wayfinding, and how non-users perceive the interactions of Focals users. Participants wrote blog responses and participated in a focus group on their daily experiences with the glasses; they also speculated on potential uses and features of future iterations relating to accessibility and entertainment purposes. Focals by North, a relatively low-cost ARSG, aims to bring this technology to the mass market to “seamlessly [blend] technology into our world”. However, this study found participants preferred having a choice in when they received notifications, and greatly questioned the need for notifications to appear in their field of vision. We anticipate that these results will inform frameworks for assessing consumer-facing ARSG products in future work.
In-product feedback mechanisms allow for capturing user feedback while the user is engaging with the product or service. Traditionally, in-product feedback has focused on metrics such as Net Promoter Score (NPS) and Customer Satisfaction (CSAT), which look to measure customer loyalty or overall product satisfaction. By introducing complementary user experience (UX) metrics that are focused on user outcomes, UX teams have greater insight into measuring the successes or challenges of their users in the context of use. This case study describes and discusses the process employed and the lessons learned while designing and implementing a user-centered in-product feedback system. We specifically call out challenges and opportunities around aligning with business outcomes, navigating current frameworks, unlocking self-serve data for stakeholders, informing strategy, and feeding additional research. In conclusion, we present these learnings as a framework, dubbed BLUE, to help other UX teams create in-product feedback mechanisms.
Comparing products, features, brands, or ideas relative to one another is a common goal in user experience (UX) and market research. While Likert-type scales and ordinal stack ranks are often employed as prioritization methods, they are subject to several psychometric shortcomings. We introduce the numeric forced rank, a lightweight approach that overcomes some of the limitations of standard methods and allows researchers to collect absolute ratings, relative preferences, and subjective comments using a single scale. The approach is well suited for UX and market research, but is also easily employed as a structured decision-making exercise outside of consumer research. We describe how the numeric forced rank was used to determine the name of a new Google Cloud Platform (GCP) feature, present the findings, and make recommendations for future research.
Collecting accurate and fine-grained information about the music people like, dislike, and actually listen to has long been a challenge for sociologists. As millions of people now use online music streaming services, research can build upon the individual listening history data that are collected by these platforms. Individual interviews, in particular, can benefit from such data by allowing the interviewers to immerse themselves in the musical universe of consenting respondents, and thus ask them contextualized questions and get more precise answers. Designing a visual exploration tool allowing such an immersion is, however, difficult, because of the volume and heterogeneity of the listening data, the unequal “visual literacy” of the prospective users, and the interviewers’ potential lack of knowledge of the music listened to by the respondents. In this case study, we discuss the design and evaluation of such a tool. Designed with social scientists, its purpose is to help them prepare and conduct semi-structured interviews that address various aspects of the listening experience. It was evaluated during thirty interviews with consenting users of a streaming platform in France.
In our evaluation of the B2B e-commerce site of a global manufacturing company, we conducted a user test with employees and customers. We found statistically significant differences in usability, user experience, and NPS metrics between employees and customers, with the employees being more critical than the customers. We postulate, and present some evidence, that this difference is due to employees implicitly comparing B2B with B2C e-commerce sites and therefore expecting the experience of a B2C site from a B2B site. Such a comparison fosters a bias, which has implications for businesses that host B2B e-commerce sites. We conclude by sketching recommendations for practitioners on how to address such a bias.
The LilyTiny sewable microcontroller was created ten years ago in an effort to make electronic textiles more accessible. At the time, e-textiles were gaining traction as a means to invite more diverse participation in computing, but financial and instructional barriers stood in the way of broader adoption. In addition, there existed a scaffolding gap between projects involving lights, batteries, and thread – and those requiring programming (i.e. leveraging the LilyPad Arduino and/or additional sensors or outputs). In an effort to expand access to electronic textiles, we designed the LilyTiny, an inexpensive, pre-programmed sewable microcontroller which controls assorted LED patterns, and which later became available for purchase through SparkFun. Alongside the LilyTiny, we released a free workshop guide for educators which details five low-cost activities that can be taught without any prior electronics experience.
This paper summarizes our development of the LilyTiny and its companion curriculum – and reflects on whether we met our stated goal of expanding access to electronic textiles in the decade since. We share and discuss some measures of impact, including a survey of derivative products and a multi-year analysis of sales data from the LilyTiny’s sole distributor, SparkFun Electronics.
Due to public concerns over touch-based disease transmission, tangible and embedded interfaces are perhaps the most unsuited technology during a pandemic. Even so, this case study documents the development and evaluation of such a system from early 2020, when people were told to avoid actions that might spread the virus (e.g., touch). Adding to the challenge, the Lookout was installed outside in a city centre for widespread public use. Despite these challenges, a COVID-safe touchable device was deployed and extensively used. This Case Study reports the co-creation of the device, noting COVID restriction adaptations over a nine-month deployment. Our contributions are twofold: the study acts as a case in point of the impact of the unique COVID design context, with lessons for future pandemic scenarios; and, given that we had over 10,000 users at a time when people were cautious about using shared devices or services, we surface some design characteristics that can promote the use of public technology.
For a small group of office workers who share the same workspace and task load, leveraging their social skills and awareness could further increase their mutual awareness of each other’s work-related stress. This paper presents a case study of a one-week deployment of a shared, anonymous heart rate variability (HRV) data visualization system at six workplaces with 24 office workers, who closely collaborated in four-person groups. We collected stress-related physiological data (i.e., heart rate variability) from wearable sensors and anonymously visualized them on a shared display. Although the physiological data collection was noisy due to the practical constraints of the field settings, we found that the participants still increasingly agreed with the system and used the visualization as a reference for their subjective stress assessment. We also present and discuss how groups of office workers individually and socially reflected on their one-week experiences, and then summarize takeaways for designing shared physiological data visualization systems for group stress management in the long term.
Accessibility in a hospital is challenging for people in low-income countries due to a lack of accessible mediums for communicating wayfinding, accessibility, and healthcare information. This results in delays and stress, but can also result in sub-optimal treatment or sometimes a complete lack of treatment for visitors. Sensible physical and digital interventions can greatly ease the experience of visitors and reduce the work-related stress of healthcare providers. We present a case study on wayfinding and service design for a mega ophthalmic care facility that has a daily footfall of 2,500 patients. From our mixed-methods study we identified that: (i) there are very few accessible mediums available to communicate wayfinding, accessibility, and healthcare information; (ii) there is a lack of inclusively designed interventions to accommodate the diversity of visitors; (iii) spatial ambiguity and situational impairment due to crowd density exacerbate the situation; and (iv) there is missing as well as misleading information. We developed a spectrum of solutions spanning the environmental and digital infrastructures available within this context to deliver wayfinding and procedural information. We completed a progressive intervention across digital and physical mediums over a duration of 18 months, which showed the impact of each medium on visitors’ experience. We found that the choice of interface for accessing information depends on the ease of access, and the ease of access depends on visitors’ abilities. Therefore, both environmental and digital mediums were found to be useful for visitors. Based on these empirical findings, we draw recommendations for an inclusive service design that incorporates elements of the environmental, human, and digital infrastructure to support a more positive experience for healthcare visitors.
Exoskeletons are increasingly used for rehabilitation. To support the design of new and engaging ways of interacting with an exoskeleton, we have developed a low-cost toolkit that interfaces a LEGO Technic arm exoskeleton with serious gaming. The toolkit enables easy modifications and options for the integration of a range of sensors. Additionally, it can be applied for use in gaming via a screen display or virtual reality (VR) systems. The toolkit provides real-time data streaming, enabling researchers and clinicians to analyze how the exoskeleton is being used. We present two case studies with the exoskeleton being used as an input and output interface for serious gaming.
Perceiving images and drawing are fundamental parts of human life, and thus access to them should be a universal right. However, there is a large barrier for people with visual impairments in accessing diverse graphics, let alone drawing. There are several tactile graphics techniques, such as swell paper, Braille embossing, and thermoforming, that help to alleviate this gap. However, in developing countries, their high cost and lack of availability make them impractical. In this work, we describe our experience improving access to tactile graphics and drawing in Colombia. We created low-cost, effective, and efficient tactile graphics and drawing techniques that improve on current solutions. These techniques were created from the best practices of two projects adapting pieces from the Colombian art heritage [52, 53] for blind and visually impaired people. They were then applied to a third project: running a virtual tactile drawing club with blind and visually impaired participants in the middle of the COVID-19 pandemic. The lessons learned from these experiences are presented in this paper with the hope that they can help the community democratize access to tactile graphics.
While eParticipation platforms have been developed extensively, there is a lack of insight into how they support societal participation. People's beliefs in their own capabilities are a relevant component of human action, also affecting the motivation to participate. In this paper, we report the results of a study on the possibilities of an eParticipation platform to a) enhance users’ self-efficacy in the context of societal participation, and b) lower the threshold of societal participation. Altogether, 34 young people from various backgrounds participated in Virtual Council field tests to collaborate on the Climate Change Act in Finland. The results suggest that eParticipation platforms can enhance the societal participation self-efficacy of youths who initially have less experience participating in societal issues. Furthermore, the threshold of participation can be lowered after using the eParticipation platform. The paper adds to the growing discussion on connections between youths' use of digital services and societal participation.
Housing costs have risen dramatically in the past decade, surpassing their pre-Recession levels, but the data that housing researchers and policymakers rely on to understand these dynamics remain subject to important limitations in their spatiotemporal granularity or methodological transparency. While these aspects of existing public and private data sources present barriers to understanding the geography of cost and availability in markets across the United States, web data about housing opportunities provide an important alternative—albeit one that demands technical skills that would-be data users may lack. This case study documents the experiences of a collaboration between social and computer scientists focused on using a novel programming-by-demonstration tool for web automation, Helena, to inform research on rental housing policy and inequality in the United States. While this project initially focused on collecting housing ads from a single site within the Seattle area, the capacity to scale our project to new sources and locations, afforded by Helena’s human-centered design, allowed a team of social scientists to progress to scraping data across the country and across multiple platforms. Using this project as a case study, we discuss (a) important programming and research challenges that were encountered and (b) how Helena’s design helped us overcome these barriers to using scraped web data in basic research and policy analysis.
This case study explores the use of an existing data-driven storytelling method, called data comics, within a co-design process, to improve the sense-making of data. Data can often support a co-design process by providing additional insight into the problem being solved. A large number of methods are available to facilitate different aspects of a co-design process, but when it comes to embedding curated data, alternatives are limited. Not everyone has the expertise to understand raw data, and co-design scenarios are typically time-limited, meaning that learning new data skills is not the focus. Therefore, appropriate data curation can help to bring data into a design process while reducing time spent on data manipulation or upskilling. At the same time, it is important that participants are encouraged to think critically about the data and not simply accept a pre-determined viewpoint. We have adapted the data comic technique for curating data for use in a co-design situation by turning the comic panels into a card game so that they can be used interactively and collaboratively. In this paper, we describe and reflect on its use within a workshop with participants who were mostly teenagers. We led them through a process of engaging with curated data to make sense of water pollution in a Finnish lake since the 1970s and raise awareness of environmental pollution. We present our results from the participants’ feedback and the workshop to reflect on how data comics played a role in sense-making.
This study considers how well an autoethnographic diary study serves as a method to explore why families might struggle to apply strong and cohesive cyber security measures within the smart home. Combining two human-computer interaction (HCI) research methods — the relatively unstructured process of autoethnography and the more structured diary study — allowed the first author to reflect on the differences between researchers or experts and everyday users. Having a physical set of structured diary prompts allowed for a period of “thinking as writing”, enabling reflection upon how expert knowledge may or may not translate into useful knowledge when dealing with everyday life. This is particularly beneficial in the context of home cyber security use, where first-person narratives have not formed part of the research corpus to date, despite a consistent recognition that users struggle to apply strong cyber security methods in personal contexts. The framing of the autoethnographic diary study contributes a very simple, but extremely powerful, tool for anyone with more knowledge than the average user of any technology, enabling the expert to reflect upon how they themselves have fared when using, understanding, and discussing the technology in daily life.
Prototyping is notoriously difficult to do with machine learning (ML), but recent advances in large language models may lower the barriers to people prototyping with ML through the use of natural language prompts. This case study reports on the real-world experiences of industry professionals (e.g. designers, program managers, front-end developers) prototyping new ML-powered feature ideas via prompt-based prototyping. Through interviews with eleven practitioners during a three-week sprint and a workshop, we find that prompt-based prototyping reduced barriers to access by substantially broadening who can prototype with ML, sped up the prototyping process, and grounded communication between collaborators. Yet, it also introduced new challenges, such as the need to reverse-engineer prompt designs, source example data, and debug and evaluate prompt effectiveness. Taken together, this case study provides important implications that lay the groundwork for a new future of prototyping with ML.
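The core mechanic of prompt-based prototyping can be sketched as follows: instead of training a model, a designer assembles a few-shot prompt from example input/output pairs and sends it to a large language model. This is a minimal illustration, not the tooling used in the case study; the function, template, and examples are all hypothetical.

```python
# Minimal sketch of prompt-based prototyping: a few-shot prompt is
# composed from example pairs. All names here are illustrative.

def build_prompt(instruction, examples, query):
    """Compose a few-shot prompt string for a large language model."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    "Rewrite the message in a friendly tone.",
    [("Submit the report now.",
      "Could you send the report when you get a chance?")],
    "Fix this bug today.",
)
print(prompt)
```

The resulting string would then be passed to whatever model API the team has access to; iterating on the instruction and examples is the prototyping loop the study describes.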
This paper investigates the use of AI features - intelligent attributes in products - in the workplace with enterprise users who engage with AI-enabled systems through a variety of touchpoints. Oftentimes, product teams developing AI features face a siloed view of AI experiences, and this work aims to present an end-to-end understanding of the range of enterprise users and their experiences when interacting with AI in the workplace. The purpose is to identify the phases in the AI feature journey for enterprise users across their spectrum of experiences, perceptions, and technical acumen. This paper presents this journey of enterprise users working with AI features, analyzes existing challenges and opportunities within this journey, and proposes recommendations to address these areas when planning, designing, and developing AI features for business applications.
As demonstrated by media attention and research, Artificial Intelligence systems are not adequately addressing issues of fairness and bias, and more education on these topics is needed in industry and higher education. Currently, computer science courses that cover AI fairness and bias either focus on statistical analysis or attempt to bring in philosophical perspectives that lack actionable takeaways for students. Based on long-standing pedagogical research demonstrating the importance of using tools and visualizations to reinforce student learning, this case study reports on the impacts of using publicly available visualization tools from HCI practice as a resource for students examining algorithmic fairness concepts. Through qualitative review and observations of four focus groups, we examined six open-source fairness tools that enable students to visualize, quantify, and explore algorithmic biases. The findings of this study provide insights into the benefits, challenges, and opportunities of integrating fairness tools into machine learning education.
The present case study examines the product landscape of current AI-empowered co-creative tools. Specifically, I review literature in both creativity and HCI research and investigate how these tools support different stages in humans’ creative processes and how common challenges in human-AI interaction (HAII) are addressed. I find that these AI-driven tools mostly support the generation and execution of ideas and are less involved in the early stages of co-creation. Moreover, HAII challenges identified in other fields receive little attention in the creative domain. Based on this synthesis, I elaborate on how future tools can leverage the “non-human” quality of AI to achieve innovation through a more human-centered, collaborative journey.
In this work, we present a case study on an Instagram Data Donation (IGDD) project, which is a user study and web-based platform for youth (ages 13-21) to donate and annotate their Instagram data with the goal of improving adolescent online safety. We employed human-centered design principles to create an ecologically valid dataset that will be utilized to provide insights from teens’ private social media interactions and train machine learning models to detect online risks. Our work provides practical insights and implications for Human-Computer Interaction (HCI) researchers that collect and study social media data to address sensitive problems relating to societal good.
We conducted User Experience (UX) Bootcamps with teens (ages 13-17) to teach them important UX design skills and industry-standard tools for co-designing effective online safety interventions or “nudges”. In the process, we asked teens to storyboard about their risky or uncomfortable experiences and design high-fidelity prototypes for online safety interventions that would help mitigate these negative experiences. In this case study, we present our methodology, feedback from teens, challenges, and lessons learned in conducting our UX Bootcamps for adolescent online safety. We recommend that future researchers conducting similar studies with teens encourage group activities, balance teen autonomy with researcher assistance, and ensure teens’ privacy and well-being. Finally, we provide useful guidelines for conducting virtual training and research studies with teens.
The global pandemic and associated lockdown measures created new and severe forms of vulnerability for people living in the context of intimate partner violence and coercive control, including being trapped in their homes with abusers and struggling to access advocates as services moved online. This paper outlines innovations used by victim advocates and service providers in reaching survivors in often impossible circumstances, and summarizes lessons from the design of a safety-planning application named “myPlan”. We propose digital security guidelines for products and services which may be used by survivors cohabiting with abusers or using devices which may be under surveillance. We conclude with reflections on whether technology design leads to empowerment: although we cannot overstate the importance of digital security design which is sensitive to the unique vulnerabilities of marginalized people, true empowerment requires a greater commitment to funding accessible housing, mental health support, and advocacy.
We present a case study in which we developed RHETORiC: an audience conversation tool that promotes civil participation in online news comments. By following a human-centered design process, we created and evaluated a novel tool with an interface that supports people in carefully formulating their opinions and arguments, so they can constructively contribute to online discussions about news. Results of a large-scale field study show that our tool succeeds in increasing the level of civility, argumentation, and proficiency of comments in comparison with those on social media. Moreover, both users and journalists reported high satisfaction with the RHETORiC tool. In our paper we reflect upon lessons learned about the design process as well as about the tool itself, which contributes to the fields of both Human-Computer Interaction and Digital Journalism.
As our relationship with machines becomes ever more intimate, we observe increasing efforts in the quantification of human emotion, which has historically generated unintended consequences. We acknowledge an amplification of this trend through recent technological developments that aim toward human-computer integration, and explore the dark patterns that may arise from integrating emotions with machinic processes through “machine_in_the_middle”. Machine_in_the_middle is an interactive system in which a participant wears an electroencephalographic headset, and their neural activity is analysed to ascertain an approximation of their emotional state. Using electrical muscle stimulation, their face is then animated into an expression that corresponds with the output of the emotion recognition system. Through our work, we contribute three possible dark patterns that might emerge from emotional integration: reductionism of human emotion, disruptions of agency, and parasitic symbiosis. We hope that these insights inspire researchers and practitioners to approach human-computer integration more cautiously.
Fashion, a highly subjective topic, is interpreted differently by each individual. E-commerce platforms, despite these diverse requirements, tend to cater to the average buyer instead of focusing on edge cases like non-binary shoppers. This case study, through participant surveys, shows that visual search on e-commerce platforms like Amazon, Beagle.Vision, and Lykdat is particularly poor for non-binary clothing items. Our comprehensive quantitative analysis shows that these platforms are more robust to binary clothing inputs. Non-binary clothing items are recommended in a haphazard manner, as observed through negative correlation coefficients of the ranking order. The participants also rate the non-binary recommendations lower than the binary ones. Another intriguing observation is that male raters are more inclined to make binary judgements compared to female raters. Thus it is clear that these systems are not inclusive of minority, disadvantaged communities of society, like LGBTQ+ people. We conclude with a call to action for the e-commerce platforms to take cognizance of our results and be more inclusive.
Management of radiology requests in larger clinical contexts is characterized by a complex and distributed workflow. In our partner hospital, representative of many similar clinics, these processes often still rely on exchanging physical papers and forms, making patient or case data challenging to access. This often leads to phone calls with long waiting queues, which are time-inefficient and cause frequent interruptions. We report on a user-centered design approach based on Rapid Contextual Design, with an additional focus group, to optimize and iteratively develop a new workflow. Participants found our prototypes fast and intuitive, the design clean and consistent, relevant information easy to access, and the request process fast and easy. Due to the COVID pandemic, we switched to remote prototype testing, which yielded equally good feedback and increased the participation rate. Finally, we propose best practices for remote prototype testing in hospitals with complex and distributed workflows.
In this case study, we present a mixed-methodology approach to needfinding, integrating in-depth qualitative interview data with machine learning-powered analysis of a larger dataset. The research is motivated by the high failure rates and low involvement of consumers in the food startup industry’s product design process. To help food startups design products in a more consumer-friendly and timely manner, we are developing a novel framework called the Food Personality Framework (FPF). This framework categorizes eaters according to their eating habits, preferences, motivations, and constraints. To better understand the complex relationships between motivations, we chose Grounded Theory as the most pertinent approach and interviewed 14 singles with full autonomy over their food choices. We further leveraged the availability of large online food-related datasets to inform and reinforce our findings from the qualitative work. We analyzed the behaviors of 6,687 users of Food.com, a popular recipe recommendation site, according to the 18 influencers of food choice identified in the qualitative interviews. We found three meaningful clusters of user behavior: Minimalists, Social Butterflies, and Conscious Eaters. The interview data thus enabled grounded classification of the large-scale user behavior and provided a grounded way to interpret the relationships among the top motivators identified in each cluster. The cluster analysis will inform future sampling of interviewees and will provide new insightful questions for the qualitative research. The case study delineates a dynamic interplay of qualitative and quantitative data used to investigate human food choice, a novel domain in the Human-Food Interaction literature.
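A clustering step like the one described can be sketched with a minimal k-means over behavioral feature vectors. The features and data below are hypothetical stand-ins, not the Food.com dataset or the study’s actual pipeline, which used 18 food-choice influencers rather than two features.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means over 2-D feature vectors (illustrative only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute centers; keep the old center if a cluster is empty.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical users described by (recipes_posted, reviews_written).
points = ([(0, 1), (1, 0), (0, 0), (1, 1), (0, 2)]
          + [(10, 11), (11, 10), (10, 10), (9, 10), (10, 9)]
          + [(20, 1), (21, 0), (20, 0), (19, 1), (20, 2)])
centers, clusters = kmeans(points, k=3)
```

In the study itself, clusters such as “Minimalists” would then be interpreted against the interview-derived motivators rather than labeled automatically.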
Codesign brings together users and creators of a product to collaboratively determine how that product will manifest. In this case study of codesign, the Adobe Lightroom team collaborated with 12 photographers in a 4-day workshop to envision an in-app community space for photographic expression and learning. We discuss findings from the codesign workshop, including core values for the envisioned photography community: authenticity, connection and growth, and how we translated these themes to the design of the product. This case study demonstrates a successful codesign effort within the context of user-centered design at a large company.
Most literature has already shown that virtual reality, specifically 360-degree video, can influence affective skills in applications ranging from education to medicine. This thesis aims to go one step further, to real-time 360-degree video communications. Researchers commonly base their experiments on controlled environments and participants with specific characteristics, which reduces the reproducibility and ecological validity of the methodologies. We propose to design a methodology based on questionnaires and the use of biosensors to jointly assess technical and socioemotional features in bidirectional immersive communications. To that end, we first present some research questions and then choose the tools, stimuli, and use case to answer them. The use case selected as a starting point to validate the methodology is tele-education. This thesis is an invitation to leave the comfort zone and question the usage of methodologies that have been thoroughly proven with previous technologies, but which may well be insufficient to address the challenges of new communications.
Content with flashes, bright colors, and repeated patterns can cause seizures and migraines when viewed by people with photosensitivity. Exposure to seizure-inducing content is a serious risk in online environments, as evidenced by documented instances of people with photosensitivity being exposed to seizure-inducing material while playing video games or using social media. My thesis focuses on improving online safety and accessibility for people with photosensitivity by measuring the prevalence of seizure-inducing content online, developing new tools for detecting seizure-inducing content, and constructing a broad framework for protecting against seizure-inducing content at the level of content creators, platforms, and content consumers. Through this work, I hope to help build a better understanding of the current state of photosensitive risk online and contribute new solutions for mitigating seizure-inducing content with minimal adverse effects on the browsing experience for users with photosensitivity.
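One building block of such detection tools is a per-frame luminance check against a flash-rate threshold, loosely following the common guideline that content should not flash more than three times in any one-second period. The sketch below is a simplified, hypothetical proxy; real photosensitivity analysis also accounts for flash area, contrast, and red saturation.

```python
def flash_violation(luminance, fps, delta=0.1):
    """Flag sequences with more than three flashes per second.

    A 'flash' is approximated as a pair of opposing large luminance
    transitions; `delta` is an illustrative threshold, not a standard value.
    """
    transitions = [
        i for i in range(1, len(luminance))
        if abs(luminance[i] - luminance[i - 1]) >= delta
    ]
    # Slide a one-second window over the frames and count flashes in each.
    for start in range(len(luminance)):
        in_window = [t for t in transitions if start <= t < start + fps]
        if len(in_window) // 2 > 3:
            return True
    return False

# A strobing clip (alternating black/white frames) should be flagged;
# a static clip should not.
print(flash_violation([0.0, 1.0] * 15, fps=30))
print(flash_violation([0.5] * 60, fps=30))
```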
Criminologists and sociologists have documented the ways that technologies for community safety can perpetuate structural racism, including policing technologies that encode Black criminality and surveillance technologies that extend the reach of the carceral state. Fortunately, prior work in HCI has also investigated how the design of technologies for community safety can resist structural racism by employing strategies that stand as alternatives to traditional policing activities, including story-telling, community-building, and care practices [10, 11, 25, 27]. This body of work amplifies the practices of community organizations and employs their expertise to build safer communities. I contribute to this body of research and ask how technologies can empower laypeople in the goal of building safer communities. This is an area of great potential: just as community policing technologies extend the reach of traditional policing, new technologies can be designed to extend the reach of community organizations aiming to dismantle oppressive social structures. Toward this goal, I leverage Transformative Justice (TJ), a community-based approach to addressing violence that asks us not only to examine the ways in which our current systems, structures, and norms perpetuate harm, but also to dismantle and replace those systems so that the conditions that enabled the harm are transformed. In this paper, I describe the current state of research, the principles of TJ, my goals for the PhD, and where I would benefit from further support.
How can smart technology support people with mild-moderate dementia to benefit from the positive effects of listening to music in daily life? The quality of life of people with dementia decreases rapidly when they experience difficulties in using everyday products and lose initiative. With a focus on the interaction with music, we study how smart technology can enable human-product-interaction while adapting to loss of initiative. As a result, knowledge on interaction design will be developed to help designers create better products for people with dementia.
I aim to understand how individuals may be motivated non-financially to collaborate in maintaining evolving information asynchronously. I have developed two systems, Drafty and Sketchy, that provide evidence of achieving this goal at differing timescales. In Drafty, unpaid contributors voluntarily update a tabular dataset of Computer Science professor profiles at a multi-year timescale. In Sketchy, university students simultaneously sketch and co-inspire each other during voluntary UI/UX sketching activities at a sub-hour timescale. I seek to integrate lessons from both systems to develop a theoretical framework that uses a feedback cycle to provide users with practical random examples from evolving information to motivate new contributions. I will further develop this theoretical framework by extending Drafty to generate insights that automatically attract visitors, and by creating the Virtual Readability Lab (VRL) to help users improve their reading performance through personalized readability tests and to deepen our understanding of information design and human performance. Both systems will give users the agency to seek randomized insights within a naturalistic setting to motivate continuous contributions to each system’s evolving information.
Specialized software tools have become essential to many forms of content creation, yet poor accessibility of these tools has led to inequitable opportunities for disabled content creators. My doctoral research contributes to our knowledge of accessibility in computer-supported content creation by studying the context of audio production by people with vision impairments. Through interviews, observations, and content analysis, I develop a comprehensive understanding of how accessibility unfolds during the various stages of audio content production for blind audio producers – from learning the tools and practices to developing and exhibiting professional expertise while also pushing back against ableist productivity standards and stereotypes. Critically reflecting upon these insights, I am developing a system to scaffold accessible learning of audio production software in a way that recognizes and leverages the expertise and practices developed by blind audio production trainers. The final phase of my dissertation will evaluate this system with both blind trainers and learners to understand how interactive technologies can promote accessible learning in computer-supported content creation.
Conversations with contemporary witnesses, e.g., Holocaust survivors, are valuable opportunities to learn about the impacts of historical events. They lead to a more thorough conveyance of history by complementing aggregated facts and numbers with emotional personal experiences and detailed accounts. Encounters and interviews with first-hand witnesses to a number of historical periods will soon be a thing of the past due to the respective survivors’ advanced age. Embodied Conversational Agents using recordings of contemporary witnesses are an attempt at preserving interactive and personal testimonies for future generations of students and educators. Since the implementation of such Interactive Digital Testimonies (IDTs) involves irreversible design decisions, an overview of available choices and their respective consequences is needed. While a number of IDTs have been created over the past years, there is little empirical research on available design choices and their resulting effects on users. In my dissertation I will identify the required HCI features of IDTs, as well as investigate and evaluate how these features can be implemented. This scientific basis will allow for informed decisions during the development of future IDTs.
Comfort in work environments is highly influenced by indoor environmental quality: the combined effects of acoustics, thermal conditions, lighting, air quality, and so forth. These physical parameters can impair productivity, threaten health, and compromise well-being. This dissertation work explores the opportunities for interactive artificial intelligence to bring radical changes to human experiences within built environments. The research description outlines four case studies that bring tailored notifications and actions to users in order to improve their comfort in personal and social contexts in office buildings as well as at home.
Entertainment is enjoyable and helps people to feel socially connected with others. However, the control of entertainment applications in group situations is often restricted to one person. This especially limits group decision-making where people sit in close proximity, such as in the living room or inside a car. Restricting users in group decision-making can cause interpersonal conflicts and frustration. To address these issues, my PhD research aims to articulate design principles to support social control. I specifically focus on distributing control across group members to support collaborative group decision-making and enhance social connectedness. To achieve this, I will design entertainment-oriented social control services for the car and the living room. Based on the exploration and designs for the car domain, I will transfer the gained knowledge to the living room, to verify and generalize my findings. This should support drawing an overall conclusion for the design of general social control entertainment services. Thus, my dissertation contributes to the understanding of the design of future, entertainment-oriented, distributed control systems to best support group collaboration.
In an increasingly complex technological landscape, interaction methods need to support users in their own discovery. This is exacerbated by a general aversion to instruction manuals and a trend towards invisible controls. Consequently, a lack of support for awareness or recognition of interaction possibilities leads to inefficient usage or complete disregard of the system. However, despite foundational work imploring that users should be able to “figure out what actions are possible and where and how to perform them”, this problem is rarely considered in the introduction of new interaction methods. My doctoral research is focused on improving the discovery of interaction possibilities. To do so, I am combining a theoretical approach, focused on identifying, defining, and framing relevant considerations, with a practical approach to validate those theoretical considerations and to identify and formulate actionable improvement methods for researchers and designers. In the process I hope to highlight the importance of discoverability to the research community and advocate for its increased consideration.
The concept of gamified interactive models and their novel extensions, such as playification, has been widely adopted to engage users in many fields. In fields such as HCI and AI, however, these approaches have not yet been employed to support users in creating different forms of artworks, such as a musical corpus. While allowing novel forms of interactivity with partially autonomous systems, these techniques could also foster the emergence of artworks not limited to experts. Hence, in this research we introduce the concept of meta-interactivity for compositional interfaces, which extends an individual’s capabilities by translating effort into proficiency. We study how this approach can be effective through the development of novel systems that enable novices to compose coherent musical pieces through the use of imagetic elements in a virtual environment.
The majority of online video content remains inaccessible to blind people due to the lack of audio descriptions (AD). Content creators have traditionally relied on professionals to author audio descriptions, but their service is costly and not readily available. In this research, I introduce four threads of research that I will conduct for my Ph.D. dissertation, aimed at creating methods and tools that are both time- and cost-effective in providing good-quality audio descriptions. They are: (i) the development and evaluation of a mixed-ability collaborative authoring tool; (ii) a formative study to uncover feedback patterns from reviewers; (iii) the evaluation and generation of real-time support for novice authors writing AD; and (iv) the design, development, and evaluation of a system that demonstrates the utility of semi-automatically authored AD. I believe these four research threads will help me uncover a cheaper way to generate audio descriptions, thereby motivating content creators to include this accessibility feature in the video production process and making existing and upcoming videos accessible.
The dark side of technology, such as technology overuse, has drawn growing concern. Prior studies have developed a variety of stand-alone techniques to support technology non-use, but few have attempted to build interventions directly into technologies to regulate usage. My research seeks to address this issue in the context of games by exploring the relationship players intend to have with games, especially their desired balance between play and non-play, and designing built-in interventions in games to help support this relationship. I will conduct qualitative interviews to understand players’ ideal relationships with games, and quantitative experiments to investigate the impact of built-in interventions on such relationships. Through these investigations, I aim to develop a theoretical framework for understanding users’ intended relationships with technologies and provide guidance on intervention designs that contribute to digital well-being.
There is an opportunity to support shy and neurodivergent children in the development of critical executive function (EF) skills through social play. I study StoryCarnival, a system that supports evidence-based sociodramatic play activities through e-book stories, a play-planning app, and a tangible, adult-controlled voice agent. Through a within-subjects study at a preschool and a remote Zoom observation case study of neurodivergent children and their parents, I have identified StoryCarnival’s potential to empower shy children to engage more confidently with their peers, to motivate neurodivergent children through various modalities, to encourage neurodivergent children to engage in symbolic play, and to afford children different types of agency in different settings. Through my future work, I hope to confirm the validity of these findings and examine the potential for StoryCarnival to support inclusive play in mixed-ability groups through a large-scale deployment study and field studies.
Frontline health workers in many countries are responsible for filling gaps in essential primary health infrastructure, as witnessed during the COVID-19 pandemic. Their work increasingly involves the use of purportedly “intelligent” systems or data collection for such systems, to support diagnosis, disease forecasting, and information delivery. My research aims to inform the design of data-driven and automated systems in frontline health work, particularly for women workers in low-level and precarious roles in the Global South. Drawing from literature in the fields of human-computer interaction (HCI), gender and development studies, and health informatics, I will critically examine health workers’ experiences and relationships with “intelligent” systems, and engage in the participatory design of technology that might better serve worker needs while strengthening the frontline health ecology overall.
Our daily visual tasks are becoming continuously more complex, yet our visual system does not adapt as fast as our living environment changes. As manifestations of this imbalance, concentration difficulty and visual overload are widely experienced and intensively studied, and motion sickness occurs in both real-life and virtual environments. In addition, online learning and co-working experiences are far from satisfactory, partly due to a lack of the engagement and instantaneous visual interaction that we used to have when conducting those activities offline. Thus, my goal is to propose methods that help us better adapt to rapidly changing visual contexts. In formative research, I created dynamic peripheral vision blocking glasses, and the experimental results indicate that wearing such glasses helped users suffer fewer motion sickness symptoms while viewing fast-moving surrounding scenery in a VR environment. In follow-up studies, I am creating dynamic saliency-adjusting glasses and gaze-guiding glasses to augment and reshape daily-life visual perception.
While static media makes the distinction between diegetic and non-diegetic audio, the use of these concepts in interactive media, such as games, presents problems of classification. This is especially true for virtual reality where players embody virtual characters through their own bodily movements. This PhD research focuses on the diegetic aspect of audio in virtual reality and the effect that different approaches might have on players’ sense of embodiment and willingness to suspend disbelief.
Socially assistive robots (SARs) receive significant research attention due to their positive impact across many contexts. For example, studies have shown that autistic children are receptive to SARs in therapy, and achieve similar learning outcomes compared to human-delivered therapy. Given the sensitive nature of therapy and the current state of autonomous robots, however, SARs are in practice teleoperated by a therapist who controls their motion and dialogue. This presents an opportunity to produce more effective SAR teleoperation interfaces in the context of therapy for autistic children. In this paper, I outline research for improving teleoperation interfaces of SARs through (1) analyzing current teleoperation usage, (2) interviewing teleoperators about their needs, and (3) implementing and evaluating varied designs for teleoperation interfaces.
Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for understanding how a given decision was arrived at. This is not only problematic from a legal perspective; non-transparent systems are also prone to yield unfair outcomes because their soundness is challenging to assess and calibrate in the first place—which is particularly worrisome for human decision-subjects. Based on this observation and building upon existing work, I aim to make the following three main contributions through my doctoral thesis: (a) understand how (potential) decision-subjects perceive algorithmic decisions (with varying degrees of transparency of the underlying ADS), as compared to similar decisions made by humans; (b) evaluate different tools for transparent decision-making with respect to their effectiveness in enabling people to appropriately assess the quality and fairness of ADS; and (c) develop human-understandable technical artifacts for fair automated decision-making. Over the first half of my PhD program, I have already addressed substantial pieces of (a) and (c); (b) will be the major focus of the second half.
Care work is often moralized as essential to the functioning of society, but has long been undervalued economically and politically, contributing to a global care crisis and prompting increased control and extraction of care work to stabilize capitalist systems. The role of emerging technologies in these politics of care is understudied, particularly in the Global South. My research looks at why and how emerging technologies are being introduced into care work in health in India and Kenya, uncovering the implications this has for how care work is done, care workers’ lives, and organizations deploying the technology. Drawing on cases of both harm and opportunity effected by technology, my work offers an empirical understanding of the role of technology in the politics of care work, and the implications this can have for the future of care work amidst a global care crisis.
Modern approaches to second language (L2) pedagogy emphasise communicative competence as part of language teaching goals. However, it has been observed that some learners are more willing to engage in L2 communication than others, and that this disposition may be affected by several variables that are not bound to their linguistic competence. In the case of migrants, who are required to learn and use a second language to integrate into their new environment, there is a critical need for opportunities and resources to support their Willingness to Communicate (WTC) beyond the classroom. Intelligent personal assistants (IPAs) offer a dynamic oral conversational opportunity in language learning that shows potential for improving language learners’ Willingness to Communicate. As part of this doctoral study, a conversational experience will be designed that brings together conversational and instructional principles, taking into consideration learners’ needs, their environment, and instructional activities that produce output in the English language. The research will benefit educators and the HCI and UX communities focused on the design or implementation of conversational experiences with IPAs in language learning.
Machines, computers, and robots play an increasingly important role in our personal and professional lives. In manufacturing industries, direct physical collaboration between workers and robots is already a reality. To understand how humans perceive this collaboration, investigations of the effects of different robot behaviors and of the characteristics that influence human perception are required. Existing findings on the effect of human attributes and human-like appearance of robots point to several possibilities for improving interactions between humans and robots. However, these studies predominantly focus on humanoid robots. Given the increasing number of industrial robots and the growing economic potential of automation, the objective of my research is to gain an in-depth understanding of human perception of robot movement behaviors. Achieving this requires insights into possible and perceptible robot behaviors in the non-humanoid field, how they are perceived, and how they map to human acceptance. Based on this understanding, interactions can be improved by adapting relevant behavior to the individual’s preferences. In view of new technologies and possibilities, it might even be possible to use real-time data to classify humans and let robots adapt their behavior automatically.
How will we stay connected amidst a climate crisis? Conditions associated with climate change, such as sea level rise and increasing extreme weather events, can destabilize already vulnerable network and digital infrastructures. While existing infrastructures are in dire need of maintenance, additional infrastructures are simultaneously being built to address imbalances in network access and distribution. My dissertation project attends to these intersecting precarities as a way to reconsider how digital infrastructures can be reworked to address overlapping questions of environmental and digital inequities. My research is situated within marginalized coastal communities in south Louisiana, where legacies of petrochemical extraction have led to deep socioeconomic and ecological disparities, while the increased intensity of storms and floods has begun to impede and damage an already sparse network infrastructure. In my project, I use archival, ethnographic, and design research methods to examine longer histories of environmental degradation, investigate current practices of maintaining and developing network infrastructures, and develop approaches for researchers in HCI and related computing fields to re-envision just and equitable network infrastructures.
The intertwined and sometimes contradictory work of managing complex health needs (e.g., discordant, enigmatic, and/or rare conditions) creates many challenges for patients, caregivers, and healthcare providers. While researchers have created interventions such as technologies and services to address particular health needs, interventions must be designed to better account for gaps in technologies and interdependencies across health needs. In this workshop we will adopt an ecosystems perspective to better understand the nature of complex needs and how to support the management of those needs through holistic and multi-faceted support. Using a hands-on design sprint technique, participants will (1) map out different complex care ecosystems, (2) generate ideas for technologies, services, and other multi-faceted interventions to address gaps in those ecosystems, and (3) choose the most promising ideas to further develop and refine. We will close by reflecting together on what we have created, our approaches to design, and the theories and concepts that shaped our approaches. Through this process, we will collectively generate an agenda for research and design to better support the management of complex health needs.
HCI researchers increasingly conduct emotionally demanding research in a variety of different contexts. Though scholarship has begun to address the experiences of HCI researchers conducting this work, there is a need to develop guidelines and best practices for researcher wellbeing. In this one-day CHI workshop, we will bring together a group of HCI researchers across sectors and career levels who conduct emotionally demanding research to discuss their experiences, self-care practices, and strategies for research. Based on these discussions, we will work with workshop attendees to develop best practices and guidelines for researcher wellbeing in the context of emotionally demanding HCI research; launch a repository of community-sourced resources for researcher wellbeing; document the experiences of HCI researchers conducting emotionally demanding research; and establish a community of HCI researchers conducting this type of work.
Automation has been permeating our everyday lives in various facets. Given both the ubiquity and, in many cases, the indispensability of automated systems, creating engaging experiences with them becomes increasingly relevant. This workshop provides a platform for researchers and practitioners working on (semi-)automated systems and their user experience and allows for cross-discipline networking and knowledge transfer. In a keynote talk, paper presentations, discussions, and hands-on sessions, the participants will explore and discuss user engagement with automation for operation, appropriation, and change. The results of the workshop will be a set of research ideas and drafts of joint research projects to drive further automation experience research in a collaborative interdisciplinary manner.
Computational approaches for user interfaces have been used in adapting interfaces for different modalities, usage scenarios and device form factors, understanding screen semantics for accessibility, task-automation, information extraction, and in assisting interface design. Recent advances in machine learning (ML) have drawn considerable attention across HCI and related fields such as computer vision and natural language processing, leading to new ML-based user interface approaches. Similarly, significant progress has been made with more traditional optimization- and planning-based approaches to accommodate the need for adapting UIs for screens with different sizes, orientations and aspect ratios, and in emerging domains such as VR/AR and 3D interfaces. The proposed workshop seeks to bring together researchers interested in all kinds of computational approaches for user interfaces across different sectors as a community, including those who develop algorithms and models and those who build applications, to discuss common issues including the need for resources, opportunities for new applications, design implications for human-AI interaction in this domain, and practical challenges such as user privacy.
EmpathiCH aims to bring together and blend different expertise to develop a new research agenda in the context of “Empathy-Centric Design at Scale”. The main research question is how new technologies can contribute to the elicitation of empathy across and within multiple stakeholders at scale, and how empathy can be used to design solutions to societal problems that are not only effective but also balanced, inclusive, and aware of their effect on society. Through presentations, participatory sessions, and a living experiment, in which data about people’s interactions is collected throughout the event, we aim to make this workshop the ideal venue to foster collaboration, build networks, and shape the future direction of “Empathy-Centric Design at Scale”.
The increasing availability of personal data has opened new possibilities for technologies that support individuals’ reflection, increase their self-awareness, and inform their future choices. Personal informatics, chiefly concerned with investigating individuals’ engagement with personal data, has become an area of active research within Human-Computer Interaction. However, more recent research has argued that personal informatics solutions often place high demands on individuals and require knowledge, skills, and time for engaging with personal data. New advances in Machine Learning (ML) and Artificial Intelligence (AI) can help to reduce the cognitive burden of personal informatics and identify meaningful trends using analytical engines. Furthermore, introducing ML and AI can enable systems that provide more direct support for action, for example through predictions and recommendations. However, there are many open questions as to the design of personal informatics technologies that incorporate ML and AI. In this workshop, we will bring together an interdisciplinary group of researchers in personal informatics, ML, and AI to outline the design space for intelligent personal informatics solutions and develop an agenda for future research in this area.
We are now entering the new space age! In 2021, for the first time in history, a civilian crew flew to space, demonstrating that the next frontier of human space exploration will not be restricted to highly trained astronauts but will be open to the general public. However, keeping humans healthy, happy, and productive in space is one of the most challenging aspects of current space programs. Thus, there is an emerging opportunity for researchers in HCI to design and research new types of interactive systems and computer interfaces that can support humans living and working in space and elsewhere in the solar system.
Last year, the SpaceCHI workshop (https://spacechi.media.mit.edu/) at CHI 2021 welcomed over 130 participants from 20 countries to present new ideas and discuss future possibilities for human-computer interaction in space exploration. SpaceCHI 1.0 brought together, for the first time, cross-disciplinary researchers from HCI, aerospace engineering, robotics, biological science, design, art, and architecture to envision the future of human space exploration, leading the workshop participants and organizers to form a new global community focused on HCI research for space applications. Building on the success of the previous SpaceCHI, we invite researchers in HCI to contribute to the great endeavor of space exploration by participating in our workshop.
Bodies of water can be a hostile environment for both humans and technology, yet they are increasingly becoming sources, sites and media of interaction across a range of academic and practical disciplines. Despite the increasing number of interactive systems that can be used in-, on-, and underwater, there does not seem to be a coherent approach or understanding of how HCI can or should engage with water. This workshop will explicitly address the challenges of designing interactive aquatic systems with the aim of articulating the grand challenges faced by WaterHCI. We will first map user experiences around water based on participants’ personal experiences with water and interactive technology. Building on those experiences, we then discuss specific challenges when designing interactive aquatic experiences. This includes considerations such as safety, accessibility, the environment and well-being. In doing so, participants will help shape future work in WaterHCI.
Haptic devices have been around for decades, providing critical information, usability benefits and improved experiences across tasks from surgical operations to playful applications in Mixed Reality. We see more and more software and hardware solutions emerging that provide design tools, design approaches and platforms, both in academia and industry. However, we believe that designers often re-invent the wheel, and must spend an inordinate amount of time doing their work, which is not sustainable for long-term research. This workshop aims at gathering people from academia and industry to provide a common ground to discuss various insights on and visions of the field. We aim to bring together the various strands of haptics—devices, software, and design—to assess the current state-of-the-art and propose an agenda towards haptics as a united design discipline. We expect the outcome of the workshop to be a comprehensive overview of existing tools and approaches, along with recommendations on how to move the field forward, together.
Building on the prior workshops on conversational user interfaces (CUIs) [2, 40], we tackle the topic of the ethics of CUIs at CHI 2022. Though commercial CUI development continues to advance rapidly, scholarly dialogue on the ethics of CUIs remains limited. The CUI community has been implicitly concerned with ethics, yet ethics has not yet been made central to the growing body of work. Since ethics is a far-reaching topic, perspectives from the philosophy, design, and engineering domains are integral to our CUI research community. For instance, philosophical traditions, e.g., deontology or virtue ethics, can guide ethical concepts that are relevant for CUIs, e.g., autonomy or trust. The practice of design, through approaches like value sensitive design, can inform how CUIs should be developed. Ethics also comes into play in technical contributions, e.g., privacy-preserving data sharing between conversational systems. By considering such multidisciplinary angles, we arrive at a special topic of interest that ties together philosophy, design, and engineering: conversational disclosure, e.g., sharing personal information; transparency, e.g., how to convey relevant information transparently in a conversational manner; and the vulnerability of diverse user groups, which should be taken into consideration.
The fields of programmable matter, actuated materials, and Soft Robotics are becoming increasingly more relevant for the design of novel applications, interfaces, and user experiences in the domain of Human-Computer Interaction (HCI). These research fields often use soft, flexible materials with elastic actuation mechanisms to build systems that are more adaptable, compliant, and suitable for a very broad range of environments. However, at the intersection between HCI and the aforementioned domains, there are numerous challenges related to fabrication methods, development tools, resource availability, nomenclature, design for inclusion, etc. This workshop aims to explore how to make Soft Robotics more accessible to both researchers and nonresearchers alike. We will (1) investigate and identify the various difficulties people face when developing HCI applications that require the transfer of knowledge from those other domains, and (2) discuss possible solutions and visions on how to overcome those difficulties.
This workshop proposes a space for Latin American academics and activists engaging with data to think critically about the legitimacy and power dynamics of knowledge production. Given that most research on data, as well as its areas of application, has focused on and been informed by the Global North, this workshop sets the spotlight on Latin America and places activist and academic knowledge on equal standing. The organisers are an interdisciplinary group of Latin American scholars and activists, women based in the North and the South, engaging with data across the borderlands of disciplines, practices, migrations, and languages. By hosting the workshop in Spanish (with English interpretation), the organisers aim to create a space where Latin American voices are heard and appreciated, without requiring English proficiency from speakers and participants. We invite the CHI community to cross the borders and join a different conversation, including panels and interactive sessions that will inspire — and challenge — current thinking around data and data practices.
Spanish Version: Este taller propone un espacio para que académicxs y activistas latinoamericanxs que trabajan con datos piensen críticamente en la legitimidad y las dinámicas de poder en la producción de conocimiento. Dado que la mayor parte de las investigaciones sobre datos, así como su área de aplicación, se ha centrado en el Norte Global y ha sido moldeada por éste, este taller pone el foco en América Latina y sitúa el conocimiento activista y académico en igualdad de condiciones. Las organizadoras son un grupo interdisciplinario de académicas y activistas latinoamericanas, con sede en el Norte y en el Sur, que trabajan con datos a través de las fronteras disciplinarias, las prácticas, las migraciones y los idiomas. Al organizar el taller en español (con interpretación en inglés), las organizadoras se proponen crear un espacio en el que se escuchen y aprecien las voces latinoamericanas, sin exigir a ponentes y participantes que dominen el inglés. Invitamos a la comunidad de CHI a cruzar las fronteras y unirse a una conversación diferente, incluyendo paneles y sesiones interactivas que inspirarán — y desafiarán — ideas en torno a los datos y las prácticas de datos.
Virtual reality provides great opportunities to simulate various environments and situations as reproducible and controllable training environments. Training is an inherently collaborative effort, with trainees and trainers working together to achieve specific goals. Recently, we have seen considerable effort to use virtual training environments (VTEs) in many demanding training contexts, e.g., police training, medical first responder training, and firefighter training. In such contexts, trainers and trainees must take on various roles as supervisors, adaptors, role players, and observers, making collaboration complex but essential for training success. These social and multi-user aspects of collaborative VTEs have received little investigation so far. We therefore propose this workshop to discuss the potential and perspectives of VTEs for challenging training settings. In a one-day online workshop, researchers and practitioners will jointly develop a research agenda on how currently underrepresented aspects of social and collaborative work can be integrated into VR-supported training. The workshop will focus on two themes: (1) Multi-sensory experience: novel collaborative interfaces for VTEs (e.g., joint use of tangible devices, strategies for preventing simulator-induced negative effects); (2) Multi-user interaction: collaboration in VTEs between trainers (two trainers run a scenario jointly), between trainers and trainees (the trainer controls the scenario for a trainee), and between trainees (e.g., two trainees solve an exercise together).
The interactive augmentation of musical instruments to foster self-expressiveness and learning has a rich history. Over the past decades, the incorporation of interactive technologies into musical instruments has emerged as a new research field requiring strong collaboration between different disciplines. The workshop “Intelligent Music Interfaces” consequently covers a wide range of musical research subjects and directions, including (a) current challenges in musical learning, (b) prototyping for improvements, (c) new means of musical expression, and (d) evaluation of the solutions.
The last several years have seen strong growth in telerobotic technologies with promising implications for many areas of learning. HCI has contributed to these discussions, mainly with studies on user experiences and user interfaces of telepresence robots. However, only a few telerobot studies have addressed everyday use in real-world learning environments. In the post-COVID-19 world, sociotechnical uncertainties and unforeseen challenges to learning in hybrid learning environments constitute a unique frontier where robotic and immersive technologies can mediate learning experiences. The aim of this workshop is to set the stage for a new wave of HCI research that accounts for, and begins to develop, new insights, concepts, and methods for the use of immersive and telerobotic technologies in real-world learning environments. Participants are invited to collaboratively define an HCI research agenda focused on robot-mediated learning in the wild, which will require examining end-user engagements and questioning underlying concepts regarding telerobots for learning.
The Asian CHI Symposium is an annual event organized by researchers and practitioners in the Asia Pacific since the authors first co-initiated South East Asia Computer-Human Interaction (SEACHI) during CHI 2015. The symposium aims to bring together early-career and senior HCI academics and UX practitioners from industry in the Asia Pacific, to enable cross-exchange of information and transfer of knowledge across the multidisciplinary and multi-socio-economic aspects of HCI research, and to foster social ties and collaboration in the field of HCI. As the Asian CHI community has grown more diverse than when we started in 2015, there is a need to address issues of equity, justice, access, and transparency more strategically, especially given the historical link to colonialism in Asia. Beyond showcasing the latest Asian-inspired HCI work and work incorporating Asian sociocultural factors in design, implementation, evaluation, and improvement, the Asian CHI Symposium 2022 is a sandbox for a decolonizing yet academically rigorous discourse platform where HCI academics and UX practitioners can present their latest research findings and solutions, reflecting the expansion of HCI theory and applications toward culturally inclusive design for diverse audiences in Asia.
In the wake of the hype around big data, artificial intelligence, and “data-drivenness,” much attention has been paid to developing novel tools to capitalize upon the deluge of data being recorded and gathered automatically through IT systems. While much of this literature tends to overlook the data itself—sometimes even characterizing it as “data exhaust” that is readily available to be fed into algorithms, which will unlock the insights held within it—a growing body of literature has recently been directed at the (often intensive and skillful) work that goes into creating, collecting, managing, curating, analyzing, interpreting, and communicating data. These investigations detail the practices and processes involved in making data useful and meaningful so that aims of becoming ‘data-driven’ or ‘data-informed’ can become real. In some cases, increased demands for data work have led to the formation of new occupations, whereas in others data work has been added to the task portfolios of existing occupations and professions, occasionally affecting their core identity. Thus, the evolving forms of data work require individual and organizational resources, new and re-tooled practices and tools, the development of new competences and skills, and the creation of new functions and roles. While the experience of data work differs across the global North and the global South, such factors of data production remain paramount even as they exist largely for the benefit of the data-driven system [21, 32]. This one-day workshop will investigate existing and emerging tasks of data work. Further, participants will seek to understand data work as it impacts: individual data workers; occupations tasked with data work (existing and emerging); organizations (e.g., changing their skill-mix and infrastructuring to support data work); and teaching institutions (grappling with the incorporation of data work into educational programs).
Participants are required to submit a position paper or a case study drawn from their research, to be reviewed and accepted by the organizing committee (submissions should be up to four pages in length). Upon acceptance, participants will read each other's papers and prepare to present briefly and respond to comments from two discussants and other participants. Subsequently, the workshop will focus on developing a set of core processes and tasks as well as an outline of a research agenda for a CHI perspective on data work in the coming years.
Crowdworkers silently enable much of today’s AI-based products, with several online platforms offering a myriad of data labelling and content moderation tasks through convenient labour marketplaces. The HCI community has been increasingly interested in investigating the worker-centric issues inherent in the current model and seeking potential improvements that could be implemented in the future. This workshop explores how a reimagined perspective on crowdsourcing platforms could provide a more equitable, fair, and rewarding experience, not only for workers but also for platforms, which could benefit from, e.g., better processes for worker onboarding, skills development, and growth. We invite visionary takes in various formats on this topic to spread awareness of worker-centric research and developments in the CHI community. Through interactive ideation work in the workshop, we will articulate a roadmap of future directions for research centred on crowdsourcing platforms. Finally, as a specific interest area, the workshop seeks to study crowdwork in the context of the Global South, which has arisen in recent years as an important but critically understudied crowdsourcing market.
In this one-day workshop we are going to make access. We aim to counteract the phenomenon that access to making (e.g., in makerspaces, fab labs, etc.) is not equally distributed, with certain groups of people, such as women*, being underrepresented. After brief introductions from participants and a set of three impulse keynotes, we will envision and “make” interventions together, such as speculative or provocative objects and actions. The workshop takes a constructive stance: the goal is not to rest on empirical and theoretical findings or individual experiences, but to translate them into viable interventions. These serve as exemplars of findings with the clear goal of being deployed soon after.
Even though issues such as climate change, pollution, and declining biodiversity impact us all, people with historically disenfranchised and socio-politically marginalized (HDSM) identities often bear the harsher brunt of ecological crises and suffer disproportionately. There is a need for listening to the voices of people with intersecting HDSM identities in relation to feminist engagements with ecological issues as applicable to HCI and IxD research and practice. Building upon and braiding together two thriving HCI discourses on feminism and environmental sustainability, we invite submissions from researchers, designers, educators, and activists interested in the intersections of feminist and ecological issues with a priority towards the well-being of people with HDSM identities. Converging feminist concerns on power, voice, and public discourse through this online workshop distributed across three time-zones, we hope to provide a forum for contemporary feminist voices as agents of change while engaging with ecological issues through an intersectional feminist orientation.
Immersive Analytics is now a fully emerged research topic that spans several research communities, including Human-Computer Interaction, Data Visualisation, Virtual Reality, and Augmented Reality. Immersive Analytics research has identified and validated the benefits of using embodied, 3D spatial immersive environments for visualisation, and has shown the value of effective use of space and 3D interaction for exploring complex data. To date, most studies in Immersive Analytics have focused on exploring novel visualisation techniques in 3D embodied immersive environments. Thus far, the field lacks fundamental studies that clearly compare immersive versus non-immersive platforms for analytics purposes and firmly delineate the benefits of immersive environments for analytic tasks. We feel that it is time to establish an agenda to assess the benefits and potential of immersive technologies, spatial interfaces, and embodied interaction to support sensemaking, data understanding, and collaborative analytics. This workshop aims to put this agenda together by gathering international experts from Immersive Analytics and related fields to define which studies need to be conducted to assess the effect of embodied interaction on cognition in data analytics.
This workshop engages the phenomenon of synesthesia to explore how translating between sensory modalities might uncover new ways to experience and represent data: What does it mean to taste a timeline, hear a network, or touch a categorical scale? We employ the method of sketching, which traditionally favors visual representations, and consider what it means to ‘sketch’ in other modalities like sound, taste, and touch. Through a series of rapid, playful activities ideating data representations across sensory modalities, we will explore how the affordances of sketching—like intentional ambiguity—might help designers creatively map data to experience. We will also discuss challenges for sensory sketching in remote, collaborative environments and brainstorm suggestions for digital tools. The outcome of this workshop will be a series of exercises and examples that serve as a toolkit for designers, researchers, and data practitioners to incorporate sketching across the senses into their work.
Emotion has been studied in HCI for two decades, with specific traditions interested in sensing, expressing, transmitting, modelling, experiencing, visualizing, understanding, constructing, regulating, manipulating or adapting to emotion in human-human and human-computer interactions. This CHI 2022 workshop on the Future of Emotion in Human-Computer Interaction brings together interested researchers to take stock of research on emotion in HCI to date and to explore possible futures. Through group discussion and collaborative speculation we will address questions such as: What are the relationships between digital technology and human emotion? What roles does emotion play in HCI research? How should HCI researchers conceptualize emotion? When should HCI researchers use interdisciplinary theories of emotion or create new theory? Can specific emotions be designed for, and where is this knowledge likely to be applied? What are the implications of emotion research for design, ethics and wellbeing? What is the future of emotion in human-computer interaction?
In the last decade, HCI researchers have designed and engineered several systems to lower the entry barrier for beginners and support novices in learning hands-on creative maker skills. These skills range from building electronics to fabricating physical artifacts. While much of the design and engineering of current learning systems is driven by advances in technology, we can reimagine these systems by reorienting the design goals around constructivist and sociocultural theories of learning to support learning progression, engagement across artistic disciplines, and inclusivity and accessibility. This one-day workshop aims to bring together HCI researchers in systems engineering and the learning sciences, challenge them to reimagine the design of future systems for learning creative maker skills, form connections across disciplines, and promote collaborative research on such systems.
Technology is changing, which means the design processes supporting it must also change. Digital tools for user experience and interaction design are vital in enabling designers to create appropriate, enjoyable and functional human-computer experiences, and so will necessarily evolve alongside our technological development. This workshop aims to support the futuring of user experience and user interfaces, and will engage with stakeholders, practicing designers, researchers, students and educators to better understand the needs for next-generation design tools. We will envisage new forms of design tools that encourage best practice (for example, linking representations, analysis tools, just-in-time evidence, physicality, and experience) and, crucially, put context at the centre of design.
HCI scholarship has not yet fully engaged with faith, religion, and spirituality, even though billions of people around the world are associated with such traditions and belief systems. While a few papers and workshops at CHI have focused on particular religions, broader discussions around religion, interfaith relationships, and computing have been absent from mainstream HCI design concerns. In this workshop, we propose to bring together HCI scholars and practitioners, whose work is associated with various faiths, religions, and spiritual practices to start this important conversation, with a focus on three questions: (a) does secularization in computing marginalize faith-based values? and if so, how? (b) how can HCI design address the unique needs and values of faith-based communities? and (c) how can scholarship and practice in HCI benefit from the integration of faith, religion, and spirituality? We hope to form an HCI community of scholars and practitioners focused on the intersection of faith/spirituality/religion and computing.
Social justice in Human-Computer Interaction has emerged as a domain of practice and research over the past decade. Work has included research efforts to meet the needs of under-served populations, method blueprints for the inclusion of marginalised identities, and calls for greater consideration of how positive impact is defined both in and beyond research engagements. While the number of justice-orientated works may have increased, new social forces question what is meant by the term justice in social justice initiatives, asking who is included in how justice is defined, what its goals are, and how we might measure it. We offer this workshop as an opportunity to: (a) build conceptual and visual ‘mosaics’ of social justice works in HCI to map out the existing landscape; (b) build a supportive community of HCI researchers, practitioners, activists and designers who work with matters of in/justice to share vocabulary, approaches and expertise with like-minded individuals; (c) facilitate critical conversations around meaningful justice-orientated action and practice, and how they might relate to wider justice frameworks.
We invite you to celebrate the fifth inbodied interaction workshop at CHI by exploring the Inbodied Interaction Framework to align your designs with the internal complexity of the human body's interconnected physical and biological networks, with the goal to "#makeNormalBetter" for all at scale. This year we are introducing the new Inbodied Interaction Design Framework with a set of guiding questions and provocations to lean on and to re-invent working practices. In this virtual workshop, we welcome both participants with no prior experience of inbodied interaction and familiar practitioners who want to gain an alternative perspective on technology design that takes the body as a starting point.
Data science has become an important topic for the CHI conference and community, as shown by many papers and a series of workshops. Previous workshops have taken a critical view of data science from an HCI perspective, working toward a more human-centered treatment of the work of data science and the people who perform the many activities of data science. However, those approaches have not thoroughly examined their own grounds of criticism. In this workshop, we deepen that critical view by turning a reflective lens on the HCI work itself that addresses data science. We invite new perspectives from the diverse research and practice traditions in the broader CHI community, and we hope to co-create a new research agenda that addresses both data science and human-centered approaches to data science.
Our workshop aims to bring together researchers and practitioners across disciplines in HCI who share an interest in promoting well-being through tangible interaction. The workshop forms an impassioned response to the worldwide push towards more digital and remote interaction in nearly all domains of our lives in the context of the COVID-19 pandemic. One question we raise is: to what extent will measures like remote interaction remain in place post-pandemic, and how might these changes influence future agendas for the design of interactive products and services to support living well? We aim to ensure that the workshop serves as a space for diverse participants to share ideas and engage in cooperative discussions through hands-on activities, resulting in the co-creation of a Manifesto to demonstrate the importance of embodied and sensory interaction for supporting well-being in a post-pandemic context. All the workshop materials will be published on the workshop website and disseminated through ongoing collaboration.
Southeast Asia, a region of eleven countries, is proud of its way of life and rich culture, and is generally happy to maintain its long, comforting traditions. At the same time, the region's diverse population and strategic location have made it a center of attention for global players seeking to invest there. With the emergence of Industry 4.0, digital transformation has become mandatory for organizations and nations in Southeast Asia to consider. Through the SEACHI (Southeast Asian CHI) Symposium, we aim to grow awareness of HCI and UX, improve the design and development of technology for living, and bring together Southeast Asian academic researchers and industry practitioners. As HCI matures in Asia, we have identified remarkable growth and needs in the Southeast Asian HCI community. In this symposium, we would like to answer several questions: To what extent have HCI and UX, as taught and practiced in Southeast Asia, met the needs of the region's digital transformation initiatives? Has there been significant and proper contextualization of the HCI and UX fields? Are HCI and UX still perceived as a Western mindset rather than a localized approach that can make a difference in projects? Have HCI and UX become a standard norm in digital product and design processes, and how have HCI and UX players in Southeast Asia worked together to create a unique ecosystem? Under the overarching conference theme “Equity, Justice and Access Commitments,” the symposium aims to bring about equal and fair access for anyone to exchange information and transfer knowledge across the multidisciplinary and multi-socioeconomic aspects of HCI and UX research and practice in Southeast Asia.
As concerns around climate change escalate, the need to address the environmental impacts of computing becomes more dire. While urgent action is needed, there is also an opportunity to rectify longstanding inequities and injustices in the relationship between computing and the environment. The aim of this one-day hybrid workshop is to gather researchers and practitioners to develop a material ethics of computing. We frame material ethics as a shared understanding of the relations between material and labor that shape digital infrastructures. Through presentations, discussions, and facilitated activities, we aim to build a research community to understand how computing facilitates sites of environmental damage and degradation, as well as spaces for justice, change, and hope.
Science fiction imaginaries and Silicon Valley innovators have long envisioned a workerless future. However, this industrial ambition has often outpaced technological realities in robotics and artificial intelligence, leading to a reassertion of human skills to cover untenable gaps in autonomous systems. This one-day workshop will invite discussion on this recent retrograde trend toward precarious (and often concealed) human labour across such domains as agriculture, transportation, and caregiving through paper presentations and design activities. Throughout, we will ask how this phenomenon speaks to engineering and design challenges and, subsequently, encourage participants to ideate new cybernetic arrangements that centre the agency and well-being of essential workers.
We are all researchers, practitioners, and educators – but many of us are also artists, makers, curators. Our arts practice is part of what makes up our sense of self, but it also influences our interests and directions in digital and technological enquiry. There exist spaces where the traditional lives alongside the computational, or where the two are blended, no less valid in purpose or value. We seek to investigate this liminal environment and explore the current state of art in HCI, computer science, and other related fields, shifting boundaries as to what “art” is in these spaces. By bringing together like-minded and creative individuals, this workshop aims to both inspire and legitimise our diverse practices, present viewpoints, create meaningful outputs, host discussions, and work toward the future of this plurality.
Design is conventionally considered to be about making and creating new things. But what about the converse of that process – unmaking that which already exists? Researchers and designers have recently started to explore the concept of “unmaking” to actively think about important design issues like reuse, repair, and unintended socio-ecological impacts. They have also observed the importance of unmaking as a ubiquitous process in the world, and its relation to making in an ongoing dialectic that continually recreates our material and technological realms. Despite the increasing attention to unmaking, it remains largely under-investigated and under-theorized in HCI. The objectives of this workshop are therefore to (a) bring together a community of researchers and practitioners who are interested in exploring or showcasing the affordances of unmaking, (b) articulate the material and epistemological scopes of unmaking within HCI, and (c) reflect on frameworks, research approaches, and technical infrastructure for unmaking in HCI that can support its wider application in the field.
It is generally acknowledged that the virtual event platforms of today do not perform satisfactorily at what is arguably their most important function: providing attendees with a sense of social presence. Social presence is the “sense of being with another” and can include ways of knowing who is in the virtual space, how others are reacting to what is happening in the space, an awareness of others’ activities and availability, and an idea of how to connect with them. Issues around presence and awareness have been perennial topics in the CHI and CSCW communities for decades. Nevertheless, the time feels ripe for a new effort with a special focus on larger-scale virtual events, given the accelerated pace of change in the socio-technological landscape and the tremendous potential impact that new insights could now have. The goal of this workshop is to bring together researchers and developers from academia and industry with a shared interest in improving the experience of virtual events to exchange insights and hopefully energize an ongoing community effort in this area.
Misinformation and disinformation are proliferating in societies, compromising our ability to make informed decisions. A myriad of tools, technologies, and interventions are currently designed to aid users in making informed decisions when they encounter content of dubious credibility. However, with the advancement of technology, new forms of fake media are emerging, such as deepfakes and cheapfakes containing synthetic images, videos, and audio. Combating these new forms of fake media requires tools and interventions that understand this new context. Designers and developers of such tools therefore need to examine user experience and perspectives in these new contexts and understand multidisciplinary viewpoints before designing any tools. This workshop calls for multidisciplinary participation to interrogate the current landscape of misinformation tools and to work towards understanding the nuances of user experience with these new fake media and perceptions of tools that help users distinguish credible from inaccurate content. This workshop intends to solicit a human-centric design framework that can act as a UX design guideline when designing and developing tools for combating mis/disinformation.
Extended Reality (XR, encompassing AR/VR/MR) technology is becoming increasingly affordable and capable, and ever more interwoven with everyday life. HCI research has focused largely on innovation around XR technology: exploring new use cases and interaction techniques, understanding how this technology is used and appropriated, and so on. However, equally important is the investigation of the risks posed by such advances, specifically in contributing to new vulnerabilities and attack vectors with regard to security, safety, and privacy that are unique to XR. For example, perceptual manipulations in VR, such as redirected walking or haptic retargeting, have been developed to enhance interaction, yet subversive use of such techniques has been demonstrated to unlock new harms, such as redirecting the VR user into a collision. This workshop will convene researchers focused on HCI, XR, safety, security, and privacy, with the intention of exploring the safety, privacy, and security challenges of XR technology. With an HCI lens, workshop participants will engage in critical assessment of emerging XR technologies and develop an XR research agenda that integrates research on interaction technologies and techniques with safety, security, and privacy research.
Explainability of AI systems is crucial to hold them accountable because they are increasingly becoming consequential in our lives by powering high-stakes decisions in domains like healthcare and law. When it comes to Explainable AI (XAI), understanding who interacts with the black-box of AI is just as important as “opening” it, if not more. Yet the discourse of XAI has been predominantly centered around the black-box, suffering from deficiencies in meeting user needs and exacerbating issues of algorithmic opacity. To address these issues, researchers have called for human-centered approaches to XAI. In this second CHI workshop on Human-centered XAI (HCXAI), we build on the success of the first installment from CHI 2021 to expand the conversation around XAI. We chart the domain and shape the HCXAI discourse with reflective discussions from diverse stakeholders. The goal of the second installment is to go beyond the black box and examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we put an emphasis on “operationalizing,” aiming to produce actionable frameworks, transferable evaluation methods, and concrete design guidelines, and to articulate a coordinated research agenda for XAI.
This workshop applies human-centered themes to a new and powerful technology, generative artificial intelligence (AI). Unlike AI systems that produce decisions or descriptions, generative AI systems can produce new and creative content that can include images, texts, music, video, and other forms of design. The results are often similar to results produced by humans. However, it is not yet clear how humans make sense of generative AI algorithms or their outcomes. It is also not yet clear how humans can control and, more generally, interact with these powerful capabilities. Finally, it is not clear what kinds of collaboration patterns will emerge when creative humans and creative technologies work together.
It is time to convene the interdisciplinary research domain of generative AI and HCI. Participation in this invitational workshop is open to seasoned scholars and early career researchers. We solicit descriptions of completed projects, works-in-progress, and provocations. Together we will develop theories and practices in this intriguing new domain.
Designing wearables is a complex task that involves many layers, such as wearability, interactivity, functionality, and social and cultural considerations. For decades now, prototyping toolkits have been proposed to aid diverse audiences in exploring the design of smart accessories and garments. However, the task of designing toolkits for wearables has not received comprehensive discussion and systematic reflection. In this workshop, we look into challenges, opportunities, and lessons learned in using, developing, and evaluating wearable toolkits, focusing on their target groups, purposes, and effects on the final designs. By bringing together researchers and practitioners who are experienced with the design, use, and assessment of wearable toolkits, we see a particular opportunity to provide a broader perspective on defining the future of wearable toolkit design.
Research in surgical intervention and technology development is increasingly interdisciplinary. Despite the great potential of working in this way, recent research suggests that interdisciplinary collaborations and competing stakeholder interests can be challenging to initiate and manage, with the result that knowledge and expertise from different fields are not always well integrated. The aim of this workshop is to bring together stakeholders from HCI, surgical science, and surgical practice and technology to investigate the potential of interdisciplinary collaboration, specifically identifying actionable strategies to coordinate and improve efforts towards designing, developing, evaluating, and iterating on the next generation of surgical solutions. The workshop will address current limitations in interdisciplinary collaboration, and identify opportunities for surgical technology stakeholders to make contributions across the entire development life cycle. In the longer term, the workshop will contribute towards the development of a pragmatic collaboration framework encompassing diverse research paradigms, compatible with surgical practice, and supportive of longitudinal evaluation.
Self-determination theory (SDT) has become one of the most frequently used and best-validated theories in HCI research, modelling the relations among basic psychological needs, intrinsic motivation, positive experience and wellbeing. This makes it a prime candidate for a ‘motor theme’ driving more integrated, systematic, theory-guided research. However, its use in HCI has remained superficial and disjointed across application domains like games, health and wellbeing, and learning. This workshop therefore convenes researchers across HCI to co-create a research agenda on how SDT-informed HCI research can maximise its progress in the coming years.
While disability studies and social justice-oriented research are growing in prominence in HCI, these approaches tend to attend to oppression along only a single identity axis (e.g. race-only, gender-only, disability-only). Using a single-axis framework fails to recognize people’s complex identities and how ableism overlaps with other forms of oppression, including classism, racism, sexism, and colonialism, among others. As a result, HCI and assistive technology research may not always attend to the complex lived experiences of disabled people. In this one-day workshop, we position disability justice as a framework that centers the needs and expertise of disabled people towards more equitable HCI and assistive technology research. We will discuss harmful biases in existing research and seek to distill strategies for researchers to better support disabled people in the design (and dismantling) of future technologies.
Human-Computer Interaction (HCI) research has led to major innovations used by large and diverse audiences in different parts of the world. However, a recent meta-analysis found that research at CHI is still highly (73%) concentrated in Western contexts. HCI Across Borders (HCIxB) has gathered a diverse audience by conducting workshops and symposia since CHI 2016, aiming to expand borders within CHI. For CHI 2022, we expect to regroup for a virtual workshop to reflect on shifting boundaries from CHI’s past and emerging challenges in HCI research, education, and practice in recent years.
As humans increasingly interact (and even collaborate) with AI systems during decision-making, creative exercises, and other tasks, appropriate trust and reliance are necessary to ensure proper usage and adoption of these systems. Specifically, people should understand when to trust or rely on an algorithm’s outputs and when to override them. While significant research has aimed to measure and promote trust in human-AI interaction, the field lacks synthesized definitions and understanding of results across contexts. Indeed, conceptualizing trust and reliance, and identifying the best ways to measure these constructs and effectively shape them in human-AI interactions, remains a challenge.
This workshop aims to establish building appropriate trust and reliance on (imperfect) AI systems as a vital, yet under-explored research problem. The workshop will provide a venue for exploring three broad aspects related to human-AI trust: (1) How do we clarify definitions and frameworks relevant to human-AI trust and reliance (e.g., what does trust mean in different contexts)? (2) How do we measure trust and reliance? And, (3) How do we shape trust and reliance? As these problems and solutions involving humans and AI are interdisciplinary in nature, we invite participants with expertise in HCI, AI, ML, psychology, and social science, or other relevant fields to foster closer communications and collaboration between multiple communities.
This workshop transnationally triangulates race, capital, and technology to understand how colonialism and imperialism linger and mutate across various sites and scales. Furthermore, it brings together transnational HCI work that engages with critical ethnic studies as well as postcolonial and decolonial studies to intervene in the field’s long-standing epistemological and site-based focus on the West and its fixation on the nation-state. Attention to colonial residues, geopolitical tensions, and historical specificities brings HCI into conversation with geopolitical shifts and their very real impacts on the practice and theory of technology design, while troubling presumptions of who “gets to be human” in HCI. We invite papers and presentations that seek to: 1) triangulate sites of study; 2) draw from different disciplines, theoretical approaches, and methodologies; and 3) engage themes of transnational capital, race, and technology.
EduCHI 2022 will bring together an international community of scholars, practitioners, and researchers to shape the future of Human-Computer Interaction (HCI) education. Held as part of the CHI 2022 conference, the two-day symposium will feature interactive discussions about trends, curricula, pedagogies, teaching practices, and current and future challenges facing HCI educators. In addition to providing a platform to share curriculum plans and teaching materials, EduCHI 2022 will also provide opportunities for HCI educators to learn new instructional strategies and deepen their pedagogical knowledge.
AI-generated characters, i.e., realistic renderings of human faces, voices, and mannerisms that appear authentic to a human being, are made possible through advancements in generative machine learning. In addition to character creation, neural networks have recently also been used for the hyper-realistic synthesis and modification of prose, images, audio, and video data.
While this technology has been most widely associated with media manipulation and the spread of misinformation, often referred to as deepfakes, it is increasingly being used for positive applications and integrated into areas ranging from entertainment to humanitarian efforts and education. With the adoption and use of AI-generated characters across different industries, we see a potential for significant positive applications in a variety of fields such as learning, privacy, telecommunication, art, and therapy. In this workshop, we will bring together researchers in HCI, AI, and related fields to explore the positive applications, design considerations, and ethical implications of using AI-generated characters and related forms of synthetic media.
While Human-Computer Interaction (HCI) research aims to be inclusive and representative of many marginalized identities, there is still a lack of available literature and research on intersectional considerations of race, gender, and sexual orientation, especially when it comes to participatory design. We aim to create a space to generate community recommendations for effectively and appropriately engaging Queer, Transgender, Black, Indigenous, People of Color (QTBIPOC) populations in participatory design, and to discuss methods of disseminating these recommendations. Workshop participants will engage with critical race theory, queer theory, and feminist theory to reflect on current exclusionary HCI and participatory design methods and practices.
This course introduces computational cognitive modeling for researchers and practitioners in the field of HCI. Cognitive models use computer programs to model how users perceive, think, and act in human-computer interaction. They offer a powerful approach for understanding interactive tasks and improving user interfaces. This course starts with a review of classic architecture-based models such as GOMS and ACT-R. It then rapidly progresses to introducing modern modelling approaches powered by machine learning methods, in particular deep learning, reinforcement learning (RL), and deep RL. The course is built around hands-on Python programming using notebooks.
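To give a flavour of the RL-based modelling the course introduces, the following is a minimal illustrative sketch (not course material; all names and parameters are our own): a tabular Q-learning agent stands in for a user learning, through trial and error, to move a cursor toward a target in a one-dimensional interface.

```python
import random

def train_pointing_agent(n_positions=10, target=9, episodes=500,
                         alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 1-D 'move cursor to target' task.

    States are cursor positions 0..n_positions-1; actions are
    move-left (0) and move-right (1). Reaching the target ends an
    episode with reward +10; every other step costs -1, so the
    learned policy favours short movement paths.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_positions)]  # q[state][action]
    for _ in range(episodes):
        s = rng.randrange(n_positions)  # random start position
        while s != target:
            # Epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            # Deterministic transition, clipped to the interface bounds
            s2 = max(0, min(n_positions - 1, s + (1 if a == 1 else -1)))
            r = 10.0 if s2 == target else -1.0
            # Standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_pointing_agent()
# Greedy policy for every non-target state: 1 means "move right".
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(9)]
```

After training, the greedy policy moves right from every state left of the target, i.e. the model has "learned" the obvious strategy for this trivial task; the interest in the course's setting lies in scaling such models (via deep RL) to realistic perception and motor constraints.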
Transparent research practices enable the research design, materials, analytic methods, and data to be thoroughly evaluated and potentially reproduced. The HCI community has recognized research transparency as one quality aspect of paper submission and review since CHI 2021. This course addresses HCI researchers and students who are already knowledgeable about experiment research design and statistical analysis. Building upon this knowledge, we will present current best practices and tools for increasing research transparency. We will cover relevant concepts and skills in Open Science, frequentist and Bayesian statistics, and uncertainty visualization. In addition to lectures, there will be hands-on exercises: the course participants will assess transparency practices in excerpts of quantitative reports, interactively explore implications of analytical choices using RStudio Cloud, and discuss their findings in small groups. In the final session, each participant will choose a case study based on their interest and assess its research transparency together with their classmates and instructors.
The challenge of writing good research papers is to communicate a significant and original contribution that benefits human-computer interaction, specifically the CHI community. Researchers have to communicate the validity of their work adequately and do this clearly and concisely to turn a research project into a successful CHI publication. This second online edition of the successful CHI paper writing course offers hands-on advice and more in-depth tutorials on how to write papers with clarity, substance, and style. The instructor has structured it into four online units focusing on the structure and style of the abstract and introduction of CHI papers.
The combination of the Internet of Things and Artificial Intelligence has made it possible to introduce numerous automations into our daily environments. Many new and interesting possibilities and opportunities have been enabled, but there are also risks and problems. Often these problems originate from approaches that have not sufficiently considered the users’ viewpoint. We need to empower people to understand the automations in their surrounding environments, modify them, and create new ones, even if they have no programming knowledge. The course discusses these problems and some possible solutions that give people the ability to control and create their daily automations.
UI design rules and guidelines are not simple recipes. Applying them effectively requires determining rule applicability and precedence and balancing trade-offs when rules compete. By understanding the underlying psychology, designers and evaluators enhance their ability to apply design rules. This two-part (140-minute) course explains that psychology. This course will be presented online only.
Child Computer Interaction (CCI) is concerned with the research, design, and evaluation of interactive technologies for children. Whilst many aspects of general HCI can be applied to this field, there are important adaptations to be made when conducting work for and with children throughout all stages of the design cycle. This course overviews the main tools and techniques in use by the CCI community, against a backdrop of ensuring that children who work with our community feel valued and see the value in the work they contribute to, presented alongside examples and experiences from academia and industry. The course draws on the authors’ hands-on experience within academia and industry and provides useful checklists and tips to ensure children (and researchers and developers) get the most out of participation in HCI activities.
Being able to visualise data in consistent, high-quality ways is a useful skill for HCI researchers and practitioners. In this course, attendees will learn how to produce high-quality plots and visualisations using the ggplot2 library for the R statistical computing language. There are no prerequisites, and attendees will leave with scripts to get them started as well as foundational knowledge of free open-source tools that they can build on to produce complex, even interactive, visualisations. Course information and materials can be found at https://www.sjjg.uk/chi22-course.
During the Introduction to Data-Enabled Design (DED) course, attendees will learn about designing for intelligent ecosystems while using data as a creative material. We will take the course participants through both major steps of this method, contextual and informed, by means of data collected (upfront) from the attendees’ personal contexts. This course offers a careful balance between hands-on work and DED theory. The learning outcomes focus on topics of (physical) prototyping, (remote) data collection and analysis, using data as a creative material, and designing remote interventions.
Sketching is a universal activity that first appears when we play as children, but later, it is often overlooked as a useful skill in adult work – yet it can bring multiple benefits to research and practice. Specifically, our field of Human-Computer Interaction (HCI) embraces interdisciplinary practices, and amongst those, sketching has proven to be a valuable addition to the skill set of researchers, practitioners, and educators in both academia and industry. Many individuals lack the confidence to take up pen and paper after years of non-practice, but it is possible to re-learn these lost skills, improve on them, and apply them in practical ways to all areas of work and research. This course offers a journey in sketching, from scribbles and playful interpretations to hands-on and theoretical information inherent in sketching practice. Attending individuals will learn techniques and applied methods for utilizing sketching within the context of HCI, guided by experienced instructors.
The population of the developed world is aging. Most websites, apps, and digital devices are used by adults aged 50+ as well as by younger adults, so they should be designed accordingly. This one-part course, based on the presenter’s recent book, presents age-related factors that affect older adults’ ability to use digital technology, as well as design guidelines that reflect older adults’ varied capabilities, usage patterns, and preferences. This course will be presented online only.
In this course you will learn how to use video data for prototyping. The course provides hands-on training in working with video clips, including transcription and identification of relevant actions. You will familiarize yourself with core interaction analytic concepts (grounded in ethnomethodology and conversation analysis) and learn how to do an action-by-action analysis. Working on the design case of everyday interaction with automatic doors, you will learn how video interaction analysis can be embedded in an iterative design process.
A key challenge for new reviewers is getting the tone and structure of a review right. A skilful reviewer will provide enough information in their review to help editors or Associate Chairs decide whether to include a paper in a journal or proceedings. This course will help participants understand a) the expectations of different submission types, b) how different venues make decisions, and c) how to identify strong contributions, robust methodologies, and clear writing when creating reviews for these different settings. Participants will critique anonymised but real reviews, and try to guess the venue they were written for and the recommendation they make.
This course is a hands-on introduction to the fabrication of flexible, transparent free-form displays based on electrochromism for an audience with a variety of backgrounds, including artists and designers with no prior knowledge of physical prototyping. Besides prototyping using screen printing or ink-jet printing of electrochromic ink and an easy assembly process, participants will learn essentials for designing and controlling electrochromic displays.
Are simplicity and minimalism the universal standard for interaction design? How can we avoid stereotyping with personas in design practice? What AI algorithms and design mechanisms made the “digital blackface” phenomenon on social media so popular? This interactive course teaches participants to reconsider some commonly held design beliefs and routine design practices through a lens of cultural differences. Illustrated with design case studies, it introduces strategies and techniques for turning differences into design resources for inclusivity. Participants will learn essential critical design skills for creating engaging and empowering designs in a globalized world at a divisive time.
The objective of this in-person CHI course is to provide CHI attendees with an introduction and overview of the rapidly evolving field of automotive user interfaces (AutomotiveUI). The course will focus on UI aspects in the transition towards automated driving. In particular, we will also discuss the opportunities of cars as a new space for non-driving-related activities, such as work, relaxation, and play. For newcomers and experts of other HCI fields, we will present the special properties of this field of HCI and provide an overview of new opportunities, but also general design and evaluation aspects of novel automotive user interfaces.
This course focuses on design and accessibility considerations for wearable technology. We will explore how to develop a robust set of design and accessibility considerations (guidelines) for wearable technology. The course will begin with a presentation on wearability and accessibility; participants will then engage in an activity using a new Wearable Technology Designer’s Web Tool. The tool can be accessed again after the course and shared with students and colleagues. The course will end with a discussion about what design considerations might need to be added to the tool and what human factors information or other research should be updated.
Human-computer interaction has entered a third, globally connected era. The initial focus on making computers usable was followed by efforts to realize visions of their potential, with CHI a key player. Those visions have been realized, yielding new opportunities and challenges, many of them unanticipated. HCI draws on computer science, human factors, information systems, and information science. It relies on design and interacts with AI. This course provides an understanding of the forces that guide the interaction of related disciplines, the constraints imposed by human nature and the trajectories we are following, and the opportunities and issues that will engage us in the years ahead.
Research methods are a foundational part of an education in human-computer interaction. This education is not always provided, and what training is available may not always be focused on the most relevant topics for this diverse field. This course aims to survey research methods relevant to psychology, human factors, computer science, and engineering. This course is suitable for those with limited background in research, and can benefit those with substantial experience. This course aims to provide exposure to many relevant topics and inspire attendees to delve deeper and hone their craft with this course as an introduction.
Personas have evolved since Alan Cooper coined the term in 1999, moving into new domains, with new ways of collecting data and novel ways of presenting persona profiles. From the beginning, personas were linked to software design, expressing the need for empathy with end-users. This is still the case today, but we want to show how this plays out in different domains, not only in software, and how different forms of presentation relate to empathy. Thus, the persona course investigates the relationship between data collection, the representation of data as persona profiles, and empathy.
Our physical environments integrate more and more sensing, processing, and communication technology toward realizing the vision of Ambient Intelligence (AmI). At the same time, virtual worlds are increasingly linked to physical spaces by applications delivering the vision of Augmented and Mixed Reality (AR/MR). In this context, several opportunities emerge at the intersection of these two visions of computing. The goal of this course is to show that AmI and MR are two sides of the same coin, delivering a fresh perspective to HCI researchers and practitioners interested in designing interactive experiences beyond the desktop and mobile computing paradigms by employing the concepts, principles, and methods of AmI and MR.
In tech, women and people of color consistently report being ignored, devalued, and perceived as less competent than men. But if all members of diverse teams do not feel valued and connected, team performance and innovation are undermined. This course introduces practices that interrupt bias and negative interactions while ensuring all are heard during working meetings. We cover effective techniques to structure participation in working team meetings and guide decisions and feedback so all are heard. These techniques arm HCI professionals who are leaders or facilitators, and academics guiding student teams, with ways to ensure inclusion.
During meetings, one person is often the owner of the whiteboard or PowerPoint used to sketch the problem or idea. This person commonly “owns” the meeting, leading to passive moments for the others. In this course, we will bring the whiteboard to the table to enable collaborative sketching. By also using tangibles, a topic can be discussed in a more interactive and efficient way. We will also learn how to apply these techniques in micro-communications such as coffee-machine talks. Participants will leave the course with their own hands-on material to use back home. Let’s stop having soulless daily meetings.
We are witnessing the work of user experience (UX) designers expanding beyond single digital products towards designing customer journeys through several service touchpoints and channels. Greater understanding of the service design approach and the interplay between service design and UX design is needed by UX researchers and practitioners in order to address this challenge. This course provides a theoretical introduction to service design and practical activities that help attendees understand the principles of service design and apply key methods within their work. It is targeted at UX design practitioners, teachers, and researchers, and those interested in systemic approaches to design.
It is assumed that, to appreciate a knowledge contribution in research-through-design, we all agree on what the act of designing is and what it should deliver in research. However, even a glance at contributions in an HCI context shows this is far from the case. The course is based on the book Drifting by Intention – Four Epistemic Traditions in Constructive Design Research, authored by the instructors. It unpacks different ways of knowing in practice-based design and provides operational models and hands-on exercises, applied to participants’ cases, to help plan and articulate the contribution of design in each participant’s individual research project.
Many researchers and practitioners find statistics confusing. This course aims to give attendees an understanding of the meaning of the various statistics they see in papers or need to use in their own work. The course builds on the instructor’s previous tutorials and master classes including at CHI 2020, and on his recently published book “Statistics for HCI: Making Sense of Quantitative Data”. The course will focus especially on material you will not find in a conventional textbook or statistics course including aspects of statistical ‘craft’ skill, and offer attendees an introduction to some of the instructor’s extensive online material.
This course teaches principles of rapid prototyping for augmented, virtual, and mixed reality (XR). Participants will learn about low-fidelity prototyping with paper and other physical materials, and digital prototyping including immersive authoring. After an overview of the XR prototyping process and tools, participants will complete a hands-on session using easy-to-use digital authoring tools to create working interactive prototypes that can be run on XR devices. The course is targeted at non-technical audiences including HCI practitioners, user experience researchers, and interaction design professionals and students interested in XR design.
Speech and voice interaction is often hailed as a natural form of interaction and thus more inclusive for a larger portion of users. But how accurate is this claim? In this panel, we challenge existing assumptions that voice and speech interaction is inclusive of diverse users. The goal of this panel is to bring together the broad HCI community to discuss the state of voice interaction for marginalized and vulnerable populations, how inclusive design is considered (or neglected) in current voice interaction design practice, and how to move forward when it comes to designing voice interaction for inclusion and diversity. In particular, we plan to center the discussion on older adults as a representative group of digitally marginalized populations, especially given that voice interfaces are marketed towards this group yet often fail to properly include this population in their design.
In 2017, a CHI panel titled “Integration vs Powerful Tools” debated whether our relationship with digital technology had begun to shift from interaction to integration. Today, Human-Computer Integration has developed into an emerging paradigm, rapidly gaining traction and building a theoretical basis over a number of recent symposia and publications. However, the more contributions are made to the establishment of Human-Computer Integration, the more its concepts and principles seem to diverge, such that each theorist now means something different by “integration”. Building on the 2017 panel, we ask: “What is the essence of Human-Computer Integration, and what are its implications for the future of HCI?” This panel seeks to facilitate discourse between leading thinkers from diverse backgrounds and the audience, to come to a shared vision and mutual understanding of Human-Computer Integration.
The last decade has witnessed the expansion of the design space to include the epistemologies and methodologies of more-than-human design (MTHD). Design researchers and practitioners have been increasingly studying, designing for, and designing with nonhumans. This panel will bring together HCI experts who work on MTHD with different nonhumans as their subjects. Panelists will engage the audience through discussion of their shared and diverging visions, perspectives, and experiences, and through suggestions for opportunities and challenges for the future of MTHD. The panel will provoke the audience into reflecting on how the emergence of MTHD signals a paradigm shift in HCI and human-centered design, what benefits this shift might bring and whether MTHD should become the mainstream approach, as well as how to involve nonhumans in design and research.
In the technical human-computer interaction (HCI) community, two research fields that gained significant popularity in the last decade are digital fabrication and augmented/virtual reality (AR/VR). Although the two fields deal with different technical challenges, both aim for a single end goal: creating “objects” instantly, either by fabricating them physically or rendering them virtually. In this panel, we will discuss the pros and cons of both approaches, which one may prevail in the future, and what opportunities exist for closer collaboration between researchers from the two fields.
As increasingly powerful natural language generation, representation, and understanding models are developed, made available, and deployed across numerous downstream applications, many researchers and practitioners have warned about possible adverse impacts. Harmful impacts include, but are not limited to, disparities in quality of service, unequal distribution of resources, and the erasure, stereotyping, and misrepresentation of groups and individuals; they may also limit people’s agency or affect their well-being. Given that language tasks are often complex, open-ended, and incorporated across a diversity of applications, effectively foreseeing and mitigating such harms has remained an elusive goal. Toward this goal, the Natural Language Processing (NLP) literature has only recently started to engage with human-centered perspectives and methods that are often central to HCI research. In this panel, we bring together researchers with expertise in both NLP and HCI, as well as in issues that pertain to the fairness, transparency, justice, and ethics of computational systems. Our main goals are to 1) explore how to leverage HCI perspectives and methodologies to help foresee potential harms of language technologies and inform their mitigation, 2) identify synergies between HCI and responsible NLP research that can help build common ground, and 3) complement existing efforts to facilitate conversations between the HCI and NLP communities.
This panel reflects on the conditions of collaboration, as well as the possibilities of solidarity, between academic and tech workers, drawing on the experiences of panelists who have pondered questions of ethics, responsibility, and values in technology-building from a range of positionalities within, adjacent to, and outside of academic and technology organizations. As the proposal outlines, the panel will think through opportunities and risks for academic and tech workers to work towards progressive tech futures together, but also the differences and impossibilities that arise with each position.
Participatory design (PD) for Artificially Intelligent (AI) systems has gained in popularity in recent years across multiple application domains, both within the private and public sectors. PD methods broadly enable stakeholders of diverse backgrounds to inform new use cases for AI and the design of AI-based technologies that directly impact people's lives. Such participation can be vital for mitigating adverse implications of AI on society that are becoming increasingly apparent and pursuing more positive impact, especially to vulnerable populations. This panel brings together researchers who have, or are, conducting participatory design of AI systems across diverse subject areas. The goal of the panel is to elucidate similarities and differences, as well as successes and challenges, in how PD methods can be applied to Artificially Intelligent systems in practical and meaningful ways. The panel serves as an opportunity for the HCI research community to collectively reflect on opportunities for PD of AI to facilitate collaboration amongst stakeholders, as well as persistent challenges to participatory AI design.
In this interactive panel, a brief introduction by the panelists regarding their stances on race and HCI will be followed by breakout group discussion in which participants will be prompted to ask themselves what anti-racist actions they can take in their workplaces and in HCI work.
The digitization of financial transactions in both the Global North and the Global South has led to considerable shifts in how money is used, understood, and processed by users, banks, and fintechs. This shift from physical cash to digital media, accelerated by the COVID-19 push for digital transactions, has impacted how users perceive and use digital money and opened avenues for more data collection. This diverse panel proposes a discussion to understand the set of opportunities and challenges around the design of digital financial services (DFS) and data-driven decision-making in DFS. We will create a live working document, starting before the panel, to record the discussion as it develops during and after the panel. This live document will enable the community to engage a broader audience of researchers and industry, outlining processes, methods, and tools that researchers and practitioners have created to work with users to develop new, equitable DFS and to spur further exploration.
This Special Interest Group (SIG) will collaboratively explore potential futures of the ACM Special Interest Group on Computer-Human Interaction (SIGCHI) on the organization’s 40th anniversary. Taking stock of where we are now, forty years after inception, our goal will be to engage members of the SIGCHI community in a participatory approach towards imagining how SIGCHI might evolve, and how it can ensure that the elements it values most, such as connection, inclusion, and equity, among others, can be nurtured as the field evolves, and technologies come and go.
Online arguments are an increasingly important and controversial part of modern life. From the spread of political conspiracies to managing relationships while socially distanced, the past several years have stressed the role technology plays in our interactions with one another. And unavoidably, part of maintaining relationships includes managing conflict and disagreements, from who does which chores to who’s a better political candidate. This SIG will bring together an interdisciplinary group of designers and researchers to discuss how to design, build, and evaluate systems to support constructive arguments online. We ask: How can online systems support conflict while sustaining and possibly strengthening interpersonal relationships? We will explore these questions in the context of a collaborative literature review across fields relevant to social computing and the psychology of conflict, paired with group discussion on how to design for disagreements.
Designers and HCI researchers from industry and academia have been exploring the opportunities that emerge from incorporating behavioral data into the design process. For this, designers employ and combine data from multiple sources, scales, and types to obtain valuable insights that inform and support design decisions. This combination unfolds through interdisciplinary collaborations, enabled by various methods and approaches, including participatory data analysis, sense-making interviews, co-design workshops, and data storytelling. However, due to the personal nature of behavioral data and the open-ended, iterative approach of Human-Centered Design, data-centric design activities clash with current HCI and data science practices. As both industry and academia increasingly use data-centric design processes, we recognize a need to share examples and experiences, since most practices (and failed experiments) cannot yet be drawn solely from the literature. In this Special Interest Group, we aim to provide a space for design, data, and HCI researchers and practitioners to connect, reflect on current practices, and explore potential approaches to further integrate behavioral data into design activities.
The Executive Committee (EC) of ACM Special Interest Group on Computer-Human Interaction (SIGCHI) organized a series of ten equity talks from March 2021 to August 2021. These were hour-long recorded virtual roundtable sessions, for which we solicited participation from the SIGCHI community on concerns and questions relating to equity, in a number of areas relevant to SIGCHI. Many concerns were listed, some were repeated across topics, and the EC followed due diligence when it came to presenting this information to the community. What comes next?
Advances in computing technology, changing policies, and slow crises are rapidly changing the way we work. Human-computer interaction (HCI) research is a critical part of these trends: it helps us understand how workers contend with emerging technologies and how design might support workers and their values and aspirations amidst technological change. This SIG invites HCI researchers across diverse domains to reflect on the range of approaches to future of work research, recognize connections and gaps, and consider how HCI can support workers and their wellbeing in the future.
Consumer neurotechnology is arriving en masse, even while algorithms for user state estimation are still being actively defined and developed. Indeed, many consumer wearables are now available that try to estimate cognitive changes from wrist data or body movement. But does this data help people? It is a critical time to ask how users could be informed by wearable neurotechnology in a way that is relevant to their needs and serves their personal well-being. The aim of this SIG is to bring together the key HCI communities needed to address this: personal informatics, digital health and wellbeing, neuroergonomics, and neuroethics.
In this special interest group (SIG), we follow up on previous conversations around hybrid models for conferences, conducted in open sessions by the ACM Special Interest Group on Computer-Human Interaction (SIGCHI) Executive Committee (EC). The COVID-19 pandemic led to a sudden shift to virtual conferences; as we start to go back to in-person events, it is important to reflect on the types of events we desire, and design these accordingly. With this SIG, we hope to share experiences from previous conferences (successful or not) and discuss potential solutions to pending issues. This SIG will be led by VP at Large Adriana S. Vivacqua, with the participation of other EC members.
Connecting with each other on the basis of research interests, geographies, or identity has proven to be an important aspect of community development within HCI. The advent of a variety of committees and Special Interest Groups (SIGs) is a testament to this need to connect with each other. As members of the nascent SIGCHI LATAM committee, we see an opportunity to learn together with different communities across the HCI field about how we are working to attract more practitioners, academics, and students to work together and grow research surrounding HCI. With this SIG, we want to promote an environment for discussing and exchanging practices among different collectives within SIGCHI worldwide. We propose inviting community leaders from different geographies, sub-fields, and identities to discuss their strategies and experiences growing their communities. The event is open to all members interested in gaining insight into how to grow communities within HCI.
Recognizing the significant potential impact that HCI has on food practices and experiences, researchers and practitioners are undertaking a growing number of explorations of novel computing technology and food combinations. These explorations have so far primarily emphasized technology-driven systems and taken a human-centric perspective. We propose a Special Interest Group (SIG) on “foodHCI futures” that creates a space for researchers to discuss the boundaries of food incorporating HCI, with the simultaneous aims of reconciling food with technology and extending our visions for human-food interactions beyond anthropocentrism. Specifically, the SIG will begin to develop a structured conceptual map of the possibilities for future technology interventions in food systems. In developing this map, we hope to encourage democratized debate, provoke new and divergent thoughts on the opportunities for foodHCI, and ultimately gain unique insights that contribute to preferable food futures.
In recent years, the notion of the Metaverse has become the focus of a growing body of work in industry. However, there is no consensus in academia on its conceptualization. To date, much of this attention has revolved around technological challenges; notably missing from these discussions is a consideration of the human factors and social aspects that HCI regards as the more critical challenges. The aims of this SIG are as follows: first, to provide a platform for researchers and practitioners to engage with the various definitions and the ways in which the Metaverse is developing; second, to discuss the opportunities, challenges, and future possibilities in the context of HCI. This will lay the foundations for building a network of academics interested in the field for future multidisciplinary research relating to the Metaverse.
With the development of low-cost electronics and rapid prototyping techniques, as well as widely available mobile devices (e.g., mobile phones, smartwatches), projects related to the design and fabrication of personal health sensing applications, either on top of existing device platforms (e.g., mHealth) or as stand-alone devices, have emerged in the last decade. In addition, recent advances in novel sensing and interface technologies, accessibility studies, and system design open up new possibilities and bring different perspectives to personal health sensing. We believe that joining forces in such interdisciplinary work is key to moving the field of personal health sensing forward. This Special Interest Group aims to bring together researchers from different fields, identify the significance and challenges of the personal health sensing domain, discuss potential solutions and future research directions, and promote collaborative research opportunities.
An ongoing challenge within the HCI community is coming to a shared understanding of research ethics in the face of our diverse disciplinary traditions, evolving technologies and methods, and multiple geographic and cultural settings. Building upon previous panels and town halls organized by the SIGCHI Research Ethics Committee at conferences including CSCW, CHI, GROUP, and IDC, this Special Interest Group convening is intended to provide space for the broader HCI community to highlight, debate, and discuss current ethical challenges within our domain, and to work toward the development of guidelines and processes for supporting, and evaluating, ethical research. For 2022, we are proposing a SIG rather than a more formal panel to encourage more people who are not committee members to speak, share, and discuss their perspectives and challenges when engaging with research ethics in HCI. This conversation will benefit from a diversity of voices and perspectives to help us all learn from each other and think deeply about the values and ethical commitments of our research community.
The increasing capabilities of Artificial Intelligence enable the support of users in a continuously growing number of applications. Current systems typically dictate that interaction between user input and AI output unfolds in discrete steps, as is the case with, for example, conversational agents. Novel scenarios require AI systems to adapt and respond to continuous user input, e.g., image-guided surgery and AI-supported text entry. In and across these applications, AI systems need to support more varied and dynamic interactions in which users and AI interact continuously and in parallel. Current methods and guidelines are often inadequate and sometimes even detrimental to user needs when considering continuous usage scenarios. Realizing a continuous interaction between users and AI requires a substantial change in perspective when designing Human-AI systems. In this SIG, we support the exchange of cutting-edge research contributing to a better understanding and improved methods and tools to design continuous Human-AI interaction.
Recently, HCI researchers have shown interest in sustainability in the context of both new making methodologies and materials. In this work, we introduce a range of sustainable biomaterials (materials that are bio-based) that we use to create unique interactive interfaces. These biomaterials include ReClaym, a clay-like material made from compost; Alganyl, an algae-based bioplastic; Dinoflagellates, bioluminescent algae; SCOBY, symbiotic cultures of bacteria and yeast; and Spirulina, nutrient-dense blue-green algae. We describe how the implementation of these materials in our designs highlights the importance of utilizing materials that can biodegrade. We also call attention to the importance of care, patience, and understanding during the design process to facilitate the creation of playful designs that respect the agency of each biomaterial. Lastly, we discuss the deeper sense of intimacy and understanding we gained with our biomaterials, which leads not only to more personally meaningful interfaces but also to more sustainable ones.
Advances in text generation algorithms (e.g., GPT-3) have led to new kinds of human-AI story co-creation tools. However, it is difficult for authors to guide this generation and understand the relationship between input controls and generated output. In response, we introduce TaleBrush, a GPT-based tool that uses abstract visualizations and sketched inputs. The tool allows writers to draw out the protagonist’s fortune with a simple and expressive interaction. The visualization of the fortune serves both as input control and as a representation of what the algorithm generated (a story with varying fortune levels). We hope this demonstration leads the community to consider novel controls and sensemaking interactions for human-AI co-creation.
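The TaleBrush abstract does not specify how a sketched fortune line becomes a generation control; as a minimal illustration only, one plausible step is resampling the drawn curve into one discrete fortune level per story beat (the function, level scheme, and value ranges below are assumptions, not the tool’s actual implementation):

```python
def curve_to_fortune(points, n_beats, n_levels=5):
    """Resample a sketched curve (y-values in [0, 1], left to right)
    into one discrete fortune level per story beat."""
    levels = []
    for i in range(n_beats):
        # position of this beat along the sketch, with linear interpolation
        t = i * (len(points) - 1) / max(n_beats - 1, 1)
        lo = int(t)
        hi = min(lo + 1, len(points) - 1)
        y = points[lo] + (t - lo) * (points[hi] - points[lo])
        levels.append(min(int(y * n_levels), n_levels - 1))
    return levels
```

Each discrete level could then condition the language model, for example as a control token prepended to the prompt for that story beat.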
Leveraging the human propensity for embodied interaction, SocialStools is a socio-spatial interface that facilitates playful social interactions among strangers in a physical space, fostering togetherness. Three stools on caster wheels generate sound and image around them in response to being sat on, moved, and rotated relative to one another. In this paper, we identify the challenges of stranger interactions, introduce our cyber-physical system, and describe and demonstrate three interaction situations: sitting, moving closer to and away from each other, and rotating to face or turn away from each other. By translating these interactions into visual and auditory effects, we explore the possibilities of merging the socio-physical world with a digital system to create unique social affordances for interpersonal interactions that foster togetherness. Our demonstration transforms three strangers into a trio of sound-and-image makers interacting through creative, embodied play.
As COVID-19 spreads across the globe and the number of deaths continues to rise, individual heartbreaking experiences are being subsumed by collective mourning. As the German journalist and pacifist Kurt Tucholsky once said: “The death of one man is a tragedy, the death of millions is a statistic.”
When we look back at the help-seeking posts of those who were lost, we see those who died without a confirmed COVID-19 test result; those who committed suicide out of despair; those whose life-saving medical equipment was taken away; and those who lost their lives through overwork and infection from their patients... Many of them were not included in the official statistics, and they are likely to be forgotten over time. They were not treated fairly while they were alive, and they have not been mentioned since they passed away.
We spoke to one of those families. The daughter said: “After this pandemic, who will remember someone such as my mother, who had nowhere to go for medical treatment, was turned away by the hospital, and had to die at home?”
This is one of the reasons why we built this online platform. We want to document as many of the people who have left us because of the pandemic as possible. Our website also includes the help-seeking information these people posted before they passed away. This is the evidence they left behind at a particular moment in this pandemic. We hope to provide a space for family members to express their grief and for the public to mourn the dead. Behind every number is a life.
“Unfinished Farewell” can be viewed at www.farewell.care and www.jiabaoli.org/covid19
Virtual Lab by Quantum Flytrap explores novel ways to represent quantum phenomena in an interactive and intuitive way. It is an online laboratory with a real-time simulation of an optical table, supporting up to 3 entangled particles. We have created a highly visual no-code interface and an introductory Quantum Game to make the Virtual Lab approachable for users without prior exposure to quantum mechanics. At the same time, a configurable sandbox mode with customizable elements makes it useful for students, lecturers, and scientists. Virtual Lab provides a way to explore exact quantum states, entanglement, and the probabilistic nature of quantum physics. We provide ways to simulate quantum computing (e.g. Deutsch-Jozsa algorithm), use quantum cryptography (e.g. BB84 protocol), explore counterintuitive quantum phenomena (e.g. quantum teleportation), and recreate historical experiments (e.g. Michelson-Morley interferometer). Virtual Lab is available at: https://lab.quantumflytrap.com/.
We demonstrate software tools to convert 2D cutting plans to 3D models either through user interaction (Assembler3) or automatically using a heuristics-based beam search algorithm (autoAssembler). This conversion allows users to make 3D manipulations (here using kyub) to models for laser cutting, which are commonly shared in the form of 2D cutting plans. By sharing the models again in a 3D format, others can build on this work and increase the complexity of what is shared in the community. The resulting more advanced models, and the ability to customize them to individual users’ needs, increase the value of models for laser cutting shared online. In this demonstration, participants use the different software tools and customize models to their liking.
Meerkat is a social support app that helps older adults tackle hurdles in interacting with mobile applications by receiving contextual support from friends and family or community volunteers. The app allows users to describe an interaction issue, capture a screenshot, draw, and write queries to helpers, who can provide understandable explanations. Meerkat is informed by theories of collective efficacy and aims to reduce barriers to drawing on existing social networks. Given COVID-19 social distancing measures, the design of Meerkat adapts patterns of peer support to situations where users are not co-located. For users who do not have social support available, the app provides a foundation for receiving support from a community of volunteers who are social connections of other users. Meerkat also includes a practice mode that facilitates social learning by presenting users with mobile tasks. We conclude with how Meerkat can contribute to collective and individual efficacy.
This work introduces Fret Ferret, a robust software trainer for the guitar that utilizes the contextual multi-armed bandit class of algorithms, together with implicit signals from the user, to customize lessons to each user automatically and in real time, avoiding the need to tweak a large selection of game parameters to obtain a desired difficulty level. We discuss the consequences of using these algorithms to drive new styles of human-computer interaction that can accelerate our learning process and help us reach new levels of mastery. We elaborate on the details of our algorithms, how they inform our user interface design, and how we address the difficulty of scaling the development of a large number of machine learning models to adapt to many mini-games and to each user individually. Finally, we address future work that can further augment the performance and capabilities of our software.
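The abstract above names the contextual multi-armed bandit family without fixing a specific algorithm. As an illustration only, a LinUCB-style learner that picks a difficulty level (arm) from a player-context vector and updates on an implicit reward signal could look like this (the class, context encoding, and reward signal are assumptions, not Fret Ferret’s actual implementation):

```python
import numpy as np

class LinUCB:
    """Contextual bandit: one linear reward model per arm (difficulty level)."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm Gram matrix
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vector

    def select(self, x):
        """Pick the arm with the highest optimistic reward estimate for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            # mean estimate plus an exploration bonus for uncertain arms
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Online least-squares update for the chosen arm."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

Here the context vector might encode recent accuracy and tempo, and the implicit reward might be 1.0 when the player completes an exercise without retries.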
With increased automation levels of vehicle technology, human involvement is expected to transform from a driver role into an operator role. This transition will entail remote monitoring of, and possibilities for intervention in, driverless vehicle fleets. Thus far, there are insufficient tools available for user interface development and experimentation, which makes it difficult to quickly devise, prototype, and test solutions that support operators in their specific work contexts. We present the TeleOperationStation, an extended-reality experience prototyping environment that coherently simulates automated fleet operation workflows, covering non-driving tasks, monitoring, takeover requests, and full remote vehicle operation. During this interactive demonstration, on-site and online conference participants will be able to experience and co-create user interfaces for remote automated vehicle operation, using AR from a first-person viewpoint and controlling a physical miniature vehicle.
Emotion regulation (managing one’s emotional state in day-to-day life) is an important set of skills for supporting mental wellbeing. We propose to demonstrate our AR Fidget system, which scaffolds emotion regulation using a combination of AR glasses and fidgeting actions. AR glasses have been used in various well-being situations to provide the wearer with unobtrusive and in-the-moment feedback. Meanwhile, previous studies have shown that the manipulation of soft-bodied ‘fidget’ objects helps people manage emotions and improve focus. This project explores whether combining fidgeting behaviors such as swiping, tapping, and clicking with supportive visual/auditory feedback on AR glasses can support emotion regulation in the moment. We will demonstrate three visual/auditory metaphors in the form of AR lenses (Lotus, Bubble, Fire) combined with suggested target fidget actions based on prior research, toward helping people regulate anxiety, boredom, and anger.
This interactivity demonstration provides users with a visualization of their conversational turn-taking, within a shared Virtual Reality (VR) environment. It is intended to help support balanced communication in remote meetings. This prototype is part of a larger research project focused on developing VR tools to improve online meetings with designed affordances that take advantage of VR’s unique properties to help people with balancing participation, time management, coming to shared decisions, following an agenda, and achieving social connection and support for ideas. Ultimately, these prototypes help show the potential for using VR to make online meetings more effective and satisfying. At CHI, users will have a chance to try out the conversation balance system either in-person at the conference venue, or as a remote group if they are attending virtually.
We demonstrate a novel interface concept in which interactive systems directly manipulate the user's head orientation. We implement this using electrical-muscle-stimulation (EMS) of the neck muscles, which turns the head around its yaw (left/right) and pitch (up/down) axes. As the first exploration of EMS for head actuation, we characterized which muscles can be robustly actuated. Then, we demonstrated how it enables interactions not possible before by building a range of applications, such as (1) directly changing the user's head orientation to locate objects in AR; (2) a sound controller that uses neck movements as both input and output; (3) synchronizing head orientations of two users, which enables a user to communicate head nods to another user while listening to music; and (4) rendering force feedback from VR punches on the head by actuating the user's neck.
Real-time environmental tracking has become a fundamental capability in modern mobile phones and AR/VR devices. However, it only allows user interfaces to be anchored at static locations. Although fiducial and natural-feature tracking can overlay interfaces on specific visual features, it typically requires developers to define the pattern before deployment. In this paper, we introduce opportunistic interfaces, which grant users complete freedom to summon virtual interfaces on everyday objects via voice commands or tapping gestures. We present the workflow and technical details of Ad hoc UI (AhUI), a prototyping toolkit that empowers users to turn everyday objects into opportunistic interfaces on the fly. We showcase a set of demos with real-time tracking, voice activation, 6DoF interactions, and mid-air gestures, and discuss the prospects of opportunistic interfaces.
Computer-mediated communication is being adopted in work and personal life around the world. Measuring collaborative performance would be useful for evaluating and optimizing social computing applications, but methods for doing so are lacking. For this purpose, we have developed a collaborative block design task, which requires collaboration and depends on simple and abstract rules. The task involves constructing three-dimensional structures out of primitive shapes based on cards representing two-dimensional flat projections of the complete structure. The task is available as a physical version, which can be manufactured using a 3D printer and a laser cutter; as a virtual version, released as an open-source VRChat world created in Unity; and as task components that can be imported into any virtual 3D environment. The task can be used to evaluate systems and augmentations for their fitness for collaboration, as well as to investigate other phenomena that seem to be linked with better cooperation, such as inter-individual and inter-brain synchronization.
We demonstrate InfraredTags, which are invisible 2D markers and barcodes that are 3D printed as part of objects. We show how InfraredTags can be decoded using a convolutional neural network (CNN) after being captured by low-cost near-infrared cameras. InfraredTags are formed by printing objects from an infrared-transmitting filament, which infrared cameras can see through, and by having air gaps inside for the tag’s bits, which appear at a different intensity in the infrared image. We built a user interface that facilitates the integration of common tags (QR codes, ArUco markers) with the object geometry to make them 3D printable as InfraredTags. Once printed, the tags can be robustly decoded with a U-Net model that we trained using a custom dataset for optimal binarization and detection. We show how our method enables different applications, such as object tracking and embedding metadata for tangible interactions and augmented reality.
Artists and designers have been exploring how robotics can be used to interact with our environment in new ways. Robots connect computational design processes with the physical environment, making digital interaction with nature possible. We present a robotic process for planting that enables the computational design of landscapes. We demonstrate how robotic planting can be used for generative art and design by creating a living typeface grown from seed. The robot draws a message by 3-dimensionally (3D) printing a blend of planting media and seeds. When the seeds germinate, the glyphs emerge from their substrate in a flush of green. The letterforms become dynamic living organisms. Artistic agency shifts from the artist to nature.
In recent years, actuators that handle fluids such as gases and liquids have been attracting attention for their applications in soft robots and shape-changing interfaces. In the field of HCI, there have been various inflatable prototyping tools that utilize air control; however, very few tools for liquid control have been developed. In this study, we propose HydroMod, novel constructive modules that can easily generate and programmatically control liquid flow, with the aim of lowering the barrier to entry for prototyping with liquids. HydroMod consists of palm-sized modules that generate liquid flow via the electrohydrodynamics (EHD) phenomenon when the modules are simply connected. Moreover, users can control the flow path by simply recombining the modules. In this paper, we demonstrate three applications that can be easily prototyped with liquids by recombining HydroMod. These applications showcase characteristic uses of liquids: weight manipulation, color expression, and heating/cooling control.
In this interactivity paper, we present the design and demonstration of Hicclip, a smart snack-sealing rack that leverages eating-related sounds as persuasive strategies, responding to the user’s snacking behaviors to possibly intervene in snacking addictions. We envision that auditory feedback might be effective in preventing excessive snacking. With Hicclip, we also want to investigate embedding persuasive technology in an existing snack-related product (i.e., a snack sealer) to enhance the adoption of health interventions. Based on the current stage of the prototype design, we propose an online demo at CHI 2022, based on a livestream session and an interactive video, to facilitate experiencing Hicclip remotely. We hope to get feedback from the international audience to help us iterate further on this design concept.
Sound zones enable multiple simultaneous sound experiences in the same physical room without interference. In this paper, we present an interactive sound zone setup that can produce two sound zones within a confined space. Through a tangible remote controller, users can change the volume, size, and position of these sound zones. In addition, we have built a custom visualisation display that provides real-time feedback on the sound zones to support users’ understanding. Sound zone systems pose novel challenges for the HCI community, including how users may understand and interact with sound zones. Our setup offers a concrete platform for investigating these challenges.
We demonstrate FoolProofJoint, a software tool that simplifies the assembly of laser-cut 3D models and reduces the risk of erroneous assembly. FoolProofJoint achieves this by modifying finger joint patterns. Wherever possible, FoolProofJoint makes similar looking pieces fully interchangeable, thereby speeding up the user's visual search for a matching piece. When that is not possible, FoolProofJoint gives finger joints a unique pattern of individual finger placements so as to fit only with the correct piece, thereby preventing erroneous assembly.
We developed two integrated tools to operationalise AI fairness in business practice: the Fairness Compass supports defining the fairness objective and thus specifies the metric, while the Fairness Library helps select the best algorithm to achieve the adopted fairness metric. Together, both tools cover the entire implementation process of AI fairness.
Although the interaction capabilities of soft sensors have been explored in the field of HCI, such sensors have limitations in shape, wiring management, and softness due to the characteristics of their materials. In this paper, we propose a design of 3D-printed resistive soft sensors that detect deformation through internal lattice structures made of flexible conductive materials. The magnitude of deformation can be detected by measuring the resistance between a pair of electrodes. By adjusting the parameters of the lattice structure, a soft sensor with high flexibility of shape design and adjustable local softness can be realized. In addition, by inserting non-conductive structures into the sensor structure, external wires can be consolidated in one place. We present a design and fabrication pipeline, and investigate the fundamental characteristics of several patterns of sensors. We also show several applications that demonstrate the feasibility of the proposed method.
The interactive installation “Tête-à-tête 22” is a conversational sitting object that explores the intimate dependency in relationships that is difficult to achieve remotely. Using a pair of kinetic chairs that gradually become unstable, the installation requires its seatmates to work as a team to maintain their balance. The creative technology used in “Tête-à-tête 22” facilitates human-human connections by evoking a sensual experience in intriguing, reflective, and expressive ways. Ultimately, the strenuous physical interaction aims to produce a sense of togetherness in everyday life by constructing a digital entity with a symbolic meaning, an experiential possibility, and a functional role.
We demonstrate a mobile text entry system that brings full-size ten-finger typing to everyday surfaces, allowing users to type anywhere. Our wearable wristband TapType integrates accelerometers that sense vibrations arising from finger taps against a passive surface, from which our Bayesian neural network estimates a probability distribution over the fingers of the hand. Given a pre-defined key-finger mapping, our text entry decoder fuses these predictions with the character priors of an n-gram language model to decode the input text entered by the user. TapType combines high portability with sustained rapid bimanual input, which we demonstrate with examples of supplementing text input on mobile touch devices, eyes-free typing using audio feedback, and a situated Mixed Reality scenario that enables typing outside visual control with passive haptic feedback.
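The decoding step described above, fusing per-tap finger probabilities with an n-gram prior, can be illustrated with a toy bigram Viterbi decoder. The key-finger mapping, bigram table, and probabilities below are invented for illustration; TapType’s actual decoder and language model are not specified at this level of detail:

```python
import math

# Hypothetical key-finger mapping: each finger covers a set of letters.
FINGER_KEYS = {0: "qaz", 1: "wsx", 2: "edc", 3: "rfvtgb"}

def decode(tap_probs, bigram):
    """Viterbi decode over characters. tap_probs is a list of
    {finger: P(finger | vibration)} dicts, one per tap; bigram maps
    (prev_char, char) to a prior probability ("^" marks start)."""
    states = {"^": (0.0, "")}  # last char -> (best log score, decoded text)
    for probs in tap_probs:
        nxt = {}
        for prev, (score, text) in states.items():
            for finger, p_f in probs.items():
                for ch in FINGER_KEYS[finger]:
                    # tap likelihood fused with the language-model prior
                    s = score + math.log(p_f) + math.log(bigram.get((prev, ch), 1e-6))
                    if ch not in nxt or s > nxt[ch][0]:
                        nxt[ch] = (s, text + ch)
        states = nxt
    return max(states.values())[1]  # text of the highest-scoring path
```

Even with the second tap’s finger estimate being ambiguous, the bigram prior pulls the decode toward a plausible character sequence, which is the essence of the fusion described in the abstract.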
Now more than ever, people are using online platforms to communicate. Twitch, the foremost platform for live game streaming, offers many communication modalities. However, the platform lacks representation of social cues and signals of the audience experience, which are innately present in live events. For this demonstration, we present an interactive experience that captures the audience energy and response in a game streaming context. We designed a survival shooter game and integrated a custom communication modality (Commons Sense) in which remote audience attendees’ heart rates are sensed via webcam, averaged, and fed into the video game as input to affect sound, lighting, and difficulty.
The increasing sophistication and availability of Augmented and Virtual Reality (AR/VR) technologies hold the potential to transform how we teach and learn computational concepts and coding. This project develops a platform for creative coding in virtual and augmented reality. The Embodied Coding Environment (ECE) is a flow-based visual coding system designed to increase physical engagement with programming and lower the barrier to entry for novice programmers. It is conceptualized as a merged digital/physical workspace where the spatial representation of code, the visual outputs of the code, and user interactions and edit histories are co-located in a virtual 3D space.
As HCI researchers, we are constantly searching for ways to improve how we engage with our participants, especially when engaging with vulnerable populations about sensitive topics like online risk experiences. For this reason, we developed “30 Days,” a cross-platform EMA diary mobile app and web tool that collects contextualized data by engaging and motivating teens to report on their daily online experiences. We developed 30 Days to be tailored to the needs of researchers using experience sampling with teens. This interactivity demonstration provides an overview of the “30 Days” system.
Some individuals with motor impairments communicate using a single switch — such as a button click, air puff, or blink. Our software, Nomon, provides a method for single-switch users to select between items on a screen. Nomon’s flexibility stems from its probabilistic selection method, which allows potential options to be arranged arbitrarily rather than requiring they be arranged in a grid. As a result, Nomon can be used for a host of applications — including gaming, drawing, and web browsing. Focusing on accessibility, we updated the Nomon interface in collaboration with a switch user and with experts in Augmentative and Alternative Communication (AAC). We present our updated Nomon interface as an open-source web application.
Office work presents health and wellbeing challenges, triggered by working habits or environmental factors. While technological interventions are gaining popularity in the workplace, they often fall short of acknowledging personal needs. Building on approaches from personal informatics, we present our vision for the use of user-driven, situated sensor probes in an office context and how the community might deal with complex yet timely questions around the use of data to empower people to become explorers of their own habits and experiences. We demonstrate Habilyzer, an open-ended sensor toolkit for office workers that enables user-driven explorations in self-tracking their work routines. This research contributes an alternative approach to improving working habits and vitality in the workplace, moving from solution-oriented technologies to inquiry-enabling tools. Through this demonstration, we also aim to trigger discussions on the use of sensors and data in the office context, in light of privacy, consent, and data ownership.
Interactive applications using ultrasonic phased arrays of transducers (PATs) provide novel opportunities to create 3D displays. They show visual content using levitated particles such as polystyrene beads, thread, or fabric that dynamically change to render volumetric shapes in mid-air. Typically, this technology uses fixed arrays of transducers (e.g., two boards vertically or horizontally aligned) to display interactive content: a fixed system of reference (SoR), defined by the levitation device, with dynamic mid-air content. This demo proposes a novel setup featuring a dynamic SoR for the device together with dynamic mid-air content. We do this by mounting a PAT-based levitation device onto a two-gimbal mechanism, resulting in a device with a dynamic coordinate system for volumetric animations. We describe our setup and demonstration, which aims to create an intriguing experience where both the content and the device itself can freely move in 3D space, challenging the user's expectations about the content displayed or even the principles enabling it.
This demonstration presents QuadStretch, a multidimensional skin stretch display that is worn on the forearm for VR interaction. QuadStretch realizes a light and flexible form factor without a large frame grounding the device on the arm, and provides rich haptic feedback through the high expressive performance of the stretch modality and various stimulation sites around the forearm. In the demonstration, the presenter lets participants experience six VR interaction scenarios with QuadStretch feedback: Boxing, Pistol, Archery, Slingshot, Wings, and Climbing. In each scenario, the user’s actions are mapped to skin stretch parameters and fed back, allowing users to experience QuadStretch’s large output space, which enables an immersive VR experience.
This interactive virtual museum provides insights into LGBTIQ+ issues by presenting the history and utilization of pride flags and different legal situations worldwide and by pointing out the meaning of identity markers and their interconnectedness. This is complemented with an intimate engagement through photography, personal narratives from members of the LGBTIQ+ community and a fully immersive pride parade, allowing users to engage and learn with various stylistic, factual and fun exhibitions.
From creating input devices to rendering tangible information, the field of HCI is interested in using kinematic mechanisms to create human-computer interfaces. Yet, due to fabrication and design challenges, it is often difficult to create kinematic devices that are compact and have multiple reconfigurable motional degrees of freedom (DOFs) depending on the interaction scenarios. In this work, we combine compliant mechanisms (CMs) with tensioning cables to create dynamically reconfigurable kinematic mechanisms. The devices' kinematics (DOFs) are enabled and determined by the layout of bendable rods. The additional cables function as on-demand motion constraints that can dynamically lock or unlock the mechanism's DOFs as they are tightened or loosened. We provide algorithms and a design tool prototype to help users design such kinematic devices. We also demonstrate various HCI use cases including a kinematic haptic display, a haptic proxy, and a multimodal input device.
This paper presents our approach to visualizing, sonifying, and predicting near-future wildfires, experienced through contactless interaction. We provide an interaction tool that depicts the causes and results of wildfire and promotes awareness of environmental issues by presenting wildfire forecasting results to the audience. Multimodal interaction allows the audience to dynamically experience the changes in wildfire over time in two representative locations (California, United States, and South Korea). The interactive multimodal data visualization and sonification depict the past, present, and future of wildfire. This data-driven design was installed in an art gallery and presented to audience members. Contactless user interaction with Leap Motion cameras was used during the pandemic period for hygienic interaction. In this paper, we describe the design process and how this interface was developed around environmental issues, and discuss informal user responses from the art gallery setup.
The fusion of music-information processing and human physical functions will enable new musical experiences to be created. We developed an interactive music system that provides users with the experience of conducting a musical performance. This system converts music, arranged using a melody morphing method based on the Generative Theory of Tonal Music, into electrical muscle stimulation to control the body movements of multiple performers (e.g., musicians and dancers) via devices attached to the performers’ hands and feet. The melodies used in the system are divided into segments, and each segment has multiple variations of melodies. The user can interactively control the performers, and thus the actual performance.
We present the AR Magic Lantern, an augmented reality flashlight that enables its holders to see and interact with situated virtual content that it projects onto the physical surfaces around them. The AR Magic Lantern tracks its position and orientation, so the projected content is truly a part of the physical space and is thus relevant for all members of a group. This makes it especially suitable for implementations of the World-as-Support AR paradigm, where augmented experiences should be shared and situated. We contribute the design principles and methodology used in the development of the hardware and software of the AR Magic Lantern. We also describe the interactive AR experience that we developed for a local cultural heritage site and which will be demoed during CHI 2022.
Our everyday digital tasks require access to information from a wide range of applications and systems. Although traditional search systems can help find information, they usually operate within one application (e.g., email client or web browser) and require the user’s cognitive effort and attention to formulate proper search queries. In this paper, we demonstrate EntityBot, a system that proactively provides useful and supporting entities across application boundaries without requiring explicit query formulation. Our methodology is to exploit the context from screen frames captured every 2 seconds to recommend relevant entities for the current task. Recommendations are not restricted to only documents but include various kinds of entities, such as applications, documents, contact persons, and keywords representing the tasks. Recommendations are actionable, that is, a user can perform actions on recommended entities, such as opening documents and applications. The EntityBot also includes support for interactivity, allowing the user to affect the recommendations by providing explicit feedback on the entities. The usefulness of entity recommendations and their impact on user behavior have been evaluated in a user study based on real-world tasks. Quantitative and qualitative results suggest that the system had an actual impact on the tasks and led to high user satisfaction.
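The capture-and-recommend loop described in this abstract can be illustrated with a minimal sketch: periodically extract keywords from a captured screen frame and rank known entities by overlap with the on-screen context. All names, keyword sets, and the scoring function here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an EntityBot-style proactive recommender.
# The abstract states frames are captured every 2 seconds; the entity
# index and scoring below are invented for illustration.
from collections import Counter

CAPTURE_INTERVAL_S = 2  # capture cadence mentioned in the abstract

# Toy index mapping entities (documents, contacts, apps) to keywords.
ENTITIES = {
    "quarterly_report.pdf": {"quarterly", "report", "revenue"},
    "alice@example.com":    {"alice", "meeting", "schedule"},
    "spreadsheet_app":      {"budget", "cells", "formulas"},
}

def extract_keywords(frame_text: str) -> Counter:
    """Stand-in for OCR + keyword extraction on a captured frame."""
    return Counter(w.lower().strip(".,") for w in frame_text.split())

def recommend(frame_text: str, top_k: int = 2) -> list:
    """Rank entities by keyword overlap with the on-screen context."""
    context = extract_keywords(frame_text)
    scored = {name: sum(context[kw] for kw in kws)
              for name, kws in ENTITIES.items()}
    ranked = sorted(scored, key=scored.get, reverse=True)
    return [e for e in ranked[:top_k] if scored[e] > 0]

print(recommend("Draft the quarterly revenue report before the meeting"))
```

A real system would replace the keyword index with the paper's entity-linking and feedback mechanisms; the point is only the proactive, query-free flow from screen context to actionable entities.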
Voice user interfaces (VUIs) have problems with discoverability and learnability. Conventional VUIs have tried to solve these problems by listing voice commands for on-screen features in combination with GUIs. However, with these VUI help tools, it is difficult to know how accurately the commands must be spoken, and to determine whether the target operation is being executed when words are rephrased or spoken incorrectly. As a result, users become preoccupied with speaking phrases correctly, and the advantage of VUIs, namely that they can be used without awareness of the operation method, is not fully realized. To address these issues, we propose KaraokeVUI, a VUI help tool that supports voice operation with feedback by displaying spoken words, filling in blanks or overlaying them on phrases, just like on a karaoke screen. In this paper, we evaluate KaraokeVUI’s usefulness and usability.
Virtual reality allows us to operate bodies that differ substantially from our own. However, avatars with different topologies than the human form require control schemes and interfaces that effectively translate between user and avatar. In this position paper, we discuss the concept of “non-anthropomorphic designs” that are inhuman in not just appearance, but in topology and/or motion. We examine current implementations of real and virtual non-anthropomorphic hands (NAHs), finding that existing NAHs generally rely on one-to-one or reductionist control strategies that limit their possible forms. We discuss the structure of a functional NAH system and design considerations for each component, including metrics for evaluating NAH system performance. The terminology and design considerations presented here support future research on NAHs in virtual and physical reality, as well as virtual and physical tool design, the body schema, and novel control interfaces and mappings.
Non-fungible tokens (NFTs) have been a defining trend for design, technology, and business in 2021. The value, legitimacy, and utility of NFTs are disputed: proponents highlight revolutionary economic and cultural potentials of an open, secure, and immutable ownership database, while opponents are displeased by the environmental issues and abundant wrongdoing in the ecosystem. Nevertheless, the phenomenon is relevant to HCI, and signifies important developments for future interactive products. To better understand the NFT phenomenon, and to inform future HCI research and design, we investigated the stakeholders in the NFT ecosystem and relations between them. Based on open data we mined from the social news website Hacker News, we contribute the first data-backed model of stakeholders in the NFT ecosystem. The model reveals a nuanced account of the outlooks of creators, owners, and technologists; identifies investment firms and auction houses as arbiters of knowledge and value; and presents implications for future research.
While users often share accounts with others, most accounts currently support only single-user interaction. Within industry, the profiles that Netflix and Disney+ provide within accounts are testaments to popular services identifying and responding to user needs in this context, but the solutions remain mostly naïve. Within academia, while sharing practices are of interest, the practicalities of dealing with them are yet to be studied. This paper highlights this gap in research and reports the preliminary findings of 4 user focus groups that reveal practical challenges and future expectations around the experience of sharing, social implications, and user privacy, which, when accounted for, would support users in their sharing interactions. We intend to extend these findings by integrating them with expert interviews with ‘makers’ who research and work on these technologies, to produce a holistic set of design recommendations that form a practical guide for supporting account sharing.
We study the effects of network latency and jitter on visual task performance between participants in video conferencing. Users of video conferencing tools commonly report a phenomenon known as Zoom fatigue. We postulate that this is a consequence of the effort required to maintain visual communication in the face of network latency and jitter. Understanding the relationship between network latency, jitter, and Zoom fatigue can inform the design of visual computing methods to mitigate these effects. To study the causes of Zoom fatigue, we analyzed a visual communication task via video conferencing through a user study employing a game of Charades, since it relies on highly visual communication. Our findings demonstrate a direct correlation between network latency and user performance while playing the game, and show that user engagement plays a vital role, with high engagement levels indicating adaptation to the latency.
In this paper, we present ChromoPrint, a method which leverages photochromic dyes to convert resin-based 3D printing - a process that traditionally prints objects from a single material and therefore only a single color - into a multi-color 3D printing process. Rather than using a standard single-color resin, our resin contains a mixture of photochromic dyes that can transition into different colors when exposed to specific wavelengths of light. We modify an existing resin printer to incorporate an RGB projection system which can control each of the photochromic dyes in the resin during printing. By saturating the dyes with a UV light prior to mixing into the resin, and then projecting combinations of RGB light onto each layer after it has been UV cured, we can color objects directly during the printing process. We discuss the formulation of the photochromic resin, the modifications to the printer, the user interface that allows a user to apply color to a 3D model, and the software pipeline that outputs the build instructions to the 3D printer, including the exposure times for curing with UV light and for coloring with the RGB projector.
Self-harm is particularly prevalent amongst young people, with adverse consequences in terms of later wellbeing, vocational outcomes, and increased risk of suicide. Although many studies have explored self-harm, it remains difficult to predict and prevent, in part because the mental states that typically precede self-harm are highly variable and triggers can be unclear. The Experience Sampling Method (ESM), where participants are asked to record personal data many times during the day, allows researchers to capture these variable mental states and their relation to self-harming behaviour. We conducted a series of co-design workshops involving young people with lived experience of self-harm and researchers with a special interest in ESM to identify the requirements of both groups for an ESM digital platform for investigating self-harm. We describe their key requirements, some of which are conflicting, and suggest ways that these could be addressed to develop an effective ESM platform.
Mind-body therapies aim to improve health by combining physical and mental exercises. Recent developments tend to incorporate virtual reality (VR) into their design and execution, but there is a lack of research concerning the inclusion of virtual bodies and their effect on body awareness in these designs. In this study, 24 participants performed in-VR body awareness movement tasks in front of a virtual mirror while embodying a photorealistic, personalized avatar. Subsequently, they performed a heartbeat counting task and rated their perceived body awareness and sense of embodiment towards the avatar. We found a significant relationship between sense of embodiment and self-reported body awareness but not between sense of embodiment and heartbeat counting. Future work can build on these findings and further explore the relationship between avatar embodiment and body awareness.
Designing aesthetically pleasing single-page graphic designs (”banners”) that appeal to the target recipients is non-trivial and requires considerable human effort. Designers often start by collecting assets like background images, inspirational banners, and relevant taglines/textual phrases, and iterate over variants of the designs by testing different combinations of these assets for their current task. To expand creative processes beyond professionals, it is crucial to accelerate this tedious process of creating design variants. To this end, we propose an algorithm that takes as input multiple image and text elements to generate several visually pleasing banner designs harmonized for text colors, font styles, and the placement of these multimodal assets. The generated layout accounts for background image saliency and uses this to select typographic and color variants yielding the harmonized outputs. We demonstrate the effectiveness of the proposed algorithm in generating aesthetic banners with the help of a crowdsourced experiment.
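The saliency-aware placement step mentioned above can be sketched simply: slide a candidate text box over a saliency map of the background and place it where it covers the least visual importance. The grid, box size, and exhaustive scan below are toy assumptions for illustration, not the paper's algorithm.

```python
# Illustrative saliency-aware placement: find the region of a background
# saliency grid (higher = more visually important) where a text box
# would occlude the least salient content.
def least_salient_region(saliency, box_h, box_w):
    """Slide a box over the grid; return the top-left (row, col) of the
    placement covering the least total saliency."""
    H, W = len(saliency), len(saliency[0])
    best, best_pos = float("inf"), (0, 0)
    for y in range(H - box_h + 1):
        for x in range(W - box_w + 1):
            cost = sum(saliency[yy][xx]
                       for yy in range(y, y + box_h)
                       for xx in range(x, x + box_w))
            if cost < best:
                best, best_pos = cost, (y, x)
    return best_pos

# Toy saliency grid: a bright subject occupies the top-left corner,
# so text should land elsewhere.
sal = [[1.0 if (r < 3 and c < 4) else 0.0 for c in range(8)]
       for r in range(6)]
print(least_salient_region(sal, box_h=2, box_w=3))
```

The paper's pipeline additionally harmonizes text colors and font styles against the chosen region; this sketch covers only the placement-by-saliency idea.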
The ongoing Covid-19 pandemic has impacted our everyday lives and demands that everyone take countermeasures such as wearing masks or disinfecting their hands. However, while previous work suggests that these countermeasures may profoundly impact biometric authentication, an investigation of the actual impact is still missing. Hence, in this work, we present our findings from an online survey (n=334) on experienced changes in device usage and failures of authentication. Our results show significant changes in personal and shared device usage, as well as a significant increase in experienced failures when comparing the present situation to before the Covid-19 pandemic. From our qualitative analysis of participants’ responses, we derive potential reasons for these changes in device usage and increases in authentication failures. Our findings suggest that making authentication contactless is only one of the aspects relevant to countering the novel challenges caused by the pandemic.
Three-dimensional (3D) models have widely been used in medical diagnosis and planning tasks. Haptic virtual reality (VR) interfaces implemented by using VR equipment and haptic devices have previously been proposed for these medical 3D manipulation tasks. They have been found to be faster and more accurate in a medical marking task with novice users, compared with the traditional 2D interaction technique that uses a mouse and a 2D display. In this study, we recruited medical experts who do the medical landmarking task as part of their daily work to examine the performance of haptic VR interfaces and to investigate experts’ user experience. There were no statistically significant differences between the haptic user interfaces and the mouse-based 2D interface in terms of task completion time and marking accuracy. Based on experts’ subjective data, haptic VR interfaces showed great potential for medical work because of the natural input methods and haptic feedback.
Mobile crowdsourcing enables people to learn location-related information from others with diverse experiences and opinions. However, little research has investigated the quality that users of mobile-crowdsourcing platforms expect of location-related information, or the levels and types of relevant experience such users expect crowd members to possess. To fill this gap, we first conducted an interview study with 22 participants, which yielded five key information properties of the answers to location-based questions: objectivity, relativity, specificity, temporal regularity, and variability. Based on their stated perceptions of these properties of the requested information, we deemed each participant to desire at least one, and up to 10, main qualities of the information, and seven main aspects of contributors’ experience. A follow-up survey study was then used to quantify the characteristics of a list of location-related information according to the information properties that the 139 respondents perceived that information to have.
We propose a model to achieve human localization in indoor environments through intelligent conversation between users and an agent. We investigated the feasibility of conversational localization by conducting two studies. First, we conducted a Wizard-of-Oz study with N = 7 participants and studied the feasibility of localizing users through conversation. We identified challenges posed by users’ language and behavior. Second, we collected N = 800 user descriptions of virtual indoor locations from N = 80 Amazon Mechanical Turk participants to analyze user language. We explored the effects of conversational agent behavior and observed that people describe indoor locations differently based on how the agent presents itself. We devise the “Entity Suitability Scale,” a concrete and scalable approach to obtaining information that supports localization from the myriad of indoor entities users mention in their descriptions. Through this study, we lay the foundation for our proposed paradigm of conversational localization.
Privacy-invasive sensors such as cameras and microphones in smart devices (e.g., Facebook Portal, Google Nest Hub Max, Amazon Echo Show) are now ubiquitous in users’ private settings, which raises privacy concerns about always-on listening and monitoring. To address such privacy concerns, we developed ParaSight, a smart speaker add-on device that communicates sensing-data transfers to users. ParaSight transmits data to a smart speaker connected to a server while speaking out loud, as utterances, what is being transmitted, leveraging the smart speaker’s ability to understand human language. Communicating sensing data as utterances brings two benefits: (i) a user can hear and understand what data, and when, are being transferred, as utterances are human-understandable; (ii) raw data are processed locally, and only filtered signals are uploaded to a server. ParaSight, for example, can receive and locally process a user’s raw audio and video data (e.g., snoring sounds, workout videos), and transmit only the filtered data to a server. We demonstrate two work-in-progress applications—snoring detection and home training applications—to show ParaSight’s use cases.
Mobile navigation has become a ubiquitous technology through mobile devices and smartphones. One of these devices’ most-used navigation methods is the turn-by-turn method, first used in car-based navigation systems. However, pedestrians and cyclists increasingly use other methods, such as the as-the-crow-flies (ATCF) navigation method, instead of classical turn-by-turn navigation. Instead of giving fixed instructions, ATCF navigation only indicates the straight-line direction to the destination, and users have to make their own decisions about when and where to turn. In this paper, we improve the route predictions for alternative navigation methods such as ATCF by incorporating a model of users’ angle estimation error. We show that the resulting route predictions are closer to actual human navigation behavior than methods proposed in related work.
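One common way to fold an angle-estimation-error model into route prediction, sketched here under assumed parameters rather than the paper's actual model, is to weight each candidate turn at an intersection by a Gaussian over the deviation between that turn's direction and the straight-line bearing to the destination.

```python
# Hedged sketch: probability that an ATCF user takes each exit of an
# intersection, assuming their perceived destination bearing carries
# Gaussian error. sigma_deg and the bearings are illustrative.
import math

def turn_probabilities(candidate_bearings, target_bearing, sigma_deg=25.0):
    """Weight each candidate exit by a Gaussian over its angular
    deviation from the straight-line bearing to the destination."""
    def wrap(d):
        # Smallest absolute angular difference, in degrees.
        return abs((d + 180) % 360 - 180)
    weights = [
        math.exp(-wrap(b - target_bearing) ** 2 / (2 * sigma_deg ** 2))
        for b in candidate_bearings
    ]
    total = sum(weights)
    return [w / total for w in weights]

# Intersection with three exits; destination lies at bearing 10 degrees.
probs = turn_probabilities([0, 90, 180], target_bearing=10)
print(probs)  # the exit closest to the destination bearing dominates
```

Chaining such per-intersection distributions over a street graph yields a distribution over full routes, which is the general shape of prediction the abstract describes.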
Contemporary 360° video players do not provide ways to let people explore the videos together. Tourgether360 addresses this problem for 360° tour videos using a pseudo-spatial navigation technique that provides both an overhead “context” view of the environment as a minimap, as well as a shared pseudo-3D environment for exploring the video. Collaborators appear as avatars along a track depending on their position in the video timeline and can point and synchronize their playback. In this work, we describe the intellectual precedents for this work, our design goals, and our implementation approach of Tourgether360. Finally, we discuss future work based on this prototype.
‘Red blanket’ (RB) processes accelerate the surgical treatment of patients who have suffered severe trauma. Although the rapid start of surgery has positive effects on patient outcomes, the short time available to prepare for a case and the limited information available at the outset of care can be challenging for physicians. To support the lead of an RB team in making the right clinical decisions quickly, this work proposes a head-worn display (HWD) based information system. HWDs could be valuable in RB settings because they provide information in a continuous and hands-free manner, potentially allowing physicians to access case-relevant data while treating the patient. Through an interview study with seven anaesthetists, we studied the work practices of RB team leads and identified user requirements for the HWD-based support system. We discuss opportunities and challenges for HWDs in RB settings and provide a detailed description of the developed HWD application.
There has been a growing interest in HCI in understanding the specific technological needs of people with dementia and supporting them in self-managing daily activities. One of the most difficult challenges to address is supporting the fluctuating accessibility needs of people with dementia, which vary with the specific type of dementia and the progression of the condition. Researchers have identified auto-personalized interfaces, and more recently, Artificial Intelligence (AI) driven personalization, as a potential solution to making commercial technology accessible in a scalable manner for users with fluctuating ability. However, there is a lack of understanding of how people with dementia perceive AI as an aid to their everyday technology use and of its role in their overall self-management systems, which include other non-AI technology and human assistance. In this paper, we present future directions for the design of AI-based systems to personalize an interface for dementia-related changes in different types of memory, along with expectations for AI interactions with the user with dementia.
As virtual reality (VR) is emerging in the tech sector, developers and designers are under pressure to create immersive experiences for their products. However, the current curricula from top institutions focus primarily on technical considerations for building VR applications, missing out on concerns and usability problems specific to VR interaction design. To better understand current needs, we examined the status quo of existing university pedagogies by carrying out a content analysis of undergraduate and graduate courses about VR and related areas offered in the major citadels of learning and conducting interviews with 7 industry experts. Our analysis reveals that the current teaching practices underemphasize design thinking, prototyping, and evaluation skills, while focusing on technical implementation. We recommend VR curricula should emphasize design principles and guidelines, offer training in prototyping and ideation, prioritize practical design exercises while providing industry insights, and encourage students to solve VR design problems beyond the classroom.
Identifying and raising awareness about web misinformation is crucial, as the Internet has become a major source of information for many people. We introduce MisVis, a web-based interactive tool that helps users better assess misinformation websites and understand their connections with other misinformation sites through visual explanations. Unlike existing techniques that primarily focus on alerting users to misinformation, MisVis provides new ways to visualize how a site is involved in spreading information on the web and social media. Through MisVis, we contribute novel interactive visual designs: the Summary View helps users understand a site’s overall reliability by showing the distributions of its linked websites; the Graph View presents users with the connection details of how a site is linked to other misinformation websites. In collaboration with researchers at a large security company, we are working to deploy MisVis as a web browser extension for broader impact.
The European Large Logistics Lander (EL3) is being designed to carry out cargo delivery missions in support of future lunar ground crews. The capacity of virtual reality (VR) to visualize and interactively simulate the unique lunar environment makes it a potentially powerful design tool during the early development stages of EL3, as well as other relevant technologies. Based on input from the EL3 development team, we have produced a VR-based operational scenario featuring a hypothetical configuration of the lander. Relying on HCI research methods, we have subsequently evaluated this scenario with relevant experts (n=10). Qualitative findings from this initial pilot study have demonstrated the usefulness of VR as a design tool in this context, but likewise surfaced a number of limitations in the form of potentially impaired validity and generalizability. We conclude by outlining our future research plan and reflect on the potential use of physical stimuli to improve the validity of VR-based simulations in forthcoming design activities.
Existing text entry techniques for virtual reality are either slow and error-prone, stationary, break immersion, or physically demanding. We present Shapeshifter, a technique that enables text entry in virtual reality by performing gestures and fluctuating contact force on any opaque diffusely reflective surface, including the human body. For this, we developed a digital thimble that users wear on their index finger. The thimble uses an optical sensor to track the finger and a pressure sensor to detect touch and contact force. In a week-long in-the-wild pilot study, Shapeshifter yielded on average 11 wpm on flat surfaces (e.g., a desk) and 9 wpm on the lap when sitting down, and 8 wpm on the palm and back of the hand when standing up in text composition tasks. A simulation study predicted a 27.3 wpm error-free text entry rate for novice users in transcription typing tasks on a desk.
Speaking rate, or the speed at which a person speaks, is a fundamental user characteristic. This work investigates the rate at which users speak when interacting with speech and silent speech-based methods. Results revealed that native users speak about 8% faster than non-native users, but both groups slow down at comparable rates (34–40%) when interacting with these methods, mostly to increase their accuracy rates. A follow-up experiment confirms that slowing down does improve the accuracy of these methods. Both methods yield the best accuracy rates when speaking at 0.75x of the actual speaking rate. A post-hoc error analysis revealed that speech and silent speech methods and native and non-native speakers are susceptible to different types of errors.
With the development of social media, various rumors can be easily spread on the Internet, and such rumors can have serious negative effects on society. Thus, it has become a critical task for social media platforms to deal with suspected rumors. However, due to the lack of effective tools, it is often difficult for platform administrators to analyze and validate rumors efficiently from a large volume of information on a social media platform. We worked closely with social media platform administrators for four months to summarize their requirements for identifying and analyzing rumors, and further proposed an interactive visual analytics system, RumorLens, to help them deal with rumors efficiently and gain an in-depth understanding of the patterns of rumor spreading. RumorLens integrates natural language processing (NLP) and other data processing techniques with visualization techniques to facilitate interactive analysis and validation of suspected rumors. We propose well-coordinated visualizations to provide users with three levels of details of suspected rumors: an overview displays both the spatial distribution and temporal evolution of suspected rumors; a projection view leverages a metaphor-based glyph to represent each suspected rumor and further enables users to gain a quick understanding of their overall characteristics and similarity with each other; a propagation view visualizes the dynamic spreading details of a suspected rumor with a novel circular visualization design, and facilitates interactive analysis and validation of rumors in a compact manner. Using a real-world dataset collected from Sina Weibo, one case study with a domain expert was conducted to evaluate RumorLens. The results demonstrate the usefulness and effectiveness of our approach.
Mindfulness, a practice of maintaining awareness by bringing attention to the present without judgment, has many mental and physical well-being benefits when practiced consistently. Many technologies have been invented to support mindfulness practice: mobile apps, web resources, virtual reality environments, and wearables. We present findings from a semi-structured interview study with 6 experienced mindfulness practitioners to understand their daily practice experiences and technologies they incorporate in their practice. Participants identify the benefits and challenges of developing long-term commitment to mindfulness practice, and combine informal mindfulness practice in their daily activities in addition to formal meditation. While conflicted about including and even relying on technology, they adopt and appropriate a range of technologies in their daily practice, some of which are not designed for mindfulness, such as productivity tools and media streaming. Based on our findings, we suggest that designers of mindfulness technologies be cautious about applying behavioral change design principles and go beyond meditation to better situate the tools in practitioners’ existing daily routines.
Conversational AI is changing the way we interact with digital services. However, there is still a lack of conversational paradigms facilitating access to the Web. This paper discusses a new approach for Conversational Web Browsing, and introduces a design space identified through a user-centered process that involved 26 blind and visually impaired users. The paper also illustrates the conceptual architecture of a software framework that can automatically generate conversational agents for the Conversational Web.
Data glyphs continue to gain popularity for information communication. However, the cognition and perception theory of glyphs is largely unknown for many tasks, including “categorization”. Categorization tasks are common in everyday life, from sorting objects to a doctor diagnosing a patient’s disease. However, it is unknown how glyph designs, specifically anthropomorphic, human-like representations, which prior visualization research has shown to improve information recall, affect accuracy in a categorization task. To better understand how people comprehend and perceive glyphs for categorization, including anthropomorphic representations, we conducted a crowdsourced experiment to evaluate whether more human-like glyphs would lead to higher categorization accuracy. Contrary to our hypothesis, we found evidence that subjects are more accurate with a less anthropomorphic glyph. A post-hoc analysis also reveals that anthropomorphic glyphs introduce biases due to their anatomically salient features. Based on these results, we propose design guidelines for glyphs used in categorization tasks. The supplemental material of this paper is available at https://osf.io/3bgcv/.
Electric muscle stimulation (EMS) offers rich opportunities for haptic interaction. However, there is no comprehensive study assessing the user acceptance of this technology. Consequently, we synthesized four scenarios (motor learning, virtual reality, media player control, and pedestrian safety) based on various dimensions identified through a literature review. We assessed their user acceptance in an online survey (N=113) using the technology acceptance model (TAM). Our findings suggest that users may reject EMS systems if they perceive a high level of risk and a lack of control over the environment. This was evident even in scenarios that could provide high benefits to users, such as accelerating the learning curve in motor tasks. In contrast, EMS seems to be widely accepted for games and entertainment, represented by the virtual reality scenario in our study. HCI researchers should employ EMS to create more engaging VR experiences, keep users in control, and provide them with the option of learning skills with or without EMS. Our results will help build better EMS systems accepted by users.
Connecting digital information with the physical is one of the essential ideas of tangible user interfaces. The design of the physical representation is important especially for specialised domains like surgery planning, because surgeons rely heavily on their tactile senses. Therefore, this research work investigates the effect of a soft and a hard 3D model as an interaction device for virtual reality surgical planning. A user study with 13 surgeons reveals a clear preference for the softer, more realistic material and a significantly higher haptic user experience for the soft model compared to the hard one. These results advocate for stressing material aspects along with the interaction design in domains with an inherently high focus on tactile aspects.
The majority of consent pop-ups on the web do not meet the requirements for legally valid consent laid out in the General Data Protection Regulation (GDPR). In the face of a lack of enforcement, we present the browser extension Consent-O-Matic which uses adversarial interoperability to automatically answer these pop-ups based on the user’s preferences. We document how the current implementation of these pop-ups support and inhibit interoperability, focussing on the difference between static and dynamic HTML, the quality of the semantic markup, and the visibility of the system’s state; and we present the implementation of Consent-O-Matic. Lastly, we discuss the possibilities, limitations, and concerns of an adversarial approach.
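The rule-driven answering of pop-ups that this abstract describes can be conveyed with a small conceptual sketch: represent each recognized consent-management provider as a declarative rule mapping consent categories to toggle selectors, and turn the user's preferences into a plan of click actions. The rule format, selectors, and category names below are hypothetical, not Consent-O-Matic's actual rule schema.

```python
# Conceptual sketch of preference-driven consent answering. A real
# browser extension would execute these steps against the page DOM;
# here we only compute the action plan. All selectors are invented.
PREFERENCES = {"necessary": True, "analytics": False, "marketing": False}

CMP_RULE = {  # one rule per recognized consent-management provider
    "toggles": {
        "analytics": "#consent-analytics",
        "marketing": "#consent-marketing",
    },
    "save_button": "button.save-choices",
}

def plan_actions(rule, prefs):
    """Produce (action, selector) steps answering a pop-up per the
    user's preferences: switch off declined categories, then save."""
    steps = [
        ("set_off", sel)
        for cat, sel in rule["toggles"].items()
        if not prefs.get(cat, False)
    ]
    steps.append(("click", rule["save_button"]))
    return steps

print(plan_actions(CMP_RULE, PREFERENCES))
```

The paper's discussion of static versus dynamic HTML and markup quality maps directly onto how reliable such selectors can be in practice.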
Many autistic children can have difficulty communicating, understanding others, and interacting with new and unfamiliar environments. At times they may suffer from a meltdown. The major contributing factor to meltdowns is sensory overwhelm. Technological solutions have shown promise in improving the quality of life for autistic children; however, little exists to manage meltdowns. In this work with stakeholders, we design and deploy a low-cost, mobile VR application to provide relief during sensory discomfort. Through the analysis of surveys from 88 stakeholders from a variety of groups (i.e., autistic adults, children with autism, parents of autistic individuals, and medical practitioners), we identified three key features regarding ways to manage meltdowns: escape, distract, and wait it out. These insights were implemented in a system, which was then remotely deployed with 6 families. Findings and future steps are discussed.
Social deduction or deception games are games in which a player or team of players actively deceives other players who are trying to discover hidden roles as part of the win condition. This category includes games like One Night Werewolf, Avalon, and Mafia. In this pilot study (N=24), we examined how the addition of visual displays of heart rate (HR) signals affected players’ gameplay in a six-player version of Mafia in online and in-person settings. We also examined moments of synchrony in HR data during critical moments of gameplay. We find that seeing the signals affected players’ strategies and influenced their gameplay, and that there were moments of HR synchrony during vital game events. These results suggest that HR, when available, is used by players in making game decisions, and that players’ HR can be a measure of like-minded player decisions. Future work can explore how other biosignals are utilized by players of social deception games, and how those signals may undergo unconscious synchrony.
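As a hedged illustration (not the authors’ analysis pipeline), one common way to quantify HR synchrony between two players over a shared time window is the Pearson correlation of their HR samples:

```python
# Illustrative sketch only: Pearson correlation as a simple synchrony
# measure for two equal-length heart-rate series. The sample values
# below are hypothetical, not data from the study.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length HR series."""
    assert len(xs) == len(ys) and len(xs) > 1
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Two players' HR (bpm) sampled around a "vital game event".
p1 = [72, 75, 80, 86, 90, 88]
p2 = [70, 74, 79, 84, 91, 87]
r = pearson(p1, p2)
print(f"HR synchrony r = {r:.2f}")  # values near 1 suggest synchrony
```

In practice such analyses are usually computed over sliding windows around game events and compared against shuffled baselines, but the windowing scheme here would be an assumption.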
This paper complements the article “Restraints as a Mechanic for Bodily Play” by presenting the paraphernalia of games as different mechanics that address the surrounding and contextual factors of movement-based game and play activities, while restraints address the players’ bodily preconditions. Based on an analysis of a collection of traditional games combined as bridging concepts, the mechanics are derived and exemplified in traditional and digital game exemplars and explained using theoretical concepts from phenomenology and postphenomenology. The presented mechanics provide a roadmap to design for and encourage bodily play by drawing on the historical development of (i.e., traditional) play and game activities and leveraging this knowledge into the domain of digital and technology-supported games and play activities.
Misinformation spread through social media has become a fundamental challenge in modern society. Recent studies have evaluated various strategies for addressing this problem, such as modifying social media platforms or educating people about misinformation, with varying degrees of success. Our goal is to develop a new strategy for countering misinformation: intelligent tools that encourage social media users to foster metacognitive skills “in the wild.” As a first step, we conducted focus groups with social media users to discover how they can best be supported in combating misinformation. Qualitative analyses of the discussions revealed that people find it difficult to detect misinformation. Findings also indicated a need for, yet a lack of, resources to support cross-validation of information. Moreover, misinformation had a nuanced emotional impact on people. We present suggestions for the design of intelligent tools that support social media users in information selection, information engagement, and emotional response management.
Unconscious behaviors are one indicator of the human perception process from a psychological perspective. Hand gestures are behavioral responses to given stimuli, and mouse usage in Human-Computer Interaction (HCI) reflects the hand gestures through which individuals process information. This paper presents an investigation of the correlation between unconscious mouse actions and human cognitive workload. We extracted mouse behaviors from a Robot Operating System (ROS) file-based dataset in which user responses are reproducible, and analyzed redundant mouse movements made while completing a dual n-back game by solely pressing the left and right buttons. Starting from the hypothesis that unconscious mouse behaviors predict different levels of cognitive load, we statistically analyzed the mouse movements. We also validated the mouse behaviors against other modalities in the dataset, including self-questionnaire and eye-blinking results. As a result, we found that unconsciously occurring mouse behaviors correlate with human cognitive workload.
Human Intelligence Tasks (HITs) allow people to collect and curate labeled data from multiple annotators. These labels are then often aggregated to create an annotated dataset suitable for supervised machine learning tasks. The most popular label aggregation method is majority voting, where each item in the dataset is assigned the most common label from the annotators. This approach is optimal when annotators are unbiased domain experts. In this paper, we propose Debiased Label Aggregation (DLA), an alternative method for label aggregation in subjective HITs, where cross-annotator agreement varies. DLA leverages annotators’ voting behavior patterns to weight labels. Our experiments show that DLA outperforms majority voting in several performance metrics, e.g., an increase of 20 percentage points in the F1 measure before data augmentation, and an increase of 35 percentage points in the same measure after data augmentation. Since DLA is deceptively simple, we hope it will help researchers tackle subjective labeling tasks.
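To make the baseline concrete, here is a minimal sketch of majority voting alongside a generic weighted-vote variant. The actual DLA weighting scheme is not reproduced here; the per-annotator weights below are hypothetical placeholders for weights learned from voting behavior.

```python
# Sketch: majority voting (the baseline named in the abstract) and a
# generic weighted variant. The weights are hypothetical, not DLA's.
from collections import Counter, defaultdict

def majority_vote(labels):
    """Most common label among annotators for one item."""
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(annotations, weights):
    """Label with the highest total annotator weight for one item."""
    score = defaultdict(float)
    for annotator, label in annotations:
        score[label] += weights.get(annotator, 1.0)
    return max(score, key=score.get)

item = [("a1", "pos"), ("a2", "neg"), ("a3", "neg")]
print(majority_vote([lbl for _, lbl in item]))                  # neg
print(weighted_vote(item, {"a1": 2.5, "a2": 1.0, "a3": 1.0}))   # pos
```

The example shows how a sufficiently trusted annotator can overturn a raw majority, which is the kind of behavior any weighted aggregation scheme enables.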
Augmented reality (AR) has a diverse range of applications, including language teaching. When studying a foreign language, one of the biggest challenges learners face is memorizing new vocabulary. While augmented holograms are a promising means of supporting this memorization process, few studies have explored their potential in the language learning context. We demonstrate the possibility of using flashcards along with an expressive holographic agent for vocabulary learning. Users scan a flashcard and play an animation connected with an emotion related to the word they are seeing. Our goal is to propose an alternative to the traditional use of flashcards and to introduce another way of using AR in the association process.
Urban Air Mobility (UAM) is an emerging form of aerial transport and is expected to pave the way for a new mobility experience. In order for UAM to be successfully integrated into the current urban transportation system, user experience (UX) considerations need to be explored. However, few studies have examined user needs and requirements from the perspective of the transportation experience. In this regard, our research team conducted workshops with vehicle experts to explore important UX considerations for UAM usage in the initial and mature phases of UAM operations. We also uncovered UAM usage motives, potential use cases, and three expected forms of UAM operations. The findings from this study provide insights and design guidelines for UAM UX developers in future urban contexts.
Learning about menstruation in early adolescence could reduce teenagers’ misunderstanding of it and help them treat it appropriately. This paper explores a tangible game through which teenagers of different genders learn about menstruation through collaborative play. The game includes five levels in which users play together and learn the causes, products, and symptoms of menstruation, as well as judge scenarios and listen to audio clips about menstruation. In our user study, we invited three groups of teenagers aged 11 to 16. Each group contained at least one male and one female, and we let them play the game freely. Teenagers were able to play the game collaboratively and acquire menstruation-related knowledge. The results revealed gender-related differences in motivation, and after the game, teenagers demonstrated observable changes in attitude towards menstruation.
Conflict in online spaces can often lead to behaviors that may be categorized as “harassment.” We asked 307 U.S. adults to self-report whether they have ever engaged in aggressive online conflict. Using logistic regressions, we examine which psychosocial characteristics predict which users would report engaging in behaviors commonly labeled as “harassment.” We find that psychological factors such as impulsivity, reactive aggression, and premeditated aggression distinguish those who never thought of, those who only imagined, and those who carried out harassing behavior. Demographic factors other than age are not significant, contrary to the results of prior studies. Design interventions that address propensities to perpetrate harassment might reduce harm, but they also raise ethical and moral concerns about the nature of harassment and the disposition toward it.
Vocabulary acquisition is a fundamental part of learning a new language: words whose meanings are unknown to the learner must be added to the language learning process. When searching for material in the target language, it is useful to know how much of a document is made up of currently unknown words. One simple way to estimate the unknown words in a document is to use frequency of occurrence, which indicates the difficulty of a word. However, this approach can miss unknown words. In this study, we aim to improve the accuracy of unknown word estimation by using reading activity data obtained from smartphone sensors and taking into account the individual learner’s English reading behavior. We apply a novel user interface that allows us to improve estimation through reading behavior, without the use of eye-trackers.
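A minimal sketch of the frequency-based baseline described above (not the sensor-enhanced method): treat any word whose corpus frequency rank exceeds a learner-specific threshold as unknown. The rank list and threshold here are toy stand-ins for a real frequency list and learner level.

```python
# Hypothetical sketch: estimate the unknown-word fraction of a document
# from word frequency ranks (1 = most frequent). Words missing from the
# rank list are treated as maximally rare, i.e. unknown.
def estimate_unknown(document_words, freq_rank, known_rank_limit=2000):
    """Return the fraction of a document estimated to be unknown words."""
    unknown = [w for w in document_words
               if freq_rank.get(w.lower(), 10**9) > known_rank_limit]
    return len(unknown) / len(document_words)

# Toy frequency ranks and document.
freq_rank = {"the": 1, "cat": 1500, "sat": 1800, "ubiquitous": 9000}
doc = ["The", "ubiquitous", "cat", "sat"]
print(f"{estimate_unknown(doc, freq_rank):.2f}")  # 0.25
```

The abstract’s point is that this estimate misses words the individual learner happens not to know despite their high frequency, which is the gap the sensor-derived reading behavior is meant to close.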
Conversational user interfaces (CUIs) have the potential to substantially influence current practice in diverse application fields, one important example being the way in which online surveys are conducted. Currently, CUIs are mainly applied to help improve the user experience with quantitative surveys. We describe the development of a Hybrid User Interface (HUI) prototype that combines a Graphical User Interface (GUI) and a CUI to conduct a fully automated but structured qualitative survey according to the Repertory Grid Technique (RGT). This paper describes a pilot study that we conducted to inform the design of the HUI and to converge on a prototype that is sufficiently mature and robust to be used in a more extensive follow-up study. Our experiences with developing this HUI may help others avoid pitfalls and profit from the tools and lessons we have learned as part of our design process.
This study revisits findings of the 2016 paper by Dittus et al. that considered trends in contributor retention in humanitarian mapping projects organized using the OpenStreetMap (OSM) platform. In addition to revisiting many of the same metrics used for the 2017-2020 time period, our research takes on a broader scope by evaluating a wider cohort of recruits, as opposed to specific projects evaluated by the original paper. As a result, our findings offer an updated and more complete picture of contributor retention in humanitarian mapping projects. We also offer several insights and future research opportunities for long-term contributor retention in online peer production projects.
Gender is increasingly being explored as a social characteristic ascribed to robots by people. Yet, research involving social robots that may be gendered tends not to address gender perceptions, such as through pilot studies or manipulation checks. Moreover, research that does address gender perceptions has been limited by a reliance on the human gender binary model of feminine and masculine, prescriptive response options, and/or researcher assumptions and/or ascriptions of participant gendering. In response, we conducted an online pilot categorization study (n=55) wherein we provided gender-expansive response options for rating four robots ranging across four levels of anthropomorphism. Findings indicate that people gender robots in diverse ways, and not necessarily in relation to the gender binary. Additionally, less anthropomorphic robots and the childlike humanoid robot were deemed masculine, while the iconic robot was deemed gender neutral, fluid, and/or ambiguous. We discuss implications for future work on all humanoid robots.
The use of human-operator-managed robotics, especially for safety-critical work, involves a shift from physically demanding to mentally challenging work, and new techniques for Human-Robot Interaction are being developed to make teleoperation easier and more accurate. This study evaluates the impact of combining two teleoperation support features: (i) scaling the velocity mapping of leader-follower arms (motion scaling), and (ii) haptic-feedback-guided shared control (haptic guidance). We used purposely difficult peg-in-the-hole tasks requiring high-precision insertion, manipulation, and obstacle avoidance, and evaluated the impact of using individual and combined support features on a) task performance and b) operator workload. As expected, long-distance tasks led to higher mental workload and lower performance than short-distance tasks. Our results showed that motion scaling and haptic guidance affect workload and improve performance during more difficult tasks, and we discuss this in contrast to participants’ preferences for different teleoperation features.
Trust has emerged as a key factor in people's interactions with AI-infused systems. Yet little is known about which models of trust have been used and for which systems: robots, virtual characters, smart vehicles, decision aids, or others. Moreover, there is as yet no standard approach to measuring trust in AI. This scoping review maps out the state of affairs on trust in human-AI interaction (HAII) from the perspectives of models, measures, and methods. Findings suggest that trust is an important and multi-faceted topic of study within HAII contexts. However, most work is under-theorized and under-reported, generally not using established trust models and missing details about methods, especially Wizard of Oz. We offer several targets for systematic review work as well as a research agenda for combining the strengths and addressing the weaknesses of the current literature.
Humans are unique in working collaboratively by sharing and understanding intentions. However, digital collaboration is daunting, especially in creative design life cycles, due to non-linear workflows and lack of micro-alignments coupled with the need for robust network connectivity. We present a formative study with creatives to identify key themes in conflicts that arise in this space. We introduce CollabColor, a user interface that aids in resolving conflicts for two users synchronously collaborating on a low-touch creative task. More specifically, given an uncolored line-art on a canvas and a set of reference images from the users as input, we arrive at design goals for an intelligent system that can enhance our interface. We find that such a system must provide non-obtrusive interventions during real-time collaboration to ensure that the final colorization of the art is coherent, and all the users’ aligned preferences are incorporated.
While affective non-verbal communication between pedestrians and drivers has been shown to improve on-road safety and driving experiences, it remains a challenge to design driver assistance systems that can automatically capture these affective cues. In this early work, we identify users’ emotional self-report responses towards commonly occurring pedestrian actions while crossing a road. We conducted a crowd-sourced web-based survey (N=91), in which respondents with prior driving experience viewed videos of 25 pedestrian interaction scenarios selected from the JAAD (Joint Attention for Autonomous Driving) dataset and thereafter provided valence and arousal self-reports. We found that participants’ emotion self-reports (especially valence) are strongly influenced by actions including hand waving, nodding, impolite hand gestures, and inattentive pedestrians crossing while engaged with a phone. Our findings provide a first step towards designing in-vehicle empathic interfaces that can assist in driver emotion regulation during on-road interactions, where the identified pedestrian actions serve as future driver emotion induction stimuli.
This paper presents a targeted system to help Chinese deaf children learn both sign language and Chinese characters during early language learning. The system combines sign recognition and in-air writing techniques with games so that children can practice sign language and Chinese character writing at the same time. Sign recognition is used to assess the accuracy of sign language, while in-air writing records the process of writing Chinese characters. In addition, the game adds to the fun of learning and makes children more willing to complete learning tasks. We have developed a prototype to evaluate the effectiveness of a simultaneous Chinese sign language and Chinese character teaching system based on gesture recognition and in-air writing. We expect that this system will increase children's willingness to learn sign language and Chinese characters, improve their learning efficiency, and eventually assist early language education for deaf children in China.
Decision-making algorithms can be obscure and fast-moving. This is especially the case for the algorithm that mediates the work of Deliveroo riders: forming a critical part of the food delivery platform, the algorithm’s obscurity and shifting nature are part of its design. In this paper we argue that adapting usability techniques like the Critical Incident Technique (CIT) may provide one way to better understand algorithms and platform work. Though there are many methods for understanding algorithms like this, asking people about negative or positive interactions with them, and what they think provoked those interactions, can open fruitful avenues for HCI research into the impacts of platforms on gig-economy work. We argue that although the results are assumptions, the assumptions of the algorithmically managed are interesting material for challenging researchers’ own assumptions about their context, and therefore for better scoping contexts and iterating future research.
People face many challenges when working from home (WFH). In this paper, we used both LDA (Latent Dirichlet Allocation) topic modeling and qualitative analysis to analyse WFH-related posts on Weibo (N=1093) and Twitter (N=907) during COVID-19. We highlight distinct differences in WFH challenges between the two platforms: long working hours, family and food commitments, and health concerns on Weibo; casual dress habits on Twitter. We then provide possible guidelines from a cross-cultural perspective on how to improve the WFH experience based on these differences.
Virtual avatars are widely used for collaborating in virtual environments. Yet these avatars often lack the expressiveness needed to determine a state of mind. Prior work has demonstrated that emotions and animated lip movement can be determined effectively by analyzing mere audio tracks of spoken words. To provide this information on a virtual avatar, we created a natural audio dataset consisting of 17 audio files, from which we extracted the underlying emotion and lip movement. For a pilot study, we developed a prototypical system that displays the extracted visual parameters and maps them onto a virtual avatar while playing the corresponding audio file. We tested the system with 5 participants in two conditions: (i) participants saw the virtual avatar while only an audio file was played; (ii) in addition to the audio file, the extracted facial visual parameters were displayed on the virtual avatar. Our results suggest the validity of using additional visual parameters in the avatar’s face, as they help to determine emotions. We conclude with a brief discussion of the outcomes and their implications for future work.
To provide accessible but also usable web interfaces to people with low vision (PLV), academics and regulators provide guidelines in the form of adaptation techniques, as well as adaptable interfaces. Following these recommendations, practitioners have developed mainstream solutions such as the Microsoft Immersive Reader, with which PLV can adapt or customize web user interfaces in terms of style and structure. This study explores the adaptations carried out by PLV. A mixed-methods research design, covering both accessibility and usability concerns, allowed us to capture user interactions, observe them, and access users’ expressed perception of usability. Findings show that the universal nature of mainstream solutions does not support the diversity of PLV; we believe that universal adaptability features better serve users with less severe and more common visual impairments. Finally, we discuss potential improvements and future work to support a wider range of PLV.
Long-term breastfeeding has been shown to have several environmental and health benefits for both mother and baby. Despite the known advantages, many mothers choose not to maintain breastfeeding long-term. How long a mother breastfeeds is heavily influenced by lactation and latching, and so the mother’s critical point of support is the lactation consultant (LC), who guides and provides instruction for creating a more positive breastfeeding experience. Empowering lactation consultants with methods to deliver instruction and support remotely is essential for advancing telehealth and wide-scale adoption. This paper presents findings from a need-finding study of 6 LCs that sheds light on ways to address some of the challenges faced by the LC community when providing remote lactation support. Based on the interviews, a number of potential technologies were identified around wearable sensing, annotation tools, and digital repositories for virtual education.
Electronics have become integral to all aspects of life and form the physical foundation of computing; however, electronic waste (e-waste) is among the fastest growing global waste streams and poses significant health and climate implications. We present a design guideline for sustainable electronics and use it to build a functional computer mouse with a biodegradable printed circuit board and case. We develop an end-to-end digital fabrication process using accessible maker tools to build circuits on biodegradable substrates that reduce embodied carbon and toxic waste. Our biodegradable circuit board sends data over USB at 800 kbps and generates 12 MHz signals without distortion. Our circuit board dissolves in water (in 5.5 min at 100 °C, 5 hrs at 20 °C) and we successfully recover and reuse two types of chips after dissolving. We also present an environmental assessment showing our design reduces the environmental carbon impact (kg CO2e) by 60.2% compared to a traditional mouse.
Office work presents health and wellbeing challenges, triggered by unhealthy working habits or environmental factors. While technologies for vitality in the office context gain popularity, they are often solution-focused and fall short in acknowledging personal needs. Building on approaches from personal informatics, we see value in opening up the design space of tracking and sensing technologies for office workers. We designed and deployed an open-ended sensor kit, Habilyzer, and conducted two complementary studies to investigate the value of empowering office workers to investigate their own working habits. Findings show that Habilyzer triggers curiosity about working habits, and that its wireless sensors help workers inquire into those habits, possibly supported by additional tools. We contribute new insights into how an open-ended sensor kit can be designed to support self-tracking practices and the underlying reflections in the underexplored context of office work. This is an alternative approach to workplace vitality, moving from solution-oriented technologies to inquiry-enabling tools.
For non-sport fans, perceiving the excitement of surrounding fan groups and the arousal of collective emotions are crucial factors that motivate engaged excitement and loyalty in a sport; these factors are closely related to the process of evolving from a non-fan into a fan. The global COVID-19 pandemic changed sport-watching from attending the arena to watching from home alone, creating significant difficulties in the channel for transmitting and arousing excitement between non-sport fans and fans. Previous remote emotion-mediating approaches have been limited to virtual avatars that convey partners’ external cues (such as appearance) to enhance the sense of presence from visual-audio perspectives. In this study, we explored a novel remote emotion-mediating approach that conveys sport fans’ internal cues (bio-signals), which are widely believed to be related to internal emotional states, and displays those signals in a way that gives non-sport fans a deeper and more immersive experience: haptic feedback. Three bio-signal-based haptic feedback prototypes were developed: heart-rate-based vibration, electromyography (EMG)-based pressure, and skin-temperature-based thermal feedback. An exploratory pilot study was conducted with a group of non-sport fans in a lab-controlled remote-sport-watching environment to explore the effectiveness of the proposed mediums in enhancing their perception of sport fans’ excitement (emotion perception). We also analyzed the non-sport fans’ heart rate data while they participated in the experiments to measure the performance of the proposed mediums in evoking their engaged excitement (emotion arousal).
Our results indicate the outstanding ability of EMG-based pressure feedback to enhance emotion perception and the notable advantage of heart-rate-based vibration feedback in arousing non-sport fans’ engaged excitement. This study demonstrates the potential utility of bio-signal-based haptic feedback in augmenting remote emotional perception and arousal, and provides underlying support for future exploration and development of social computing based on bio-signals and haptic technologies.
Online shoppers have a lot of information at their disposal when making a purchase decision. They can look at images of the product, read reviews, make comparisons with other products, do research online, read expert reviews, and more. Voice shopping (purchasing items via a voice assistant such as Amazon Alexa or Google Assistant) is different: voice introduces novel challenges because the communication channel limits the amount of information people can and are willing to absorb. The system should therefore choose the single most effective nugget of information to help the customer and present it succinctly. In this paper we report on a within-subject user study (N = 24), in which we employed three template-based methods that use information from customer reviews, product attributes, and search relevance signals to generate helpful supporting information. Our results suggest that: (1) supporting information from customer reviews significantly improves participants’ perception of system effectiveness (helping them make good decisions); (2) supporting information based on search relevance signals improves user perception of system transparency (providing insight into how the system works). We discuss the implications of our findings for providing supporting information to customers shopping by voice.
Redirected walking (RDW) visually manipulates the movement of the virtual environment (VE) such that the movements of the real environment (RE) and the VE are no longer matched 1:1. With RDW, users can overcome the spatial constraints of the RE, such as furniture, walls, and columns, and freely explore a wider VE using natural gait motion. However, when the intensity of visual manipulation increases, people notice the RDW manipulation owing to visual-vestibular inconsistency and experience discomfort such as motion sickness. To address this inconsistency, we modulated vestibular information in various directions to match the modulation of visual information using galvanic vestibular stimulation (GVS). We propose a new RDW system, REVES: redirection enhancement using four-pole vestibular electrode stimulation. REVES stimulates the user’s vestibular system according to various visual modulation directions using four-pole GVS based on the proposed algorithm. REVES changed the detection threshold of RDW in three manipulation cases: rotation, translation, and walking direction.
User interface (UI) and user experience (UX) design have become an indispensable part of today’s tech industry. Recently, much progress has been made in machine-learning-enabled design support tools for UX designers. However, few of these tools have been adopted by practitioners. To learn the underlying reasons and understand user needs for bridging this gap, we conducted a retrospective analysis with 8 UX professionals to understand their practice and identify opportunities for future research. We found that current AI-enabled systems to support UX work mainly operate on graphical interface elements, while design activities that involve more “design thinking”, such as user interviews and user testing, would be more helpful to support. Many current systems were also designed for overly simplistic and generic use scenarios. We identified 4 areas in the UX workflow that can benefit from additional AI-enabled assistance: design inspiration search, design alternative exploration, design system customization, and design guideline violation checking.
How can we better organize code in computational notebooks? Notebooks have become a popular tool among data scientists, as they seamlessly weave text and code together, supporting users to rapidly iterate and document code experiments. However, it is often challenging to organize code in notebooks, partially because there is a mismatch between the linear presentation of code and the non-linear process of exploratory data analysis. We present StickyLand, a notebook extension for empowering users to freely organize their code in non-linear ways. With sticky cells that are always shown on the screen, users can quickly access their notes, instantly observe experiment results, and easily build interactive dashboards that support complex visual analytics. Case studies highlight how our tool can enhance notebook users’ productivity and identify opportunities for future notebook designs. StickyLand is available at https://github.com/xiaohk/stickyland.
Home assistants are becoming a widespread product, but they mostly come as compact devices and offer very few customization and personalization features, which often leads to dissatisfaction. With technological advancements, these systems are becoming more adaptable to users’ needs and can better imitate a human personality. To achieve that efficiently, it is crucial to understand how different users envision their desired assistant. To identify people’s customization and personalization preferences and their desired personality for a home assistant, we designed a set of storyboards depicting a variety of possible features in a domestic setting and conducted a user study including a series of semi-structured interviews. Our quantitative results suggest that users prefer an agent that is highly agreeable and has higher conscientiousness and emotional stability. Furthermore, we discuss users’ customization and personalization preferences for a home assistant, which could be considered when designing the future generation of home assistants.
Location-based advertising (LBA), such as billboards and signage, has long been a direct-to-consumer advertising staple. As locative media such as Location-Based Games (LBGs) rise in prominence, digital LBA becomes increasingly appealing. In this paper, we explore the impacts of LBA on small businesses in the LBG Pokémon GO, a lacuna in the literature. We gathered participant experiences through 35 semi-structured interviews with businesses leveraging Niantic’s sponsored-location LBA. These testimonies indicate that (1) participant businesses found LBG advertising satisfactory, (2) LBG advertising improves brand recognizability for local commerce, and (3) the local community is an important factor for success in LBG advertising. These findings suggest future patterns for integrating local businesses into LBGs like Pokémon GO, pointing to the potential for LBG advertising to assist local businesses.
This research introduces “Tribo Tribe”, a technique for fabricating 3D tangible interactive interfaces capable of sensing movement inputs through ubiquitous materials. Tribo Tribe is built on the working principle of Triboelectric Nanogenerators (TENG) to enable self-powered sensing in 3D systems. We introduce a toolkit that enables designers and makers to easily customise both prototyping and sensing through TENG technology. We also demonstrate four design possibilities for different fields to illustrate how Tribo Tribe can instrument TENG into 3D physical interactive prototypes (Figure 1).
Autism Spectrum Disorder (ASD) is a set of neurodevelopmental conditions, often characterised by significant impairments in the social domain. In the context of early intervention, we present preliminary results about the social behaviour of children with ASD using PlusMe as an experimental interactive toy, the first prototype of the Transitional Wearable Companion concept. Specifically, PlusMe is designed to stimulate children’s curiosity and encourage basic social-interaction behaviours. The pilot test involved five high-functioning children (mean age 41 months, range 36-50 months). The participants were engaged in play activities together with the PlusMe toy and two researchers who aimed to encourage the children’s social behaviours such as imitation and eye contact; the activities were repeated for four sessions (one per week). Although this is an ongoing study that will involve a larger sample, the first data analysis is promising: preliminary observations suggest that PlusMe can help improve social behaviours such as eye contact, imitation, and interaction between two people.
Contemporary digital services often adopt mechanisms, e.g., recommendations and infinite scrolling, that exploit users’ psychological vulnerabilities to maximize time spent and daily visits. While these attention-capture dark patterns might contribute to technology overuse and problematic behaviors, they are relatively underexplored in the literature. In this paper, we first provide a definition of attention-capture dark patterns based on a review of recent works on digital wellbeing and dark patterns. Then, we describe a set of 5 attention-capture dark patterns extracted from a 1-week-long auto-ethnography during which we self-monitored our mobile and web interactions with Facebook and YouTube. Finally, we report on an initial study (N = 7) that explores whether and how a widespread mechanism, i.e., social investment, influences usage and users’ perception of the Facebook website. We discuss the implications that our work may have on the design of technologies that better align with users’ digital wellbeing.
In this paper, we present PriCheck, a browser extension that provides privacy-relevant information about smart devices (e.g., in an online shop). This information is oftentimes hidden, difficult to access, and, thus, often neglected when buying a new device. With PriCheck, we enable users to make informed purchase decisions. We conducted an exploratory study using the browser extension in a simplified (mock) online shop for smart devices. Participants chose devices with and without using the extension. We found that participants (N = 11) appreciated the usability and available information of PriCheck, which helped them make informed decisions about privacy-preserving products. We hope our work will stimulate further discussion on how to make privacy information for novel products available, understandable, and easy to access for users.
Even though VR design applications that support sketching are popular, sketching accurately in mid-air is challenging for users. In this paper, we explore discrete visual guides that support users’ stroke accuracy and drawing experience inside the virtual environment. We also present an eye-tracking study that compares continuous, discrete, and no guides in a basic drawing task. Our experiment asks participants to draw a circle and a line using three different guide types, three different sizes, and two different orientations. Results indicate that discrete guides are more user-friendly than continuous guides, as the majority of participants preferred their use, while we found no difference in speed/accuracy compared to continuous guides. Potentially, this can be attributed to distinct eye-gaze strategies, as discrete guides led users to shift their eyes more frequently between guide points and the drawing cursor. Our insights are useful for practitioners and researchers in 3D sketching, as they are a first step towards informing future design applications of how visual guides inside the virtual environment affect visual behaviour and how eye-gaze can become a tool to assist sketching.
This paper presents an extension for Amazon’s Alexa that provides a gratitude journal, and investigates its effectiveness compared to a regular paper-based version. Decades of research demonstrate that expressing gratitude has various psychological and physical benefits. At the same time, gratitude routines run the risk of becoming a hassle, which diminishes the positive outcome. Speech assistants might help to integrate gratitude routines more easily and intuitively using voice input. The results of our 8-day field study with two experimental groups (Alexa group vs. Paper group, N = 8) show that users see the benefits, that Alexa was effective in reducing participants’ stress, and that both groups express their gratitude differently. The positive effect of Alexa was restricted by a security setting (limiting user input to eight seconds) imposed by Amazon, which has since been repealed. The findings offer practical and theoretical implications for how verbal gratitude expression affects participants’ well-being.
In this paper, we explore the notion of sympathy in the context of more-than-human design to include nonhuman participation in a design iteration process in an ongoing project named the Morse Things. We explore ways in which nonhuman agency, particularly breakage, can participate in an assembly of human and nonhuman designers. Motivated by Ron Wakkary's theory of designing-with and the concept of repertoires, we propose feeling-with as a potential repertoire for increasing nonhuman participation before, during, and after the design process. Finally, we explore four instances of sympathy and how breakage as a nonhuman force can lead us to new design iterations to redesign the new set of Morse Things.
Scientists collaborate remotely across institutions, countries, and continents. However, collaborating remotely is challenging. Video-conferencing tools used for meetings limit the cognitive practices that collaborators can partake in. In virtual reality (VR), users can regain some of the spatial and social affordances present in collocated collaboration, as well as benefit from interactions that would not be possible in the real world. We introduce Embodied Notes: a cognitive support tool designed to be used in a collaborative virtual environment.
Current commodity Virtual Reality (VR) hardware allows for free, even wireless, roaming; however, it is still confined by a finite tracking space. To overcome this issue, past research has introduced different methods for locomotion, ranging from walking on a treadmill to interaction paradigms such as teleportation. Recently, researchers have integrated rock climbing on a physical wall into VR experiences, where the available space is naturally confined by the dimensions of the wall. Building upon this, we present implementation details and future research directions of a VR system for vertical locomotion on a rock climbing treadmill.
The COVID-19 pandemic has imposed unexpected hardship on the health, environment, economy, and socio-political governance of the entire human population. Local communities have adopted new ways of communicating and connecting to support each other. This paper reports people's attitudes towards online community support initiatives (OCSIs) during the COVID-19 pandemic based on a survey conducted in the UK. Our analysis of responses from 699 participants suggests increased use of social media sites and OCSI engagement since the pandemic began, and that people had positive attitudes towards OCSIs, although improvements were still required. We suggest four design implications to alleviate the challenges of using OCSIs.
The recent advances in smart city infrastructure have supported a wider adoption of surveillance cameras as a mainstream crime prevention measure. However, such massive deployment raises privacy concerns among citizens. In this paper, we present VR-Surv, a VR-based privacy-aware surveillance system for large-scale urban environments. Our concept is based on conveying only the semantics of the scene, without revealing the identity of the individuals or the contextual details that might violate the privacy of the entities present in the surveillance area. To this end, we create a virtual replica of the areas of interest, in real time, through the combination of procedurally generated environments and markerless motion capture models. The results of our preliminary evaluation revealed that our system successfully conceals privacy-sensitive data while preserving the semantics of the scene. Furthermore, participants in our user study expressed higher acceptance of being surveilled through the proposed system.
The data collected in shared spaces are usually used to understand how occupants use the space. However, these data could also inform the experience of the occupants in these spaces over time. How could we leverage spatial and temporal memories as mediators among occupants of a shared physical space? We developed Memory Portal, an interactive mirror that reflects the shadows and sounds of past occupants of a shared physical space in the present. We conducted a short user study to explore how the occupants of a shared space perceive the interactions with the reflected memories. One of our key findings is that the occupants formed a novel perspective about their actions and experience in the space through these memories. Our contributions include introducing a novel technical system and new approaches to data engagement in shared physical spaces through the visualization and sonification of memories of the presence of its occupants.
Students frequently multitask with social media (SM) during self-study. Such social media multitasking (SMM) has the potential either to support wellbeing by acting as a recovery activity or to subvert it by acting as a procrastination activity. It is currently unclear which specific SM behaviours and related factors push SMM towards recovery or procrastination. We conducted semi-structured interviews with 16 undergraduates to explore which SMM behaviours and factors led to recovery or procrastination. We found that both active and passive SM breaks have the potential to be recovery or procrastination activities. Whether an SM break becomes a recovery or procrastination activity partly depends on its automaticity and situational SM factors. This paper contributes empirical evidence that supports emerging criticism of an existing simplistic understanding of the relationship between active/passive SM use and wellbeing, and demonstrates how a richer model can inform the design of technologies that support better SM breaks.
Fear of missing out (FoMO) refers to one's perception that others live a better life, as well as one's desire to stay constantly connected to what others are doing via social media. Our study aims to develop an emotion awareness intervention based on the Satir Model to help people become more aware of their emotions after using Instagram through a set of reflection questions and guiding prompts. We designed and built a web-based application called Being to deliver this content in a 10-day field study. Our findings suggest that participants generally found the reflection questions helped them better perceive how they use Instagram in terms of their usage habits and mental states. The combination of reflection questions and guiding prompts helped improve participants’ emotion regulation skills and enabled them to develop strategies for using Instagram less. We conclude by discussing implications for future researchers interested in developing similar interventions.
We examine vibrotactile feedback delivered on the finger, wrist, and forearm with the goal of enriching the experience of touch input interactions with public displays. We focus on understanding the user experience of such interactions, which we characterize with a wide spectrum of UX measures, including subjective perceptions of the enjoyment, efficiency, input confidence, integration between touch input and on-body vibrations, distraction, confusion, and complexity of vibrotactile feedback for touch input with public displays. Our empirical findings, from a controlled experiment with fourteen participants, show positive and favorable perceptions of vibrotactile feedback as well as a significant preference for feedback on the finger compared to the wrist and forearm.
Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has produced a population that encounters a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI about appropriation, our work pinpoints an area that needs more attention: “outdated technologies.” In other words, we assert that design practices can profit as much from imaginaries of the future as they can from critically reassessing artefacts from the past. In a two-week field study with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental uses of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than normative uses. We refer to this phenomenon as nostalgic reminiscence. In the future, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.
The World Health Organization (WHO) and other public health agencies have identified vaccine hesitancy as a critical challenge in reducing future cases and deaths from COVID-19. The current study investigated ways to improve a widely circulated vaccine infographic video by the Centers for Disease Control and Prevention. After gathering qualitative feedback from online crowdworkers on properties of the message that could be improved, we conducted a randomized experiment to investigate different combinations of these attributes. Our results suggest participants were more likely to share the video when it: (1) was played more slowly; (2) had a female speaker; and (3) did not have background music. The study demonstrates the potential of user studies for improving existing communication strategies for encouraging vaccinations and alleviating vaccine hesitancy on social media platforms. Our contribution also includes a repository of messages to encourage vaccination, generated by online crowdworkers, which could be utilized by future studies.
Current solutions addressing misinformation on social media appear to rely on the misconception that misinformation is predominantly spread by Artificial Intelligence (AI). However, the proliferation of false news is mainly due to humans. Solutions to curb misinformation should therefore emphasize human behavioral interventions, rather than focus solely on curbing AI bots. In this study, we analyze social media users’ behaviors by means of a user journey. We create VisualBubble, a design probe that encourages reflection-oriented user experiences during news consumption on social media. We test our design probe with 10 users, to determine its effectiveness in increasing users’ critical reflection on their news consumption behaviors. The initial findings show that VisualBubble can contribute to more critical attitudes towards the news that users are exposed to and, therefore, has the potential to mitigate social media misinformation.
Explanations are well-known to improve recommender systems’ transparency. These explanations may be local, explaining individual recommendations, or global, explaining the recommender model overall. Despite their widespread use, there has been little investigation into the relative benefits of the two explanation approaches. We conducted a 30-participant exploratory study and a 30-participant controlled user study with a research-paper recommender to analyze how providing local, global, or both explanations influences user understanding of system behavior. Our results provide evidence suggesting that both are more helpful than either alone for explaining how to improve recommendations, yet both appeared less helpful than global alone for efficiently identifying false positive and negative recommendations. However, we note that the two explanation approaches may be better compared in a higher-stakes or more opaque domain.
Recommender systems assist users by providing recommendations based on some filtration criteria to reduce information overload. Embedding context-awareness allows recommender systems to use context information around the user, situation, and system to adapt and provide more efficient, relevant, and personalized recommendations. However, embedding context-awareness into recommender systems inherently limits the users’ control over the systems due to reduced interactivity from automatic adaptations, which may affect users’ use and perception of the systems. Control can be purposefully designed into context-aware recommender systems at different levels. Our work investigates the effects of different levels of user control on the effectiveness and understandability of context-aware recommender systems (CARS) within the scenario of learning through web-based search (called ‘Search-As-Learning’). To enable our study, we implemented a CARS that supports web-based search by recommending users a link using context such as browsing history. Our study found that participants used more recommendations from the CARS with high control than with no control or some control. In conclusion, users prefer higher control in a recommender system for web-based search, even though exercising that control takes more time, possibly due to explicit user needs.
As smart-rings emerge in both research and commercial markets, their limited physical size continues to restrict the interaction potential and input vocabulary possible. Thus, focusing on touch interaction for its natural and preferred input potential, this early work explores the combination of slide and microroll gestures performed by the thumb in continual motion on a smart-ring’s touch capacitive surface. We first capture over 3000 slide and microroll gesture instances, extract features, and generate and test machine learning models that are able to discern the slide and microroll gestures within the same touch instance. Using 18 features, our Random Forest model achieves 92.4% accuracy. We conclude with demonstrations of potential applications utilizing continual slide and microroll gestures, and a short discussion of future research directions stemming from the positive results obtained in this preliminary work.
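The classification pipeline described above (feature extraction, then a Random Forest over 18 features) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the feature values and labels here are synthetic placeholders, and the real features would be extracted from capacitive touch traces.

```python
# Hypothetical sketch of the Random Forest gesture-classification step.
# Real work: ~3000 captured slide/microroll instances, 18 touch features.
# Here, X and y are random stand-ins for those extracted features/labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 3000, 18
X = rng.normal(size=(n_samples, n_features))   # placeholder feature matrix
y = rng.integers(0, 2, size=n_samples)         # 0 = slide, 1 = microroll

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # the paper reports 92.4% on real data
```

With random placeholder data the accuracy is near chance; the point of the sketch is only the model shape (18-dimensional input, binary slide/microroll output), not the reported result.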
User reviews represent a valuable source of information for developers and researchers to learn about the challenges users are facing when using mobile applications (apps). However, studies analyzing mobile app reviews from an accessibility perspective are limited. This paper aims to better understand how accessibility issues are expressed in reviews to reveal potential associations with accessibility guidelines. To this end, we categorized 1,200 accessibility reviews using accessibility categories from previous studies and the W3C Accessibility Guidelines (WCAG 2.1). Our results indicate that accessibility user reviews contain accessibility-specific information. This information, while expressed by users in terms that may differ from the technical terms, was still indicative of specific accessibility guideline violations. Our new dataset can potentially be used by app developers to better understand users’ accessibility experience.
Linguistic skills are the building blocks of one of the most resourceful abilities of human beings: communication. Compromised linguistic skills significantly reduce the understanding or exchange of meaningful messages and information. Children with difficulties in this area are often enrolled in extensive speech therapy programs; nowadays, such issues also affect bilingual children with a migrant background, impacting their quality of life. Our research explores a Tangible User Interface developed on a new paradigm we postulated, called “Embodied Argument Movement”. In this paper, we present Moovy, an engaging table-top game for rehabilitating challenging morphosyntactic constructs and improving language skills. We performed two pilot studies with 13 children: one to test the User Experience and the other to assess Moovy’s efficacy and the paradigm's soundness. Preliminary results suggest that Moovy could make a difference in speech-therapy programs, encouraging children to continue their therapy willingly and successfully.
The need for online meetings increased drastically during the COVID-19 pandemic. Wearing headphones for this purpose makes it difficult to know when a headphone-wearing person is available or in a meeting. In this work, we explore the design possibilities of headphones as wearable public displays that show the current status or additional information of the wearer to people nearby. After two brainstorming sessions and specifying the design considerations, we conducted an online survey with 63 participants to collect opinions of potential users. Besides a preference for the colors red and green, as well as for using text to indicate availability, we found that only 54% of our participants would actually wear headphones with public displays attached. Nonetheless, the benefit of seeing the current availability status of a headphone-wearing person in an online meeting or phone call scenario was mentioned even by participants who would not use such headphones.
When interacting with people online, unexpected technical issues such as network impairments or background noise happen frequently. Though studies have shown that network outages negatively impact remote collaboration, we know little about (1) how people attribute the cause of these technical issues and (2) how technical issues affect people’s perception of their remote interlocutors and the communication system. In an online controlled experiment with 118 participants, we compared people’s perception of the remote speaker and the communication system with and without technical issues in a simulated online interaction. The results showed that people perceived the remote speaker as less suitable and less credible when experiencing technical issues in videoconferencing than without them. Moreover, people were not aware that their assessment of the speaker was affected by technical issues. We highlight the need to enhance people’s awareness of the impact of uncontrollable technical issues in remote communication.
With non-Euclidean spaces, Virtual Reality (VR) experiences can more efficiently exploit the available physical space by using overlapping virtual rooms. However, the illusion created by these spaces can be discovered if the overlap is too large. Thus, in this work, we investigate whether users can be distracted from the overlap by showing a minimap that suggests there is none. When done correctly, more VR space can be mapped into the existing physical space, allowing for more spacious virtual experiences. Through a user study, we found that participants uncovered the overlap of two virtual rooms when it reached 100% or the overlapping room extended even further. Our results show that the additional minimap renders overlapping virtual rooms more believable and can serve as a helpful tool to use physical space more efficiently for natural locomotion in VR.
Music visualization has recently become a popular application of augmented-reality (AR) technology. However, despite the various designs and functionalities of existing visualizers, their potential benefits and the effectiveness of their designs are not well explored. By interviewing experts and generating and evaluating design ideas, we offer two contributions: (1) the application space of AR music visualizers, defined by the categories of their potential benefits and the constraints from their contexts of usage; and (2) four-step design guidelines for designing a visualizer that achieves the intended benefit(s) while appropriately handling the contextual constraints. We believe that this paper can provide a concrete basis for designing effective AR music visualizers in the future.
Accessibility research aims to aid people who experience minor or major disabilities and conditions. However, researchers might have limited exposure to certain disabilities and therefore focus on those prevalent in their own lives. This work presents a script-based meta-analysis of the populations addressed in accessibility research published at top Human-Computer Interaction (HCI) venues (3617 full papers). We categorize the publications regarding the involved people and their disabilities. We found that work on vision disability makes up almost one third (28.85%) of the work published in general HCI. In light of these findings, we present possible conference- and funding-related explanatory approaches and argue that disability research could better reflect the prevalence of disabilities in the world.
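A script-based meta-analysis of this kind can be illustrated with a small sketch: tag each paper by disability-related keywords, then compute the share of each category. The keyword lists and example abstracts below are hypothetical stand-ins, not the paper's actual coding scheme or corpus.

```python
# Illustrative sketch of keyword-based categorization for a meta-analysis.
# KEYWORDS and the example "papers" are made up for demonstration only.
KEYWORDS = {
    "vision": ["blind", "low vision", "visually impaired"],
    "hearing": ["deaf", "hard of hearing"],
    "motor": ["motor impairment", "wheelchair"],
}

def categorize(abstract: str) -> set:
    """Return the set of disability categories mentioned in an abstract."""
    text = abstract.lower()
    return {cat for cat, words in KEYWORDS.items()
            if any(w in text for w in words)}

papers = [
    "A screen reader study with blind participants.",
    "Captioning for deaf and hard of hearing users.",
    "Interfaces for wheelchair users with motor impairment.",
]
counts = {cat: sum(cat in categorize(p) for p in papers) for cat in KEYWORDS}
vision_share = counts["vision"] / len(papers)   # share of vision-related work
```

A real pipeline would of course use a validated codebook and handle papers that match multiple or no categories, which this sketch only hints at via the returned set.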
Social support in online mental health communities (OMHCs) is an effective and accessible way of managing mental wellbeing. In this process, sharing emotional support is considered crucial to thriving social support in OMHCs, yet it is often difficult for both seekers and providers. To support empathetic interactions, we design an AI-infused workflow that allows users to write emotionally supportive messages in response to other users’ posts, based on eliciting the seeker’s emotion and contextual keywords from their writing. In a preliminary user study (N = 10), we found that the system helped seekers clarify their emotions and describe their situations concretely while writing a post. Providers could also learn how to react empathetically to the post. Based on these results, we suggest design implications for our proposed system.
World building or terrain modelling is an essential task when designing games, natural simulations or artistic creations involving virtual 3D landscapes. To support this task, we propose a virtual reality (VR) system based on a pen and touch tablet used in a sitting position (desktop VR) such that both hands are free to interact in an asymmetric way (pen hand + other bare hand). We present and compare several techniques to perform navigation, sculpting and menu operations using the two hands, which interact on and above the tablet surface, i.e. using the pen, touch and mid-air input spaces. A qualitative evaluation with 16 participants confirms the entertaining nature and practical benefits of our system. The study further underlines the complementarity of the different modalities and identifies the promising—and as of yet underexplored—combination of bimanual touch + pen + mid-air interaction in desktop VR.
Novel voice enhancements for larynx amputees utilize a variety of wearable interfaces and/or artificial intelligence. However, the problem remains formulated as “voice restoration”, which stems from common, but limiting, assumptions about communication as a reliable signal transfer, in which the “signal” equals voice. Yet beyond this signal transfer, the vital part of communication is mutual influence. During conversations, people coordinate their movement dynamics, i.e., adjust speech timing, intonation, and gestures. By taking this perspective, informed by the newest trends in the cognitive sciences (ecological, enacted, extended cognition) as well as recent studies on human-machine integration, we seek to shed new light on the processes of analysis and evaluation of specific systems for laryngectomees. We propose an integrated framework that enables us to interpret bionic communicative interfaces as parts of distributed cognitive systems that allow for different levels of control over interactions. This paper analyses 5 bionic systems, each consisting of a laryngectomee supported with i) voice copy, ii) voice conversion, iii) silent speech, iv) bionic voice, or v) lean-AI interfaces. We demonstrate how this broader perspective leads to noticing important aspects of communication and informs a more human-centered design and evaluation.
Social interactions are multisensory experiences. However, it is not well understood how technology-mediated smell can support social interactions, especially in collaborative tasks. To explore its effect on collaboration, we asked eleven pairs of users to work together on a writing task while wearing interactive jewellery designed to emit scent in a controlled fashion. In a within-subjects experiment, participants were asked to collaboratively write a story about a standardized visual stimulus while exposed to with-scent and without-scent conditions. We analyzed video recordings and written stories using a combination of methods from HCI, psychology, sociology, and human communication research. We observed differences in both participants’ communication and their creation of insightful stories in the with-scent condition. Furthermore, scent helped participants recover from communication breakdowns even though they were unaware of it. We discuss the possible implications of our findings and the potential of technology-mediated scent for collaborative activities.
Like us all, people with intellectual disability are self-motivated learners and enthusiastic information seekers. However, people with intellectual disability face technical challenges when accessing required information and interacting with conversational search systems. This paper presents the preliminary development of a simulated multi-modal conversational search system to understand its potential and limitations for people with intellectual disability. For an exploratory study, we developed a Wizard of Oz conversational multi-modal system that records user activities, including touch, text, and verbal interactions. We then conducted a preliminary evaluation of the system with eight adults with intellectual disability in one of two information-seeking scenarios: an individual or a collaborative setting. The study found that participants valued the system as it fulfilled their information retrieval needs, and it achieved an acceptable level of accessibility. These findings, and future research into this topic, can guide accessibility improvements to current search systems and the development of future conversational search systems.
Developing user personas has been a crucial aspect of human-centered system design for decades, as it helps in understanding and segmenting users based on their prominent characteristics. However, this technique has not been applied to developing and improving systems that support open government data (OGD) users. Therefore, this paper explores OGD users’ characteristics and creates relevant personas for them. Open-coding-based content analysis and k-means clustering were performed on posts of an online community managed by a U.S. local-level OGD portal, where users’ characteristics such as purposes, challenges, and proficiency in civic-data domain knowledge and computational skills were used as features for clustering. By manually analyzing the output clusters, we identified three personas with distinct behavior patterns based on their proficiency. The proposed personas can facilitate personalizing informational and technological support for OGD forum users.
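The clustering step described above can be sketched as k-means over per-user feature vectors. This is a minimal illustration under stated assumptions: the feature encoding (two proficiency scores on a 0-1 scale) and the toy data are invented for the sketch; the study derived its features from open coding of real forum posts.

```python
# Hypothetical sketch of k-means persona clustering.
# Rows = users; columns = [civic-data domain knowledge, computational skill],
# each coded on a 0-1 scale (values invented for illustration).
import numpy as np
from sklearn.cluster import KMeans

X = np.array([
    [0.90, 0.90], [0.80, 0.85],   # proficient in both
    [0.80, 0.20], [0.75, 0.10],   # domain-savvy, low computational skill
    [0.10, 0.15], [0.20, 0.10],   # novices in both
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_   # one candidate persona per cluster
```

Each resulting cluster would then be inspected manually (as the paper describes) to name and characterize a persona, rather than taken at face value from the algorithm.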
In this paper, we propose a novel method for academic assessment inspired by the decentralized applications made possible by blockchain technology. The proposed method applies to a wide range of academic material, including assignments, exams, and academic papers, and tackles issues of potential personal bias, making assessment possible without relying on a small number of assessors. We examine the challenges and possibilities that arise with this method and explore more general applications in areas such as education. In the experiments conducted for this research, poll results show generally positive views of the fairness of this system compared with traditional methods.
Difficulties in social communication, along with impairments in understanding others' emotions and atypical gaze behavior that avoids social cues, are common concerns in individuals with autism spectrum disorder (ASD). Such deficits are reported to be related to social anxiety, and social anxiety often deters the process of skill learning. Given the dramatic rise in the prevalence of ASD and the limited availability of trained therapists, access to technology-assisted platforms that can estimate one's anxiety and autonomously vary task challenges for effective skill learning is critical. Leveraging the potential of gaze-related indices to serve as biomarkers of one's anxiety, and advances in computing technologies that allow real-time access to gaze-related social signals, we have designed a Virtual Reality (VR)-based anxiety-sensitive social communication task platform. The platform features a Rule Generator that offers tasks in an individualized manner based on the composite effect of task performance and anxiety estimated from one's gaze. Results of a study with 10 individuals with ASD showed that the anxiety-sensitive system was promising in eliciting improved task performance and increased looking towards the face region of a communicator displaying emotional expression, in comparison to an existing system that is blind to one's anxiety. This paves the way towards the design of a complementary tool for therapists.
Recently, mental health applications have gained increasing attention. Research shows that users find mental health apps to be a good alternative for self-management of mental conditions, especially in the last two years, when access to physicians was limited because of the pandemic. Despite the existence of several mobile apps for mental health, most available apps are not supported by empirical and scientific evidence, or they are designed based on experimental rather than real data. This work represents our first step toward designing evidence-based AI-driven mental health applications. In this paper, we present our initial results and a discussion of in-the-wild users’ interactions and engagement with a mental health app (called Moodie) over a period of two years. Specifically, we investigate the interactions and engagement of 434 users (who used the app for two years) with the mood tracking feature, which allows users to enter their moods and the corresponding factors associated with each mood. Chi-square analysis showed a significant correlation between moods and factors (or users' activities). The results also show that Home, Work, Relaxation, and Family-related activities are the most common factors that affect moods either positively or negatively, and that these factors have a different impact on moods during different times of the year.
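The mood-factor association test mentioned above can be sketched as a standard chi-square test of independence on a contingency table. The counts below are invented for illustration, not the Moodie dataset.

```python
# Illustrative chi-square test of association between moods and factors.
from scipy.stats import chi2_contingency

# Rows: mood valence (positive, negative) of logged entries.
# Columns: co-reported factors (Home, Work, Relaxation) -- invented counts.
observed = [
    [120, 60, 90],   # entries with positive mood
    [40, 110, 20],   # entries with negative mood
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.1f}, p={p:.4g}, dof={dof}")
```

A small p-value here would indicate that mood valence and factor are not independent, i.e. the kind of significant mood-factor correlation the abstract reports.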
COVID-19 has been known to have a disproportionate impact on African Americans in the United States. Although studies have been conducted on the reasons for this disparity, there has been less focus on how the African American population sought trustworthy health information using technology. This is important because African Americans’ mistrust of the medical system has been suggested as a possible reason for the disproportionate impact. Therefore, we conducted interviews with 18 African Americans with chronic conditions to discover what types of challenges they faced while searching for trustworthy information on COVID-19 and the strategies they used to overcome these challenges. We found that participants actively evaluated the credibility of different information sources, searched for first-hand information from people they could relate to, and tried to avoid or reduce media consumption to prevent information overload.
Decision support alerts have the potential to assist clinicians in determining appropriate interventions for critically injured patients. The design of these alerts is critical because it can impact their adoption and effectiveness. In this late-breaking work, we explore how decision support alerts should be designed for cognitive aids used in time- and safety-critical medical events. We conducted interviews with 11 trauma team leaders to elicit their thoughts and reactions to potential alert designs. From the findings, we contribute three implications for designing alerts for cognitive aids that support team-based, time-critical decision making and discuss how these implications can be further explored in future work.
Video meetings are notorious for difficulties with conversational turn-taking, which has impacts on inclusion and outcomes. We present a scalable automatic process for categorizing turn-taking patterns in remote meetings based on eyes-off analysis of meeting transcripts. Drawing on a series of remote meetings (10 series, 34 total meetings) recorded in July-August 2021 by employees of a global technology company, we identified and parametrized patterns of cooperative and competitive overlaps of turns. The results show initial success in characterizing people's behaviours as likely to continue or cede turns based on either the amount of overlap they produce during others' turns or the amount of overlap they experience in their own turns. With further development and validation, this method could be used to measure inclusion in remote and hybrid meetings.
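One way overlap measures of this kind might be computed from a timestamped transcript is sketched below. The turn data and the measure itself are illustrative assumptions, not the paper's actual parametrization.

```python
# Hedged sketch: measuring speech overlap from (speaker, start, end) turns.
def overlap_seconds(turn_a, turn_b):
    """Overlap duration between two (speaker, start, end) turns, in seconds."""
    _, a_start, a_end = turn_a
    _, b_start, b_end = turn_b
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

# Invented transcript fragment.
turns = [
    ("A", 0.0, 5.0),
    ("B", 4.0, 9.0),    # B starts before A finishes: 1 s overlap
    ("A", 9.5, 12.0),   # clean turn exchange: no overlap
]

# Total overlap each speaker produces during others' turns.
produced = {}
for i, t in enumerate(turns):
    produced.setdefault(t[0], 0.0)
    for other in turns[:i]:
        if other[0] != t[0]:
            produced[t[0]] += overlap_seconds(t, other)

print(produced)  # B produced 1.0 s of overlap; A produced none
```

Per-speaker totals like these could then feed a categorization into overlap-producing versus overlap-experiencing behaviours, along the lines the abstract describes.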
Knowledge workers suffer from wrist pain due to long-term mouse and keyboard use. In this study, we present CareMouse, an interactive mouse system that supports wrist stretching exercises in the workplace. When a stretch alarm is given, users hold CareMouse and exercise; the system collects wrist movement data and uses a machine learning algorithm to determine whether users follow the correct stretching motions, enabling real-time guidance. We conducted a preliminary user study to understand users’ perception and experience of the system. Our results showed the feasibility of CareMouse in guiding stretching exercises interactively. We also provide design implications for augmenting existing tools with auxiliary functions.
Increasing adoption of Virtual Reality (VR) systems in various fields has created the need for collaborative work and communication. Today, asymmetric communication between a VR user and other non-VR users remains a challenge: the VR user cannot see the external non-VR users, and the non-VR users are restricted to the VR user’s first-person view. To address this, we propose WebTransceiVR, an asymmetric collaboration toolkit which, when integrated into a VR application, allows multiple non-VR users to share the virtual space of the VR user. External users can enter and be part of the VR application’s space through standard web browsers on mobile devices and computers, where they can explore and interact with one another, the VR scene, and the VR user. WebTransceiVR also includes a cloud-based streaming solution that enables many passive spectators to view the scene through any of the active cameras. We conducted informal user testing to gain additional insights for future work.
Recommender systems have the potential to improve the user experience of digital mental health apps. Personalised recommendations can help users to identify the therapy tasks they find most enjoyable or helpful, thus boosting their engagement with the service and optimising the extent to which it helps them to feel better. Using a dataset containing 23,476 ratings collected from 973 players of a mental health therapy game, this work demonstrates how collaborative filtering algorithms can predict how much a user will benefit from a new therapy task with greater accuracy than a simpler baseline algorithm that predicts the average rating for a task, adjusted for the biases of the specific user and specific task. Collaborative filtering algorithms (matrix factorisation and k-nearest neighbour) outperform this baseline with a 6.5-8.3% improvement in mean absolute error (MAE), and context-aware collaborative filtering algorithms (factorisation machines) outperform it with a 7.8-8.8% improvement in MAE. These results suggest that recommender systems could be a useful tool for tailoring recommendations of new therapy tasks to a user based on a combination of their past preferences, the ratings of similar users, and their current context. This scalable approach to personalisation, which does not require a human therapist to always be in the loop, could play an important role in improving engagement and outcomes in digital mental health therapies.
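The baseline described above, the task's average rating adjusted for user and task biases, can be sketched as follows. The ratings are invented, not drawn from the therapy-game dataset.

```python
# Minimal sketch of a bias-adjusted baseline predictor:
#   predicted rating = global mean + user bias + task bias
import numpy as np

ratings = [  # (user, task, rating) -- invented toy data
    (0, 0, 5.0), (0, 1, 3.0),
    (1, 0, 4.0), (1, 1, 2.0),
    (2, 1, 4.0),
]

mu = np.mean([r for _, _, r in ratings])  # global mean rating

def bias(values):
    """Deviation of a user's or task's mean rating from the global mean."""
    return float(np.mean(values)) - mu

user_bias = {u: bias([r for uu, _, r in ratings if uu == u])
             for u in {u for u, _, _ in ratings}}
task_bias = {i: bias([r for _, ii, r in ratings if ii == i])
             for i in {i for _, i, _ in ratings}}

def predict(u, i):
    return mu + user_bias.get(u, 0.0) + task_bias.get(i, 0.0)

# Mean absolute error of the baseline on the (training) ratings.
mae = np.mean([abs(predict(u, i) - r) for u, i, r in ratings])
print(round(float(mae), 3))
```

Collaborative filtering methods such as matrix factorisation then try to beat this baseline by also modelling user-task interaction terms, which is the comparison the abstract reports in MAE terms.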
Here we present in-progress methodological research exploring how to co-design technology for nature-related experiences. To support increasingly situated and participatory practices in this space, we propose a turn towards co-designing from-the-wild, i.e. ideating during raw engagements that are radically situated in nature. Our approach extends existing in-the-wild practices by (1) enacting co-design in natural (rather than human-made) environments, and (2) avoiding techniques that privilege the designer's agenda over other stakeholders’ and compromise the situated nature of ideation. Our contribution includes: (1) the proposal of co-designing from-the-wild as a response to the limitations of existing in-the-wild methods when designing for and from nature; and (2) early reflections from our hands-on engagement with said approach, which begin to surface exciting opportunities and constraints emerging in this methodological space. By sharing our work-in-progress with the HCI community, we hope to spark a conversation stimulating future co-design methods research in increasingly wilder directions.
Since the COVID-19 pandemic, we have witnessed an increase in online worship services. Nevertheless, HCI has little insight into how technological mediation influences religious experiences and how technology should be designed for use in religious contexts. We therefore see a unique opportunity to better understand real-world experiences of technology use in religious rituals and, more specifically, in online worship services. Inspired by contextual design, we virtually observed and interviewed eight persons during and after participation in online worship services. We identified a field of tension between faith, everyday life, individuality, and community. The data suggest that current online worship service systems do not account for believers’ needs for community, faith, or extraordinariness. We discuss opportunities for future research and design, and aim to contribute to the understanding of online worship service experiences and the design of technology-mediated religious experiences.
Running a business without a website is nearly impossible nowadays. Most business owners use content management systems to manage their websites. Yet these systems can pose security risks and provide vulnerabilities for manipulation. Vulnerability notifications inform website owners about such security risks. To identify common themes in vulnerability notifications and provide deeper insight into website owners' motivations to react to them, we conducted 25 semi-structured interviews. Consistent with previous research, we confirm that distrust in unexpected notifications is high; in contrast to previous research, we suggest that verification possibilities are the most important factor in establishing trust in notifications. We also endorse the finding that raising awareness of the severity and complexity of the problems is crucial to increasing remediation rates.
Deaf and hard-of-hearing (DHH) people experience difficulties in accessing education. One reason is that they miss out on oral information in the classroom. Although some classrooms are equipped with sign language interpreters specifically for deaf students, DHH students have trouble switching their gaze and attention when they cannot see the interpreter and the visual information in the classroom simultaneously. To address this challenge, this paper develops Avatar Interpreter, a visual interface for synchronized sign language interpretation on an augmented reality head-mounted display. Avatar Interpreter can be displayed at different sizes and in different locations in space to fit different scenarios, helping deaf students better receive sign language information and enhancing their classroom experience. In this paper, we present our prototype, make design recommendations, and describe its configuration and implementation. Finally, we propose questions and research methods for an upcoming user study.
Motivated by the outbreak of COVID-19, museums have shown increased interest in online exhibitions and online museum education. This study presents a way to use AI image synthesis technology for online art education, guided by a constructivist design approach. An experiment was conducted to empirically test the effectiveness of AI-based art education in the online museum context. A total of 83 participants visited one of three different web-based art museums (i.e., AI image synthesis not applied vs. AI image synthesis applied with given photos vs. AI image synthesis applied with self-uploaded photos). Those who experienced the online museum with images synthesized from self-uploaded photos reported higher motivation and satisfaction and experienced the museum in a more constructivist way compared to the other conditions.
Social robots can be applied in different settings to enhance experiences for humans. Understanding the physical and behavioural aspects of robot design is important to promote acceptance in human-robot interaction. In this study, we focus on robot initiative-taking as a behavioural aspect and investigate how it influences human perception and emotion during human-robot interaction. We built on previous work using questionnaires to evaluate user impressions in the presence or absence of robot activeness (initiative-taking). We also used galvanic skin response (GSR) as a physiological measure to gauge participants’ emotions during interaction. Questionnaire analysis confirmed that active robot behaviour generally improved participants' impressions. Moreover, the order of interactions appears to affect how much participants notice robot initiative-taking. GSR analysis supported the questionnaire results, showing that on average participants were more emotionally aroused during an active interaction and that the order of interaction somewhat mattered.
Being informed about the accessibility of neighborhoods, cities, and regions can help persons with disabilities make travel and daily decisions. This information can also be useful as a driving factor for supportive public policies. While accessibility mapping initiatives such as Wheelmap.org have enjoyed tremendous success and scale, they are still far from exhaustive, and their coverage contains biases stemming from volunteer practices. Using the framework of causal statistics, we suggest approaches to adjust for these biases, with the end goal of providing helpful approximations of overall accessibility in different European geographical regions.
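One standard bias adjustment from causal statistics, inverse probability weighting, can illustrate the kind of correction the abstract points at. The numbers below are invented, and the paper's actual approach may well differ; the sketch only shows how weighting by the inverse mapping probability counteracts a coverage bias.

```python
# Toy sketch: volunteers are more likely to map accessible venues, so the
# naive accessibility rate among mapped venues is biased upward. Weighting
# each mapped venue by 1 / p(mapped) recovers the population rate.
mapped_venues = [  # (is_accessible, assumed probability the venue gets mapped)
    (True, 0.8), (True, 0.8), (True, 0.8), (True, 0.8),
    (False, 0.2),
]

weighted_total = sum(1.0 / p for _, p in mapped_venues)
weighted_accessible = sum(1.0 / p for acc, p in mapped_venues if acc)

naive_rate = sum(acc for acc, _ in mapped_venues) / len(mapped_venues)
adjusted_rate = weighted_accessible / weighted_total

print(round(naive_rate, 2), round(adjusted_rate, 2))  # naive 0.8 vs adjusted 0.5
```

The hard part in practice, and presumably the subject of the suggested approaches, is estimating those mapping probabilities from volunteer behaviour rather than assuming them.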
HCI research often involves accessing and storing information in databases. However, in case of a database node failure, researchers could experience significant work delays, monetary costs, and data loss. How can researchers who have little or no knowledge of systems and infrastructures ensure that their data collection source is reliable and maximally available for accessing and storing data? To answer this question, we surveyed 11 HCI researchers. Using the findings from the survey, we developed Styx++—an easy-to-integrate open-source solution that bundles together existing tools and concepts, providing HCI researchers with a reliable distributed system for their database needs. Styx++ is a hybrid solution involving both the Paxos and Chain Replication Protocol, providing strong consistency and high availability to minimize the risks of single-point failures in a traditional database system setup. Our evaluation of Styx++ against benchmark solutions shows promising results of an increase in reliability without substantial performance degradation.
This paper presents an exploratory study of designing augmented reality (AR) based interventions to encourage physical activity among students attending virtual classes. We conducted a focus group study with four HCI students to understand students' current behaviour during breaks between classes. Based on these insights, we designed two AR interventions: AR Exergame, an interactive game that requires the user to move their hands to grab virtual apples; and AR micro-movements, a method that requires the user to switch to a different physical space when starting a new virtual class. These interventions were aimed at encouraging students to perform micro-activities, i.e., small non-strenuous activities, between virtual classes. We tested the effectiveness of these methods in reducing video conferencing fatigue (a.k.a. Zoom fatigue), and compared them with students' current methods, through a user study with six participants.
Within the general development of games becoming more pervasive and daily life turning more gameful, various location- and movement-based games have become prominent in contemporary culture, and are increasingly used in augmenting the physical reality. This study investigates tensions that arise in augmenting the mundane experience of walking in both urban and nature environments with location-based games (LBGs). We conducted an 8-week autoethnographic study of a newly launched LBG, Pikmin Bloom, a game that can be characterised as gamified walking. We focused on the central design tension of “augmenting walking vs. avoiding disturbing players’ everyday life.” Connected to this, we discuss four other tensions: (1) promise of future vs. enjoyable present; (2) too abundant vs. too scarce rewards; (3) seeking symbiosis vs. manipulating the environment; and (4) player privacy vs. immersive gameplay. This work-in-progress suggests that failing to optimally balance these tensions can have detrimental effects on the playing experience.
Laparoscopic surgeries require a high degree of visuo-spatial coordination between attending and resident surgeons. The challenge is intensified when surgeons must communicate verbally about visual cues. Most prior work in this space supports attending surgeons in giving clearer instructions to residents. However, to achieve intraoperative success, shared understanding, coordination, and trust between faculty-resident dyads are essential. Our work focuses on unpacking both attending and resident surgeons’ experiences during operations. We conducted an interview study with 6 attending and 3 resident surgeons, in which we asked participants to share their thoughts on the utility and feasibility of capturing surgical dyads’ joint visual attention (JVA) during live surgeries. We find that attending and resident surgeons have contrasting and complementary views about autonomy, communication, and coordination during surgeries. We also see positive attitudes towards capturing surgeons’ visual attention during live surgeries and using the data to support communication, coordination, and instruction.
Engaging racially and ethnically diverse participants in Human-Computer Interaction (HCI) research is critical for creating safe, inclusive, and equitable technology. However, it remains unclear why and how HCI researchers collect study participants’ race and ethnicity. Through a systematic literature analysis of 2016–2021 CHI proceedings and a survey with 15 authors who published in these proceedings, we found that reporting race and ethnicity of participants is uncommon and that HCI researchers are far from consensus on the collection and analysis of this data. Because a majority (>90%) of the articles that report participants’ race and ethnicity are conducted in the United States, we focused our discussion on race and ethnicity accordingly. In future work, we plan to investigate considerations and best practices for collecting and analyzing race and ethnicity data in a global context.
Community organizers build grassroots power and collective voice in communities that are structurally marginalized in representative democracy, particularly in minoritized communities. Our project explores how self-identified community organizers use the narrative potentials of data to navigate the promises of data activism and the simultaneous risks posed to working-class communities of color by data-intensive technologies. Our nine respondents consistently named the material, financial, intellectual, and affective demands of data work, as well as the provisional, tenuous possibility of accomplishing movement work via narratives bolstered by data. Our early results identified two important factors in community organizers’ assessment of the efficacy and political potential of narratives built with data: audience and legitimacy.
Informal caregivers often struggle to cope with both the stress and the practical demands of the caregiving situation. It has been suggested that digital solutions might be useful for monitoring caregivers’ health and well-being and providing early intervention and support. Given the importance of self-disclosure for psychological health, we aimed to investigate the potential of employing a social robot to elicit self-disclosure among informal caregivers over time. We conducted a longitudinal experiment across a five-week period, measuring participants’ disclosure duration (in seconds) and length (in number of words). Our preliminary results show a positive trend in which informal caregivers speak for longer and share more information in their disclosures to a social robot across the five-week period. These results provide useful evidence supporting the potential deployment of social robots as intervention tools to support individuals suffering from stress and experiencing challenging life situations.
Soft robotics uses soft, flexible materials and elastic actuation mechanisms to create systems that are more adaptable and tolerant to unknown environments, and safer for human-machine interaction, than rigid robots. Pneumatic soft robots can be fabricated using more affordable materials compared to traditional robots and make use of technologies such as 3D printing, making them an attractive choice for research and DIY projects. However, their design is still highly unintuitive, and at up to two days, design iterations can take prohibitively long: The behavior of, e.g., a pneumatic silicone gripper only becomes apparent after designing and 3D printing its mold, casting, curing, assembling, and testing it. We introduce SoRoCAD, a design tool supporting a Maker-friendly soft robotics design and fabrication pipeline that incorporates simulating the final actuation into the design process.
Opening-encounters are an integral element of social interaction and are essential for social relationships. In particular, opening-encounters between strangers form a complex social context and often involve awkwardness and tension. We explored whether augmenting everyday objects with autonomous capabilities can facilitate an opening-encounter between strangers. A pair of robotic bar-stools were designed to rotate the participants sitting on them. We evaluated the opening-encounter experience in three conditions: bar-stools rotating participants towards one another; bar-stools rotating participants away from one another; and bar-stools with no rotation. Our initial findings indicate that rotating participants towards each other led to positive encounters, encouraged social interaction, and increased interpersonal communication. The other two conditions were less likely to initiate social interactions. This preliminary study highlights the potential of facilitating positive opening-encounters using autonomous furniture that is perceived as a natural part of the interaction, without altering its human-human nature.
Recent advances in Large Language Models (LLMs) have made automatic code generation possible for real-world programming tasks in general-purpose programming languages such as Python. However, there are few human studies on the usability of these tools and how they fit into the programming workflow. In this work, we conducted a within-subjects user study with 24 participants to understand how programmers use and perceive Copilot, an LLM-based code generation tool. We found that, while Copilot did not necessarily improve task completion time or success rate, most participants preferred to use Copilot in daily programming tasks, since it often provided a useful starting point and saved the effort of searching online. However, participants did face difficulties in understanding, editing, and debugging code snippets generated by Copilot, which significantly hindered their task-solving effectiveness. Finally, we highlight several promising directions for improving the design of Copilot based on our observations and participants’ feedback.
OnlyFans is a digital patronage platform where independent creators provide exclusive content to their audiences for a monthly subscription fee. However, the platform affords creators limited resources to build their audiences, which is crucial to their success on the platform. Drawing on 15 semi-structured interviews with OnlyFans adult content creators, this study finds that creators must navigate the platform’s limitations through community building with fellow creators. Specifically, they engaged in computer-mediated communication with fellow creators to solicit, and provide, forms of social support. Our study thus sheds light on the social strategies that workers in the gig economy employ to overcome platform constraints.
User reviews of mobile applications in online stores have been used as a source for numerous studies. Many focus on identifying new functional requirements and extracting problems regarding quality characteristics such as usability. One of the best-known strategies for evaluating usability is Nielsen’s heuristics. We believe that user reviews in online stores can indicate heuristic issues, thus supporting evaluations more focused on user feedback. Therefore, this study investigates the presence of usability heuristic issues in user reviews. To achieve this goal, we collected and analyzed 200 reviews, split into 528 statements, from 10 free and paid Android and iOS apps. Our results showed that user reviews have the potential to indicate heuristic issues, suggesting new and challenging future research for the CHI community.
Applications based on speech recognition have become widely used, and speech input is increasingly being utilized to create documents. However, it is still difficult to correct misrecognitions by speech, which makes it necessary to re-edit documents by other means such as manual input. It is also difficult to input symbols and commands, because these may be misrecognized as text. To deal with these problems, we propose a speech interaction method called DualVoice, in which commands are input in a whispered voice and letters in a normal voice. The proposed method requires no special hardware other than a regular microphone, enabling completely hands-free interaction, and it can be used in a wide range of situations where speech recognition is already available. We designed two neural networks, one for discriminating normal speech from whispered speech, and the second for recognizing whispered speech.
In this paper, we present the design of Element, an ambient display system that conveys the user’s emotional engagement level through color and shape changes on a light fixture. Given the increasing number of people who experience exhaustion or burnout in professional contexts, our goal is to design an interface that helps people keep track of their engagement through an ambient display of real-time physiological data. We conducted two qualitative studies to investigate its possible benefits and users’ concerns, both from an individual perspective and in social contexts with multiple users. The results show that our design can not only aid users’ self-reflection and discovery at the intrapersonal level, but also help participants understand others’ emotional engagement and provide vital support when needed. Interestingly, our findings also highlighted the ethical concerns of displaying personal data in a public space. Reflecting on this crucial challenge, we propose three social guidelines to help users establish positive mindsets and cultures for using Element appropriately in sociotechnical environments.
The lived experiences of nonspeaking autistic people are typically not reflected in technology designed to support them. In this paper, we describe insights from nonspeaking autistic people who have learned to communicate by typing and those who support them. We leverage these within an augmented reality communication training prototype. The prototype provides an engaging, personalized learning environment in which a user can systematically develop their pointing skills. Pointing skills combined with elements of language such as spelling can in turn facilitate expressive written communication skills. We describe our requirements elicitation process and provide details about our initial implementation.
Spatially sonified audio is frequently used in virtual environments as an object localization cue. However, spatializers available to the general public typically have limitations that lead to sound localization errors. Combining spatial sonification with non-spatial sonification, the technique of encoding information with perceptual attributes of sound, could reduce these localization errors. However, the encoded information requires higher mental effort to understand, which could be a concern under high task load. In this paper, we conducted a subjective study comparing spatial sonification against a combined auditory cue using spatial and non-spatial sonification for target localization. We tested both methods under low and high task load. Our results show that the combined sonification could significantly reduce localization errors, but the advantage is influenced by higher task load. Further, we compared several non-spatial sonification methods for encoding depth information. Our experiment shows that both pitch and pulse frequency modulation are feasible for presenting the depth of a target. Our results could contribute a first step toward general guidelines for designing sonification methods as target localization cues.
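A depth-to-pitch mapping of the kind evaluated above might look like the sketch below. The depth range, frequency bounds, and linear mapping are assumptions for illustration, not the study's actual parameters.

```python
# Illustrative non-spatial sonification: encode target depth as tone pitch.
def depth_to_pitch(depth_m, d_min=0.5, d_max=10.0,
                   f_near=880.0, f_far=220.0):
    """Linearly map depth (meters) to a tone frequency (Hz):
    nearer targets sound higher-pitched, farther targets lower."""
    depth_m = min(max(depth_m, d_min), d_max)      # clamp to the valid range
    t = (depth_m - d_min) / (d_max - d_min)        # 0 at nearest, 1 at farthest
    return f_near + t * (f_far - f_near)

print(depth_to_pitch(0.5))   # nearest target  -> 880.0 Hz
print(depth_to_pitch(10.0))  # farthest target -> 220.0 Hz
```

Pulse frequency modulation would follow the same pattern, mapping depth to the rate of repeated pulses instead of to tone frequency.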
Online communities often experience some form of crisis. Regardless of whether these communities are small groups or entire platforms, the ability to handle disruptions during volatile periods signifies the resilience of a community. In this paper, we analyze the effects of a specific type of crisis event on Reddit: sudden spikes in attention received by a community due to a post from the subreddit hitting r/popular—the default feed that aggregates the most popular content on Reddit. Our results show that r/popular is a source of potential disruptions due to the higher number of comments, authors, and especially removed comments found compared to threads that do not reach r/popular. When looking at r/popular’s effects across subreddits, large communities’ commenting and posting behaviors were less impacted by r/popular threads compared to smaller communities. However, similar-sized subreddits had substantial variations in how their commenting and posting activities were influenced by an r/popular thread. Understanding the causal factors behind these differences has implications for online governance, as well as fostering resilient and healthy communities.
Design systems provide detailed guidelines on the appearance and behavior of UI items and serve as a primary reference for interface development. For bidirectional languages (e.g., Arabic and Hebrew), it is essential to specify an additional parameter: the direction of the UI item. However, investigating the correct direction of all items could involve costly research, so it is important to set priorities for such research. While prioritizing usability issues has long received attention from the HCI community, the specific problem addressed here is new and has not previously been studied. To deal with this challenge, we conducted five focus groups with HCI professionals and identified a set of prioritization criteria. We then used those criteria to prioritize the UI items for future empirical evaluation of their preferred directionality. In addition to these findings, we provide insights from our experience of using focus groups for the prioritization task.
Smartwatches and fitness trackers integrate diverse sensors, from inertial measurement units to heart rate sensors, in a very compact and affordable form factor. This makes them interesting and relevant research tools. One potential application domain is virtual reality, e.g., for health-related applications such as exergames or simulation approaches. However, commercial devices complicate and limit the collection of raw and real-time data, suffer from privacy issues, and are not tailored for use with VR tracking systems. We address these issues with an open-source design that facilitates the construction of VR-enabled wearables for conducting scientific experiments. Our work is motivated by research on mixed realities in pervasive computing environments. We introduce our system and present a proof-of-concept study with 17 participants. Our results show that the wearable reliably measures high-quality data comparable to commercially available fitness trackers and that it does not impede movements or interfere with VR tracking.
Cognitive and physical exercises are important factors in supporting a healthy life, especially in light of demographic change. Virtual reality (VR) exergames have great potential to support these activities in a more motivating way. However, regular use of VR exergames by the older population is still limited. To address this issue, we designed and implemented a VR exergame, Canoe VR. We conducted several prototyping sessions with older players and also report results from interviews with physiotherapists. The results suggest that Canoe VR was very well received and can be used by older players as an additional fitness tool. We discuss the implications of extending a fitness routine with a VR exergame and of using the game with players of different abilities.
Mental health applications (apps) show great potential to help people suffering from a variety of mental illnesses. Most existing apps focus on people from developed countries. Our research contributes by focusing on an underserved population: Indian working-class women. This study aims to create a stress and anxiety management application for working-class Indian women (women who are part of the workforce). We conducted one-on-one interviews with 31 working-class Indian women. Participants highlighted household chores and workload as their major sources of stress and anxiety. The results also highlight several features this group requires in mental health apps, including mood tracking and social community features. Participants raised concerns with existing mental health apps, including privacy issues and the high fees charged for premium features. Based on our findings, we discuss design implications for future interactive mHealth apps.
Customer Journey Mapping is a widespread service design tool that synthesizes and communicates user research insights to stakeholders. In its common form, a journey map is a synthetic (typically non-interactive) visualization of the key steps of users’ experience with a service or product. By decomposing the elements of a journey map and staging them in the form of a physical and interactive installation, we intend to leverage the power of journey mapping to break silos and prompt employees within an organization to discover end-users’ journeys in a compelling and empathic way. This aims to support the user-centered maturity of the organization by developing employees’ curiosity about and empathy towards users. We illustrate this approach through a case study on railway passengers’ experiences. We explore the value of richer transfers of user research insights through physical journey maps and discuss design processes and mediums that enable journey maps to come to life.
Brain activity sometimes synchronises when people collaborate on real-world tasks. Understanding this process could lead to improvements in face-to-face and remote collaboration. In this paper we report on an experiment exploring the relationship between eye gaze and inter-brain synchrony in Virtual Reality (VR). We recruited pairs of participants who performed finger-tracking exercises in VR under three gaze conditions (averted, direct, and natural) while their brain activity was recorded. We found that gaze direction has a significant effect on inter-brain synchrony during collaboration on this task in VR. This shows that representing natural gaze could influence inter-brain synchrony in VR, which may have implications for avatar design in social VR. We discuss implications of our research and possible directions for future work.
In 2015, all members of the United Nations adopted the 2030 Agenda for Sustainable Development (SD). Seventeen goals (SDGs) were formulated towards peace and prosperity for people and the planet. However, concerns have been raised about whether these goals can be achieved by 2030. Communicating progress on the SDGs to citizens in an effective and engaging way is thus crucial, as it can reveal which goals need more attention and prompt citizens to think about what to do at an individual level to support SDG progress. Physicalizations present an opportunity in this context, and the overall goal of this work is to empirically articulate the merits of different strategies for conveying SDG data physically. We created a data physicalization that uses vibration and temperature as modalities to convey facts related to SDG 7 (Affordable and Clean Energy). Vibration and temperature were chosen to reduce the metaphorical distance between the data and the representation (i.e., to align the quality of the data with the quality of the representation). In a preliminary evaluation, both modalities were perceived as enjoyable by the participants. Both modalities were efficient; however, vibration was more effective. Temperature, despite presenting a lower metaphorical distance, did not appear to be an effective modality for conveying SDG information.
Rapid eating is linked to numerous health problems, such as obesity and gastritis. In this study, we explore the possibility of using mastication sound as a novel behavior change strategy to subtly regulate rapid eating. In particular, we present SLNOM, a system that automatically detects chewing behavior using a convolutional neural network (CNN) model and slows down the playback speed of real-time mastication sounds to implicitly modify eating behavior. We conducted two empirical studies to determine: 1) the thresholds of sound volume and speed below which users do not perceive the manipulation; and 2) the feasibility and effectiveness of SLNOM in changing eating behavior, using a Wizard of Oz study. The results indicated that manipulating the chewing sound could modulate eating rate and bite size without cognitive or behavioral effort. We discuss how cognitive science could explain these findings and suggest how future eating interventions can build on this exploration.
Joint media engagement (JME) provides digital interaction and has become a convenient tool for collaborative storytelling. However, the narrative perspective, reusability, and modifiability of story content are rarely discussed; moreover, parents may lack professional story-making skills. Therefore, we propose an object-oriented three-dimensional (3D) storytelling system that provides an interactive educational media space for parent-child storytelling. The system supports manipulating reusable character objects and narrating stories from first- and third-person perspectives in an intuitive manner. Parents can create stories freely against the backdrop of famous fairy tales and incorporate family education. Our goal is to create a storytelling system for educational entertainment that is fun and conducive to free expression and creativity.
In this mixed-methods study, we attempt to capture users’ conception of AI through the two-dimensional mind perception framework (perceived agency vs. experience) from cognitive psychology and a series of drawing tasks. Our data illustrate how participants perceive AI entities with physical embodiment, depicting AI through devices, imaginary human figures, or full techno-ecosystems. Furthermore, we apply users’ mind maps of AI entities to highlight risks in human-AI interaction (HAII) and propose design solutions accordingly. We posit that HAII research and development should be cognizant that users possess existing images of AI and should use these as starting points for design improvement.
Deep conceptualization of and reflection on online information is essential to people’s knowledge acquisition and decision-making in high-stakes domains, such as health. Reflective thinking, however, demands extra cognitive effort, resources, and skills, and could therefore deter people from taking further steps to question information encountered online. In this paper, we propose a digital nudging tool for supporting critical reflection on video content, based on concept mapping and peer commenting mechanisms. The proposed tool, DeepThinkingMap, aims to promote people’s understanding of and reflection on video content via interface features that foster the disclosure of personal conceptualizations and the transparency of personal beliefs about the video. By showing how peers conceptualize and reflect on the video content, the shared concepts and comments could serve as a ”thinking nudge,” allowing individuals to gain in-depth thoughts about the video that would otherwise be inaccessible to them. Through a proof-of-concept controlled evaluation, we found that seeing peer thoughts through DeepThinkingMap significantly increased content comprehension and fostered greater effort toward reflection compared to a baseline of receiving no nudge. The study contributes to understanding the socio-technical-cognitive mechanisms and the design space of social nudging that may be utilized to support reflection and critical thinking about high-stakes information.
As communication on social media among children increases, the cyberbullying phenomenon is becoming a prominent challenge. We present the preliminary design of Tobe, a virtual keyboard that provides textual and visual feedback in real time to help prevent harmful discourse on social media among elementary-school-aged children. The widget is designed around two unique principles: (1) an empathy-based feedback mechanism involving an animated character and verbal statements (both positive and negative); and (2) a class-specific solution in which the children and the teacher build a vocabulary of words together. The latter provides transparency as to why feedback was shown at any given moment and can increase cooperation and empowerment for both the teacher and the children. We present the human-centered design process and early results from a user study with 12 children. Our findings show high engagement with the system and user awareness before texting.
The integration of various technologies into our daily lives, such as smartwatches, smartphones, and smart TVs, has proven beneficial. However, the HCI community is still in search of more intuitive text entry methods for smart devices. In this study, we explore a new text entry method, AcousticType, for smart devices, in which a smartwatch is leveraged to infer what a user is typing on a physical keyboard, employing acoustic signals as a secondary input mechanism. AcousticType employs four modules: Noise Cancelling, Keystroke Detection, Key Identification, and Word Correction. Our results show that AcousticType recovers up to 98% of the typed text. We also show that, with noise cancellation, AcousticType is robust to changes in the environment, which further boosts its practicality. The findings are promising and call for further investigation into new text entry methods that utilize acoustic signals to cope with the usability issues of smart devices.
This work provides early-stage insights into an electro-thermally actuated liquid crystal elastomer (LCE) fiber for novel shape-changing behaviors that are both programmable and reversible. We build a control system and experimentally investigate the electro-thermal characteristics and actuation, identifying four categories of fiber behavior: Oscillating Tip, Oscillating Fiber, Tilting, and Bending. These key parameters illustrate the broad application potential of the proposed approach in functional, communicative, and expressive contexts. Our contributions are threefold: (a) the control of an electro-thermally responsive LCE fiber, (b) directions for soft robotic device integration, and (c) early-stage insights into fiber shape-changing behaviors towards application.
As one manifestation of virtual reality (VR) in education, the virtual classroom allows students to enjoy a near-real classroom experience. Virtual classes create greater engagement and help stimulate interest and motivation in learning, making them an ideal solution for online teaching and learning activities, especially during the COVID-19 pandemic. Distraction is an unavoidable problem in immersive virtual classes and has a detrimental impact on learning. However, how to intervene in students’ distraction behaviors in immersive virtual environments has not been thoroughly investigated. In this paper, inspired by teachers’ instructional techniques in real-life classes, we propose three intervention strategies, namely eye contact, verbal warning, and text warning, and use eye tracking to explore the effects of these strategies on the inattention of students seated at the front or back of a virtual classroom. Our results show that all of the proposed intervention strategies have positive impacts on students’ attention. This research gives evidence that teachers’ instructional techniques in the real world can be transferred to the virtual class, which provides new insight for the future design of educational VR.
The high cost of dispatching professionals to physically audit locations limits the coverage of accessibility maps. Vision-based techniques and crowdsourcing using streetscape imagery are beneficial because they do not entail physical visits to actual locations. However, they exhibit limitations such as outdated photos and occlusions. In-the-field crowdsourcing enables the physical collection of accessibility information at low cost, but its effectiveness is contingent on people having free time and high motivation to visit actual locations. In this paper, we introduce a crowdsourcing platform for constructing accessibility maps that supports people with diverse amounts of free time and levels of motivation. We explore a combination of field auditing and field sensing, both with and without gamification, with game types determined by the characteristics of field auditing and field sensing. The experimental results confirm that gamification improves the motivation and performance of participants in certain aspects of crowdsourced field auditing and field sensing. Furthermore, the results indicate that item collection games have a greater impact than continuous walking games on participants’ motivation to take unusual roads to collect accessibility information.
Since December 2020, the Apple App Store has required all developers to create a privacy label when submitting new apps or app updates. However, there has been no comprehensive study of how developers responded to this requirement. We present the first measurement study of Apple privacy nutrition labels, examining how apps on the U.S. App Store create and update privacy labels. We collected weekly snapshots of the privacy label and other metadata for all 1.4 million apps on the U.S. App Store from April 2 to November 5, 2021. Our analysis showed that 51.6% of apps still did not have a privacy label as of November 5, 2021. Although 35.3% of old apps had created a privacy label, only 2.7% of old apps created one without an app update (i.e., voluntary adoption). Our findings suggest that inactive apps have little incentive to create privacy labels.
Non-use, particularly involuntary non-use, is an under-researched topic in HCI, even though it has become quite common due to frequent digital outages. How do users react to a social media outage? Do they become anxious? Or do they enjoy these brief episodes of social media detox? To answer these questions, we conducted a topic modeling analysis of 223,815 tweets that used the hashtag #facebookdown during the major Facebook outage on 10/4/2021. We uncovered 10 major themes in users’ reactions to the outage. Results showed that most users complained about, mocked, or expressed desperation over the outage, and during the outage period increased their search for social media alternatives. Surprisingly, many users also celebrated the detox from Facebook rather than wishing for it to come back as soon as possible. The results offer design implications for practitioners who would like to respond better to future outages.
As immersive virtual experiences find their way into our living room entertainment, they are becoming part of our daily technological consumption. However, state-of-the-art virtual reality (VR) remains disconnected from other digital devices in our environment, such as smartphones or tablets. As context switches between acting in the virtual environment and resolving external notifications negatively influence immersion, we look towards integrating smart devices into virtual experiences. To this end, we present the VRySmart framework. Through either optical marker tracking or simultaneous localization and mapping (SLAM), embedded smart devices can be used as VR controllers with different levels of integration, while their content is incorporated into the virtual context to support the plausibility of the illusion. To investigate user impressions, we conducted a study (N = 10) in which participants used a smartphone in four different virtual scenarios. Participants assessed smart device usage in VR positively. We conclude by framing implications for future work.
While LLMs have made it possible to rapidly prototype new ML functionalities, many real-world applications involve complex tasks that cannot be easily handled via a single run of an LLM. Recent work has found that chaining multiple LLM runs together (with the output of one step being the input to the next) can help users accomplish these more complex tasks, and in a way that is perceived to be more transparent and controllable. However, it remains unknown what users need when authoring their own LLM chains – a key step to lowering the barriers for non-AI-experts to prototype AI-infused applications. In this work, we explore the LLM chain authoring process. We find from pilot studies that users need support transforming data between steps of a chain, as well as debugging the chain at multiple granularities. To address these needs, we designed PromptChainer, an interactive interface for visually programming chains. Through case studies with four designers and developers, we show that PromptChainer supports building prototypes for a range of applications, and conclude with open questions on scaling chains to even more complex tasks, as well as supporting low-fi chain prototyping.
Thermal cues in the car can be used as an alternative to audio and vibration feedback to lessen the burden on the often overloaded visual channel. This paper presents an investigation of a heated car seat to evaluate the effectiveness of heating for feedback purposes. The temperature changes of a heated car seat were measured with and without seated passengers to assess the heating capabilities in general and to understand the temperature interaction with a seated person. A user study (N=12) investigated recognition times during a simulated driving task. Temperature changes during the driving simulation were detected on average in under 1 min, with an average increase of around 0.33℃ on the backrest (0.36℃ on the seat). These initial results can serve as a basis and inspiration for further investigations into relieving the visual channel with thermal cues.
Various tools and approaches are available to support undergraduate students learning to program. Most of them concentrate on the code and aim to ease the visualization of data structures or guide debugging. However, in undergraduate introductory courses, students are typically given exercises in the form of a natural language problem. Deriving a correct solution largely depends on the problem-solving strategy they adopt rather than on their proficiency with the syntax and semantics of the code. Indeed, apart from the coding itself, they face various challenges, such as identifying the relevant information, stating the algorithmic steps to solve the problem, breaking it into smaller parts, and evaluating the implemented solution. To our knowledge, almost no attention has been paid to supporting such problem-solving strategies before and during coding. This paper reports on an interview and a sketching exercise with 10 participants, exploring how novices approach programming exercises from a problem-solving perspective and how they imagine a tool to support their cognitive process. Findings show that students intuitively perform various actions on the exercise text and would appreciate support from the development environment. Based on these findings, we provide implications for designing tools to support problem-solving strategies.
We revisit the foundational principles of Ambient Intelligence (AmI) and Augmented Reality (AR) environments to discuss the perspective that AmI and AR feature the same vision of computing, as intuited at their origins, despite their recent development into what may appear as two distinct areas of scientific investigation. We focus on three concepts core to both AmI and AR, on which we capitalize to argue that a significant philosophical overlap exists between their visions: (1) the concept of an environment that undergoes a form of augmentation, (2) the indispensable process of an integration involving the environment, and (3) the emergence of a specific form of media congruent with the characteristics of the environment in which they are created, transmitted, and consumed. We draw implications for the science and practice of Human-Computer Interaction regarding new interactive environments enabled by the technologies of AmI and AR used conjointly.
Despite ever-improving digital ink and paper solutions, many people still prefer printing out documents for close reading, proofreading, or filling out forms. However, in order to incorporate paper-based annotations into digital workflows, handwritten text and markings need to be extracted. Common computer-vision and machine-learning approaches require extensive sets of training data or a clean digital version of the document. We propose a simple method for extracting handwritten annotations from laser-printed documents using multispectral imaging. While black toner absorbs infrared light, most inks are invisible in the infrared spectrum. We modified an off-the-shelf flatbed scanner by adding a switchable infrared LED to its light guide. By subtracting an infrared scan from a color scan, handwritten text and highlighting can be extracted and added to a PDF version. Initial experiments show accurate, high-quality results on a test set of 93 annotated pages. Thus, infrared scanning seems like a promising building block for integrating paper-based and digital annotation practices.
While seamful design has been part of discourses and work within HCI contexts for some time, it has not yet been fully explored in data visualization design. At the same time, critics of visualization have been arguing that the representation of data as contextual, contingent, relational, partial, heterogeneous, and situated is currently lacking in visualization. Seamful visualization promises a fresh perspective on visualization design as we seek to find more expressive encodings and novel approaches to representing data that acknowledge their wider qualities and limitations. By consulting seams in other realms and exploring existing seams and seamfulness in visualization, this paper offers a foundation for conceptualizing seamful visualization, points towards the value of seams and seamfulness in critical visualization, and proposes principles for engaging with seamful visualization in practice and research.
Crowdworking platforms are a prime example of a product that sells flexibility to its consumers. In this paper, we argue that crowdworking platforms sell temporal flexibility to requesters to the detriment of workers. We begin by identifying a list of 19 features employed by crowdworking platforms that facilitate the trade of temporal flexibility from crowdworkers to requesters. Using the list of features, we conduct a comparative analysis of nine crowdworking platforms available to U.S.-based workers, in which we describe key differences and similarities between the platforms. We find that crowdworking platforms strongly favour features that promote requesters’ temporal flexibility over workers’ by limiting the predictability of workers’ working hours and restricting paid time. Further, we identify which platforms employ the highest number of features that facilitate the trade of temporal flexibility from workers to requesters, consequently increasing workers’ temporal precarity. We conclude the paper by discussing the implications of the results.
Computational design tools can automatically generate large quantities of viable designs for a given design problem. This raises the challenge of how to enable designers to efficiently and effectively evaluate and select preferred designs from a large set of alternatives. In GeneratiVR, we present two novel interaction techniques to address this challenge, by leveraging Virtual Reality for rich, spatial user input. With these interaction methods, users can directly manipulate designs or demonstrate desired design functionality. The interactions allow users to rapidly filter through an expansive design space to specify or find their preferred designs.
Recent advances in ultra-low-power ubiquitous touch interfaces make touch input possible anytime, anywhere. However, their functions are usually pre-determined, i.e., one button is associated with only one fixed function. BoldMove enables spontaneous and efficient association of touch inputs with IoT device functions through semantic-based function filtering and a wait-confirm sequential selection strategy. In this way, such touch interfaces become ubiquitous IoT device controllers. We propose semantic-based IoT function filtering to improve control efficiency, and design a sequential selection mechanism for interfaces with constrained input and output resources. We implemented BoldMove on a custom-built touch interface with capacitive button inputs and a smartwatch display. We then conducted a user study to determine the design parameters for the sequential selection method. Finally, we validated that BoldMove takes only 3.25 seconds to complete a selection task if the target function appears within the top three displayed items. Even if this assumption is relaxed to the top ten, BoldMove is still estimated to be more efficient than the conventional selection method with device-based filtering and menu-navigated selection.
Requesters on crowdsourcing platforms like Amazon Mechanical Turk (AMT) compensate workers inadequately. One potential reason for this underpayment is that AMT’s requester interface provides limited information about estimated wages, preventing requesters from knowing whether they are offering a fair piece-rate reward. To assess whether presenting wage information affects requesters’ reward-setting behavior, we conducted a controlled study with 63 participants. In a mixed design with a three-level between-subjects factor, we provided participants with no wage information, a wage point estimate, or a wage distribution. Each participant went through three stages of adjusting the reward while monitoring the estimated wage. Our analysis with Bayesian growth curve modeling suggests that the estimated wage derived from the participant-set reward increased from $2.56/h to $2.69/h with point estimate information and from $2.33/h to $2.74/h with distribution information. The wage decreased from $2.06/h to $1.99/h in the control condition.
Autonomous cars offer passengers a rich platform for Augmented Reality entertainment, with complex sensing that can drive passenger experiences by tracking, appropriating and altering elements of reality. This paper forms an early exploration of in-car AR games, starting with how existing game genres might work within an AR vehicle context, appropriating elements of reality (e.g. other cars) into gameplay, and altering the appearance of reality in relation to game events (e.g. augmented cracks in car windows). We discuss results from focus groups exploring an initial AR game prototype, and an informal in-car evaluation of a follow-up prototype inspired by the focus groups. Broadly, we found that participants enjoyed using AR gaming in-car, noted the immersive impact of appropriating real-world elements into gameplay, and felt that it improved their experience of the journey. We reflect on the ways in which future AR passenger experiences might take advantage of the available sensing and environment to create engaging reality-based gameplay.
In group decision-making, we can frequently observe an individual adapting their behavior or beliefs to fit in with the group’s majority opinion. This phenomenon has been widely observed, even against an objectively correct answer, in face-to-face and online interaction alike. To a lesser extent, studies have investigated the conformity effect in settings based on personal opinions and feelings, i.e., settings where an objectively right or wrong answer does not exist. In such settings, the direction of conformity tends to play a role in whether an individual will conform. While cultural differences in conformity behavior have been observed repeatedly in settings with an objectively correct answer, the role of culture has not yet been explored for settings with subjective topics. Hence, this study focuses on how conformity develops across cultures in such cases. We developed an online experiment in which participants needed to reach a positive group consensus on adding a song to a music playlist. After seeing the other group members’ ratings, participants had the opportunity to revise their own. Our findings suggest that the willingness to flip to a positive outcome was far lower than to a negative one. Overall, conformity behavior was far less pronounced among participants from the United Kingdom than among participants from India.
We present SipBit, a sensing platform that digitally recognizes beverages and their attributes, an essential component in facilitating novel human-food interactions. SipBit consists of an electrical impedance measurement unit and a recognition method based on deep learning techniques. First, impedance measurements of a beverage are acquired using Electrical Impedance Spectroscopy. Then, a multi-task network cascade is employed to identify eight different beverage types across various volume levels and sugar concentrations. Results show that the multi-task network cascade discriminates beverage types with an accuracy of 96.32%, and estimates volumes with a root mean square error of 13.74 ml and sugar content with a root mean square error of 7.99 g/dm³. Future work will include: 1) developing utensils embedded with SipBit for automatic beverage and attribute recognition, and 2) extending SipBit to recognize additional beverage types and their attributes, thus enabling a new avenue for designing human-food interactive technologies.
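A multi-task cascade of the kind SipBit describes can be pictured as one shared feature stage feeding separate heads for classification (beverage type) and regression (volume, sugar), with later heads conditioning on earlier outputs. The following minimal sketch uses random weights and invented dimensions purely to illustrate that structure; it is not the paper's trained architecture.

```python
import numpy as np

# Hypothetical cascade over impedance spectra: a shared feature stage feeds
# three task heads (type, volume, sugar). Weights are random and dimensions
# illustrative; only the cascade structure mirrors the abstract.
rng = np.random.default_rng(0)
N_FREQS, N_TYPES = 64, 8   # 64 sampled frequencies; 8 beverage classes

W_shared = rng.normal(size=(N_FREQS, 32))
W_type = rng.normal(size=(32, N_TYPES))
W_vol = rng.normal(size=32)
W_sugar = rng.normal(size=32 + N_TYPES)  # sugar head also sees the type logits

def predict(spectrum):
    """Return (beverage_class, volume_ml, sugar_g_per_dm3) for one spectrum."""
    h = np.tanh(spectrum @ W_shared)       # shared representation
    type_logits = h @ W_type
    beverage = int(np.argmax(type_logits))
    volume = float(h @ W_vol)
    # The sugar estimate conditions on the type prediction (the "cascade").
    sugar = float(np.concatenate([h, type_logits]) @ W_sugar)
    return beverage, volume, sugar

beverage, volume, sugar = predict(rng.normal(size=N_FREQS))
```

Sharing the feature stage lets the three tasks regularize each other, which is the usual motivation for multi-task learning over a single sensor stream.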
Augmented Reality (AR) applications that embed user-generated social media content into the physical environment have the potential to increase users’ socio-spatial connectedness but are under-researched. Therefore, we built a location-based AR social network app featuring geotagged AR social media photos. We explored the app in a research probe in which senior university students (n = 6) provided activity-related information to first-year university students on campus (n = 11). We identified how AR content is created at a location, what types of pictures are shared with whom, and how sharing increased first-year students’ attachment to local places and socio-spatial connectedness. Finally, we discuss other application scenarios and outline challenges and opportunities for location-based AR social media networks.
Empathy towards users is crucial to the design of user-centered technologies and services. Previous research has focused on defining empathy and on its role in the design process as a means of triggering empathy for end-users. However, there is a lack of empathy measurement instruments in design. Most previous work has focused on designers, overlooking the need for other stakeholders to develop empathy towards users in order to break organizational silos and deliver high-quality user-centered services and products. In this contribution, we share the preliminary stages of the development of an empathy scale for service design. We build on empathy literature from psychology and design to define 18 items representing four empathy dimensions. We report on the definition of these dimensions and their underlying items, and present preliminary studies in which we reviewed the first version of the scale with experts and stakeholders.
When it comes to creating website information structures, open card sorting is the main approach used. However, scientific evidence for the method's validity is lacking. This paper explores the validity of open card sorting for website structural design. To this end, participants first performed an open card sort for the redesign of a website with usability issues in its information structure. Next, a within-subjects user testing study compared two functional prototypes that differed only in their structure: one replicated the existing website structure, whereas the other implemented the structure produced by card sort data analysis. Results showed that participants using the redesigned structure had significantly better usability metrics (first click success rate, task time, SEQ score and SUS score) than when interacting with the existing structure. These findings provide support for the validity of the open card sorting method for the design of website information structures.
Interfaces for mediated communication are designed to increase the quality of interaction between geographically separated people. The communication modalities offered by the medium have been found to affect perceived social presence, but communication strategies involving more highly coordinated actions may also be related to higher social presence. The purpose of this study was to investigate how behavior synchronization can influence an individual’s perceived social presence and behavior coordination while interacting through a huggable interface with simple visual cues. Through an experimental study involving two virtual agents, we observed that a partner that promotes synchronous actions: 1) led to a higher level of perceived social presence, and 2) encouraged more responsive behaviors from the user.
While a growing body of literature has begun to examine proxemics in light of human–robot interactions, it is unclear how insights gained from human–human or human–robot interaction (HRI) apply during human–drone interactions (HDI). Understanding why and how people locate themselves around drones is thus critical to ensure drones are socially acceptable. In this paper, we present a proxemic user study (N=45) in virtual reality focusing on 1) the impact of the drone’s height and 2) the type of cover story used to introduce the drone (framing) on participants’ proxemic preferences. We found that the flying height has a statistically significant effect on the preferred interpersonal distance, whereas no evidence was found related to how the drone was framed. While results also highlight the value of using Virtual Reality for HDI experiments, further research must be carried out to investigate how these findings translate from the virtual to the real world.
Privacy fatigue is a new perspective for explaining the discrepancy between individual attitudes and behavior, i.e., the privacy paradox. Fatigued users feel powerless over privacy decisions, and thus either feel “forced to accept” or disregard privacy altogether. To understand social media users’ privacy fatigue comprehensively, we conducted experiments with 428 users of Weibo, a mainstream Chinese social media platform. Specifically, we propose and verify a model of the perceived antecedents and behavioral consequences of privacy fatigue based on Stimulus-Organism-Response (S-O-R) theory, with personality traits as the moderating factor. Situated in a collectivist culture, the study provides complementary evidence to existing research on cultural factors. We hope it will enrich privacy fatigue theory, helping researchers and service providers attend to users’ potential fatigue and implement usable privacy design in real systems to alleviate it.
Visual attention is critical for everyday task performance and safety. The Attentional Visual Field (AVF) task is an established, computerized method for assessing the distribution of visual attention across a wide visual field. High-fidelity virtual reality (VR) presents an opportunity for more ecologically valid methods of assessing and training visual attention; however, this novel approach has not been examined. We developed a new VR-based AVF task, AVF-VE, using a Head-Mounted Display (HMD) VR device with an integrated eye-tracker, and conducted a study to validate this newly developed visual attention task. We further examined how visual attention is distributed in a virtual visual field. The findings suggest that the VR-based visual attention task is a valid and useful tool for future attention research and training. Unique characteristics of the spatial distribution of visual attention in the virtual environment observed in the current evaluation study are discussed.
The widespread use of Head-Mounted Displays (HMDs) allows ordinary users to interact with their friends daily in social Virtual Environments (VEs), or the metaverse. However, it is not easy to eat in the metaverse while wearing an HMD because the Real Environment (RE) is not visible. Currently, users view food in the RE through the gap between their face and the HMD (None) or by superimposing a video see-through (VST) image on the VE, but these methods reduce the sense of presence. To allow natural eating in a VE, we propose Ukemochi, which improves presence and ease of eating. Ukemochi seamlessly overlays a food segmentation image inferred by deep neural networks on a VE. Ukemochi runs alongside any VE created with the OpenVR API and can be easily deployed for the metaverse. In this study, we evaluated the effectiveness of Ukemochi by comparing three visual presentation methods (None, VST, and Ukemochi) and two meal conditions (Hand and Plate). The experimental results demonstrate that Ukemochi enables users to maintain a high sense of presence in the VE and improves ease of eating. We believe that our study will provide users with the experience of eating in the metaverse and encourage further research on eating in the metaverse.
The ability to successfully infer private behavioral intentions from publicly available digital records has a far-reaching impact. Unlike other private attributes such as demographics, an intention often leads to a near-future behavior. Prior knowledge about such future behaviors can be seen as “actionable intelligence” and constitutes a significantly bigger risk to users’ privacy than knowledge of non-behavioral attributes. In this paper, we present a novel, multidisciplinary methodology for behavioral-intention inference. Using Bayesian networks, we model a behavioral intention via a set of causes that influence the intention’s formation, a set of effects that are caused by the intention, and various dependency relations within and between those sets. Unlike the methodologies used in prior attribute-inference work, which are often tailored to a single target attribute, our methods can be applied to different types of intentions from a diverse set of domains, as we demonstrate by applying our model to multiple real-world intention-inference tasks.
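The cause-intention-effect structure described above can be made concrete with a toy Bayesian network over a single chain C → I → E, inferred by direct enumeration. The variable names and probabilities below are invented for illustration and are not taken from the paper.

```python
# Toy cause -> intention -> effect chain, hypothetical numbers.
# C: an observed cause (e.g. exposure to gym-related posts),
# I: the latent intention (e.g. "join a gym"),
# E: an observed effect (e.g. following fitness accounts).
P_I_given_C = {True: 0.6, False: 0.1}    # P(I=1 | C=c)
P_E_given_I = {True: 0.8, False: 0.2}    # P(E=1 | I=i)

def posterior_intention(c_observed, e_observed):
    """P(I=1 | C=c, E=e) by enumerating the two values of I."""
    def joint(i):
        p_i = P_I_given_C[c_observed] if i else 1 - P_I_given_C[c_observed]
        p_e = P_E_given_I[i] if e_observed else 1 - P_E_given_I[i]
        return p_i * p_e
    return joint(True) / (joint(True) + joint(False))

p = posterior_intention(c_observed=True, e_observed=True)
# Observing both the cause and the effect pushes the posterior above
# the cause-only belief P(I=1 | C=1) = 0.6.
```

With these numbers the posterior is 0.48 / 0.56 ≈ 0.857, illustrating how combining causes and effects sharpens an intention estimate compared to either signal alone.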
Selfies have become a prominent means of online communication. Group selfies, in particular, encourage people to represent their identity as part of the group and foster a sense of belonging. During the COVID-19 pandemic, video conferencing systems have been used as a tool for group selfies. However, conventional systems are not ideal for group selfies due to the rigidity of grid-based layouts, information overload, and lack of eye contact. To explore design opportunities and needs for a novel virtual group selfie platform, we conducted a participatory design study and identified three characteristics of virtual group selfie scenarios: “context with narratives,” “interactive group tasks,” and “capturing subtle moments.” We implemented Virfie, a web-based platform that enables users to take group selfies with embodied social interaction, and to create and customize selfie scenarios using a novel JSON specification. To validate our design concept and identify usability issues, we conducted a user study. Feedback from study participants suggests that Virfie is effective at strengthening social interaction and remote togetherness.
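A scenario specification of the kind Virfie describes could plausibly look like the following. All field names and values here are invented for illustration; the paper does not publish its schema, so this is only a sketch of how the three identified scenario characteristics might map onto a JSON document.

```python
import json

# Hypothetical selfie-scenario spec: the three scenario characteristics
# (narrative context, interactive group task, subtle-moment capture) each
# map to a top-level field. Field names are invented, not Virfie's schema.
scenario = {
    "name": "birthday-toast",
    "narrative": "Friends gather around a virtual cake to raise a toast.",
    "group_task": {"type": "countdown", "seconds": 3},
    "capture": {"trigger": "all_hands_raised", "mode": "subtle_moments"},
    "layout": [{"participant": i, "x": 0.2 * i, "y": 0.5} for i in range(4)],
}

spec = json.dumps(scenario, indent=2)   # what a user would author/share
parsed = json.loads(spec)               # what the platform would load
```

Keeping scenarios as plain JSON means users can author, customize, and exchange them without touching the platform code, which matches the customization goal stated in the abstract.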
Virtual Reality (VR) headsets have the unique capability to create private virtual content anywhere around the user, going beyond the capacities of traditional devices, but are not widely used while travelling, due to safety and comfort concerns. Showing objects from reality - "Reality Anchors" - could help reduce these concerns. We present a user study (N=20) that investigates how the use of real-world cues and different VR environments affect users’ feelings of safety, social acceptability, awareness, presence, escapism and immersion, which are key barriers to VR adoption. Our findings show that knowing where other people are on the bus could significantly reduce concerns associated with VR use in transit, resulting in increased feelings of safety, social acceptability and awareness, but with the concession that the user's immersion may be reduced. The VR environment also affected the level of immersion and the feelings of escapism, with a 360-video environment returning higher scores than a 2D one.
Teenagers often engage with Personal Informatics tools for health and fitness without support or guidance specific to their individual experiences. There is a lack of research on how putting teens at the center of designing educational resources might empower young users to harness the affordances of self-tracking tools that are of most value to them. To address this gap, we ran five online co-design workshops with 44 teenagers (aged 13-18 years) in the United Kingdom to design a self-tracking guide for teens. We focused on content, presentation of information, and distribution strategy. Our findings highlight the complexity of creating resources which adeptly capture the needs of this diverse user group. To successfully build interactive educational resources appropriate for teens’ positive engagement with health-related self-tracking tools, designers and researchers must prioritize teen-centered design with a focus on: (1) teens’ core reasons for self-tracking and (2) opportunities for social support and community-building.
Self-reported quality and duration of sleep in Western populations are declining, while interest in wearable sleep-trackers that promise better sleep is growing. By wearing a device day and night, the sleeper is continuously connected to a more-than-human network. The mass adoption of sleep-tracking devices has an impact on the personal, social and cultural meaning of sleep. This study looks at the discourse forming around wearable sleep-trackers. This extended abstract presents how non-human subjectivities are accounted for in this discourse. Through a posthuman discourse analysis of textual and visual artefacts from interviews, academic research and popular media, six distinct roles for these non-human social agents were identified: ‘Teacher’, ‘Informant’, ‘Companion’, ‘Therapist’, ‘Coach’ and ‘Mediator’. This characterisation is a first step towards understanding sleep-trackers as social agents, reorganising personal and contextual relationships with the sleeping self.
Although clinical training in implicit bias is essential for healthcare equity, major gaps remain both for effective educational strategies and for tools to help identify implicit bias. To understand the perspectives of clinicians on the design of these needed strategies and tools, we conducted 21 semi-structured interviews with primary care clinicians about their perspectives and design recommendations for tools to improve patient-centered communication and to help mitigate implicit bias. Participants generated three types of solutions to improve communication and raise awareness of implicit bias: digital nudges, guided reflection, and data-driven feedback. Given the nuance of implicit bias communication feedback, these findings illustrate innovative design directions for communication training strategies that clinicians may find acceptable. Improving communication skills through individual feedback designed by clinicians for clinicians has the potential to improve healthcare equity.
In this work, we explore whether informative art can represent a user’s indoor trajectory and promote users’ self-reflection, creating a new type of interactive space. Under the assumption that the simplicity of a digital picture frame can be an appealing way to represent indoor activities and further create a dyadic relationship between users and the space they occupy, we present Travelogue, a self-contained, picture-frame-like system that can sense human movement in a device-free manner using wireless signal reflections. Breaking away from traditional dashboard-based visualization techniques, Travelogue renders only the high-level extent and location of users’ activities as different pieces of informative art. Our preliminary user study with 12 participants shows that most users found Travelogue intuitive, unobtrusive, and aesthetically pleasing, as well as a desirable tool for self-reflection on indoor activity.
Gameplay spectatorship has developed significantly over the last decade, creating the need to design more engaging spectator experiences. To do this effectively, however, we must first better understand spectator behavior and motivations. In this paper, we build on the results of previous work examining the interaction preferences of remote livestream spectators depending on their user type, and report a new qualitative study exploring the underlying motivations that shape the experience of those engaged in interactive spectating. Our results highlight five main themes (entertainment, team support, learning, caster, social) that motivate spectators to engage with interactive esports experiences. This work will motivate ongoing research in this domain by promoting conversations about livestream spectator engagement and motivations, and by facilitating engaging spectator experiences.
Spatial augmented reality (SAR) allows us to extend desktop environments to our surroundings. However, previous work on enabling mouse interaction in SAR lacks pointing behaviour consistent with desktop environments. We present Everywhere Cursor, an indirect mouse interaction technique in which the cursor moves along the environment’s surface geometry, preserving desktop mouse behaviour. We discuss fundamental design problems in extending desktop mouse interaction to SAR, implementation details of our system, and guidelines for future improvements. Our work provides an important step in the transition of desktop environments to a pervasive computing system.
In this paper, we explore how computing device use by people with upper extremity impairment (UEI) was affected by COVID-19. Someone with UEI has reduced use of their shoulders, upper arms, forearms, hands, and/or fingers. We conducted six semi-structured interviews with participants with UEI in the US. We found that people with UEI increased computing device use during COVID-19 not only for remote interactions but also in person. Additionally, social distancing for COVID-19 safety created the need for new assistive technology (AT), authentication requirements, and communication platforms, which introduced their own accessibility barriers. We also found that personal protective equipment (PPE) created new barriers during computing device use, which often caused people with UEI to choose COVID-19 safety over the usability of their computing devices. Based on these findings, we describe future opportunities to make computing devices more accessible for people with UEI to manage the shifts in computing device use introduced by COVID-19.
The goal of this paper is to understand how people assess human-likeness in human- and AI-generated behavior. To this end, we present a qualitative study of hundreds of crowd-sourced assessments of human-likeness of behavior in a 3D video game navigation task. In particular, we focus on an AI agent that has passed a Turing Test, in the sense that human judges were not able to reliably distinguish between videos of a human and AI agent navigating on a quantitative level. Our insights shine a light on the characteristics that people consider as human-like. Understanding these characteristics is a key first step for improving AI agents in the future.
Spoken dialog systems, lacking the means to address the complex phenomena of spontaneous speech and conversational dynamics, force users into a constrained mode of dialog that resembles text-based interaction more closely than spoken conversation. Turn-taking is simplified and discourse-related information is lost, as discourse markers are largely ignored and prosodic information is not captured or utilized. We hypothesize that incorporating a few of these key conversational phenomena at specific points in a dialog will reduce cognitive load in spoken human-computer interaction and expand the potential application areas of dialog systems to tasks requiring more complex interactions. In this paper, we describe our approach to adding conversational intelligence to dialog systems and our work to date validating the hypothesis that adding conversational intelligence to existing dialog systems will significantly reduce users’ cognitive load.
Many smartwatches are now equipped with front-facing cameras that can detect users’ faces and track the device’s spatial location relative to the face in mid-air space. This space can serve as a virtual placeholder for user interfaces that users access by moving their watch. In this paper, we present a novel face-centered spatial user interface for smartwatches that leverages the mid-air space for augmenting virtual user interfaces. With a pilot study, we first examine the suitable mid-air space that users can access during active use while wearing smartwatches. Next, we conduct an online survey investigating users’ attitudes towards using this space to access mid-air user interfaces. More specifically, we focus on the social acceptance of face-centered spatial user interfaces in different locations and in front of different audiences. Results indicate that participants welcomed the idea of face-centered spatial user interfaces; however, acceptance varied based on where and in front of whom the space is used.
Text summarization is an example of a complex algorithm with a single direct input-to-output user operation. As different types of users (e.g., students, journalists) adopt these algorithms, it is imperative that the user experience of text summarization AI systems be modified to allow agency. To explore this, we designed an interactive multi-document summarization system, called Living Documents, that allows agency over the summarization process. We evaluated this system in a preliminary study with 25 students recruited via Prolific, who performed summarization tasks using the control functions in our system. Results from this study show that these functions contributed to users’ understanding of how summarization works, and that this understanding slightly improved user trust in our system. We discuss our experience designing for user control of a normally automated, complex process that produces immutable output, and the future implications for other applications.
The elderly population worldwide has been immensely affected by the increased isolation and risk of complications due to the COVID-19 pandemic. Notably, elderly women are more affected by social isolation and distress irrespective of health factors. We aim to understand how urban elderly women in Southeast Asia – typically highly dependent on other family members due to cultural practices – took care of their mental health amid uncertainty and distress using technology during the social-distancing period. Through 19 semi-structured interviews with participants from six Southeast Asian countries, analyzed using thematic analysis, we found that our participants started learning different technologies with great enthusiasm and used them for their mental well-being during the pandemic. This paper portrays how our participants enhanced interpersonal bonding, cultivated self-care and creative outlets, and fostered positivity in their social circles using different technology platforms to mitigate stress and uncertainty during the pandemic. Our participants’ technology use for better mental well-being during the COVID-19 period provides HCI researchers with valuable design guidelines. Here, we contribute by expanding the HCI community’s understanding of technology design at the intersection of the elderly population and mental health in the Southeast Asian cultural context.
Augmented reality (AR) tutorial systems have strong potential to help workers improve learning efficiency in the ongoing trend of Industry 4.0. The current state-of-the-art point cloud approach usually requires cumbersome preparation and additional computational resources, only to suffer from low-resolution visual results. To overcome these limitations, we propose MobileTutAR, a lightweight AR tutorial system that runs entirely on mobile devices and provides high-definition, spatially situated tutorial videos. Our approach captures tutorial footage from which a human body segmentation, combined with a user-defined area of interest, is extracted. The system then projects the tutoring content spatially in situ, aligned with the human expert’s recorded position. During playback, the learner is guided by a navigation-centered user interface to observe the segmentation video from the recorder’s original position/orientation. In this way, we deliver a high-definition AR experience without any cumbersome equipment, exotic computational resources, or in-depth training.
Self-tracking technologies, especially those facilitating support from social systems, are becoming more common for treating serious mental illnesses in both clinical and informal contexts. A recently proposed feature is co-tracking, where data is gathered not only from the perspective of the user managing their condition, but also from their close contacts. The proposed system therefore supports multiple perspectives (data streams) about the same variable of interest (i.e., an individual’s mood). However, the subjective and reciprocal nature of mental health data gives rise to challenges in visualizing uncertainty that must be addressed before clinical use. Here, we create an application-specific typology of uncertainty for visualizing multi-source mental health data, and propose design solutions to communicate this uncertainty. Via a case study of mood tracking with bipolar disorder, we present an interactive visualization prototype for understanding dynamic mood states in close relationships, moving toward a real-world implementation of a co-tracking informatics system.
Acu.ation is an intervention that aims to mediate the urge to smoke by pairing a wearable device that delivers transcutaneous electric acupoint stimulation (TEAS) with a mobile application that provides Mindfulness-Based Relapse Prevention (MBRP) (Figure 1). Given the pervasiveness of cigarette smoking, high rates of relapse, and limitations of existing treatments, it is critical to explore new methods of relapse prevention. Our solution draws from acupuncture's global use in treating addiction and recent evidence that non-invasive TEAS can reduce the urge to smoke after exposure to a drug-related cue. Specifically, we have designed a device fitted to four acupoints (LI4, PC8, PC6, and TH5) located on the wrist and hand, and programmed to deliver 5–15 mA stimulation. Through a paired mobile application, individuals are simultaneously guided through an MBRP intervention to help them take control of their response to the trigger. Together, this system provides real-time relief from the urge to smoke so individuals can better engage with cognitive behavioral relapse prevention strategies in high-risk moments.
The common layer-by-layer deposition of regular, 3-axis 3D printing simplifies both the fabrication process and the 3D printer’s mechanical design. However, the resulting 3D printed objects have some unfavourable characteristics including visible layers, uneven structural strength and support material. To overcome these, researchers have employed robotic arms and multi-axis CNCs to deposit materials in conformal layers. Conformal deposition improves the quality of the 3D printed parts through support-less printing and curved layer deposition. However, such multi-axis 3D printing is inaccessible to many individuals due to high costs and technical complexities. Furthermore, the limited GUI support for conformal slicers creates an additional barrier for users. To open multi-axis 3D printing up to more makers and researchers, we present a cheap and accessible way to upgrade a regular 3D printer to 5 axes. We have also developed a GUI-based conformal slicer, integrated within a popular CAD package. Together, these deliver an accessible workflow for designing, simulating and creating conformally-printed 3D models.
Prospect theory is a behavioral model of how people make decisions in the presence of risk; this work explores the application of prospect theory, particularly the reference-dependence effect, to user interactions with cookie banners. We identify two possible risks associated with cookies—the functional risk that denying cookies will degrade user experience and the privacy risk that accepting cookies will allow a website to access and sell personal information—and explore how the slant of a cookie consent banner (which risk it emphasizes) and the framing of a banner (whether it emphasizes the potential for gain or the potential for loss) impact user decisions. We conduct an empirical user study (n = 1557) in which we observe how users interact with different cookie banner prompts. We find that for both possible slants, a negative framing is significantly more effective at nudging user decisions. We also find that the combination of slant and framing can shift cookie opt-out rates by a factor of three. These results demonstrate the need for further consideration of the ethical implications of framing and nudging in the context of consent requests.
Social Virtual Reality (VR) use is growing for socializing and collaboration. However, current applications are not accessible to people with visual impairments (PVI) due to their focus on visual experiences. We aim to design VR technologies to enhance social VR accessibility for PVI. We focus on facilitating peripheral awareness, a vital ability in social activities. Through an iterative design process involving five participants, we designed VRBubble, a VR technique that facilitates peripheral awareness for PVI via spatial audio. Based on Hall’s proxemic theory, VRBubble divides the social space with three Bubbles—Intimate, Personal, and Social—generating spatial audio feedback to distinguish avatars in different bubbles and provide suitable avatar information. We provide three audio alternatives: earcons, verbal notifications, and sound effects. PVI can select and combine their preferred feedback alternatives for different social contexts to maintain avatar awareness in a dynamic social VR environment.
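Hall's proxemic zones, which VRBubble builds on, have standard distance thresholds (intimate up to about 0.45 m, personal up to about 1.2 m, social up to about 3.6 m). The sketch below classifies avatar distances into those zones and maps each zone to a cue; the thresholds are Hall's classic values, but the feedback mapping is an invented illustration, not VRBubble's actual implementation.

```python
# Hall's proxemic zones as (outer radius in metres, zone name).
ZONES = [
    (0.45, "intimate"),
    (1.2, "personal"),
    (3.6, "social"),
]

def bubble_for(distance_m):
    """Classify an avatar's distance into a proxemic bubble."""
    for outer, name in ZONES:
        if distance_m <= outer:
            return name
    return "public"    # beyond the social zone

def audio_cue(distance_m, style="earcon"):
    """Pick a cue per bubble; closer bubbles get more salient feedback.
    (Illustrative mapping only, not the paper's design.)"""
    zone = bubble_for(distance_m)
    salience = {"intimate": 3, "personal": 2, "social": 1, "public": 0}[zone]
    return {"zone": zone, "style": style, "salience": salience}

cue = audio_cue(0.8)   # an avatar 0.8 m away falls in the personal bubble
```

In a real system the `style` argument would correspond to the user's chosen feedback alternative (earcon, verbal notification, or sound effect), selected per bubble as the abstract describes.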
Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system’s performance. We survey approaches that aim to provide such guidance in a prescriptive way. We further propose a preliminary approach, called Method Cards, which aims to increase the transparency and reproducibility of ML systems by providing prescriptive documentation of commonly-used ML methods and techniques. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations for model developers. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.
Lecture videos have become a prevalent learning resource, due to the rising popularity of educational video platforms and the necessity of remote learning during the COVID-19 pandemic. However, lecture videos limit feasible pedagogical approaches, since they lack the interactions that students would engage with and learn from in synchronous classes. In this study, we propose virtual peers: virtual TAs and students that interact with each other as though they are watching the video for the first time together, helping real students learn by observing their interactions. We conducted two rounds of interviews and participatory design sessions with instructors and TAs to design an authoring tool that helps them create engaging virtual characters and author five types of pedagogically valuable interactions. The design received favorable preliminary feedback from instructors, who expect the tool to add valuable components that are missing from lecture videos with minimal additional effort on their part.
Constructive data physicalization (i.e. the creation of visualizations by non-experts using physical elements) is a promising research area in a context of rapid democratization of data collection and visualization, driven notably by the quantified-self movement. Despite a prolific body of work exploring physicalization as a means to communicate data to individuals, little is known about how people transform data into physical artefacts. Current research also falls short in studying constructive physicalizations using sensory modalities other than sight or touch. Building on the principles of data edibilization, we propose to use candies as a medium to study constructive data physicalization processes, due to their ability to leverage multiple sensory channels. We conducted a preliminary study (a candy workshop) to gain insights into how people make use of various sensory modalities in the construction of data physicalizations. We hope to inspire new research using candies as an accessible research material.
The disruptive nature of smartphone notifications and their negative impact on users’ productivity are well documented. The majority of these results originate either from controlled laboratory studies or from protocols relying on subjective self-reporting, reducing their ecological validity. This paper presents results from a full-day in situ study investigating the impact of perceiving one’s smartphone notifications on wrist motion patterns. Through this objective behavioral assessment, we document for the first time the manifestations of notification-induced disruption outside of the lab, independently of user activity and without the need for self-reporting. We identified a decrease in wrist motion activity following the presentation of a notification while the participant was engaged in higher-intensity activities, independently of whether the notification was immediately attended to. These findings provide objective support for the claim that notifications have as much potential for disruption when merely perceived as when the user actually responds to them.
Micro-mobility is becoming a more popular means of transportation. However, this increased popularity brings challenges. In particular, accident rates for E-Scooter riders are increasing, endangering the riders and other road users. In this paper, we explore the idea of augmenting E-Scooters with unimodal warnings to prevent collisions with other road users, including Augmented Reality (AR) notifications, vibrotactile feedback on the handlebar, and auditory signals in the AR glasses. We conducted an outdoor experiment (N = 13) using an Augmented Reality simulation and compared these types of warnings in terms of reaction time, accident rate, and feeling of safety. Our results indicate that AR and auditory warnings lead to shorter reaction times, are perceived more clearly, and create a stronger feeling of safety than vibrotactile warnings. Moreover, auditory signals have higher acceptance among riders than the other two types of warnings.
Child welfare (CW) agencies use risk assessment tools as a means to achieve evidence-based, consistent, and unbiased decision-making. These risk assessments act as data collection mechanisms and have been further developed into algorithmic systems in recent years. Moreover, several of these algorithms have reinforced biased theoretical constructs and predictors because of the easy availability of structured assessment data. In this study, we critically examine the Washington Assessment of Risk Model (WARM), a prominent risk assessment tool that has been adopted by over 30 states in the United States and has been repurposed into more complex algorithmic systems. We compared WARM against the narrative coding of casenotes written by caseworkers who used WARM. We found significant discrepancies between the casenotes and the WARM data, where WARM scores did not mirror caseworkers’ notes about family risk. We provide the SIGCHI community with some initial findings from the quantitative de-construction of a child-welfare risk assessment algorithm.
With the increasing demands of digital creation, a creative user often starts by looking for inspirational resources online. Once they find such inspirations, they recreate certain design elements using tracing-like applications to internalize the design for their purpose. We aim to accelerate this process by extracting key semantics from an inspirational image and converting it into a ready-to-use template using a multimodal information extraction setup. We propose NeurTEx, a holistic algorithm that takes an inspirational banner image as input and extracts multimodal design semantics: layout, text elements (actual text along with the font style), image elements (including semantics like logos and background/foreground objects) and shapes. Our technique uses a segmentation framework followed by a region-based depth-first search to extract and identify different elements and their semantics. We process these regions to extract finer details such as font styles, texts and logos. We believe that such fine-grained semantics would accelerate the design process for a creator, helping them rapidly adapt designs from inspirations to their requirements. With the help of metric-based and human-survey-based evaluations, we not only demonstrate the effectiveness of the proposed approach in extracting the style and design components from an inspirational image but also illustrate how these extractions can accelerate the creation process, thus aiding novices and professionals alike.
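A region-based depth-first search over a segmentation mask, as mentioned above, can be sketched as a connected-component walk. The following toy example is an illustration only (NeurTEx's actual data structures and segmentation output are not specified here); each recovered region would stand in for one candidate design element.

```python
# Hypothetical sketch of a region-based depth-first search over a binary
# segmentation mask: each 4-connected foreground region is returned as a
# candidate design element. Mask values and shapes are invented.

def extract_regions(mask):
    """Return a list of regions (sets of (row, col) cells) found by DFS."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                region, stack = set(), [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    region.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions

# A toy mask with two separate "design elements":
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
print(len(extract_regions(mask)))  # → 2
```

In a full pipeline, each region would then be cropped and passed to finer-grained classifiers (font style, logo detection, and so on).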
What if you could get a taste of your own thoughts? We propose a new system in HCI that uses computational tools to map text onto flavors and thus allow users to “eat their words” in the form of a custom hot cocoa blend created from written journal entries, helping people engage in reflection. Our open-ended approach to computer-augmented reflection explores new tangible mediums and methods to facilitate a sense of playful fluidity in machine interactions. We developed a deep learning pipeline that finds meaning within user journals by identifying similar poems, stories, and texts in a pre-collected corpus. A dispenser outputs a drink mix with flavors based on these discovered meanings and prints a receipt to help users make sense of their experience. To encourage disclosure and create a reflective mood, we model the experience on journaling at an intimate cafe. Our preliminary evaluation demonstrates strong promise for this approach.
This preliminary study explores the financial practices of older Hong Kong immigrants in the Greater Toronto Area through semi-structured interviews with 10 participants. First, this study examines the impact of social networks on settlement patterns and financial decisions such as choosing banks. Second, this study looks at the variety of ways people use and track money, including the adoption of digital banking and contexts of cash use. Finally, a comparison between banking practices in Hong Kong and the Greater Toronto Area is explored. These narratives reveal how social capital, trust, geography, and community influence financial habits, from managing transnational bank accounts to navigating technological advancements. The diversity of financial practices revealed here highlights the importance of avoiding broad categorizations of older adults and immigrants, and instead contextualizing Fintech design within communities and cultural practices.
Audio-based navigation technologies can help people who are blind or have low vision (BLV) with more independent navigation, mobility, and orientation. We explore how such technologies are incorporated into their daily lives using machine learning models trained on the engagement patterns of over 4,700 BLV people. For mobile navigation apps, we identify user engagement features like the duration of first-time use and engagement with spatial audio callouts that are greatly relevant to predicting user retention. This work contributes a more holistic understanding of important features associated with user retention and strong app usage, as well as insight into the exploration of ambient surroundings as a compelling use case for assistive navigation apps. Finally, we provide design implications to improve the accessibility and usability of audio-based navigation technology.
This study presents the evaluation of ability-based methods extended to keyboard generation for alternative communication in people with dexterity impairments due to motor disabilities. Our approach characterizes user-specific cursor control abilities from a multidirectional point-select task to configure letters on a virtual keyboard based on estimated time, distance, and direction of movement. These methods were evaluated in three individuals with motor disabilities against a generically optimized keyboard and the ubiquitous QWERTY keyboard. We highlight key observations relating to the heterogeneity of the manifestation of motor disabilities, perceived importance of communication technology, and quantitative improvements in communication performance when characterizing an individual's movement abilities to design personalized AAC interfaces.
In recent years, virtual reality (VR) technology has shown promise as a means of delivering rehabilitative care to restore arm function in stroke patients. At the same time, limitations of traditional clinical scales for measuring arm function recovery have led to the more widespread use of kinematic metrics. These metrics quantify useful properties of patients’ movements using motion tracking data captured while the patient performs different types of assessment tasks. Given modern consumer VR systems already collect the data needed to calculate many common kinematic metrics, these systems could eventually be used to both deliver stroke rehabilitation programs and administer kinematic assessments to monitor patients’ recovery. However, it is not yet clear how the properties of VR-based assessment tasks may systematically impact the values of kinematic metrics used to assess arm function post-stroke. To begin addressing this question, we examined the influence of two task properties (movement direction and hand dominance) on a set of 10 kinematic metrics during a discrete reaching task performed by healthy participants using an Oculus Quest 2 VR headset. Our findings indicate that all 10 metrics were significantly impacted by these task properties, confirming that kinematic metrics captured in this context are sensitive to task properties. Our results also provide an initial account of how the metrics were influenced by each task property and highlight needs for future work to further understand the influence of assessment task properties on VR-based kinematic assessments.
As demonstrated by users’ resistance to ads on Google Home in 2017, persuasive communications from voice assistants (VAs) can be seen as inappropriate. But they may be better received if the VAs are not the sources but mere media for ads, as happens with radio. User motives may play a role as well, with those who use VAs primarily for information resenting ads more than those who see their VAs as social companions. To test these propositions, we conducted a scenario-based user study (N = 264) in which Siri acted either as a source of ads or as a medium for delivering ads by a human spokesperson. Findings suggest that for informationally motivated users, Siri as ad source causes reactance via a lowered sense of control over the interaction. On the other hand, for those with social motives, it increases social presence and positively affects the user experience of the interaction.
Visitors in smart homes might want to use certain device features, as far as permitted by the device owner (e.g., streaming music on a smart speaker). At the same time, protecting access to features from attackers is crucial, motivating a need for authentication. However, it is unclear if and how smart home visitors should authenticate, as they usually do not have access to the respective interfaces. We explore considerations for the design of visitor authentication, revolving around, e.g., the visitors themselves as well as the environment and concrete mechanisms. Moreover, we suggest a concrete idea: security questions to authenticate visitors in smart homes. In an interview study (N = 24), we found that owners and visitors appreciated the low effort and would adopt our approach. We conclude with future research directions that we hope will spark further discussions around the design of authentication for smart homes, considering visitors and owners alike.
The benefits of taking part in adventurous activities are many, particularly for people with visual impairments. Sports such as rock climbing can improve feelings of skillfulness, autonomy, and confidence for people with low or no vision as they strive to overcome environmental and personal challenges. In this late-breaking work we present Climb-o-Vision, a novel sensory substitution software system that utilizes the YOLOv5 computer vision object-detection architecture to aid navigation for rock climbers with visual impairments. Climb-o-Vision uses commercially available and cost-effective hardware to detect, track, and map climbing hold spatial locations onto the surface of the tongue via an electrotactile tongue interface. Preliminary testing of the device highlights the possibility of using sensory substitution as a sporting aid for people with visual impairments. Furthermore, it demonstrates the potential for adapting and improving current sensory substitution systems by employing computer vision techniques to filter useful task-specific information to users with visual impairments.
There has been a major push to improve the transparency of online symptom checkers (OSCs) by providing more explanations to users about their functioning and conclusions. However, not all users will want explanations about all aspects of these systems. A more user-centered approach is necessary for personalizing the user experience of explanations. With this in mind, we designed and tested an interactive dialogue interface that affords users control to receive only those explanations that they would like to read. We conducted a user study (N = 152) with a text-based chatbot that assessed anxiety levels and presented explanations to participants in one of three forms: an interactive dialogue providing a choice of viewing different components of the explanations, a static disclosure of all explanations, and a control condition with no explanations whatsoever. We found that participants varied in the kinds of information they wanted to learn. The interactive delivery of explanations led to higher levels of perceived transparency and affective trust in the system. Furthermore, both subjective and objective understanding of the mechanism used for assessing anxiety was higher for participants in the interactive dialogue condition. We discuss theoretical and practical implications of imbuing interactivity for enhancing the effectiveness of explainable systems.
User-generated sketches serve as a powerful lens for understanding users’ mental models of apps and services. Inspired by research on hand-drawn maps of cities, we conducted an exploratory study in which 123 participants drew the interface of a popular mobile app from memory. An analysis of the resulting sketches highlights that these types of artifacts can be used to understand which elements are most central to users’ understanding of the app's core features and architecture—including non-existent features from other apps and services. We discuss how this work can be applied to the design of mobile apps and describe areas of future research.
Affinity diagramming is an effective and efficient method for forming nuanced interpretations of wide-ranging, unstructured qualitative data; however, this method does not scale well to large data sets. We propose a novel affinity diagramming system, called Qualitative Affinity Diagrammer (QuAD), that leverages computer-generated suggestions using deep learning to address the scalability of the diagramming process. QuAD features automatic grouping suggestions to jump-start the affinity diagramming process and provides grouping suggestions throughout to reduce sifting through notes. In this paper, we present a prototype of QuAD that uses Bidirectional Encoder Representations from Transformers (BERT) and the Girvan-Newman algorithm to generate grouping suggestions. This work is the first step towards creating a powerful tool for assisting in the analysis of large qualitative data sets in a variety of contexts, including human-computer interaction.
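The grouping-suggestion idea can be sketched in miniature: embed each note as a vector, link notes whose similarity passes a threshold, and propose each connected group as a candidate affinity cluster. Everything below is illustrative only: the toy vectors stand in for real BERT embeddings, the threshold is invented, and plain connected components stand in for QuAD's Girvan-Newman community detection on the same similarity graph.

```python
# Minimal, hypothetical sketch of automatic grouping suggestions over
# qualitative notes. Toy 2-D vectors replace BERT sentence embeddings,
# and connected components replace Girvan-Newman for brevity.
import math

notes = ["login fails", "password reset broken",
         "font too small", "text hard to read"]
vecs = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]  # stand-in embeddings

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Build adjacency over sufficiently similar notes (threshold is assumed).
n = len(notes)
adj = {i: [] for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if cosine(vecs[i], vecs[j]) > 0.8:
            adj[i].append(j)
            adj[j].append(i)

# Each connected component becomes a suggested affinity group.
seen, groups = set(), []
for i in range(n):
    if i not in seen:
        stack, group = [i], []
        seen.add(i)
        while stack:
            u = stack.pop()
            group.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        groups.append(sorted(group))
print(groups)  # → [[0, 1], [2, 3]]
```

In the full system, Girvan-Newman would further split a single dense component into communities by repeatedly removing high-betweenness edges, which matters once the similarity graph is well connected.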
Storytelling promotes the development of imagination and creativity and facilitates the acquisition of linguistic, emotional, and moral skills in children. Limited storytelling media for children with blindness and visual impairment (BVI) confine their access to this unique experience. StoryBox is a multi-modal storytelling agent facilitating an independent and interactive storytelling experience for children (5-12 years) with BVI. The platform supports audio-based story narration and enables tangible interaction with story elements via sensor-enabled tactile clay figurines. StoryBox takes inspiration from ‘kaavad baanchana’, an oral storytelling tradition from Rajasthan, India. This paper presents the design of the proposed accessible multi-modal storytelling platform and findings from an initial user evaluation with blindfolded participants.
Medical care outside the doctor’s office and the hospital is gaining traction. Lifestyle programs for transmural and remote care are everywhere, facilitated by hospitals as part of rehabilitation, by general practitioners as preventative measures, and more and more commonly, by various private (health) organizations through app stores. What might often be overlooked is the time and energy that is spent on the creation of the content in these programs. In this article, we discuss the elaborate content creation process of lifestyle content for an outpatient clinic for patients scheduled for an ablation surgery. We describe the close collaboration between the clinicians and our design research team, and reflect on how to streamline and formalize this process.
Building on the sources of enjoyment identified by Schaffer and Fang's card sorting study, we describe the development of two new measures for player experience research, the Enjoyment Questionnaire (EQ) and the Sources of Enjoyment Questionnaire (SoEQ). Among other sources, the EQ and SoEQ draw on flow theory, self-determination theory, and desire fulfillment theory. The EQ assesses digital game enjoyment and the SoEQ assesses 38 sources of enjoyment in digital games, including Humor, Relaxation, Savoring, Optimal Pacing, Optimal Variety, Social Responsibility, Task Significance, and Clear Task Purpose. The scale was validated with a survey of 564 participants. Results demonstrated the EQ and each subscale of the SoEQ have both construct validity and internal consistency. Results also provided evidence for convergent and discriminant validity. The EQ and SoEQ are useful and reliable tools for the study of digital game enjoyment and its sources.
Conversational Agents (CAs) such as Apple’s Siri and Amazon’s Alexa are well-suited for task-oriented interactions (“Call Jason”), but other interaction types are often beyond their capabilities. One notable example is playful requests: for example, people ask their CAs personal questions (“What’s your favorite color?”) or joke with them, sometimes at their expense (“Find Nemo”). Failing to recognize playfulness causes user dissatisfaction and abandonment, destroying the precious rapport with the CA.
Today, playful CA behavior is achieved through manually curated replies to hard-coded questions. We take a step towards understanding and scaling playfulness by characterizing playful opportunities. To map the problem’s landscape, we draw inspiration from humor theories and analyze real user data. We present a taxonomy of playful requests and explore its prevalence in real Alexa traffic. We hope to inspire new avenues towards more human-like CAs.
Challenge is a core element of digital games. A game challenge whose type and level match players’ skill, experience, and motivation can lead players to the optimal player experience. With the wider spectrum of challenge types provided by modern digital games, such as physical, cognitive, and emotional challenge, a questionnaire tool called CORGIS was recently developed to evaluate the whole range of challenge experiences subjectively. However, such challenge experiences still lack measures that evaluate them objectively and in real time. To explore the possibility of detecting different challenge types from physiological signals, we conducted an experiment recording 12 players’ physiological signals (EDA, ECG, EMG, RSP, and TEM) as they overcame different types of game challenges. From 80 extracted physiological features, two methods (ANOVA-based and regression-based) were adopted to select challenge-related features. Logistic regression models showed that both methods obtained detection accuracy over 60%, suggesting potential for further development of a real-time challenge measurement instrument.
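The ANOVA-based selection step described above can be sketched briefly: score each physiological feature with a one-way ANOVA F-statistic across challenge types and keep the top-scoring features for the classifier. The feature names and values below are invented for illustration; the study's actual feature set and thresholds are not reproduced here.

```python
# Toy sketch of ANOVA-based feature selection: rank candidate
# physiological features by how well they separate challenge types.

def anova_f(groups):
    """One-way ANOVA F-statistic for a feature, given its values
    split into one list per challenge type."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented values of two candidate features under two challenge types:
features = {
    "EDA_mean": [[0.1, 0.2, 0.15], [0.8, 0.9, 0.85]],      # separates well
    "TEM_mean": [[36.5, 36.7, 36.6], [36.6, 36.5, 36.7]],  # does not
}
scores = {name: anova_f(vals) for name, vals in features.items()}
best = max(scores, key=scores.get)
print(best)  # → EDA_mean
```

Features ranked this way would then feed a logistic regression model, as in the study, to predict the challenge type in real time.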
Two-thumb touch typing (4T) is a touchpad-based text-entry technique also used in virtual reality (VR) systems. However, the performance of 4T in VR is far below that of 4T in a real environment, such as on a smartphone. Unlike “real 4T”, 4T in VR provides virtual cursors representing the thumb positions determined by a position tracker. The virtual cursor positions may differ from the thumb contact points on an input surface. Still, users may regard them as their thumb contact points. In this case, the virtual cursor movements may conflict with the thumb movements perceived by their proprioception and may contribute to typing errors. We hypothesized that virtual cursors accurately representing the contact points of the thumb can improve the performance of 4T in VR. We designed a method to provide accurate contact point feedback, and showed that accurate contact point feedback has a statistically significant positive effect on the speed of 4T in VR.
When designing Machine Learning (ML) enabled solutions, designers often need to simulate ML behavior through the Wizard of Oz (WoZ) approach to test the user experience before the ML model is available. Although reproducing ML errors is essential for a realistic simulation, such errors are rarely considered. We introduce Wizard of Errors (WoE), a tool for conducting WoZ studies of ML-enabled solutions that allows simulating ML errors during user experience assessment. We explored how this system can be used to simulate the behavior of a computer vision model. We tested WoE with design students to determine the importance of considering ML errors in design, the relevance of using descriptive error types instead of a confusion matrix, and the suitability of manual error control in WoZ studies. Our work identifies several challenges that prevent realistic error representation by designers in such studies. We discuss the implications of these findings for design.
In iterative physical object creation, only the latest design state is manifested in the physical artifact, while information about previous versions is lost. This makes it challenging to keep track of changes and developments in iterative physical design. In this paper, we propose the concept of Tangible Version Control (TVC), inspired by the visualizations of traditional version control systems. In TVC, the real-world artifact itself is used for exploring its alternative versions in physical space, while comparisons to an alternative version are displayed seamlessly on the artifact using augmented reality. Our implementation of TVC includes three different comparison modes, namely SideBySide, Overlay, and Differences. Furthermore, we discuss the anticipated use, opportunities, and challenges of using TVC in the future for individual users as well as for asynchronous collaborative work.
Online petitions have served as an innovative means of citizen participation over the past decade. However, their original purpose has been waning due to inappropriate language, fabricated information, and a lack of supporting evidence in petitions. The lack of deliberation in online petitions has influenced other users, deteriorating the platform to the point that good petitions are seldom created. Therefore, this study designs interventions that empower users to create deliberative petitions. We conducted user research to observe users’ writing behavior in online petitions and identified causes of non-deliberative petitions. Based on our findings, we propose ThinkWrite, a new interactive app promoting user deliberation. The app includes six main features: a gamified learning process, a writing recommendation system, a guiding interface for self-construction, tailored AI for self-revision, shortcuts for easy archiving of evidence, and a citizen-collaborative page. Finally, the efficacy of the app is demonstrated through user surveys and in-depth interviews.
People engage in different activities while eating alone, such as watching television or scrolling through social media on their phones. However, the impacts of these visual contents on human cognitive processes, particularly related to flavor perception and its attributes, are still not thoroughly explored. This paper presents a user study to evaluate the influence of six different types of video content (including nature, cooking, and a new food video genre known as mukbang) on people’s flavor perceptions in terms of taste sensations, liking, and emotions while eating plain white rice. Our findings revealed that the participants’ flavor perceptions are augmented based on different video content, indicating significant differences in their perceived taste sensations (e.g., increased perception of salty and spicy sensations). Furthermore, potential future implications are revealed to promote digital commensality and healthier eating habits.
Text entry is a fundamental part of human computer interaction, and typing games are a popular way to train and improve text entry skills. To assess the impact of competitive gameplay on text entry performance, we conducted a public app store trial. For this purpose, TypeClash, a competitive multiplayer mobile typing game, was designed, developed, and publicly distributed. The results demonstrate a significant effect of competitive gameplay mechanics on text entry performance regarding both speed and accuracy, with competitive gameplay resulting in better text entry performance.
Learning about Artificial Intelligence (AI) from a young age can help students become competent citizens able to move through our increasingly digital world with confidence and responsibility. This contribution presents a preliminary investigation in Bulgaria, Greece, Italy, and Romania to understand middle school teachers’ perspectives on how to best teach digital competencies for AI. It uses the Will, Skill, Tool model as a theoretical lens, and it aims to inform the design of educational content and online platforms to enable teachers to integrate AI education into their classrooms. Through a human-centred design process – including focus groups and a survey – needs and requirements were identified for a supportive online educational platform that aids teachers in AI education. The research results showed a positive attitude towards AI education and high motivation to introduce AI-related content at school, which translates to a positive Will factor. Regarding the Skill factor, teachers seem to have a basic level of digital skills but low AI-related skills. No significant problems emerged regarding the availability of resources, but further research investigating whether the Tool factor is accounted for would be desirable. Based on the results, six design implications for a web-based educational platform on AI have been formulated: (i) provide the required basics; (ii) make it relevant; (iii) make it interactive and collaborative; (iv) keep everyone in the loop; (v) make it accessible; (vi) motivate the user. These implications are further discussed to extend computational thinking frameworks to incorporate AI-related concepts and perspectives.
This paper introduces MoodTurner, a wearable and mobile system that supports individuals in the self-care of high sensory processing sensitivity. We discuss how different aspects of personal informatics and embodied perception can be combined to help individuals track and reflect on episodes of high sensory processing sensitivity, and to overcome the stigmatisation associated with the use of tracking tools. We present the design of a smart jewellery piece that tracks where episodes take place as well as their severity, and a complementary mobile application that allows users to review and document episodes of high sensory processing sensitivity.
While natural language interfaces (NLIs) are increasingly utilized to simplify interaction with data visualization tools, improving and adapting NLIs to the individual needs of users still requires the support of developers. ONYX introduces an interactive task learning (ITL) based approach that enables NLIs to learn from users through natural interactions. Users can personalize the NLI with new commands using direct manipulation, known commands, or a combination of both. To further support users during the training process, we derived two design goals for the user interface, namely providing suggestions based on sub-parts of the command and addressing ambiguities through follow-up questions, and instantiated them in ONYX. To trigger reflections and gain feedback on possible design trade-offs of ONYX and the instantiated design goals, we performed a formative user study to understand how to successfully integrate the suggestions and follow-up questions into the interaction.
Humor has various positive implications for our daily lives, and it has been shown to improve human-robot interaction as well. To date, humor has been applied to robots that mimic human behavior, thus missing out on improving interactions with the non-humanoid robots continually being deployed into our daily lives. In this work, we conducted an initial evaluation of the far-out possibility of creating non-verbal humorous behavior for a robot with no human features. The robot’s humorous gestures were designed by a clown therapist, an animator, and an HRI expert. The initial evaluation compared participants’ responses to humorous and non-humorous robotic gestures. Our study indicates it is possible for a simple non-humanoid robot to communicate a humorous experience through gestures alone, provided the movements are carefully balanced to bring about this good-humored encounter. This study’s gesture design insights can serve as first steps toward leveraging humorous behaviors in non-humanoid robots to enhance HRI.
This paper presents Pneumatic Auxetics, an inverse optimization method for designing and fabricating morphing three-dimensional shapes from patterns laid out flat. Origami and kirigami research has attempted to optimize, through inverse design, patterns that can be transformed into a target surface. In the research area of pneumatically actuated geometries, control of the transformation using skeletons and membranes has been attempted. Prior work on the inverse design of kirigami-based auxetic patterns cannot actuate deformation by removing air because the design does not consider thickness. Therefore, we simulate the pneumatic transition with a thick shell structure generated by offsetting the input surface (Figure 1). The designed skeleton is optimized for FGF (Fused Granular Fabrication) 3D printing and is printed using soft elastomeric materials, which allow for both a deformable hinge and a rigid pattern. Thus, a skeleton made of a single material can be deformed to approximate the shape of the target input surface by placing it in a membrane and removing the air. In this paper, we introduce related work and research context, challenges, the inverse design simulator and its fabrication by 3D printing, and potential future applications.
Virtual Reality (VR) can be used to create immersive infotainment experiences for car passengers. However, little is known about how best to incorporate the essentials of their surroundings to balance real-world awareness and immersion. To address this gap, we explored 2D and 3D visual cues of the rear-seat space to notify passengers about different real-world tasks (lower armrest, take cup, close window, and hold handle) during a first-person game in VR. Results from our pilot study (n = 19) show that users perceive a lower workload in the task hold handle than in all other tasks. They also feel more immersed in VR after completing this task, compared to take cup and close window. Based on our findings, we propose real-world task types, synchronous visual cues, and various input and transition approaches as promising future research directions.
We report a within-subjects study of the effect of realistic and cartoon avatars on communication, task satisfaction, and perceived sense of presence in mixed reality meetings. For 2–3 weeks, six groups of co-workers (14 people) held a recurring real work meeting using Microsoft HoloLens 2 devices. Each person embodied a personalised full-body avatar with a realistic face and another with a cartoon face. Half the groups started in the realistic condition and the other half started in the cartoon condition; all groups switched conditions half-way. Initial results show that, overall, participants found the realistic avatars’ nonverbal behaviour more appropriate for the interaction and more useful for understanding their colleagues compared to the cartoon ones. Regarding the results over time, we identify different insights for cartoon and realistic avatars depending on which type of avatar was embodied first. We discuss the implications of these results for mixed and virtual reality meetings.
Gaming is a more accessible, engaging and popular pastime than ever before. Recent research highlights games as strikingly effective means of capturing and holding our attention — so effective, some argue, as to have a deleterious effect. An impassioned CHI 2021 panel discussion directed these concerns towards the ethics and adoption of dark patterns. And yet, we know little about how dark patterns are perceived and arise in the design, development and use of games. This paper seeks to address this knowledge gap by recounting findings from a design-led inquiry comprising interviews and workshops conducted with mobile game players, designers, developers, and business developers. We contribute an understanding of how dark patterns arise in the development, use and commercialisation of mobile games, their effects on players and industry professionals, and means for the consideration, negotiation and navigation of these strategies for gamer engagement by design — in support of healthier, highly-engaging game experiences.
Hua'er, a type of traditional oral performance, is one of the national intangible cultural heritages (ICH) of China. Experts have been trying to enhance the public's awareness of Hua'er protection through digital documentation technology, but there is not yet an efficacious means of attracting interest and popularizing knowledge. In this paper, we propose an interactive VR system that engages audiences to experience and understand the connotations of Hua'er performance. Based on an online survey with the public and in-depth interviews with Hua'er experts, we derive a set of design requirements for generating embodied storytelling and an interactive experience in VR. Accordingly, we design “Hua'er and the Youth” (HY), which integrates three methods (virtual avatar, participatory performance, and game-based knowledge acquisition), and conduct a between-subjects user study comparing HY with a comparison system. The results suggest that our methods significantly improved the audience's interactive experience, knowledge level, and awareness of ICH safeguarding.
In today’s society, where “one size fits all” no longer works, people’s requirements for material living have become more personalized. Although customization services exist for a variety of commodities, they are generally expensive, and such customization of physical artifacts is not accessible to average users. In this work, we follow the promise of Programmable Filament, a novel technique that enables end-users with low-cost Fused Deposition Modeling (FDM) 3D printers to gain multi-material printing capabilities at low investment. We discovered that this technique can be applied to mix the mechanical properties of two different materials (e.g., tensility) that may affect users’ comfort with 3D printed objects (e.g., personal optimal softness in sports gear) to meet individual needs.
We briefly introduce the process of 3D printing programmable filaments in various composition ratios using consumer-grade 3D printers and of fabricating objects with an FDM printer that blends the multi-materials in the hot end; present a tensile testing experiment on sample objects made of such filaments to show the feasibility of producing new properties; and demonstrate two real-world applications that can benefit from utilizing Programmable Filament. We conclude with the remaining limitations and a discussion of two potential structural improvements to the filament's printability for deployment.
A large body of work has investigated touch, mid-air, and gaze-based user authentication. However, little is known about authentication using other parts of the human body. In this paper, we investigate the idea of foot-based user authentication for public displays (e.g., ticket machines). We conducted a user study (N=13) on a virtual prototype, FeetAuth, in which participants used their dominant foot to rotate through PIN elements (0–9) augmented along a circular layout using augmented reality (AR) technology. We investigated FeetAuth in combination with three different layouts: Floor-based, Spatial, and Egocentric, finding that Floor-based FeetAuth resulted in the highest usability, with 4-digit PIN entry as fast as M=6.71 s (SD=0.67). Participants perceived foot-based authentication as socially acceptable and highlighted its accessibility. Our investigation of foot-based authentication paves the way for further studies on the use of the human body for user authentication.
AI-powered technologies are increasingly being leveraged in health and care practices for aging populations. However, we lack research on older adults’ perceptions of AI-driven health in long-term care settings. This paper investigates older adults’ perceptions of how one AI-powered technology, voice assistants, should be used for personal health management. We interviewed 10 older adults living in an assisted living community in the U.S. to explore their values around AI for health. Findings show that they value technologies that generate and share positive and relational health information. We use this emphasis on positive health representations to speculate on a critical refusal of negative health representations. We highlight this preference in contrast to existing deficit-based health tracking technologies for aging and discuss how researchers, developers, and designers can engage in better approaches to AI-driven health for older adults and other historically marginalized populations.
Over the past decade, initiatives to design for subjective well-being have gained increased attention and momentum in design research. These initiatives often draw from positive psychology to explore ways of making Positive Psychology Interventions (PPIs) more effective through technology. This paper explores how a mix of tangible and digital technology can realize activity-focused, diverse Emotion Regulation (ER) for its users. We propose that ER strategies can serve as a principle for designing technology that encourages users to savor, modify, reassess, or commemorate their experiences. By centering the design around music listening experiences, developing three exploratory prototypes, and reflecting on the development processes, the paper explores how users can be supported in lowering the motivational hurdles that get in the way of frequent engagement with a PPI. Variapsody, a music listening device, integrates three features, each deploying a different set of ER strategies that make the experience more enjoyable and meaningful. Variapsody's regulatory diversity offers users the choice of how to approach music listening and expands their repertoire of ER strategies. The first feature, Reaction Tile, inscribes users’ reactions to music onto a tangible, domino-sized tile to encourage them to savor the music. The second is Monofilter, which purposefully muffles the salience of background music while the user works on a cognitively demanding task. Vibelist is the third feature, which helps users capture and revisit the context of music listening experiences in a digital collage. The paper discusses the lessons learned and future research opportunities.
When a child is admitted to the hospital with a critical illness, their family must adapt and manage care and stress. HCI and Computer-Supported Cooperative Work (CSCW) research has shown the potential for collaborative technologies to support and augment care collaboration between patients and caregivers. However, less is known about the potential for collaborative technologies to augment family caregiving circles’ experiences, stressors, and adaptation practices, especially during long hospitalization stays. In this work, we interviewed 14 parents of children with cancer admitted for extended hospitalizations. We use the Family Adaptive Systems framework from the family therapy field as a lens to characterize the challenges and practices of families with a hospitalized child. We characterize the four adaptive systems from the theory: the Emotion, Control, Meaning, and Maintenance systems. Then, we focus on the Emotion system, suggesting opportunities for designing future collaborative technology to augment collaborative caregiving and enhance family resilience.
Radar is primarily used for applications like tracking and large-scale ranging, and its use for object identification has rarely been explored. This paper introduces RaITIn, a radar-based identification (ID) method for tangible interactions. Unlike conventional radar solutions, RaITIn can track and identify objects on a tabletop scale. We use frequency-modulated continuous wave (FMCW) radar sensors to classify different objects embedded with low-cost radar reflectors of varying sizes in a tabletop setup. We also introduce Stackable IDs, where different objects can be stacked and combined to produce unique IDs. As a result, RaITIn can accurately identify visually identical objects embedded with different low-cost reflector configurations. Combined with radar’s ability to track, this enables novel tabletop interaction modalities. We discuss possible applications and areas for future work.
Redirected walking (RDW) visually manipulates the virtual environment to imperceptibly redirect walkers and keep them in the tracking area, offering a virtual space larger than the physical one. An attractor is a redirected walking technique that captures the walker's attention and manipulates the walker's trajectory through rotational gain. However, visual attractors manipulate the walker's virtual environment using a predefined rotational gain, and the need to gaze at the attractor constantly, or its frequent appearance until the walker's direction matches the desired direction, limits their application. Moreover, when the walker fails to notice or ignores the attractor, reorientation fails. In this study, we designed human-sense-stimulating attractors that utilize the auditory and olfactory senses to improve rotational gain, naturalness, and immersion and to decrease the chance of reorientation failure. Although sounds and scents are invisible, they can be detected directionally; humans, however, cannot accurately localize the direction of a sound or scent. Based on these characteristics, auditory and olfactory attractors are proposed. We measured the amount of reorientation induced by the auditory and olfactory attractors and calculated the reorientation success rate. Additionally, the naturalness and immersion of the attractors were evaluated. The auditory attractor achieved a high reorientation success rate, naturalness, and immersion. The olfactory attractor induced more turn changes in the walker than the other attractors, and a high number of turn changes leads to a larger rotational gain. Auditory and olfactory attractors thus have the potential to overcome the shortcomings of visual attractors, such as frequent interventions.
Advanced technologies are increasingly enabling the creation of freeform devices: interactive devices with non-rectangular form factors. We explore the applications they inspire and how users may interact with such freeform devices. In a week-long design workshop, we invited non-specialist designers to invent freeform devices and reflect on their myriad form factors and the applications they engender. We clustered their concepts into Introspection, Community and Magic Exploration applications, allowing us to understand the perspective of non-specialists on freeform devices in real life.
Social media (SM) is a popular, accessible form of smart memory vault. Prior work shows that users are still unable to understand how their shared memories are used by smart systems to create Smart Interactions involving Personal Memories (SIPMs). This can lead to negative social repercussions such as cyberbullying. This work investigates the most memorable SIPMs on Facebook for Egyptian users and their impact on platform usage. We conducted an online survey (N=53) requesting critical incident reports about surprising Facebook SIPMs. The most remembered SIPMs were: customizing advertisements, cuing offline interactions, sharing data with third parties, and personalizing the newsfeed. Our results suggest that SIPMs, particularly customized advertisements, act as ambient memory augmentation solutions for users’ shared memories. Additionally, a negative platform perception does not necessarily translate into reduced platform usage. Our work opens a discussion about users’ expectations towards the ambient usage of their data in smart memory vaults.
To learn how to self-direct research, students must learn to reflect and improve upon a diverse set of metacognitive skills. Models like Agile Research Studios (ARS) provide ecosystems of tools and processes designed to help students hone their reflection skills as they practice research. However, students still struggle to enact their reflection processes across the supports available to them, as mentors coach them to do. MindYoga integrates a process framework that helps students monitor and enact their metacognitive reflection process across an ARS ecosystem. Findings show students using MindYoga were (1) able to monitor which metacognitive risks may affect their upcoming project work, (2) able to develop action plans based on mentor feedback to address these risks, and (3) actively reminded of their action items during relevant practice sessions. Moving forward, process frameworks like MindYoga can help learners develop and improve their work processes as they practice within learning ecosystems.
Due to the pandemic, social media has become an essential route to satisfying socializing needs. Expanding beyond dominant services like Facebook and Instagram, a new wave caused a stir: Clubhouse, a voice-centered social media platform. Despite its worldwide popularity after its launch in 2020, the general properties of Clubhouse have not been actively discussed. Accordingly, this study explores Clubhouse's opportunities and challenges as voice-centered social media through its user experiences. We conducted interviews with regular Clubhouse users (N=26) to gain insight into their motivation, social networking, and conversations. Findings highlight that voice is effective for establishing social relationships via interactivity and intimacy, mutual respect, and the convenience of ephemerality. Conversely, users reported patterns of the privacy paradox and an oligopoly of communication. Design guidelines for future voice-centric social media are proposed. Our initial study of Clubhouse will encourage more dialogue on voice-centered social media and its potential as a major platform.
Misconceptions surrounding disability and sexuality are still prevalent, and people with disabilities are often depicted as asexual and incapable of leading fulfilling sex lives. As a result, many individuals with disabilities struggle to access adequate sex education, with negative consequences such as unplanned pregnancies, body image issues and sexual exploitation. To explore needs and practices for accessing relevant and reliable information about disability and sexuality, we conducted semi-structured interviews with 5 participants. Results show that there are several topics which participants thought should be better explored in the context of disability and sexuality, but most available sex education resources are not inclusive. Moreover, people with disabilities, especially when young or less accustomed to self-advocating for their own needs, face difficulties engaging in meaningful conversations around sexuality. Based on the results of our research, we make recommendations for areas where HCI research could significantly contribute to making intimate interactions more accessible.
A major challenge in remote meetings is that awareness cues, such as gaze, become degraded despite playing a crucial role in communication and establishing joint attention. Eye tracking can overcome these obstacles by enabling the augmentation of remote meetings with gaze information. In this project, we followed a participatory approach, first distributing a scenario-based survey to students (n=79) to uncover their preference for eye-based joint attention support (real-time, retrospective, real-time & retrospective, or none) in remote university meetings. Building on these findings, we developed EyeMeet, an eye-based joint attention support system that combines state-of-the-art real-time joint attention support with retrospective attention feedback for remote meetings. In a four-week study, two student groups worked remotely on course assignments using EyeMeet. Our findings highlight that EyeMeet supports students in staying more focused during meetings. Complementing real-time joint attention support, retrospective joint attention feedback was recognized as providing valuable support for reflecting on and adapting behavior for upcoming meetings.
In this work, we propose MultiFingerBubble, a new variation of the 3D Bubble Cursor. The 3D Bubble Cursor is sensitive to distractors in dense environments: the selection volume resizes to snap to nearby targets. To prevent the cursor from constantly re-snapping to neighboring targets, MultiFingerBubble includes multiple targets in the selection volume, thereby increasing the targets’ effective width. Each target in the selection volume is associated with a specific finger. Users can then select a target by flexing the corresponding finger. We report on a controlled in-lab experiment exploring various design options regarding the number of fingers to use, the target-to-finger mapping, and its visualization. Our results suggest that MultiFingerBubble is best used with three fingers and colored lines to reveal the mapping between targets and fingers.
Our ability to empathize stands at the core of our existence as humans. It has dramatically contributed to our evolution as a species and remains key to our relationships with family, friends, and colleagues. Yet, we sometimes find it difficult to empathize, even more so in the online spaces that increasingly dominate our interactions.
In this work, we explore the potential of videoconferencing platforms as avenues for augmenting empathy. We have developed Project Us, a system that uses machine learning to analyze interlocutors’ tone of voice and facial expressions during online interactions, compiles their emotional valence, and feeds it back to each participant as a real-time visual cue. Tested with 40 users, Project Us shows the emergence of empathy-related behaviours, such as a higher awareness of one's own and others’ emotions, and an increased use of pro-social language, as evaluated through quantitative (natural language processing) and qualitative methods. We also analyze the ethical challenges of empathy augmentation.
A healthy dialogue between civil society and governments stems from meaningful communication and understanding of political data. Visualization tools have the potential to support this. To contribute to a growing Digital Civics agenda in the Global South, we built an interactive visualization prototype that enables open-ended analysis of legislative roll-call vote data from the Ecuadorian parliament; political actors were interviewed and shown this tool to explore future opportunities for open parliament technologies. This work serves as motivation for the design of open parliament technologies which ought to (i) provide stories and narratives about the parliament’s and legislators’ political history, (ii) support the understanding of how parliamentary bills and resolutions become law, and (iii) grapple with the socio-technical considerations that platforms must undergo in order to make citizen participation an incremental journey rather than a fixed destination.
Empathic vehicles are expected to improve the user experience in automated vehicles and to help increase user acceptance of the technology. However, little is known about potential real-world implementations and designs using empathic interfaces in vehicles with higher levels of automation. Given advances in affect detection and emotion mitigation, we conducted two workshops (N1 = 24, N2 = 22, Ntotal = 46) on the design of empathic vehicles and their potential utility in a variety of applications. This paper recapitulates key opportunities in the design and application of empathic interfaces in automated vehicles which emerged from the two workshops hosted at the ACM AutoUI conferences.
It is important to study how to help people quickly find misplaced objects. However, previous studies have focused on single-person scenarios without considering the influence of other people in public places. Based on object detection and face recognition technology, our system can help reduce the burden on people's memory. It can provide useful information whether the user has forgotten where the object is or someone else has moved it. The system includes a camera, a processing server, and a smartphone application. To evaluate our approach, we conducted a quantitative and qualitative user study with participants (n=12). We demonstrated the usability of this system in helping users find misplaced items in public settings with multiple people.
Feedforward is a training technique in which people observe themselves performing a new skill to promote rapid learning, commonly implemented via video self-modelling. Avatars provide a unique opportunity to self-model skills an individual’s physical self cannot yet perform. We investigated the use of avatars in video-based learning and explored the potential of feedforward learning from self-avatars. Using modern dancing as the skill to learn, we compared the user experience when learning from a human training video and an avatar training video, considering both self-avatars (n=8) and gender-matched generic avatars (n=8). Our results indicate that learning from avatars can improve the user experience over learning from a human in a video, providing attentional and motivational benefits. Furthermore, self-avatars make the training more relatable and immersive than generic avatars. We discuss the implications of this preliminary work, highlighting methodological considerations for feedforward learning from avatars and promising future work.
Crowdfunding provides project founders with a convenient way to reach online investors. However, it is challenging for founders to find the most promising investors and successfully raise money for their projects on crowdfunding platforms. A few machine learning based methods have been proposed to predict investors’ interest in a specific crowdfunding project, but they fail to provide project founders with detailed explanations for these recommendations, leading to an erosion of trust in the predicted investors. To help crowdfunding founders find truly interested investors, we conducted semi-structured interviews with four crowdfunding experts and present inSearch, a visual analytic system. inSearch allows founders to search for investors interactively on crowdfunding platforms. It supports an effective overview of potential investors by leveraging a Graph Neural Network to model investor preferences. In addition, it enables interactive exploration and comparison of the temporal evolution of different investors’ investment details.
Flying and ground robots complement each other in terms of their advantages and disadvantages. We propose a collaborative system combining flying and ground robots, using a universal physical coupling interface (PCI) that allows for momentary connections and disconnections between multiple robots/devices. The proposed system may better utilize the complementary advantages of both flying and ground robots. We also describe various potential scenarios where such a system could benefit interactions with humans, namely remote fieldwork and rescue missions, transportation, healthcare, and education. Finally, we discuss the opportunities and challenges of such systems and consider deeper questions to be studied in future work.
This paper aims to highlight how competencies are changing in response to technological progress and new tasks for sustainable development. New social challenges demand a new generation of competencies for applying technologies to produce positive effects on the wellbeing of society. Known approaches describe competencies under the assumption that they are applicable to any industry, but they are not. We suggest considering the quantity and quality of competencies in the context of a specific industry, identifying the special abilities needed for its improvement with the help of digital technologies. Industrial transformation promotes sustainability when it is based on a deep understanding of both technology and industry. The result of this research is a concept of competencies for digital transformation grounded in a variety of industries. This is a preliminary step towards advancing existing concepts of digital competencies to transform industry in support of sustainable development.
Promoting healthy and active lifestyles is an important objective for many governing agencies. The design of active urban environments can be an effective tool to encourage more active behaviors, and water features can attract people, improving their experience of urban space. To explore the potential of these concepts, we designed Fontana, an interactive public installation that aims to stimulate physical activity and social connectedness in urban outdoor spaces, using the multidimensional attractiveness of water. We focus on the use of embedded interactive technology to promote physical activity, using water as a linking element between users. Adopting a research-through-design approach, we explored how such installations can nudge people into active behavior while additionally strengthening social connectedness, using inclusive design principles. We report on insights gathered through this case study and findings of a preliminary user test, discussing the implications of this work for design researchers and practitioners.
Young children love to draw. However, at around age 10, they begin to feel that their drawings are unrealistic and give up drawing altogether. This study aims to help those who did not receive proper training in drawing at that age and as a result remain at that level of drawing. First, through 12 drawing workshop sessions, we condensed 2 prominent art education books into 10 core drawing skills. Second, we designed and implemented a novel interactive system that helps the user repeatedly train these skills across the 5 stages of drawing a nice car in accurate perspective. Our novel interactive technique, Tick'n'Draw, inspired by the drawing habits of experts, provides friendly guidance so that the user does not get lost in the many steps of perceptual and perspective drawing. Third, through a pilot test, we found that our system is quick to learn, easy to use, and can potentially improve real-world drawing abilities with continued use.
Over the last ten years, there have been great efforts to increase automation in the health-social care ecosystem, including the use of robotics to provide practical and social care. However, the development of these robots often does not include potential users until late in the design process, so they may not adequately address user expectations or needs. This pilot introduces LEGO Serious Play workshops as design tools to support individuals’ articulation of the potential benefits and consequences of robot care systems. The narratives elicited address key themes in robotics for care, indicating the workshops’ potential use early in design.
While a significant part of workplace communication now happens online, current platforms do not fully support socio-cognitive nonverbal communication, which hampers the shared understanding and creativity of virtual teams. Given that text-based communication is the main channel for virtual collaboration, we propose a novel solution leveraging an AI-based, dynamic affective recognition system. The app provides live feedback about the affective content of communication in Slack, in the form of a visual representation and percentage breakdown of the ‘sentiment’ (tone, emoji) and main ‘emotion states’ (e.g. joy, anger). We tested the usability of the app through a quasi-experiment with 30 participants from diverse backgrounds, linguistic analysis, and user interviews. The findings show that the app significantly increases shared understanding and creativity within virtual teams. Emerging themes included impression formation assisted by affective recognition, which supports the development of long-term relationships, as well as identified challenges related to transparency and the emotional complexity detected by AI.
Recent research on automating the transformation of low-fidelity (LoFi) sketches to code using Deep Neural Networks requires a large-scale dataset for generalizable results. This paper introduces the LoFi sketch dataset, an open-access dataset of hand-drawn smartphone LoFi sketches, to facilitate this research domain. It contains 4,527 LoFi sketches annotated with 21 categories of UI elements. Through these annotations, it provides the category and location of 41,560 constituent UI elements. We collected these sketches from 361 participants from 76 countries through questionnaires and by acquiring premade LoFi sketches to ensure the dataset’s diversity and ecological validity. The sketches were drawn using pen, pencil, mouse, touch, and stylus as input media and are available in raster and vector formats. This dataset enables further research on several avenues of AI assistance in UI design, such as UI element detection, sketch completion, UI layout refinement, and sketch-based image retrieval.
Smartphones continue to proliferate throughout our daily lives, not only in sheer quantity but also in their ever-growing list of uses. In addition to communication and entertainment, smartphones can also be used as a credit card to make contactless payments at kiosk systems, such as when ordering food, printing tickets, and self-checkout. When a user holds the phone close to the kiosk system to present payment credentials, we propose to also verify the user’s identity based on a photo of the back of their smartphone-gripping hand, which provides a second security layer. Compared to widely used facial recognition, the proposed approach addresses the recent struggles of identifying faces under masks and public concerns about potential privacy erosion, racial bias, and misuse. We find that the geometry of each individual’s hand, when it grips a phone, is identifiable, and we design a vision-based approach to extract the gripping-hand biometrics. In particular, we develop hand image processing schemes to detect and localize the gripping hand while denoising and normalizing the hand images (e.g., size and color). Furthermore, we develop a Convolutional Neural Network (CNN)-based algorithm to distinguish smartphone users’ gripping-hand images for authentication. Experiments with 20 participants show that the system achieves 99.5% accuracy for user verification.
Multisensory environments (MSEs) are thought to promote relaxing experiences and learning opportunities for children with neurodevelopmental disorders (NDD), especially autistic children. However, few proposals tailored to school settings address autistic children’s anxiety through relaxation while also answering support teachers’ needs. This paper presents Hoomie, a small relaxing multisensory space. We conducted a qualitative study with eleven NDD students and their support teachers to understand which elements allow as much flexibility as possible for children and teachers, so that the space is accessible to children with sensory processing dysfunction and adoptable in teachers’ routines. Findings suggest that by providing a wide range of possibilities in a few touchpoints, a small multisensory space can sustain children’s motivation to interact, while flexibility in activity management, activity parameters, and work method can answer teachers’ needs for adoptable tools.
In this research, we alleviate the problem that Natural Language Generation (NLG) chatbots may reply with inappropriate content, causing a bad user experience, by responding to users’ utterances with multiple replies to create a group-chat atmosphere. This approach builds on our finding that users tend to pay attention to appropriate replies and ignore inappropriate ones. We conducted a 2 (single reply vs. five replies) × 2 (anonymous avatar vs. anime avatar) repeated-measures experiment to compare the chatting experience across conditions. The results show that users have a better chatting experience when receiving multiple replies at once from the NLG model than when receiving a single reply. Furthermore, the effect sizes indicate that, for an NLG chatbot with a single reply and an anonymous avatar, providing five replies benefits the chatting experience more than switching to an anime avatar.
There is a need for research into the technology needs of people with Postural Tachycardia Syndrome (POTS). This paper examines the apps currently used by people with POTS and explores what this population would like in an app designed specifically for their health condition. We created a prototype app that allows both people with POTS and their carers to manage the condition, and we evaluated the prototype with potential users.
Climate change is intensifying weather around the world. In cities like Detroit, USA, larger storms are causing widespread flooding and sewer overflows. To adapt to the changing climate, Detroit is modernizing its sewer infrastructure, adding sensors, robotics, and advanced algorithms that have the potential to increase the capacity and adaptability of its sewer system. The water utility is struggling, however, to incorporate these new technologies into its existing user workflows. We conducted a user study of water operators in Detroit focused on understanding how they currently visualize and use one of their most critical data sources: weather data. Based on our findings, we developed a new weather dashboard that minimizes weather-data uncertainty by synthesizing multiple sources. This research aims to inform the design of new data interfaces for water operators and to identify best practices for incorporating uncertain data into data-driven decision-making processes.
Abilene Christian University recently rolled out multi-factor authentication (MFA) to the entire student body. Previous work has found frequent negative reactions to and dissent about MFA in university settings, which can lead to decreased compliance or the use of less secure passwords to compensate. We hypothesize that these responses are tied to the emotional impact of using required MFA for critical tasks. We present an empirical study of user perceptions of adopting two-factor authentication (n=465). Our findings indicate that, due to the time-sensitive nature of many tasks that require MFA, university students are likely to experience strong negative emotions towards MFA that drastically lower their perceptions of its utility and usability. However, our findings also show that these negative emotions can be at least partially mitigated when users feel more personally secure because of MFA, which can in part be controlled through rollout strategy and communication.
Despite the abundance of online resources for learning CSS, novice web developers struggle to develop the expert intuition for choosing the best CSS technique to build a given layout. We present Knowledge Maps (KM), a tool that helps users build transferable, conceptual knowledge of CSS techniques. By interactively exploring professional websites and categorizing those sites’ visual features and the CSS techniques used to create them, KM users discover the relevant similarities, differences, and use cases of various CSS techniques, developing the knowledge that characterizes experts. In a study where 6 users learned from conventional CSS tutorials and 7 users learned through KM, KM users identified the appropriate CSS to build a set of layouts with a 48% increase in accuracy, compared to a 15% increase for non-KM users, and showed development of transferable knowledge through KM.
Though existing social technologies provide new opportunities for individuals to make new friends despite busy schedules, the lack of assistance to scaffold interactions creates anxiety and high mental effort that make initiating and deepening connections with strangers challenging. In this paper, we present Cerebro, a context-aware system that encourages self-disclosure between strangers by engaging users in Opportunistic Collective Experiences (OCEs) with situated prompts. OCEs provide a clear participation structure and ground users’ interactions in their shared physical affordances; in addition to sharing their experiences, the situated prompts encourage users to share personal decisions that their partner can potentially relate to. Through a deployment study with 8 users, we found that (1) the ease of completing OCEs leads to active engagement and continued self-disclosure on Cerebro, and (2) shared collective experiences surface new common ground that supports follow-up interactions and self-disclosure after partners complete the same experience.
Body language is an essential component of communication. The amount of unspoken information it transmits during interpersonal interactions is an invaluable complement to speech alone, making the process smoother and more sustainable. In contrast, existing approaches to human-machine collaboration and communication are not as intuitive. This is an issue that must be addressed if we aim to continue using artificial intelligence and machines to augment our cognitive or even physical capabilities.
In this study, we analyse the potential of an intuitive communication method between biological and artificial agents, based on machines understanding and learning the subtle unspoken and involuntary cues found in human motion during interaction. Our work was divided into two stages: the first analysed whether a machine using these implicit cues would produce the same positive effect as when they are manifested in interpersonal communication; the second evaluated whether a machine could identify the cues manifested in human motion and learn to associate them with the command intended by its user. The results were promising, showing improved work performance and reduced cognitive load on the user’s side when relying on the proposed method, hinting at the potential of more intuitive, human-to-human-inspired communication methods in human-machine interaction.
New technologies and digitization have the potential to vastly alter our knowledge infrastructures. Specifically, this work focuses on the effects of interactive technologies on research practices, referred to here as “interactive research artifacts.” Current research investigates the communicative affordances of such technologies, but minimal work critically examines the creative ways scholars engage with these artifacts. Through in-depth interviews with 14 scholars, informed by the Design Studies literature, this work arrives at an understanding of interactive research artifacts as creative knowledge-creation tools rather than simply communicative tools. To design for a future where interactive research artifacts become commonly used scholarly tools, an in-depth understanding of the ways these artifacts are used as knowledge-creation tools is critical.
We showcase ESsense, a system of electrostatic (ES) sensors for alternative controllers, and its potential as a low-cost, easily assembled motion-detection system that enables new interactions otherwise inaccessible to DIY controller makers. We characterize the interactions these sensors suit best and demonstrate their ability to remap game inputs and recontextualize existing games with motion detection. The custom motion controllers give button-based games greater immersion than their original inputs by requiring motion that corresponds intuitively to the game’s displayed actions and reactions. We also developed a game demo that showcases the technology’s potential to create new experiences: it casts the player as a music conductor, allowing them to control the tempo of a music track by swinging a baton in front of a conductor’s stand. Given its ability to detect motion of any kind, we also discuss other possibilities for the system in contexts beyond games.
This work presents NERO, a game concept that uses the player’s active emotional input to map the player’s emotional state to representative in-game characters. Emotional input in games has mainly been used as a passive measure to adjust game difficulty or other variables; the player has not been given the possibility to explore and play with their emotions as an active feature. Given the high subjectivity of felt emotions, we focused on the player’s experience of emotional input rather than the objective accuracy of the input sensor. We implemented a proof-of-concept game using heart rate as a proxy for emotion measurement, and through repeated player tests the game mechanics were revised and evaluated. We gained valuable insights for the design of entertainment-focused emotional-input games, including emotional connection despite limited sensor accuracy, the influence of the environment, and the importance of calibration. Players overall enjoyed the novel game experience, and their feedback carries useful implications for future games featuring active emotional input.
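The NERO abstract describes using heart rate as an emotion proxy and notes the importance of calibration. As an illustrative sketch only (not the authors’ implementation; the function names, baseline procedure, and `scale` parameter are all assumptions), one simple way to turn heart rate into a clamped arousal value relative to a per-player resting baseline is:

```python
def calibrate_baseline(resting_samples):
    """Average resting heart-rate samples (bpm) to get a per-player baseline."""
    return sum(resting_samples) / len(resting_samples)

def emotion_intensity(heart_rate, baseline, scale=30.0):
    """Map a heart-rate reading (bpm) to a [0, 1] arousal value.

    `scale` is the bpm rise above baseline treated as maximal arousal --
    an assumed tuning knob, not a value from the paper.
    """
    return min(1.0, max(0.0, (heart_rate - baseline) / scale))
```

A game loop could then drive character state from `emotion_intensity` each frame; calibrating `baseline` per session mirrors the calibration concern the abstract raises.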
As machine learning (ML) becomes more relevant to our lives, ML education for college students without a technical background grows more important. However, few educational games are designed for the challenges these students experience. We introduce an educational game, Classy Trash Monster (CTM), designed to teach ML and data dependency to non-major students learning ML for the first time. The player can easily learn to train a classification model and solve tasks by engaging in simple game activities designed around an ML pipeline. Simple controls, positive rewards, and clear audiovisual feedback make the game easy to play even for novice players. Playtest results showed that players were able to learn basic ML concepts and how data can impact model results, and that the game made ML feel less difficult and more relevant. However, a proper debriefing session seems crucial to prevent misinterpretations that may occur in the learning process.
Persephone’s Feet is a virtual reality game that asks: what if you could only garden with your feet? Exploring a series of three foot-based gestures (Tap, Hover & Shake, and Kick), this work presents a proof of concept for a novel virtual reality gardening game. The game is designed as a relaxation and light-exercise activity for breaks, aimed at users who spend prolonged periods sedentary at a desk, which is increasingly common for both work and education in modern society. A preliminary evaluation suggests that foot-based gestures can engender fun and motivate light exercise.
The Melody of Mysterious Stones is a VR meditation game that utilizes spatial audio technologies. One of the most common mindfulness exercises is to notice and observe the five senses, including the sense of sound. To help players focus on their sense of sound, the Melody of Mysterious Stones uses spatialized sounds as game elements. Our play tests showed that players enjoyed missions with spatial audio elements. They also reported that spatial audio helped them focus on their sense of sound and therefore made them feel more engaged with the meditation materials.
Swallowing ability declines with aging, which may cause problems such as choking while eating and difficulty with oral expression, reducing older adults' quality of life. In this study, we collaborated with medical experts and older adults to develop an oral-training exergame app, MusicTongue. The game design is similar to "Taiko no Tatsujin": players follow a song they love and train their tongue to the music's beats. The game recognizes the player's movements through the front-facing camera and, once a session is finished, shows the training record. In the final iteration, we found that most older adults could operate the game smoothly, leading to a greater willingness to perform oral-training exercises.
Evoker is a narrative-based facial-expression game. Due to the COVID-19 pandemic, adolescents must wear masks in their daily lives. However, masks disturb emotional interaction through facial expressions, a critical component of emotional development, so a negative impact on adolescents' emotional development is predicted. To address this problem, we designed Evoker, a narrative-based game that uses real-time facial expression recognition. In this game, players are asked to identify an emotion from the narrative context of each mission and make a facial expression appropriate to that context to clear the challenge. Our game provides an opportunity to practice reading emotional contexts and expressing appropriate emotions, and thus has high potential for promoting the emotional development of adolescents.
Gratitude, according to Oxford Languages, is “the quality of being thankful; readiness to show appreciation for and to return kindness.” One way to express gratitude is through community service; however, we found that high school and college students find it difficult to access service opportunities, especially when busy with school and other extracurricular activities. Through eight in-depth interviews, we found that students had difficulty finding opportunities that sparked their interest, contacting organizations, and keeping track of their hours for organizations such as service clubs or college Greek life. To solve these problems, we created VolunteerSpot, an app that lets high school and college students easily find service opportunities at their fingertips. VolunteerSpot provides an interface for volunteers, who can filter service opportunities by category and proximity, and an interface for organizations, which can post opportunities. The app also offers a messaging feature for communication between groups and a review widget. We hope that with VolunteerSpot, students can give back to their communities and express gratitude in an easy and interactive way.
This study explores augmented reality (AR) as a new way to experience gratitude and inspire gratitude in others. We created GoGratitude, an AR application that allows users to create and view digital messages of gratitude connected to a physical location. We evaluated GoGratitude in a usability study (n = 25) in which we observed participants using the application. Most participants affirmed that the application would help them feel more grateful. Participants also had an overall positive response to the AR experience for expressing gratitude and suggested novel and unexpected uses for the application, as well as potential for misuse. We believe this study demonstrates the potential of using AR technology in new ways to positively influence the human-centered design space of gratitude.
The state of loneliness in modern society has been referred to by researchers as an epidemic, and its rate has been exacerbated by COVID-19. Loneliness can have significantly negative impacts on human wellbeing. In this paper, we propose gratitude interventions as a potential way to improve prosocial behaviour and social connectedness and to mediate the negative affect associated with loneliness. We present a multi-modal gratitude-intervention journaling app that builds on research in social isolation and gratitude interventions and aims to alleviate some of the consequences of loneliness.
Before the pandemic, Quebec's education system was already in a fragile state; now it is in an even more precarious situation. In this context, teachers, parents, and students face many challenges and are increasingly overloaded, stressed, and anxious. Maintaining good communication and collaboration between students, their parents, and teachers is essential. We have therefore developed DOÜ, a product system that acts as an intermediary between teacher and parent, through the child. DOÜ allows parents to strengthen their trust in teachers, children to better communicate their emotions, and teachers to feel valued in their profession.
While gratitude in the workplace is essential for group well-being and teamwork efficiency, showing gratitude to coworkers becomes difficult in remote work settings, which have become prevalent during the ongoing pandemic. Our research revealed that in remote workplaces, people often feel less emotionally touched by appreciation messages sent by coworkers via text-based communication, due to the lack of in-person interaction. In addition, it has become more challenging for coworkers to express their appreciation in a sincere and timely manner. To address this problem, we designed Co-Orb, an IoT desktop orb that connects to people’s work communication app and lights up with a customized emoji when a thank-you message is received from a coworker. By creating a multi-sensory experience that brings thank-you messages beyond the screen, we aim to make it more enjoyable for users to express and receive appreciation in a remote workplace, thereby promoting better work relationships.
We often take things for granted. One such thing is food. A lot of effort goes into bringing a plate of food to our tables; we recognize that effort when we cook for ourselves, but we often neglect it when we eat at restaurants. Invisible workers (people working behind the scenes), from farmers and workers in production, distribution, storage, and stocking to the chefs who cook our food, put a great deal of effort into the food on our plates. The goal of this paper is to acknowledge and make visible the effort of invisible workers by raising awareness and offering a means to express gratitude. As a specific case, we focus on workers at Indiana University diners, who represent part of a huge population of invisible workers. Our design uses a kiosk-based system that allows students who eat at Indiana University diners to share gratitude that reaches the invisible workers through channels such as email and audio messages.
PneuBots is a modular soft robotics construction kit consisting of seven types of self-foldable segments with high tensile strength, three types of pneumatic connectors and splitters, and a custom-designed box. The kit enables various assemblies to be made with the modular pieces, allowing creators to explore the world of soft robotics in playful ways. When combined with a FlowIO device, PneuBots allows seamless programmability of the different assemblies, enabling artists, designers, engineers, and makers to create dynamic, shape-changing, and interactive works that can be used for education, storytelling, dynamic art, or expression of affect and gratitude. In this paper, we present our current progress towards developing a soft robotics starter kit that is affordable, modular, programmable, and adaptable to users’ creative visions. Finally, we discuss some of our preliminary results with users who had the opportunity to try using PneuBots.
Humans are more disconnected from their food than ever. The gap between consumers, food, and producers is growing; as a result, customers rarely express their gratitude to producers, and the food industry appears to have lost some of its fundamental values. This paper aims to provide a better understanding of the distances that separate consumers, their food, and producers and, most importantly, to move beyond harmful practices that are currently pervasive in the food industry. We explore the possibility of recreating meaningful and deep connections between the different actors and food.