How long is a lifetime of practice in human-computer interaction? In my case, it has been 42 years, almost all of that in IBM Research (with a couple of sabbatical years in the then-new Watson Group). Having queued up for access to DEC and Prime machines during my PhD work in the mid-70s, I was positively thrilled when, upon arriving at the T.J. Watson Research Center in 1978, I found a dedicated 3277 “green screen” terminal on my desk attached to a 370 mainframe. Hot-rodded with an onboard character buffer and double-speed cursor circuitry, I could fly through writing and coding, personally experiencing the productivity benefits of sub-second response time.
Beyond some great hardware (for the time), I also had extraordinary colleagues. I imprinted on the working style of John Gould and Stephen Boies as we built the second hardware iteration of what became the world's first commercial voice mail system and its final manifestation as the messaging system for the 1984 Summer Olympics. In this talk I'll share some of the still relevant behavioral principles of system design we learned in the course of that project.
Beyond this, I'll review work in early (pre-Web!) Internet access for schools, home shopping, Web accessibility, and programmer productivity in peta-scale scientific computing. Each project will highlight a barrier to building useful and usable systems, barriers that still persist. More importantly, I'll tell you how we overcame those barriers, giving you something that will, perhaps, be useful in your own work.
I remember when I started my DPhil studies joking with friends that my research was improving the sum total of human happiness – by one. I was enjoying the work. It was as a post-doc, however, that I began to see how knowledge, or at the very least the pursuit of knowledge, is not neutral. The things that we choose to study, the problems that we choose to focus on, and the way we frame our questions, lead towards different benefits for different interests. If we want to make a better world, then perhaps we should focus on asking better questions.
One question that was a turning point for me was posed by Steve Walker in 2002. In a world where e-commerce and e-government already had thriving, well-financed research communities, he convened a workshop asking “Can there be a Social Movement Informatics?” The topics ranged from designing with voluntary organizations and trade-unions, to investigating hate speech in Internet bulletin boards and chat rooms. Together with colleagues, we ran projects around “Design for Civil Society”, and “Technology and Social Action”, exploring how we as technologists, designers and researchers can connect and collaborate more effectively with groups promoting social change.
Following on from that work, I won an opportunity to explore how participatory approaches in international social and economic development relate to understandings of participatory design in HCI. Working with the Sironj Crop Producers Company Ltd (a co-operative of small and marginal farmers in Madhya Pradesh, India) and Safal Solutions (a small software house focused on rural development, based in Telangana, India), this was my first attempt to apply participatory design methods in a context with very limited infrastructure and resources. How can we facilitate meaningful communication about priorities and possibilities across wide social, cultural, geographical, linguistic, experiential and economic divides? How does the way we arrange, organize and conduct projects aiming to advance ‘development’ affect the outputs, the outcomes and the impacts that are achieved? How can agency, creativity and control be shared in ways that move systems towards a more just world?
I don't know all the answers to those questions, but I have learned that the inequalities of this world are far greater than I had originally imagined. I started with high hopes that expertise in participatory design, together with a commitment to participatory development, would deliver radical results. I discovered that true participation and reciprocity are tougher than I thought. We cannot communicate effectively across such huge social divides without questioning, acknowledging and responding to our own positionality in the wider context. For example, we should ask how our own actions are contributing to harming others, such as the millions who will become, or are already, climate refugees. A few short-term “bungee research” visits will not lead us to real understanding. When key decision-making remains in the usual centers of power, that simply reinforces the neo-colonial arrangements that underpin the marginalization that we say we want to change.
To create a future for humanity as part of life on this planet, we must see changes in behavior close to centers of power – and that includes ourselves. We are already enmeshed in a system of unjust socio-economic relationships. “The problem” is not something that is “out there”, it is also “in here” and all around us. Are we asking the questions that really matter?
Fairness in machine learning (ML) and artificial intelligence (AI) has gained much attention in recent years, which has led to the creation of multiple fairness metrics and frameworks. However, it is still challenging for practitioners to decide which fairness metrics to incorporate in their work and how to effectively incorporate fairness into ML and AI models and systems. ML and AI can benefit from computer-human interaction (CHI) expertise and our backgrounds in empirically grounded research. Black-box ML and AI models that produce biased results need CHI expertise to add the human aspect to the evaluation and creation of AI systems. This will result in what I call Equitable AI systems. In my talk, I will describe an equitable AI system I built to support holistic diversity in higher education admissions, scholarships, hiring, etc. I will also discuss research from a CHI perspective on addressing bias in ML and AI.
Human–computer interaction researchers continue to support the exerting user in order to promote the many benefits of being physically active. Recently, thanks to technological advances, systems have emerged that can continuously sense, interpret and automatically act on information, opening the opportunity to design human–computer "integration", where the user and the system work in partnership. However, a knowledge gap remains: how to design human–computer integration experiences in an exertion context. In this talk, I present how I aimed to close this gap through the design of three eBike systems that "act on" different data types (motion, traffic-light data, and EEG (electroencephalography)) to explore integrations with the exerting body. The investigation informed the creation of the "integrated exertion framework", which can guide interaction designers on how to amplify, in an inclusive manner, a person's sensations of their abilities in an exertion context to create "superpower", "co-operative" and "symbiotic" human–computer partnerships.
In this dissertation, I present measurement methods to automatically identify manipulative user interfaces—colloquially known as “dark patterns”—at scale on the web. Using these methods, I quantify the prevalence of dark patterns in three studies and show that dark patterns are rampant on the web and thus a pressing concern for society.
First, I examine whether social media content creators, or “influencers,” disclose their relationships with advertisers to their audience. Analyzing over 500K YouTube videos and 2.1M Pinterest pins, I find that only about 10% of all advertising content is disclosed to users. Second, I examine various types of dark patterns in shopping websites. Analyzing data from 11K shopping websites, I discover over 1,800 dark patterns on over 1,200 websites that mislead users into making more purchases or disclosing more information than they would otherwise. Third, I examine dark patterns in political emails from the 2020 U.S. election cycle. Through an analysis of over 100K emails, I find that over 40% of emails from the median sender contain dark patterns that nudge recipients to open emails or make donations they might otherwise not make. I further lay out the conceptual foundation of dark patterns and articulate a set of normative perspectives for analyzing their effects. I conclude with how the lessons learned from the studies can be used to build technical defenses and to develop policy recommendations to mitigate the spread of these interfaces.
HCI has become especially interested in the promises and challenges of the user experience of AI, such as user acceptance, human-agent teamwork, and accessibility. Less discussed, however, is that the field of HCI routinely grapples with such challenges: across the various technologies commonly referred to as AI (e.g., predictive modeling, computer vision, NLP), what shared characteristics made human-AI interaction appear uniquely difficult to design in the first place? Synthesizing my hands-on design and research over the past six years, in my dissertation I worked to articulate whether, why, and how human-AI interaction appears uniquely challenging to design with established HCI methods. In this extended abstract, I first describe a human-AI interaction design framework as an answer to this question. I then discuss one critical implication of this framework: framing data-driven AI systems as living socio-technical systems that co-evolve with their users. I analogize this reframing to the shift from desktop computing to ubiquitous computing and outline the ethnographic, design, and technological research opportunities it reveals.
The HCI community believes in understanding socio-cultural norms and designing for users’ values - both of which can stem from users’ belief systems. Using stories from my research work in an Islamic context, I make a case for how religion can impact HCI research. In particular, I discuss a) the implications of socio-cultural norms and participants’ beliefs (e.g. hijab, or ’veil’) for HCI research in these settings; b) how religion shapes users’ individual and collective values and the socio-cultural norms that affect users’ understanding, use, or perception of technologies; and c) how our presumptions about a belief system, or our value tensions, can impact the reporting and viewing of such findings. Thus, HCI needs to look beyond engagement with populations to include their belief systems, in order to understand the interpretations, negotiations, and enactments of these values, their implications for our research, and its results.
As the push for intersection between decolonial and post-colonial perspectives and technology design and HCI continues to grow, the natural challenge of embracing different ways of approaching knowledge production without ’othering’ begins to emerge. In this paper, we offer what we call ’decolonial paths’, possible portals to navigate through this challenge. This collective exploration inspires five pathways for approaching decoloniality within HCI: understanding, reconsidering, changing, expanding, and reflecting. Non-prescriptive and non-definitive, these pathways offer HCI researchers a framework to investigate their own practice and the spaces of sociotechnical research and learning they inhabit.
Citations are central to the production and sharing of knowledge, and how, why, and where citations are used has been an intense subject of study across disciplines. We discuss citational practices and the politics of knowledge production within the field of Human-Computer Interaction (HCI), drawing on parallels from related fields, and reflecting on our own experiences of being cited and not cited, citing and not citing. We also present recommendations for making concrete changes across the individual and the structural in HCI, related to how citations are viewed, and how the field might advance in solidarity towards greater citational justice.
Digital technologies shape how individuals, communities, and societies interact; yet they are far from equitable. This paper presents a framework that challenges the “one-view-fits-all” design approach to digital health tools. We explore systemic issues of power to evaluate the multidimensional indicators of Latino health outcomes and how technology can support well-being. Our proposed framework enables designers to gain a better understanding of how marginalized communities use digital technologies to navigate unique challenges. As an innovative and possibly controversial approach to assets-based design, we stress the importance of industry and academia reflecting on their own organizations’ roles in the marginalization of communities, in addition to valuing the lived experiences of marginalized communities. Through this approach, designers may avoid amplifying structural and health inequities in marginalized communities.
The HCI community has largely failed to serve the millions of people in rural communities in the developed world. In part, I believe this is because our plural values do not match the more traditional, conservative values often found in rural communities. However, these rural communities, and in particular the marginalized populations within them, could greatly benefit from our work. I believe that one way to set the stage for deeper engagement with rural communities is by creating cultural micro-exposures—small brushes with the everyday realities of a culture or lived experience that is different from one’s own—using technology.
Personas—distilled representations of particular user groups—are a key part of many design processes. Researchers have created animal personas to represent nonhuman animals as stakeholders in design efforts. However, neither human nor animal personas allow for the representation of broader-scale ecological impacts in design decisions. To address this gap, we present an additional conceptual tool for design: the ecosystema. This design construct is analogous to a persona, but at the level of an entire ecosystem rather than of a particular human population or animal species. These constructs could allow ecosystem-wide impacts to influence design processes more effectively, and may provide greater leverage on current environmental crises than existing human-centered techniques.
In “Being a Beast”, Charles Foster recounts living with, and as, wildlife (e.g., otters, foxes). These encounters, he contends, forge human-nature connections which have waned, negatively impacting biodiversity conservation. Yet we need not live amidst beasts to bridge the human-nature gap. Cross-reality (XR) platforms (i.e., virtual and augmented reality) have the unique capacity to facilitate pseudo-interactions with, and as, wildlife, connecting audiences to the plight of endangered species. However, XR-based wildlife interaction, I argue, is a double-edged sword whose implementation warrants as much attention in HCI as in environmental science. In this paper I highlight the promise of XR-based wildlife encounters and discuss dilemmas facing developers tasked with fabricating mediated interactions with wildlife. I critique this approach by outlining how such experiences may negatively affect humans and the survivability of the very species they are meant to benefit.
This is like an abstract to a paper, but it is more abstract. In fact, it is the introduction to something which is a not paper. The global Covid-19 pandemic of 2020 represented an inflection point for our post-post-modern world, a moment where our old normal was dramatically arrested. We are now in a state of comprehensive flux as ‘new normals’ emerge, begin to solidify, and may evolve into—as yet undetermined—futures. This not paper is a facet and exploration of that flux as it relates to publication and conference culture, video conferencing systems, and how we both conduct, and share, research. You should read the whole of this abstract, but then you should take a step inside the not paper; it lives on the web over here: https://designresearch.works/thisisnotapaper/
COVID underscores the potential of VR meeting tools to compensate for the lack of embodied communication in applications like Zoom. But both research and commercial VR meeting environments typically seek to approximate physical meetings, instead of exploring new capacities of communication and coordination. We argue the most transformative features of VR (and XR more broadly) may look and feel very different from familiar social rituals of physical meetings. Embracing “weird” forms of sociality and embodiment, we incorporate inspiration from a range of sources including: (1) emerging rituals in commercial social VR, (2) existing research on social augmentation systems for meetings, (3) novel examples of embodied VR communication, and (4) a fictionalized vignette envisioning a future with aspects of “Weird Social XR” folded into everyday life. We call upon the research community to approach these speculative forms of alien sociality as opportunities to explore new kinds of social superpowers.
Businesses and the software built for them are usually associated with models that have economic benefit at their core. The various disciplines of design, such as communication design for branding or design for human computer interaction (HCI), support those value-creating processes. Wabi-Sabi is a Japanese philosophy and aesthetic concept that is not rooted in creating monetary value; bringing it to a discipline within HCI, or vice versa, therefore asks for pathways that allow open, interdisciplinary and intercultural learning. We used speculative and narrative methods such as poetic writing and scriptwriting to prototype our concepts and create a narrative that illustrates them. This paper investigates how Wabi-Sabi principles could be applied to the user experience of digital products, specifically business software. It further highlights a methodological approach that has proved valuable in investigating two initially contrary themes.
Co-creation with artificial intelligence (AI) is an emerging trend. However, little attention has been given to the construction of such systems for Japanese novelists. In this study, we built “BunCho”, an AI-supported story co-creation system in Japanese. BunCho’s AI is GPT-2 (an unsupervised multitask language model) trained on a large-scale dataset of Japanese web texts and novels. With BunCho, users can generate titles and synopses from keywords. Furthermore, we propose an interactive story co-creation AI system as a tabletop role-playing game. In summative studies with writers (N=16) and readers (N=32), 69% of writers enjoyed writing synopses with BunCho more than writing by themselves, and at least one of five common metrics, including creativity, improved in objective evaluation. In addition, 63% of writers indicated that BunCho broadened their stories. BunCho thus shows paths toward assisting Japanese novelists in creating high-level and creative writing.
Latency can be detrimental to the experience of Virtual Reality. High latency can lead to loss of performance and cybersickness. There are simple approaches for measuring approximate latency, and more elaborate ones that offer deeper insight into latency behavior. Yet there are still researchers who do not measure the latency of the system they are using to conduct VR experiments.
This paper provides an illustrated overview of different approaches to measure latency of VR applications, as well as a small decision-making guide to assist in the choice of the measurement method. The visual style offers a more approachable way to understand how to measure latency.
Deadlines are a constitutive aspect of research life that the CHI community frequently observes. Despite their importance, deadlines are understudied. Here we bring a mixed art-and-science perspective to deadlines, which may find broader applications as a starter methodology. In a field study, we monitored four academics at the office, on two days before a deadline and on two regular days after the deadline had passed. Based on face video, questionnaire, and interview data we constructed their profiles. We added a dose of fictionalization to these profiles, composing anonymized comic stories that are as humorous as they are enlightening. In the stressful and lonely days leading up to deadlines, the only common presence in all cases is the researchers’ computer. Accordingly, this work aspires to prompt an effort toward a deeper understanding of “deadline users”, in support of designing much-needed affective interfaces.
We have to learn all new technologies, and we continue to learn for as long as we use them and develop that use. Learning is therefore an integral part of human engagement with technology, as it is with all areas of life. This paper proposes that we should consider learning as an important part of all human computer interaction and that theories of learning can make an important contribution to HCI. It presents six vignettes that describe different ways in which this could happen: rethinking HCI concepts in terms of learning, applying learning theory to better understand established ideas in HCI, using learning research to inform HCI practice, understanding how people learn software, and inspiring us to rethink the aims of this discipline. This paper aims to start a conversation that could bring valuable new ideas into our “inter-discipline”.
What if things had a voice? What if we could talk directly to things instead of using a mediating voice interface such as an Alexa or a Google Assistant? In this paper, we share our insights from talking to a pair of boots, a tampon, a perfume bottle, and toilet paper among other everyday things to explore their conversational capabilities. We conducted Thing Interviews using a more-than-human design approach to discover a thing’s perspectives, worldviews and its relations to other humans and nonhumans. Based on our analysis of the speculative conversations, we identified some themes characterizing the emergent qualities of people’s relationships with everyday things. We believe the themes presented in the paper may inspire future research on designing everyday things with conversational capabilities at home.
The increasing digitalization of fashion design opens up new potentials for designing and experiencing fashion. The study presented in this paper aimed to investigate new potentials for artistic expression by deconstructing the human body into materials for design using technological devices. The senses of sight, hearing, and touch were used as materials for design by creating a distance between these stimuli and the body's surface and relocating them to the surface of a bodysuit that was designed for this purpose. Undergraduate students participated in the study by exploring bodily perception from the perspective of fashion design in a workshop. The findings of the workshop suggest the potential for fashion designers to use technologies as design tools for designing dress beyond the textile surface, turning the human body, in terms of bodily senses, into material for design.
We present Connected Companion (CoCo), a health tracking wearable that provides users with timely, context-relevant notifications aimed at improving wellness. Traditionally, self-tracking wearables report basic health data such as resting heart rate; these data are visualised and positive behaviours (e.g. exercising often) are encouraged with rudimentary gamification (e.g. award badges) and notification systems. CoCo is the first wearable to combine caffeine, alcohol and cortisol sensors, a context network (which predicts user context), and a wellness model (which establishes per-user wellness measures). Working in tandem these provide users with notifications that encourage discrete behaviours intended to optimise user-wellness per very specific biological and social contexts. The paper describes the (sometimes unexpected) results of a user-study intended to evaluate CoCo's efficacy and we conclude with a discussion about the power and responsibility that comes with attempts to build context-aware computing systems.
Our body and mind relate in ways which are extraordinarily enigmatic and seemingly incomprehensible. Recent findings exemplify this by showing not just that our minds can phenomenologically inhabit multiple bodies but also that our body can be accessed by multiple minds. As an exploration of this concept, we present Machinoia, a symbiotic augmentation that extends the user with two additional heads, each of which is a unique variation of the user's identity: who you once were, and who you’ll eventually become. We used a generative adversarial network to synthesize life-like human faces and controlled them through artificial attitude models extracted from the wearer's social media data, thus creating “artificial personal intelligences” of the wearer, bringing to life past and future versions of oneself.
The word aesthetics, as used in Human-Computer Interaction (HCI), tends to refer to visual characteristics of an interface. Furthermore, it is broadly taken to mean beauty, which, while a significant aspect of aesthetics, is not its only concern. Unfortunately, HCI tends to hold a narrow-sighted view of the topic that often ignores a rich history of discourse. Aesthetics is a key concern of philosophy, considering our perception of the natural and artefactual world. In more recent times, it has grown to consider all of our sensory perceptions of the world around us, where our encounters with everyday objects and environments are two areas of interest. Here, I explain how HCI describes aesthetics and give an overview of philosophical approaches to aesthetics, show where some common ground lies between the two, and suggest how aesthetic categorisations could work for artefacts in HCI.
User interface design often focuses so heavily on clean and minimal interface aesthetics that any deviation is often rejected as “ugly”. This tendency towards abstraction in UI design can be contextualized as a removal of the “human” or “physical world” from the aesthetic choices and design considerations for the system. To resist this techno-deterministic eradication of the human presence from UI design, as well as radically inject the human presence back into user interfaces, we present typeFACE, a web interface and generative adversarial network designed to create fonts from human faces. We provide an implementation and applications for such a system, as well as contextualize and analyze the history of “ugliness” and the “uncanny” in UI design history. We also discuss implications of such a system within the domains of data ownership, identity, and HCI design research.
Health and safety concerns have led to policies that put individuals under lockdown, but such restrictions lose effectiveness in the long term due to inherent human needs for connection and physical action. People maintain prosocial behaviors long-term only if they make decisions intrinsically themselves, as opposed to acting under forced restrictions. To build systems that effect positive social purpose amid pandemic and environmental concerns, we apply speculative design to create story structures and interactions that promote behaviors for social good. We designed stories and interactions using both plot-based narrative frameworks and character-based machine-learning-generated dialogues to foster cooperation. We then ran a series of workshops investigating how designers negotiate and collaborate to tell stories for social purpose using a "finish each other's stories" approach. This work illustrates the application of design fiction to promote sustainable behavioral patterns that value societal good.
Although technical systems may not seem on the surface to be philosophical in nature, there is a historical influence of Heidegger in Human-Computer Interaction (HCI) via the work of Winograd and Dreyfus. However, the late philosopher Bernard Stiegler critiqued this positivist reading of Heidegger, noting how Heidegger himself ultimately did not understand the political stakes of technology. Rather than abandon technology, Stiegler argued that we must repurpose technology to create a new form of society in the wake of the digital disruption. We review his often difficult philosophical vocabulary, his political stance, and his nearly unknown role in motivating a number of innovative software projects at the Institut de recherche et d’innovation du Centre Pompidou. It is precisely a fundamental philosophical reorientation that will allow researchers to create the new kinds of programs that can meet the challenge posed by our digital epoch.
The Futures Cone, a prominent model in design futuring, is useful for promoting discussions about possible, plausible, probable, and preferable futures. Yet this model has limitations, such as representing diverse human experiences as a singular point of “the present” and implicitly embedding notions of linear progress. Responding to this, we argue that a plurality of perspectives is needed to engage imaginations that depict a diverse unfolding of potential futures. Through reflecting on our own cultural and professional backgrounds, we offer five perspectives for design futuring as a contribution to this plurality: Parallel Presents, “I Am Time”, Epithelial Metaphors, the Uncertainties Cone, and Meet (with) “Speculation”. These perspectives open alternative approaches to design futuring, move outside prevalent notions of technological progress, and foreground interdependent, relational agencies.
In this article, I discuss the process of designing an object to protest against a specific surveillance device: the IMSI catcher, a controversial device used to monitor GSM networks. Because IMSI catchers are widely used at protests, I develop a tactical approach based on obfuscation, to be adopted collectively to counteract them. In this case study, (1) I present how remaking an IMSI catcher allows us to re-appropriate the technology and create a basis for designing a disobedient object; (2) I introduce some examples of obfuscation-based tactics to defeat surveillance, and the potential of inflatables; (3) I conceptualize a possible design of an object to defeat IMSI catchers and show the types of interactions it might generate in protests.
Quick, print this page! Otherwise, the information in this document might be digitally manipulated, and the initial creative intention and contribution will be changed. You will lose your access to this document when the database containing this paper is decommissioned or compromised. This paper highlights the relevance of features from analogue media in an increasingly fragile digital world. Digital content can be altered to change its meaning or be deleted altogether. With analogue media, this is much more difficult to do without anyone noticing. Just imagine the impact if YouTube were decommissioned and all its content rendered unavailable. In this paper, we outline concrete commercial examples and insights from research on issues with the distribution, storage, and manipulation of digital media and their impact on users. We conclude with strategies to preserve content, access, and artistic freedom in an increasingly digital future.
A person's name embodies identity. During user studies in Human Computer Interaction (HCI), persons are often renamed (pseudonymization) to hide their identity for privacy and ethical reasons. Pseudonymization occurs mostly as a “silent” affair of due diligence: researchers rarely give substantial information about the process, reveal a reflexive position, or acknowledge the underlying elements of power and identity negotiation. As HCI advances in mitigating the design of biased technologies and breaking oppressive structures, I argue, in this paper, that the field needs to reconsider research requirements such as pseudonymization, which can embody oppressive structures and erase identity. I present a review of papers from the 2020 CHI Conference on Human Factors in Computing Systems (CHI ‘20) that illustrates how silent the HCI approach is. My argument is built on Critical Race Theory, questioning the objectivity of such technical requirements. I use personal narratives to bolster this argument, ending with a call to the HCI community to acknowledge the power and privilege involved in renaming participants, with three recommendations for consideration.
The current web pesters visitors with consent notices that claim to “value” their privacy, thereby habituating them to accept all data practices. Users’ lack of comprehension of these practices voids any claim of informed consent. Market forces specifically designed these consent notices in their favor to increase users’ consent rates. Some sites even ignore users’ decisions entirely, which results in a mere theatrical performance of consent procedures designed to appear as if they fulfill legal requirements.
Improving users’ online privacy cannot rely on individuals’ consent alone; we have to look for complementary approaches as well. Current online data practices are driven by powerful market forces whose interests oppose users’ privacy expectations, making turnkey solutions difficult. Nevertheless, we provide a bird’s-eye view of privacy-improving approaches beyond individuals’ consent.
The privacy of personal data is a human right that is systematically violated in the computing industry, according to human rights organisations. The vision that technology would help society progress through more computing and more accumulation of personal data is now 30 years old. With the knowledge of today, such a vision would be different: instead of violating a human right, technologists could use data minimisation as a central pillar of innovation. A compelling body of evidence shows that the majority of consumers do not feel comfortable with being tracked or profiled. This paper analyses how this vision may be realised through a variety of projects and case examples. The conclusion is that technology without tracking, personal data collection, or personal data analysis may gradually emerge as the dominant mode of innovation and computing over the next 30 years.
Motivation: Immediate cognition without rationalization is called intuition, an empowering faculty. Many of us feel disconnected from our intuition, potentially causing us to struggle with confidently making decisions. Challenge: We set out to find out why that is and how to support aligned decision-making. Method: We interviewed fourteen experts on tuning into our intuition for decision-making, surveyed currently available decision-making support tools, and modeled a tool that extends current approaches with the expert insights. Results: Based on the experts’ insights, we modeled a decision trifecta including mind, heart, and gut, along with a narrative of how to educate on their interplay and how to use that to make more aligned decisions. We critically question to what extent decision support technology is a beneficial way forward. Impact: The model opens up a space for discussion around holistic decision-making from an individual’s perspective. It serves as a reflection tool for personal processes as well as for the suitability and limits of supportive technology.
HCI practice can be a force for social good, but can it become a spiritual practice? If so, how? These questions are somewhat taboo, usually discussed quietly at the fringes of the HCI community. This paper is based on the Bhagavad-Gita and proposes three ways to tie together a practical HCI career with our spiritual lives. The three approaches are broadly related to humanitarian action (karma yoga), love and devotion (bhakti yoga), and introspective insight (jnana yoga). Each offers a different perspective on our HCI practice and on how the practical challenge of being a researcher can be reframed as part of a spiritual path. The paper suggests approaches to issues such as emotional burnout and bias awareness, and is grounded in teachings given by major Indian teachers.
Web content accessibility is essential for a product to serve a wider range of users and to make content accessible to all users under different conditions. However, it is rarely mentioned in the ICT industry and not widely addressed by companies in China. Advocating for accessibility in an existing complex product poses more challenges and demands more effort. Because we believe everyone's experience matters and it is imperative to make the Alibaba Cloud console more inclusive, we propose a method that makes the accessibility adoption process easier for enterprise-scale products through incremental steps. Starting from getting stakeholders on board with explicit data, our practice includes a way to design an adapted, simplified yet comprehensible version of accessibility design guidelines, which can make the implementation of accessibility more executable in Chinese enterprises, as well as a set of level definitions that help prioritize tasks based on the user composition of a product. The approach can serve as a helpful reference for other companies and teams in China.
Co-location matters, especially when running collaborative design thinking workshops. What if participation cannot occur in person? How can we conduct these workshops remotely? Based on guidelines of the Wallet project, which provides a framework for in-person design thinking workshops, we present first experiences of facilitating two remote design workshops with radiation oncology faculty and residents. We report the logistics and tool support decisions for the first workshop, evaluate and adjust our approach for the second workshop, and present three lessons learned: incorporating schedule flexibility, prioritizing technology familiarity, and integrating communication channels.
During the COVID-19 pandemic, we had to transition our user-centered research and design activities in the emergency medical domain of trauma resuscitation from in-person settings to online environments. This transition required that we replicate the in-person interactions remotely while maintaining the critical social connection and the exchange of ideas with medical providers. In this paper, we describe how we designed and conducted four user-centered design activities from our homes: participatory design workshops, near-live simulation sessions, usability evaluation sessions, and interviews and design walkthroughs. We discuss the differences we observed in our interactions with participants in remote sessions, as well as the differences in the interactions among the research team members. From this experience, we draw several lessons and outline the best practices for remotely conducting user-centered design activities that have been traditionally held in person.
In recent years, privacy and data rights have garnered growing attention in public discourse, policy making, and scholarly research. New data protection laws are being rolled out globally to codify data rights and ensure individual control over how personal data is shared and used. This evolving landscape presents several opportunities and challenges for healthcare. In this case study, we outline a design research agenda that emerged from the practical needs of an open source community focused on digital health software for community health in low- and middle-income countries. We situate this case study in the global landscape of data regulations, and the call for responsible data practices that go above and beyond regulatory compliance. We share findings from the formative stages of our multi-stage design process, which include a scoping literature review and a reframing of institutional policies and procedures. A primary contribution of our case study is that it offers an example of the institutional ‘pre-work’ necessary to make sense of the complex data protection landscape, and to chart a path forward for designing software that better supports responsible data practices. This case study also articulates the important role for digital health designers and implementers in operationalizing patient data rights.
Deep neural networks (DNNs) routinely achieve state-of-the-art performance in a wide range of tasks, but it can often be challenging for them to meet end-user needs in practice. This case study reports on the development of human-AI onboarding materials (i.e., training materials for users prior to using an AI) for a DNN-based medical AI Assistant that aids in the grading of prostate cancer. Specifically, we describe how the process of developing these materials changed the team’s understanding of end-user requirements, contributing to modifications in the development and assessment of the underlying machine learning model. Importantly, we discovered that onboarding materials served as a useful boundary object for cross-functional teams, uncovering a new way to assess the ML model and specify its end-user requirements. We also present evidence of the utility of the onboarding materials by describing how they affected user strategies and decision-making with AI in a study deployment to pathologists.
User-centered design is typically framed around meeting the preferences and needs of populations involved in the design process. However, when designing technology for people with disabilities, and people with dementia in particular, there is also a moral imperative to ensure that the human rights of this segment of the population are consciously integrated into the process and respectfully included in the product. We introduce a human rights-based user-centered design process informed by the United Nations Convention on the Rights of Persons with Disabilities (CRPD). We conducted two editions of a three-day design workshop during which undergraduate students and dementia advocates came together to design technology for people with dementia. This case study demonstrates our novel approach to user-centered design, which centers human rights through the different stages of the workshop and actively involves people with dementia in the design process.
This case study presents our novel user-centered design model for mHealth applications through our experiences developing Battle Buddy, an mHealth app designed to support the physical, cognitive, and emotional well-being of US service members and their families. Our approach combines an Information Systems Research (ISR) framework with the qualitative methodology of Rapid Assessment Process (RAP) to 1) guide app development and 2) include end users in the design process in a meaningful way. The ISR framework is known in the HCI community but is rarely applied to the domain of mHealth applications, while RAP is a fast, cost-effective process for gaining insight into a situation from an insider’s perspective. This case study focuses mainly on our team’s experience with RAP in exploring the mHealth needs and design preferences of members of the military community through a series of end-user interviews conducted by a community “insider”. Findings from our work support combining the ISR framework with RAP as a process for designing future mHealth apps and understanding the unique needs and design preferences of specific groups of end users.
Conversational agents have been touted for their potential to support individuals over time as health coaches or personal assistants, but have yet to live up to this potential. Wizard-of-Oz (WOz) methods enable researchers to test early prototypes of conversational applications before they are fully implemented, with a human “wizard” filling in the gaps in functionality. Current WOz methods, however, are more commonly used for studies in a lab setting rather than deployment studies, which more accurately capture users’ interactions in the wild. We argue for the need for WOz methods for deployment studies that address key challenges, namely the need for easy-to-prototype technology that works reliably in the wild, as well as users’ expectations of 24/7 availability. We describe an initial approach that begins to address these challenges, as well as the insights gleaned from a two-week WOz study of t2.coach, a conversational agent health coach for diabetes self-management. We argue that the findings from our WOz study could not have been identified from a lab usability study with the same prototype, and that the research community needs to further develop methods for WOz deployment studies.
A multi-disciplinary team of volunteers collaborated with New York State to design the COVID contact tracing app for its residents (COVID Alert NY). Facing a myriad of constraints and conflicting forces, we conducted 13 evaluation studies, using 5 UX methods, in 10 weeks. These studies revealed over 150 usability and attitudinal issues, many of which were prioritized and addressed before launch on October 1, 2020. We selected six stories through which to convey some of the lessons we learned: parables, if you will, for UX practice.
This case study involves the design and evaluation of serious games, using longitudinal research and remote testing in an international setting. Current methods for cognitive assessment tend to be inconvenient, costly, and infrequently performed. This is unfortunate because cognitive assessment is an important tool: in the young it can detect atypical development, and in older people it can detect cognitive decline. For both young and old, cognitive assessments can identify problems and trigger interventions for reducing harms (e.g., adverse reactions to drugs) or providing treatment. Serious games for cognitive assessment can potentially be self-administered and played on an ongoing basis so as to track cognitive status over time, something that is not practical with current methods. Inspired by this opportunity, the BrainTagger team has developed a suite of cognitive assessment games. Studies are being carried out to assess the validity of these games for measuring the cognitive functions they target, but those studies do not address whether people will be willing to play the games repeatedly, without supervision, over an extended period of time. Thus we carried out a longitudinal study with BrainTagger. We report on the logistical challenges of running this study with an international team located in Canada and Japan during the COVID-19 pandemic. We also report on how the perceived “fun” of the games changed over time. Our games were all versions of whack-a-mole games, with each game requiring a different cognitive function to distinguish between targets (moles to hit) and distractors (moles to avoid). While the basic whack-a-mole game is fun to play, having to play the same games again and again over a longer time period appeared to be more challenging than anticipated, and motivation and acceptance seemed to gradually decrease over the course of the study.
We conclude that the addition of gamification features, such as leaderboards and in-game rewards, is needed to sustain enjoyment of our BrainTagger games, and likely of other games as well.
Electrical signals produced by muscle contractions have been found effective for accurately controlling artificial limbs. Myoelectric-powered prostheses can be more functional and advantageous compared to passive or body-powered prostheses; however, extensive training is required to take full advantage of a myoelectric prosthesis’s usability. In recent years, computer technology has brought new opportunities for improving patients’ training, resulting in more usable and functional solutions. Virtual Reality (VR) is a representative example of this type of technology. These preliminary findings suggest that myoelectric prosthesis training enhanced with VR can simulate a pain-free, natural, enjoyable, and realistic experience for the patient. It was also suggested that VR can complement prosthesis training by improving the functionality of the missing body part. Finally, it was shown that VR can resolve one of the most common challenges for a new prosthesis user: accepting the fitting of the prosthetic device to their own body.
In this paper, we describe two case studies of research projects that attempt to scale up HCI research beyond traditional small evaluation studies. The first of these projects focused on evaluating an interactive web application for promoting problem-solving in self-management of type 2 diabetes mellitus (T2DM) in a randomized clinical trial; the second one included deployment in the wild of a smartphone app that provided individuals with T2DM with personalized predictions for changes in blood glucose levels in response to meals. We highlight lessons learned during these two projects and describe four different design considerations important for large scale studies. These include designing for longevity, diversity, adoption, and abandonment. We then discuss implications for future research that targets large scale deployment studies.
Task design for crowdsourcing is a key factor limiting the quality of crowdsourced results. This case study presents our design process for a complex cognitive task: generating Dimensions/Values for categorizing ideas. Conveying the task to workers was a formidable design challenge. We present five strategies and lessons learned from testing with workers. Decomposing the cognitive process and testing mastery of each cognitive subprocess enabled us to finally convey the task requirements and obtain useful results.
Frequent monitoring of participant compliance is necessary when conducting large-scale, longitudinal studies to ensure that the collected data is of sufficiently high quality. While the need for achieving high compliance has been underscored and there are discussions on incentives and factors affecting compliance, little is shared about the actual processes and tools used for monitoring compliance in such studies. Monitoring participant compliance with respect to multi-modal data can be a tedious process, especially if there are only a few personnel involved. In this case study, we describe the iterative design of an interactive visualization system we developed for monitoring compliance and refined based on changing requirements in an ongoing study. We find that the visualization system, leveraging the digital medium, both facilitates the exploratory tasks of monitoring participant compliance and supports asynchronous collaboration among non-co-located researchers. Our documented requirements for checking participant compliance as well as the design of the visualization system can help inform the compliance-monitoring process in future studies.
This study tested two different approaches for adding an explainability feature to the implementation of a legal text summarization solution based on a Deep Learning (DL) model. Both approaches aimed to show reviewers where the summary originated from by highlighting portions of the source text document. The participants had to review summaries generated by the DL model with two different types of text highlights and with no highlights at all. The study found that participants were significantly faster in completing the task with highlights based on attention scores from the DL model, but not with highlights based on a source attribution method, a model-agnostic formula that compares the source text and summary to identify overlapping language. The participants also reported increased trust in the DL model and expressed a preference for the attention highlights over the other type of highlights. This is because the attention highlights had more use cases; for example, participants were able to use them to enrich the machine-generated summary. The findings of this study provide insights into the benefits and challenges of selecting suitable mechanisms to provide explainability for DL models in the summarization task.
This paper describes how to help product teams move increasingly towards responsible AI experiences by examining user trust within business contexts. We discuss the development, application, and validation of the AI Trust Score, which allows teams to measure the success of an AI feature from enterprise users’ perspectives. The Trust Score is a multi-dimensional metric which consists of several statements users respond to that inform their trust, including whether an AI feature helps with job efficiency and effectiveness and understanding how and when to use the feature for their job role. The metric, along with usability feedback, can be adapted to encourage UX-related conversations and actions within product or feature development processes.
The U.S. Census Bureau serves as the leading source of statistical data about the nation's people and economy, and has the responsibility of disseminating the data for public use. To accomplish this mission, an initiative was undertaken to develop a platform that makes the official statistical data easier to find, access, and use effectively and efficiently. In this paper, we describe a human-centered approach to designing and developing the platform, in particular the process of incrementally improving search functionality and search results presentation through usability evaluation. The process involves user research, expert review of design concepts, low-fidelity wireframe usability testing, high-fidelity usability evaluation, and collaboration between the government agency and academic institutions. Our progress so far in this project demonstrates the soundness of this human-centered approach.
With the rapid growth in virtual reality technologies, object interaction is becoming increasingly immersive, elucidating human perception and leading to promising directions for evaluating human performance under different settings. This spike in technological growth has sharply increased the need for a human performance metric in 3D space. Fitts’ law is perhaps the most widely used human prediction model in HCI history, attempting to capture human movement in lower dimensions. Despite the collective effort towards deriving an advanced extension of a 3D human performance model based on Fitts’ law, a standardized metric is still missing. Moreover, most of the extensions to date assume or limit their findings to certain settings, effectively disregarding important variables that are fundamental to 3D object interaction. In this review, we investigate and analyze the most prominent extensions of Fitts’ law and compare their characteristics, pinpointing potentially important aspects for deriving a higher-dimensional performance model. Lastly, we mention the complexities, frontiers, and potential challenges that may lie ahead.
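For reference, the extensions discussed above typically build on the one-dimensional Shannon formulation of Fitts' law, which predicts movement time $MT$ to a target at distance $D$ and of width $W$ using empirically fitted constants $a$ and $b$:

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

Extending this model to 3D interaction requires, at minimum, rethinking how $D$ and $W$ generalize to volumetric targets and unconstrained movement directions.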
People are increasingly encountering robots in public spaces. To increase the robustness of such in-the-wild robotic applications and to achieve their designed outcomes, existing research focuses on improving the technical reliability of robots and identifying effective strategies to prevent or recover from technical failures. However, in human-robot interaction (HRI), a user’s perception of a robot failure may not necessarily relate to technical issues. We focus on understanding users’ perception of robot behaviours and interactions within the context of a public space. In our exploratory study using a novel participatory design methodology, participants designed robot behaviours for applications in public spaces, and tested their design both in a simulator and on the physical robot. We investigate how participants’ perception and expectations change during this iterative participatory prototyping process, especially when the robot exhibits erratic or unexpected behaviours. Our work provides insights on users’ perception of robot failures, and how users’ design of robot behaviours shifts as they observe the robot within the spatial and social context.
We present a new device designed for field studies that assess the perception of overcrowding in nature experiences, with a special focus on areas subject to mass tourism (pre-COVID-19). The design of the device resulted from an interdisciplinary approach that mixes valuable techniques from User Experience exploratory research with the typical way of conducting overcrowding investigations in outdoor and wild areas. We began by reframing both nature and overcrowding as experiences and defining overcrowding as a disturbance to the nature experience. This led to a new idea for a more beneficial investigation of visitors’ perception, which is moment-to-moment and in context. The device supports recording someone’s subjective perception of overcrowding in space and time, with minimal task load or distraction from the nature experience. The streamlined design affords only two buttons with competing functionalities: one to signal “there are too many people at the park” and another for “the park is too empty”. The device continuously records time, location, and press-down button events, storing everything locally and anonymously, within the boundaries of the European GDPR. We applied our method to a field study at the Þingvellir National Park in Iceland, both a protected area and a UNESCO World Heritage Site. We then analyzed the collected data and visualized where and when visitors reported overcrowding. Initial results indicate that by complementing traditional questionnaires with moment-to-moment self-reporting, we could successfully estimate the disturbance from overcrowding over time and place, thus producing deeper insight into the quality of the experience than questionnaires alone. Our results also speak to some intrinsic qualities of the outdoor infrastructure.
The outcome has the potential to make park managers, designers, landscape architects and rangers more capable of understanding the complex interrelation between infrastructure and visitor flow, thus contributing towards the goals of a long-term sustainable management of the area.
Data in everyday life is represented in ways that are supposed to connect with our needs, tasks, and situations, to inform us and to facilitate effectiveness and functionality. And still, all we see are numbers, charts, or graphics. We engage in sense-making all day through analytical interpretation, which is often not intuitive and causes friction whenever we encounter it in an everyday context. In our work, we aim for a different experience in everyday data sense-making, one that translates sensor data, time, and data from external sources into lively yet mundane video vignettes. The aim is to connect the experience to the phenomenon behind the data, and to deny data the stage and spotlight in our Everyday. This case study shows the design and implementation of a novel, interactive video installation that projects mundane data into an Everyday context, giving the data a new form and meaning. We work with video vignettes that are designed as short, neutral video fragments, each showing a person in different mundane situations, acting alone or interacting with props. The design process went through three iterations before the installation was exhibited as part of a group exhibition. We reflect on the most important design decisions, turning points in the process, and how design and implementation were intertwined throughout the installation's preparation. We conclude with a short overview of the contribution and an outlook on future work.
We present a case study in which we developed an interface for remote vital signs self-monitoring at a large-scale isolation facility for COVID-positive patients, under disrupted conditions. These conditions were: a lack of time, lack of access to end users, changing requirements, high risk of infection, and supply chain limitations. We first describe the background of the development of the facility and the vital signs self-monitoring system. We use the five commonly prescribed activities of user experience design (Empathise, Define, Ideate, Prototype, and Test) to describe how our work as user experience designers was affected by these disrupted conditions. Finally, we recommend a focus on Empathy, Prototyping, and Communication for user experience practitioners and educators, whose training may be needed in similarly mission-critical, time-constrained circumstances.
In this case study we describe the evolution of a new method for creating future personas, called SciberPunks, for use in sustainable city design scenarios. SciberPunks channel the voice of the environment and have special abilities for feeling and expressing data, such as the ability to taste it or communicate it through living tattoos on the skin. The aim was to examine how environmental data could act as a bridge between people and nature, to encourage empathy towards “more-than-human” perspectives. We engaged five participants in activities designed to lead them through a process of engaging with information and data while building their personas. The activities utilised arts-based methods, as we were interested in the experiential aspects of engaging with data and how we might foster creative and sensory experiences with it. Activities included drawing, writing, and performing, and were framed by a single story that took participants on a journey through time: past, present, and future. Activities took place online due to COVID-19. Overall, participants produced five characters, including a shaman, a shape-shifter, and a fairy, all with special skills for connecting to nature and/or to each other.
In this article we report a case study of a Language Learning Bauhaus VR hackathon with the Goethe Institute. It was organized as an educational and research project to tap into the dynamics of transdisciplinary teams challenged with a specific requirement: in our case, to build a Bauhaus-themed German Language Learning VR App. We constructed this experiment to simulate how representatives of different disciplines may work together towards a very specific purpose under time pressure. Each participating team consisted of members from various expert fields: software development (Unity or Unreal), design, psychology, and linguistics. The results of this study cast light on the recommended cycle of design thinking and customer-centered design in VR, especially in interdisciplinary rapid prototyping conditions where stakeholders initially do not share competences. They also showcase the educational benefits of working in transdisciplinary environments. This study, combined with our previous work on human factors in rapid software development and co-design, including hackathon dynamics, allowed us to formulate recommendations for organizing content creation VR hackathons for specific purposes. We also provide guidelines on how to prepare participants to work in rapid prototyping VR environments and benefit from such experiences in the long term.
This case study presents how mixing speculative design with artistic methodology can contribute to the inquiry of technological potentialities in the future of work. The goal, and underlying belief, is that technologies such as artificial intelligence can augment employee creativity and support well-being at work. The co-design process followed an artistic approach and consisted of three cycles of labs, workshops, and events over the span of one year to support professionals with non-technical backgrounds in the ideation and conceptualization of possible futures. The artistic approach consisted of exploring technology from different perspectives through embodiment, artifacts, and the creation of speculative fictions. The research team that facilitated the labs was interdisciplinary, and the participants were assembled from different partner organizations in industry and the public sector. We share the learnings from this study from three different perspectives: our learnings from facilitating the artistic approach, our learnings from the future-of-work ideas and concepts developed by participants, and a discussion of what these learnings can mean to design practitioners and the research community. Results indicate that embodiment and speculative fiction can create engagement among professionals who lack technical expertise and support them in the collaborative exploration of alternative futures of work with novel and abstract technologies such as AI.
Mergers and acquisitions pose complex organizational and technical challenges. A user-centered approach can help ensure the success of a merger. By prioritizing the management of user expectations, by addressing change aversion and by ensuring users feel heard, a merger can not only be made more successful, but also be perceived as a positive change by end users. This case study describes and discusses the user-centered approach taken to successfully merge two established online LaTeX editors: Overleaf and ShareLaTeX. A mixed methods approach was used to simultaneously ensure broad coverage of the user base (quantitative; relevant because these were established products) and in-depth understanding of users' issues, opinions and motivations (qualitative; relevant to making informed design decisions). This user-centered approach was successful in helping the company to set a path for the merger, to adapt in the face of new data, and to create the foundation for sustained growth.
While the accuracy of Natural Language Processing (NLP) models has been improving, users have expectations of models beyond what accuracy alone captures. Despite practitioners' attempts to inspect model blind spots or lacking capabilities, the status quo processes can be ad hoc and biased. My thesis focuses on helping practitioners organize and explore the inputs and outputs of their models, such that they can gain more systematic insights into their models' behaviors. I identified two building blocks that are essential for informative analysis: (1) scaling up the analysis by grouping similar instances, and (2) isolating important components by generating counterfactuals. To support multiple analysis stages (training data assessment, error analysis, model testing), I designed various interactive tools that instantiate these two building blocks. In the process, I characterized the design space of grouping and counterfactual generation, seeking to balance machine power and practitioners' domain expertise. My future work proposes to explore how grouping and counterfactual techniques can benefit non-experts in the data collection process.
Argumentation skills are an omnipresent foundation of our daily communication and thinking. However, the learning of argumentation skills is limited due to the lack of individual learning conditions for students. Within this dissertation, I aim to explore the potential of adaptive argumentation skill learning based on Artificial Intelligence (AI) by designing, implementing, and evaluating new technology-enhanced pedagogical concepts to actively support students in developing the ability to argue in a structured, logical, and reflective way. I develop new student-centered pedagogical scenarios with empirically evaluated design principles, linguistic corpora, ML algorithms, and innovative learning tools based on an adaptive writing support system and a pedagogical conversational agent. My results indicate that adaptive learning tools based on ML algorithms and user-centered design patterns help students to develop better argumentation writing skills. Thereby, I contribute to research by bridging the boundaries of argumentation learning and argumentation mining and by examining pedagogical scenarios for adaptive argumentation learning from a user-centered perspective.
In the prehospital environment, head-worn displays (HWDs) could support paramedics and emergency physicians during complex tasks and procedures. Previously, HWDs have been used in emergency medical service (EMS) contexts to support triage, telemedicine, patient monitoring, and patient localization. However, research on HWDs in EMS has three limitations: (1) HWD applications have not been developed based on field research of prehospital operations and training, (2) there are few guidelines that direct HWD deployment and application design, and (3) HWD applications seldom have been tested in randomized controlled trials. Therefore, it is unclear how HWDs affect EMS work and patient outcomes. During my PhD studies, I am investigating the potential of HWDs in EMS. I am addressing the limitations of previous research by conducting a literature review, a field study, design workshops, and a controlled evaluation study. The ultimate aims of this research are to benefit the work of EMS staff and to improve patient safety.
This work sits in the fields of Human-Computer Interaction and accessibility research dedicated to the study and development of technology used by people who are blind or visually impaired. Increasingly, researchers have stated the need to move away from technological solutions that intend to ‘normalize’ disabled individuals, towards providing alternative ways that accommodate diverse bodies and minds. To achieve this, scholars and activists call for a shift in the design paradigm in which both the designers’ orientation and the design processes centre not only the needs of people with disabilities but also their lived experience and tacit knowledge. Moreover, mainstream technologies must be built to accommodate them to the best extent possible, instead of leaving that responsibility to specialised assistive technologies. My PhD has focused on uncovering and highlighting the competencies that people with visual impairments employ in their technology practices, and how these are showcased, by closely examining a corpus of ethnographic data, including a comprehensive set of video demonstrations. Furthermore, my research aims to explore how these findings can be used for practical design within and beyond the accessibility and assistive technology fields, resulting in the production of resources that aid design for supporting and extending such competencies.
My research is on Human-Robot Interaction (HRI) from a User Centered Design (UCD) perspective and values the user's qualitative assessment of their interaction under the concept of User Experience (UX). Traditionally, UX is captured in a written questionnaire as a satisfaction survey. However, with the rise of electronic devices equipped with a diversity of sensors, actuators and processing capabilities (turning everyday appliances into robots) comes an intention to migrate from qualitative to quantitative assessment of UX, for example through visual recognition of emotional facial expressions. In this regard, what I present as a novelty in the research area is the study of interaction data generated through the robot's interface during its intended (everyday) use, while providing an anonymous experience (no cameras, no mics) to the user.
For the experimental setup, I designed and built a robotic Desk-Lamp with 5 degrees of freedom (height, brightness, projection angle, sensitivity of the interface and ambient lighting) as a test bench, equipped with: an interface instrumented to sense the force delivered by the user while “pushing buttons”; an 18-variable data-logging system at 20 SPS (samples per second); high-precision and high-repeatability electronics and mechanisms; and real-time sensing of the user with low-latency operation enabled by parallel processing. The test bench is designed to be used in multiple experiments with users, varying the robot's behavior and interaction procedures to compare the interaction data generated at the interface with a written UX questionnaire.
In the twenty-first century, people need to learn and apply high-level thinking skills, such as decision-making, problem-solving, and critical thinking, in their daily lives. Traditionally, people in need can get help from human assistants to complete these thinking tasks; yet such human experts are not always accessible. Intelligent agents, such as physical robots and virtual chatbots, are promising alternatives to human assistants in these tasks. My doctoral research focuses on designing intelligent agents to assist human users and evaluating the impact of their interaction mechanisms (i.e., ways of behaving) on user experience. Specifically, my work pursues this theme by proposing: 1) an anticipation-autonomy framework that models a service robot's proactivity in decision-making support contexts (decision-making); 2) a bot assistant that helps users overcome the problem of writing poor-quality comments in online mental health communities, either by assessing their writing performance or by recommending writing examples (problem-solving); and 3) a chatbot, compared against a non-conversational tool, for facilitating people to read academic papers critically (critical thinking). My dissertation contributes designs of intelligent agents for helping people in high-level thinking tasks, along with insights for developing appropriate interaction mechanisms for these agents in the future.
Presence is the feeling of actually being located in a virtual environment, and has been subject to intensive research for as long as virtual reality has existed. Nowadays, it is commonly evaluated using subjective measures, which most of the time take the form of questionnaires. However, there have been doubts about whether measures of this type can truly capture the actual concept behind presence (if such a concept exists, which is highly likely). While working on my master's thesis, I found that it is quite difficult in practice to purposefully employ more objective measures in specific experimental designs. This is why I decided to dedicate myself to making this easier for as many scenarios as possible. My current work involves advanced pattern recognition and general methods from digital signal processing to analyze objective, continuous sensory data like eye-tracking and electromyography.
Input devices, such as buttons and sliders, are the foundation of any interface. The typical user-centered design workflow requires developers and users to go through many iterations of design, implementation, and analysis. The procedure is inefficient, and the results are highly biased by human decisions. While computational methods are used to assist various design tasks, there has not been any holistic approach to automating the design of input components. My thesis proposes a series of Computational Input Design workflows: I envision a sample-efficient multi-objective optimization algorithm that cleverly selects design instances, which are instantly deployed on physical simulators. A meta-reinforcement learning user model then simulates user behavior when using each design instance on the simulators. The new workflows derive Pareto-optimal designs with high efficiency and automation. I demonstrate designing a push-button via the proposed methods; the resulting designs outperform the known baselines. The Computational Input Design process can be generalized to other devices, such as joysticks, touchscreens, mice, and controllers.
In the era of everyday ubiquitous computing, all of us have at least once lost track of time, space or our social environment when using our smartphone. In such situations, we might not feel present in the here and now of the physical real world. Presence has been a long-term topic of interest in virtual reality research as the feeling of being there, in a technology-mediated virtual world. However, we have yet to understand our feeling of being here and now in the physical real world given the wide availability and easy accessibility of smartphones. In my thesis, I want to examine the concept of real-world presence from theoretical, user, and technology development perspectives. I aim to understand current mobile solutions and to develop new ones, with the goal of supporting our presence in the physical real world.
The physical urban environment has been shown to play a large role in community resilience and well-being, but processes to plan for, revitalize, and repair these environments often show the goals of local residents to be at odds with official agendas. My research focuses on building co-creative design aids for ordinary citizens to help them create expert-level visualizations to communicate plans for tactical, urban revitalization projects in their communities. I rely on a variety of both technical and qualitative methods to build tools and algorithmic techniques as well as understand the scope of design knowledge of novice community members.
Ethnography has firmly established its position in the Human-Computer Interaction (HCI) community. Many studies have benefited from following ethnographic approaches to arrive at a grounded and comprehensive understanding of the respective research context. Applying that to the non-Western world, however, comes with challenges for researchers. Aside from ethical concerns which have been addressed in the past, we want to use this workshop to foster conversations and discussions on authority, bias and immersion when conducting ethnographic field work in the non-Western world – especially as a Western researcher. The main objective of this workshop is to exchange experiences and to identify common aspects and ways of overcoming, coping with or even embracing the messiness in ethnographic work and derive guidelines based on these discussions.
Everyday life hinges on smell, taste, and temperature-based experiences, from eating to detecting potential hazards (e.g., the smell of rotten food, microbial threats, and non-microbial threats such as hazardous gases) to responding to thermal behavioral changes. These experiences are formative as visceral, vital signals of information, and contribute directly to our safety, well-being, and enjoyment. Despite this, contemporary technology mostly stimulates vision, audition, and – more recently – touch, unfortunately leaving out the senses of smell, taste, and temperature. In the last decade, smell, taste, and temperature interfaces have gained renewed attention in the field of Human-Computer Interaction, fueled by the growth of virtual reality and wearable devices. As these modalities are further explored, it is imperative to discuss the underlying cultural contexts of these experiences, how researchers can robustly stimulate and sense these modalities, and in what contexts such multisensory technologies are meaningful. This workshop addresses these topics and seeks to provoke critical discussions around chemo- and thermo-sensory HCI.
The home is a place of shelter, a place for family, and for separation from other parts of life, such as work. Global challenges, the most pressing of which are currently the COVID-19 pandemic and climate change, have forced extra roles into many homes and will continue to do so in the future. Biodesign integrates living organisms into designed solutions and can offer opportunities for new kinds of technologies to facilitate a transition to the home of the future. Many families have had to learn to work alongside each other, and technology has mediated a transition away from standard models of operation for industries. These are the challenges of the 21st century that mandate careful thinking around interactive systems and innovations that support new ways of living and working at home. In this workshop, we will explore opportunities for biodesign interactive systems in the future home. We will bring together a broad group of researchers in HCI, design, and biosciences to build the biodesign community and discuss speculative design futures. The outcome will be an understanding of the role of interactive biodesign systems in the home, as a place with extended functionalities.
Immersive media is becoming increasingly common in day-to-day scenarios: from extended reality systems to multimodal interfaces. Such ubiquity opens an opportunity for building more inclusive environments for users with disabilities (permanent, temporary, or situational) by either introducing immersive and multimodal elements into existing applications, or designing and creating immersive applications with inclusivity in mind. Thus, the aim of this workshop is to create a discussion platform on the intersections between the fields of immersive media, accessibility, and human-computer interaction, outline the key current and future problems of immersive inclusive design, and define a set of methodologies for the design and evaluation of immersive systems from an inclusivity perspective.
People engage in sportive activities for reasons beyond improving their athletic performance. They also seek experiences like fun, adventure, a feeling of oneness, clearing their heads, and flow. Since sport is a highly bodily experience, we argue that taking an embodied interaction perspective to inspire the interaction design of sports systems is a promising direction in HCI research and practice. This workshop will address the challenges of designing interactive systems in the realm of sports from an embodied interaction perspective, focusing on athletes' experience rather than performance. We will explore how interactive systems can enhance the sports experience without distracting from the actual goal of the athlete, such as freeing the mind. We will focus on several topics of interest such as sensory augmentation, augmented experience, multi-modal interaction, and motor learning in sports.
The Asian CHI Symposium 2021 is a joint event organized by researchers and practitioners in Asia. The symposium aims to bring together young and senior researchers from academia and industry in one forum to exchange ideas and foster social networks in the field of HCI. The symposium showcases the latest HCI work from Asia, as well as work focusing on incorporating Asian sociocultural factors in its design and implementation. In addition to circulating ideas and envisioning future research in human-computer interaction, this symposium aims to foster social networks among academics (researchers and students) and practitioners and to grow a research community from Asia.
We are facing increasing pressure to reduce travel and work remotely. Tools that support effective remote communication and collaboration are much needed. Social Virtual Reality (VR) is an emerging medium which invites multiple users to join a collaborative virtual environment (VE) and has the potential to support remote communication in a natural and immersive way. We successfully organized a CHI 2020 Social VR workshop virtually on Mozilla Hubs, which invited researchers and practitioners to have a fruitful discussion over user representations and ethics, evaluation methods, and interaction techniques for social VR as an emerging immersive remote communication tool. In this CHI 2021 virtual workshop, we would like to organize it again on Mozilla Hubs, continuing the discussion of proxemics, social cues and VE designs, which were identified as important aspects of social VR communication in our CHI 2020 workshop.
Our workshop will concentrate on the vulnerability of specific social groups due to various causes, including COVID-19, and the potential for technology design to result in empowerment. We want to address what new forms of vulnerability emerge and how we can design digital environments in a way that acknowledges vulnerability but also has the potential to empower people in ways that are meaningful for them. When planning the workshop, we will also reflect on social situations that can result in vulnerabilities for participants. Therefore, we will ensure that interested participants experience low barriers to participation, include a variety of people with different backgrounds, and ensure that interaction happens based on equality principles and in an atmosphere of solidarity. Participants can exchange ideas and thoughts without worrying about being exposed to biased assumptions. The workshop will allow for non-hierarchical and cooperative discussion and collaboration through interactive online exercises, resulting in a collaboratively developed zine. Finally, the social sustainability of the workshop will be ensured through a website, mailing lists, joint publications and continuous contact.
Decolonizing discourses teach us that we need to move away from the universalizing ‘grand narratives’ of knowledge production and focus on contextualizing diverse and situated experiences, epistemologies and narratives. Yet, few contributions actively demonstrate what a shift to decolonizing design means in practice. Participatory Design (PD) approaches are particularly well-suited to contributing to contemporary debates of decolonization in design due to PD’s long-standing political traditions and values of equality and empowerment, but even here empirical methods and techniques to fully realize pluriversality in design are lacking. In line with the CHI 2021 theme of Making Waves, Combining Strengths, this interactive workshop will invigorate the debates and practices of decolonization in HCI by bringing together designers and researchers in diverse global contexts and demonstrating how they are working with and adapting modes, concepts, methodologies and sensibilities into decolonizing design practices. Not only will this workshop provide new ways of thinking in HCI, but it will also fuse theories and practices to develop truly transcultural approaches to HCI.
As a wicked problem, limiting the harm caused by misinformation requires merging multiple perspectives in the design of digital interventions, including an understanding of human behaviour and motivations in judging and promoting false information, as well as strategies to detect and stop its propagation without unduly infringing on rights or freedoms of expression. Tools and online services are continuously being developed to support different stakeholders in this battle, such as social media users, journalists, and policymakers. As our studies have demonstrated, the expected impact of online solutions is hampered by limitations associated with lack of explainability, complex user interfaces, limited datasets, restricted accessibility, and biased algorithms, among other factors that can confuse, overwhelm, or mislead users. These ethical implications are typically neglected when new digital solutions to tackle misinformation are conceived. This hands-on workshop proposes to unpack the state of the art in social, societal and political studies and socio-technical solutions to stop misinformation, challenging the participants first to reflect critically on the limitations of existing approaches, and then to co-create a future that integrates perspectives focusing on ethical aspects and societal impact.
This fourth Body as a Starting Point workshop investigates how to design interactive health technologies that assist users in developing insourcing abilities and then assist users in letting go of the same technology—in other words, supporting a transition from health technology dependence to independence. Participants will make explicit two inbodied design continua, (1) ownership, from “outsourcing” to “insourcing”, and (2) engagement period, from “single” to “cycle” to “permanent”, in order to prototype and reflect on interactive technology that takes the body as a starting point.
There has been increasing interest in the socially just use of Artificial Intelligence (AI) and Machine Learning (ML) in the development of technology that may be extended to marginalized people. However, exploring such technologies entails developing an understanding of how they may increase and/or counter marginalization. The use of AI/ML algorithms can lead to several challenges, such as privacy and security concerns, biases, unfairness, and lack of cultural awareness, which especially affect marginalized people. This workshop will provide a forum to share experiences and challenges of developing AI/ML health and social wellbeing technologies with/for marginalized people and will work towards developing design methods to engage in the re-envisioning of AI/ML technologies for and with marginalized people. In doing so, we will create cross-research-area dialogues and collaborations. These discussions build a basis to (1) explore potential tools to support designing AI/ML systems with marginalized people, and (2) develop a design agenda for future research and AI/ML technology for and with marginalized people.
The safe deployment of autonomous physical systems in real-world scenarios requires them to be explainable and trustworthy, especially in critical domains. In contrast with ‘black-box’ systems, explainable and trustworthy autonomous physical systems will lend themselves to easy assessment by system designers and regulators. This promises to pave the way for improvements that can lead to enhanced performance as well as increased public trust. In this one-day virtual workshop, we aim to gather a globally distributed group of researchers and practitioners to discuss the opportunities and social challenges in the design, implementation, and deployment of explainable and trustworthy autonomous physical systems, especially in a post-pandemic era. Interactions will be fostered through panel discussions and a series of spotlight talks. To ensure lasting impact of the workshop, we will conduct a pre-workshop survey which will examine the public perception of the trustworthiness of autonomous physical systems. Further, we will publish a summary report providing details about the survey as well as the challenges identified in the workshop's panel discussions.
Automation is transforming traditional workplaces and work processes tremendously. While automated systems are no longer restricted to manufacturing environments but pervade various work domains in manifold appearances, automation initiatives and research are still driven from a technology and performance perspective. The goal of this workshop is to provide an interdisciplinary forum for automation-focused user experience research. It will bring together researchers and practitioners from different disciplines to create and transfer knowledge on the automation experiences of skilled workers and professionals at workplaces across domains. Through a keynote talk, participant presentations, and the group-wise drafting of research ideas, the workshop will address three main recent challenges: encountering workplace automation, collaborating with it, and building meaningful relationships with it. The outcome of this workshop will be a research agenda consisting of ideas for promising future research on automation experiences at the workplace.
Automation in driving will change the driver's role from active operator to passive supervisor. Although the vehicle will be responsible for driving manoeuvres, drivers will need to rely on the automation and understand its decisions to establish a trusting relationship between themselves and the vehicle. Recent progress in conversational agents and affective machines seems promising for establishing such trust between humans and machines. We believe it is essential to investigate the use of emotional conversational agents in the automotive context to build a solid relationship between the driver and the vehicle. In this workshop, we aim to gather researchers and industry practitioners from the fields of HCI, ML/AI, NLU and psychology to brainstorm about affective machines, empathy and conversational agents, with a particular focus on human-vehicle interaction. Questions like “What would be the specificities of a multimodal and empathic agent in a car?”, “How could the agent make the driver aware of the situation?” and “How can trust between the user and the autonomous vehicle be measured?” will be addressed in this workshop.
In this workshop, we strive to formulate a working definition of participatory digital citizenship, and to share issues, challenges, opportunities, methods and empirical examples pertaining to participatory digital citizenship as a goal. The rationale for such work lies in the extensive digitalization of everyday life, which has turned data into valuable capital and a means of manipulation. Excessively datafied environments and ever more powerful algorithms and artificial intelligences used for processing data pose a threat to societies' democratic arrangements and principles. Our goal is to explore the possibilities and limitations of expanding the concept of digital citizenship in a direction that addresses the deep power asymmetry between those who use data and those who are monitored.
Augmented, virtual and mixed reality technologies offer new ways of interacting with digital media. However, such technologies are not well explored for people with different ranges of abilities beyond a few specific navigation and gaming applications. While new standardization activities are investigating accessibility issues with existing AR/VR systems, commercial systems are still confined to specialized hardware and software, limiting their widespread adoption among people with disabilities as well as seniors. This proposal takes a novel approach by exploring the application of user model-based personalization for AR/VR systems to improve accessibility. The workshop will be organized by experienced researchers in the fields of human-computer interaction, robotics control, assistive technology, and AR/VR systems, and will consist of peer-reviewed papers and hands-on demonstrations. Keynote speeches and demonstrations will cover the latest accessibility research at Microsoft, Google, Verizon and leading universities.
Human augmentation, or augmented humans, is regarded as an important research field with a view to a future society in which computer technologies such as Virtual and Augmented Reality, Artificial Intelligence, Computer Vision, and Robotics are highly integrated. Human augmentation is not just research to make humans stronger; it should also be used to assist people's daily activities. For example, it can be used to teach sports or musical performance more effectively, and even to assist people with disabilities. This workshop invites novel research results or late-breaking results on designs, methods, implementations, or applications that augment or enhance human abilities, both physical and intellectual, using the advanced technologies of VR/AR, AI, CV, and Robotics.
The impact of Artificial Intelligence (AI) on our lives is far-reaching – with AI systems proliferating in high-stakes domains such as healthcare, finance, mobility, and law, these systems must be able to explain their decisions to diverse end-users comprehensibly. Yet the discourse of Explainable AI (XAI) has been predominantly focused on algorithm-centered approaches, suffering from gaps in meeting user needs and exacerbating issues of algorithmic opacity. To address these issues, researchers have called for human-centered approaches to XAI. There is a need to chart the domain and shape the discourse of XAI with reflective discussions from diverse stakeholders. The goal of this workshop is to examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we put an emphasis on “operationalizing”, aiming to produce actionable frameworks, transferable evaluation methods, concrete design guidelines, and a coordinated research agenda for XAI.
Research in computing is becoming increasingly concerned with understanding and mitigating unintended consequences of technology developments. However, those concerns are rarely reflected in how we submit, review, and publish our own work. Specifically, in talking about how our new apps, devices, and algorithms will change the world, we focus almost exclusively on positive consequences. There have been calls to require some speculation about negative impacts as part of the peer review process. This workshop will explore how to think about and report potential negative consequences in our papers in a way that’s practical, inclusive, and achievable. The aim is to draw on scholarship around creative-yet-grounded speculation about technology futures and to consider how these might be applied to publication and peer review. The workshop aims to inspire the CHI conference and the computing research community to meaningfully consider and act upon the potential negative implications of their work.
Space travel and becoming an interplanetary species have long been among humanity's greatest ambitions. Research in space exploration advances our knowledge of the fundamental sciences and challenges us to design new technology and create new industries for space. However, keeping a human healthy, happy, and productive in space is one of the most challenging aspects of current space programs. Our biological bodies, which evolved in Earth's specific environment, can barely survive by themselves in space's extreme conditions of high radiation, low gravity, and the like. The same holds for the moons and planets in the solar system that humans plan to visit. Therefore, researchers have been developing different types of human-computer interface systems that support humans' physical and mental performance in space. With recent advancements in aerospace engineering, and the democratized access to space through aerospace tech startups such as SpaceX and Blue Origin, space research is becoming more plausible and accessible. Thus, there is an exciting opportunity for researchers in HCI to contribute to the great endeavor of space exploration by designing new types of interactive systems and computer interfaces that can support humans living and working in space and elsewhere in the solar system.
Research on migration (both internal and external, voluntary and forced) has been an emergent domain in HCI and related disciplines over the past decade. However, as the number of migrants has grown over the last two decades, coupled with various unfolding global crises, new challenges faced by diverse types of migrants (e.g., international students, migrant workers) keep arising, and research on mobility is becoming entangled with many broader social and political issues. Hence, migration can no longer be considered a 'special case' for some immigrant and refugee communities; it is an everyday reality for hundreds of millions of people worldwide and across diverse socio-economic groups. Therefore, the objectives of this workshop are to (a) build a community of HCI researchers and practitioners involved in different domains, within and beyond migration, to share ideas and exchange expertise, (b) broaden the scope of HCI migration research and identify gaps within this field, and (c) provide a safe space for critical reflection on methodological approaches, research infrastructure, and space boundaries in relation to migration, in order to achieve greater real-world impact.
As conversational user interfaces (CUIs) become more prevalent in both academic research and the commercial market, designing usable and adoptable CUIs becomes ever more essential. Though research on the usability and design of CUIs has grown considerably over the past decade, many usability issues remain prevalent in current conversational voice interfaces, from issues in feedback and visibility, to learnability, to error correction, and more. These issues persist even in the most current conversational interfaces on the commercial market, such as the Google Assistant, Amazon Alexa, and Siri. The aim of this workshop is therefore to bring academics and industry practitioners together to bridge gaps in knowledge regarding the tools, practices, and methods used in the design of CUIs. The workshop will combine the research performed by academics in the field with the practical experience and needs of industry practitioners, enabling deeper discussions about the resources that require further research and development in order to build better and more usable CUIs.
Reinforcement learning (RL) is emerging as an approach to understand intelligence in both humans and machines. However, if RL is to have a meaningful impact in human–computer interaction, it is critical that these two threads are integrated. This is required for genuinely interactive RL-based systems which take into account user capacities and preferences. This workshop will build a community and form a research agenda for investigating RL in HCI.
There is increasing interest in food within the HCI discipline, with many interactive prototypes emerging that augment, extend, and challenge the various ways people engage with food, from growing plants and cooking ingredients to serving dishes and eating together. Grounding theory is also emerging that draws in particular on embodied interaction, highlighting the need to consider not only instrumental but also experiential factors specific to human-food interactions. This provides an opportunity to extend human-food interactions through knowledge gained from designing novel systems enabled by technical advances. This workshop aims to bring practitioners, researchers, and theorists together to discuss the future of human-food interaction, with a particular focus on designing the experiential aspects of human-food interactions beyond the instrumental. The workshop extends prior community-building efforts in this area and hence explicitly invites submissions concerning empirically informed knowledge of how technologies can enrich eating experiences. In doing so, people will benefit not only from new technologies around food, but will also retain the many rich benefits associated with eating, especially eating with others.
Immersive virtual experiences are becoming ubiquitous in our daily lives. Besides visual and auditory feedback, other senses like haptics, smell, and taste can enhance immersion in virtual environments. Most solutions presented in the past require specialized hardware to provide appropriate feedback. To mitigate this need, researchers have conceptualized approaches that leverage everyday physical objects as proxies instead. Transferring these approaches to varying physical environments and conditions, however, poses significant challenges to a variety of disciplines, including HCI, VR, haptics, tracking, perceptual science, and design. This workshop will explore the integration of everyday items for multi-sensory feedback in virtual experiences and set a course for future research endeavors in this area. Since the community still lacks a cohesive agenda for advancing this domain, the goal of the workshop is to bring together individuals interested in everyday proxy objects to review past work, build a unifying research agenda, share ongoing work, and encourage collaboration.
Imagine buying flowers for a loved one. After selecting a bouquet, at checkout you discover that the site sneaked a paid greeting card into your shopping cart. This is an example of a dark pattern, an interface designed to manipulate a user into behavior that goes against their best interests. The notion of dark patterns has fostered a growing critical discussion about which interfaces go too far in exploiting the user. The first aim of this workshop at CHI 2021 is to bring together a transdisciplinary group of design practitioners and researchers to discuss dark patterns across domains. The second aim is to identify actions to address dark patterns from within the design community, which might include e.g., setting industry norms, articulating values during the design process, or incorporating dark patterns into design education curricula. The third aim is to look beyond the design community and consider what changes designers might advocate for via interactions with e.g., consumers, media, and policymakers.
Competitive esports is a growing worldwide phenomenon that now rivals traditional sports, with over 450 million viewers and 1 billion US dollars in revenue each year. For comparison, Major League Baseball draws 500 million viewers and $10 billion in revenue, and FIFA soccer 900 million and $1.6 billion. Despite this significant popularity, much of the world remains unaware of esports; in particular, research on and for esports remains extremely scarce relative to esports' impact and potential.
The Esports and High Performance HCI (EHPHCI) workshop will begin to address that research gap. In esports, athletes compete through the computer interface. Because this interface can make the difference between winning and losing, esports athletes are among the most expert computer interface users in the world, just as traditional athletes are experts in using balls and shoes. The premise of this workshop is that esports technology will come to be applied broadly, improving performance across a wide range of human activity. The workshop will gather experts in engineering, human factors, psychology, design, and the social and health sciences to discuss this deeply multidisciplinary enterprise.
Spatial experience is an important subject in various fields; in HCI, it has mostly been investigated at the urban scale. Research on human-scale spaces has focused mostly on personal meaning or on aesthetic and embodied experiences in the space. Further, spatial experience is increasingly topical in envisioning how to build and interact with technologies in our everyday lived environments, particularly in so-called smart cities. This workshop brings together researchers and practitioners from diverse fields to collaboratively discover new ways to understand and capture human-scale spatial experience and to envision its implications for future technological and creative developments in our habitats. Using a speculative design approach, we will sketch concrete solutions that could help to better capture critical features of human-scale spaces and allow for unique possibilities for aspects such as urban play. As a result, we hope to contribute a road map for future HCI research on human-scale spatial experience and its application.
Serious concerns have been raised about social media's and online news outlets' contribution to a pandemic of misinformation. The sheer volume of misinformation, and its tendency to exploit people's cognitive biases, have eroded the public's trust in media outlets, governmental institutions, and the democratic process. With Human-Computer Interaction at the forefront of designing and developing user-facing computing systems, we have a special opportunity to address these issues and to work on solutions that mitigate the problems arising from misinformation. This workshop brings together designers, developers, and thinkers across disciplines to redefine computing systems by focusing on technologies and applications that instil and nurture critical thinking in their users. By focusing on the problem of misinformation and users' cognitive security, this workshop will sketch out blueprints for systems and interfaces that contribute to advancing technology and media literacy, building critical thinking skills, and helping users tell truth from falsehood.
The HCI Across Borders (HCIxB) community has been growing in recent years, starting with the Development Consortium at CHI 2016 and the HCIxB Symposia at CHI since. This year, we intend to hold an HCIxB workshop that aims to foster the scholarship potential of student and early career HCIxB researchers across the world, particularly those from and in the Global South, engaging on the topic of decoloniality. Through this symposium, we aim to create space for discussions that have been emerging in pockets of the HCI community, but could benefit from greater focus and attention in the interest of demarginalizing members and research areas of the community that have thus far remained on the margins of the discipline. We expect this virtual workshop at CHI 2021 to be an inaugural session for a series of virtual events that will help continue this conversation on decolonizing HCI’s borders.
In this workshop, we will explore and discuss future developments in mobile user interfaces for cyclists and users of similar interfaces or services. We highlight the challenge of balancing safety and ecological validity in experiments, and how novel and improved evaluation methods can improve the current situation. We aim to bring together researchers with a strong background in designing and evaluating novel user interfaces in the domain of bicycles and mobility, as well as practitioners who build consumer products in that domain. The workshop's goal is to explore novel ways of designing and evaluating user interfaces for cyclists and similar users when interacting with mobile devices and services while riding.
The last several years have seen strong growth in Artificial Intelligence (AI) technologies, with promising results for many areas of healthcare. HCI has contributed to these discussions, mainly through studies on the explainability of advanced algorithms. However, only a few AI systems based on machine learning algorithms make it into the real world and everyday care. This challenging move has been named the "last mile" of AI in healthcare, emphasizing the sociotechnical uncertainties and unforeseen lessons of involving users in the design or use of AI-based systems. The aim of this workshop is to set the stage for a new wave of HCI research that accounts for, and begins to develop, new insights, concepts, and methods for transitioning from development to the implementation and use of AI in healthcare. Participants are invited to collaboratively define an HCI research agenda focused on healthcare AI in the wild, which will require examining end-user engagements and questioning underlying concepts of AI in healthcare.
Due to the evolving nature of technology and its impact on individuals, communities, and society, practitioners and designers in Human-Computer Interaction (HCI) are expected to consider ethics in their work. This expectation has inspired the development of a number of resources for practice, such as tools, frameworks, and methods for tackling ethical issues in HCI. But these resources suffer from low adoption rates, potentially because they are not yet part of the standard body of knowledge. To mitigate this issue, we argue that there is an urgent need for ethics education in HCI. Beyond defining ethics, an ethics curriculum must enable practitioners to reflect on the intended and unintended consequences of the technologies they create from the ground up, rather than as a fix or an afterthought. In this co-design workshop, we aim to build upon existing practices and knowledge of ethics in HCI and work with the CHI community to enrich the ethics curriculum. We will scaffold our collective understanding of the existing resources and create guidelines that support interactive educational experiences for HCI ethics curricula.
We are concurrently witnessing two significant shifts: digital devices are becoming ubiquitous, and older people are becoming a very large demographic group. However, despite the recent increase in related CHI publications, older adults continue to be underrepresented in HCI research as well as commercially. Therefore, the overarching aim of this workshop is to increase the momentum for such research within CHI and related fields such as gerontechnology. To this end, we plan to continue developing a space to discuss and share principles and strategies for designing interactions and evaluating user interfaces (UIs) for the ageing population. We welcome contributions proposing improved empirical studies, theories, and designs and evaluations of UIs for older adults. Building on the success of the last three years' workshops, we aim to grow the community of CHI researchers across borders interested in this topic by fostering a space to exchange results, methods, approaches, and ideas from research on interactive applications in support of older adults that are reflective of international diversity.
Smart devices have pervaded every aspect of daily life. Though single-device UX products are relatively successful, the experience of cross-device interaction is still far from satisfactory and can be a source of frustration. Inconsistent UI styles, unclear coordination, varying fidelity, pairwise-only interactions, a lack of understanding of intent, limited data sharing and security, and other problems typically degrade the experience in a multi-device ecosystem. Redesigning the UX for multi-device ecosystems is challenging, but it also affords many new opportunities. This workshop brings together researchers, practitioners, and developers with different backgrounds, including fields such as computational design, affective computing, and multimodal interaction, to exchange views, share ideas, and explore future directions for UX in distributed scenarios, especially heterogeneous cross-device ecosystems. Topics cover, but are not limited to, distributed UX design, accessibility, cross-device HCI, human factors in distributed scenarios, user-centric interfaces, and multi-device ecosystems.
The past two decades have seen an increase in research in the CHI community from South Asia, focused on designing for the unique and diverse socio-cultural, political, infrastructural, and geographical background of the region. However, the studies presented to the CHI community primarily focus on working with and unpacking regional contextual constraints (of users and infrastructures), thus taking a developmental stance. In this online workshop, we aim to broaden the perspective of CHI research and the community towards contributions from the region both within and beyond development, by bringing together researchers, designers, and practitioners working, or interested in working, in these regions on diverse topics such as universal education, global healthcare, accessibility, sustainability, and more. Through workshop discussions, a group design activity, and brainstorming, we aim to provide a space for symbiotic knowledge sharing and for defining shared visions and missions for HCI activities in South Asia that include, and move beyond, the development agenda.
The increasing use of personal data and AI in everyday technologies has amplified complex and intertwined socio-technical challenges. These challenges, often exemplified by data abuse, breaches, and exploitation, must be alleviated to support sustainable, resilient, and human-centred data economies and positive global innovation. Here, we turn towards Human-Data Interaction (HDI), an interdisciplinary branch of research, inspired by HCI, that brings together diverse siloed perspectives to present three holistic response principles: data legibility, negotiability, and agency. But the emergent nature of this field calls for refinement of these theoretical tenets to help them translate into practical and tangible responses embedded in the technologies we create. We propose this workshop as a foundational step towards this agenda by opening these principles to the CHI community, encouraging critique and dialogue about the strengths, weaknesses, value, and opportunities of incorporating HDI into the design and evaluation of technology. The outcomes of this workshop, by engaging with HDI through design, will form the basis for the next stages of research within HDI, contributing to foundational texts within academia and to HDI-infused systems within industry.
Recognizing human emotions and responding appropriately has the potential to radically change the way we interact with technology. However, to train machines to sensibly detect and recognize human emotions, we need valid emotion ground truths. A fundamental challenge here is momentary emotion elicitation and capture (MEEC) from individuals, continuously and in real time, without adversely affecting the user experience or breaching ethical standards. In this virtual half-day CHI 2021 workshop, we will (1) hold participant talks and an inspirational keynote presentation; (2) ideate elicitation, sensing, and annotation techniques; and (3) create mappings of when to apply each elicitation method.
With growing understanding of the negative social and environmental impacts of computing technologies and increasingly urgent calls to mitigate these impacts, the sector now faces thorny questions around whether and how to govern computing technologies. This workshop aims to bring together researchers and practitioners across a wide range of disciplines to explore critical perspectives on, and solutions to, anticipatory governance in the computing sector. We will draw on participants' diverse expertise to develop a practical and ethical governance roadmap that attends to the computing sector's responsibility to mitigate its own contribution to the climate emergency. Having developed strategies within this specific context, we will then produce a set of governance principles that could be useful in mitigating other harms resulting from computing, notably those pertaining to efforts around responsible AI, data protection, and mis/disinformation.
Eye movement recording has been used extensively in HCI and offers the possibility of understanding how information is perceived and processed by users. Hardware developments are making eye tracking ubiquitously accessible, allowing eye movements to enter common usage as a control modality. Recent AI developments provide powerful computational means to make predictions about the user. However, the connection between eye movements and cognitive state has been largely under-exploited in HCI. Despite the rich literature in psychology, a deeper understanding of its usability in practice is still required. This virtual EMICS workshop will provide an opportunity to discuss possible application scenarios and HCI interfaces that infer users' mental state from eye movements. It will bring together researchers across disciplines with the goal of expanding shared knowledge, discussing innovative research directions and methods, and fostering future collaborations around the use of eye movements as an interface to cognitive state.
The emerging possibilities of multisensory interaction provide an exciting space for disability research, opening up opportunities for new experiences of perceiving one's own body and its interactions with the environment, and for exploring the environment itself. In addition, dynamic aspects of living with disability, such as life transitions (including ageing), psychological distress, long-term conditions such as chronic pain, and new conditions such as long COVID, further affect people's abilities. Interactions with this diversity of embodiments can be enriched, empowered, and augmented by using multisensory and cross-sensory modalities to create more inclusive technologies and experiences. To this end, this workshop will examine three related sub-domains: immersive multi-sensory experiences, embodied experiences, and disability interactions and design. The aim is to better understand how we can re-think the senses in technology design for disability interactions and for the dynamic self, constructed through continuously changing sensing capabilities, whether because of changing ability or because of empowering technology. This workshop will: (i) bring together HCI researchers from different areas, (ii) discuss tools, frameworks, and methods, and (iii) form a multidisciplinary community to build synergies for further collaboration.
We propose a workshop on methods and theories for dealing with complex dynamical systems, and their application in HCI. Such methods are increasingly relevant across a wide range of disciplines that focus on human behaviour, applied to understand the role of context and interactions in the behaviour of individuals and groups, and how they unfold over time. Traditional approaches to quantifying and modelling behaviour in HCI have tended to focus primarily on individuals and components. Complexity methods shift the focus onto interactions between components, and onto the emergence of behaviour from complex networks of interactions, as in, for example, enactivist approaches to cognitive science. While we believe that complexity methods can be highly informative for HCI researchers, uptake in the community remains low due to widespread unfamiliarity. This one-day workshop will introduce, support, and encourage the development and adoption of complexity methods within HCI. Reflecting the multidisciplinary mix within complexity science, we will draw on examples of complexity-oriented theories and methods from a range of disciplines, including control theory, social science, and cognitive science. Attendees will engage in group discussions and a Q&A with a panel, and a discussion group will be set up ahead of time to encourage exploratory conversations. In this way, diverse backgrounds can be brought together, matched, and inform one another.
In this workshop, we will explore the emergent methodological space of social media based HCI design and research. We will gather scholars and practitioners from different areas within HCI to discuss how social media platforms might support their practice. Through short presentations, open discussions, and design-led activities, we will examine the affordances of existing social media platforms and speculate future developments in this methodological space. The outcome of the workshop will be an interactive data visualization of existing social media platforms, their main characteristics, and their affordances for HCI design and research. Overall, we will begin to characterize the methodological space of social media based HCI design and research, setting the foundation for future developments in this space.
HCI and social science experimentation that explores or uses extended reality (XR) has been particularly impacted by the recent COVID-19 pandemic. This is due to the typical deployment of XR experiments inside laboratories, and a paucity of research into how to conduct remote XR experimentation effectively. This first CHI Remote XR workshop aims to explore the current state of the art around three main themes of remote XR experimentation: (i) participant recruitment and screening; (ii) data collection, including the limitations and affordances of existing research and XR tools; and (iii) software frameworks and requirements for the effective design of encapsulated remote XR user studies. The workshop brings together researchers and practitioners in XR to explore these recently emerged themes and to imagine how effective future remote XR research might be conducted.
The HCI Education Community of Practice (CoP) has grown considerably over the past few years, starting with the HCI Living Curriculum workshop at CHI 2018 and continuing through to the EduCHI symposia at both CHI 2019 and CHI 2020. Central to the growth of the CoP have been two parallel efforts: creating channels to discuss issues pertinent to HCI education and providing a platform for sharing HCI curricula and teaching experiences. To continue this progress, we are organizing EduCHI 2021, the 3rd Annual Symposium on HCI Education. EduCHI 2021 will be held virtually and will feature interactive discussions about HCI education trends, curricula, pedagogies, teaching practices, and the current and future challenges facing HCI educators.
While many systems have successfully demonstrated the functional integration of humans and technology, little attention has been paid to how technologies might be experientially integrated so that they feel like part of us. Our aim is to shed light on the importance of experiential integration and to provide researchers with a scientifically grounded foundation for future designs and investigations. The workshop will consist of hands-on experiments with novel body illusions, discussions on experiential integration, and instructor-guided sessions on psychological concepts relevant to the design and evaluation of experiential integration.
Approximately 15% of the world's population has a disability, and 80% of them live in low-resource settings, often in situations of severe social isolation. Technology is often inaccessible or inappropriately designed, and hence unable to fully respond to the needs of people with disabilities living in low-resource settings. A lack of awareness of technology also contributes to limited access. This workshop will be a call to arms for researchers in HCI to engage with people with disabilities in low-resource settings in order to understand their needs and design technology that is both accessible and culturally appropriate. We will achieve this through the sharing of research experiences and the exploration of challenges encountered when planning HCI4D studies featuring participants with disabilities. Drawing on the contributions of all attendees, we will build a roadmap to support researchers aiming to leverage post-colonial and participatory approaches for the development of accessible and empowering technology with truly global ambitions.
Human-computer interaction has entered a third, globally-connected era. Visions that drove research and development in the first era were realized in the second, with CHI a key player. The third presents opportunities and a need for creativity to address challenges. HCI has drawn on computer science, human factors, information systems, and information science. It relies on design and interacts with AI. Students, practitioners, and academics can gain an understanding of forces that have guided the interaction of related disciplines, constraints imposed by human nature, trajectories we are following, and opportunities and issues that will engage us in the years ahead.
Writing research papers can be extremely challenging, whether for new academic authors or for those new to a scientific community with its own review and style guidelines, such as CHI. The impact of everything we do as researchers rests on how we communicate it. Writing for CHI is a core skill to learn because it is hard to turn a research project into a successful CHI publication. This online edition of the successful CHI paper-writing course offers hands-on advice and more in-depth tutorials on how to write papers with clarity, substance, and style. It is structured into four online units with a focus on writing CHI papers.
As Artificial Intelligence technologies are increasingly used to make important decisions and perform autonomous tasks, providing explanations that allow users and stakeholders to understand the AI has become a ubiquitous concern. Recently, a number of open-source toolkits have made the growing collection of Explainable AI (XAI) techniques accessible, allowing researchers and practitioners to incorporate explanation features in AI systems. This course is open to anyone interested in implementing, designing, or researching XAI, and aims to provide an overview of the technical and design methods for XAI, as well as hands-on experience with an XAI toolkit.
Today, AI is used in many high-stakes decision-making applications in which fairness is an important concern. Already, there are many examples of AI being biased and making questionable and unfair decisions. Recently, the AI research community has proposed many methods to measure and mitigate unwanted biases, and has developed open-source toolkits that help developers build fair AI. This course will cover recent developments in algorithmic fairness, including the many different definitions of fairness, their corresponding quantitative measurements, and ways to mitigate biases. The course is open to beginners and is designed for anyone interested in the topic of AI fairness.
Recent advancements in artificial intelligence (AI) create new opportunities for implementing a wide range of intelligent user interfaces. Speech-based interfaces, chatbots, visual recognition of users and objects, recommender systems, and adaptive user interfaces are examples that have matured over the last 10 years thanks to new approaches in machine learning (ML). Modern ML techniques outperform previous approaches in many domains and enable new applications. Today, it is possible to run models efficiently on a variety of devices, including PCs, smartphones, and embedded systems. Leveraging the potential of artificial intelligence and combining it with human-computer interaction approaches allows the development of intelligent user interfaces that support users better than ever before. This course introduces participants to relevant terms and concepts in AI and ML. Using examples and application scenarios, we show practically how intelligent user interfaces can be designed and implemented. In particular, we look at how to create optimized keyboards, use natural language processing for text- and speech-based interaction, and implement a recommender system for movies. Thus, this course aims to introduce participants to a set of machine learning tools that will enable them to build their own intelligent user interfaces. The course includes video-based lectures that introduce concepts and algorithms, supported by practical and interactive exercises using Python notebooks.
HCI research has long been dedicated to facilitating better and more natural information transfer between humans and machines. Unfortunately, humans' most natural form of communication, speech, is also one of the most difficult modalities for machines to understand – despite, and perhaps because, it is the highest-bandwidth communication channel we possess. While significant research efforts, from engineering, to linguistics, and to the cognitive sciences, have been spent on improving machines' ability to understand speech, the CHI community (and the HCI field at large) has only recently started embracing this modality as a central focus of research. This can be attributed in part to the unexpected variations in error rates when processing speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing and especially evaluating speech and natural language interfaces. As such, the development of interactive speech-based systems is mostly driven by engineering efforts to improve such systems with respect to largely arbitrary performance metrics. These developments have often been devoid of any user-centered design principles or consideration for usability or usefulness in the ways that graphical user interfaces have benefited from heuristic design guidelines.
The goal of this course is to inform the CHI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn how speech recognition and speech synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms. Through this, we hope that HCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centered principles to design more usable and useful speech-based interactive systems.
While leading technology companies have created “digital wellbeing” initiatives in response to public concern over psychological impacts, these largely focus on changing human behavior (e.g., via self-tracking and ‘screen time’ restriction) rather than on changing technology. If respect for human wellbeing is to become a genuine priority with measurable impact, then technology will need to change too. Designers are in a position to lead this change by integrating wellbeing psychology into design practice. This course will equip technology-makers with research-based knowledge and skills for practicing “wellbeing supportive design”, and for improving UX across the board by supporting psychological health.
This course introduces participants to rapid prototyping for augmented, virtual, and mixed reality. Participants will learn about physical prototyping with paper and Play-Doh and digital prototyping via visual authoring tools. After an overview of the XR prototyping process and tools, participants will complete two hands-on sessions. A combination of paper-based XR design templates and easy-to-use digital authoring tools will be used to create working interactive prototypes that can be run on XR devices. The course is targeted at non-technical audiences including HCI practitioners, user experience researchers, and interaction design professionals and students interested in XR design.
The combination of the Internet of Things and Artificial Intelligence has made it possible to introduce numerous automations into our daily environments. Many interesting new possibilities and opportunities have been enabled, but there are also risks and problems. Often these problems originate from approaches that have not sufficiently considered the users’ viewpoint. We need to empower people to actually understand the automations in their surrounding environments, modify them, and create new ones, even if they have no programming knowledge. The course discusses these problems and some possible solutions for giving people the ability to control and create their daily automations.
UI design rules and guidelines are not simple recipes. Applying them effectively requires determining rule applicability and precedence and balancing trade-offs when rules compete. By understanding the underlying psychology, designers and evaluators enhance their ability to apply design rules. This two-part (160-minute) course explains that psychology.
The population of the developed world is aging. Most websites, apps, and digital devices are used by adults aged 50+ as well as by younger adults, so they should be designed accordingly. This one-part course, based on the presenter's recent book, presents age-related factors that affect older adults’ ability to use digital technology, as well as design guidelines that reflect older adults’ varied capabilities, usage patterns, and preferences.
Computational design is one of the hot topics in HCI and related research fields, in which various design problems are formulated in mathematical language and solved by computational techniques. Through this paradigm, researchers aim to establish highly sophisticated or efficient design processes that could not otherwise be achieved. Target domains include graphics, personal fabrication, user interfaces, etc. This course introduces fundamental concepts in computational design and provides an overview of recent trends. It then goes into a more specific case in which human assessment is necessary to evaluate the quality of design outcomes, as is often true in HCI scenarios. This course is recommended for HCI students and researchers who are new to this topic.
While increasing numbers of HCI designers and researchers build tools for health and wellbeing, few have a background in human anatomy and physiology. Inbodied Interaction 101 provides a fundamental orientation to human anatomy and physiology, specifically framed for HCI designers, researchers, and engineers, that students will be able to use immediately to inform their own designs. The models offer practical orientation for any designer interested in developing tools to support human performance, health, and wellbeing – physically, socially, and cognitively. It will help anyone whose work touches the human body – from mixed reality to behaviour mediation – all of which is mediated by the state of the body. Students will learn the fundamental physiological parameters within our anatomy that support these states. Students will also test these approaches in class, to see how they can be used for both evaluation and design of innovative, inbodied-aligned designs.
In Inbodied Interaction 101, we considered the physiology and anatomy of the body via three associated interactions that reflect an inbodied state: 1. inbodied adaptation in response to the in5 and C4 over time and context, in order to maintain 2. homeostasis via 3. metabolism. We called this adaptation process “tuning.” In 102 we build on this foundation to consider the physiology of tuning. In particular, we will look at a series of inbodied interactions: the neuro-endocrine system's interaction with the organ systems that cue adaptive responses, from genetic signals to fat metabolism; and the interactions of the autonomic nervous system and the limbic system that affect volitional/non-volitional interaction. We will introduce the components of the brainstem, basal nuclei, and cerebellum that support interoception around self-tuning. Within this framing, we will look at the strengths and limits of non-invasive measures of these processes (e.g., HRV, EEG, blood oxygen saturation, qualitative responses). Outcomes will include familiarity with how we function as inbodied complex systems, with worked examples of how the physiology of tuning can be translated into interactive designs to support health, wellbeing, and performance in new ways.
The objective of this CHI course is to provide CHI attendees with an introduction and overview of the rapidly evolving field of automotive user interfaces (AutomotiveUI). The course will focus on UI aspects in the transition towards automated driving. In particular, we will also discuss the opportunities of cars as a new space for non-driving-related activities, such as work, relaxation, and play. For newcomers and experts from other HCI fields, we will present the special properties of this field of HCI and provide an overview of new opportunities, as well as general design and evaluation aspects of novel automotive user interfaces.
Many researchers and practitioners find statistics confusing. This course aims to help change that: to give attendees an understanding of the meaning of the various statistics they see in papers or need to use in their own work. The course builds on the instructor’s previous tutorials and master classes, including at CHI 2017, and on his recently published book “Statistics for HCI: Making Sense of Quantitative Data”. The course will focus especially on material you will not find in a conventional textbook or statistics course, including aspects of statistical ‘craft’ skill, and it will also offer attendees an introduction to some of the instructor’s extensive additional online material.
Many research contributions in human-computer interaction are based on user studies in the lab. However, lab studies are not always possible, and they may come with significant challenges and limitations. In this course, we take a broader look at different approaches to doing research. We present a set of evaluation methods and research contributions that do not rely on user studies in labs. The discussion focuses on research approaches, data collection methods, and tools that can be used without direct interaction between the researchers and the participants.
We are witnessing an increase in fieldwork within HCI, particularly involving marginalized or under-represented populations. This has posed ethical challenges for researchers during such field studies, with "ethical traps" not always identified during planning stages. This is often aggravated by inconsistent policy guidelines, training, and application of ethical principles. We ground this in our collective experiences with ethically-difficult research, and frame it within principles that are shared across many disciplines and policy guidelines – representative of the instructors’ diverse and international backgrounds.
Child-Computer Interaction (CCI) is concerned with the research, design, and evaluation of interactive technologies for children. Whilst many aspects of general HCI can be applied to this field, there are important adaptations to be made when conducting work for and with children throughout all stages of the design cycle. This course overviews the main tools and techniques in use by the CCI community, presented alongside examples and experiences from academia and industry. The course is hands-on and provides highly useful checklists and tips to ensure children (and researchers and developers) get the most out of participation in HCI activities.
Sketching is a universal activity but an often-overlooked skill, yet it can benefit researchers and practitioners in HCI: sketching has proven to be a valuable addition to skill-sets in academic and industrial contexts. Many individuals lack the confidence to take up sketching after years of non-practice, but it is possible to re-learn, improve, and apply this skill in practical ways. This course is a sketching journey, from scribbles and playful interpretations, to helpful theory, storytelling, and practical applications. Attending individuals will learn techniques and applied methods for utilizing sketching within the context of HCI, guided by experienced instructors.
It is assumed that, to appreciate a knowledge contribution in research-through-design, we all agree on what the act of designing is and what it should deliver in research. However, even a glance at contributions in an HCI context shows this is far from the case. The course is based on the book Drifting by Intention – Four Epistemic Traditions in Constructive Design Research, authored by the instructors. It unpacks different ways of knowing in practice-based design and provides operational models and hands-on exercises, applied to participants' cases, to help plan and articulate the contribution of design in each participant's individual research project.
This course takes both a practical and theoretical approach to introduce the principles, methods, and tools for automation. Examples are taken from industries that have been embedding automation in their systems for a long time (such as aviation, automotive, and satellite ground segments), but the main focus is on designing and assessing automation for interactive systems. Interactive hands-on exercises on how to "do it right" will be provided, answering questions such as: How to add automation in interaction techniques to improve efficiency? How to migrate tasks towards automation to improve effectiveness? How to design usable automation at the system, application, and interaction levels? Is more automation always better? If not, when do we stop automating? And more.
We are witnessing the work of user experience (UX) designers expanding beyond single digital products towards designing customer journeys through several service touchpoints and channels. Greater understanding of the service design approach and the interplay between service design and UX design is needed by UX researchers and practitioners in order to address this challenge. This course provides a theoretical introduction to service design and practical activities that help attendees understand the principles of service design and apply key methods within their work. It is targeted at UX design practitioners, teachers, and researchers, and those interested in systemic approaches to design.
In March 2007, a forum entitled HCI 2020: Human Values in a Digital Age was held in Sanlúcar la Mayor, Spain, just outside Seville. Its purpose was to gather luminaries in computing, design, the social sciences, and scientific philosophy to discuss, debate, and help formulate an agenda for human-computer interaction (HCI) over the next decade and beyond. This resulted in a detailed report, released in April 2008, in the form of a book called Being Human: Human-Computer Interaction in the Year 2020, authored by 45 members of the wider HCI community.
In this panel, we will build from four core questions. How successfully did the HCI 2020 forum and report recognize trends and shape HCI? What major trends or issues did they fail to anticipate? How valuable were the forum and the report to the HCI community, to the participants, and to the sponsoring organizations? And finally, what does this history suggest about both the process and the ultimate value – to HCI, to computing in general, and to the world – of creating an HCI 2035 vision?
Classes involving physical making were severely disrupted by COVID-19. As workshops, makerspaces, and fab labs shut down in Spring 2020, instructors developed new models for teaching physical prototyping, electronics production, and digital fabrication at a distance. Instructors shipped materials and equipment directly to students, converted makerspaces to job-shops, and substituted low-tech construction methods and hobbyist equipment for industrial tools. The experiences of students and instructors during the pandemic highlighted new learning opportunities when making outside the makerspace. Simultaneously, the shutdown raised new questions on the limits of remote learning for digital fabrication, electronics, and manual craft. This panel brings together experts in making to discuss their experiences teaching physical production in art, design, and engineering during the pandemic. Panelists will discuss their teaching strategies, describe what worked and what did not, and argue for how we can best support students learning hands-on skills going forward.
During the past year we've all spent many hours on videoconference calls, sometimes more than was comfortable. While CHI might not have anticipated a virus-driven surge in videoconferencing, online meetings have been a topic of CHI research for the past 25 years. This is a good time to assess how well our research has matched what this natural experiment is telling us. What did we get right? And what did the field get wrong? The panel, comprised of people who directly witnessed much of this history, will reflect on these questions. We don't expect everyone to agree with each panelist's conclusions, and we will invite reactions and contributions from the audience as well.
This panel will provoke the audience into reflecting on the dark side of interaction design. It will ask what role the HCI community has played in the inception and rise of digital addiction, digital persuasion, data exploitation and dark patterns and what to do about this state of affairs. The panelists will present their views about what we have unleashed. They will examine how ‘stickiness’ came about and how we might give users control over their data that is sucked up in this process. Finally, they will be asked to consider the merits and prospects of an alternative agenda, that pushes for interaction design to be fairer, more ethically-grounded and more transparent, while at the same time addressing head-on the dark side of interaction design.
The COVID-19 pandemic has unfolded alongside a concurrent ‘infodemic’ – defined by the World Health Organization as an overabundance of information, some accurate, some not, that occurs during an epidemic. Key to managing this is not only identifying, countering, and debunking misinformation, but also providing unbiased and factually correct information and signposting people towards it. However, during COVID-19, the ‘truth’ has not always been clear. It has not always been easy to prepare public health messaging that is consistent, easily understood, or practical for everyone to apply. This presents unique challenges, to which social media platforms need to be part of the solution. One such solution can be found on www.reddit.com where, in January 2020, a group of research scientists, students, academics, and medics came together to create and moderate forums in which the pandemic could be discussed and questions about it answered. These forums provide case studies of how information can be generated, misinformation corrected, and disinformation debunked on subreddits with, combined, more than 3 million subscribers.
Artificial Intelligence (AI) can refer to machine learning algorithms and to the automation applications built on top of these algorithms. Human-computer interaction (HCI) researchers have studied these AI applications and suggested various Human-Centered AI (HCAI) principles for an explainable, safe, reliable, and trustworthy interaction experience. While some designers believe that computers should be supertools and active appliances, others believe that these latest AI systems can be collaborators. With today’s AI algorithm breakthroughs, in this panel we ask whether the supertool or the collaborator metaphor best supports work and play. How can we design AI systems to work best with people or for people? What does it take to get there? This panel will bring together panelists with diverse backgrounds to engage the audience through discussion of their shared or diverging visions for the future of human-AI interaction design.
Microbe-HCI is a community whose work incorporates micro-organisms into HCI. This special interest group is a venue for the first gathering of the community, offering an opportunity for networking and structured discussions. It encourages participation from researchers both active in and new to microbe-HCI, with the objective of building an overview of the people, themes, trends, and prospective research pathways for the community.
Accessibility, diversity, and inclusion are key concerns for the CHI community. In 2019, accessibility was one of the top keywords among conference publications. Despite this research focus, many scholars with disabilities still struggle to access SIGCHI events and activities. At this SIG at CHI 2021, Access-SIGCHI, along with AccessComputing and SIGACCESS, will host an open discussion about the state of accessibility within SIGCHI and discuss opportunities for organizers and volunteers to improve accessibility. The event is open to all, but we are particularly excited to bring together people with disabilities and organizers from across SIGCHI to collaboratively develop a plan for increasing access across all SIGCHI events.
Over the past several years artificial intelligence (AI) techniques have gained a considerable presence in immersive media. From assistance with real-time digital production to emerging novel forms of creative expression, applications of AI are becoming ubiquitous in the fields of human-computer interaction (HCI), computer graphics, and media art. As we are interested in how novel computational techniques are shaping the state of creativity in immersive and interactive technologies, we organize this special interest group (SIG) to stimulate a discussion among AI, HCI, immersive media, and art communities. The goal of this SIG includes outlining existing and emerging areas of cross-disciplinary collaborations, proposing a roadmap of future goals and challenges for creative immersive AI research, and establishing a diverse group of researchers and practitioners involved in creative applications of AI in immersive and interactive media.
A lot of academic and industrial HCI work has focused on making interactions easier and less effortful. As the potential risks of optimising for effortlessness have crystallised in systems designed to take advantage of the way human attention and cognition work, academic researchers and industrial practitioners have wondered whether increasing the ‘friction’ in interactions, making them more effortful, might make sense in some contexts. The goal of this special interest group is to provide a forum for researchers and practitioners to discuss and advance the theoretical underpinnings of designed friction and its relation to other design paradigms, and to identify the domains and interaction flows that frictions might best suit. During the SIG, attendees will attempt to prioritise a set of research questions about frictions in HCI.
As Queer Human-Computer Interaction (HCI) becomes an established part of the larger field, both in terms of research on and with queer populations and in terms of employing queering theories and methods, the role of queer researchers has become a timely topic of discussion. However, these discussions have largely centered around member-researcher status and positionality when working with queer populations. Based on insights gathered at multiple ACM events over the past two years, we identified two pressing issues: (1) we need to better support queer people doing HCI research not specific to queer populations, and (2) we need to identify how to best support member-researchers in leading Queer HCI while including collaborators beyond the queer community. This Special Interest Group (SIG) aims to directly address these challenges by convening a broad community of queer researchers and allies, working not only on explicitly-queer topics but across a broad range of HCI topics.
Rapidly gaining in mainstream appeal, esports constitute a phenomenon at the intersection of different Human-Computer Interaction (HCI) perspectives—relating to games user research, multifaceted aspects of game design for performance and entertainment, design and support of social or interpersonal interaction, inclusion vs. toxicity in online communities, visualization of esports data, subdomains like physical esports, as well as connections with education and health contexts. This special interest group (SIG) will provide a space for HCI researchers and practitioners to connect and discuss themes at the intersection of HCI and esports. It will serve as a starting point for mapping the esports design and research landscape in order to identify and pursue opportunities for research, to increase awareness for collaboration in this domain within the HCI community, to share experiences and knowledge, and to establish a community to shape the future of esports.
Interactive computing systems are able to receive, as inputs, activity generated by the user’s physiology (e.g., skin conductance, heart rate, brain potentials, and so forth). Besides health-related applications, this type of physiological sensing enables systems to infer users’ states (e.g., task engagement, anxiety, workload, and so forth). More recently, a number of techniques emerged that can also stimulate physiological activity (e.g., electrical muscle stimulation, galvanic vestibular stimulation, transcranial stimulation). These can serve as outputs of an interactive system to induce desired behavior in the user. Taken together, we envision systems that will close the loop between physiological input and output—interactive systems able to read and influence the user’s body. To realize this, we propose a Special Interest Group on Physiological I/O that will consolidate successful practices and identify research challenges to address as a community.
Visualization grammars, often based on the Grammar of Graphics, are popular choices for specifying expressive visualizations and supporting visualization systems. However, there are still open questions about grammar design and evaluation not well-answered in visualization research. In this SIG, we propose to discuss what makes a grammar “good” and explore evaluation methodologies best suited for visualization grammars.
Human-Computer Integration (HInt) is a growing paradigm within HCI that seeks to understand how humans can merge, and already are merging, with computational machines. HInt’s recent inception and evolution have seen much discussion in a variety of symposia, workshops, and publications in HCI. This has enabled a democratized and decentralised emergence of its core concepts. While this has allowed for rapid growth in our understanding of HInt, there is some discrepancy in how the proponents of this movement describe its principles, motivations, definitions, and ultimate goals, with many offshoot concepts of HInt beginning to emerge. SIGHint aims to provide a platform to facilitate high-level discussion and collation of information among researchers and designers seeking to learn from and contribute to the development of Human-Computer Integration. It is our intention that, through this SIG, we may better understand how new, emerging, and diverging ideas and perspectives within Human-Computer Integration relate to each other, ultimately facilitating a mapping of the paradigm and a synthesis of its concepts.
Sketching is a physical activity: moving a stylus to create marks on paper or screen, from mind to visual output. But sketching can also translate to the virtual space. When we sketch collaboratively, we look for cues, exchange ideas, and annotate work via mark-making or comment. The digital medium has evolved to explore the potentials of sketching online, and this Special Interest Group aims to bring together researchers and practitioners interested in Sketching in HCI to explore the new virtual landscape of sketching, popularised by the constraints of the current world situation. We invite you to join our virtual group, discuss and share sketches, query the existing state-of-the-art, and help pave the way for the development of this medium in the virtual space with your imagery and ideation.
JetController is a novel haptic technology capable of supporting high-speed and persistent 3-DoF ungrounded force feedback. It uses high-speed pneumatic solenoid valves to modulate compressed air, achieving full impulses at 20-50 Hz and 4.0-1.0 N, and combines multiple air propulsion jets to generate 3-DoF force feedback. Compared to propeller-based approaches, JetController is more than 10 times faster in impulse frequency, and its handheld device is significantly lighter and more compact. JetController supports a wide range of haptic events in games and VR experiences, from firing automatic weapons in games like Halo to cutting fruit in Fruit Ninja. To examine the differences between JetController and vibrotactile feedback approaches, we designed a new prototype based on the PS5 controller, DualSense. Our demonstration includes three haptic approaches: JetController, Adaptive Trigger, and regular vibration feedback. We provide two scenarios: driving with tactile (surface texture) feedback, and shooting with recoil feedback.
Flower jellies, a delicate dessert in which a flower-shaped jelly floats inside another clear jelly, fascinate people with both their beauty and elaborate construction. In efforts to simplify the challenging fabrication and enrich the design space of this dessert, we present Flower Jelly Printer: a printing device and design software for digitally fabricating flower jellies. Our design software lets users play with parameters and preview the resulting forms until achieving their desired shapes. We also developed slit injection printing that directly injects colored jelly into a base jelly, and shared several design examples to show the breadth of design possibilities. We hope to enable more people to design and create their own flower jellies while expanding access and the design space for digitally fabricated foods.
With the growing popularity of online access on virtual reality (VR) devices, it will become important to investigate dedicated, interactive CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) designs for VR devices. In this paper, we first present four traditional two-dimensional (2D) CAPTCHAs (i.e., text-based, image-rotated, image-puzzled, and image-selected CAPTCHAs) in VR. Then, based on the three-dimensional (3D) interaction characteristics of VR devices, we propose two vrCAPTCHA design prototypes (i.e., task-driven and bodily motion-based CAPTCHAs). We conducted a user study with six participants to explore the feasibility of our two vrCAPTCHAs and traditional CAPTCHAs in VR. We believe that our two vrCAPTCHAs can serve as inspiration for the further design of CAPTCHAs in VR.
This demonstration presents ShowMeAround, a video conferencing system designed to allow people to give virtual tours over live 360-video. Using ShowMeAround, a host presenter walks through a real space and live-streams a 360-video view to a small group of remote viewers. The ShowMeAround interface has features such as remote pointing and viewpoint awareness to support natural collaboration between the viewers and the host presenter. The system also enables sharing of pre-recorded high-resolution 360-video and still images to further enhance the virtual tour experience.
Mastering the guitar requires regular exercise to develop new skills and maintain existing abilities. We present Let’s Frets, a modular guitar support system that provides visual guidance through LEDs integrated into a capacitive fretboard to support the practice of chords, scales, melodies, and exercises. Additional feedback is provided through a 3D-printed fretboard that senses finger positions through capacitive sensing. We envision Let’s Frets as an integrated guitar support system that raises guitarists’ awareness of their playing styles and training progress, supports the composition of new pieces, and facilitates remote collaboration between teachers and guitar students. This interactivity demonstrates Let’s Frets with an augmented fretboard and supporting software running on a mobile device.
We present XRTeleBridge (XRTB), an application that integrates a Mixed Reality (MR) interface into existing teleconferencing solutions such as Zoom. Unlike a conventional webcam, XRTB provides a window into a virtual world for demonstrating and visualizing content. Participants can join via webcam or via a head-mounted display (HMD) in a Virtual Reality (VR) environment. XRTB enables users to embody 3D avatars with natural gestures and eye gaze. A camera in the virtual environment serves as the video feed to the teleconferencing software, while an interface resembling a tablet mirrors the teleconferencing window inside the virtual environment, enabling the participant in VR to see the webcam participants in real-time. This allows the presenter to view and interact with other participants seamlessly. To demonstrate the system’s functionality, we created a virtual chemistry lab environment and presented an example lesson using the virtual space and virtual objects and effects.
We demonstrate a new type of haptic actuator, which we call MagnetIO, that comprises two parts: one battery-powered voice coil worn on the user's fingernail and any number of interactive soft patches that can be attached onto any surface (everyday objects, the user's body, appliances, etc.). When the finger wearing our voice coil contacts any of the interactive patches, the system detects the patch's magnetic signature via a magnetometer and vibrates the patch, adding haptic feedback to otherwise input-only interactions. To allow these passive patches to vibrate, we make them from silicone with regions doped with polarized neodymium powder, resulting in soft and stretchable magnets. This stretchable form factor allows them to be wrapped around the user's body or everyday objects of various shapes. We demonstrate how these add haptic output to many situations, such as adding haptic buttons to the walls of one's home.
Consumer electronics are increasingly using traditional materials to allow technology to better blend into everyday environments. Specifically, transmissive materials enable emissive displays to disappear when turned off, and appear when turned on. However, covering displays with textile meshes, veneer, one-way mirrors, or translucent plastics greatly limits the display brightness of typical graphical displays.
In this work, we leverage a parallel rendering technique to enable ultrabright graphics that can pass through transmissive materials. While previous work has shown interactive hidden displays, our approach unlocks expressive interfaces with practical end-to-end software and low-cost hardware implementation on mass-produced passive OLED displays.
We developed a set of interactive prototypes with touch-sensing that can blend into traditional aesthetics due to the ability to provide user interfaces through wood, textile, plastic and mirrored surfaces.
We demonstrate Assembler3, a software tool that allows users to perform 3D parametric manipulations on 2D laser cutting plans. Assembler3 achieves this by semi-automatically converting 2D laser cutting plans to 3D, where users modify their models using available 3D tools (kyub), before converting them back to 2D. In our user study, this workflow allowed users to modify models 10x faster than the traditional approach of editing 2D cutting plans directly. Assembler3 converts models to 3D in 5 steps: (1) plate detection, (2) joint detection, (3) material thickness detection, (4) joint matching based on hashed joint "signatures", and (5) interactive reconstruction. In our technical evaluation, Assembler3 was able to reconstruct 100 of 105 models. Once 3D-reconstructed, we expect users to store and share their models in 3D, which can simplify collaboration and thereby empower the laser cutting community to create models of higher complexity.
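The joint-matching step can be illustrated with a minimal sketch. The abstract does not specify what goes into a joint "signature", so the features below (ordered finger widths and material thickness, rounded to a tolerance) and all function names are assumptions for illustration only: edges whose signatures hash identically become candidate mates.

```python
import hashlib

def joint_signature(finger_widths, thickness):
    """Hypothetical joint 'signature': a hash of the finger-joint
    geometry (ordered finger widths plus material thickness), rounded
    to a tolerance so that mating joints hash identically."""
    key = ",".join(f"{round(w, 1)}" for w in finger_widths) + f"|{round(thickness, 1)}"
    return hashlib.sha1(key.encode()).hexdigest()[:12]

def match_joints(edges):
    """Bucket edges by signature; buckets with 2+ members are
    candidate pairs for the same physical joint."""
    buckets = {}
    for edge_id, widths, thickness in edges:
        buckets.setdefault(joint_signature(widths, thickness), []).append(edge_id)
    return [ids for ids in buckets.values() if len(ids) >= 2]

edges = [
    ("plateA_e1", [4.0, 4.0, 4.0], 3.0),
    ("plateB_e3", [4.0, 4.0, 4.0], 3.0),
    ("plateC_e2", [6.0, 6.0], 3.0),
]
print(match_joints(edges))  # [['plateA_e1', 'plateB_e3']]
```

Hashing reduces matching from a pairwise geometric comparison to a dictionary lookup, which is one plausible way such a pipeline reaches interactive speed.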
In smart office buildings, almost every aspect of the environment can be assessed and adjusted by sensors. Yet employees rarely have access to the collected data to act upon it, and it is unclear what feedback they would find meaningful enough to follow recommendations on healthy working conditions and behavior when productivity is the priority. The Office Agents are a set of artefacts placed on the employee's desk that capture data about the office environment. Air quality, sound level, light exposure, productivity, and physical activity level are measured to provide office workers with feedback on the ‘best’ working conditions. Using speculative design and Objects with Intent, the employee engages in a negotiation with the Office Agents based on the office ecosystem. Through this project and interactivity session, we open a debate on the use of sensors in office environments and the stakes around office vitality from the viewpoint of the employees.
We demonstrate a novel approach to bring quick touch interaction on surfaces to Virtual Reality, which is a challenge for current camera-based VR headsets that support free-hand mid-air interaction or physical hand-held controllers for input. In our approach, we use our wrist-worn prototype TapID to complement the optical hand tracking from VR headsets with inertial sensing to detect touch events on surfaces, which establishes the same interaction modality that is present on today’s phones and tablets. Each TapID band integrates a pair of inertial sensors in a flexible strap, from whose signals TapID reliably detects surface touch events and identifies the finger used for touch. This event detection is then fused with the optically tracked hand poses to trigger input in VR. Our demonstration comprises a series of VR applications, including UI control in word processors, web browsers, and document editors. These applications showcase the beneficial use of rapid tapping, typing, and surface gestures in Virtual Reality.
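The fusion step in TapID can be sketched in a few lines. The actual event detector is learned from the band's two IMUs; the threshold detector, function names, and data layout below are illustrative assumptions only. The idea is that the inertial signal supplies the *when* and *which finger*, while the headset's optical tracking supplies the *where*:

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    finger: str
    x: float
    y: float

def detect_tap(accel_samples, threshold=3.0):
    """Hypothetical tap detector: a sharp acceleration spike on the
    wrist-worn IMU signals a surface touch. (TapID's real classifier
    is trained on the paired inertial signals; this is only a sketch.)"""
    return max(abs(a) for a in accel_samples) > threshold

def fuse(tap_detected, predicted_finger, hand_pose):
    """Fuse the inertial tap event with the optically tracked hand
    pose to produce a positioned touch event, or None if no tap."""
    if not tap_detected:
        return None
    x, y = hand_pose[predicted_finger]  # tracked fingertip position on the surface
    return TouchEvent(predicted_finger, x, y)

pose = {"index": (0.12, 0.34), "middle": (0.15, 0.33)}
event = fuse(detect_tap([0.1, 4.2, 0.3]), "index", pose)
print(event)  # TouchEvent(finger='index', x=0.12, y=0.34)
```

Splitting responsibilities this way sidesteps the headset camera's limited frame rate: the IMU timestamps the contact, and the slower optical pose only has to be roughly right at that instant.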
Otaku is a Japanese term commonly associated with fans of Japanese animation, comics, or video games. Otaku culture has grown into a global phenomenon spanning various hobbies and media. Despite its popularity, research efforts to contribute to otaku culture have been modest. Therefore, we present Hatsuki, a humanoid robot designed specifically to embody anime characters. Hatsuki advances the state of the art as it: 1) realizes aesthetics resembling anime characters, 2) implements a 2D anime-like facial expression system, and 3) realizes anime-style behaviors and interactions. We explain Hatsuki's design specifics and its interaction domains as an autonomous robot and as a teleoperated humanoid avatar. We discuss our efforts under each interaction domain, followed by its potential deployment venues and applications. We highlight opportunities for interplay between otaku culture and interactive systems, potentially enabling highly desirable interactions and familiar system designs for users exposed to otaku culture.
In this study, we propose PicPop, a pop-up picture book system that uses mid-air images to make digital images pop out from a flat picture book. The user does not need to wear a special device to have an interactive experience with pop-up picture books that employ the proposed system. A reflective type of paper turns the book into a display surface onto which mid-air images are projected. Furthermore, we reduced the overall volume of the system compared to the conventional reflective mid-air image-display method, while retaining the desired luminance of the mid-air images.
Voice control provides hands-free access to computing, but there are many situations where audible speech is not appropriate. Most unvoiced speech text entry systems cannot be used while on the go due to movement artifacts. SilentSpeller enables mobile silent texting using a dental retainer with capacitive touch sensors to track tongue movement. Users type by spelling words without voicing. In offline isolated-word testing on a 1164-word dictionary, SilentSpeller achieves an average 97% character accuracy; the same offline accuracy holds on phrases recorded while walking or seated. Live text entry achieves up to 53 words per minute and 90% accuracy, which is competitive with expert text entry on mini-QWERTY keyboards without encumbering the hands.
This demo presents FlexTruss, a design and construction pipeline based on the assembly of modularized truss-like objects fabricated with conventional 3D printers and assembled by threading. To create an end-to-end system, a parametric design tool with an optimal Euler path calculation method is developed, which supports both inverse and forward design workflows and multi-material construction of modular parts. In addition, the workflow guides the assembly of printed truss modules by threading. Finally, a series of application cases demonstrate the affordances of FlexTruss. We believe that FlexTruss extends the design space of 3D printing beyond typically hard and fixed forms, and that it will provide new capabilities for designers and researchers to explore the use of such flexible truss structures in human-object interaction.
This demonstration showcases the improvements made to Project Esky, an open-source Extended Reality (XR) software framework capable of high-fidelity natural hand interactions with virtual content. In addition to showing our temporal re-projection rendering technique in real time, we showcase several live desktop demonstrations, such as a virtually controlled PowerPoint presentation, an e-learning example with an animated car engine, a fire simulation, and a model painting application. It is our hope that the demonstrations presented will inspire others to work with Project Esky to build their own experiences, and will be a stepping stone towards bringing high-fidelity XR content to researchers and hobbyists.
COVID-19 has dramatically limited opportunities for in-person human-robot interaction research and shifted focus towards remote technologies such as telepresence robots. Telepresence robots enable rich communication and agency through their physical presence and controllability, but their screen-oriented designs and button-centric controls abstract users away from their own physicality. In this demonstration, we present a telepresence system for remotely controlling a social robot using a smartphone’s motion sensors. Users can select between a first-person perspective from the robot’s internal camera or a third-person perspective showing the robot’s whole body. Users can also record their movements for later playback. This system has applications as an embodied remote communication platform and for crowdsourcing demonstrations of user-crafted robot movements.
Tagbly is an acronym for Text Audio Graphics AssemBLY. It is a method and system that synchronizes audio data with image data using an HTML control file. The HTML control file contains references to linked multimedia asset files and timestamps indicating when each of these multimedia assets will be displayed or played. Audiovisual content creators can incorporate interactive content, such as graphic elements (e.g., pictures, 360 images, and 3D models), text-based annotations, clickable links and buttons, and zooming in/out of images. Viewers can interact with these features while the multimedia content is being displayed or played. To the viewer's eye, Tagbly's audiovisual content looks like an “interactive video”.
Technology use in antenatal education is often basic and inadequately presents the emotional and practical challenges of breastfeeding. This work presents Virtual Feed, a Virtual Reality interactive breastfeeding simulation designed with parents and health care professionals to convey key aspects of breastfeeding and to explore the potential and limitations of interactive technology in eliciting intimacy in the parent-child space.
Paper books offer a unique physical feel, which supports the reading experience through enhanced browsing, bookmarking, free-form annotations, memory and reduced eye strain. In contrast, electronic solutions, such as tablets and e-readers, offer interactive links, updatable information, easier content sharing, and efficient collaboration. To combine the best aspects of paper and digital information for reading, we demonstrate two mechanisms for augmenting paper with light sensors that trigger digital links on a nearby smartphone. Light Tags on every page of a book are used in a first demonstration to identify which pages are open. These are replaced with an electronic Magic Bookmark in a second demonstration, avoiding the need to instrument every page.
Metaphorical thinking acts as a bridge between embodiment and abstraction and helps to flexibly organize human knowledge and behavior. Yet its role in embodied human-computer interface design, and its potential for supporting goals such as self-awareness and well-being, have not been extensively explored in the HCI community. We have designed a system called MetaVR to support the creation and exploration of immersive, multimodal metaphoric experiences, in which people’s bodily actions in the physical world are linked to metaphorically relevant actions in a virtual reality world.
As a team of researchers in interaction, neuroscience, and linguistics, we have created MetaVR to support research exploring the impact of such metaphoric interactions on human emotion and well-being. We have used MetaVR to create a proof-of-concept interface for immersive, spatial interactions underpinned by the WELL-BEING is VERTICALITY conceptual mapping—the known association of ‘good’=‘up’ and ‘bad’=‘down’. Researchers and developers can currently interact with this proof of concept to configure various metaphoric interactions or personifications that have positive associations (e.g., ‘being like a butterfly’ or ‘being like a flower’) and also involve vertical motion (e.g., a butterfly might fly upwards, or a flower might bloom upwards). Importantly, the metaphoric interactions supported in MetaVR do not link human movement to VR actions in one-to-one ways, but rather use abstracted relational mappings in which events in VR (e.g., the blooming of a virtual flower) are contingent not merely on a “correct” gesture being performed, but on aspects of verticality exhibited in human movement (e.g., in a very simple case, the time a person’s hands spend above some height threshold).
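The flower-blooming example above can be reduced to a minimal sketch of such an abstracted relational mapping. The function name, sample rate, threshold, and bloom duration below are illustrative assumptions, not MetaVR's actual parameters: bloom progress accumulates with the time the hands spend above a height threshold, rather than being gated on any single "correct" gesture.

```python
def bloom_progress(hand_heights, threshold=1.2, dt=1/60, full_bloom_s=3.0):
    """Sketch of a verticality mapping: the virtual flower's bloom
    advances with the cumulative time (in seconds) the user's hands
    spend above `threshold` metres, sampled at 1/dt Hz, and saturates
    at 1.0 after `full_bloom_s` seconds of upward-held movement."""
    time_above = sum(dt for h in hand_heights if h > threshold)
    return min(time_above / full_bloom_s, 1.0)

# 90 frames above the threshold at 60 Hz = 1.5 s -> halfway bloomed
print(bloom_progress([1.5] * 90 + [1.0] * 90))
```

Because the mapping depends on an aggregate property of the movement (time spent high) rather than a gesture template, many different bodily performances can drive the same virtual event, which is the point of the relational design.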
This work thus serves as a small-scale vehicle for us to research how such interactions may impact well-being. Relatedly, it highlights the potential of using virtual embodied interaction as a tool to study cognitive processes involved in more deliberate/functional uses of metaphor and how this relates to emotion processing. By demonstrating MetaVR and metaphoric interactions designed with it at CHI Interactivity, and by offering the MetaVR tool to other researchers, we hope to inspire new perspectives, discussion, and research within the HCI community about the role that such metaphoric interaction may play, in interfaces designed for well-being and beyond.
The Co-Drive interactive experience at CHI2021 gives conference attendees the possibility to experience social virtual travelling by car, either as the driver or the remote passenger. Through the deployment of two prototypes in two different parts of the world, Co-Drive trips will be available in two different locations, one of which is crowdsourced among prospective CHI attendees. After experiencing the Co-Drive trip, participants will be able to share their experience collectively through a virtual meeting held during the conference, and subsequently individually through a phone call or video/audio recording from their car to the main author.
We present HoloBoard, a next-generation teaching board based on large-format, semi-transparent, interactive display technology that supports immersive teaching and learning. It can present immersive media curriculum materials including large-format videos, interactive 3D objects, AR effects, etc. Based on a multi-stage mixed-methods teacher-centered design thinking approach and a series of user studies with primary school students aged 6-9, the concept design of HoloBoard was defined and refined. The results showed that HoloBoard can be deployed in the wild and bring more engagement and interactivity to the primary school classroom, promoting students’ spontaneous exploration and active learning.
This demonstration shows eyemR-Vis, a 360 panoramic Mixed Reality collaboration system that shares dynamic gaze behavioural cues as bi-directional spatial visualisations between a local host (AR) and a remote collaborator (VR). This enables richer communication of gaze through four visualisation techniques: browse, focus, mutual-gaze, and fixated circle-map. Additionally, our system supports simple bi-directional avatar interaction as well as panoramic video zoom. This makes interaction in the normally constrained remote task space more flexible and relatively natural. By showing visual communication cues that are physically inaccessible in the remote task space through reallocating and visualising the existing ones, our system aims to provide a more engaging and effective remote collaboration experience.
Soft sensors made of deformable materials, capable of sensing touches or gestures, have attracted considerable attention for use in tangible interfaces and soft robotics. However, to achieve multimodal gesture detection with soft sensors, prior studies have combined multiple sensors or utilized complex configurations with multiple wires. To achieve multimodal gesture sensing with a simpler configuration, we propose foamin, a novel soft sensor consisting of conductive foam with a single wire that utilizes impedance measurement at multiple frequencies. Additionally, a surface-shielding method was designed to improve the sensor's detection performance. Several patterns of foamin were implemented to investigate detection accuracy, and three application scenarios based on the sensor are proposed.
The development of additive manufacturing (AM) makes it possible to manufacture models with infill microstructures, and studies of microstructures that can alter material properties have emerged. However, existing microstructure designs have many limitations for makers. Drawing inspiration from minimal surface structures, we demo FlexCube, a quick and low-cost method to design and manufacture elastic structures with the flexibility of controlling mechanical properties in multiple directions. We also demo an interface that helps users create FlexCube models.
We propose a novel modality for active biometric authentication: electrical muscle stimulation (EMS). To explore this, we engineered an interactive system, which we call ElectricAuth, that stimulates the user’s forearm muscles with a sequence of electrical impulses (i.e., EMS challenge) and measures the user’s involuntary finger movements (i.e., response to the challenge). ElectricAuth leverages EMS’s intersubject variability, where the same electrical stimulation results in different movements in different users because everybody’s physiology is unique (e.g., differences in bone and muscular structure, skin resistance and composition, etc.). As such, ElectricAuth allows users to login without memorizing passwords or PINs. ElectricAuth’s challenge-response structure makes it secure against data breaches and replay attacks, a major vulnerability facing today’s biometrics such as facial recognition and fingerprints. Furthermore, ElectricAuth never reuses the same challenge twice in authentications – in just one second of stimulation it encodes one of 68M possible challenges.
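The challenge-response flow can be illustrated with a small sketch. This is not the authors' implementation: the class, its names, the template representation (a map from challenge index to an expected response vector), and the distance-based verification are all assumptions for illustration. The two properties the abstract emphasizes are kept: challenges are drawn from a large space and never reused.

```python
import secrets

class ElectricAuthSketch:
    """Illustrative challenge-response authentication flow. A
    'challenge' stands in for one of ~68M stimulation patterns; the
    enrolled 'template' maps challenges to the user's expected
    involuntary finger response, measured at enrollment."""

    def __init__(self, template, tolerance=0.1):
        self.template = template      # challenge -> expected response vector
        self.used = set()             # challenges are never reused
        self.tolerance = tolerance

    def issue_challenge(self):
        """Randomly pick an enrolled challenge that has not been
        issued before, preventing replay of a recorded response."""
        fresh = [c for c in self.template if c not in self.used]
        challenge = secrets.choice(fresh)
        self.used.add(challenge)
        return challenge

    def verify(self, challenge, measured):
        """Accept if the measured involuntary response is within
        tolerance of the enrolled response for this challenge."""
        expected = self.template[challenge]
        return max(abs(a - b) for a, b in zip(expected, measured)) <= self.tolerance
```

Because each login consumes a fresh challenge, a captured (challenge, response) pair is useless to an attacker, which is the structural advantage over static biometrics like fingerprints.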
Writing formulas in LaTeX can be difficult, especially for complex formulas. MathDeck simplifies LaTeX formula entry by: 1) allowing rendered formulas to be edited directly alongside their associated LaTeX strings, 2) helping build formulas from smaller ones, and 3) providing searchable formula cards with associated names and descriptions. Cards are searchable by formula and title.
Mid-air interfaces enable rich 3D interactions and are inherently hygienic, but the technology is not yet ready for real-world public interfaces such as elevators, ATMs, and kiosks – one barrier to their adoption is the limited work on the accessibility of mid-air systems. To this end, we designed an interactive simulation of a contactless elevator with mid-air touch feedback and comprehensive accessibility considerations. Despite being fully contactless, the controls are tactile and closely resemble the mental model of ordinary elevator buttons. Moreover, our demo features 1) mid-air haptics for conveying Braille information, 2) responsive button magnification to assist people with low vision, 3) intuitive gestures for opening or closing doors, and 4) audio feedback. Because many of the ideas and interactions in this demo can apply to other mid-air public interfaces, we hope to inspire more work on improving the accessibility of mid-air interfaces.
As Virtual Reality (VR) headsets become more mobile, people can interact in public spaces with applications that often require large arm movements. However, such open gestures are often uncomfortable, and sometimes impossible, in confined and public spaces (e.g., commuting on a bus). We present FingerMapper, a mapping technique that maps small, energy-efficient finger motions onto virtual arms, so that users perform smaller physical motions while maintaining presence and partial virtual body ownership. FingerMapper serves as an alternative when the environment does not allow full-arm interaction, enabling users to interact within a small physical but larger virtual space. We present one example application, FingerSaber, that allows the user to perform large arm-swinging movements using FingerMapper.
Pen computing has become popular with tablet and wall-screen computers for digital precision tasks such as writing, annotating, and drawing. Digital pens have been made possible by developments in input sensing technologies integrated into such screens. Virtual Reality systems, however, largely detect input using cameras, whose update rates are insufficient for capturing pen input with the necessary fidelity. In this demonstration, we showcase a digital pen for VR that accurately digitizes writing and drawing, including small and quick turns. Our prototype Flashpen repurposes an optical flow sensor from gaming mice, which digitizes minute motions at over 8 kHz when dragged across a surface.
We demonstrate several use-cases for Flashpen during interaction in VR, including sketching, selecting, annotating, and writing.
The piano keyboard offers a significant range and polyphony for well-trained pianists. Yet, apart from dynamics, the piano is incapable of translating expressive movements such as vibrato onto the played note. Adding sound effects requires additional modalities. A pitch wheel can be found on the side of most electric pianos. To add a vibrato or pitch bend, the pianist needs to actively operate the pitch wheel with their hand, which requires cognitive effort and may disrupt play. In this work, we present EMPiano, a system that allows pianists to incorporate a soft pitch vibrato into their play seamlessly. Vibrato can be triggered through muscle activity and is recognized via electromyography. This allows EMPiano to integrate into piano play. Our system offers new interaction opportunities with the piano to increase the player’s potential for expressive play. In this paper, we contribute the open-source implementation and the workflow behind EMPiano.
We propose a real-time system for synthesizing gestures directly from speech. Our data-driven approach is based on Generative Adversarial Networks to model the speech-gesture relationship. We utilize the large amount of speaker video data available online to train our 3D gesture model. Our model generates speaker-specific gestures by taking consecutive audio input chunks of two seconds in length. We animate the predicted gestures on a virtual avatar, achieving a delay below three seconds between audio input and gesture animation.
With the global COVID-19 pandemic that began in early 2020, it has become difficult to hold workshops that bring people together in one place. Consequently, demand for online events and virtual workshops is increasing so as to minimize the loss of learning opportunities. However, providing an experience of building things remotely is not easy because of the time and cost of preparing materials and equipment, and the difficulties of distance teaching. We therefore designed a kinetic toy kit that can be sent in an envelope, constructed with ease, and operated without any batteries. The kinetic toy, built from magnetic sheets, cardboard, and paper, allows users to design and create “animals” with variable motions. At the two online workshops we held for 15 participants from 3 to 11 years old, all of the children enjoyed building toys, and some participants invented original mechanisms and new animals. In this paper, we describe the details of the toy kit and the online workshops that used it.
We demonstrate a technique that allows an unprecedented level of dexterity in electrical muscle stimulation (EMS), i.e., it allows interactive EMS-based devices to flex the user's fingers independently of each other. EMS is a promising technique for force feedback because of its small form factor compared to mechanical actuators. However, the current EMS approach to flexing the user's fingers (i.e., attaching electrodes to the base of the forearm, where finger muscles anchor) cannot flex one finger alone at the metacarpophalangeal (MCP) joint; it always induces unwanted actuation of adjacent fingers. To tackle this lack of dexterity, we propose a new electrode layout that places the electrodes on the back of the hand, where they stimulate the interossei/lumbrical muscles in the palm, which have never received attention with regard to EMS. We demonstrate the improved dexterity with a series of EMS-assisted music applications for playing piano, drums, and guitar.
Post-plant is a plant-like robot which communicates nonverbally through physical movements. Until now, robots have in most cases communicated with us by mimicking human speech and human/animal expressions and gestures. Post-plant takes a radically different approach by assuming a form inspired by plants; responding to touch instead of language, it nonverbally conveys simple emotions and information feedback. With post-plant as a starting point, robots of the future will communicate with us in their own way, without having to mimic human behavior.
From providing nutrition to being social platforms, food plays an essential role in our daily lives and cultures. In this work, we are interested in using food as an interaction medium and a context of personal fabrication with an expanded design space enabled by support bath-assisted printing. The bath scaffolds the embedded materials while preserving shapes during the printing processes and allows us to create freeform food with fluid-like materials. Coupled with different post-processing and cooking methods, this technique grants the versatility of food printing with fluidic materials. We will demo confectionery arts and dishes designed by our software tool.
We present a mixed-reality system for remote collaborations, where collaborators can discuss, explore, create and learn about 3D physical objects. The system combines Hololens augmented reality, 3D Kinect cameras, PC and virtual reality interfaces, into a virtual space that hosts remote collaborators and physical and virtual objects.
When talented people have access to fabrication tools and expertise, incredible inventions can be manifested to solve local and global problems. Because of this, the “maker movement” is a growing phenomenon, manifested through the increased number of maker spaces in many affluent communities. However, many talented individuals lack access to resources in their local communities, and collaboration opportunities with remote experts are wasted due to the limitations of current teleconferencing systems. We present a mixed-reality (MR) system for enabling remote collaborations in the context of maker activities, which allows groups of students and instructors to discuss, explore, create, and learn about physical objects. The system combines augmented reality (AR) headsets, 3D cameras, PC and virtual reality (VR) interfaces, into a virtual space that contains multiple remote students, instructors, and physical and virtual objects. Remote students can see a real-time 3D scan of the on-site user's physical environment and the virtual avatars of other students. The system can support learning and exploration by showing virtual overlays on real objects (e.g., showing a physical robot's sensor data or internal circuitry) while responding to real-time manipulation of physical objects by the on-site user; it can also support design activities by allowing remote and local participants to annotate physical objects with virtual drawings and virtual models. This platform is being developed as an open-source project, and we are currently building applications with the intention to deploy them in hybrid makerspace classrooms involving on-site and remote students.
We propose a nail-mounted foldable haptic device that provides tactile feedback to mixed reality (MR) environments by pressing against the user's fingerpad when a user touches a virtual object. What is novel in our device is that it quickly tucks away when the user interacts with real-world objects. Its design allows it to fold back on top of the user's nail when not in use, keeping the user's fingerpad free to, for instance, manipulate handheld tools and other objects while in MR. To achieve this, we engineered a wireless and self-contained haptic device, which measures 24×24×41 mm and weighs 9.5 g. Furthermore, our foldable end-effector also features a linear resonant actuator, allowing it to render not only touch contacts (i.e., pressure) but also textures (i.e., vibrations). We demonstrate how our device renders contacts with MR surfaces, buttons, low- and high-frequency textures.
We demonstrate two trigeminal-based interfaces. The first provides a temperature illusion that uses low-powered electronics and enables the miniaturization of simple warm and cool sensations. Our illusion relies on the properties of certain scents, such as the coolness of mint or hotness of peppers. These odors trigger not only the olfactory bulb, but also the nose's trigeminal nerve, which has receptors that respond to both temperature and chemicals. The second is a novel type of olfactory device that creates a stereo-smell experience, i.e., directional information about the location of an odor, by rendering the readings of external odor sensors as trigeminal sensations using electrical stimulation of the user's nasal septum. We propose that electrically stimulating the trigeminal nerve is an ideal candidate for stereo-smell rendering. We demonstrate these interfaces by allowing an audience to stimulate an author and receive an explanation of the sensations.
Gaze in the context of interactive systems plays a vital role in Human-Computer Interaction (HCI). Gaze-aware systems provide support for users with physical disabilities, are employed as a game input device, or serve as an input form in public settings. The installation introduced in this work, called “Blind Spot”, reflects these developments by using gaze in an art context: it partially obscures the user’s field of view (i.e., the current gaze position). In doing so, the installation illustrates the potentials and characteristics of eye-tracking technology. It aims at fostering discussions about new design possibilities for gaze-based interactions in the arts and academia. Furthermore, it makes spectators aware of how visual information is processed by humans (i.e., only a small fraction with high visual acuity is perceived at a given moment). Finally, it addresses the human desire to retrieve a complete picture of a given scenery, revealing this desire by limiting the spectator’s perception capabilities.
We demonstrate fastForce, a software tool that detects structural flaws in laser cut 3D models and fixes them by introducing additional plates into the model, thereby making models up to 52x stronger. By focusing on a specific type of structural issue, i.e., poorly connected sub-structures in closed box structures, fastForce achieves real-time performance. This allows fastForce to fix structural issues continuously in the background, while users stay focused on editing their models and without ever becoming aware of any structural issues.
In our study, six of seven participants inadvertently introduced severe structural flaws into the guitar stands they designed. Similarly, we found 286 of 402 relevant models in the kyub model library to contain such flaws. We integrated fastForce into kyub, a 3D editor for laser cutting, and found that fastForce achieves real-time performance even with high plate counts.
Sketchable Interaction (SI) describes a concept and environment in which end-users create regions by drawing on a canvas. These regions apply effects to each other on collision. Attributes of regions, e.g., position, can be linked so that they change together once one is modified, e.g., moved on the canvas. Within Sketchable Interaction, all entities (mouse pointer, desktop icons, windows) are implemented as interactive regions. End-users customize this environment by drawing new regions that apply certain actions, e.g., tagging files, deleting other regions, or automating processes.
Stymphalian Birds is a research project which explores the influence of feathers as sensors in our environment and on our bodies. In this research, complex haptic interactions with feathers are sonified in acoustic soundscapes using conductive hybrid feathers at the crossroads of electronics, Haute Couture and natural dyeing. The wearers of dresses made of Stymphalian Birds experience the societal impacts of sensing beyond the human body with feathers in a society where social distancing guidelines must be followed.
In the field of sports, visually impaired spectators are unfortunately at a disadvantage in understanding the precise developments of a game. Play-by-play announcements during live coverage would be the best way to provide such information; however, this is infeasible, as it is difficult to employ a professional commentator for every game. Therefore, this study proposes a blind football play-by-play system, combining a tactile graphic display function and a position acquisition function, to help visually impaired spectators understand game developments. As a basic study, we developed an experimental system that detects the positions of the players and ball using image processing and deep learning, and presents these positions on a refreshable tactile display. The validity of the proposed system was then established through subjective experiments. The system provides a rich user experience for spectators as well as a powerful feedback tool for players. Moreover, an audio–tactile graphic system, which can help overcome tactile cognitive limitations, can be implemented as a future application of the system.
The use of multi-rotor drones has grown exponentially as a consumer product and in the commercial sector. The inescapable reality is that drones will become a ubiquitous part of society. One major obstacle to the mainstream acceptance of drones is the public perception that drones are dangerous or a safety hazard. This paper presents an investigation into the human factors surrounding potential drone collisions. The study included twenty participants who underwent a controlled drone collision exposure and a post-exposure interview. The exposure used a novel experimental setup that safely simulates drone-to-human collisions. We found that all participants identified the drone’s propellers as their primary concern, with the propellers’ sound being the most threatening. Based on participant feedback, we identify concerns about drones’ unregulated aspects and outline common participant recommendations on drone regulations.
We present TexelBlocks, modular blocks that dynamically change their surfaces to support rich physical interactions. The core component of each block is a motorized crawler belt with a set of material patches attached to it, allowing each TexelBlock to change its surface independently and be concatenated into flat surfaces of various sizes. We implemented a working system that controls 24 TexelBlocks to form 6 × 4 dynamic surfaces. With the working prototype, we developed five applications, namely context-aware surface adaptation, real-time tactile feedback, immersive storytelling with dynamic backgrounds, a real-time embodied platform game, and a surface-changing board game, to demonstrate the rich physical interactions of dynamic surfaces.
Understanding situation-based emotions is of great significance for children, as it helps them understand other people's behaviors in certain emotional scenarios and how their own behaviors and actions can affect others. However, children with Autism Spectrum Disorders (ASDs) might have difficulty associating situations with emotions. In this paper, we present the design of DailyConnect, a visual mobile application that supports children with ASDs in recalling memories by reviewing photos of their personal experiences, and in being taught by their teachers to understand situation-based emotions. The system provides two user modes, for parents and teachers respectively. Parents can upload daily-life photos together with descriptions of the contexts and the child's behaviors. Teachers can guide the children to recall what happened by reviewing the uploaded photos, with the system adopting the Discrete Trial Training (DTT) teaching technique to help the children understand situation-based emotions. Through recalling the children's real-life experiences, the system enables teachers and parents to help children build associations between emotions and situations step by step, covering the understanding of contexts, emotional cues, emotion recognition, facial expression recognition, and learning expected responses. Moreover, with long-term use the system can provide individual feedback for targeted training: by analyzing each child's recorded data (with consent), the system can generate reports on which step of understanding situation-based emotions needs attention for that child.
Smart energy management systems incorporate advanced sensing and control technologies that enable users to monitor and reduce their energy consumption through interactive visualisations and automated control features. Plug load management systems (PLMS), in particular, are applications of such systems targeting electrical devices found in homes and workplaces. While past studies have mostly focused on PLMS adoption in homes, past literature indicates several key differences in user motivation as office users typically do not bear the cost of their consumption. This reduces their motivation to embrace such systems, resulting in low adoption rates. In our research, we examined user perception of adopting PLMS in the workplace through a series of focus group discussions and an online survey guided by findings from the focus group discussions. By analysing the quantitative and qualitative responses from 101 participants, we identified six design implications to guide the development of future PLMS in the workplace.
Knowing where users might click in advance can potentially improve the efficiency of user interaction in desktop user interfaces. In this paper, we propose a machine learning approach to predict mouse click location. Our model, which is LSTM (long short-term memory)-based and trained with joint supervision, predicts the rectangular region of a mouse click as mouse trajectories are fed in on the fly. Experimental results show that our model can predict a rectangular region of 58 × 79 pixels with 92% accuracy, and reduces prediction error compared with other state-of-the-art prediction methods on a multi-user dataset.
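To make the task concrete, the sketch below shows the input/output shape of click-region prediction: a mouse trajectory goes in, a fixed-size rectangle comes out. This is emphatically not the paper's LSTM model; it is a naive linear-extrapolation baseline using the 58 × 79 pixel region size reported in the abstract, with all function and parameter names hypothetical.

```python
# Naive baseline for the click-region prediction task: extrapolate the
# pointer's last movement vector and return a 58 x 79 pixel rectangle
# (the region size reported in the abstract) centered on that point.
# NOT the paper's LSTM model -- a sketch of the task's input/output only.

def predict_click_region(trajectory, width=58, height=79):
    """trajectory: list of (x, y) mouse positions, oldest first.
    Returns (left, top, right, bottom) of the predicted rectangle."""
    if len(trajectory) < 2:
        x, y = trajectory[-1]
    else:
        (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
        # Linear extrapolation: assume the pointer keeps its last velocity.
        x, y = 2 * x1 - x0, 2 * y1 - y0
    return (x - width // 2, y - height // 2,
            x + width - width // 2, y + height - height // 2)

# Example: a pointer moving right and slightly down.
path = [(100, 100), (120, 105), (140, 110)]
print(predict_click_region(path))  # rectangle centered on (160, 115)
```

A learned model such as the paper's would replace the extrapolation step with a sequence model over many past positions, but the calling contract stays the same.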
In this paper, we propose FieldSweep, a tracking method on a plane using only permanent magnets and a smartphone. The method estimates the position of a magnetic sensor on a plane using the magnetic fields of permanent magnets placed in an appropriate pattern. The magnetic sensor is the one built into a smartphone. Since the planar side consists only of magnets and a plate that fixes them, no power supply or electronic components are required. In this paper, we report the necessary conditions for tracking, present the implemented prototype, and discuss possible future developments and applications.
In the conventional pipeline, the training dataset of a human activity recognition system is built by detecting regions of significant signal variation. Such position-specific classifiers give users little flexibility to alter the sensor positions. In this paper, we propose employing a simulated sensor to generate the corresponding signals from human motion animations as the dataset. By visualizing the corresponding items from the real world, users can place the sensor arbitrarily and obtain accuracy feedback as well as the classifier interface, relieving them of the cost of conventional model training. Our case validations show that a classifier trained on simulated sensor data can effectively recognize real-world activities.
Thermal sensation systems are embedded into head-mounted displays using Peltier devices, water, or chemical substances to enhance the sense of presence in virtual reality environments. We propose a new method of presenting thermal sensation to the forehead by using electrical stimulation. This is based on our finding that when electrical stimulation is applied to the forehead, thermal sensation occurs in rare cases. We conducted an evaluation experiment and found that cathodic current pulses frequently provide a cold sensation, and the sensation is highly correlated with pressure sensation.
This paper presents a technique for 3D printing firm inflatables with consumer-grade fused-deposition modeling (FDM) 3D printers and flexible filaments. By printing bridges inside the inflatable to tie its walls, internal tethers can retain the shape of the surfaces when inflated. This internal structure gives extra stiffness to the inflatables while keeping them lightweight and portable; the inflatables can be squished down to reduce the volume and inflated back to a sturdy state. Compared to conventional drop-stitch fabrics, the length of internal tethers can be easily varied owing to 3D printing, allowing us to fabricate angled surfaces as well as parallel surfaces. We evaluate the physical properties of the 3D-printed inflatables with internal tethers made with diverse printing parameters. Finally, we demonstrate the feasibility of our technique in custom inflatable design with example applications.
Artificial Intelligence (AI) will provide novel User Experience (UX) solutions if UX designers understand how AI can be best utilized. They need to understand AI capabilities and envisage potential applications. To aid the ideation processes, design heuristics for AI are needed to support UX designers in the conceptual design stage. Forty design heuristics were extracted from 1,755 granted AI patents through a four-step process. The feasibility of the heuristics was verified with two AI-powered case studies: a smart canteen and an online smart shopping system. Case studies suggest that AI design heuristics can be used as design stimuli in the early conceptual design phase to support practitioners in exploring a larger design space for the generation of AI-powered ideas.
During the COVID-19 pandemic, Chinese social media platform Weibo created a special super-topic for online users to seek help through posts or provide help through comments. Prior work has analyzed online help-seeking and help-providing messages during crisis situations, however, it is not clear why and how people commented on help-seeking posts to provide support, especially in such a crisis support community. In this study, we interviewed 23 commenters in the COVID-19 super-topic to examine their motivations and strategies for commenting. Our findings showed that geographical proximity and expertise level affected users’ commenting behaviors. For example, non-locals or people lacking professional knowledge applied a “copy-and-paste” strategy and acted as a bridge between help-seekers and influencers on social media, while locals and experts focused on detailed information and provided targeted help. Some commenters also exhibited avoidance behaviors due to their sense of powerlessness and frustration in such a long-lasting pandemic. We discuss the implications of our findings for designing social media platforms to better support people in need during crisis situations.
Social robots have been used to provide services in many public spaces. However, many of them are autonomous robots that have poor natural language conversation skills. It is considered a difficult task for an autonomous robot to be a salesperson. In this paper, we propose the use of a teleoperated robot with a human operator for sales purposes. We conducted an exploratory field experiment in which we built a sales booth in a shopping mall and used our teleoperated robot as the salesperson to promote the sales of toothbrushes. We found that our robot has the ability to attract people’s attention and influence their shopping behavior. Furthermore, we discuss the research challenges in designing and applying teleoperated robots to real sales situations.
The Persona, as a 2D poster, is a commonly created and used tool in user-centred design activities. Whilst popular, many in HCI have critiqued its depictions of ‘the user’ as reductive, shallow, and static. Yet, there are few alternatives. In this late-breaking work, we present our efforts to imagine and create an alternative to the 2D persona, one that consists of a carefully curated, staged collection of artifacts. We call this an Experiential Persona because it allows designers to interact with and explore the artifacts, individually and as a collection, to imagine and experience the world of ‘the user’. This more embodied, interactive, and open-ended persona can potentially support richer sense-making, encouraging a more open, emergent, and unfinalized view of the people we design for. This study contributes to extending design tools, exploring novel uses of tangible artifacts to support design, and representing design knowledge.
Video-based learning and digital note-taking are widely adopted by online learners. To investigate how to better support self-paced online learners’ note-taking and learning activities, we presented and evaluated NoteCoStruct, a digital note-taking and note-sharing tool for video-based learning that scaffolds learners through a structured note-taking process and displays learning traces left by peers who previously watched the same videos. In an online laboratory study involving 20 participants, we found that NoteCoStruct significantly reduced the perceived distraction caused by note-taking, fostered a sense of learning community, and better engaged learners cognitively and emotionally in video-based learning. We discuss our findings and outline possible future work around digital note-taking and note-sharing tools.
In this paper we report the results of our early explorations regarding Ninja Codes, a new class of visual codes intended to be used in a variety of interactive applications including augmented reality, motion/gesture control, contactless data transfer, robotics, etc. By harnessing the power of adversarial examples, Ninja Codes can be rendered discreet, concealed to human eyes but easily recognizable to detectors based on deep neural networks. The paper will provide a high-level overview of Ninja Codes, and describe an initial, proof-of-concept implementation built on top of existing face detection software. We see this work as a promising step toward a new family of methods by which digital information can be seamlessly encoded into real-world objects and environments.
Touch plays an essential role in communicating emotions and intensifying interpersonal communication. Much research focuses on creating or improving haptic interfaces, looking into the challenges and possibilities that haptic technology offers. The objective of this research is to investigate whether people can share subjective feelings through simple vibrotactile feedback. In an initial experiment, we used the TECHTILE toolkit to record 28 vibration sample sets for 4 different emotions (joy, anger, sadness, relaxation). We then replayed the vibrations to test how well they could be recognized. The results support the hypothesis that people can use vibration feedback as a medium for expressing specific subjective feelings. They also indicate some universality in affective vibrotactile stimuli: even strangers with little to no knowledge of the senders could recognize the emotional meanings.
In this study, we propose a method to remotely change the friction of a three-dimensional surface made of polystyrene foam. A previous study proposed a system that projects an image onto a two-dimensional polystyrene foam surface and irradiates it with ultrasound according to the position of a finger in contact with the surface, thereby changing the friction. In this study, we extended this approach to three-dimensional shapes and quantitatively evaluated the change in friction. We confirmed that the friction change could be obtained even in three dimensions, but that it was less likely to occur depending on how the polystyrene foam was fabricated. This technology can be applied to add tactile feel to a simple 3D mockup model made of polystyrene foam.
Sleep plays an integral role in human health and is vitally important for neurological development in infants. In this study, we propose the PneuMat, an interactive shape-changing system integrating sensors and pneumatic drives to help ensure sleep safety through novel human-computer interaction. The system comprises sensor units, control units, and inflatable units. The sensor units mediate information exchange between infants and the system, detecting the infant's sleeping posture and sending raw data to the control units. For a better sleep experience, the inflatable units are divided into nine areas that can be inflated independently or together. We aim to ensure sleep safety by keeping infants in a safe sleeping position while in bed, autonomously actuating the PneuMat's shape-changing capability. In this article, we describe the division of the PneuMat, the design of the control unit, the integration of the sensors, and our preliminary experiments evaluating the feasibility of our interaction system. Finally, based on the results, we discuss future work involving the PneuMat.
While commercial videogames are increasingly recognized as able to facilitate meaningful experiences, little research has examined their potential as a medium to help players cope with the loss of a loved one. In this study, we aimed to investigate players’ bereavement processes while playing the commercial death-themed game Spiritfarer. Through a thematic analysis of qualitative in-depth interviews with 6 participants, we found that players’ grieving experiences closely resembled the Dual Process Model of Coping with Bereavement by Stroebe and Schut. In the game, players oscillated between facing loss-orientation stressors (‘Character resemblance to the deceased’, ‘Sending characters away’) and restoration-orientation stressors (‘Taking care of remaining spirits’). Through this process, the participants coped with and sometimes reappraised their loss. We further found that players’ bereavement experiences and level of engagement varied greatly depending on their ‘prior loss experience’, ‘gaming environment’, and ‘tendency to focus on self or game’. These differences were partly accommodated by the game through its complex and diverse characters and engaging game elements. We conclude with insights for future work on game design for bereavement support.
As reading on mobile devices is becoming more ubiquitous, content is consumed in shorter intervals and is punctuated by frequent interruptions. In this work, we explore the best way to mitigate the effects of reading interruptions on longer text passages. Our hypothesis is that short summaries of either previously read content (reviews) or upcoming content (previews) will help the reader re-engage with the reading task. Our target use case is for students who study using electronic textbooks and who are frequently mobile. We present a series of pilot studies that examine the benefits of different types of summaries and their locations, with respect to variations in text content and participant cohorts. We find that users prefer reviews after an interruption, but that previews shown after interruptions have a larger positive influence on comprehension. Our work is a first step towards smart reading applications that proactively provide text summaries to mitigate interruptions on the go.
Video accessibility is crucial for blind and low vision users for equitable engagements in education, employment, and entertainment. Despite the availability of professional description services and tools for amateur description, most human-generated descriptions are expensive and time consuming, and the rate of human-generated descriptions simply cannot match the speed of video production. To overcome the increasing gaps in video accessibility, we developed a system to automatically generate descriptions for videos and answer blind and low vision users’ queries on the videos. Results from a pilot study with eight blind video aficionados indicate the promise of this system for meeting needs for immediate access to videos and validate our efforts in developing tools in partnership with the individuals we aim to benefit. Though the results must be interpreted with caution due to the small sample size, participants overall reported high levels of satisfaction with the system, and all preferred use of the system over no support at all.
Within computing education, accessibility topics are usually taught in Human Computer Interaction and Web Design courses. Few have included accessibility in programming courses as an add-on topic. We studied assignments that infuse accessibility into programming topics without impacting the core computing learning objectives. We present two examples, assignments that can be used in Introductory Programming and Object Oriented Programming courses. Both assignments cover accessibility in addition to the primary computing topic taught. We included the two assignments in two courses for two semesters, conducting surveys and interviews to understand the impact of the assignments on students’ learning of accessibility and computing. Our findings show this approach has potential to satisfy accessibility and programming learning objectives without overwhelming the students, though more work is needed to make sure that students are clear on the relationship between the assignments and technical accessibility knowledge.
Mobile health (mHealth) apps can support users’ behavioral changes towards healthier habits (e.g., increasing activity) through goal setting, self-monitoring, and notifications. In particular, mHealth app notifications can aid in behavioral change through increasing user app engagement and adherence to health objectives. Previous studies have established empirically-derived notification design recommendations; however, prior work has shown that few mHealth apps are grounded in advised health behavior theories. Therefore, we wanted to examine if there was also a gap between recommendations and practice for mHealth notifications. We surveyed 50 mHealth apps and found a disconnect in several areas (e.g., tailoring, interactivity). Our findings show that mHealth apps can be improved to further support users’ health goals. We discuss open research questions in the context of mHealth notifications.
This paper presents our vision of on-the-wall tangible interaction. We envision a future where tangible interaction can be extended from conventional horizontal surfaces to vertical surfaces; indoor vertical areas such as walls, windows, and ceilings can be used for dynamic and direct physical manipulation. We first discuss the unique properties that vertical surfaces may offer for tangible interaction and the interaction scenarios they imbue. We then propose two potential paths for realizing on-the-wall interaction and the technical challenges we face. We follow with one prototype called Climbot. We showcase how Climbot can be used as an on-the-wall tangible user interface for dynamic lighting and as a wall switch controller. We conclude with a discussion of future work.
This paper describes a modular framework developed to facilitate design space exploration of cross-device Augmented Reality (AR) interfaces that combine an AR head-mounted display (HMD) with a smartphone. There is growing interest in how AR HMDs can be used with smartphones to improve the user’s AR experience. In this work, we describe a framework that enables rapid prototyping and evaluation of such interfaces. Our system supports different modes of interaction, content placement, and simulated AR HMD fields of view, allowing researchers to assess which combination is best suited to a task and to derive design recommendations. We provide examples of how the framework could be used to create sample applications, the types of studies it could support, and example results from a simple pilot study.
We propose a study on the effects of live streaming on an educational game’s learning outcomes. The COVID-19 pandemic has strengthened the call for interactive online learning experiences. There is a growing body of literature examining learning on platforms such as Twitch, and studies have shown that enhancing in-game performance is possible from viewing a stream. However, little work has explored whether learning from live streaming educational games, where in-game performance relates to educational outcomes outside of the game context, is possible. We share the details of our proposed study, in which an educational game called Angle Jungle will be streamed to participants, and learning gains will be compared to three non-live-streamed conditions. By executing this study, we can understand the benefits and shortcomings of current live streaming interfaces in supporting educational games, paving the way for the design of novel and interactive learning experiences built for live streaming platforms.
The Covid-19 pandemic has led to large-scale lifestyle changes and increased social isolation and stress on a societal level. This has had a unique impact on US “essential workers” (EWs) – who continue working outside their homes to provide critical services, such as hospital and infrastructure employees. We examine the use of Twitter by EWs as a step toward understanding the pandemic’s impact on their mental well-being, as compared to the population as a whole. We found that EWs authored a higher ratio of mental health related tweets during the pandemic than the average user, but authored fewer tweets with Covid-related keywords than average users. Despite this, sentiment analysis showed that, on average, EWs’ tweets yield a more positive sentiment score than average Twitter users, both before and during the pandemic. Based on these initial insights, we highlight our future aims to investigate individual differences in this impact on EWs.
Light painting photos are created by moving light sources in mid-air while taking a long exposure photo. However, it is challenging for novice users to leave accurate light traces without any spatial guidance. Therefore, we present LightPaintAR, a novel interface that leverages augmented reality (AR) traces as a spatial reference to enable precise movement of the light sources. LightPaintAR allows users to draft, edit, and adjust virtual light traces in AR, and move light sources along the AR traces to generate accurate light traces on photos. With LightPaintAR, users can light paint complex patterns with multiple strokes and colors. We evaluate the effectiveness and the usability of our system with a user study and showcase multiple light paintings created by the users. Further, we discuss future improvements of LightPaintAR.
We often have questions about the processes and things we observe in our surroundings, but there is no practical support for exploring these questions. Exploring curiosity can lead to learning new science concepts. We propose a post-event recall and reflection approach to support curiosity-inspired learning in everyday life. This approach involves capturing contextual cues at the moment of curiosity with wearables, and later using them for recall and focused reflection. First, we conducted a preliminary study to explore different cues and their effectiveness in recalling curiosity moments. We then conducted a virtual study to evaluate the amount of exploration with post-event recall and reflection, comparing it with in-situ recall and reflection. Results show a significant increase in questions and reflections made with the post-event approach, providing evidence for better learning outcomes from everyday curiosity.
Activity recognition computer vision algorithms can be used to detect the presence of autism-related behaviors, including what are termed “restricted and repetitive behaviors”, or stimming, by diagnostic instruments. Examples of stimming include hand flapping, spinning, and head banging. One of the most significant bottlenecks for implementing such classifiers is the lack of sufficiently large training sets of human behavior specific to pediatric developmental delays. The data that do exist are usually recorded with a handheld camera which is itself shaky or even moving, posing a challenge for traditional feature representation approaches for activity detection which capture the camera's motion as a feature. To address these issues, we first document the advantages and limitations of current feature representation techniques for activity recognition when applied to head banging detection. We then propose a feature representation consisting exclusively of head pose keypoints. We create a computer vision classifier for detecting head banging in home videos using a time-distributed convolutional neural network (CNN) in which a single CNN extracts features from each frame in the input sequence, and these extracted features are fed as input to a long short-term memory (LSTM) network. On the binary task of predicting head banging and no head banging within videos from the Self Stimulatory Behaviour Dataset (SSBD), we reach a mean F1-score of 90.77% using 3-fold cross validation (with individual fold F1-scores of 83.3%, 89.0%, and 100.0%) when ensuring that no child who appeared in the train set was in the test set for all folds. This work documents a successful process for training a computer vision classifier which can detect a particular human motion pattern with few training examples and even when the camera recording the source clip is unstable. 
The process of engineering useful feature representations by visually inspecting them, as described here, can be a useful practice for designers and developers of interactive systems that detect human motion patterns for use in mobile and ubiquitous interactive systems.
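The pipeline shape described above can be sketched minimally: per-frame head keypoints are reduced to a feature (here just vertical head position), and the resulting sequence is then classified. In this sketch a repetitive-motion heuristic stands in for the paper's time-distributed CNN + LSTM, purely to illustrate the data flow; the function name and threshold are hypothetical.

```python
# Minimal sketch of the sequence-classification pipeline: a per-frame
# feature (vertical head-keypoint position) is extracted, then the
# sequence is classified. A direction-reversal heuristic stands in for
# the paper's CNN + LSTM -- this only illustrates the data flow.

def head_banging_score(head_y_per_frame, min_cycles=3):
    """head_y_per_frame: vertical head-keypoint position per video frame.
    Counts direction reversals of head motion; many reversals in a short
    clip suggest repetitive (stimming-like) movement."""
    reversals = 0
    prev_delta = 0
    for a, b in zip(head_y_per_frame, head_y_per_frame[1:]):
        delta = b - a
        if delta * prev_delta < 0:  # motion changed direction
            reversals += 1
        if delta != 0:
            prev_delta = delta
    return reversals >= 2 * min_cycles  # ~2 reversals per bang cycle

still = [100, 100, 101, 100, 100, 101]              # minor jitter
banging = [100, 120, 100, 120, 100, 120, 100, 120]  # repeated head motion
print(head_banging_score(still), head_banging_score(banging))  # False True
```

Because the feature is relative head position rather than raw pixels, a classifier of this shape is largely insensitive to camera shake, which is the property the abstract highlights.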
Visualizations are now widely adopted across disciplines, providing effective means to understand and communicate data. However, people still frequently create misleading visualizations that distort the underlying data and ultimately misinform the audience. While design guidelines exist, they are currently scattered across different sources and devised by different people, often missing design trade-offs in different contexts and providing inconsistent and conflicting design knowledge to visualization practitioners. Our goal in this work is to investigate the ontology of visualization design guidelines and derive a unified framework for structuring the guidelines. We collected existing guidelines on the web and analyzed them using the grounded theory approach. We describe the current landscape of the available guidelines and propose a structured template for describing visualization design guidelines.
The ubiquity of mobile devices gives rise to mobile applications designed for self-regulating anxiety, yet empirical evidence of the efficacy and safety that these apps provide is lacking. An in-depth understanding of mobile app-based support for anxiety can provide guidelines to improve future designs, including their accessibility and user experience. Our research takes one step toward filling this gap by exploring individual experiences of anxiety through semi-structured interviews with eight participants with varying anxiety experiences. Our findings indicate that mobile apps have the potential to be a supplemental tool for anxiety regulation through user-level customization, and naturalistic and trustworthy communication. We propose a framework that integrates top-down (e.g., Cognitive Behavioral Therapy) and bottom-up (e.g., Body Psychotherapy, Polyvagal Theory) therapy approaches, and Norman’s Three Levels of Emotional Design to contribute to the design of mobile apps for self-regulating anxiety.
In this work we introduce the Thinking Cap, a wearable system designed to resemble the Sorting Hat from Harry Potter, fitted with a commercially available Electroencephalography (EEG) headset and a Bluetooth speaker. The Thinking Cap can inform the user about their brain activity in real-time via a speaker. We designed and conducted a study with 48 elementary and middle schoolers to investigate the influence of a BCI device and perceived magic on the development of growth mindset in children. Our results suggest that interacting with the Thinking Cap has a positive impact on children's mindset, which was expressed through their communicated beliefs and task-based behaviors.
Individuals facing homelessness navigate different moments and situations that bring specific challenges to their lives. A thorough investigation of their ‘journeys’ through such a difficult experience can reveal new opportunities for how technologies can support individuals through homelessness. Based on a qualitative interview study involving nine individuals struggling with homelessness in an Australian city, we aimed to understand and trace critical moments of their struggles through a ‘journey tracking’ exercise. Our findings unpacked challenges and everyday resilient practices surrounding homelessness and the role technology can play in supporting them. Based on our findings, we discuss implications for developing services that provide emergency support to the homeless population.
To protect the password from visual attacks, most password entry screens use a password masking scheme that displays a series of placeholder characters (e.g., dots and asterisks) instead of the actual password. However, recent research has shown the security provided by this form of password masking to be weak against keystroke timing-analysis attacks. The underlying idea behind these attacks is that, even when a password is masked as described above, the timing between consecutive placeholder characters gives away information about the password, since the relative locations of characters on the keyboard dictate how fast fingers move between them. In this paper we argue that, for security-sensitive applications, password masking mechanisms ought to hide the true intervals between password characters in order to overcome these kinds of attacks. Adjusting these timings, however, has the potential to pose usability issues, given that the typing would not perfectly align with the display of typed content. The paper proposes three different password masking schemes and undertakes a usability evaluation of them. Our early results suggest that user receptiveness to two of the schemes is not much worse than that seen with the conventional (insecure) scheme.
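The abstract does not specify the three proposed schemes, so as a minimal illustration of the general idea of hiding true inter-keystroke intervals, here is one hypothetical approach: displaying placeholder dots at a fixed cadence, regardless of when keys were actually pressed.

```python
def mask_intervals(keystroke_times, cadence=0.15):
    """Return the times (seconds) at which placeholder dots appear.

    Echoing a dot at each true keystroke time leaks inter-key
    intervals; instead, dots appear at a fixed cadence starting
    from the first keystroke, hiding the typing rhythm. This is
    an illustrative scheme, not one of the paper's three.
    """
    if not keystroke_times:
        return []
    t0 = keystroke_times[0]
    return [t0 + i * cadence for i in range(len(keystroke_times))]

# True intervals vary (leaking key distances); displayed ones do not.
displayed = mask_intervals([0.0, 0.08, 0.41, 0.55])
```

The usability cost the paper alludes to is visible here: the fourth dot may appear before or after the fourth key is actually pressed, so feedback and input momentarily disagree.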
Proprioception is the body’s ability to sense the position and movement of each limb, as well as the amount of effort exerted onto or by them. Methods to assess proprioception have been introduced before, yet there is little to no work on assessing the degree of proprioception of body parts for use cases such as gesture recognition in wearable computing. We propose the use of Fitts’ law coupled with the N-Back task to evaluate proprioception of the hand. We evaluate 15 distinct points on the back of the hand and assess them using an extended 3D Fitts’ law. Our results show that the index of difficulty of tapping points from thumb to pinky increases gradually, with a linear regression factor of 0.1144. Additionally, participants perform the tap before performing the N-Back task. From these results, we discuss the fundamental limitations and suggest how Fitts’ law can be further extended to assess proprioception.
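For reference, the standard one-dimensional Shannon formulation of Fitts' index of difficulty can be computed as below. Note this is not the extended 3D formulation the study uses, and the distances and widths are hypothetical values chosen purely for illustration.

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' law: ID = log2(D / W + 1), in bits.
    # distance = amplitude of the movement, width = target size.
    return math.log2(distance / width + 1)

# Hypothetical targets on the back of the hand: a farther target of
# the same size yields a higher ID, i.e. a harder tap.
near = index_of_difficulty(distance=30.0, width=15.0)  # 30 mm away, 15 mm wide
far = index_of_difficulty(distance=90.0, width=15.0)   # 90 mm away, 15 mm wide
```

The study's reported regression factor describes how such ID values grow across tap points from thumb to pinky; the formula above shows only the standard scalar case.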
This paper describes a qualitative study of media and advocacy publications about digital surveillance in the context of Black Lives Matter protests, including recommendations for techniques on how to circumvent such surveillance. We conducted a content analysis of the recommendations given for circumventing surveillance provided by media, news, activist, and commercial outlets. We describe the recommendations provided and identify common fears and implications of protest surveillance as expressed by these sources. We identified thematic categories of surveillance fears and implications, including ruined reputations, online harassment, arrest, lack of transparency, and the chilling of free speech and protest. Finally, we describe what we see as challenges protesters will have implementing the recommendations (for example, due to availability and accessibility of technology and certain types of expertise required), complicating the creation of the kind of security culture protesters need.
The strengthening of community care and the development of co-managed telehealth systems are vital components in addressing growing critical healthcare issues encountered worldwide. The global COVID-19 pandemic highlights the challenges in providing appropriate co-managed home-based care in a systemic and financially viable way at scale. It is important to understand the individual, institutional, and socio-technical opportunities and barriers potentially encountered when attempting to implement telehealth systems as part of a broader social healthcare network. As part of our work designing telehealth systems for home-based physical rehabilitation, we conducted a survey and interviews with occupational and physical therapists to better understand the everyday individual and institutional reality within which our systems might ultimately be embedded. We describe the integrated personal, economic, and regulatory issues involved and propose guidelines for designers of telehealth systems for home-based contexts to consider.
Imagine an application that requires constant configuration changes, such as modifying the brush type in a drawing application. Typically, options are hierarchically organized in menu bars that the user must navigate, sometimes through several levels, to select the desired mode. An alternative that reduces hand motion is the use of multimodal techniques such as gaze-touch, which combines gaze pointing with mechanical selection. In this paper, we introduce GazeBar, a novel multimodal gaze interaction technique that uses gaze paths as a combined pointing and selection mechanism. The idea behind GazeBar is to maximize the interaction flow by reducing “safety” mechanisms (such as clicking) under certain circumstances. We present GazeBar’s design and demonstrate it using a digital drawing application prototype. Advantages and disadvantages of GazeBar are discussed based on a user performance model.
Semi-rigid and rigid structures have been utilized in many on-body applications including musculoskeletal support (e.g., braces and splints). However, most of these support structures are not very compliant, so effortless custom fitting becomes a unique design challenge. Furthermore, the weight and space needed to transport these structures impede adoption in mobile environments. Here, we introduce ExoForm, a compact, customizable, and semi-rigid wearable material system with self-fusing edges that can semi-autonomously assemble on-demand while providing integrated sensing, control, and mobility. We present a comprehensive and holistic engineering strategy that includes optimized material composition, computationally-guided design and fabrication, semi-autonomous self-morphing assembly and fusing steps, heating control, and sensing for our easy-to-wear ExoForm. Finally, we fabricate wearable braces using the ExoForm method as a demonstration along with preliminary evaluation of ExoForm's performance.
A sense of autonomy is important to ensure a good quality of life for older adults. However, the recent COVID-19 pandemic challenged older adults’ autonomy. The government has imposed rules and regulations to slow down the spread of the virus that reduced older adults’ abilities to go outside. We conducted semi-structured interviews with 15 older adults living alone in community-dwellings to understand the influence of the pandemic on their experiences of autonomy. Our findings show that even though older adults experienced limited autonomy due to extrinsic (e.g., government regulation, influence of others) and intrinsic (e.g., risk perception) factors, they experienced new opportunities for autonomy through engagement in various self-chosen activities at home. The insights from the study pave ways for future research and design of aging in place technology focusing on providing a sense of autonomy.
In spite of growing demand for mobile data visualization, few design guidelines exist to address its many challenges, including small screens and low touch interaction precision. Both of these challenges can restrict the number of data points a user can reliably select and view in more detail, which is a core requirement for interactive data visualization. In this study, we present a comparison of the conventional tap technique for selection with three variations that include visual feedback, to understand which interaction technique allows for optimal selection accuracy. Based on the results of the user study, we provide actionable solutions to improve interaction design for mobile visualizations. We find that visual feedback, such as selection with a handle, improves selection accuracy three- to fourfold compared to tap selection. At 75% accuracy, users could select a target item among 176 items total using the handle, but only among 60 items using tap. On the other hand, techniques with visual feedback took about twice as long per selection when compared to tap. We conclude that designers should use selection techniques with visual feedback when the data density is high and improved selection precision is required for a visualization.
It is common knowledge that health professionals’ device training is a major problem, as inadequate device training puts patients at risk. To counter this problem, we propose a virtual reality training simulation. The corresponding use case is exemplified by the priming procedure of a dialysis machine. Users go through sequential interaction tasks in Virtual Reality Training using a head-mounted display. We evaluate this training method’s potential in a user study, comparing it to two traditional training methods: Group Training and Video Training. Our findings demonstrate that Virtual Reality Training using a head-mounted display is a more effective and efficient method of learning how to prime a dialysis machine. Compared to the other training methods, Virtual Reality Training takes longer on average, but the resulting learning effect is also higher. Furthermore, VR training is more cost-effective than personal training and can be performed repeatedly, as it dispenses with the need for teaching professionals.
Advertising networks enable developers to generate revenue, but using them potentially impacts user privacy and requires developers to make legal decisions. To understand what privacy information ad networks give developers, we conducted a walkthrough of four popular ad networks’ guidance pages with a senior Android developer, examining the privacy-related information presented to developers. We found that the information focuses on complying with legal regulations and puts the responsibility for such decisions on the developer. Also, sample code and settings often have privacy-unfriendly defaults laced with dark patterns that nudge developers’ decisions towards privacy-unfriendly options, such as sharing sensitive data to increase revenue. We conclude by discussing future research around empowering developers and minimising the negative impacts of dark patterns.
The integration of gaze gesture sensors in next-generation smart glasses will improve usability and enable new interaction concepts. However, consumer smart glasses place additional requirements on gaze gesture sensors, such as low power consumption, high integration capability, and robustness to ambient illumination. We propose a novel gaze gesture sensor based on laser feedback interferometry (LFI), which is capable of measuring the rotational velocity of the eye as well as the sensor’s distance to the eye. This sensor delivers a unique and novel set of features with an outstanding sample rate, allowing it not only to predict a gaze gesture but also to anticipate it. To take full advantage of the unique sensor features and the high sampling rate, we propose a novel gaze symbol classification algorithm that operates on single samples. At a mean F1-score of 93.44%, our algorithm shows exceptional classification performance.
In this paper, two cornerstone game design and research models — the Mechanics, Dynamics, Aesthetics framework and the magic circle — are revisited and inspected through a lens of user experience design. This suggests two aspects that are missing from these models: a clear distinction between the intended and actual player experience, and the inclusion of transfer of knowledge, actions, and emotions between the game world and the real world, as part of the player experience. Based on these observations, a new unified model is proposed.
Numerous systems based on mid-air gestures have recently been proposed as a digital variant of object manipulation with the hands. At the same time, however, direct haptic feedback is lost, eliminating an important aspect that we are familiar with from real-life interaction. We believe that smartwatches, as widely used personal devices, could provide a platform for accessible, flexible, and unobtrusive integration of haptic feedback into mid-air gesture interaction. We prototyped a vibrotactile wristband with four vibration actuators, aimed at translating invisible, otherwise undetectable virtual object properties such as electricity, weight, and tension into 3D haptic experiences. In this paper, we present findings from a user study (n=18) that examined the suitability of different vibration patterns (varying in intensity, temporal profile, rhythm, and location). Results show that all feedback variants have a positive impact on user experience (UX) when interacting with virtual objects. Constant, continuous patterns outperform the other variants examined.
Conversations with Holocaust survivors are an integral part of education at schools and universities as well as part of the German memory culture. The goal of interactive stereoscopic digital Holocaust testimonies is to preserve the effects of meeting and interacting with these contemporary witnesses as faithfully as possible. These virtual humans are non-synthetic. Instead, all their actions, such as answers and movements, are pre-recorded. We conducted a preliminary study to gauge how people perceive this first German-speaking digital interactive Holocaust testimony. The focus of our investigation is the ease-of-use, the accuracy and relevance of the answers given as well as the authenticity and emotiveness of the virtual contemporary witness, as perceived by the participants. We found that digital 3D testimonies can convey emotions and lead to enjoyable experiences, which correlates with the frequency of correctly matched answers.
Nowadays, with electronic devices pervading everyday life, mobile devices can be utilized for learning purposes. When designing a mobile-based learning application, a large number of aspects should be taken into account. For the present paper, the following aspects are of special importance: first, how to represent information; second, the possible interactions between learner and system; and third – depending on the second aspect – how real-time responses can be provided by the system. Moreover, psychological theories such as the 4C/ID model and findings with respect to blended learning environments should be taken into account. In this paper, a mobile-based learning prototype for the topic of logic circuit design is presented which considers the mentioned aspects to support independent practice. The prototype includes four different representations: (i) code-based (Verilog hardware description language), (ii) graphical (gate-level view), (iii) Boolean function, and (iv) truth table for each gate. The proposed learning system divides the learning content into different sections to support independent practice in meaningful steps. Multiple representations are included in order to foster understanding and transfer. The resulting implications for future work are discussed.
Strict password policies can frustrate users, reduce their productivity, and lead them to write their passwords down. This paper investigates the relation between password creation and cognitive load inferred from eye pupil diameter. We use a wearable eye tracker to monitor the user’s pupil size while creating passwords with different strengths. To assess how creating passwords of different strength (namely weak and strong) influences users’ cognitive load, we conducted a lab study (N = 15). We asked the participants to create and enter 6 weak and 6 strong passwords. The results showed that passwords with different strengths affect the pupil diameter, thereby giving an indication of the user’s cognitive state. Our initial investigation shows the potential for new applications in the field of cognition-aware user interfaces. For example, future systems can use our results to determine whether the user created a strong password based on their gaze behavior, without the need to reveal the characteristics of the password.
We report the opportunities and challenges of parallel chat in work-related video meetings, drawing on a study of Microsoft employees’ remote meeting experiences during the COVID-19 pandemic.
We find that parallel chat allows groups to communicate flexibly without interrupting the main conversation and to coordinate action around shared resources, and that it also improves inclusivity. On the other hand, parallel chat can also be distracting and overwhelming, and can cause information asymmetries.
Further, we find that whether an individual views parallel chat as a net positive in meetings is subject to the complex interactions between meeting type, personal habits, and intentional group practices. We suggest opportunities for tools and practices to capitalise on the strengths of parallel chat and mitigate its weaknesses.
Virtual reality (VR) is, by nature, excellent at showing spatial relationships, e.g. for viewing medical 3D data. In this work, we propose a VR system to view and manipulate medical 3D images of livers in combination with 3D-printed liver models as controllers. We investigate whether users benefit from a controller in the shape of a liver and whether its size matters, using three different sizes (50%, 75%, 100%). In a user study with 14 surgeons, we focused on presence, workload, and qualitative feedback such as preference. While neither the size differences nor the VIVE tracker as a control resulted in significant differences, most surgeons preferred the 75% model. Qualitative results indicate that high similarity of physical and virtual objects in shape and good manageability of the physical object are more important than providing an exact replica in size.
Today’s level of cyclists’ road safety is primarily estimated using accident reports and self-reported measures. However, the former is focused on post-accident situations and the latter relies on subjective input. In our work, we aim to extend the landscape of cyclists’ safety assessment methods via a two-dimensional taxonomy, which covers data source (internal/external) and type of measurement (objective/subjective). Based on this taxonomy, we classify existing methods and present a mobile sensing concept for quantified cycling safety that fills the identified methodological gap by collecting data about body movements and physiological data. Finally, we outline a list of use cases and future research directions within the scope of the proposed taxonomy and sensing concept.
Lateral epicondylitis (LE), or tennis elbow, is a highly prevalent musculoskeletal condition that affects millions of people. Physiotherapy is a common treatment, with a large portion consisting of prescribed home-based exercises. Adherence to these programs is an important factor in rehabilitation; however, there are many barriers to adherence, including the exercise taking up too much of the patient’s attention or the patient feeling that they are not carrying out exercises correctly. To address these problems, this paper describes a prototype system that uses haptic feedback to guide the patient to correctly carry out a commonly prescribed LE rehabilitation exercise, while allowing them to attend to external information such as another person or a screen. The system conveys information about the speed and position of the user’s wrist movement via peripheral vibration feedback, allowing the user to adjust their movement while keeping the visual and auditory senses free to attend to other sources. Finally, we discuss future areas of research for this prototype and applications of vibrotactile feedback for physiotherapy in general.
Companion robots have been suggested as a promising technology for older adults who experience loneliness. However, healthy older adults commonly reject robots designed to be an “artificial friend”. We follow the approach of “companionship as a secondary function”, in which a non-humanoid robot is designed with a primary function that older adults perceive as appropriate, and a secondary function of companionship. In a Zoom-based exploratory needs study we unpack how older adults perceive the various aspects of a robot’s “companionship” as a secondary function. Our qualitative analysis reveals several use cases that older adults find appropriate for their daily routine, and classifies them into three high-level categories: companionship as “attentive to me”, companionship as “looking after me”, and companionship as “experiencing together with me”. Our findings indicate that robot companionship, when designed as a secondary function, is perceived by older adults as a multifaceted social experience.
Responsible conduct of research (RCR) is an essential skill for all researchers to develop, but training scientists to behave ethically is complex because it requires addressing both cognitive (i.e., conceptual knowledge and moral reasoning skills) and socio-affective (i.e., attitudes) learning outcomes. Currently, both classroom- and web-based forms of RCR training struggle to address these distinct types of learning outcomes simultaneously. In this paper, we present a study providing initial evidence that playing a single brief session of Academical, a choice-based interactive narrative game, can significantly improve players’ attitudes about RCR. We further demonstrate the relationship between engagement with the game and resulting attitudes. Combined with our previous work showing Academical’s advantages over traditional RCR training for teaching cognitive learning outcomes, this study’s results highlight that utilizing a choice-based interactive story game is a uniquely effective way to holistically address RCR learning outcomes that drive ethical research behavior.
Ubiquitous technology, e.g., smartphones or tablets, has created a continuously available digital world, drastically changing our feeling of being in the here and now – known as presence. We thus increasingly shift between the real and the digital world, ranging from losing awareness of our real surroundings to cutting out the digital world to truly be in the real one. In this work, we aim to explore the middle ground in between. We move beyond classic VR research on presence and look at everyday ubiquitous technology and its influence on presence in the real and digital worlds. By means of a focus group (N=6) and a subsequent online survey (N=36), we gathered individual notions of presence as well as experiences and situations in which people move between the worlds. We discuss the need to further explore presence and its dynamics across the real and digital worlds in everyday life.
This paper introduces TactiHelm, a helmet that can inform cyclists about potential collisions. To inform the design of TactiHelm, a survey on cycling safety was conducted. The results highlighted the need for a support system that informs cyclists of the location and proximity of surrounding vehicles. A set of tactile cues for TactiHelm conveying the proximity and direction of potential collisions was designed and evaluated. The results show that participants could correctly identify proximity with up to 91% accuracy and direction with up to 85% accuracy when tactile cues were delivered on the head, making TactiHelm a suitable device for notifications while cycling.
Aging and time have mostly been studied separately in HCI. In this paper, we examine retired people's experiences of time through their daily routines, using the lens of emerging time and design concepts from Human-Computer Interaction. In this Late-Breaking Work, we introduce a design concept based on interviews with retired people about their time routines. Inspired by previous work in HCI on emerging non-clock time concepts such as plastic time and collective time, we identified several promising areas for design explorations around organizing time in later life. Based on insights from the initial findings around collective rhythms and quality of time, we propose an artefact (with two versions) designed with the guidance of positive design theory to discuss creating meaningful experiences. We also pose questions on how to form positive time interactions for later life.
The ability to deal properly with emotion could be a critical feature of future VoiceBots. Humans might even choose to use fake emotions, e.g., sound angry to emphasize what they are saying or sound nice to get what they want. However, it is unclear whether current emotion detection methods detect such acted emotions properly, or rather the true emotion of the speaker. We asked a small number of participants (26) to mimic five basic emotions and used an open source emotion-in-voice detector to provide feedback on whether their acted emotion was recognized as intended. We found that it was difficult for participants to mimic all five emotions and that certain emotions were easier to mimic than others. However, it remains unclear whether this is due to the fact that emotion was only acted or due to the insufficiency of the detection software. As an intended side effect, we collected a small corpus of labeled data for acted emotion in speech, which we plan to extend and eventually use as training data for our own emotion detection. We present the study setup and discuss some insights on our results.
Embodied conversational agents (ECAs) provide an interface modality on smartphones that may be particularly effective for tasks with significant social, affective, reflective, and narrative aspects, such as health education and behavior change counseling. However, the conversational medium is significantly slower than conventional graphical user interfaces (GUIs) for brief, time-sensitive tasks. We conducted a randomized experiment to determine user preferences in performing two kinds of health-related tasks—one affective and narrative in nature and one transactional—and gave participants a choice of a conventional GUI or a functionally equivalent ECA on a smartphone to complete the task. We found significant main effects of task type and user preference on user choice of modality, with participants choosing the conventional GUI more often for transactional and time-sensitive tasks.
The explication of socio-technical challenges posed by personal data leverage in media research, such as privacy concerns, data transparency and control, accountability issues, and lack of trust, has been of interest recently. While many of these challenges have been studied and discussed in disciplines ranging from policy to technology to sociology, effective responses that alleviate them are yet to be realised. This calls for holistic approaches that enable inter-disciplinary perspectives and methodologies that speak to these wider challenges in more comprehensive and thus more effective ways. Human Data Interaction (HDI), a cross-disciplinary branch of knowledge inspired by HCI, seeks to alleviate these challenges through three overarching principles: data legibility, negotiability, and agency. But, given the emergent nature of this ‘fledgling’ domain, the practical realization of these theoretical tenets is yet to be operationalized. This paper reports the design of one such initiative that integrates the principles of HDI, supported by wider research, into the design of a novel, data-driven media experience: a Cross Media Profiler (CMP). The first half of the paper presents the socio-technical challenges confronting the turn to personal data use in media experiences, rationalizes the need and scope for interdisciplinary approaches like HDI, and reviews wider literature in line with the principles of HDI. The rest of the paper reports the design of the CMP, particularly the realization of the HDI principles within this particular media service proposition. We intend for this design intervention to be leveraged in formative and evaluative studies that test user response to the design decisions made and their effectiveness in alleviating socio-technical challenges, while also contributing to the current knowledge base supporting HDI.
Despite the growing interest in Intelligent Personal Assistants in many domains, few studies have explored this technology’s usage among persons with neurodevelopmental disorders (NDD) and its potential for improving their cognitive skills. This paper presents an exploratory study that investigated the use of Google Home featuring Google Assistant in a therapeutic setting. We provided three therapists with two Google Home devices for 21 days, and they were welcome to use them as they liked in their ordinary one-to-one therapy sessions. During the study, we gathered different kinds of data: history logs from Google Assistant and semi-structured observations and comments by the specialists through questionnaires, forms, and a group interview. Our findings give a first glimpse of the usage patterns of Google Assistant within therapy for children with NDD. Furthermore, our results point out the benefits and challenges for both therapists and children when interacting with conversational technology.
We explore how commodity objects and technologies can be repurposed to provide a multimodal programming environment that is accessible to children with visual impairments, flexible, and scalable to a variety of programming challenges. Our approach relies on four main components: 1) a LEGO base plate where LEGO blocks can be assembled to create maps, which is flexible and robust for tactile recognition; 2) a tangible programming area where LEGOs, with 3D-printed caps enriched with tactile icons, can be assembled to create a program; 3) alternatively, a voice dialogue through which the program can be created; and 4) a low-cost OzoBot Bit robot. A preliminary study with educators suggests that the approach could be useful at a variety of developmental stages, is accessible and stimulating, and is promising for computational thinking (CT) training.
As online streaming becomes a primary method for music consumption, the various modalities that many people with hearing loss rely on for their enjoyment need to be supported. While visual or tactile representations can be used to experience music in a live event or from a recording, DRM anti-piracy encryption restricts access to audio data needed to create these multimodal experiences for music streaming. We introduce BufferBeats, a toolkit for building multimodal music streaming experiences. To explore the flexibility of the toolkit and to exhibit its potential use cases, we introduce and reflect upon building a collection of technical demonstrations that bring previous and new multimodal music experiences to streaming. Grounding our work in critical theories on design, making, and disability, as well as experiences from a small group of community partners, we argue that support for multimodal music streaming experiences will not only be more inclusive to the deaf and hard of hearing, but it will also empower researchers and hobbyist makers to use streaming as a platform to build creative new representations of music.
We present eGlove, a wearable, low-cost fabric sensor for recognizing a rich set of objects by touch, including daily necessities, fruits, plants, and different body parts. Our sensing approach uses Swept Frequency Capacitive Sensing (SFCS) to provide consistent readings even when the fabric electrode is under varying degrees of deformation and stretching. We propose an easy fabrication method and hardware configuration for prototyping the interactive fabric sensor. We evaluated our system's classification accuracy with per-user training and achieved a real-time classification accuracy of 96.3%. We also demonstrate novel contextual interactions enabled by our technical approach through several applications.
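Swept-frequency capacitive sensing excites the electrode across a range of frequencies and classifies the resulting response profile. The sketch below is a hedged illustration of that idea, not the eGlove implementation: the sweep range, signal shapes, object labels, and the nearest-centroid rule with per-user training profiles are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(1e3, 3.5e6, 64)  # hypothetical sweep range (Hz)

def synth_profile(peak_hz, noise=0.02):
    # Toy model: a touched object shifts the resonant dip of the sweep.
    base = 1.0 - 0.6 * np.exp(-((freqs - peak_hz) / 4e5) ** 2)
    return base + rng.normal(0, noise, freqs.size)

classes = {"mug": 8e5, "plant": 1.6e6, "forearm": 2.4e6}  # invented labels

# Per-user training: a few touches per object yield per-class centroids.
centroids = {c: np.mean([synth_profile(p) for _ in range(10)], axis=0)
             for c, p in classes.items()}

def classify(profile):
    # Nearest centroid in Euclidean distance over the sweep profile.
    return min(centroids, key=lambda c: np.linalg.norm(profile - centroids[c]))

pred = classify(synth_profile(1.6e6))  # a fresh "plant" touch
```

A real pipeline would replace the synthetic profiles with measured capacitance sweeps and likely a stronger classifier, but the train-per-user, classify-per-sweep structure is the same.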
Navigation is a multifaceted human ability involving complex cognitive functions. It allows us to actively explore unknown environments without becoming lost and to move efficiently across well-known spaces. However, increasing reliance on navigation assistance systems reduces processing of the surrounding environment, which decreases spatial knowledge acquisition and thus orienting ability. To prevent the skill loss induced by current navigation support systems such as Google Maps, we propose a novel landmark technique in augmented reality (AR): the virtual global landmark (VGL). This technique seeks to aid navigation while promoting spatial learning. We conducted a pilot study with five participants comparing directional arrows with the VGL. Our results suggest that participants learned more about the environment while navigating with the VGL than with directional arrows, without any significant increase in mental workload. These findings have implications for the design of future navigation systems.
Implicit gender bias has costly and complex consequences for women in the workplace. We present an online desktop virtual environment that follows the story of a male or female self-avatar, from the first-person perspective, through either a positive or a negative workplace scenario. Participants who went through the negative workplace scenario with a female self-avatar showed significantly decreased levels of implicit gender bias compared to those with a male self-avatar, with evidence of perspective taking. The positive workplace scenario showed no significant decrease in implicit gender bias regardless of self-avatar gender. We discuss the implications of these findings and make recommendations for virtual environment technologies and scenarios with respect to reducing implicit biases.
Drug addiction is a chronic disorder associated with a range of emotional and cognitive problems. As a result, drug-addicted patients often display deficits in mind-body awareness and easily ignore, or fail to identify, emotional or environmental cues. We conducted a needs assessment study to derive guidelines for incorporating virtual reality (VR) and biofeedback technologies that assist psychotherapists in raising patients' awareness and identifying their cues in psychotherapy. Our results identify current difficulties in reinforcing mind-body awareness and in correctly attributing craving stimuli. We also summarize concerns about, and the potential of, applying VR and biofeedback technologies to induce cravings and thereby promote the effectiveness of psychotherapy. Finally, we propose a preliminary design of a technology solution that combines clinician-friendly VR-based craving induction with real-time biofeedback, enabling psychotherapists to jointly review induced craving events based on the collected information.
The purpose of this study was to investigate whether stimulating the proprioceptors of the arm during active movement can affect not only the proprioception of the arm but also the perception of a hand-held object. If the perception of a hand-held object can be controlled through stimulation of the body, this could be applied to virtual-reality interfaces and controllers usable in a wide range of situations. In the experiment, proprioceptive stimulation was based on the kinesthetic illusion induced by vibratory stimulation of muscle spindles and skin stretch near the joint. Participants grasped an object and moved it actively, and were asked to evaluate their perception of the object and the arm as the phase between movement and stimulation and the stimulus conditions were varied. We found that the perception of not only the arm but also the hand-held object could be changed, although with large individual differences.
Volunteerism in the digital age offers many new possibilities and avenues for public participation. In this paper, we discuss volunteering as a form of work and how certain experiential aspects of the HCI systems supporting volunteers' unpaid labour are instrumental to volunteer wellbeing. The sense of feeling close to others and experiencing relatedness is an important factor that can predict volunteers' engagement and wellbeing. Relatedness can be achieved in many ways, for instance through expressing and receiving gratitude. Through four co-design workshops with n=9 participants, we identified seven perceptions volunteers hold regarding their relatedness experiences. This was achieved via a case study of an online teleconferencing platform where volunteers help train and assess medical students' communication skills. We discuss the findings to inform future designs that support an adequate level of formality and emotional labour in online volunteering communities.
Artificial intelligences (AI) are increasingly being embodied and embedded in the world to carry out tasks and support decision-making with and for people. Robots, recommender systems, voice assistants, virtual humans—do these disparate types of embodied AI have something in common? Here we show how they can manifest as “socially embodied AI.” We define this as the state that embodied AI “circumstantially” take on within interactive contexts when perceived as both social and agentic by people. We offer a working ontology that describes how embodied AI can dynamically transition into socially embodied AI. We propose an ontological heuristic for describing the threshold: the Tepper line. We reinforce our theoretical work with expert insights from a card sort workshop. We end with two case studies to illustrate the dynamic and contextual nature of this heuristic.
The perceived emotional intelligence of a conversational agent (CA) can significantly impact people's interaction with it. Prior research applies text-based sentiment analysis and emotional response generation to improve CAs' emotional intelligence, but acoustic features in speech, which carry rich context, remain underexploited. In this work, we designed and implemented an emotionally aware CA called HUE (Heard yoUr Emotion) that stylizes responses with emotion regulation strategies and empathetic interjections. We conducted a user study with 75 participants to evaluate their perceived emotional intelligence (PEI) of HUE by having them observe conversations between people and HUE in different emotional scenarios. Our results show that participants' PEI was significantly higher with the acoustic features than without.
Gaze is one of the most important communication cues in face-to-face collaboration, but sharing dynamic gaze information during remote collaboration is difficult. In this research, we investigate how sharing gaze behavioural cues can improve remote collaboration in a Mixed Reality (MR) environment. To do this, we developed eyemR-Vis, a 360° panoramic MR remote collaboration system that shows gaze behavioural cues as bi-directional spatial virtual visualisations shared between a local host and a remote collaborator. Preliminary results from an exploratory study indicate that using virtual cues to visualise gaze behaviour has the potential to increase co-presence, improve gaze awareness, and encourage collaboration, while tending to be less physically demanding and mentally distracting.
Pet owners form strong emotional bonds with their dogs, but sometimes must be separated from them for various reasons, resulting in negative feelings such as distress or anxiety. Researchers and designers in HCI have explored a variety of approaches to this issue, including pet cameras, pet chat systems, and devices for playful remote interaction and haptic simulation. Informed by prior work on facilitating intimacy between distant couples and family members, we present the design and implementation of cAMpanion, which promotes distant owners' sense of connectedness to their dogs while they are apart. cAMpanion displays three statuses of the dog in real time via different light colors, enabled by sensor modules installed in the dog's living environment. We share lessons learned from a pilot study examining potential issues during sensor deployment, as well as participants' feedback on cAMpanion.
With the spread of infectious diseases, remote communication is becoming ever more important in people's lives. We focus on the "feeling of presence," an implicit indication of the presence of others that is considered an important factor in remote communication. In this paper, we propose a system that presents a presence-like sensation using an electrostatic field. Through experiments, we show that the system can present a presence-like sensation, and we investigate the distance most suitable for inducing it.
Social support is known to be beneficial for mental health. With the development of social media, social support could also influence the development of online mental health communities, yet few studies have examined these effects from the perspective of such communities. This study focused on 22 mental-health-related subreddits and conducted a causal analysis by matching and comparing users (1) who received social support with those who did not, and (2) who received positive social support with those who received negative social support. The results show that social support is "contagious": users who received social support on their first post were more likely to post again, did so more quickly, and went on to provide support for others; users who received positive support also provided more positive social support for others in the future. Our findings indicate a potential chain reaction of social support. This study also provides insights into how online mental health communities can better facilitate people in spreading social support and dealing with mental health problems.
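The matching-based comparison described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis: the covariate (account age), the treatment assignment, and the outcome (days until the next post) are all synthetic, and the invented effect size is chosen only to make the mechanics visible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic users: covariate = account age (days), treatment = received
# support on first post, outcome = days until second post (smaller = quicker).
n = 400
account_age = rng.uniform(0, 1000, n)
treated = rng.random(n) < 0.5
# Invented effect: receiving support speeds up the next post by ~5 days.
days_to_next = 20 - 5 * treated + 0.01 * account_age + rng.normal(0, 2, n)

# Match each treated user to the untreated user closest in covariate space,
# then compare outcomes within matched pairs.
t_idx = np.flatnonzero(treated)
c_idx = np.flatnonzero(~treated)
effects = []
for i in t_idx:
    j = c_idx[np.argmin(np.abs(account_age[c_idx] - account_age[i]))]
    effects.append(days_to_next[i] - days_to_next[j])

att = float(np.mean(effects))  # average treatment effect on the treated
```

A negative `att` here means supported users return faster; a real study would match on many covariates (e.g., via propensity scores) rather than one.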
Good eating habits are important for growing children. However, many children exhibit anorexia and picky eating behaviors, causing problems such as malnutrition and decreased resistance to illness. Our research focuses on guiding children to develop healthy eating habits through interactive animation and gamified design. We designed an intelligent dinner plate system, composed of a gravity-sensing dinner plate and a micro projection device, which provides guidance through interactive animation while children eat. We tested the system with children aged 5-7 who exhibit picky eating behaviors. The results showed that the system had a positive effect on improving their eating habits.
COVID-19 has led to the rapid popularization of video conferencing. A growing number of users must find suitable places for video conferencing, yet sometimes they inevitably participate under unsuitable conditions, such as noisy (or, conversely, very quiet) public spaces. How the surrounding environment shapes the video conferencing experience has not been sufficiently discussed. In particular, little research has examined occasions where video conferencing participants feel unable to speak aloud due to spatial factors, or how to address these situations. In this study, we propose a voice output communication aid (VOCA) for video conferencing that allows users to chat without making a sound. We built a technology probe and conducted a user test. Users who felt unable to speak aloud could participate more actively with the VOCA. Based on the results, we describe the effects and potential of VOCA for video conferencing.
There is an increasing number of Virtual Reality (VR) applications in education, entertainment, and industry, many of which use real-world tools, environments, and interactions as bases for creation. However, creating such applications is tedious and fragmented, and requires expertise in authoring VR with programming and 3D-modelling software. This hinders VR adoption by decoupling subject-matter experts from the authoring process while increasing cost and time. We present VRFromX, an in-situ Do-It-Yourself (DIY) platform for content creation in VR that allows users to create interactive virtual experiences. Using our system, users can select regions of interest (ROI) in a scanned point cloud, or sketch in mid-air using a brush tool, to retrieve virtual models and then attach behavioral properties to them. We ran an exploratory study to evaluate the usability of VRFromX, and the results demonstrate the feasibility of the framework as an authoring tool. Finally, we implemented three use cases to showcase potential applications.
Advances in Artificial Intelligence (AI), especially the stunning achievements of Deep Learning (DL) in recent years, have shown that AI/DL models can capture the logical reasoning behind the tasks they solve. However, human understanding of what knowledge is captured by deep neural networks remains elementary, and this has a detrimental effect on human trust in the decisions made by AI systems. Explainable AI (XAI) is a hot topic in both the AI and HCI communities, aiming to open up the black box and elucidate the reasoning processes of AI algorithms in a way that makes sense to humans. However, XAI is only half of human-AI interaction; research on the other half - humans' feedback on AI explanations, together with AI making sense of that feedback - is generally lacking. Human cognition is likewise a black box to AI, and effective human-AI interaction requires unveiling both black boxes to each other for mutual sensemaking. The main contribution of this paper is a conceptual framework for supporting effective human-AI interaction, referred to as interactive and continuous sensemaking (HAICS). We implement this framework in an image classification application using deep Convolutional Neural Network (CNN) classifiers: a browser-based tool displays network attention maps to the human for explainability and collects human feedback in the form of scribble annotations overlaid onto the maps. Experimental results on a real-world dataset show a significant improvement in classification accuracy (the AI performance) with the HAICS framework.
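The core feedback signal in such a loop, comparing a model's attention map against a human's scribble annotation, can be sketched simply. The agreement measure and the reweighting heuristic below are our own illustrative choices, not the HAICS paper's method.

```python
import numpy as np

def attention_agreement(attention, scribble_mask):
    """Fraction of the model's attention mass falling inside the
    human-scribbled region (both arrays share the image's H x W shape)."""
    attention = attention / attention.sum()  # normalize to a distribution
    return float(attention[scribble_mask.astype(bool)].sum())

# Toy 4x4 example: the model attends top-left, the human scribbles top-left.
attn = np.zeros((4, 4)); attn[:2, :2] = 1.0
mask = np.zeros((4, 4)); mask[:2, :2] = 1
agree = attention_agreement(attn, mask)

# One plausible use of the signal: upweight low-agreement images when the
# classifier is retrained, so human feedback steers the model's attention.
sample_weight = 1.0 + (1.0 - agree)
```

Here full overlap gives an agreement of 1.0 and leaves the sample weight unchanged, while a scribble disjoint from the attention map would double the image's weight in the next training round.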
Human-computer interaction (HCI) researchers have explored designs that connect humans and non-human beings, building on post-humanistic discussions of a speculative ethics of care for more-than-human worlds. Following these empirical and theoretical frameworks, this paper explores the potential of HCI design to foster human affection toward fermentative microbes. We present the design process for the Nukabot, a technologically enhanced traditional Japanese wooden bucket used to pickle vegetables with lactic acid bacteria, which can converse with humans via voice interaction. We describe ethnographic accounts of six participants who spent 10 days taking care of, talking to, and being addressed by the Nukabot. We analyze their experiences through three ethopoietic elements of care: maintenance, affection, and obligation. Finally, we discuss the design implications of the Nukabot and its contributions to HCI research.
Smart speakers and conversational interfaces are increasingly making it into consumers' homes. Merely listening to users' commands, however, assigns the speaker a rather passive role. Proactive speakers, on the other hand, could empower a broad range of applications such as context-aware health interventions and self-tracking systems. Achieving proactivity, however, requires the speaker to become context-aware and to detect opportune moments for initiating interactions. In this work, we present a proactive speaker prototype based on Google Home to investigate interaction contexts. We additionally built a voice-based Experience Sampling Method (ESM) application to study contextual factors correlated with opportune moments for device-triggered interactions. Preliminary results from a three-week field study with 7 participants indicate that the proposed prototype is a robust way to achieve proactivity and implement voice-based ESM. We present use cases for future research and applications for proactive smart speakers in the context of digital health.
With a growing interest in HCI around food, there is a trend to combine computing technology and food to facilitate novel eating experiences. However, most current systems tend to superimpose the technology over food rather than consider food itself as a focal interaction material. This paper proposes a more direct computational food integration by conceptualizing the notion of “Cyber Food”, accentuating “food as computational artifact”, where food embodies digital computation that can be ultimately consumed and digested by the human body. With this work, we attempt to open a new pathway to enrich human-food interactions beyond the traditional boundaries between the physical (edible) and digital realms.
The configuration of office environments is related to worker satisfaction and can improve work efficiency. Although open offices promote communication among co-workers, privacy issues surface, along with increased health risks such as viral infections during pandemics. Territorial offices, on the other hand, protect worker privacy and reduce infection risks, but often hinder communication among co-workers. Although a physical office must satisfy different needs under various scenarios with limited space, the lack of flexibility in furniture design prevents dynamic space management. In this paper, we propose a self-actuated stretchable partition whose height, width, and position change automatically and dynamically to create secure workplaces (e.g., protecting privacy and reducing infection risks) without inhibiting group collaboration. We first consider the design space of a self-actuated stretchable partition and implement a proof-of-concept prototype with a height-adjustable stand, a roll-up screen, and a mobile robot. We then present example application scenarios to discuss potential future actuated territorial offices.
Satellite navigation systems, as everyday assistants, have been criticized for impairing human spatial memory. Nevertheless, few studies have directly investigated, in the same context, the specific roles of the main contributors to such impairment. This study selected two of these contributors and sought to eliminate them from the navigation system: returning route decision-making control to the user, and subliminally redirecting the user's attention, otherwise captured by the system, back to the environment. We designed two corresponding navigation modes and compared in detail the spatial knowledge acquired under each with that acquired under an efficient everyday navigation system. We conducted a within-subjects study in a Virtual Reality (VR) urban environment and collected and analyzed data on spatial knowledge. We conclude that redirecting attention significantly benefits the acquisition of landmark and route knowledge, and that increasing route decisions further benefits survey knowledge and self-orientation. Our analysis shows that it is possible to design a navigation system that retains efficient guidance while not impairing, or even enhancing, the user's spatial memory. We also reflect on how these results might inform the design of future navigation systems, so that application-specific studies can target the spatial memories that matter in each setting.
Retail experiences have started adopting Augmented Reality (AR) as a tool to let customers see products in their own space before committing to a purchase. Although online retailers use high-resolution 3D models to create onsite imagery, the limited rendering capabilities of phones require them to optimize models for real-time use, diminishing the visual quality of the experience and widening the gap between what customers think they are buying and what they get. We present Fused Photo, a system that leverages AR to capture the customer's spatial context and recreate it in a remote rendering environment, which then composites photorealistic images of products onto their space. Results show that Fused Photo outperforms other AR visualizations on the market in both realism and ability to represent a product's appearance, while being comparable in quality to photorealistic renders staged manually by artists.
Developing fully parametric building models for performance-based generative design tasks often requires proficiency in advanced 3D modeling and visual programming software, limiting their use for many building designers. Moreover, iterating on such models can be time-consuming and sometimes limiting depending on the design stage, as major changes to the layout may require remodeling the entire parametric definition. To address these challenges, we introduce a novel automated generative design system that takes a basic floor plan sketch as input and produces a parametric model prepared for multi-objective building optimization as output. In addition, the user-designer can assign design variables to desired building elements using simple annotations in the drawing. We take advantage of an asymmetric convolutional module combined with a parametrizer to allow real-time parametric sketch retrieval for a performance-based generative workflow: the system recognizes the corresponding element and defines variable constraints to set up a multi-objective optimization problem. We illustrate the use of our proposed system with a real-time structural optimization form-finding study. Our findings indicate that the system can serve as a promising generative design tool for novice users.
This paper explores the possibility of using self-tracking technologies to promote sustainable food habits. To explore socio-cultural aspects of food practices and to understand the potential role of self-tracking tools in participants' everyday lives, we designed and deployed a probe kit with seven participants. We found that (1) participants had differing, and at times conflicting, conceptions of what made their diet sustainable; (2) they viewed self-tracking tools as an unobtrusive means to help them achieve a goal, but did not report that those goals were ever met; and (3) they expressed strong associations between particular eating behaviors and rituals and/or holidays. Reflecting on the results and the concept of design (in)action, we suggest an alternative way of designing self-tracking tools for self-reflection (i.e., the art of noticing) that might help people attend to the impact of their food practices on the environment. We conclude by recommending a shift in focus from traditional, "always-on" self-tracking solutions toward everyday practices, holidays, or rituals in which a technological tool could bring more attention to a particular facet of food sustainability.
You may feel special and believe you are receiving personalized care when your doctor remembers your name and your unique medical history. But what if the doctor is an AI rather than a human? Since AI systems are driven by personalization algorithms, it is possible to design AI doctors that individuate patients with great precision. Is this appreciated, or perceived as eerie and intrusive, thereby negatively affecting doctor-patient interaction? We decided to find out by designing a healthcare chatbot that identified itself as AI, Human, or Human assisted by AI. In a user study assessing Covid-19 risk, participants interacted twice, 10 days apart, with a bot that either individuated them or not. The data show that individuation by an AI doctor lowers patient compliance. Surprisingly, a majority of participants in the human doctor condition thought that they had chatted with an AI doctor. The findings provide implications for the design of healthcare chat applications.
Post-it notes are great problem-solving tools. However, physical Post-it notes have limitations: surfaces for attaching them can run out, rearranging them can be labor-intensive, and documenting and storing them can be cumbersome. We present Post-Post-it, a novel VR interaction system that overcomes these physical limitations. We derived design requirements from a formative study involving a problem-solving meeting using Post-it notes. Then, through physical prototyping with materials such as Post-it notes, transparent acrylic panels, and masking tape, we designed a set of lifelike VR interactions based on hand gestures that users can perform easily and intuitively. With our system, users can create and place Post-it notes in an immersive space large enough to ideate freely; quickly move, copy, or delete many Post-it notes at once; and easily manage the results.
Virtual Reality (VR) has been an important research focus in HCI since the 1960s. The past five years have seen a more pronounced rise of VR, especially social VR, which is becoming increasingly popular within the LGBTQ community. Yet little research explores how LGBTQ users participate in social VR and how social VR can support them by affording a range of inclusive interactions. Based on eight interviews and two months of participatory observation, we report findings of a preliminary study of LGBTQ users' engagement in social VR, especially regarding how social VR may afford social support for these users. Our study contributes to a better understanding of the nuanced experiences of LGBTQ users in social VR, toward creating more inclusive and safe social VR spaces for all.
Women with metastatic breast cancer (MBC) face serious physiological and psychosocial challenges. Managing this chronic, incurable condition requires long-term care coordination, and traditional clinical methods often do not provide adequate, personalized support. In this project, we use smart speakers to deliver supportive care interventions that improve quality of life for women with MBC. Specifically, we developed Nurse AMIE (Addressing Metastatic Individuals Everyday), which leverages Amazon Alexa to remotely deliver validated interventions. We believe that voice interaction can significantly lower the barrier to using remote intervention technologies for this population. In a pilot study, we deployed Nurse AMIE for 14 days with 6 women with MBC. Based on the collected data, this paper discusses the feasibility, acceptability, and future directions of Nurse AMIE. To the best of our knowledge, this is the first study to use smart speakers to support women with MBC.
Rapid diagnostic tests (RDTs) are point-of-care medical tests used by clinicians and community healthcare workers to obtain quicker results at lower cost than traditional diagnostic tests. Distributing RDTs to people outside of the healthcare industry would significantly improve access to diagnostic testing; however, there are concerns that novices may administer the tests incorrectly and thus be left with invalid results. In response, we propose RDTCheck, a mobile application that guides users through the instructions of Quidel's QuickVue Influenza A+B test and uses computer vision to check adherence to the procedure. RDTCheck provides users with real-time feedback so that they can either correct their mistakes or re-administer the test. In this work, we report findings from a pilot study examining how well RDTCheck detects common mistakes and successes during the various steps of the QuickVue test. For the 7 participants we recruited, RDTCheck gave the correct feedback during the RDT administration procedure at an average success rate of 91.1%.
This work presents a novel prototype autonomous vehicle (AV) human-machine interface (HMI) in virtual reality (VR) that uses a human-like visual embodiment in the driver's seat of an AV to communicate the AV's intent to pedestrians in a crosswalk scenario. Despite the demonstrated efficacy of human-like interfaces in improving human-machine relationships, there is a gap in understanding the use of virtual humans in AV HMIs for pedestrian crossing. We conducted a 3x2 within-subjects experiment in VR using our prototype to assess the effects of a virtual-human visual embodiment on pedestrian crossing behavior and experience. Participants walked across a virtual crosswalk in front of an AV; we measured how long they took to decide to cross and how long they took to reach the other side, along with their subjective preferences and feelings of safety. Of 26 participants, 25 preferred the condition with the most anthropomorphic features. An intermediate condition, in which a human-like virtual driver was present but exhibited no behaviors, was least preferred and also had a significant effect on time to decide. This work contributes the first empirical study of human-like visual embodiments for AV HMIs.
While machine learning algorithms continue to improve, their success often relies upon data scientists' ability to detect patterns, determine useful features and visualizations, select good models, and evaluate and iterate upon results. Data scientists often spend a long time making very little progress as they struggle to determine how to proceed. Understanding data scientists' workflows and challenges has therefore attracted a great deal of recent scholarly interest, but the literature is mostly based on interviews and qualitative research methodologies. With this in mind, we developed DSWorkFlow, a data collection framework that lets researchers observe and analyze data scientists' cognitive workflows as they develop predictive models. Using DSWorkFlow, researchers can collect data from a Jupyter Notebook, reconstruct the code execution order, and extract relevant information about data scientists' workflows, alongside the concomitant collection of qualitative data. We tested the framework experimentally with seven data scientists, each of whom created three machine learning models, to inform our extraction algorithms.
Quickly and accurately measuring psychological well-being is a challenging task. Traditionally, this is done with self-report surveys, which can be time-consuming and burdensome. In this work, we demonstrate the use of short voice recordings on smartphones to automatically predict well-being. In a 5-day study, 35 participants used their smartphones to make short voice recordings of what they were doing throughout the day. Using these recordings, our model can predict the participants’ well-being scores with a mean absolute error of 14%, relative to the self-reported well-being (“ground truth”). Both audio and text features from the recordings, especially MFCC and semantic features, are important for prediction accuracy. Based on this work, we provide suggestions for future research to further improve prediction performance.
The proliferation of the Internet of Things (IoT) has started transforming our lifestyle through automation of home appliances. However, there are users who are hesitant to adopt IoT devices due to various privacy and security concerns. In this paper, we elicit people’s attitudes and concerns towards adopting IoT devices. We conduct an online survey and collect responses from 232 participants from three different geographic regions (United States, Europe, and India); the participants consist of both adopters and non-adopters of IoT devices. Through data analysis, we determine that there are both similarities and differences in perceptions and concerns between adopters and non-adopters. For example, even though IoT and non-IoT users share similar security and privacy concerns, IoT users are more comfortable using IoT devices in private settings compared to non-IoT users. Furthermore, when comparing users’ attitudes and concerns across different geographic regions, we found similarities between participants from the US and Europe, yet participants from India showcased contrasting behavior. For instance, we found that participants from India were more trusting of their government to properly protect consumer data and were more comfortable using IoT devices in a variety of public settings, compared to participants from the US and Europe. Based on our findings, we provide recommendations to reduce users’ concerns about adopting IoT devices and thereby enhance their trust in these devices.
UX practitioners increasingly rely on online communities to collaborate on and discuss complex design problems. Understanding how these platforms flourish is thus of interest to both HCI academia and the broader UX discipline. In this study, we comparatively investigate the longevity of two such groups: the r/userexperience community on Reddit and the UX subforum on Stack Exchange. By quantifying how users post online in aggregate and what users discuss in their individual posts, we find that Reddit has grown consistently as a digital forum for UX practice. In contrast, Stack Exchange has contracted despite being more responsive and being as capable of addressing mainstream UX concepts as Reddit. Discussions of niche, higher-level UX concepts on Stack Exchange also declined disproportionately, leading to less conceptual diversity. Our results therefore contribute an initial comparative understanding of community longevity between online UX platforms.
We have developed a conversational recommendation system designed to help users navigate through a set of limited options to find the best choice. Unlike many internet-scale systems that use a single set of search terms and return a ranked list of options from among thousands, our system uses multi-turn user dialog to deeply understand the user’s preferences. The system responds in context to the user’s specific and immediate feedback to make sequential recommendations. We envision our system would be highly useful in situations with intrinsic constraints, such as finding the right restaurant within walking distance or the right retail item within a limited inventory. Our research prototype instantiates the former use case, leveraging real data from Google Places, Yelp, and Zomato. We evaluated our system against a similar system that did not incorporate user feedback in a 16-person remote study, generating 64 scenario-based search journeys. When our recommendation system was successfully triggered, we saw both an increase in efficiency and a higher confidence rating with respect to final user choice. We also found that users preferred our system (75%) compared with the baseline.
In this paper, we compared audio and text note-taking behaviors for video-based learning in various mobile settings. We designed and implemented a note-taking tool and conducted a task-based study to examine how users took audio and text notes differently. Our results show that participants’ audio notes were significantly longer than text notes; longer audio notes were taken to capture unfamiliar video content and participants’ emotions. However, audio notes also raised several privacy concerns. Text notes allowed participants to revise for better accuracy and deeper reflection. Our findings of the complementary features of audio and text notes for video-based learning shed light on designing future note-taking tools that can be used to facilitate learning in varied mobile settings.
New developments in automation have led to discussions about the impact that autonomous trucks will have on the trucking industry. However, there is a lack of literature on truck drivers’ perceptions of automation. To gain an understanding of the trucking community’s sentiments towards automation, we analyzed member discussions related to automation in the r/Truckers subreddit. Among the comments we analyzed, concerns about the feasibility of automation were popular and, in general, community members expressed negative perspectives on automation in trucking. This was corroborated by our finding that only 0.98% (9/915) of comments had positive views on automation. Speculations on when automation of any degree will take place in the trucking industry varied, but the view that automation would eventually happen, though not for a long time, was the most common. To conclude, we highlight a need to support and empower truck drivers through the significant changes facing this industry.
TEACHActive is an automated feedback dashboard that provides instructors with visual classroom analytics about the active learning facilitation strategies they use in their classrooms. We describe the TEACHActive system's root requirement of improving pedagogical practices through reflection, the flow of data from an automated observation system, EduSense, to the feedback dashboard, and the technical design of the infrastructure. The dashboard visualizes EduSense's automated observation output to give instructors feedback about their active learning facilitation strategies, with the goal of improving their pedagogical practices. We present the TEACHActive prototype development process through three illustrative prototypes.
Although we extensively use routing services in our daily commutes, such systems are yet to be personalized around the user. Different routes are often close in their estimated time of arrival (ETA) while being very different in how they affect the driver’s state. Using traces of a user’s physiological measures, candidate routes can be ranked based on how they affect the user’s well-being. In this research, we introduce the “empathetic routing” framework for providing human-centered routing based on historical biomarkers of drivers, collected in naturalistic settings using smart wearable devices. Through this framework, we rank three specific routes between two points in the city of Charlottesville based on historical driver heart rate data collected through a three-month naturalistic driving study. Additionally, we demonstrate that the proposed framework is capable of finding infrastructural elements in a route that can potentially affect a driver’s well-being.
Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models’ novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning. This analysis motivates rethinking the role of prompts in controlling and evaluating powerful language models. We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language. We explore techniques for exploiting the capacity of narratives and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a problem into components before producing a verdict. Informed by this more encompassing theory of prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks. Finally, we discuss how these more general methods of interacting with language models can be incorporated into existing and future benchmarks and practical applications.
As machine learning is applied to an increasingly large number of domains, the need for an effective way to explain its predictions grows apace. In the domain of child welfare screening, machine learning offers a promising method of consolidating the large amount of data that screeners must look at, potentially improving the outcomes for children reported to child welfare departments. Interviews and case studies suggest that adding an explanation alongside the model prediction may result in better outcomes, but it is not obvious what kind of explanation would be most useful in this context. Through a series of interviews and user studies, we developed Sibyl, a machine learning explanation dashboard specifically designed to aid child welfare screeners’ decision making. When testing Sibyl, we evaluated four different explanation types and, based on this evaluation, determined that a local feature contribution approach was most useful to screeners.
Virtual representations of ourselves can influence the way we feel and behave. While this phenomenon has been explored heavily in the realms of virtual reality and gaming, little is known about the level of impact increasingly pervasive real-time camera filters can have on how people feel, think, and behave. The prevalence and popularity of these technologies have surged, coupled with greater usage of online communication tools. Motivated by a desire for self-improvement in an age of regular video-based online communication, we conducted a user study to investigate the potential for real-time camera filters to influence emotions, support embodiment illusions, and consequently impact cognitive performance, in the domain of creative thinking.
Academic work dealing with queerness in HCI is predominantly based in the Global North and has often dealt with one identity dimension at a time. This work-in-progress study attempts to complicate the notion of queerness in HCI by highlighting how, in the multi-religious, multi-ethnic, and multi-cultural context of India, LGBTQ+ movements and spaces are deeply fractured on the basis of various identity intersections. We interviewed 18 LGBTQ+ activists, lawyers, and allied activists in Delhi, India to understand the issues faced by queer Indians from minority groups and their use of social media, and discuss how they negotiate their non-normative identities to create safe spaces, gain access to resources, and engage in care work. The argument that we are bringing into HCI scholarship through this paper is geared toward a future endeavor of designing safe spaces for marginalized groups in the Global South, keeping in mind negotiations of power, legitimacy, and resources.
Eye tracking can be used to infer what is relevant to a user, and adapt the content and appearance of an application to support the user in their current task. A prerequisite for integrating such adaptive user interfaces into public terminals is robust gaze estimation. Commercial eye trackers are highly accurate, but require prior person-specific calibration and a relatively stable head position. In this paper, we collect data from 26 authentic customers of a fast food restaurant while interacting with a total of 120 products on a self-order terminal. From our observations during the experiment and a qualitative analysis of the collected gaze data, we derive best practice approaches regarding the integration of eye tracking software into self-service systems. We evaluate several implicit calibration strategies that derive the user’s true focus of attention either from the context of the user interface, or from their interaction with the system. Our results show that the original gaze estimates can be visibly improved by taking into account both contextual and interaction-based information.
Vehicle automation is one of the major trends in the automotive industry and beyond. In our study, we investigate how future users with different levels of initial trust evaluate design features of level 4 automated vehicles with regard to the features’ ability to create passenger well-being. For this purpose, we identified potential design features from existing automated vehicle concepts and asked experts (n = 15) to rate them regarding their relevance to passenger well-being. In a second step, we conducted a user study (n = 69) to investigate how future users classify those features deemed relevant by the experts. Using the Kano method, the subsample with low initial trust rated 14 of 28 features as relevant, while the subsample with high initial trust rated 20 of 28 features as relevant. Further, the results indicate that the features deemed important for passenger well-being differ depending on the level of initial trust.
Continuous location sharing (CLS) can foster intimacy, for example, for couples in long-distance relationships. However, turning off CLS can then raise suspicions. To address this, we developed nuanced sharing settings in a focus group (N = 6) and implemented them to moderate CLS in an Android app. Crucially, the app also discloses each person’s current sharing settings to the partner. In a 16-day field study, four couples interacted with our app and the disclosed configurations, confirming the disclosure’s positive effect on transparency. However, features obfuscating the location were considered superfluous, as participants preferred sharing their location exactly or not at all. While participants overall appreciated having the configuration options, changes in their partners’ configurations raised questions about their motivations. Instead, participants would adjust the configuration for different intimacy levels (colleague vs. partner) rather than different activities when using CLS with the same person.
The interaction design research community continues to benefit from material-focused approaches, and from the diversity of materials under investigation. One category of such material is bio-materials of microbial origin, such as bacteria, mycelium, moulds, and Euglena. However, despite the increasing momentum towards bio-material based research, one type is yet to be investigated in HCI: viruses, infectious, sub-microscopic, quasi-living, computational bio-agents. This paper initiates the exploration of Human-Virus Interaction (HVI) through a material lens. We achieved this first by generating a literature-based material profile sketch of viruses, highlighting some of their distinct and/or unique material properties, characteristics, composition, and meaning. The components of the profile were then used as anchor points to unpack the practical, ethical, and philosophical implications associated with viruses, and those that researchers could consider when preparing to work with viruses in interaction design.
The Covid-19 pandemic has led to a health crisis, with 90 million infections and two million deaths by the end of January 2021. To prevent an overload of medical capacities, quickly identifying potentially infected persons is vital to stop the spread of the virus. Mobile apps for tracing people’s contacts seem effective, but raise public concerns, e.g., about privacy. Hence, they are contested in public discourse. We report a large-scale NLP-supported analysis of people’s comments about the German contact-tracing app on news websites, social media, and app stores. We identified prevalent topics, stances, and how commenting developed over time. We found privacy to be among the most debated topics, discussed from various perspectives. Commenting peaked at one point in time, when public discourse centered on the potential tracing protocols and their privacy protection. We encourage further research on the link between the public discussions and actual adoption rates of the app.
Social contexts play an important role in understanding acceptance and use of technology. However, current approaches used in HCI to describe contextual influence do not capture it appropriately. On the one hand, the often used Technology Acceptance Model and related frameworks are too rigid to account for the nuanced variations of social situations. On the other hand, Goffman’s dramaturgical model of social interactions emphasizes interpersonal relations but mostly overlooks the material (e.g., technology) that is central to HCI. As an alternative, we suggest an approach based on Social Practice Theory. We conceptualize social context as interactions between co-located social practices and acceptability as a matter of their (in)compatibilities. Finally, we outline how this approach provides designers with a better understanding of different types of social acceptability problems and helps find appropriate solutions.
Family technology use can create or amplify conflict in parents’ relationships; we found four key factors that contribute to this issue. We conducted a probe and interview study with eight sets of parents to explore how and why technology use might cause conflict in their relationships. This paper presents data from two particular sets of parents to illustrate our findings. In doing so, it complements existing work that primarily focuses on parent-child relationships, and contributes to a more complete understanding of how family technology use can affect family dynamics. We also suggest directions for further work to address this issue of conflict between parents, associated with their family's use of technology.
ActPad is a desk pad capable of sensing capacitive touch input in desk setups. Our prototype can sense touches both on its electrodes and on connected objects. ActPad’s interaction space is customizable, allowing easy integration into and extension of existing desk environments. In smart environments, users may interact with more than one device at the same time. This generates the need for new interaction mechanisms that bundle the control of multiple ubiquitous devices. We support this need through a platform that extends interaction with IoT devices. ActPad accounts for different ways of controlling IoT devices by enabling various modes of interaction (in particular simultaneous, sequential, implicit, and explicit) and, hence, a rich input space. As a proof of concept, we illustrate several use cases, including, but not limited to, controlling the browser on a PC, turning lights on/off, switching songs, and preparing coffee.
Record linking is needed to analyze observations across multiple sessions. However, recent privacy legislation such as the General Data Protection Regulation (GDPR) restricts the storage of information that identifies individuals. Obtaining permission to store information about individuals can be bureaucratic and time-consuming. Anonymous schemes such as self-generated ids and machine-generated ids have been proposed. However, self-generated linking ids demand effort from the participants, while machine-assisted schemes typically generate long and incomprehensible ids. Consequently, there is a risk that students and researchers will limit their research to single-session experiments to avoid privacy issues. To simplify the administration of small multi-session experiments, the HIDE procedure is proposed for generating short, human-readable ids that link participants across multiple sessions while maintaining anonymity and being robust to input errors. The approach differs from previous approaches in that the goal is to minimize the length of the linking ids. First, the procedure converts the participant's name into a phonetic representation. Next, this phonetic representation is hashed, and a truncated snippet of the hash is used as the linking id. HIDE is initialized by searching for a salt (random data added to the hash input) that minimizes the id lengths. Experiments show that the procedure is capable of coding small experiments with 20 participants using two digits, and experiments of around 200 participants with four digits. An implementation of the procedure has been made available through a simple web interface. It is hoped that the procedure can help students and HCI researchers collect more comprehensive data by following participants over time, while protecting their privacy.
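The pipeline the abstract describes (phonetic encoding, salted hash, truncation, salt search) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the specific phonetic encoding (a crude Soundex-like mapping), the hash function (SHA-256), and the brute-force salt search are all assumptions.

```python
import hashlib

def phonetic(name: str) -> str:
    # Crude Soundex-like encoding (illustrative stand-in for the paper's
    # phonetic step): keep the first letter, map consonant groups to digits,
    # drop vowels and collapse repeats, so small spelling errors merge.
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    out = name[:1]
    for ch in name[1:]:
        d = codes.get(ch, "")
        if d and out[-1] != d:
            out += d
    return out

def short_id(name: str, salt: str, length: int) -> str:
    # Hash the salted phonetic form and keep a short decimal snippet.
    h = hashlib.sha256((salt + phonetic(name)).encode()).hexdigest()
    return f"{int(h, 16) % 10 ** length:0{length}d}"

def find_salt(names, max_len=4, tries=10000):
    # Initialization: search for a salt giving collision-free ids at the
    # smallest possible length for this participant pool.
    for length in range(2, max_len + 1):
        for t in range(tries):
            salt = str(t)
            if len({short_id(n, salt, length) for n in names}) == len(names):
                return salt, length
    raise ValueError("no collision-free salt found")
```

With roughly 20 participants, a two-digit id space (100 values) usually admits a collision-free salt after a modest search, matching the id lengths reported in the abstract.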
Explanations in Human-AI Interaction are communicated to human decision makers through interfaces. Yet, it is not clear what consequences the exact representation of such explanations, as part of decision support systems (DSS) operating on machine learning (ML) models, has on human decision making. We observe a need for research methods that allow for measuring the effect different eXplainable AI (XAI) interface designs have on people’s decision making. In this paper, we argue for adopting research approaches from decision theory for HCI research on XAI interface design. We outline how we used estimation tasks in human-grounded design research in order to introduce a method and measurement for collecting evidence on XAI interface effects. To this end, we investigated representations of LIME explanations in an online estimation-task study as a proof of concept for our proposal.
Recommender systems for runners primarily rely on existing running traces in an area. In the absence of running traces, recommending running routes is challenging. This paper describes our approach to generating and proposing “pleasant” running tours that consider the runner’s standard preferences and their distance and elevation constraints. Our algorithm is an approach to solving the cold-start recommendation problem in unknown places by mining available map data. We implemented a prototype smartphone app that generates and recommends pleasant running routes to evaluate our algorithm’s effectiveness. An in-the-wild user study was conducted with 11 participants across three cities. We tested the correlation between what our algorithm defines as a “pleasant path” and the users’ perception. The results of the user study show a positive correlation and support our algorithm. We also outline implications for the design of successful recommendation algorithms for runners.
We present “participatory threat modelling” as a feminist cybersecurity practice which allows technology research to centre traditionally marginalized and excluded experiences. We facilitated a series of community workshops in which we invited participants to define their own cybersecurity threats, implement changes to defend themselves, and reflect on the role cybersecurity plays in their lives. In doing so, we contest both hierarchical approaches to users in cybersecurity, which seek to ‘solve’ the problems of human behavior, and a tendency in HCI to equate action research with the development of novel technology solutions. Our findings highlight barriers to engaging with cybersecurity, the role of personal experiences (for instance of gender, race, or sexuality) in shaping this engagement, and the benefits of communal approaches to cybersecurity.
Open card sorting is a popular method for designing information architectures. However, there is a lack of empirical evidence on the validity and reliability of the method. This paper explores the test-retest reliability of open card sorting. To this end, a within-subjects study was conducted. The same participants performed open card sorts twice with a time interval of 15-20 days. Content from three website domains was used: an e-shop, a travel and tourism website, and a university website. Results showed that participants provided highly similar groupings and labels between the two card sorts per domain. A high agreement of the produced navigation structures was also found. These findings provide support for the test-retest reliability of open card sorting.
Fast-paced fitness training (e.g. aerobics, Bodycombat, step workouts) is one of the most popular activities at fitness clubs worldwide because it is beneficial for physical health and can enhance participants’ motivation and engagement in fitness training. Yet, its fast pace and required coordination make it difficult for some participants, especially beginners and those with coordination problems, to follow the class. Here we present the design, implementation, and qualitative user evaluation of TwinkleBands, a piece of wearable technology that provides real-time support for trainees’ movement and coordination learning by providing discriminative visual cues on the extremities. We show that TwinkleBands helps movement imitation and coordination in several ways. Based on this, we discuss key design takeaways for future technology design to support movement teaching and learning in fast-paced activities.
In this paper, we propose a method we call MEMEography for HCI research to understand people and their interactional contexts from the remixed internet memes they post in internet communities. While memes might not be the most obvious choice of a research subject, they allow us to investigate unfamiliar domains even when access to the field is beyond reach. We describe an initial approach of data selection, collection, prioritization, and analysis. In addition, we demonstrate the kinds of insights we can gain through MEMEographies by analyzing a corpus of memes in the intensive care unit (ICU) context posted on Instagram in 2020. ICU memes opened up insights into the environment, work practices, challenges, and emotions, and familiarized us with ICU practitioners’ language, even though access to an actual ICU was completely impossible during 2020.
Extended reality (XR) systems are among the most prominent interactive environments of today’s entertainment. These systems are often complemented by supportive wearables such as haptic gloves or full-body suits. However, applications are usually limited to tactile feedback and gestural controls, while other strengths of wearables, such as their performative, social, and interactive features, are neglected. To investigate ways of designing wearables for playful XR environments that draw upon these strengths, we conducted five participatory design workshops with 25 participants. Our study resulted in 14 design concepts that were synthesized into three design themes, comprising 9 sub-themes, namely Virtual Costumes, Modification of Bodily Perception, and Social Bioadaptivity. The knowledge created extends the design space of XR wearables and opens new paths for designers and researchers to explore.
Many users take advantage of digital self-control tools to self-regulate their device usage through interventions such as timers and lockout mechanisms. One of the major challenges faced by these tools is the user reacting against their self-imposed constraints and abandoning the tool. Although lower-risk interventions would reduce the likelihood of abandonment, previous research on digital self-control tools has left this area of study relatively unexplored. In response, this paper contributes two foundational principles relating risk and effectiveness; four widely applicable novel design patterns for reducing risk of abandonment of digital self-control tools (continuously variable interventions, anti-aging design, obligatory bundling of interventions, and intermediary control systems); and a prototype digital self-control tool that implements these four low-risk design patterns.
Acoustic levitation displays are a novel technology that uses ultrasound to levitate small ‘particles’ in mid-air to create physical 3D content. Interaction with levitated content is currently limited to simple operations, e.g., moving a particle. In this paper, we investigate the addition of pseudo-haptic effects to levitation displays, to enrich the interaction with new sensory experiences and provide simulated touch feedback. We describe a range of pseudo-haptic effects and possible implementations, based on the relationship between user control and particle response. A user study is proposed to evaluate the potential contribution of pseudo-haptic effects when used with acoustic levitation displays.
We present SpectroPhone, a surface-material sensing approach based on the rear camera of a smartphone and external white LED light sources. Warm and cool white LEDs, as used in dual or quad flashlights in smartphones, differ in their spectral distribution in the red and blue range. In combination, warm and cool white LEDs can produce a characteristic spectral response curve when their light is reflected from a surface. We show that with warm and cool white LEDs and the rear camera of a smartphone, 30 different materials can be distinguished with an accuracy of 99%. Based on a dataset consisting of 13,500 images of material surfaces taken at different LED light intensities, we report recognition rates of support vector machines with different parameters.
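The underlying sensing principle, telling materials apart by how their surfaces reflect warm versus cool white LED light, can be illustrated with a toy classifier. Everything below is invented for illustration: the feature vectors (mean red and blue camera response under each LED type), the material values, and the nearest-centroid rule, which stands in for the support vector machines the paper actually uses.

```python
import math

# Hypothetical 4-d spectral signature per sample: mean (red, blue) camera
# response under warm-white light, then under cool-white light.
TRAIN = {
    "wood":  [(0.71, 0.32, 0.55, 0.41), (0.69, 0.30, 0.57, 0.40)],
    "metal": [(0.52, 0.50, 0.51, 0.52), (0.54, 0.49, 0.50, 0.53)],
    "cloth": [(0.40, 0.22, 0.30, 0.25), (0.42, 0.21, 0.31, 0.26)],
}

def centroid(samples):
    # Per-dimension mean of the training signatures for one material.
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(4))

CENTROIDS = {material: centroid(s) for material, s in TRAIN.items()}

def classify(signature):
    # Assign the material whose centroid is nearest in Euclidean distance.
    return min(CENTROIDS, key=lambda m: math.dist(signature, CENTROIDS[m]))
```

A real pipeline would extract such signatures from the 13,500-image dataset and feed them to an SVM; the toy version only conveys why the warm/cool spectral difference makes the signatures separable.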
This paper describes the design of a mobile-based gaming application, Meri Kahani, created to teach computational thinking skills to school-going teenagers in underdeveloped areas of Pakistan. We explore the use of gamification to teach computational thinking through level-based learning in a Pakistani context. This paper's final design demonstrates how gamified learning, rewarding techniques, and feminine themes can be used to attract female teenagers towards computational thinking. This paper also discusses the results of the evaluation and usability testing conducted with 16 school-going female teenagers. We hope that through this study, we have taken the first step towards nurturing an interest in young females for computational thinking and overcoming the gender gap that adversely affects female involvement in Computer Science in Pakistan.
Software using Machine Learning algorithms is becoming ever more ubiquitous, making it equally important to have good development processes and practices. Whether we can apply insights from software development research remains open, though, since it is not yet clear whether data-driven development has the same requirements as its traditional counterpart. We used eye tracking to investigate whether the code reading behaviour of developers differs between code that uses Machine Learning and code that does not. Our data shows that there are differences in what parts of the code people consider of interest and how they read it. This is a consequence of differences in both the syntax and the semantics of the code. This reading behaviour already shows that we cannot take existing solutions as universally applicable. In the future, methods that support Machine Learning must iterate on existing knowledge to meet the challenges of data-driven development.
In 2020, many among us have spent more time than ever before with our mobile devices. For many years, technology developers, designers, researchers and ethicists have each debated the impact of technology on how we spend our time and relate to others. Our relationships to and through our devices are increasingly complex; and perhaps for none more so than university students. And yet, there remain many gaps in our knowledge of just how students perceive and desire their engagement with these devices – leading to a lack of real-world design solutions to enable configuration of these vital human-device relationships. This paper presents a design-led inquiry into the role smartphones play in students’ lives; contributing findings from five phases of mixed-methods research conducted as part of a user-centred, iterative design process (n=157), and resulting in a novel scaffolding of the mobile app ecosystem in support of the modes of engagement students desire.
Social media platforms face rampant misinformation spread through multimedia posts shared in highly-personalized contexts [10, 11]. Foundational qualitative research is necessary to ensure platforms’ misinformation interventions are aligned with users’ needs and understanding of information in their own contexts, across platforms. In two studies, we combined in-depth interviews (n=15) with diary and co-design methods (n=23) to investigate how a mix of Americans exposed to misinformation during COVID-19 understand their information environments, including encounters with interventions such as Facebook fact-checking labels. Analysis reveals a deep division in user attitudes about platform labeling interventions, perceived by 7/15 interview participants as biased and punitive. As a result, we argue for the need to better research the unintended consequences of labeling interventions on factual beliefs and attitudes. Alongside these findings, we discuss our methods as a model for continued independent qualitative research on cross-platform user experiences of misinformation in order to inform interventions.
Many active people experience physical activity (PA) frequency fluctuations over the year and need to overcome several perceived barriers to PA to remain active. This paper addresses one of these barriers, Seasonal Variability, and its associated influencing factors on PA maintenance, which have received limited attention in previous self-tracking studies. We followed a Research through Design approach to explore the influence of the seasons on people's PA fluctuations. Inspired by professional athletes’ seasonal training-plan approach, we conceptualized Seasons, a PA self-management tool. We used the tool to gather PA maintenance and fluctuation coping strategies from 10 participants. Our results show that experiencing PA fluctuations is commonplace, especially due to weather conditions and unexpected circumstances, and that individuals deploy different strategies to overcome these fluctuations. Our findings lead to recommendations for HCI researchers to consider when designing future PA maintenance tools.
Observing user interactions with interactive persona systems offers important insights for the design and application of such systems. Using an interactive persona system, user behavior and interaction with personas can be tracked with high precision, addressing the scarcity of behavioral persona user studies. In this research, we introduce and evaluate an implementation of persona analytics based on mouse tracking, which offers researchers new possibilities for conducting persona user studies, especially during times when in-person user studies are challenging to carry out.
The effect of removing gamification elements from interactive systems has been a long-standing question in gamification research. Early work and foundational theories raised concerns about the endurance of positive effects and the emergence of negative ones. Yet, nearly a decade later, no work to date has sought consensus on these matters. Here, I offer a rapid review of the state of the art and what is known about the impact of removing gamification. A small corpus of 8 papers published between 2012 and 2020 was found. Findings suggest a mix of positive and negative effects related to removing gamification. Significantly, insufficient reporting, methodological weaknesses, limited measures, and superficial interpretations of “negative” results prevent firm conclusions. I offer a research agenda towards better understanding the nature of gamification removal. I end with a call for empirical and theoretical work on illuminating the effects that may linger after systems are un-gamified.
This study investigated whether weight illusions in virtual reality (VR) without haptic feedback occur as in the real world. In the experiment, we set up three scenarios to cause three different weight illusions in VR: the size–weight illusion (smaller objects look lighter but feel heavier than larger ones), brightness–weight illusion (brighter objects look lighter but feel heavier than darker ones), and material–weight illusion (lighter-looking materials, such as wood, look lighter but feel heavier than heavier-looking materials, such as metal). The experimental results indicated that the weight perceptions of the brightness–weight and material–weight illusions in VR were opposite to those in the real world. However, the weight perception of the size–weight illusion in VR was the same as in the real world. This study demonstrates how weight illusions occur in VR without haptic feedback, and classifies weight perceptions and the robustness of the illusions.
Prior game research has mostly focused on designing “macro” contexts that require a substantial amount of time to engage with and play, from which players may benefit. However, the demanded time and effort restrict playability during working hours. To our knowledge, there is little research exploring game design that enables momentary detachment from primary work for mental refreshment and lets users effortlessly resume the suspended work after playing. In this paper, we propose a novel micro-game concept that enables the gamification of micro-breaks within working hours. To examine our concept, we adapted a conventional water-ring game into an interactive prototype, InterRings, which empowers users to play with handy objects. Our mixed observational studies and interviews revealed high acceptability and feasibility of the micro-game concept. Finally, three design guidelines are summarised for the future development of micro-games.
With the increasing prevalence of Artificial Intelligence (AI) agents, the transparency of agents becomes vital in addressing interaction issues (e.g., explainability and trust). The existing body of research provides valuable theoretical and practical studies in this field. However, determining the transparency of AI agents requires systematic consideration of application categories and automation levels, which has rarely been considered in the prior literature. We thus apply bibliometric analysis to gain insights from the published literature. Our work outlines how the number of studies on AI agent transparency has increased over the years. We also identify the major application topics and issues in designing transparent AI agents. Furthermore, we categorize the identified applications according to specific dimensions (risk and timeliness) and put forward potential strategies for designing different agents. Finally, we suggest possible transparency degrees corresponding to automation levels.
Animated illustrations are a genre of graphic design that communicates a specific contextualized message using dynamic visuals. While animated illustrations have been gaining popularity across different applications, exploring them through the storytelling lens has received limited attention. In this work, we introduce a design space for animated narratives applied to illustrations. The design space combines a dimension for the object types of animation techniques with one for the narrative intents served by such techniques. We derived our design space from the analysis of 121 high-quality animated illustrations collected from online sources. To evaluate the effectiveness of our design space, we ran a workshop with 18 participants. The results of our workshop indicated that the design space can be used as a tool that supports ideation and increases creativity for designing expressive animated illustrations.
The COVID-19 pandemic has led to the proliferation of non-face-to-face video-mediated communication through platforms such as Zoom or Google Meet. However, video-mediated communication has several limitations related to exchanging vocal reactions and non-verbal expressions. Consequently, although current video conferencing platforms provide visual support through icons, it remains challenging for users to express various intentions because of the limited number of icons, their uniform size, and their fixed location. In particular, these limitations challenge designers who require collaborative design processes such as brainstorming. To investigate user needs related to icons that better support video-mediated communication, we employed a participatory design methodology. Based on analyses of participants’ brainstorming experiences with various icons through participatory paper prototyping, we found that icons that accurately reflect diverse user needs facilitated turn-taking during the design process and allowed participants to exchange more opinions and emotions, thereby creating a positive atmosphere in the online meeting environment.
Music is a universal medium that can elicit strong emotion and can significantly help us gain focus while doing specific tasks. However, it is unclear what types of music can help to improve focus while doing other activities. In this paper, we investigate the effects of six different music stimuli on participants’ verbal and physiological responses while identifying genuine and acted emotions from video clips. Initial analysis was conducted on the comments participants made on the different stimuli in order to identify emerging patterns. Then, participants’ verbal and EEG responses were collected, processed, and analyzed to classify two types of emotion. Empirical analysis of the results shows that binaural beats, which are believed to increase focus on tasks, can often cause discomfort and therefore hinder focus. On the other hand, music containing a sombre tone, or familiar popular music with high valence, can help improve focus. Identifying which music stimuli can improve focus can be highly beneficial in managing day-to-day tasks and activities. This study will also be useful in broadening the range of music stimuli used in affective computing studies.
Hyperscanning is an emerging method that allows researchers to simultaneously record neural activity from two or more people. While this method has been extensively implemented over the last five years in the real world to study inter-brain synchrony, little work has explored the use of hyperscanning in virtual environments. Preliminary research in the area demonstrates that inter-brain synchrony in virtual environments can be achieved in a manner similar to that seen in the real world. The study described in this paper proposes to further research in this area by studying how non-verbal communication cues in social interactions in virtual environments can affect inter-brain synchrony. In particular, we concentrate on the role eye gaze plays in inter-brain synchrony. The aim of this research is to explore how eye gaze affects inter-brain synchrony between users in a collaborative virtual environment.
We propose a novel fingering estimation method that allows a player to use any wind instrument as a music controller by attaching a single microphone and a loudspeaker-embedded mouthpiece. The loudspeaker plays white noise continuously while the fingerings are estimated in real time based on the sound pressure recorded at the end of the instrument. Our method addresses a major problem of conventional music controllers: differences in the tactile feel of the keys compared to the player’s own instrument. We demonstrated that the proposed method accurately estimated fingerings on a saxophone with promising performance (a 1.05% misclassification rate), satisfying the low-latency feedback required in the context of musical performance.
Immersive theater in virtual reality (VR) has become increasingly popular in recent years. However, it is difficult for the audience to immerse themselves in the play while interacting with performers, who are usually presented as virtual avatars. In this paper, we present a novel approach for enhancing audience-performer interaction based on the theory of immersive theater. We integrated three types of interaction into a virtual play: individual-based interaction, scenario-based interaction, and narrative-based interaction. We report the findings of a pilot qualitative study and identify three design recommendations: (1) letting the audience decide how the scenario goes, (2) making the identity of the audience clear, and (3) increasing the audience's freedom to explore the VR space. These recommendations could be adapted for future VR theater experiences.
Inferring emotions from Head Movement (HM) and Eye Movement (EM) data in 360° Virtual Reality (VR) can enable a low-cost means of improving users’ Quality of Experience. Correlations have been shown between retrospective emotions and HM, as well as EM, when tested with static 360° images. In this early work, we investigate the relationship between momentary emotion self-reports and HM/EM in HMD-based 360° VR video watching. We draw on HM/EM data from a controlled study (N=32) in which participants watched eight 1-minute 360° emotion-inducing video clips and annotated their valence and arousal levels continuously in real time. We analyzed HM/EM features across fine-grained emotion labels from video segments of varying lengths (5-60s), and found significant correlations of HM rotation data, as well as some EM features, with valence and arousal ratings. We show that fine-grained emotion labels provide greater insight into how HM/EM relate to emotions during HMD-based 360° VR video watching.
Storytelling is a common creative activity for children. During storytelling, children need creative support and face cognitive challenges. This paper explores a co-creative agent, StoryDrawer, which supports children in creating oral stories through collaborative drawing. StoryDrawer works with children via two strategies: “child says, AI draws” and “child scribbles, AI completes.” These two collaborative strategies allow children to draw their stories as an externalization and provoke unexpected ideas. This paper presents the interaction design, collaborative strategies, and implementation of StoryDrawer, and reports a user study on the enjoyment and task effectiveness of StoryDrawer. The results reveal that our system can encourage children's active participation in storytelling and help them create novel stories through human-computer collaboration.
Tracking drinking behaviors and digitally presenting the information can motivate people's daily water intake. In this paper, we present MossWater, a living interface designed to visualize daily water intake and motivate people to drink water more frequently and proactively. The system uses the living interface as an unobtrusive ambient display to evoke empathy. Caring for and watering the moss is designed as a metaphor for caring for one's health through regular drinking, invoking feelings of empathy that motivate healthier behaviors. This paper also presents the results of a user study exploring the usability of MossWater and users' feedback on the living interface. Our system is expected to promote the welfare of both humans and nature in the long term.
All the activities that we do online, by either preference or obligation, deprive us of social interactions, especially impromptu ones. We present UnlockMe, a concept that aims at preserving the social link, the disposition to come across other people serendipitously when engaged in online activities such as purchasing goods, working from home, or viewing media and entertainment. Our concept relies on virtual co-location detection (both synchronous and asynchronous) to allow users to engage with other people with whom they would have been likely to interact when doing the same activities offline in the physical world. We developed and illustrated UnlockMe with six scenarios and low-fidelity prototyping to test it with 10 participants who were isolated due to the coronavirus disease (COVID-19) pandemic. Our findings reveal multimedia recommendations from close social connections to be the best scenario for UnlockMe, followed by online shopping and connecting with the local community.
Smart Helmet is a new wearable device that monitors wildland firefighters’ real-time bio-signal data and alerts them to potential health issues, e.g., dehydration. In this paper, we applied a human-centered design method to develop Smart Helmet for firefighters. We initially conducted multiple rounds of primary research to collect user needs and deployment constraints by interviewing 80 firefighters. Targeting dehydration caused by heat exhaustion and overexertion, we developed a smart helmet prototype, named FireWorks, with an array of sensors collecting the firefighter’s bio-signals, including body temperature, heart rate, and motion. When abnormal bio-signal levels are detected, the alert system notifies the firefighter and their supervisor; the notification is driven by an on-device algorithm that predicts imminent health risks. Further, we designed a mobile application to display real-time and historical bio-signal data as well as alert users about potential dehydration issues. Finally, we ran user evaluation studies, iterated on the prototype based on user feedback, and performed a functional evaluation to ensure all implemented functions work properly.
Existing artificial skin interfaces suffer from the lack of on-skin compute that can provide fast neural network inference for time-critical application scenarios. In this paper, we propose AI-on-skin, a wearable artificial skin interface integrated with a neural network hardware accelerator that can be reconfigured across diverse neural network models and applications. AI-on-skin is designed to scale to the entire body, comprising tiny, low-power accelerators distributed across the body. We built a prototype of AI-on-skin that covers the entire forearm (17 by 10 cm) based on off-the-shelf FPGAs. Our electronic-skin-based prototype can perform (a) handwriting recognition with 96% accuracy, (b) gesture recognition with 95% accuracy, and (c) handwritten word recognition with 93.5% accuracy. AI-on-skin achieves 20X and 35X speedups over off-body inference via Bluetooth and an on-body microcontroller-based inference approach, respectively. To the best of our knowledge, AI-on-skin is the first wearable prototype to demonstrate skin interfaces with on-body neural network inference.
Learners consume video-based learning content on various mobile devices due to their mobility and accessibility. However, most video-based learning content is originally designed for desktops without consideration of the constraints of mobile learning environments. We focus on readability and visibility problems caused by visual design elements such as text and images on varying screen sizes. To reveal design issues in current content, we examined the mobile-learning adequacy of 681 video frames from 108 video lectures. The content analysis revealed the distribution of visual design elements and their guideline-compliance rates. We also conducted semi-structured interviews with six video production engineers to investigate current practices and challenges in content design for mobile devices. Based on the interview results, we present a prototype that supports guideline-based design of video learning content. Our findings can inform engineers and design-tool makers about the challenges of editing mobile video-based learning content for accessible and adaptive design across devices.
User-centric approaches to technological interfaces have increasingly surfaced in recent times. In smart homes, this manifests in how people adapt to and interact with devices, and with the setup as a whole. This study aims to analyze the behavioral aspects of living in a smart home through interviews and structured survey-based interactions with 21 families in the Netherlands who were provided a predefined set of smart devices for about three weeks. We evaluate user experience with smart devices through four stages of smart home adoption. Based on user behavior and motivations, we find four personas emerging: convenience-oriented, safety-oriented, cautiously supportive, and sustainability-oriented. The study finds a predominantly positive experience with smart home devices when they are delivered and integrated in a manner that serves users' priorities in a home. Future directions incorporating user preferences in smart home design are discussed.
To explore the emotional effect of the chat bubble’s background color on voice messages, we carried out a user survey with Facebook Messenger, WeChat, and KakaoTalk, which use blue, green, and yellow, respectively, as the default color for chat bubbles. We provided the colors orange, dark red, dark grey, and pale blue when the voice message seemed to be in an excited, angry, sad, or serene mood, respectively. With the exception of serene, the emotion was intensified by the background color across the three messengers. Concerning willingness to use, likelihood was reduced, particularly for negative emotions. Based on the empirical evidence, we discuss the potential of and concerns about implementing this method in voice messaging.
More than 1 in 5 adults in the U.S. serve as family caregivers, the backbone of the healthcare system. Caregiving activities significantly affect their physical and mental health, sleep, work, and family relationships over extended periods. Many caregivers tend to downplay their own health needs and have difficulty accessing support. Failure to maintain their own health diminishes their ability to provide high-quality care to their loved ones. Voice user interfaces (VUIs) hold promise in providing the tailored support family caregivers need to maintain their own health, such as flexible access and hands-free interactions. This work integrates VUIs into a virtual therapy platform to promote user engagement in self-care practices. We conducted user research with family caregivers and subject matter experts, and designed multiple prototypes with user evaluations. Advantages, limitations, and design considerations for integrating VUIs into virtual therapy are discussed.
Our work proposes a novel interactive model for TV such that users can freely decide when and how much to interact with the story. At any time, users can enter the immersive 3D environment of each scene in the story by controlling an avatar. In this scene, users embrace a rich set of interactive possibilities, including in-depth exploration, conversations with characters, and gamified quests to guide the story. Users also have the option to watch their avatar explore the 3D environment automatically for a lean-back experience. User evaluation of our prototype confirmed the acceptance of this new model and enthusiasm for the interactive freedom and depth it provides. To the best of our knowledge, this is the world’s first lean-back-compatible TV model that offers a complete immersive experience driven by users, shedding light on a new direction for designing interactive videos appealing to a wide audience.
Over the past decade, Deep Neural Networks (DNNs) applied to eye tracking data have seen tremendous progress in their ability to perform Autism Spectrum Disorder (ASD) diagnosis. Despite their promising accuracy, DNNs are often seen as 'black boxes' by physicians unfamiliar with the technology. In this paper, we present EyeXplain Autism, an interactive system that enables physicians to analyse eye tracking data, perform automated diagnosis, and interpret DNN predictions. We discuss the design, development, and a sample scenario to illustrate the potential of our system to aid in ASD diagnosis. Unlike existing eye tracking software, our system combines traditional eye tracking visualisation and analysis tools with data-driven knowledge to enhance medical decision-making for physicians.
Recent years have seen an explosion in Extended Reality (XR) experiences. Until now, researchers and developers have been limited to relatively expensive, closed-source, consumer-level hardware when creating high-fidelity XR experiences, as these were the only devices that could create such experiences out of the box with minimal setup time. In this paper, we present Project Esky, a complete open-source modular software framework that enables the rapid creation of high-fidelity XR experiences on any combination of display and tracker hardware. Our framework includes several components that handle 6DOF head tracking with head-pose prediction to minimize latency, hand interactions with virtual content, and network components to facilitate co-located between-user experiences. Our framework also includes an asynchronous rendering pipeline that simplifies the representation of optical distortions, as well as a simplified planar-based temporal re-projection technique to minimize rendering latency. It is our hope that the techniques described in this paper, and the open-source software implementations that support them, will be a stepping stone towards bringing high-fidelity XR content to researchers and hobbyist users alike.
Mobile devices have become daily companions for millions of users. They have access to privacy-sensitive data about their users, which stresses the importance of privacy. Users must make privacy-related decisions before app installation, because once installed, apps can access potentially privacy-sensitive data. In this work-in-progress, we present an in-depth investigation of privacy-indicator visualizations for mobile app stores. We report the results of two consecutive user studies in which we investigate 1) visual depiction, 2) score, and 3) monetary value of collected data. Our studies reveal that a visual depiction via a privacy meter was easiest for users to understand, scores were easiest to spot, and monetary value was most difficult to interpret and requires further investigation.
Feeling co-present and connected over distance is a challenge. To create a sense of sharing a space over distance and being together in the periphery of daily life, we designed and implemented a pair of connected picture frames that display paintings generated from live camera streams. These paintings are abstracted based on the distance of the user from the device, and are sent between two portals that can be placed anywhere in the world. We deployed the Painting Portals during a week-long field study investigating how a distance-separated mother and daughter experienced the use of the Painting Portals. We generated recommendations for future design and research, including: (1) stylizing the paintings in a way that better captures emotion/sentiment, (2) placing multiple Painting Portals in different common areas throughout a home, (3) incorporating a video call feature into the Painting Portals, and (4) adding the option to turn off one’s camera for more privacy.
Artificial Intelligence (AI) systems deployed in the open world may produce negative side effects: unanticipated, undesirable outcomes that occur in addition to the intended outcomes of the system’s actions. These negative side effects affect users directly or indirectly, by violating their preferences or altering their environment in an undesirable, potentially harmful, manner. While the existing literature has started to explore techniques to overcome the impacts of negative side effects in deployed systems, there have been no prior efforts to determine how users perceive and respond to negative side effects. We surveyed 183 participants to develop an understanding of user attitudes towards side effects and how side effects impact user trust in the system. The surveys targeted two domains, an autonomous vacuum cleaner and an autonomous vehicle, each with 183 respondents. The results indicate that users are willing to tolerate side effects that are not safety-critical but prefer to minimize them as much as possible. Furthermore, users are willing to assist the system in mitigating negative side effects by providing feedback and reconfiguring the environment. Trust in the system diminishes if it fails to minimize the impacts of negative side effects over time. These results support key fundamental assumptions in existing techniques and facilitate the development of new methods to overcome negative side effects of AI systems.
When creating a visualization to understand and communicate data, we face different design choices. Even though past empirical research provides foundational knowledge for visualization design, practitioners still rely on their hunches to deal with intricate trade-offs in the wild. On the other hand, researchers lack the time and resources to rigorously explore the growing design space through controlled experiments. In this work, we aim to address this two-fold problem by crowdsourcing visualization experiments. We developed VisLab, an online platform in which anyone can design and deploy experiments to evaluate their visualizations. To alleviate the complexity of experiment design and analysis, our platform provides scaffold templates and analytic dashboards. To motivate broad participation in the experiments, the platform enables anonymous participation and provides personalized performance feedback. We present use case scenarios that demonstrate the usability and usefulness of the platform in addressing the different needs of practitioners, researchers, and educators.
Personal-assistant devices like Amazon Alexa and Google Assistant are increasingly popular among consumers. Users activate these systems through some type of wake-up approach, e.g. using a wake-word “Alexa” or “Ok, Google.” Voice-based interaction poses accessibility barriers for Deaf and Hard of Hearing (DHH) users, and technologies for sign-language recognition are improving. We therefore explore wake-up interactions for DHH users for potential personal assistant devices that understand sign language commands. Interviews with DHH users (N=21) motivated the design of six wake-up approaches, and we produced video prototypes demonstrating each using a Wizard-of-Oz approach. These prototypes were evaluated in a follow-up study in which DHH users (N=12) identified factors that influenced their preference among approaches. This study contributes empirical knowledge about DHH ASL signers’ preferences and concerns with wake-up interaction, thereby providing guidance for future designers of these systems.
The number of older adults with Mild Cognitive Impairment (MCI) is rapidly increasing. Individuals with this condition suffer from reduced daily activities and capacities, a resulting sense of frustration, and a decline in mental and physical health. However, few existing studies in the HCI community address this specific user group. This study focused on addressing issues related to aging and memory in the MCI population. We designed a new interface, PENCODER, which supports prospective memory. The “design thinking process” and multidisciplinary human-centered perspectives were used in creating this interface. Findings and insights from this exploratory human-centered design approach suggest that it is imperative to conduct further HCI research on (1) tools and technology features that support prospective memory, (2) measurements that assess the effect of using such tools, and (3) longitudinal perspectives for assessing self-determination outcomes in everyday interactions.
We introduce aSpire, a clippable, mobile pneumatic-haptic device designed to help users regulate their breathing rate via subtle tactile feedback. aSpire can be easily clipped to a strap/belt and used to personalize tactile stimulation patterns, intensity, and frequency via its array of air-pouch actuators that inflate/deflate individually. To evaluate the effectiveness of aSpire’s different tactile stimulation patterns in guiding the breathing rate of people on the move in an out-of-lab environment, we conducted a user study with car passengers in a real-world commuting setting. The results show that engaging with aSpire does not evoke extra mental stress, and helps participants reduce their average breathing rate while keeping their perceived pleasantness and energy level high.
Compromising the privacy of personally identifiable information (PII) can leave users vulnerable to risks, such as identity theft. We conducted a study with 27 participants in which we examined the types of publicly available PII they could locate on their social media accounts, and through a web search. We interviewed participants about the online and offline behaviours they employ to manage their PII. Participants leaked significant amounts of PII through their online presence, and potentially further exposed it through their offline behaviours. Many were surprised at the amount of PII they came across, and immediately took rectifying actions.
Transnational newcomers, i.e., foreign-born populations who move to a new country, rely on consumer-to-consumer electronic commerce (C2C e-commerce) to access local resources for adaptation. However, transnational newcomers entering a new country often have low trust and face difficulties in the adaptation process, and little is known about which determinants affect their trust in C2C e-commerce. Because social identity is often complicated in transnational newcomers’ adaptation process, our work focuses on unpacking shared identity, a key trust antecedent in C2C e-commerce. We interviewed 12 transnational newcomers in the United States to identify the determinants of their shared identity in C2C e-commerce. Our preliminary results suggest that shared identity determinants include geographic proximity, ethnic background, life stage, and socio-economic status. We also uncovered ways that shared identity determinants influence transnational newcomers’ trust in local C2C e-commerce. Our work contributes two research implications to future studies on transnational newcomers’ technology use.
This study explores non-verbal co-design techniques with multisensory wearables to give the body a voice. Sessions were led with professional caregivers, parents, and clients with PIMD (profound intellectual and multiple disabilities) to find fundamental building blocks for a common language based on tangible technologies. To provide an agent for communication, we employed the tools of extimacy: translating biodata to visual, auditory, or tactile interactive displays. The caregivers expressed the need for action-reaction (“Actie Reactie”) to keep attention, an update from the Multisensory Environment (MSE) rooms previously used for calming. In the co-design sessions, we found the on-the-body wearables held the most focus. The final discovery from the study became the outline for creating a modular, highly personalized kit for a Multisensory Wearable (MSW) to inspire surprise and wonder.
Digital fabrication tools for makers have increased access to manufacturing processes such as 3D printing and computer-controlled laser cutting or milling. Despite research advances in novel hardware and software tools for fabrication tasks, there is no formal way to reason about the fabrication machine itself. There is no standard format for representing the high-level features of machines and trade-offs between them; instead, this important information is relegated to folk knowledge. To make machine information explicit, we present Taxon, a machine specification language broad enough to represent many machines, while also allowing for enough expressivity to meaningfully compare and infer performance. We describe and detail the motivation behind the design of Taxon, as well as how Taxon programs compile to a simulation of physical machines. We discuss opportunities for future work in digital fabrication that requires a standard, formalized representation of machines.
In this paper, we present electrohydraulic actuators for origami-inspired shape-changing interfaces, which are capable of producing sharp hinge-like bends. These compliant actuators generate an immediate hydraulic force upon electrostatic activation without an external fluid supply source, are silent and fast in operation, and can be fabricated with commodity materials. We experimentally investigate the characteristics of these actuators and present application scenarios for actuating existing objects as well as origami folds. In addition, we present a software tool for the design and fabrication of shape-changing interfaces using these electrohydraulic actuators. We also discuss how this work opens avenues for other possible applications in Human Computer Interaction (HCI).
Sharing humor within a team can have various benefits. In multilingual teams, however, it is often challenging for nonnative-speaking members (NNS) of the team to fully understand the humor shared among native speakers (NS). In an interview study with 28 NNS, we identified common sources of confusion in humorous interactions. Our study also revealed that the (mis)alignment of people's assumptions of others’ knowledge is key to failed humor and its consequences. NNS felt that NS often assumed everyone shared the knowledge required to understand their humor; thus, cultural references unfamiliar to NNS were a dominant source of confusion. On the other hand, NNS often felt ignorant for not knowing the references required to understand NS humor, as they assumed all the references unfamiliar to them must be generic knowledge known by all NS. We propose design ideas for better alignment of common ground assumptions between NS and NNS to facilitate humor-related communication and alleviate possible negative consequences of failed humor.
Effective communication of genetic risk is increasingly important for prevention and treatment of hereditary cancer syndromes. Many individuals do not have access to genetic counselors, or lack the ability to understand and act on genetic risks, due to the complexity of the information. Automated approaches that incorporate animated pedagogical conversational agents may address these barriers. We describe a pedagogical agent that functions in the role of a virtual genetic counselor that discusses hereditary breast cancer risk and motivates women to obtain breast cancer screening. We report the design and evaluation of two prototypes of the virtual counselor. Our results demonstrate the feasibility of this approach, and the effectiveness in improving breast cancer genetics knowledge, by adapting the virtual counselor's pedagogical strategies to an individual's comprehension based on dynamic assessments and preferences.
Nearly 1.35 million people are killed in automobile accidents every year, and nearly half of all individuals involved in these accidents were not wearing their seatbelt at the time of the crash. This lack of safety precaution occurs in spite of the numerous safety sensors and warning indicators embedded within modern vehicles. This presents a clear need for more effective methods of encouraging consistent seatbelt use. To that end, this work leverages wearable technology and activity recognition techniques to detect when individuals have buckled their seatbelt. To develop such a system, we collected smartwatch data from 26 different users. From this data, we identified trends which inspired the development of novel features. Using these features, we trained models to identify the motion of fastening a seatbelt in real-time. This model serves as the basis for future work in which systems can provide personalized and effective interventions to ensure seatbelt use.
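The abstract above does not disclose the authors' feature set or model, but the general pipeline (windowed wrist-sensor features feeding a classifier) can be sketched. The following is an illustrative toy, with hypothetical features and synthetic accelerometer samples, not the paper's actual system:

```python
# Illustrative sketch (not the authors' pipeline): windowed feature
# extraction over wrist accelerometer samples, plus a toy
# nearest-centroid classifier for "buckling" vs "other" motion.
from statistics import mean, stdev
from math import sqrt

def window_features(samples):
    """samples: list of (x, y, z) accelerometer tuples from one window."""
    mags = [sqrt(x * x + y * y + z * z) for x, y, z in samples]
    # Simple summary features: average magnitude, variability, range.
    return (mean(mags), stdev(mags), max(mags) - min(mags))

def nearest_centroid(feat, centroids):
    """centroids: {label: feature tuple}; returns the closest label."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lbl: dist(feat, centroids[lbl]))

# Synthetic data: a sharp reach-and-pull motion vs. a near-still wrist.
buckle = [(0.1, 0.2, 9.8), (2.0, 1.5, 11.0), (4.0, 3.0, 13.5), (0.5, 0.3, 9.9)]
still  = [(0.0, 0.1, 9.8), (0.1, 0.1, 9.8), (0.0, 0.2, 9.8), (0.1, 0.1, 9.8)]

centroids = {"buckling": window_features(buckle), "other": window_features(still)}
print(nearest_centroid(window_features(buckle), centroids))  # buckling
```

A real-time system would slide such windows over the live sensor stream and trigger an intervention when the "buckling" class is (or is not) detected.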
Modern teamwork often happens between subgroups located in different countries. Members of the same subgroup prefer to communicate in their native language for efficiency, which increases the coordination cost between subgroups. The current study extends previous HCI literature that explores the effects of machine translation (MT) on crosslingual teamwork. We investigated whether automated keyword tagging would assist people's comprehension of imperfect MT outputs and, therefore, enhance the quality of communication between subgroups. We conducted an online experiment where twenty teams performed a collaborative task. Each team consisted of two native English speakers and two native Mandarin speakers. We provided MT support that enabled participants to read all subgroups’ discussions in English before team meetings, but in two forms: with vs. without automated keyword tagging. We found MT with automated keyword tagging affected people's interaction with the translated materials, but it did not enhance translation comprehensibility in the context of teamwork.
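The abstract does not specify how keywords were tagged; a minimal stand-in, assuming a frequency-based tagger and an ad-hoc stopword list, might mark salient content words in a translated passage like this:

```python
# Illustrative sketch (not the study's tagger): bracket the most
# frequent content words in a machine-translated passage so readers
# can skim key terms. Stopword list and top_n are arbitrary choices.
from collections import Counter

STOPWORDS = {"the", "a", "and", "to", "of", "we", "in", "for", "on"}

def tag_keywords(text, top_n=3):
    words = [w.strip(".,").lower() for w in text.split()]
    freq = Counter(w for w in words if w not in STOPWORDS)
    keywords = {w for w, _ in freq.most_common(top_n)}
    return " ".join(f"[{w}]" if w.strip(".,").lower() in keywords else w
                    for w in text.split())

print(tag_keywords("the budget meeting moved the budget review "
                   "and the budget deadline", top_n=1))
```

The study's finding, that such tagging changed how people interacted with translations without improving comprehensibility, suggests surface-level highlighting alone may not repair imperfect MT output.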
Augmented Reality (AR) technology offers the opportunity to change the customer experience in the retail context. For example, AR applications can be used as a bridge to close the gap between physical and online shops, mixing the best characteristics of both. Different researchers have measured the quality of the user's experience through AR. However, here we focused not only on user experience but also on the impact of AR on brand recall and recognition. We analyzed the impact on brand recall and recognition of an AR application by comparing three contexts: a traditional context where participants look at three products, a focused context where participants have only one product to look at, and a technology-enhanced context where participants interact with the product using the designed AR experience. Experimental results revealed that AR improved brand recall and recognition by creating an engaging environment. Moreover, participants in the AR context recalled additional information about the product. Also, the AR experience motivated them to participate in an omnichannel activity based on a QR interaction. These results show the influence of well-designed AR experiences on brand recall and recognition, opening the possibility for new technology in retail environments.
Recent reports have shown a growing demand for mental health resources and services on university campuses for Black and Latinx students. These students have a higher rate of unmet mental health needs and are more likely to experience mental health problems. Offering a technical solution is promising for navigating on-campus mental health services. In this paper, we present findings from a preliminary study focused on understanding the mental health-related technology practices and preferences of university students and a content analysis of 60 U.S. college and university counseling center websites. Findings highlight how university students’ desire for applications that integrate with their existing on-campus offerings contrasted with the apparent offerings of campus counseling centers.
User studies have found persona application challenging. We argue that a potential reason for the challenges is the organization's readiness to apply personas. This research reports the on-going effort of developing the Persona Readiness Scale, a survey instrument for measuring organizations’ readiness to apply personas. The scale comprises twenty-two items across seven dimensions: Need Readiness, Culture Readiness, Knowledge Readiness, Resource Readiness, Data and Systems Readiness, Capability Readiness, and Goal Readiness. Organizations can apply the current scale to evaluate their persona readiness, but using the dimensions for statistical analyses requires further empirical validation.
Garment e-commerce has become an indispensable part of the garment retail industry. However, the lack of a fitting process in garment e-commerce has led to high return rates and operating costs. The remote fitting method based on the fitting robot helps solve the above problem, but the accuracy of the fitting results needs to be improved to ensure the effectiveness of the remote fitting. The comfort of clothing is the main factor affecting the fitting results. This paper presents a comfort feedback method based on the fitting robot to obtain and feed back the comfort information of remote fitting. This method includes a pressure measuring device, a comfort conversion model, and a real-time feedback interface. The test results demonstrate that this method is effective. This method’s application provides consumers with a new fitting experience and more accurate fitting results, which will promote the development of garment e-commerce.
Ubiquitous computing systems rely on ubiquitous methods to sense user interactions, which have manifested in our daily environments as physical buttons, switches, sliders, and beyond. These conventional controllers are either wired, which eliminates flexible deployments, or powered by batteries that require user maintenance. Additionally, built-in wireless communications such as Wi-Fi, Bluetooth, and RFID add to the total cost. All aforementioned constraints prevent truly ubiquitous interactions in intelligent environments such as smart homes, industry 4.0, precision farming, and a wider range of Internet-of-Things (IoT) applications. We present CubeSense, a wireless and battery-free interactive sensing system which encodes user interactions into the radar cross section (RCS) of corner reflectors. Through careful designs of corner reflector mechanisms, CubeSense achieves robust accuracies with controllers made of ultra-low-cost plastics and metal films, resulting in a total cost of around only 20 cents per unit.
Past research has proposed various hand redirection techniques for virtual reality (VR). Such techniques modify a user’s hand movements and have been successfully used to enhance haptics and 3D user interfaces. Up to now, however, no unified framework exists that implements previously proposed techniques such as body warping, world warping, and hybrid methods. In this work, we present the Virtual Reality Hand Redirection Toolkit (HaRT), an open-source framework developed for the Unity engine. The toolkit aims to support both novice and expert VR researchers and practitioners in implementing and evaluating hand redirection techniques. It provides implementations of popular redirection algorithms and exposes a modular class hierarchy for easy integration of new approaches. Moreover, simulation, logging, and visualization features allow users of the toolkit to analyze hand redirection setups with minimal technical effort. We present the architecture of the toolkit along with the results of a qualitative expert study.
Since the introduction of shape changing interfaces, research in the field has contributed a number of taxonomies to classify shape changing interfaces according to different characteristics, including shape, interaction mapping, material, actuation, and information affordances, in an attempt to grasp the diversity of these interfaces in terms of design and information. However, to our knowledge there exists no classification of input techniques that are used to deform shape changing interfaces through physical interaction. The interaction affordances provided by shape changing interfaces are important for interaction design and interaction mapping. The work presented here aims to analyse how deformable properties in shape changing interfaces are related to deformation techniques, in order to provide a first step towards the development of design guidelines for physical interaction with shape changing interfaces. The results of the study contribute an overview of interaction techniques based on the analysis of a set of 7 models with different deformable properties, each presented in 2D and 3D form.
We present situated buttons, a highly personalized, affordable system that supports people with complex needs, such as learning disabilities, autism, or dementia, and promotes independent living. The system addresses concerns many users with mental health issues have with accessing assistive services and technologies as a result of their health condition. The solution consists of a wireless push button that via a simple click can launch a video on a device such as a smartphone or a tablet. A trusted person like a family member or carer can appear in the video to illustrate how to complete certain activities of interest to the end-user. The button can be placed anywhere where support is needed, e.g. on a washing machine or at the kitchen table. We conducted workshops and online discussions with social and healthcare organizations, potential end-users (people with learning disabilities and older adults), and their carers. The results show that our solution has the potential to support individuals with complex needs, improve independence, and increase their quality of life, by giving simple access to helpful instructions in their daily life.
A third of global greenhouse gas (GHG) emissions are attributable to the food sector; however, dietary change could reduce this by 49%. Many people intend to make eco-friendly food choices, but fail to do so at the point-of-purchase. Educating consumers on the environmental impact of their choices during their shop may be a powerful approach to tackling climate change. This paper presents the theory- and evidence-based development of Envirofy: the first eco-friendly e-commerce grocery tool for real shoppers. We share how we used the Behaviour Change Wheel (BCW) and multidisciplinary evidence to maximise the likely effectiveness of Envirofy. We conclude with a discussion of how the HCI community can help to develop and evaluate real-time tools to close intention-behaviour gaps and ultimately reduce GHG emissions.
Mentoring is a key part of career development, especially in emerging fields such as social entrepreneurship. Internet technologies have made it easier for novice social entrepreneurs to identify and connect with mentors online. Yet, we do not know the strategies mentors use to advise professionals navigating emerging fields. Knowing these strategies would allow us to create technologies for more effective career mentoring. This paper presents an expert model of career mentoring for novice social entrepreneurs. To build the model, we conducted a retrospective cognitive task analysis with 9 mentors who have at least 5 years of experience advising novice social entrepreneurs. We found that mentors help novice social entrepreneurs regulate career-related stress and make decisions about next career steps. Our findings suggest that further exploration of career mentoring strategies might allow designers to develop technologies based on the expert model, such as intelligent agents, to support and scale mentorship in emerging fields.
Avatar appearance, especially gender, influences user behavior in virtual environments (VE). However, the effect is often examined only as a covariate. In this paper, we use technology to empower individuals beyond traditional societal gender roles to effectively collaborate in virtual environments. We specifically investigate the impact of the partner’s avatar gender on the quality of collaboration in the VE, while performing an inherently male-dominated task. We designed a virtual garage where same-gender (C1) and mixed-gender (C2) pairs repair cars collaboratively. We evaluated the interaction using a collaboration questionnaire adapted from the Team Effectiveness Questionnaire (TEQ). Our results show that same-gender pairs were perceived as more productive and supportive. We envision that our work aids developers and researchers in enhancing the collaboration quality by supporting group cohesion and positive interactions.
Dark patterns in mobile apps take advantage of cognitive biases of end-users and can have detrimental effects on people’s lives. Despite growing research in identifying remedies for dark patterns and established solutions for desktop browsers, there exists no established methodology to reduce dark patterns in mobile apps. Our work introduces GreaseDroid, a community-driven app modification framework enabling non-expert users to disable dark patterns in apps selectively.
People with visual impairments face challenges in scene and object recognition, especially in unknown environments. We combined the mobile scene detection framework Apple ARKit with MobileNet-v2 and 3D spatial audio to provide an auditory scene description to people with visual impairments. The combination of ARKit and MobileNet allows keeping recognized objects in the scene even if the user turns away from the object. An object can thus serve as an auditory landmark. With a search function, the system can even guide the user to a particular item. The system also provides spatial audio warnings for nearby objects and walls to avoid collisions. We evaluated the implemented app in a preliminary user study. The results show that users can find items without visual feedback using the proposed application. The study also reveals that the range of local object detection through MobileNet-v2 was insufficient, which we aim to overcome in future work using more accurate object detection frameworks (e.g., YOLOv5x).
Patients with mild intellectual disabilities (ID) face significant communication barriers when attending primary care consultations. Yet there is a lack of two-way communication aids available to support them in conveying medical symptoms to General Practitioners (GPs). Based on a multi-stakeholder co-design process including GPs, domain experts, people with mild ID and carers, our previous work developed prototype technology to support people with mild ID in GP consultations. This paper discusses the findings of a usability study performed on the resulting prototype. Five experts in ID/usability, four caregivers, and five GPs participated in cognitive and post-task walkthroughs. They found that the application has the potential to increase communication, reduce time constraints, and overcome diagnostic overshadowing. Nevertheless, the participants also identified accessibility barriers relating to: medical imagery; the abstract nature of certain conditions; the use of adaptive questionnaires; and the overloading of information. Potential solutions to overcome these barriers were also discussed.
From sports to party games, almost every kind of game has been adapted into a digital video game format. While previous research has studied player motivations and experiences for certain categories of digital games, there has yet to be such a study on digital board games, especially in the modern context of smartphone apps. To address this, we conduct a case study of a popular board game, Ludo, to understand players’ opinions of its digital adaptation. For this, we study the functionality and user reviews of nine popular Ludo apps, to assess player opinions of how traditional gameplay has been re-imagined. Based upon our analysis, we conclude with recommendations for improving Ludo apps and other apps based on random-chance board games.
Research supports that online multiplayer games build social capital and contribute to people’s well-being. Players build meaningful, strong relationships through games, resulting in complex communities, similar to traditional Online Social Networks (OSNs). In OSNs, the vast majority of the population consists of invisible users consuming content rather than actively engaging with the community: lurkers. While lurkers have been well-researched in OSNs, they have been under-investigated in games. In games, their behaviour may limit the social potential a game provides. Besides the large knowledge gap concerning lurkers in multiplayer environments, it is also yet unclear how lurkers differ from another class of non-social players: loners. In this work, we review and analyze the Games User Research (GUR) literature to understand (a) how lurkers and loners are defined in games and (b) which characteristics they exhibit. Our contributions are definitions of lurkers and loners in games and a future research agenda outlining opportunities to study them.
In recent years, researchers and therapists have pinpointed a number of critical aspects in current speech-language interventions. Several studies have explored the use of technology to overcome these barriers and to support speech-therapy in children with language impairments (e.g. DLD and ASD). In this paper, we propose a conceptual framework for designing linguistic activities (for assessment and training), based on advances in psycholinguistics. Building on this theoretical framework, we identified a development process, from UX design to the coding of activities, which is based on a novel set of Design Patterns at multiple layers of abstraction. We then put this framework into practice by implementing these patterns into two technological solutions, tablet and robots, and performing an empirical study to evaluate their benefits. Our results, although still preliminary since they assessed only a tablet experimental condition, are promising for extending the identified Design Patterns to other technological solutions.
There is limited infrastructure for providing stress management services to those in need. To address this problem, chatbots are viewed as a scalable solution. However, one limiting factor is having clear definitions and examples of daily stress on which to build models and methods for routing appropriate advice during conversations. We developed a dataset of 6850 SMS-like sentences that can be used to classify input using a scheme of 9 stressor categories derived from: stress management literature, live conversations from a prototype chatbot system, crowdsourcing, and targeted web scraping from an online repository. In addition to releasing this dataset, we show results that are promising for classification purposes. Our contributions include: (i) a categorization of daily stressors, (ii) a dataset of SMS-like sentences, (iii) an analysis of this dataset that demonstrates its potential efficacy, and (iv) a demonstration of its utility for implementation via a simulation of model response times.
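The abstract does not enumerate its 9 stressor categories or its model; as a minimal illustration of the routing idea, a chatbot could fall back on keyword matching over hypothetical categories (the category names and keywords below are invented for illustration, not the paper's released scheme):

```python
# Illustrative sketch (hypothetical categories and keywords): routing an
# SMS-like sentence to a stressor category via keyword overlap, the kind
# of baseline a chatbot might use before a learned classifier.
STRESSOR_KEYWORDS = {
    "work": {"deadline", "boss", "meeting", "overtime"},
    "finances": {"rent", "bills", "debt", "paycheck"},
    "health": {"sick", "pain", "doctor", "sleep"},
}

def classify_stressor(sentence):
    words = set(sentence.lower().split())
    scores = {cat: len(words & kw) for cat, kw in STRESSOR_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a catch-all label when nothing matches.
    return best if scores[best] > 0 else "other"

print(classify_stressor("My boss moved the deadline up again"))  # work
```

A labeled dataset like the one the paper releases is what lets such a rule-based baseline be replaced by, and benchmarked against, a trained model.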
This paper presents a preliminary investigation into the cultural influences on social media use of middle class mothers in India. We focus on the interplay of cultural beliefs, norms, and social and familial structures that shape their perceptions which in turn influence information sharing and seeking, and social capital building. We conducted semi-structured interviews with 23 middle class mothers in India and found that cultural practices, traditions, and the presence of trusted offline strong-tie networks influenced their social media engagement.
Behavior change researchers frequently base their interventions on theory, targeting specific mechanisms of change to help users achieve their goals. Thus HCI researchers have sought to examine whether their intervention impacts the intended mechanism of change, in addition to evaluating the overall effect of the intervention. Yet an open question remains: how do we know our interventions successfully target the mechanisms we intend? We present results from two validation studies and one user study showing the difficulties of ensuring that behavior change interventions target the mechanisms designers intend. Our findings indicate that experts disagree about what mechanism an intervention targets, that expert consensus on this matter can be hard to achieve, and that end users’ reflections indicate they may follow different mechanisms than those predicted by experts. We recommend that researchers collect data about multiple potential mechanisms that their intervention could operate through, rather than the common single-mechanism approach.
People with dementia living at home experience difficulties in participating in social interactions, while staying in contact with family adds quality to their lived experience. Initiating communication can be challenging for family, since there is a natural disparity between their life patterns and those of people with dementia. In this paper, we present CoasterChat, an exploration in design that embeds asynchronous digital communication in a daily coffee routine to support social sharing between people with early stage dementia and their family. In this preliminary study, the aim is to explore suitable interaction design that provides an opportunity for technology to be embedded in existing individual routines. Initial results show the importance of personalization in digital guidance. Benefit for family who engage in the interaction lies in the flexibility and accessibility of asynchronous communication. Finally, we discuss how these routines enhance social relationships, demonstrating the opportunities of creating a meaningful interaction between people with dementia and family members.
In the context of office-work meetings, it has become a norm that one or more participants attend remotely while others are in the (physical) meeting room, a social situation that has been studied as “hybrid meetings”. We examine whether incorporating the direction of sound in the audio can support the remote attendees to recognize more clearly who is speaking in the meeting room and eventually improve the experience of attending a hybrid meeting. We present the results of a user study, in which 42 participants followed six different discussions recorded in a meeting room, in six conditions: three audio formats were examined, each once in a situation where the co-located conferees wore a face mask and once without a mask. The results demonstrate that the binaural audio can support remote participation, especially in terms of general comprehension and confidence of comprehension, with higher effect for the face-mask conditions.
Modern cars create a high-tech interactive space by providing entertainment and information functionalities to the driver and partly to the passengers. By introducing rear-seat infotainment systems especially in luxury cars, manufacturers started to also focus on the passenger’s experience. However, such systems offer mainly standard entertainment and internet-based services. To enhance the user experience of rear seat passengers, we present the concept of an interactive car door that enables passengers to engage with and explore their surroundings. Our system consists of an interactive door panel that shows points of interest along the progressing route; more detailed information is shown on the AR-enabled side window in addition to the rear-seat display. Results from a pilot study (n = 11) show that our concept leads towards a positive user experience. The qualitative feedback reveals that such an interactive car door helps to make riding as a passenger more attractive and pleasant.
Following the need to promote physical activity as a part of a healthy lifestyle, in this study we focus on encouraging more physical activity by improving the experience, with running as an example of a popular outdoor activity. Running in nature is often described as more pleasant and relaxing than running in the city, yet in urban environments it is difficult to integrate true nature in one's running route. To bridge this gap we designed Sensation, a sonified running track that provides sensations of nature using audio feedback. Sensation senses the footsteps of runners and produces synced sounds of footsteps in several nature environments to augment the urban landscape. This way, Sensation aims to enhance the environmental factors that contribute to the positive feelings people experience during a run. We report on insights gathered during our Research-through-Design process, as well as a preliminary user test of this sonified running track.
Health inequity is a critical problem in the United States and one that primarily affects marginalized communities. One critical aspect of interventions addressing this issue is the aim to increase health literacy, so that members of these communities can make informed decisions and feel empowered to take charge of their personal health. Our team developed a transformational game that uses self-efficacy to address players’ health literacy in context. Using properties from both visual novel and strategy simulation genres, we present a game in which players take on the role of a community manager who works to better their community by completing quests from non-player character (NPC) community members. The current paper contributes our research and iterative design processes, and highlights future directions utilizing focus groups and playtesting with community members, game development, and evaluation studies to assess impact.
Blockchain technology has enabled a thriving emergent ecosystem of tools and communities actively using decentralized systems. However, most blockchain infrastructure (e.g., Ethereum) requires users to pay fees to execute their desired actions in these novel online services. To what extent does an increase in the price of such fees negatively affect user activity? Would significant price surges deter users from using blockchain-enabled online services? In this work, we study the 2020 surge of transaction fee prices in the Ethereum network and analyze how it affected user activity. Our use cases are the blockchain-enabled Decentralized Autonomous Organizations (DAOs) from the platforms DAOstack and DAOhaus. We analyzed 5,580 transactions from 7,825 users grouped in 191 DAO communities, using a VAR model on a daily time series of the average fee value and the DAO operations. Our results show only a minor influence of the fee (gas) price on the activity of DAO users. The insensitivity of activity to the fee price is an anomaly in a supposedly self-regulated market, and we argue this should be addressed in future implementations.
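As a rough illustration of the kind of analysis the abstract describes (not the authors' actual pipeline), a first-order vector autoregression over two daily series, fee price and DAO operations, can be fit by ordinary least squares. All variable names and parameters below are illustrative assumptions; the synthetic data is generated so that activity does not depend on fee price, mimicking the insensitivity pattern the study reports.

```python
import numpy as np

def fit_var1(series):
    """Fit a VAR(1) model x_t = c + A @ x_{t-1} + e_t by ordinary least squares.
    series: (T, k) array of k daily time series (e.g., mean gas price, DAO ops)."""
    Y = series[1:]                                               # (T-1, k) targets
    X = np.hstack([np.ones((len(series) - 1, 1)), series[:-1]])  # intercept + lag
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    c, A = B[0], B[1:].T  # intercept (k,), lag-coefficient matrix (k, k)
    return c, A

# Synthetic two-variable system: variable 1 ("activity") does not depend on
# variable 0 ("fee price") -- the cross-coefficient A_true[1, 0] is zero.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.0],
                   [0.0, 0.3]])
x = np.zeros((500, 2))
for t in range(1, 500):
    x[t] = A_true @ x[t - 1] + rng.normal(scale=0.1, size=2)

c_hat, A_hat = fit_var1(x)
print(np.round(A_hat, 2))  # estimated cross-coefficient near zero
```

In a real analysis one would test the significance of the estimated cross-coefficients (e.g., with Granger-causality tests, as statistical packages for VAR models provide) rather than eyeballing the fitted matrix.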
Interactive voice response (IVR) forums such as CGNet Swara and Avaaj Otalo have played a pivotal role in empowering marginalised communities by providing an avenue to make their voices heard through simple phone calls. At the same time, growing internet penetration and affordable data plans are altering the ways in which rural Indian communities access and consume information. Within the context of a shift from voice to richer content environments, we present the design of a multi-modal awareness generation and data collection platform built around IVR and the WhatsApp Business API. This model was deployed to deliver virtual training modules to cotton farmers in rural Maharashtra. During the 27-day deployment, 176 people participated in the intervention, of whom 122 and 54 attempted the modules on IVR and WhatsApp, respectively. In this paper, we highlight some of the interesting findings and lessons learnt during the intervention.
Chatting with strangers is a prevalent behavior in online settings where people can easily gather. Yet, people often find it difficult to initiate and maintain such conversations due to the lack of information about strangers. Hence, we aimed to facilitate conversation between strangers with the use of machine learning (ML) algorithms, and present BlahBlahBot, an ML-infused chatbot that moderates conversations between strangers with personalized topics. Based on social media posts, BlahBlahBot supports the conversation by suggesting topics that are likely to be of mutual interest to users. A user study with three groups (control, random topic chatbot, and BlahBlahBot; N=18) confirmed the feasibility of BlahBlahBot in increasing both conversation quality and closeness to the partner, along with the factors that led to such increases, identified through user interviews. Overall, our preliminary results imply that an ML-infused conversational agent can be effective for augmenting a dyadic conversation.
Live performance provides a good example of enthusiastic interaction between people gathered together in a large group and one or more performers. In this research, we focused on elucidating the mechanism of such enthusiastic group interaction (collective effervescence) and how technology can contribute to its enhancement. We propose a support system for co-experience and physical co-actions among participants to enhance enthusiastic interaction between performers and audiences during live performances. This system focuses on a physical synchronization between the performer and the audience as the key that generates enthusiastic interaction during a live performance. Also, it supports enhanced bidirectional communication of the performer's actions and the audience's cheering behaviors. An experiment in an actual live performance environment in which collective effervescence was already occurring demonstrated that the bidirectional communication and visualization of physical movements in the proposed system contributed to the further unification of the group.
A generic step in data analysis is to group data items into multiple sets based on specific attribute values. In this paper, we propose an interactive set-data exploration tool named BalloonVis, which uses nested balloons as a visual metaphor to group set data over a timeline and visualize set-overlap information while preserving the original layout. We employ a hybrid region-based and line-based scheme, which allows placing a representative image of each set data item at its region position and helps reduce visual clutter through the line-connection design. Energy optimization is exploited to compute the layouts of the region-based set data items (balloons) and the connecting lines. The case study and the user study suggest that BalloonVis can visualize set data more informatively. Moreover, the proposed hierarchical scheme scales better with the number of set data items.
Vitiligo is a common pigmentary skin disorder. Children with vitiligo are likely to have low self-esteem and fear of social communication. The common treatment of vitiligo in children is physiological, rarely involving psychological measures. In this paper, we present ColorGuardian, a skin tattoo customization system designed to relieve the psychological harm caused by vitiligo in children. The system scans the vitiligo area with a Light Detection and Ranging (LiDAR) scanner to generate a 3D skin surface model and obtain an image of the white patches. Users can design their own tattoos or choose recommended designs available in the gallery to match the pattern to the shape of the white patches, through the application linked to the system. We evaluated our system through a lightweight usability test and semi-structured interviews. The results show that ColorGuardian has positive impacts on children by reducing their feelings of inferiority.
In order to better understand human emotion, we should not only recognize superficial emotions based on facial images, but also analyze so-called inner emotions by considering biological signals such as the electroencephalogram (EEG). Recently, several studies analyzing a person's inner state by using an image signal and an EEG signal together have been reported. However, no studies have dealt with the case where the emotions estimated from the image signal and the EEG signal differ, i.e., emotional mismatch. This paper defines a new task of detecting hidden emotions, i.e., emotions in a situation where only the EEG signal is activated without the image signal being activated, and proposes a method to detect them effectively. First, when a subject intentionally hides an emotion, the subject's internal and external emotional characteristics were analyzed from the viewpoint of multimodal signals. Then, based on this analysis, we designed a method for detecting hidden emotions using convolutional neural networks (CNNs), which exhibit powerful recognition ability. This study thereby advances the technology for deeply understanding inner emotions. The hidden-emotion dataset and source code that we have built will be officially released for future emotion recognition research.
Prompted by the prevalence of social media rumors, we put forward the concept of cultural heuristics and explore how they influence the attitudes and behaviors of Chinese citizens towards rumors when facing an epidemic, i.e., COVID-19. We recruited 12 Chinese citizens for semi-structured interviews and used grounded theory to code and analyze the data. The results show that China's unique cultural, social, and historical background and collective behavior shape citizens' attitudes and beliefs towards rumors and thus their behavior. This study is an exploration of rumor culture in a non-Western society. It can help us better understand Chinese citizens' cultural cognition and value orientations regarding rumors.
Aimed at dispelling the myth of the "average user", the notion of personalization promises to resolve the contradiction between the population's heterogeneity and one-size-fits-all security nudges. To explore this promising avenue, we propose designing personalized security nudges targeted at different mindsets, using Consideration for Future Consequences (CFC) as the testbed. Namely, we designed two CFC-targeted security nudges, Promotion and Prevention, for individuals who care about future and immediate consequences, respectively. An online survey (N = 145) was conducted to test their effectiveness. Results show that both nudges can improve users' security attitudes, while the moderation effects imply that the Promotion nudge is only effective for users with deep concerns about future consequences. The findings indicate the feasibility of designing security nudges targeted at future orientations and illustrate the importance of tailoring nudges to individual differences.
Computer voice is experiencing a renaissance through the growing popularity of voice-based interfaces, agents, and environments. Yet, how to measure the user experience (UX) of voice-based systems remains an open and urgent question, especially given that their form factors and interaction styles tend to be non-visual, intangible, and often considered disembodied or “body-less.” As a first step, we surveyed the ACM and IEEE literatures to determine which quantitative measures and measurements have been deemed important for voice UX. Our findings show that there is little consensus, even with similar situations and systems, as well as an overreliance on lab work and unvalidated scales. In response, we offer two high-level descriptive frameworks for guiding future research, developing standardized instruments, and informing ongoing review work. Our work highlights the current strengths and weaknesses of voice UX research and charts a path towards measuring voice UX in a more comprehensive way.
The COVID-19 pandemic forced many people to abruptly shift to remote work in early 2020. But as countries recovered from the pandemic, as occurred in China beginning in the spring of 2020, companies went through a process of reopening their offices. People worked in a hybrid mode in which they could decide how to divide their time between working from home and in the office. In this research, we explored the key factors that shaped employees' decisions. We conducted a survey and interviews with employees in China of a global technology company. The data demonstrated people's work-time arrangements between home and office, their experiences when working from home, and their preferred work mode. Through the interviews, we identified people's diverse strategies and the reasons behind their decisions of where to work during the hybrid work phase.
We propose a new mobile head-mounted display, ModularHMD, that uses a modular mechanism with a manually reconfigurable structure to enable ad-hoc peripheral interaction with real-world objects and people while maintaining an immersive VR experience. We designed and built a proof-of-concept prototype of ModularHMD using a base commercial HMD and three removable display modules installed in the periphery of the HMD cowl. The user can rapidly switch between different HMD configurations according to their needs. The modules can be removed to ensure peripheral awareness of the real world, used as instant interaction devices (e.g., keyboards), and then returned to their original positions to terminate the peripheral interaction.
Accessibility maps enable people with impairments and elderly people to move around more smoothly and with less risk. However, very few examples satisfy both accuracy and coverage requirements, owing to the high cost of physically auditing ever-changing roads and pathways. Although crowdsourcing approaches can ostensibly solve this problem, existing studies have relied on volunteers with free time and high motivation. In this paper, we propose a crowdsourcing platform for constructing accessibility maps that supports four participation modes: Reporter, for people with plenty of free time and high motivation; Gaming reporter, for people with plenty of free time but low motivation; Walker, for people lacking free time but with high motivation; and Gaming walker, for people lacking free time and with low motivation. This design allows people to select a suitable participation method depending on their time and motivation. In this study, we developed a prototype by integrating deep learning techniques, game design theory, and heatmap visualization.
Due to the COVID-19 pandemic, online meetings have become the new normal amongst business professionals. The usage of video conferencing services such as Zoom has skyrocketed, and Social Virtual Reality (VR) services have also been considered part of this new normal because they enable users to have a spatial online presence; nonetheless, the usage of Social VR has been considerably lower than that of video conferencing services. The purpose of this study is to investigate the interactions of business professionals with the web-based Social VR platform Mozilla Hubs, and to suggest alterations to the user experience, in order to understand why Social VR usage is low amongst business professionals and to promote the usage of a platform that could resolve the lack of spatial presence in the online arena.
Prospective students looking for a human-computer interaction (HCI) laboratory that fits their needs and goals go through a challenging decision-making process. Often, labs and their body of work are opaque to students. In this study, we aim to (1) design an HCI lab browsing system named <HCI LAB DIRECTORY> and (2) investigate how the system supports decision making. We designed a system for browsing laboratories by research topic based on author-defined keyword (ADK) data from publications. We then conducted a user study, including an information browsing task and in-depth interviews. Findings show that our system supported decision making by (1) helping users identify what to look for at the initial stage and (2) helping them develop their own criteria by gaining a salient view of the field at the drill-down stage. Our findings provide empirical understanding regarding decision-making theory and present a new browsing system that organizes scattered and unstandardized information.
Animal-Computer Interaction (ACI) is a discipline that encourages the study of animals' interactions with technology, yet few studies discuss animals with disabilities. This study focuses on cats with hearing loss and explores ways to build an emotional connection between humans and cats through interaction. The prototype design consisted of three parts, all of which were used in a two-week initial feasibility test. In the test, two deaf cats and their keepers participated in the interactive process with volunteers, and the owners' feedback yielded some valuable results that may inform further studies on this topic.
Despite there being evident benefits of using virtual reality (VR) in aged care, it is not yet widely used in residential aged care homes. One factor that may contribute to this is the willingness of staff to use VR as part of the social program offered in aged care homes. Therefore, we need to understand staff perceptions of VR programs, especially suggestions for improvement. In an analysis of responses from 10 staff working in residential aged care (also known as nursing homes), we found that staff have concerns about the suitability of VR for older people with cognitive impairments and mobility restrictions. Many older adults living in aged care have these conditions. Our findings suggest that providing staff with training on how to facilitate various kinds of valuable VR experiences and providing a clear picture of its benefits and drawbacks will help to make it suitable for people living in aged care. Furthermore, there should be greater investment in technological infrastructure and co-design of VR in aged care.
Virtual and augmented reality offer performance for remote usability testing comparable to lab-based co-located settings. However, direct contact with a researcher is still required for setup, troubleshooting, and training. In this paper, we present Surrogate-Aloud, a remote ideation and usability method that establishes a surrogate relationship between participants and a facilitating researcher through video conferencing. The researcher wears a VR headset and shares their viewpoint via video conferencing with a remote participant, who uses the think-aloud protocol to express movement and interaction commands to be executed by the researcher, alongside their thought process, as they interact with virtual prototypes or scenarios. We conducted a preliminary study to evaluate the Surrogate-Aloud method for remote usability evaluation and ideation of a new instructional technique with volumetric recordings. Results show that Surrogate-Aloud leverages the surrogate's technical expertise and provides sufficient capability to conduct truly remote usability evaluation and ideation.
It is difficult for people with visual impairments to maintain balanced nutrition, in part because it is challenging for them to shop for grocery items. In this study, we focused on designing descriptions of grocery items for people with visual impairments to help them make purchase decisions independently. To identify the types of information to be provided, we first conducted an online survey with 73 participants with visual impairments. We then conducted in-depth phone interviews with eight participants to understand how to better design descriptions for different grocery items. Based on the findings, we provide implications for a camera-based wearable grocery shopping assistance system, which is currently in the prototype stage. This system takes the next step toward providing effective assistance for people with visual impairments when shopping for groceries.
Currently, research on automatic documentation tools seldom pays attention to user experience. To explore how existing work considers the user experience of automatic documentation tools and what user experience challenges such tools face, we qualitatively analyzed the evaluation sections of 21 papers. We find that (1) the user experience of automatic documentation tools is usually treated as supplementary to document quality, and (2) automatic documentation tools face three user experience challenges: developers do not trust the generated documents; high expectations towards automatic documentation tools also hurt user experience; and how developers work and acquire information presents another challenge to the design of these tools. Apart from identifying three user-experience-related challenges, this paper also benefits non-SE professionals who are interested in automatic documentation tools by selecting representative papers and summarizing how evaluations are conducted.
Despite the increasing presence of conversational agents (CAs) in our daily lives, the lack of information and technology behind them prevents CAs from answering many questions. One of the most typical problems facing conversational user interfaces today is that they often disappoint people by giving the same answer (e.g., "I don't know"). In this work, we focused on situations in which CAs cannot provide a proper answer because of a lack of information. In these situations, we aimed to find more effective answer strategies for CAs to provide people with better user experiences. We tested four different response strategies using different degrees of inference and information as grounds. We found differences in usability and user experience depending on how CAs respond. Our results will help designers understand how people feel about the way CAs respond and create better CA responses in situations where it is difficult to provide accurate answers.
Cognitive load assessment has become an integral part of human-centered design, as it enables cognition-aware systems to adapt to users' needs. Among the range of physiological metrics, task-evoked pupillary responses have been used successfully to determine mental workload. Only recently were smooth pursuit eye-movements suggested as another oculomotor indicator of cognitive processing demands. The current study compares both parameters with regard to their capacity to differentiate task load in an auditory n-back task. Results replicate earlier findings, suggesting that pupil diameter reflects the level of mental workload. However, task loads at high levels of difficulty are not distinguishable. Task difficulty also influences gaze behaviour during smooth pursuit eye-movements, although far less sensitively: a clear distinction is achieved only between low and high workloads. In contrast, a combination of control measures (reaction times, accuracy rates) is suited to distinguishing between all n-back task stages.
Intersectional bias is a bias caused by an overlap of multiple social factors like gender, sexuality, race, disability, religion, etc. A recent study has shown that word embedding models can be laden with biases against intersectional groups like African American females, etc. The first step towards tackling intersectional biases is to identify them. However, discovering biases against different intersectional groups remains a challenging task. In this work, we present WordBias, an interactive visual tool designed to explore biases against intersectional groups encoded in static word embeddings. Given a pretrained static word embedding, WordBias computes the association of each word along different groups like race, age, etc. and then visualizes them using a novel interactive interface. Using a case study, we demonstrate how WordBias can help uncover biases against intersectional groups like Black Muslim Males, Poor Females, etc. encoded in word embedding. In addition, we also evaluate our tool using qualitative feedback from expert interviews. The source code for this tool can be publicly accessed for reproducibility at github.com/bhavyaghai/WordBias.
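As a hedged sketch of the kind of association score such a tool might compute (the actual WordBias implementation may differ), the bias of a word along a group axis can be measured as the difference of its mean cosine similarities to two sets of attribute-word vectors, in the style of the Word Embedding Association Test. The toy 2-d vectors below are illustrative, not real pretrained embeddings.

```python
import numpy as np

def bias_score(word_vec, group_a_vecs, group_b_vecs):
    """WEAT-style association: mean cosine similarity to group A's attribute
    words minus mean cosine similarity to group B's attribute words."""
    cos = lambda u, v: float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return (np.mean([cos(word_vec, a) for a in group_a_vecs])
            - np.mean([cos(word_vec, b) for b in group_b_vecs]))

# Toy "embeddings" (illustrative only; a real tool loads pretrained vectors).
female = np.array([1.0, 0.0])
male = np.array([0.0, 1.0])
nurse = np.array([0.9, 0.1])
engineer = np.array([0.1, 0.9])

print(bias_score(nurse, [female], [male]) > 0)     # leans toward group A (female)
print(bias_score(engineer, [female], [male]) < 0)  # leans toward group B (male)
```

A score for an intersectional group (e.g., "Black Muslim Males") could then be formed by combining a word's per-axis scores across race, religion, and gender, which is presumably what an interface like WordBias visualizes side by side.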
The behavior of self-driving cars may differ from people’s expectations (e.g. an autopilot may unexpectedly relinquish control). This expectation mismatch can cause potential and existing users to distrust self-driving technology and can increase the likelihood of accidents. We propose a simple but effective framework, AutoPreview, to enable consumers to preview a target autopilot’s potential actions in the real-world driving context before deployment. For a given target autopilot, we design a delegate policy that replicates the target autopilot behavior with explainable action representations, which can then be queried online for comparison and to build an accurate mental model. To demonstrate its practicality, we present a prototype of AutoPreview integrated with the CARLA simulator along with two potential use cases of the framework. We conduct a pilot study to investigate whether or not AutoPreview provides deeper understanding about autopilot behavior when experiencing a new autopilot policy for the first time. Our results suggest that the AutoPreview method helps users understand autopilot behavior in terms of driving style comprehension, deployment preference, and exact action timing prediction.
Home design services employ virtual real estate staging to visually convey proposals to prospective customers, instilling confidence in property owners on how spaces will look once they are furnished. However, creating room-scale visualizations requires days of expert labor to composite products on images provided by the customer. Advances in 3D capture now let users scan their spaces with a smartphone in a matter of minutes, enabling scale-accurate mixed reality experiences that can be leveraged to lower the skill bar and time required to produce visualizations of furniture in the user’s context. We present Home Studio, a web-based tool that lets designers stage any Matterport scan and generate photorealistic renders of products in the user’s context. Our tool enables a drag and drop experience that reconstructs the perspective view of the Matterport scene in a remote service, employing a rendering engine to produce a photorealistic composite.
Immersive virtual experiences are becoming more prominent in journalism, yet they largely focus on realistic simulations of the real world. We introduce a novel VR experience that incorporates surrealism and foregrounds the abstract concepts in the journalistic story of Occupy City Hall, the June 2020 sit-in protest in New York, NY. The experience’s virtual environment contains multimedia primary sources augmented with abstracted representations of their interrelated underlying themes. The user experiences an ambiguous interface, which is designed to provoke curiosity, exploration, and critical thinking. The experience has been live-streamed in 2D to audience observers, who responded positively to the diverse concrete and conceptual content, but felt the ambiguous interface was not supplemented with sufficient guidance. Future user studies will involve the full VR experience, to determine the impact of ambiguous interfaces on user engagement, enjoyment, and conceptual learning, as well as comparing this form of journalistic media to others.
Passively consuming digital social media content often precludes users from mindfully considering the value they derive from such experiences as they engage in them. We present a system for using Twitter that requires users to continuously turn a hand crank to power their social media screen. We evaluate the device and its effects on how users value Twitter with 3 participants over 3 weeks, with the middle week of Twitter usage directed exclusively through our system. Using our device caused a dramatic decrease in Twitter usage for all participants, which either persisted or rebounded in the post-intervention week. Our analysis of diary studies and qualitative interviews surfaced three themes indicating shifting focus on content, shifting awareness about the role of social media, and new social dynamics around content-sharing.
This ongoing work attempts to understand and address the requirements of UNICEF, a leading organization working in children's welfare, which aims to tackle the problem of air quality for children at a global level. We are motivated by the lack of a proper model to account for heavily fluctuating air quality levels across the world in the wake of the COVID-19 pandemic, which has led to uncertainty among public health professionals about the exact levels of children's exposure to air pollutants. We created an initial model per the agency's requirements to generate insights through a combination of virtual meetups and online presentations. Our research team comprised UNICEF researchers and a group of volunteer data scientists. The presentations were delivered to a number of scientists and domain experts from UNICEF and to community champions working with open data. We highlight their feedback and possible avenues for developing this research further.
Soft robotics use in haptic devices continues to grow, with pneumatic control being a common actuation source. Typical control systems used, however, rely on a digital on/off control allowing only inflation/deflation at a set rate. This limits the degrees of freedom available when designing haptic experiences. We present an alternative system to allow the use of analog control of the pneumatic waveform profiles to design and experiment with haptic devices, and to determine the optimum wave profile for the desired experience. Using a combination of off-the-shelf components and a user interface, our system allows for rapid experimentation with various pressure levels, and the ability to control waveform profiles in a common format such as attack-sustain-release. In this paper, we demonstrate that by altering the attack and release profiles we can create a more pleasant pulsing sensation on the wrist, and a more continuous sensation for communicating movement around the wrist.
Some people with upper-body motor impairments but sound lower limbs use their feet to interact with smartphones. However, touching the touchscreen with the big toes is tiring, inefficient, and error-prone. In this paper, we propose FootUI, which leverages the phone camera to track users' feet and translates foot gestures into smartphone operations. This technique enables users to interact with smartphones while reclining on a bed and improves their comfort. We explore the usage scenario and foot gestures, define the mapping from foot gestures to smartphone operations, and develop a prototype on smartphones, which includes the gesture tracking and recognition algorithm and the user interface. Evaluation results show that FootUI is easy, efficient, and engaging to use. Our work provides a novel input technique for people with upper-body motor impairments but sound lower limbs.
Research has highlighted the need for customization of health-related technologies. However, few studies have examined its impact on wearable healthcare devices. We present a co-design study where we learned about people’s preferences and ideas for customized glucose monitors. We worked with people who have Type 1 Diabetes and learned about their challenges with current glucose monitors and ways to address them in physical product design. To understand people’s perception towards using customizable glucose monitors, we prototyped one simple example toolkit, DiaFit, consisting of multiple modular accessories for assembling glucose monitors. We invited participants to try DiaFit and learned about their acceptability of customizable glucose monitors. We conclude with preliminary lessons learned about customization as an approach to addressing individual differences in the context of health technologies.
As open source software (OSS) becomes increasingly mature and popular, there are significant challenges with properly accounting for usability concerns for the diverse end users. Participatory design, where multiple stakeholders collaborate on iterating the design, can be an efficient way to address the usability concerns for OSS projects. However, barriers such as a code-centric mindset and insufficient tool support often prevent OSS teams from effectively including end users in participatory design methods. This paper proposes preliminary contributions to this problem through the user-centered exploration of (1) a set of design guidelines that capture the needs of OSS participatory design tools, (2) two personas that represent the characteristics of OSS designers and end users, and (3) a low-fidelity prototype tool for end user involvement in OSS projects. This work paves the road for future studies about tool design that would eventually help improve OSS usability.
For immigrant families, instant messaging (IM) family groups are a common platform for sharing and discussing health-related information. Immigrants often maintain contact with their family abroad and trust information in shared IM family groups more than information from local authorities and sources. In this study, we aimed to understand the health-related information behaviors of immigrant families in their IM family groups. Based on interviews with 6 participants from families that immigrated to Canada, we found that immigrant families' discourse on IM platforms is motivated by love and care for other family members. The families used local and international sources of information, judged information credibility by its alignment with their pre-existing knowledge, and mostly did not verify information further. Information shared by different users from different sources often contradicted one another. Yet, family members did not discuss the conflicting information due to their desire to avoid tensions.
This work looks into the current state of sharing ephemeral versus permanent content on common social media platforms. Previous research has indicated that ephemerality of content, content that has an expiration date or time, can support users’ identity construction. However, not much is known about whether current ephemeral interactions on social media are successful in doing so, or whether there are ephemerality-related design opportunities for social media platforms that can improve identity expression. In an 8-day qualitative diary study, participants reported when they posted on social media, and responded to questions about the type of content they shared, their motivation, and the content’s ideal duration. We discuss our findings about short-term and long-term ephemerality as part of the social media experience, and the potential impact on the evolving identities of teenagers and young adults.
The widespread adoption of social media brings both challenges and opportunities for the field of disaster management. However, there has been limited scholarly work on social media usage for disaster preparation. Our work explores how social media sites can be designed to help individuals and communities prepare for disasters, in particular with regard to crowdsourcing information, digital mobilization, and community resilience. We present preliminary findings from 4 online focus groups (N=31), which reveal two emergent themes that may prove useful for future research and design in HCI. We also propose four design recommendations for social media sites as tools for disaster preparation. These findings support the design and evaluation of social media sites to aid in community-based disaster preparedness.
Android smartphones have undergone various changes, and uniformity in design (both hardware and software) has become common across most phones. Yet security and privacy issues persist and grow along with the evolution of smartphones. Recent research shows that user awareness is still low and that users do not follow secure behaviour when using smartphones. Although a handful of works exist for improving user awareness and self-efficacy, none of them educate users about Android permissions in a contextual manner. In this paper, we discuss our persuasive game, "PermaRun", which teaches and motivates users to follow secure smartphone behaviour and increases user awareness and self-efficacy regarding Android permissions. We conducted a Heuristic Evaluation for Playability (HEP) and a Persuasiveness Evaluation (PE). The results show that players had a positive experience playing the game and found it playable and persuasive.
Freestanding lace is a method of creating lace by machine embroidery. Lace has advantages over other fabrics in terms of skin-compatibility, weight, and aesthetics. In this paper, we present SkinLace, a freestanding lace made by machine embroidery for on-skin interfaces. Conductive thread, a water-soluble stabilizer, and a home embroidery machine suffice to create SkinLace, which is aesthetic, light, skin-compatible, body-conforming, durable, and inexpensive. The freestanding lace approach enables aesthetic, customizable lace and patches through a combination of non-conductive and colorful threads. We propose three different applications of SkinLace: on-skin displays, capacitive touch sensing, and RFID tag antennas. The tension of the thread in the embroidery machine and the accuracy of the stitch are the main challenges, but the advantages and the potential to create more complex circuitry and enhance sensing capability present rich opportunities for further exploration.
Many data scientists use computational notebooks to test and present their work, as a notebook can weave code and documentation together (computational narrative), and support rapid iteration on code experiments. However, it is not easy to write good documentation in a data science notebook, partially because there is a lack of a corpus of well-documented notebooks as exemplars for data scientists to follow. To cope with this challenge, this work looks at Kaggle — a large online community for data scientists to host and participate in machine learning competitions — and considers highly-voted Kaggle notebooks as a proxy for well-documented notebooks. Through a qualitative analysis at both the notebook level and the markdown-cell level, we find these notebooks are indeed well documented in reference to previous literature. Our analysis also reveals nine categories of content that data scientists write in their documentation cells, and these documentation cells often interplay with different stages of the data science lifecycle. We conclude the paper with design implications and future research directions.
Code puzzles are an increasingly popular approach to introducing programming to young learners. Today, code puzzles are predominantly introduced through static puzzle sequences with increasing difficulty. However, adaptive systems in other domains have improved learning efficiency. This paper takes a step towards developing adaptive code puzzle systems based on controlling learners’ cognitive load. We conducted a study comparing static code puzzle pathways and adaptive pathways that predict cognitive load on future puzzles. While the trialled adaptive recommendation policy did not result in better learning, our findings point us towards a different policy which may have a greater effect on learner experience. In addition, we identify predictors of student dropout, and use our experimental data to quantify learners’ puzzle-solving experiences into 7 principal component properties and use these factors to suggest approaches for future adaptive systems.
One of the most effective ways to learn a second language is to immerse oneself in it in the country where it is spoken. While this is not feasible for most language learners, language learning games may be able to replicate an immersive language learning experience. We developed Delivery Ghost, a web-based game to teach Mandarin Chinese to beginner-level learners, including complete beginners. In a 2x2 experiment (N=159), we tested how immersion and interactivity affect learning gains (pre-post test) and experiences (enjoyment, flow, perceived learning, cognitive load). Participants in all conditions showed significant learning gains even though no explicit, lecture-style teaching was provided. We also found evidence of substantial learning gains and positive learning experiences independent of the level of immersion (i.e., availability of hints in English) and interactivity (i.e., animation vs. gameplay). These results suggest that interactivity and immersion are less critical to learning at the beginner-level than a well-structured curriculum.
Accessibility research sits at the junction of several disciplines, drawing influence from HCI, disability studies, psychology, education, and more. To characterize the influences and extensions of accessibility research, we undertake a study of citation trends for accessibility and related HCI communities. We assess the diversity of venues and fields of study represented among the referenced and citing papers of 836 accessibility research papers from ASSETS and CHI, finding that though publications in computer science dominate these citation relationships, the relative proportion of citations from papers on psychology and medicine has grown over time. Though ASSETS is a more niche venue than CHI in terms of citational diversity, both conferences display standard levels of diversity among their incoming and outgoing citations when analyzed in the context of 53K papers from 13 accessibility and HCI conference venues.
We describe the early-stage development of a tangible block editor for the educational programming language Scratch, intended to contribute to an environment that will allow blind and visually impaired (BVI) students (grades 6-12) to learn computer programming concepts alongside their sighted peers (both independently and in pairs) in mainstream classrooms. In this late-breaking work, we describe our design, which incorporates many of the key strategies of the Scratch visual code editor meant to promote engagement and lower hurdles to programming. Novel key elements of the design include: the strategic use of magnets and locally interlocking block edges to ensure only blocks with valid syntax can be connected; the use of telescoping tubing to specify parameter/operand location and allow expansion for nested expressions; and a block-sized-channel grid work surface that provides structure to aid BVI students in navigating and manipulating their programs.
Our research goal is to summarize the body of persona knowledge by identifying knowledge claims. This can aid HCI researchers in (a) quickly navigating persona knowledge to form an understanding of what is known about personas, (b) identifying central research gaps in what is not known (or said) about personas, and (c) identifying claims that are not substantiated with strong empirical evidence and warrant future work. To this end, we use computational and manual techniques to extract 130 knowledge claims based on 9139 sentences from 346 persona articles and analyze whether the existing literature supports these claims. The results, clustered into four groups ("Definition", "Creation", "Evaluation", and "Use"), indicate that claims regarding persona definition are characterized by a higher degree of consensus. In contrast, persona creation and use contain a high proportion of unverified claims, and there are few claims concerning evaluation. Empirical research should address unverified claims and develop the ontological understanding of persona evaluation.
Through vehicle automation, drivers are increasingly able to execute non-driving-related activities (NDRAs). However, control interfaces are still needed to allow human intervention and foster a sense of control. This paper presents a driving simulator experiment (N = 20) with five likely NDRAs (being idle, eating, smartphone use, conversation, and listening to music) in which participants freely intervened in the driving process, using either touch, voice, or mid-air gesture interaction. We found that NDRAs determine input modality choices. Generally, participants tended to avoid the modalities demanded by the NDRAs; e.g., voice interaction became less frequent while listening to music. In contrast, touch interaction increased during smartphone use, indicating that users tend to stick to a known interaction style under high multitasking workload. Overall, we recommend designing future vehicle interiors and interfaces around multimodal input to account for the diversity of situations and activities while driving automated.
With AR/VR devices becoming increasingly common around us, user authentication on them poses a critical challenge. While typing passwords is straightforward with a keyboard, it is cumbersome with conventional AR/VR input techniques such as in-air gestures and hand-held controllers. In this work, we developed a fluent authentication technique that allows AR/VR users to unlock their profiles with simple head gestures (e.g., nodding), resembling the powerful "Slide to Unlock" interaction on touch screen devices. Specifically, we extract bio-features such as neck length and head radius from IMU sensor readings of these head gestures for user identification with machine learning. Though our approach is less strict than conventional password-based methods, we believe its swiftness greatly facilitates scenarios with frequent user switching (e.g., device sharing across team and family members) that demand quick authentication. Through a 10-participant evaluation, we demonstrated that our system is robust and accurate, with an average accuracy of 97.1% on groups of 5, simulating family and lab use.
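The identification step described in this abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the user names, feature values, and the nearest-centroid classifier stand in for whatever features and ML model the authors actually used.

```python
import random

random.seed(0)

def make_samples(mean, n=20, noise=0.3):
    # Synthetic (neck_length_cm, head_radius_cm) samples around a user's mean.
    return [tuple(m + random.gauss(0, noise) for m in mean) for _ in range(n)]

# Hypothetical per-user feature means (illustrative values).
users = {
    "alice": (10.5, 9.0),
    "bob": (12.0, 10.2),
    "carol": (9.2, 8.4),
}
train = {u: make_samples(mean) for u, mean in users.items()}

# Enrollment: store each user's feature centroid.
centroids = {
    u: tuple(sum(x[i] for x in xs) / len(xs) for i in range(2))
    for u, xs in train.items()
}

def identify(sample):
    # Assign the sample to the nearest enrolled centroid (squared Euclidean).
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(sample, c))
    return min(centroids, key=lambda u: dist(centroids[u]))

print(identify((10.4, 9.1)))  # lands nearest alice's centroid
```

In practice, gesture-derived features would be noisier and higher-dimensional, but the small-group setting (e.g., a family of five) is what makes such lightweight identification plausible.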
Black, Indigenous, and other Women of Color (BIWOC) are underrepresented in STEM and subject to its "chilly" climate; it is unsurprising that BIWOC STEM students report weaker senses of belonging and higher rates of attrition. Counterspaces, spaces of mutual support for BIWOC at the margins of STEM, have long countered dominant STEM culture by supporting BIWOC to thrive and persist in STEM. Digital game design and playful interactions that counter oppression can be leveraged to create digital games that function as counterspaces in which BIWOC STEM students playfully cultivate their belonging and persistence. Our exploratory game design research aims to co-design counterspace games with BIWOC STEM students; here we present our initial focus group designs centered on exploring existing BIWOC counterspace practices, preliminary data and insights, and promising directions for developing game design strategies to support BIWOC belonging and persistence in STEM.
We conducted a daytime naturalistic driving study in which all drivers covered the same 19 km town itinerary under similar light-traffic and fair-weather conditions. We applied a real-time, unobtrusive design that could serve as a template for future driving studies. In this design, driving parameters and drivers' arousal levels were captured via a vehicle data acquisition system and a thermal imaging system, respectively. Analyzing the data, we found that about half of the n = 11 healthy participants exhibited significantly larger arousal reactions to acceleration than the rest of the sample. The acceleration events were of the mundane type, such as entering a highway from an entrance ramp or starting from a red light. The results suggest an underlying grouping of normal drivers with respect to the loading induced by commonplace acceleration. This finding carries implications for certain professions and for the design of semi-autonomous vehicles.
Gendered voice based on pitch is a prevalent design element in many contemporary Voice Assistants (VAs) but has been shown to strengthen harmful stereotypes. Interestingly, there is a dearth of research that systematically analyses user perceptions of different voice genders in VAs. This study investigates gender stereotyping across two different tasks by analyzing the influence of pitch (low, high) and gender (women, men) on stereotypical trait ascription and trust formation in an exploratory online experiment with 234 participants. Additionally, we deploy a gender-ambiguous voice to compare against gendered voices. Our findings indicate that implicit stereotyping occurs for VAs. Moreover, we show that there are no significant differences in trust formed towards a gender-ambiguous voice versus gendered voices, which highlights their potential for commercial usage.
Augmented Reality (AR) has the potential to revolutionize our workspaces, since it considerably extends the limits of current displays while keeping users aware of their collaborators and surroundings. Collective activities like brainstorming and sensemaking often use space for arranging documents and information and thus will likely benefit from AR-enhanced offices. Until now, there has been very little research on how the physical surroundings might affect virtual content placement for collaborative sensemaking. We therefore conducted an initial study with eight participants in which we compared two different room settings for collaborative image categorization regarding content placement, spatiality, and layout. We found that participants tend to utilize the room’s vertical surfaces as well as the room’s furniture, particularly through edges and gaps, for placement and organization. We also identified three different spatial layout patterns (panoramic-strip, semi-cylindrical layout, furniture-based distribution) and observed the usage of temporary storage spaces specifically for collaboration.
In this work, we explore a new sensing technique for smart eyewear equipped with Electrooculography (EOG) sensors. We repurpose the EOG sensors embedded in JINS MEME smart eyewear, originally designed to detect eye movement, to detect mid-air hand gestures. We also explore the potential of sensing human proximity and rubbing actions, and of differentiating materials and objects using this sensor. These newfound sensing capabilities enable various types of novel input and interaction scenarios for such wearable eyewear devices, whether worn on the body or resting on a desk.
The advent of larger machine learning (ML) models has improved state-of-the-art (SOTA) performance in various modeling tasks, ranging from computer vision to natural language. As ML models continue to increase in size, so do their energy consumption and computational requirements. However, the methods for tracking, reporting, and comparing energy consumption remain limited. We present EnergyVis, an interactive energy consumption tracker for ML models. Consisting of multiple coordinated views, EnergyVis enables researchers to interactively track, visualize, and compare model energy consumption across key energy and carbon footprint metrics (kWh and CO2), helping users explore alternative deployment locations and hardware that may reduce carbon footprints. EnergyVis aims to raise awareness of computational sustainability by interactively highlighting excessive energy usage during model training and by providing alternative training options to reduce energy usage.
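The core bookkeeping behind the kWh and CO2 metrics this abstract mentions can be sketched in a few lines. This is not EnergyVis's actual API; the power draw and grid carbon intensity figures below are illustrative assumptions, included only to show how deployment location changes the carbon estimate for the same training run.

```python
# Assumed grid carbon intensities in grams of CO2 per kWh (illustrative).
GRID_G_CO2_PER_KWH = {"quebec": 30, "california": 220, "wyoming": 850}

def training_energy_kwh(avg_power_watts, hours):
    # Energy = average power draw (W) x duration (h), converted to kWh.
    return avg_power_watts * hours / 1000.0

def co2_kg(kwh, location):
    # Emissions depend on where the electricity is generated.
    return kwh * GRID_G_CO2_PER_KWH[location] / 1000.0

# A hypothetical 48-hour run on a 300 W accelerator: 14.4 kWh total.
run_kwh = training_energy_kwh(avg_power_watts=300, hours=48)
for loc in GRID_G_CO2_PER_KWH:
    print(f"{loc}: {co2_kg(run_kwh, loc):.2f} kg CO2")
```

The same 14.4 kWh run emits roughly 28 times more CO2 on a coal-heavy grid than on a hydro-dominated one, which is the kind of comparison an interactive tracker makes visible.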
Negative thoughts are a widespread everyday experience. Failures to cope appropriately with negative thoughts are related to serious mental health issues, such as depression. Consequently, preventive, technology-mediated everyday interventions to support coping with negative thoughts are of interest. One promising platform for mental health applications is personalized virtual reality (VR). We developed an explorative VR prototype based on personally relevant textual messages from email, messengers, and the like, which trigger negative thoughts. The prototype presented these messages in VR and allowed users to physically manipulate them, for example by punching and trashing them. A qualitative empirical exploration (N=10) revealed a generally positive shift in thoughts and emotions after using the prototype, mainly in the form of increased relaxation and self-reflection. Based on these and further insights, we derive four themes for VR in mental health, touching upon the importance of personalization, immersion and focus, interaction design and embodiment, as well as integration into everyday life.
Children are increasingly exposed to virtual reality (VR) technology as end-users. However, they miss an opportunity to become active creators due to the barrier of insufficient technical background. Creating scenes in VR requires considerable programming knowledge and excludes non-tech-savvy users, e.g., school children. In this paper, we showcase a system called VRtangibles, which combines tangible objects and touch input to create virtual scenes without programming. With VRtangibles, we aim to engage children in the active creation of virtual scenes via playful hands-on activities. From the lab study with six school children, we discovered that the majority of children were successful in creating virtual scenes using VRtangibles and found it engaging and fun to use.
Military planners use “Operational Design” (OD) methods to develop an understanding of systems and relationships in complex operational environments. Here, we present Causeworks, a visual analytics application for OD teams to collaboratively build causal models of environments and use analytics to understand and find solutions to affect them. Collaborative causal modelling can help teams craft better plans, but there are unique challenges in developing synchronous collaboration tools for building and using causal models. Collaboration systems typically organize information around varying degrees of synchronization between data “values” and user “views.” Our contribution is in extending this collaboration framework to include analytics as layers that are by nature derived from the data values but utilized and displayed temporarily as private views. We describe how Causeworks overlays analytics inputs and outputs over a shared causal model to flexibly support multiple modeling tasks simultaneously in a collaborative environment with minimal state management burden on users.
Interactive virtual conferencing has become a necessity in adapting to travel reductions during the global pandemic. This paper reports experience with a recent 5-week VR conference with participants from academia and leading industry experts. Drawing on Activity Theory and Installation Theory, a structural grid for virtual conferencing activity analysis is described. We argue that for successful interactive virtual conferencing, the installation must facilitate both the development of knowledge and informal social interaction, the ‘epistemic’ and the ‘relational’. We focus on three specific aspects of the conference activity—onboarding, networking, and intersession transitions—to highlight key issues and illustrate the process of design thinking based on distributed architecture. We discuss lessons learned to inform this fast-growing field: provisions for meaningful social interactions remain underdeveloped in current conferencing systems.
During the rapid and forced move to online teaching during the Covid-19 pandemic, university courses that would never have been considered suitable for online teaching under normal circumstances were moved online. While challenging, this also opened opportunities for developing innovative strategies for teaching and learning. We report on the experiences of moving online a course in embodied interaction, which, due to its focus on situated and embodied design, faced extensive challenges in this transition. We highlight in particular the way in which the course brought forward innovative methods for bodystorming and user-assisted trialling. This paper has been written jointly by students and teachers from the course.
Many researchers have been concerned with whether social media has a negative impact on the well-being of its users. With the popularity of social networking sites (SNS) steadily increasing, the psychological and social sciences have shown great interest in their effects and consequences on humans. In this work, we investigate Facebook using the tools of HCI to find connections between interface features and the concerns raised by these domains. Using an empirical design analysis, we identify interface interferences impacting users' online privacy. Through a subsequent survey (n = 116), we find usage behaviour changes due to increased privacy concerns and report individual cases of addiction and mental health issues. These observations are the result of a rapidly changing SNS creating a gap of understanding between users' interactions with the platform and future consequences. We explore how HCI can help close this gap and work towards more ethical user interfaces in the future.
In this exploratory study, we examine the possibilities of non-invasive Brain-Computer Interfaces (BCI) in the context of Smart Home Technology (SHT) targeted at older adults. During two workshops, one stationary and one online via Zoom, we gathered end users' insights concerning the potential of BCI in the SHT setting. We explored its advantages and drawbacks, the features older adults see as vital, and the ones they would benefit from. Apart from evaluating the participants' perception of such devices during the two workshops, we also analyzed key considerations resulting from the gathered insights, such as potential barriers, ways to mitigate them, and strengths and opportunities connected to BCI. These may be useful for designing BCI interaction paradigms and pinpointing areas of interest to pursue in further studies.
The unprecedented pandemic of the infectious coronavirus disease (COVID-19) is still ongoing. Considering the limitations and restrictions imposed by COVID-19, we explored the role of technology and the extent of its use by end users. In our online survey, we investigated users' perspectives on their use of technologies in different contexts (e.g., work, entertainment), taking into consideration intrinsic factors such as health consciousness, perceived social isolation, and pandemic-related concerns. Results from 218 respondents show a significant increase in technology use in all investigated contexts after the pandemic occurred. Moreover, the results suggest that different factors may contribute to such increases, depending on the context. It appears that perceived social isolation, concerns about the pandemic, and tracking have the most prominent influence on differences in technology use. Furthermore, open-ended responses describe beneficial opportunities, concerns and consequences, and behavioral transformations and adaptations due to COVID-19. Our findings provide insights for designing and developing new technologies, especially for communication and entertainment, to support users' needs during a pandemic.
Mountain biking as a recreational sport is currently thriving. During the ongoing COVID-19 pandemic, even more people started compensating for a lack of activity through individual outdoor sports such as cycling. However, when executed beyond paved forest roads, mountain biking is a sport with subjective and objective risks, in which crashes often cannot be entirely avoided and athletes may get injured. In this late-breaking work, we showcase a concept for a crash risk indication application for sports smartwatches. First, we review a wide range of related work, which formed the basis for our crash risk indication metric. We discuss options for the sensor-based detection of internal and external risk factors and propose a way to aggregate them, which will allow dynamic and potentially automatic fine-tuning by observing or obtaining feedback from the athlete. In addition, we present a concept for a smartwatch application that will provide constant feedback and an unobtrusive signal to the athlete when an unusually high risk is detected. Finally, we give an outlook on the steps necessary to implement our approach as a smartwatch app.
Breathing exercises reduce stress and anxiety and are commonly implemented in well-being applications. Here, we compare how well three synthetic auditory feedback stimuli (breath, music, and compound) can guide slow and fast breathing. The results indicate that all three feedback types helped participants entrain the target breathing rate; however, the deviation from the target rate was higher for fast than for slow breathing. Importantly, when the target rate was fast, the compound feedback resulted in a significantly smaller average respiration error and a longer duration close to the target respiration rate, and the breath feedback resulted in a smaller average deviation from the target pace than the music feedback. The results point towards an advantage of compound and ecological sound stimuli, in particular when the target respiration rate is fast.
Agility is the ability to change the body's position quickly and efficiently while maintaining control of speed and direction. To develop this skill, agility ladders are a widespread and well-known training method. While drills vary, having a human expert explain and monitor the exercises is usually advantageous. So far, only a few interactive systems for agility ladder training have been presented. In this work, we propose an interactive projection system to support an athlete in performing those drills. To investigate the effects of the location of the projections, we conducted an initial study with twelve participants using qualitative and quantitative methods. We found that while projecting instructions and feedback on the floor was favored, participants who were presented the same information in front of them on a projected screen partially performed better in a subsequent agility assessment.
So-called smart factories with networked physical machinery and highly automated manufacturing processes offer huge potential for efficiency and productivity increases. While respective user-centered research has been investigating assistance solutions for concrete maintenance or assembly tasks, this paper explores worker-oriented mobile and wearable systems for monitoring such complex and demanding manufacturing environments and for preparing for potential interventions. In four co-design workshops and focus groups, we investigated a manufacturing staff’s requirements for such monitoring systems and designed and evaluated low- and high-fidelity prototypes. Based on these insights, we derive a set of general design recommendations for mobile and wearable monitoring systems for smart factories.
Enabling healthier online deliberation around issues of public concern is an increasingly vital challenge in today's society. Two fundamental components of healthier deliberation are: (i) the capability of people to make sense of what they read, so that their contributions can be relevant; and (ii) the improvement of the overall quality of the debate, so that noise can be reduced and useful signals can inform collective decision making. Platform designers often resort to computational aids to improve these two processes. In this paper, we examine automated reporting as a promising means of improving sensemaking in discussion platforms. We compared three approaches to automated reporting: an abstractive summariser, a template report, and an argumentation highlighting system. We then evaluated improvements in participants' sensemaking and their perception of the overall quality of the debate. The study suggests that argument mining technologies are particularly promising computational aids for improving sensemaking and the perceived quality of online discussion, thanks to their capability to combine computational models for automated reasoning with users' cognitive needs and expectations of automated reporting.
Live performances are immersive shared experiences, traditionally taking place in designated, carefully designed physical spaces such as theatres or concert halls. As it becomes increasingly common for audiences to experience this type of content remotely using digital technology, it is crucial to reflect on the design of digital experiences and the technology used to deliver them. This research is guided by the question: How can the design of streaming technologies support artists in creating immersive and engaging audience experiences? A series of audience studies, which took place as cultural organisations were forced to adapt and deliver their content remotely due to the COVID-19 global pandemic, highlighted problems with existing streaming solutions and informed a set of design recommendations for audience experience and research.
The pandemic has caused a significant increase in the use of videoconferencing for oral presentations. Prior work demonstrated that an embodied conversational agent that co-delivers an oral presentation can reduce public speaking anxiety and increase presentation quality in face-to-face presentations. In this work, we evaluate the use of a co-presenter agent in virtual presentations given over a videoconferencing system, comparing them to presentations given without the agent. We found that participants were satisfied with the co-presenter agent, and those who liked the agent (scoring above the mean on a composite self-report measure of satisfaction) rated the presentations they gave with the agent as having significantly higher quality than those given without it. There was also evidence that the agent helped participants feel less nervous about their talks. Interviews confirmed these findings and identified additional advantages and disadvantages of using co-presenter agents in virtual presentations.
The current state of audio rendering algorithms allows efficient sound propagation that reflects the realistic acoustic properties of real environments. Among the factors affecting the realism of acoustic simulations is the mapping between an environment’s geometry and the acoustic properties of the materials it represents. We present a pipeline that infers material characteristics from their visual representations, providing an automated mapping. A trained image classifier estimates semantic material information from textured meshes, mapping predicted labels to a database of measured frequency-dependent absorption coefficients. Trained on material image patches generated from superpixels, the classifier performs inference on meshes by decomposing their unwrapped textures. The most frequent label among the predicted texture patches determines the acoustic material assigned to the input mesh. We test the pipeline on a real environment, capturing a conference room and reconstructing its geometry from point cloud data. We estimate a Room Impulse Response (RIR) of the virtual environment, which we compare against a measured counterpart.
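The pipeline's final assignment step can be sketched as a simple majority vote over per-patch predictions. A minimal sketch follows; the material labels and absorption coefficients are invented for illustration and are not taken from the paper's database.

```python
from collections import Counter

# Hypothetical per-patch labels produced by the image classifier for one mesh
patch_labels = ["carpet", "plaster", "carpet", "wood", "carpet"]

# Hypothetical measured absorption coefficients per octave band (125 Hz - 4 kHz)
absorption_db = {
    "carpet":  [0.08, 0.24, 0.57, 0.69, 0.71],
    "plaster": [0.14, 0.10, 0.06, 0.05, 0.04],
    "wood":    [0.15, 0.11, 0.10, 0.07, 0.06],
}

def assign_material(labels, database):
    """Pick the most frequent predicted label and look up its coefficients."""
    material, _ = Counter(labels).most_common(1)[0]
    return material, database[material]

material, coeffs = assign_material(patch_labels, absorption_db)
```

Here "carpet" wins the vote, so its frequency-dependent coefficients would be assigned to the mesh before RIR estimation.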
Adjusting the balance between the player’s skill and the game’s difficulty level is one of the most important factors in improving player engagement. However, it is still quite rare to find work that aims to directly control the subjective difficulty perceived by the player. Our research question is whether it is possible to control the perceived difficulty merely by adding enemy objects that do not raise the actual difficulty level. To investigate this issue, we designed a simple shooting game with two kinds of ‘fake enemy bullets’, Unreachable Bullets and Non-collisionable Bullets, which do not damage the player character. The experiment suggests that the non-collisionable bullet can efficiently increase the perceived difficulty level but the unreachable bullet does not. Such studies of novel techniques that control perceived difficulty without changing actual difficulty could contribute to both research and practice in game design.
When designing an HCI experiment, planning the sample size with an a priori power analysis is often skipped due to the lack of reference effect sizes. On the one hand, skipping it can lead to a false-negative result, missing an effect that is present in the population. On the other hand, it poses the risk of spending more resources than necessary if the number of participants is too high. In this work, I present references for small, medium, and large effect sizes for typing experiments, based on a meta-analysis of well-cited papers from the CHI conference. This effect size ruler can be used to conduct an a priori power analysis or to assess the magnitude of a found effect. This work also includes comparisons to other fields and concludes with a discussion of existing issues with reporting practices and data availability. This paper and all data and materials are freely available at https://osf.io/nqzpr.
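As a rough illustration of the a priori power analysis this abstract refers to, the sketch below computes a per-group sample size from an assumed standardized effect size, using the standard normal approximation to the two-sided, two-sample t-test (stdlib only; the exact t-based answer is typically one or two participants higher).

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means,
    using the normal approximation to the t-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Cohen's conventional medium effect (d = 0.5) needs about 63 per group;
# a large effect (d = 0.8) needs about 25.
n_medium = sample_size_per_group(0.5)  # -> 63
n_large = sample_size_per_group(0.8)   # -> 25
```

The point of a domain-specific "effect size ruler" is to supply a defensible `d` for this calculation instead of falling back on Cohen's generic conventions.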
To show off their playing, musicians publish performance videos on streaming services. To identify the typical characteristics of guitar performance videos, we carried out a quantitative survey of such videos, and then discuss the key problems of creating effects informed by the survey. The survey shows that authoring videos with typical effects takes a long time even for experienced users, because they typically need to combine multiple video tracks (e.g., lyrics and videos shot from multiple angles) into a single track: they must synchronize all tracks with the musical piece and set transitions between them at the right moments, mindful of the musical structure. This paper presents Instrumeteor, an authoring tool for musical performance videos. First, it automatically analyzes the musical structure in the tracks to align them on a single timeline. Second, it implements the typical video effects identified in the survey. In this way, our tool reduces manual work and unleashes musicians’ creativity.
Thermal comfort is an important factor in building control, affecting occupant health, satisfaction, and productivity. Building management systems in commercial spaces commonly operate on predefined temperature setpoints and control strategies. Many systems target aggregated cohort comfort and neglect to consider the individual occupant’s thermal preferences, leading to high dissatisfaction rates. While recent studies focus on personalized comfort models, such systems mainly operate on occupant preference prediction and do not investigate the reasons for discomfort.
This paper presents TREATI’s human-in-the-loop decision-making process. TREATI is a framework that targets thermal comfort conflict resolution in shared spaces using rationale management techniques while considering both individual and cohort comfort. TREATI uses several levels of abstraction separating device management, event processing, context, and rationale management. This separation allows users to adapt the framework to existing building management systems to provide fair decision-making.
When the COVID-19 pandemic struck in March 2020, universities worldwide were forced to suddenly move all in-person instruction online. In isolation and away from their regular structures and coping mechanisms, students were forced to rely on online learning technology (OLT) as a full replacement for in-person learning. We hypothesize that students in this circumstance experience feelings of learned helplessness regarding OLT and suffer from poorer overall mental health. We present a mixed-methods study investigating these phenomena during the Spring 2020 semester among a diverse group of students. We explore multiple factors that contributed to these phenomena, such as motivation, growing exhaustion with online learning, and feelings of connectedness that were lost and could not be recreated via online meeting software.
Deaf children born to hearing parents lack continuous access to language, leading to weaker working memory compared to hearing children and deaf children born to Deaf parents. CopyCat is a game where children communicate with the computer via American Sign Language (ASL), and it has been shown to improve language skills and working memory. Previously, CopyCat depended on unscalable hardware such as custom gloves for sign verification, but modern 4K cameras and pose estimators present new opportunities. Before re-creating the CopyCat game for deaf children using off-the-shelf hardware, we evaluate whether current ASL recognition is sufficient. Using Hidden Markov Models (HMMs), user-independent word accuracies were 90.6%, 90.5%, and 90.4% for AlphaPose, Kinect, and MediaPipe, respectively. Transformers, a state-of-the-art model in natural language processing, performed 17.0% worse on average. Given these results, we believe our current HMM-based recognizer can be successfully adapted to verify children’s signing while playing CopyCat.
Virtual Reality (VR) is a valuable tool for studying pedestrian behaviour in complex and realistic scenarios. However, it has remained unknown how different VR technologies influence pedestrian behaviour. This paper presents VR experiments conducted with 70 participants who used either desktop VR or head-mounted display (HMD) VR to perform four different wayfinding tasks in a multi-storey building. Quantitative analyses of pedestrian behaviour data and user experience data were performed in order to investigate the impact of the technological differences between the two VR techniques. Participants in the desktop group showed better wayfinding task performance; however, route and exit choices and user experience were broadly similar between the two groups. The findings suggest that one could adopt ‘simpler’ VR technologies for studies featuring ‘simple’ wayfinding tasks.
Context: Nowadays, video games look and feel increasingly realistic, and provide engaging and immersive experiences to players. Understanding what makes games so appealing and immersive is important in order to provide better experiences.
Objectives: To provide a better understanding of the concepts of flow, immersion, and usability in video games, and to understand how usability interacts with these concepts.
Method: 20 participants took part in user tests and answered a survey about usability and immersion.
Results: Usability had a positive effect on immersion which was fully mediated by appeal. Learnability was correlated with absorption by activity, and usability was correlated with fluency of performance.
Conclusions: Usability and appeal both contribute to the feeling of immersion, which in turn increases general appreciation of the game. Immersion and flow are both relevant to understanding players’ appreciation of the game.
How working statically-typed functional programmers author code is largely understudied. And yet, a better understanding of developer practices could pave the way for the design of more useful and usable tooling, more ergonomic languages, and more effective on-ramps into programming communities. The goal of this work is to address this knowledge gap: to better understand the high-level authoring patterns that statically-typed functional programmers engage in. I did a grounded theory analysis of thirteen programming sessions of practicing functional programmers, eight of which also included a semi-structured interview. The theory I developed gives insight into how the specific affordances of statically-typed functional programming affect domain modeling, type construction, focusing techniques, exploratory strategies, mental models, and expressions of intent. The success of this approach in revealing program authorship patterns suggests that the same methodology could be used to study other understudied programmer audiences.
Email and other forms of electronic communication are becoming increasingly essential to our everyday lives. With this growth, however, comes a parallel increase in the risk of email harassment, exacerbated by the current lack of platform support for managing these harmful messages. This paper explores different interfaces for the automated detection and management of email harassment using artificial intelligence, in order to investigate what degree of platform intervention email users prefer when navigating their email platform. Through a study involving three email platform prototypes based on the Gmail platform, we evaluate how varying levels of platform intervention affect users’ perceived sense of safety, agency, and trust in their email platform. Our primary findings suggest that users generally benefited from each of the system intervention strategies, desired higher-intervention features when combating email harassment, and wanted ways of managing this intervention based on their unique preferences.
Applications of telehealth technologies have largely focused on the management of chronic illnesses. However, the use of telehealth for preventative healthcare is becoming increasingly relevant as the aging population rapidly grows. In this paper we draw parallels between Quantified Self enthusiasts and those looking to monitor personal data for preventative healthcare. From this relation, we leverage aspects of the Quantified Self to define design requirements for preventative telehealth tools, and present a telehealth system designed to aid prevention of age-related vision loss.
Human activity recognition (HAR) systems based on machine learning normally serve users with a fixed sensor position. A uniform sensor position, however, cannot satisfy users’ demands under varying conditions. In this paper, we treat the sensor position as an interface between the user and the sensor system, and design an optimization scheme that generates the best sensor positions for an activity recognition system. The user can indicate his/her preferred or disliked positions and the number of sensors, and the proposed optimization evaluates which position, or combination of positions, yields the best accuracy under the user’s preferences. Our experiments show that the proposed scheme can discover optimal positions that help the HAR system in a simple and customized way.
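The evaluation the abstract describes can be sketched as a brute-force search over the sensor combinations the user allows. The positions and accuracy scores below are made up for illustration; a real system would obtain each score by validating the recognizer with that sensor placement.

```python
from itertools import combinations

def best_sensor_placement(candidate_positions, accuracy, n_sensors, disliked=()):
    """Exhaustively evaluate all allowed position combinations and return the best."""
    allowed = [p for p in candidate_positions if p not in disliked]
    best = max(combinations(allowed, n_sensors), key=accuracy)
    return best, accuracy(best)

# Illustrative (made-up) validation accuracies for each pair of sensor positions
scores = {
    frozenset({"wrist", "waist"}): 0.91,
    frozenset({"wrist", "ankle"}): 0.88,
    frozenset({"waist", "ankle"}): 0.85,
    frozenset({"wrist", "chest"}): 0.90,
    frozenset({"waist", "chest"}): 0.87,
    frozenset({"ankle", "chest"}): 0.83,
}

def acc(combo):
    return scores[frozenset(combo)]

# The user wants two sensors and dislikes the chest position
combo, score = best_sensor_placement(
    ["wrist", "waist", "ankle", "chest"], acc, 2, disliked=("chest",)
)
```

With the chest excluded, only the three chest-free pairs are scored, and the wrist-waist pair wins; the search stays trivial because the number of candidate positions is small.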
Patients with complex conditions and treatment plans often find it challenging to communicate with multiple providers and to prioritize various management tasks. The challenge is even greater for patients with discordant chronic comorbidities (DCCs), a situation in which a patient has conditions with unrelated and/or conflicting treatment plans. We identified possible needs and designed an app that addressed those needs (including goal setting, ease of use, monitoring, and motivation). We then tested this app with patients with DCCs to see whether those needs were addressed. We present results from a usability study that highlight the design preferences of patients with DCCs.
Reality Tales is a platform that facilitates interaction with fictional story characters for readers aged 13-20. The platform leverages binaural audio and multi-voice narration for an immersive story experience. Several paradigms for interactive storytelling have emerged in recent years, involving sequence-based story generation with user inputs; however, current work in this domain rarely applies to existing fictional stories with defined characters and plots. Our work introduces voice-based conversational interaction with story characters as a novel dimension of digital interactive storytelling. Through a user-centered process and qualitative studies, we find that giving users the agency to converse directly with story characters about their lives makes users invested in the storyline.
In this paper, we introduce Expressive Skin, a 3D-printed haptic display interface that links the virtual world with the real world through the user's body. This research's vision is to imagine an advanced version of the human body with an extended prosthesis that can leverage human cognitive abilities. We believe that by immersing our bodies in virtual content, we can foster creative self-expression and imagination, and thus design new lifestyles. This is the concept of Extensive Immersion. Expressive Skin is equipped with a projector on its back, four pyramid-shaped projection modules on its front, and an LED lighting system that strengthens the visualization of gaming content by reacting simultaneously with the projections and with the tactile interaction (Figure 1: full sketch of the high-fidelity prototype). Through the creation of Expressive Skin, we want to ignite the fuse of fashion and gaming as a viable way to achieve Extensive Immersion.
Human-Drone Interaction is a fast-growing subset of the field of Human-Computer Interaction, as drones, which can be regarded as pixels in physical space, enable novel and interesting interactions in 3D space. We examine the intersection of drones and play, and explore how drones can facilitate playful bodily experiences by pervading the physical space. We focus on “Paida” play to build explorative play experiences that pervade the player's physical environment. This work resulted in three play experiences built around simple interaction methods, observations from our design process, and three design strategies: drone-based play lends itself to collaborative play, such play benefits from simple designs, and designers benefit from designing for multiple players. From these design strategies, we set a starting point for designing novel, pervasive, and playful interactions that regard drones as pixels in the physical space.
Human-Computer Integration is an extension of the Human-Computer Interaction paradigm that explores systems in which the boundary between user and computer is blurred. We build on this and explore its intersection with the fields of biodata and playful expression by presenting “Wigglears”, a wearable system that wiggles the wearer's ears based on skin conductance, aiming to explore playful approaches to integrated biodata-based self-expression. Through an autobiographical study, we demonstrate the system's ability to fuel social dialogue, amplify positive emotions, and trigger refocus. We intend our system to be a novel way of expressing emotions within social interactions, and we hope to offer insights into the social implications of biodata-based integration as a social cue, to help further research within Human-Computer Integration.
Video games are often about being able to do things that are not possible in real life, about experiencing great adventures and visiting new places. Yet, as prolific as gaming is, it is inaccessible to a significant number of people with neuromuscular diseases who are unable to play games with traditional input methods like game controllers or keyboard-and-mouse combinations. While primarily used for entertainment in its early days, gaming now offers the possibility of countering social isolation and connecting with others through multiplayer games, online gaming communities, and game streaming. In our work, we explore how facial expression recognition can be harnessed to give quadriplegic individuals a way to play games independently and without complex mouth-controller devices. We demonstrate our input interface with the design of a first-person shooter game.
Since the advent of augmented reality, video game representations have been extended beyond the display. Many studies have used real objects to assist in the presentation of visual and haptic information, resulting in video games that are integrated with reality. Researchers often use tangible UIs as a method to extend visual and haptic information presentation, but they commonly rely on speakers for auditory information presentation, and tangible UIs dealing with real objects have few applications beyond some musical ones. In this study, we developed a device that presents game sound effects using real-object sound sources. By assigning a real-object sound source to each game sound effect, auditory information can not only be presented without speakers, but a tangible UI can also be realized in which the user accesses the video game by touching the sound source. We name this tangible UI the Tangible-Auditory User Interface (t-AUI), by analogy with GUI and CLI. The t-AUI enables tangible interactions with sound-generating devices, such as holding down a sound source to block the output of a video game sound effect, or reproducing a sound effect with a real object to operate the video game.
While ostensibly meritocratic, our society functions on an imperceptible yet undeniable paradigm of privilege. Privilege is an unearned, unasked-for advantage gained because of the way society views an aspect of an individual's identity, such as race, caste, ethnicity, gender, socioeconomic status, and language. Privilege permeates nearly every facet of human existence, and yet we seem to barely recognise its role in what are seemingly impartial outcomes. Tread Together is an interactive narrative-based game that uses gameplay as a tool to help people better reflect on their own privilege and examine how it affects the people around them, using a combination of critical and reflective design. The narrative transports players on a journey through an alternative perspective in the game, with the express intent of introspective contemplation and scrutiny of privilege as a dominant paradigm in society.
We constantly both learn from and play with the world around us. We interact, we experiment, and we are curious; and through the spirit of Zimmerman’s ideas on ‘gaming literacy’, we can see the world as opportunities for play. This project thus presents ChemCraft, an adventure computer game based on chemistry. The game focuses on a ludic approach for educational game design, translating digital game rules from real chemistry rules: gameplay mechanics follow chemical reactions, game objects represent chemical compounds, and game object properties reference real chemistry data values. The game ChemCraft targets the niche field of games for higher education, specifically IB Higher Level Organic Chemistry, to investigate to what extent chemistry systems may be translated into game systems. The project asks not ‘What does a chemistry game look like’ but instead: ‘How can chemistry be a game?’
ReWIND is a role-playing game (RPG) designed to enhance the emotion control of patients with generalized anxiety disorder (GAD) over feelings of excessive worry and fear by integrating cognitive behavioral therapy (CBT) into a serious game. The goal of the game is to allow players to virtually encounter different anxiety-causing situations and to provide constructive measures for dealing with negative feelings, following the ABC model of CBT (antecedent, belief, consequence) along with disputation and new effect. The storyline's foundation focuses on four emotion regulation strategies common in GAD: catastrophizing, rumination, denial, and lack of refocus on planning. Each strategy consists of three scenarios that simulate real-life occurrences that GAD patients might find hard to overcome. CBT elements are integrated with the diegetic components implemented in the game to provide psychoeducation in a fun way, which can be used to complement counseling.
In an era of digitalization, the relevance of programming in real-time development environments is constantly increasing. This also applies to the Unity Engine. Within the framework of a bachelor's thesis, the first freely accessible Unity programming learning game, 'ENC#YPTED', was developed. ENC#YPTED aims to support a positive learning experience in the acquisition of programming skills by immersing the learner in a game world. The application was iteratively improved and refined through user-centered development and user experience testing, resulting in a high level of usability. Game duration and difficulty were not relevant for most test subjects in achieving their goals. Considering the testers' high levels of motivation and entertainment, it can be assumed that participants had a flow experience while playing ENC#YPTED. This paper describes the game concept and the user-centered design research accompanying the implementation process.
The sudden transition to online education due to the COVID-19 pandemic created a lack of informal embodied social interactions between children. To address this, we developed What The Flock?, an online game for Active Breaks for children in primary and elementary school. Children collaboratively create migratory bird flock formations by controlling a bird character through bodily movement captured by their webcam. A calibration process allows each child to define the thresholds of movements to control their bird. Each player also controls the volume of a sound track, resulting in a unique song for every instance of game play.