This symposium showcases the latest HCI work from Asia, as well as work focusing on incorporating Asian sociocultural factors in design and implementation. In addition to circulating ideas and envisioning future research in human-computer interaction, the symposium aims to foster social networks among academics (researchers and students) and practitioners and to grow a research community from Asia.
As HCI Across Borders aspires to celebrate its fifth year at CHI, and the CHI 2020 venue of Hawaii signifies a coming together of four continents, the goal of the 2020 symposium is to bring our focus to themes that unify and foster solidarity across borders. Thus, we select the United Nations' Sustainable Development Goals as our object of study. Many communities within CHI focus on the constrained and ephemeral nature of resources, including the HCI for Development (HCI4D), Sustainable HCI (SHCI), and Crisis Informatics (CI) communities, among several others. We contend that it is time for these communities to come together to address issues of global relevance and impact, and for many more to care. Additionally, as the venue for CHI shifts to Asia in 2021, we aspire to prepare the conference and its participants to grapple with themes that might offer a different and novel perspective when engaged with in the Global South.
The past few years have seen steady growth in the HCI Education Community of Practice (CoP), driven primarily by the "HCI Living Curriculum" workshop at CHI 2018 and the inaugural EduCHI symposium at CHI 2019. In discussions among HCI educators over the past two years, two themes have stood out: creating channels for discussions related to HCI education and providing a platform for sharing HCI curricula and teaching experiences. To that end, we are organizing EduCHI 2020: The 2nd Annual Symposium on HCI Education. Like last year's symposium, EduCHI 2020 will feature paper presentations about HCI education trends, curricula, pedagogies, teaching practices, and diverse and inclusive HCI education. In addition, we will add more opportunities for discussion among and between members of the HCI education community, particularly around solving current and future challenges facing HCI educators.
Death, whilst an inevitable part of being alive, figures in our lives far beyond the event itself. The role that technology can play in how people live as they approach the end of life, as well as in bereavement, is full of rich possibilities, but research here is also fraught with ethical and methodological dilemmas. Although some in HCI have turned their focus to the topic of death, we need to go much further to embrace the contexts relating to it more meaningfully and broadly. Through this design-focused workshop, we will bring experts and interested parties together to creatively explore opportunities and challenges for HCI at the end of life and beyond. Discussions and design activities will be supported by conceptual resources for design, lived-experience accounts, design methods, and ethical resources. The workshop will provide a time and place to bring together experts, but will also provide an open and accepting environment for those for whom HCI at the end of life and beyond is a new area of concern.
Much of the research on authentication in past decades has focused on developing authentication mechanisms for desktop computers and smartphones, with the goal of making them both secure and usable. At the same time, the increasing number of smart devices becoming part of our everyday lives creates new challenges for authentication, in particular since many of those devices are not designed and developed with authentication in mind. Examples include, but are not limited to, wearables, AR and VR glasses, devices in smart homes, and public displays. The goal of this workshop is to develop a common understanding of the challenges and opportunities that smart devices and environments create for secure and usable authentication. To this end, we will bring together researchers and practitioners from HCI, usable security, and specific application areas (e.g., smart homes, wearables) to develop a research agenda for future approaches to authentication.
Recognizing human emotions and responding appropriately has the potential to radically change the way we interact with technology. However, to train machines to sensibly detect and recognize human emotions, we need valid emotion ground truths. A fundamental challenge here is the momentary emotion elicitation and capture (MEEC) from individuals continuously and in real-time, without adversely affecting user experience. In this first edition of this one-day CHI 2020 workshop, we will (a) explore and define novel elicitation tasks, (b) survey sensing and annotation techniques, and (c) create a taxonomy of when and where to apply an elicitation method.
In recent years, the notion of smart cities has become the focus of a growing body of research. To date, much of this attention has revolved around technical aspects, with related concerns including the creation and implementation of suitable smart city technologies. What is notably missing from these discussions, however, is a consideration of the lived experience of supposedly 'smart spaces' and the extent to which physical and digital environments are currently producing new forms of play and playfulness that can be contextualized within this field. With this in mind, the purpose of our workshop is twofold: first, to provide a platform for researchers and practitioners to engage with these issues, which often remain hidden when discussions focus solely on technology; second, to develop a draft research agenda that will serve as a primer for future studies examining the topic.
Artificial intelligence (AI) and Human-Computer Interaction (HCI) share common roots, and early work on conversational agents laid the foundation for both fields. However, in subsequent decades the initial tight connection between the fields became less pronounced. The recent rise of deep learning has revolutionized AI and has led to a raft of practical methods and tools that significantly impact areas outside of core AI. In particular, modern AI techniques now power new ways for machines and humans to interact. Thus it is timely to investigate how modern AI can propel HCI research in new ways and how HCI research can help direct AI developments. This workshop offers a forum for researchers to discuss new opportunities in bringing modern AI methods into HCI research, identifying important problems to investigate, showcasing computational and scientific methods that can be applied, and sharing datasets and tools that are already available or proposing those that should be further developed. The topics we are interested in include deep learning methods for understanding and modeling human behaviors and enabling new interaction modalities, hybrid intelligence that combines human and machine intelligence to solve difficult tasks, and tools and methods for interaction data curation and large-scale data-driven design. At the core of these topics, we want to start the conversation on how the data-driven and data-centric approaches of modern AI can impact HCI.
Human error at the time of operation (i.e., when the system is in use) is implicated in most incidents involving safety-critical systems. In many domains, control and command interfaces are composed of a multitude of devices and machines, from different brands and of different generations, that have been crammed together. The resulting bridging of functions across devices, the decision making, the overview, and the handling of partially imprecise or conflicting information are often simply offloaded to the human. Thus, there appears to be a need to shift attention from avoiding human error (at operation time) to avoiding error during design. In this workshop, we aim to provide a forum to discuss such a paradigm shift and its implications for the methods and tools for designing and evaluating HCI technology in safety-critical environments.
There is mounting evidence acknowledging that embodiment is foundational to cognition. In HCI, this understanding has been incorporated in concepts like embodied interaction, bodily play, and natural user interfaces. However, while embodied cognition suggests a strong connection between motor activity and memory, we find the design of technological systems that target this connection to be largely overlooked. Considering this, we see an opportunity to extend human capabilities by augmenting motor memory. Augmentation of motor memory is now possible with the advent of new and emerging technologies including neuromodulation, electric stimulation, brain-computer interfaces, and adaptive intelligent systems. This workshop aims to explore the possibility of augmenting motor memory using these and other technologies. In doing so, we stand to benefit not only from new technologies and interactions, but also from a means to further study cognition.
Accessibility concerns play an increasing role in Human-Computer Interaction (HCI) research. This workshop takes a look at the role Critical Disability Studies currently plays in the development of assistive technologies and the accessibility of technologies more generally. Notably, it has been ten years since Mankoff's seminal paper on "Disability Studies as a Source of Critical Inquiry for the Field of Assistive Technology'' drew out the requirement of actively involving disabled people in research about them. We find it a fitting time to reflect on and revitalise the topic. We will examine untapped research opportunities and identify systemic obstacles that keep disabled scholars in the margins of associated research. The gathering additionally serves to establish a community of researchers interested in pursuing the perspective of Critical Disability Studies within HCI.
Uncertainty is a prevalent characteristic of contemporary life, and a central challenge for HCI. This one-day workshop will explore how HCI has engaged, and might continue to engage, uncertainty as a generative feature in design, as opposed to a force to mitigate and control. We hope to convene researchers from a broad range of areas to explore the many ways in which uncertainty appears in our research and the different types of responses that HCI has to offer. There is an incredible variety of conceptual formulations of uncertainty and related ideas like risk, ambiguity, and suspense that raise both difficult challenges and significant opportunities for creative engagement with societal challenges. During the workshop, we won't seek to "solve" uncertainty but rather to expand the ways in which we think about and navigate it. In doing so, we will experiment with and contribute to new practices, methods, and concepts for embracing uncertainty. Outcomes of the workshop will include documentation of exercises designed to evoke uncertainty in participants, concept mappings, and a collection of short essays written and refined by participants.
The aim of this workshop is twofold. First, it aims to grow critical mass in Conversational User Interfaces (CUI) research by mapping the grand challenges in designing and researching these interactions. Second, this workshop is intended to further build the CUI community with these challenges in mind, whilst also growing CUI research presence at CHI. In particular, the workshop will survey and map topics such as: interaction design for text- and voice-based CUIs; the interplay between engineering efforts such as Natural Language Processing (NLP) and the design of CUIs; practical CUI applications (e.g. human-robot interaction, public spaces, hands-free and wearables); and social, contextual, and cultural aspects of CUI design (e.g. ethics, privacy, trust, information exploration, persuasion, well-being, decision-making, and marginalized users). By drawing from the diverse interdisciplinary expertise that defines CHI, we are proposing this workshop as a platform on which to build a community that is better equipped to tackle an emerging field that is rapidly evolving, yet under-studied --- especially as commercial advances seem to outpace scholarly research in this space.
With rapid advances in streaming technology and the rise of esports, spectating other people playing video games has become a mass phenomenon. Today, both live video game streaming and esports are a booming business attracting millions of viewers. This offers an opportunity for Human-Computer Interaction (HCI) research to explore how to support spectator experiences. This workshop aims to foster discussion on how technology and HCI can help to transform the act of spectating games, and particularly esports, from a passive (watching) to a more active - and engaging - experience. Through this workshop we aim to explore opportunities for research, promote interdisciplinary exchange, increase awareness, and establish a community on the subject matter.
We aim to bring together a number of designers, researchers, and practitioners to share their experience of the influence of crime and legality on their work. Through these discussions, we aspire to highlight the existing knowledge base for discussions of crime within HCI, provide a space for sharing researchers' personal experiences in their work with and against crime, and highlight best practice going forward. We will do this by using three considerations to inform our critical focus on crime: (1) mapping out the existing ways that HCI has addressed crime; (2) considering what part crime plays in approaches to social justice; (3) questioning who is morally responsible for the criminal activity of others, and what this entails for ensuring fair approaches within technical design.
There is an urgent and ongoing need to engage critically with race in human-computer interaction. In this workshop, we consider two intertwining aspects: first, how HCI research and practice should engage with race; second, how the HCI community itself can become more racially inclusive and equitable. The workshop offers a safe space for HCI scholars and practitioners to discuss how their experiences with race and racism impact their research and work life. Insights from critical race theory will inform the discussion. Workshop participants will draft a set of guidelines for research and a set of recommendations for SIGCHI leadership and the CHI community.
HCI has long considered sites of workplace collaboration. From airline cockpits to distributed groupware systems, scholars emphasize the importance of supporting a multitude of tasks and creating technologies that integrate into collaborative work settings. More recent scholarship highlights a growing need to consider the concerns of workers within and beyond established workplace settings or roles of employment, from steelworkers whose jobs have been eliminated with post-industrial shifts in the economy to contractors performing the content moderation that shapes our social media experiences. This one-day workshop seeks to bring together a growing community of HCI scholars concerned with the labor upon which the future of work we envision relies. We will discuss existing methods for studying work that we find both productive and problematic, with the aim of understanding how we might better bridge current gaps in research, policy, and practice. Such conversations will focus on the challenges associated with taking a worker-oriented approach and outline concrete methods and strategies for conducting research on labor in changing industrial, political, and environmental contexts.
Design fictions are increasingly important and prevalent within HCI, though they are created through diverse practices and approaches and take many diverse forms. The goal of this workshop is both to create an overview of this diversity and to move towards a shared vision of design fiction within the CHI community. With this goal in mind, we invite reports, analyses, and examples of design fictions. An outcome will be the development of a summary of the current state of the art seeded by a diversity of perspectives within CHI, a descriptive orientation to this important domain of practices and outcomes, and a proposed set of evaluation guidelines for reviewers of design fiction submissions.
With social computing systems and algorithms having been shown to give rise to unintended consequences, one of the suspected success criteria is their ability to integrate and utilize people's inherent cognitive biases. Biases can be present in users, systems, and their contents. With HCI being at the forefront of designing and developing user-facing computing systems, we bear a special responsibility for increasing awareness of potential issues and working on solutions to mitigate problems arising from both intentional and unintentional effects of cognitive biases. This workshop brings together designers, developers, and thinkers across disciplines to redefine computing systems by focusing on inherent biases in people and systems and to work towards a research agenda to mitigate their effects. By focusing on cognitive biases from a content or system perspective as well as from a human perspective, this workshop will sketch out blueprints for systems that contribute to advancing technology and media literacy, building critical thinking skills, and depolarization by design.
There is a growing need for effective remote communication, which has many positive societal impacts, such as reducing environmental pollution and travel costs and supporting rich collaboration by remotely connecting talented people. Social Virtual Reality (VR) invites multiple users into a collaborative virtual environment, which creates new opportunities for remote communication. The goal of social VR is not to completely replicate reality, but to facilitate and extend the existing communication channels of the physical world. Apart from the benefits provided by social VR, privacy concerns and ethical risks arise when the boundary between the real and the virtual world is blurred. This workshop is intended to spur discussions regarding technology, evaluation protocols, application areas, research ethics, and legal regulations for social VR as an emerging immersive remote communication tool.
Privacy breaches and a drastic increase in the monetary value of our personal information require user experience (UX) designers and researchers to consider ethical choices more carefully than ever before in their daily practice. The UX industry lacks ethical guidelines and standards, or even a governing body that enforces any kind of universal framework of ethics. This workshop is designed to establish a foundational understanding of ethical practice for designers, educators, developers, programmers, and individuals working in UX. Participants will collaborate and share knowledge across industries and practices, working together towards a framework that participants can take with them and implement in their current UX workflows.
Will automated driving help or hurt our efforts to remedy climate change? The overall impact of transportation and mobility on the global ecosystem is clear: changes to that system can greatly affect climate outcomes. The design of mobility and automotive systems will influence key factors such as driving style, fuel choice, ride sharing, traffic patterns, and total mileage. However, to date, there are few research efforts that explicitly focus on these overlapping themes (automated driving and climate change) within the HCI and AutomotiveUI communities. Our intention is to grow this community and awareness of the related problems. Specifically, in this workshop, we invite designers, researchers, and practitioners from the sustainable HCI, persuasive design, AutomotiveUI, and mobility communities to collaborate in finding ways to make future mobility more sustainable. Using embodied design improvisation and design fiction methods, we will explore the ways that systems affect behavior, which in turn affects the environment.
In recent years there has been a growing body of work from the CHI communities that looks at designing for inclusivity and for the unique and specific constraints of diverse populations. This has included, but is not limited to, work on designing within patriarchal contexts, designing around issues of gender and sexual orientation, and designing around literacy. In tandem, local HCI initiatives such as ArabHCI have emerged to address the misrepresentation of these populations in HCI research, highlighting the fact that Western-originated design methods require delicate adaptation to suit non-Western cultural contexts. With the same approach towards inclusivity and co-existence, the aim of this workshop is to bring together HCI researchers and practitioners who engage in studies and interventions within Muslim-majority communities around the world. The goal is to understand the Muslim identity and perceptions around it, as well as the unique constraints and limitations within Muslim communities, and to identify core issues and concerns within these populations. We will explore the following themes: refugees and Islamophobia, Muslim feminism, and digital financial services.
We are concurrently witnessing two significant shifts: digital devices are becoming ubiquitous, and older people are becoming a very large demographic group. However, despite the recent increase in related CHI publications, older adults continue to be underrepresented in HCI research as well as commercially. Therefore, the overarching aim of this workshop is to increase the momentum for such research within CHI and related fields such as gerontechnology. For this, we plan to create a space for discussing and sharing principles and strategies to design interactions and evaluate user interfaces (UI) for the ageing population. We thus welcome contributions of empirical studies, theories, design, and evaluation of UIs for older adults. Building on the success of the last two years' workshops, we aim to grow the community of CHI researchers across borders interested in this topic by fostering a space to exchange results, methods, approaches, and ideas from research on interactive applications in support of older adults, reflecting the international diversity that is representative of CHI.
Immersive virtual experiences are becoming ubiquitous in our daily lives. Besides visual and auditory feedback, other senses like haptics, smell, and taste can enhance immersion in virtual environments. Most solutions presented in the past require specialized hardware to provide appropriate feedback. To mitigate this need, researchers have conceptualized approaches leveraging everyday physical objects as proxies instead. Transferring these approaches to varying physical environments and conditions, however, poses significant challenges to a variety of disciplines such as HCI, VR, haptics, tracking, perceptual science, and design. This workshop will explore the integration of everyday items for multi-sensory feedback in virtual experiences and set the course for future research endeavors. Since the community still lacks a cohesive agenda for advancing this domain, the goal of this workshop is to bring together individuals interested in everyday proxy objects to review past work, build a unifying research agenda, share ongoing work, and encourage collaboration.
Current technologies designed for mental health support often have low adoption rates and may not fit people's routines. However, recent literature demonstrates that individuals managing mental illness often incorporate a variety of technologies into their self-management routines, including texting, music, wearables, online communities, games, and social media. Therefore, adopting a perspective focused on understanding the technology ecosystem of mental health management may be a better approach than focusing on singular platforms and applications. In this CHI workshop, we will examine a constellation of digital and non-digital mental health support resources. We will discuss the needs and practices of individuals managing mental illness, how these needs relate to future ecosystems of technologies and services, and the potentials and repercussions of current technologies. By convening a group of interdisciplinary researchers and practitioners, we will collectively address these topics to create resources attentive to individuals' lived experiences with mental illness, goals with respect to management and recovery, and interactions with existing forms of resources, care, and technology.
The continued proliferation of computing devices comes with an ever-increasing energy requirement, both during production and use. As awareness of the global climate emergency increases, self-powered and sustainable (SelfSustainable) interactive devices are likely to play a valuable role. In this workshop we bring together researchers and practitioners from design, computer science, materials science, engineering and manufacturing industries working on this new area of endeavour. The workshop will provide a platform for participants to review and discuss challenges and opportunities associated with self-powered and sustainable interfaces and interactions, develop a design space and identify opportunities for future research.
As AI changes the way decisions are made in organizations and governments, it is ever more important to ensure that these systems work according to values that diverse users and groups find important. Researchers have proposed numerous algorithmic techniques to formalize statistical fairness notions, but emerging work suggests that AI systems must account for the real-world contexts in which they will be embedded in order to actually work fairly. These findings call for an expanded research focus beyond statistical fairness to that which includes fundamental understandings of human use and the social impact of AI systems, a theme central to the HCI community. The HCI community can contribute novel understandings, methods, and techniques for incorporating human values and cultural norms into AI systems; address human biases in developing and using AI; and empower individual users and society to audit and control AI systems. Our goal is to bring together academic and industry researchers in the fields of HCI, ML and AI, and the social sciences to devise a cross-disciplinary research agenda for fair and responsible AI systems. This workshop will build on previous algorithmic fairness workshops at AI and ML conferences, map research and design opportunities for future innovations, and disseminate them in each community.
HCI research on mobility and transport has been dominated by a focus on the automobile. Yet urgent environmental concerns, along with new transport technologies, have created an opportunity for new ways of thinking about how we get from A to B. App-based services and innovations in electric motors, along with changing urban transport patterns, are transforming public transit. Technology is creating new collective transit services, as well as new ways for individuals to move, such as rental free-floating e-scooters, so-called 'micro-mobility'. This workshop seeks to discuss and establish HCI perspectives on these new mobilities - engaging with and even inventing new modes of transport, fostering collaboration between scholars with varied topical interests around mobility. We seek to bring together a group of industry and academic collaborators, bringing new competences to HCI around the exciting opportunities of redesigning our contemporary mobilities.
Inbodied Interaction focuses on understanding how the state of the body's internal processes affects human performance, with a goal of aligning interactive technology to support all aspects of human performance - cognitive, social, physical. In this third Body As Starting Point workshop, we aim to bring together insights from the past two workshops and the 2019 Inbodied Interaction Summer School to present and refine emerging themes and co-define case studies to further the state of Inbodied Interaction research and design practices. This one-day hands-on workshop invites newcomers and experts to help shape the future of Inbodied Interaction through the foundations we have co-created with previous participants. The key aims of the workshop are to review existing themes, formulate case studies and research questions, and form research teams to chart a path forward for HCI designs that take the body as a starting point.
The notion of 'giving voice' to under-represented groups in design is fraught with issues of power and interpretation. This workshop will address socio-methodological issues at play across design contexts, and build a community comprised of those who seek to support the expression and inclusion of diverse needs, abilities, and experiences in design. Research often involves participants who communicate in ways which are in line with assumed social norms and practices (e.g. those who communicate verbally), while those who communicate differently can be under-represented in design research. We highlight a need to broaden the inclusion and support of communities who have different communication needs, including: people with functional communication impairments, neurodiverse people, people who experience adverse life situations or trauma, people who are discriminated against or under-represented, and many, many other individuals and communities. We aim to unpick - and develop pragmatic solutions to - several challenges around working with diverse communities in research and practice, producing functional guidelines for design research in this area and problematising current methods and terminology.
Art and design are essential aspects of our culture and how we interact with the world. Artists and designers use a wide selection of tools, whose impact is rapidly growing with the progression of digital technologies. This change has opened up new opportunities for the CHI community to build creativity support tools. The digital switch has come with many benefits, such as lowering barriers, enabling mobile work environments, and allowing mass production for the distribution of work. Along with these benefits, we also see challenges for art and design work and its future perception in society. As technology takes a more significant role in supporting art and design, what will this mean for the individual artist or designer? The focus of this workshop is to bring together researchers and practitioners to explore what the future of digital art and design will hold. The exploration will centre around synthesizing key challenges and questions, along with ideas for future interaction technologies that consider mobile and tangible aspects of digital art.
As networked sensing technologies continue to infiltrate new corners of the built environment, innocuous daily activities (turning on a light, streaming TV, commuting) increasingly create personal data trails. Assisting users in negotiating permissions for services and managing known personal data trails is currently a significant design challenge. However, as this landscape is rapidly evolving year by year, what is urgently needed is for designers to speculate, ideate, and design for emergent and near-future data trails. This one-day workshop will bring together a wide range of researchers, practitioners, and designers working in areas such as smart cities, smart homes, responsible research & innovation, surveillance, privacy, IoT, smart energy, and beyond, in order to ideate and design around emergent personal data trails. The intended outcome of the workshop is to co-create a suite of signs, signals, and signifiers with the purpose of acquainting and empowering users to navigate and manage these types of data.
Privacy researchers and designers must take into consideration the unique needs and challenges of vulnerable populations. Normative and privileged lenses can impair conceptualizations of identities and privacy needs, reinforce or exacerbate power structures and struggles, and shape how these are formalized within privacy research methods, theories, designs, and analytical tools. The aim of this one-day workshop is to facilitate discourse around alternative ways of thinking about privacy and power, as well as ways of researching and designing technologies that not only respect the privacy needs of vulnerable populations but attempt to empower them. We will work towards developing best practices to help academics, industry practitioners, technologists, researchers, policy makers, and designers better serve the privacy needs of vulnerable users of technology.
This extended abstract describes the background, goals, and organization of the eighth International Workshop of Chinese CHI (Chinese CHI 2020).
Conversational agents have increasingly been deployed in healthcare applications. However, significant challenges remain in developing this technology. Recent research in this area has highlighted that: i) patient safety is rarely evaluated; ii) health outcomes are poorly measured; and iii) no standardised evaluation methods are employed. Conversational agents in healthcare are lagging behind developments in other domains. This one-day workshop aims to create a roadmap for healthcare conversational agents in order to develop standardised design and evaluation frameworks. These will prioritise health outcomes and patient safety while ensuring a high-quality user experience. In doing so, this workshop will bring together researchers and practitioners from HCI, healthcare, and related speech and chatbot domains to collaborate on these key challenges.
Human-drone interaction (HDI) is becoming a ubiquitous topic in daily life, and a rising research topic within CHI. Knowledge from a wealth of disciplines -- design, engineering, social sciences, and humanities -- can inform the design and scholarship of HDI, and interdisciplinary communication is essential to this end. The Interdisciplinary Workshop on Human-Drone Interaction (iHDI 2020) aims to bring together diverse perspectives, advancing HDI and its scholarship through a rich variety of activities involving an assortment of research, design, and prototyping methods. The workshop intends to serve as a platform for a diverse community that continuously builds on each other's methods and philosophies, towards results that "take off."
Automation plays an increasingly important role in everyday life. The objective of the workshop is to provide a forum for a holistic view on design foundations for automated systems in everyday private, public, and professional surroundings. We conceive the workshop as an interdisciplinary forum for user-centered design and research, taking inspiration from diverse problem areas and application fields. Given their current relevance for automation experience, four key aspects (intelligibility, interventions, interplay, and integrity) will be addressed in expert talks, participant presentations, and group-wise creative thinking exercises. The workshop will provide a further step towards a research agenda for comprehensive design and research approaches that transfer and consolidate across different application domains, user requirements, and system capabilities.
As larger parts of our lives are determined in the digital realm, it is critical to reflect on how democratic values can be preserved and cultivated by technology. At the city scale, this is studied in the field of 'digital civics'; however, there seems to be no corresponding focus at the level of buildings and their inhabitants. The majority of our lives are spent indoors, and therefore the impact of 'indoor digital civics' might exceed that of city-scale digital civics. The digitization of building design and building management creates an opportunity to better identify, protect, and cultivate civic values that, until now, have been centralized in the hands of building designers and building owners. By bringing together leading architecture and HCI academics and commercial stakeholders, this workshop builds on previous workshops at CHI. The workshop will provide a forum where a new agenda for research in 'HabiTech' can be defined and new research collaborations formed.
In recent years, Mixed Reality (MR) headsets have made increasing advances in terms of capability, affordability, and end-user adoption, slowly becoming everyday technology. HCI research typically explores positive aspects of these technologies, focusing on interaction, presence, and immersive experiences. However, such technological advances and paradigm shifts often fail to consider the "dark patterns" and potential abusive scenarios made possible by new technologies (cf. smartphone addiction, social media anxiety disorder). While these topics are receiving attention in related fields and among the general population, this workshop aims to start an active exploration of abusive, ethical, social, and political scenarios of MR research inside the HCI community. Through an HCI lens, workshop participants will engage in critical reviews of emerging MR technologies and applications and develop a joint research agenda to address them.
As computing has increasingly contributed to different aspects of life, considerations of ethics, values, accessibility, diversity, and inclusivity have become more urgent. The human-computer interaction community has helped to give such issues visibility and emphasis, even while recognizing how much work is yet to be done. This workshop addresses ways we can build on that foundation to continue to improve our community and the world, while acknowledging the difficulty of the problems, the understandable disagreements about how best to pursue them, and the fact that these issues hit home for thousands of participants and volunteer organizers alike. It asks: How can we as a community recognize, address, and incorporate the very real critiques of our current systems to produce a more thoughtful, just, and equitable field and world? What actions can we take (beyond virtue signaling and slacktivism) that will effect meaningful change within the community and beyond? In this hands-on workshop, working with an expert in effective activism, we bring together activists from our community with current volunteers in these domains to improve our field and build together the world we would like to see.
Immersive technologies have opened many opportunities for the visual analytics and visualisation community. However, despite an emerging focus on immersive analytics research, there is currently a lack of the coherent and broadly applicable visualisation and interaction design needed to make immersive analytics systems productive and usable in real-world scenarios. In this workshop, we propose to identify a roadmap towards productive user interfaces for immersive analytics. In particular, we aim to understand how we can effectively support specific visualisation and analytics tasks in VR/AR, and to identify the interactions needed to support these. Our goal is to catalyse the research community and work towards a reference paper to inform future research.
Virtual and augmented realities (VR/AR) allow artists to create 3D content in a three-dimensional space — both display and input are 3D. Getting rid of 2D proxies such as screens and graphic tablets removes a significant barrier from 3D creation and allows artists to create more intuitively, and potentially more efficiently. However, creating in VR/AR introduces new control, precision, and ergonomic challenges. Designing interactive tools for 3D creation is therefore non-trivial. A deep understanding of human factors, user preferences, and biases stemming from users' experience with 2D tools is essential to develop effective creative tools for VR/AR. My research combines exploratory user studies and technical advancements to build novel tools for creating 3D content in immersive spaces. I present two computer graphics applications which utilize 3D interactions to improve existing creative workflows and devise novel ones for visual creative expression in three dimensions. The first studies concept sketching, while the second explores animation of dynamic physical phenomena. I then describe my ongoing work and planned future work on other creative applications.
There has been an increasing focus on skilling, reskilling, and upskilling the Indian population to capitalize on the country's upcoming demographic dividend. To address this, a broad spectrum of government schemes has been introduced; however, most of them have so far been unable to fulfil their objectives. There is a need to find ways to develop dynamic, demand-driven, quality-conscious training solutions that enhance the ability to adapt to changing technologies and labour market demands. My dissertation work proposes a potential solution by exploring Virtual Reality as a medium to rapidly develop effective, scalable training modules for vocational skills in new-age employment sectors in India. After creating a framework based on formative research and case studies, I am building and evaluating one such system. I further plan to examine the efficacy of such systems and make them easy for existing vocational trainers to develop with the help of an authoring tool.
This research investigates the combination of Natural Language Visualization and Conversational Technology to support children in enhancing linguistic and communication abilities. In particular, I focus on skills related to self-description, which are fundamental to the process of self-knowledge, self-awareness, and self-acceptance. The challenge of this project is to design, develop, and evaluate a dialogue system able to stimulate children to describe themselves through questions and hints; in addition, it must be capable of drawing their avatar consistent with what they say and how they look.
Memory is necessary for our daily lives and activities, yet it is often fallible. Memory augmentation technology could improve and support our memory by facilitating memory training and providing memory assistance, respectively. However, there remains a lack of research on utilising users' internal states to enable just-in-time delivery of these interventions to improve receptivity and effectiveness. With a focus on helping older adults, my research involves the design, development, and evaluation of memory training and memory assistance artifacts which infer users' internal and cognitive context through physiological signals (biosignals). This work will contribute new concepts that build on previous research in the field of mobile computing, as well as design guidelines for future work on augmenting human memory.
Chatbots promote new forms of human-computer interaction by using natural language to achieve communicative goals. However, little is known about the impact of chatbots' linguistic choices on users' experiences. My research draws on sociolinguistic theory to investigate how a chatbot's language choices can adhere to the expected social role the agent performs within a given context; i.e., to understand whether chatbot design should account for linguistic register. This research involves adapting chatbot utterances to the register of tourist assistants and evaluating to what extent this strategy meets users' expectations. Ultimately, I want to determine whether register-specific language influences users' perceptions and experiences with chatbots.
With an aim to extend and enlarge the slow technology concept, this overview describes my dissertation goal of exploring the diversity of temporal design within HCI. My motivation stems from the real-world need to better support people in living with their vast and still-growing digital possessions. I adopt a Research through Design methodology and plan to propose two design cases and follow-up empirical studies in order to understand how 'slow' everyday technologies should be to provoke alternative thinking about temporality and to offer curious interactive experiences with digital possessions. Building on my emerging body of published research, the research outcomes are anticipated to support the creation of methodological, practice-based insights and to provoke different perspectives on designing future technologies, as contributions to the HCI community.
With the recent popularity of self-tracking, individuals are increasingly using and interacting with their personal health data. Fertility is a health issue in which people often track and interact with diverse health-related data potentially associated with their fertility cycles. Fertility tracking is often impacted by different social factors and taboos, and it has been progressively assisted by consumer health technologies. My dissertation research combines multiple studies focusing on understanding the data practices, the influence of technology, and the consequences of using fertility-related personal data and technology in self-tracking for fertility. Based on my findings, I plan to explore how design and technology can be used to reinforce positive experiences, avoid negative emotional burden, and support holistic tracking for fertility.
Researchers in HCI commonly use controlled experiments to evaluate artifacts and interaction techniques. However, experiment design and statistical analysis are complex tasks that are prone to errors, especially for novice researchers. In part, this is because researchers need to make numerous design and analysis decisions whose effects are hard to anticipate immediately. In this dissertation, I aim to study how interactive systems that provide real-time feedback, enable direct manipulation, and facilitate exploration positively influence decision making and reproducibility. My previous work, Touchstone2, shows how researchers can benefit from comparing trade-offs among experimental designs. I contribute an empirical understanding of how researchers design and analyze experiments, as well as a set of tools that support researchers during that process.
Older adults experience a decrease in physical ability which may lead to psychological issues, poor quality of life, and even death. It is therefore very important to help them maintain their physical abilities by encouraging physical activity. Previous research has focused primarily on physical activity as a single task, without consideration of the numerous additional coinciding events that almost always co-occur. The most common concomitant, secondary task (i.e., dual-task) relates to the cognitive requirements associated with daily activity pursuits. Thus, any procedure designed to promote greater physical activity should include complex tasks. In addition, the older population is diverse, and therefore physical activity should be personalized according to individual capabilities and skills. We intend to combine navigation, as a secondary task to walking, with a task "instructor" that guides users through personalized routes, providing the added value of doing physical activity outside the home.
Through my doctoral research, I aim to reduce privacy risks in the context of photo-sharing online by developing tools and techniques that are both effective (in minimizing privacy risks) and usable. In solving this problem, I am taking a holistic socio-technical approach and proposing mechanisms to lessen the privacy risks of both the photo-sharers (or owners) and other people captured (sometimes unintentionally) in their photographs. More specifically, my goal is to i) design image transforms that effectively obfuscate privacy-sensitive content while preserving utility for human viewers, ii) develop techniques to automatically detect scene elements that may threaten the privacy of people appearing in photographs, and iii) design behavioral interventions to persuade people to be more protective of their own and others' privacy. With my research, I hope to contribute to promoting privacy-protective behaviors and making the online space more privacy-friendly.
My doctoral research identifies and addresses the design and policy challenges of content moderation at the levels of smaller communities and larger platforms, using a combination of interviews and surveys. My prior work studies voice-based communities on Discord; it shows that voice, as a new technology, brings unique challenges to community moderators, and it highlights the fundamental evidentiary need in content moderation. My dissertation expands on this work by (1) quantifying the severity of violations as a way to address social media platforms' challenge of prioritization, and (2) empirically evaluating existing moderation approaches and design recommendations by soliciting moderator and user perceptions of them. Combining these perspectives, my dissertation will provide guidelines for improving content moderation in online communities.
As participatory design (PD) approaches become a normative part of contemporary business practice, it becomes increasingly important to ensure that PD tools and methods are accessible and understandable to a larger audience, to fully mobilise and harness their value. This research explores the materiality of different PD tools and methods as a means to articulate the value of design to broader participant groups, which everyday-use artefacts can be brought in as tools to reduce the barrier to participation, and what implications their use has for understanding the value of design. A series of experiments is conducted in a variety of distinct, specific contexts to explore this materiality as part of a broader program of research. The contribution aims to develop a suite of strong concepts that can be used to inform design practice and the subsequent development of PD tools.
Visual data exploration enables users to identify trends and patterns, generate and verify hypotheses, and detect outliers and anomalies. However, the growing scale and complexity of data exploration present a barrier to discovering useful, actionable insights from data. Often, users may not know which visualizations would lead to desirable insights, resulting in wasted efforts in manual searching. Users can also be overwhelmed or lose track of where to look across a large potential space of attributes and filter combinations, leading to missed insights or even potentially erroneous conclusions. My thesis research involves designing systems that democratize data science by providing automated guidance to analysts in visual data exploration. My current work includes accelerating manual data exploration tasks and supporting user exploration in analytical workflows. Finally, I will describe future work in developing a high-level language for addressing analytical inquiries.
Deaf and hard of hearing (DHH) individuals face several barriers to communication in the workplace, particularly in small-group meetings with their hearing peers. The impromptu nature of these meetings makes scheduling sign-language interpreting or professional captioning services difficult. Recent advances in Automatic Speech Recognition (ASR) technology could help remove some of these barriers that prevent DHH people from becoming involved in group meetings. However, ASR is still imperfect, and it contains errors in its output text in many real-world conversation settings. My research proposes to investigate whether there are benefits in using ASR technology to aid understanding and communication among DHH and hearing individuals. My dissertation research will evaluate the effectiveness of using ASR in small group meetings (through empirical studies with DHH and hearing participants), as well as develop guidelines for system design to encourage hearing participants to communicate and speak more clearly.
The home is often the most private space in people's lives, and not one in which they expect to be surveilled. However, today's market for smart home devices has quickly evolved to include products that monitor, automate, and present themselves as human. After documenting some of the more unusual emergent problems with contemporary devices, this body of work seeks to develop a design philosophy for intelligent agents in the smart home that can act as an alternative to the ways that these devices are currently built. This is then applied to the design of privacy empowering technologies, representing the first steps from the devices of the present towards a more respectful future.
In recent years, 3D-printing technology has become widely accessible to non-experts and hobbyists, enabling them to fabricate objects of varying geometries. In contrast to this new ease of producing new forms, fabricating objects that can sense user input traditionally requires the assembly of complex circuits and physical parts. With my work, I explore what I call \pap: 3D-printed interactive objects that do not require any post-print activities such as assembly or calibration. I approach this by externally sensing how the user's actions interact with well-studied physical phenomena (e.g., acoustic resonance and fluid dynamics). I have developed two novel techniques using this principle: a blow-based interaction technique for fabricated objects using acoustic resonance, and a technique for fabricating touch-sensitive objects using pneumatic sensing and fluid dynamics principles.
Since its codified genesis in the 18th century, ballet training has remained largely unchanged: it relies on word-of-mouth expertise passed down from generation to generation and on tools that do not adequately support either dancers or teachers. Moreover, top-tier training comes at an exceptional price and is only found in a few locations around the world. In this context, artificial intelligence (AI)-based video tools could represent an affordable and non-invasive alternative: they would allow dancers and teachers to quantitatively self-assess, as well as enable skilled ballet teachers to connect with a wider audience. In my dissertation research, I study how to design and evaluate AI-based tools for ballet dancers and teachers to quantify performance and facilitate learning.
Sound plays a vital role in our relationship with food, as highlighted by the term "gustosonic experience". However, when it comes to designing celebratory technology for eating (i.e., technology that celebrates the experiential and, in particular, often playful aspects of eating), the use of sound has been mostly underexplored. My PhD research explores the opportunity to use interactive technology to enrich playful eating experiences through sound. Via a research-through-design approach, I designed and evaluated two systems that generate digital sounds as a result of eating ice cream, contributing to our understanding of the design of interactive gustosonic experiences. I hope that this work can guide designers in creating gustosonic experiences that support a more playful relationship with food.
Threats to online safety continue to be an extensive problem for social media users. In particular, women and the LGBTQ+ community are often disproportionately affected by severe forms of threats that could translate into risks to their physical safety. However, safety tools and platform policies often assume a homogeneous user base, which may fail to acknowledge the varying needs of vulnerable populations affected by social oppression. This problem is exacerbated in small, closely-knit communities like the Caribbean, where it may be difficult to recover from or avoid abusers. Although legislation for online safety exists, it is enforced in few countries throughout the region, and there are few cases of victims being heard in court. My dissertation work attempts to address this problem by developing a holistic understanding of how to design safety tools that acknowledge cultural differences, legislative constraints, and social oppression for vulnerable groups.
Virtual agent-based interventions have been used to change users' health-related attitudes and behaviors. Current virtual agents use several persuasion techniques to influence user behavior. However, most virtual agent-based interventions do not account for individual differences between users, even though these differences determine the effect of different persuasion strategies. My proposal aims to build a persuasive virtual agent-based mobile health intervention that adapts to individual differences in personality between college students to promote anxiety coping strategies.
The Interaction Attention Continuum (IAC) is a scale for the attention that is required from users while they interact with a product. Research on the IAC has shown that four considerations should be taken into account in its application as a design tool. In this paper, we explore how well the IAC performs in design education. A case study reveals that design students are able to take all four considerations into account in the application of the IAC. However, the fourth consideration (to take contextual considerations into account) poses a significant challenge to them. Our discussion of this finding identifies two obstacles to the application of the IAC in design education. Based on this discussion, we conclude that the fourth consideration may have to be replaced by one that lets the context of use directly inform the design process.
This paper presents a case study using a WebAR-based application for learning intangible cultural heritage. The study highlights an embodied interaction perspective for assisted learning of intangible cultural heritage. A practice-based qualitative analysis contributes to human-computer interaction research and suggests further design methodologies for holistic collaborative design.
Tactile maps of indoor spaces have great potential for supporting pre-journey spatial learning by blind travelers. Due to the limited resolution of tactile sensing, though, only a certain amount of spatial detail can be embossed in a map at a given scale. We conducted a focus group with blind participants in order to gain insight into the perceived utility of using multiple maps at different spatial scales, and thus different levels of detail, to represent the interior of a building.
The case study presented in this paper is concerned with the applicability of natural user interfaces (NUIs) in the context of previsualization (previs). For this purpose, we have developed a virtual reality (VR) based tool that includes NUIs as a novel way to perform previs-related tasks. For the application domains of animation, film, and theater, we conducted a quantitative and qualitative assessment of the prototype by realising projects that resembled real-life productions in the creative industries. In collaboration with industry experts from different creative backgrounds, we conducted a large-scale evaluation and examined the potential of NUIs in a professional work context. Our results indicate that NUIs can offer a usable alternative to standard 3D design software, requiring only a short familiarization phase instead of extensive training to achieve the intended outcome.
Conducting HCI research with people living with HIV in face-to-face settings can be challenging in terms of recruitment and data collection due to HIV-related stigma. In this case study, we share our experiences from conducting research remotely in two studies using the Asynchronous Remote Communities method with participants recruited from in-person and online support groups, respectively. Our findings and discussion around challenges, best practices, and lessons learned during the phases of recruitment and data collection expand and further support the suitability of the method to conduct research remotely with a highly stigmatized population.
Artificial Intelligence (AI) is continuously moving into our surroundings. In its various forms, it has the potential to disrupt most aspects of human life. Yet, the discourse around AI has long been by experts and for experts. In this paper, we argue for a participatory approach towards designing human-AI interactions. We outline how we used design methodology to organise an interdisciplinary workshop with a diverse group of students – a workbook sprint with 45 participants from four different programs and 13 countries – to develop speculative design futures in five focus areas. We then provide insights into our findings and share our lessons learned regarding our workshop topic – AI and Space – our process, and our research. We learned that involving non-experts in complex technical discourses – such as AI – through the structural rigour of design methodology is a viable approach. We then conclude by laying out how others might use our findings and initiate their own workbook sprint to explore complex technologies in a human-centred way.
Design systems are an increasingly popular means for technology companies to improve development efficiency and design consistency across their products. In this case study, we present findings from a community-wide survey (n=1,513) and interviews at Clarity 2019, a design systems conference. Our findings describe the community's evolving perception of what makes up a design system, and we describe the members of the community who build and maintain these design systems. Our findings highlight i) the changing definition of design systems, ii) the practice of developing in-house design systems (in place of adopting an existing one), and iii) the role of design systems in promoting collaboration between the design and engineering functions of a company.
How would your organization's employees respond to a deployment of virtual reality based training? Would they be excited, inspired, or perhaps intimidated by new technology? We tested the accessibility of virtual reality (VR) as a modality for training PricewaterhouseCoopers (PwC) employees, and we observed negative and positive sentiments associated with various approaches to VR engagement. This case study presents our efforts to improve the VR headset onboarding and gameplay experience by analyzing the user experience from the time users show up for the VR learning experience to the time they leave. We describe the learner capabilities and skills required to interact with the headset and voice-recognition technology in a simulation-based learning experience, the methods we used to identify user sentiments and infer their causes, and our efforts to address user needs by adjusting our VR learner experience design. In selecting our test user population, we attempted to represent the diversity of PwC learner demographics, including gender, ethnicity, employee role/function, practice group within PwC, and other characteristics that could have had a material impact on VR equipment use, such as prior experience with VR, use of eyeglasses, and presence of various accents when speaking. Our heterogeneous group of testers pushed us to acknowledge user sensitivities, which we had previously underestimated. We changed our VR onboarding protocol in order to mitigate reticence to learning in VR. We believe that as the technology becomes more prevalent in the enterprise and more employees experience the value VR brings to the training experience, the overall sentiment toward VR will continue to improve. We hope our study motivates continued research to improve the accessibility of VR technology, enterprise implementation of VR, user-interface design, and voice recognition technologies used in gameplay.
Current augmentative communication technology has had limited success conveying the needs of some individuals with minimally verbal autism spectrum disorder (mvASD). Primary caregivers report being able to better understand these individuals' non-traditional utterances than those less familiar with the individual such as teachers and community members. We present an eight-month multi-phase case study for a translational platform, ECHOS, that uses primary caregivers' unique knowledge to enhance communicative and affective exchanges between mvASD individuals and the broader community. Through iterative development and participatory design, we discovered that physiological sensors were impractical for long-term use, on-body audio was content rich and easily accessible, and a custom in-the-moment labeling app was transformative in obtaining accurate labels from caregivers for machine learning advancements. This paper presents the design methodology, results, and reflections from our case study and provides insights into development with and for the special needs community.
One of the most intriguing features of human-computer interaction is that human users rarely use full-fledged natural language utterances when communicating with computers. I investigate properties of linguistic expressions that arise in the context of human-computer interaction, focusing specifically on word order in search queries. I report the results of two experimental studies suggesting that word order is meaningful and communicates user intent: the attribute that is more important or desirable to the user tends to be mentioned first. Studies 1A and 1B show that these results hold for both typed and spoken queries. Study 2 suggests that users do not have meta-awareness of this strategy. These findings can contribute to better detection of user intent. Incorporating word-order sensitivity into technology design can lead to more efficient and satisfying human-computer interactions.
In recent years, the retail industry has shown increasing interest in ICT systems for enriching customers' shopping experiences, and such systems are now being deployed in some stores. Because the most common implementation methods are limited by high cost and large space requirements, it is challenging for smaller or temporary stores to install such services. In this paper, we explore the use of a load-sensitive board to improve the retail shopping experience, specifically in smaller and temporary stores. As a case study, we developed and examined KI/OSK, an easy-to-install modular table-top retail application built specifically for farmers' markets using SCALE, a previously developed load-sensing toolkit. Our study uses iterative user research, including surveys with farmers' market managers to assess design requirements, and testing and revision through a field study at a farmers' market.
Building on previous Human-Computer Interaction research on reminiscence and art therapy, this case study examines the use of a multi-modal album to help older residents with dementia engage with others while sharing portions of their life stories. The narratives contained within the album were designed in cooperation with assisted care home residents with dementia. However, due to the progression of their dementia, adaptations to standard interviewing processes for gathering the narratives were needed. Through the use of multi-sensory stimuli (visual and somatic), over time, we were able to understand our participants' narratives as well as their personhood. Furthermore, the adaptations led us to question the importance of narrative accuracy and the ultimate purpose of building the albums. The next phase is to evaluate the long-term effects of the creation process and of the albums themselves.
This paper introduces Uber's cross-disciplinary, insights-driven process behind building Uber Pro, a global loyalty program for drivers. Uber Pro's global rollout was preceded by extensive original research, design, and implementation. In this paper, we cover the discovery phase lightly and go into more depth on the user and business decisions that UX Research, in collaboration with Product, Operations, and Data Science, needed to make to roll out the right set of benefits in our key markets in a short time, and to enable local teams to own the benefits platform and customize it further on an ongoing basis. As Uber Pro grows into a record-setting global rewards program, we are committed to developing close ties to customers around the world via research and analytics that put users first.
Satellites have useful lifetimes of only a few decades; however, these could easily be extended if consumable resources could be replaced. Many groups are now exploring the possibility of using unmanned orbital robots to perform satellite servicing operations. These orbiting robots are currently semi-autonomous and require monitoring and control by highly trained ground teams to complete their activities. This creates a unique and tightly constrained operating environment with very particular interface needs. Here, we document the process we used to develop a tele-robotic interface specific to NASA's Restore-L mission, and describe the choices and considerations we weighed when implementing our designs. We discuss the technical and mission parameters as well as the social and operational context of our work, and articulate the need for a design framework capable of better connecting these two domains.
In recent years, an increasing number of teleoperated robots have been used to provide services from remote locations. Most earlier teleoperated robot systems informed customers that the robots were being remotely controlled, so customers understood that they were communicating with the operators through the robots. However, it has already been shown that informing customers about robot teleoperation has some disadvantages. To investigate whether customers could accept and use teleoperated robots that acted as if they were autonomous, we developed a teleoperated system in which operators play the role of autonomous robots. We tested our system experimentally by employing operators to provide services to customers in a real field setting. We found that many customers were particularly satisfied with the service of teleoperated robots that behaved as if they were autonomous, and that customers who did not realize the robots were teleoperated rated the service higher than customers who did.
Thanks to technological advancements, more and more companies are considering virtual reality (VR) for training their workforce, in particular for situations that occur rarely, are dangerous or expensive, or are very difficult to recreate in the real world. This creates the need to understand the potential and limitations of VR training and to establish best practices. In pursuit of this, we developed a VR training simulation for a use case at Grundfos, in which apprentices learn a sequential maintenance task. We evaluated this simulation in a user study with 36 participants, comparing it to two traditional forms of training (Pairwise Training and Video Training). This case study describes the developed virtual training scenario and discusses design considerations for such VR simulations. The results of our evaluation indicate that, while VR Training is effective in teaching the procedure for a maintenance task, traditional approaches with hands-on experience still lead to significantly better outcomes.
This case study describes the development of a mid-air haptic solution to enhance the immersive experience of visitors to the Aquarium of the Pacific's movie theatre who are deaf or blind, or who use wheelchairs. During the project we found that adding a sense of touch to an immersive experience, using an innovative ultrasound technology, can improve users' sense of engagement with the content and can help improve agreement with the topics presented. We present guidelines on the design of haptic sensations. By describing how this project took place within the tight timelines of a commercial deployment, we hope to encourage more organisations to do similar work.
DREAM 2.2 is an immersive art installation that gives form to our mind's ephemeral data. Participants wear an EEG headset and use their neural activity to alter visuals that are projection-mapped, in real time, onto an explorable maze in an exhibition space; their neural data also influences interactive audio. Audience members can either wear the headset and help shape the installation's audio-visuals, or explore the maze and be immersed in audio-visuals shaped by another person's neural data. This case study investigates ways of personalising Brain-Computer Interface (BCI) displays so that people can feel a closer connection to their neural data. It also provides insights into how BCIs can support novel interpersonal engagement.
Hackathons provide rapid, hands-on opportunities to explore innovative solutions to problems, but provide little support to teams in moving those solutions into practice. We explore the use of post-hackathon Learning Circles to connect hackathon teams with key stakeholders, to reflect on prototypes and consider business models. We conducted a qualitative field study with 4 post-hackathon teams on the theme of technology, social isolation, and aging. Our results show that Learning Circles are an effective way to involve stakeholders early in the development process, and to develop a deeper understanding of users, markets, and technology.
Existing smart cane prototypes provide audio and/or haptic feedback to inform people who are blind or visually impaired about upcoming obstacles. However, limited user research has been conducted to evaluate the usefulness of the haptic feedback provided by these devices. To better understand users' perceptions of haptic feedback, we developed a smart cane prototype called the Intelligent Mobility Cane (IMC), which has two haptic vibrators on the handle. They are used to inform different parts of the user's hand that an obstacle has been detected. Eight people who are blind and three people who have low vision explored the IMC's handle by navigating an indoor obstacle path. The participants gave feedback on the IMC's haptic notification system with regard to the intensity of the vibration and the location of the vibrators, and discussed various scenarios in which the feedback would or would not be useful to them. In this case study, we present IMC handle design recommendations based on the participants' feedback and suggestions.
In this paper, we summarize our findings and recount lessons learned using the diary method in a pilot study exploring mobile app use among children with autism spectrum disorder (ASD). Participants included two teachers and five parents. We found that the diary study method worked well for collecting data about app use; however, the design of our study inadvertently introduced problems, especially for the participating parents. These included parents feeling pressure for their children to engage with apps the children had little interest in. Collecting data about how children with ASD use commercially available technologies like mobile apps is challenging and requires experimentation with methods; this case study paper will help other researchers working with similar user groups to navigate these challenges.
This case study describes the expansion of the BRAVEMIND virtual reality exposure therapy (VRET) system from the domain of combat-related posttraumatic stress disorder (PTSD) to the domain of PTSD due to Military Sexual Trauma (MST). As VRET continues to demonstrate efficacy in treating PTSD across multiple trauma types and anxiety disorders, adapting existing systems and content to new domains while maintaining clinical integrity is becoming a high priority. To develop BRAVEMIND-MST, we engaged in an iterative participatory design process with psychologists, engineers, and artists. This first-person account of our collaborative development process focuses on three key areas: (1) VR Environment, (2) User-Avatar State, and (3) Events, while detailing the challenges we encountered and the lessons we learned. This process culminated in eight design guidelines as a first step toward defining a VRET domain transfer methodology.
Citizen science is gaining recognition for its potential to democratize science and support environmental governance. In this paper, we present our experiences and lessons learned from a set of 'extreme' citizen science initiatives in developing countries, where data collection applications are used to support low-literate people in identifying solutions to issues of significant local concern. This paper aims to bring developments in extreme citizen science to the attention of the HCI community and to contribute knowledge to the field of HCI4D, especially to research concerning the design and use of smartphones for low-literate users.
User Experience (UX) design of Artificial Intelligence (AI) applications that assist knowledge-intensive tasks is usually the result of an iterative and fuzzy process. Although the combination of state-of-the-art technology with human expertise is undoubtedly valuable in an industry context, designing a user-centered tool that fits into the everyday activities of knowledge workers, supporting and augmenting their potential, is not trivial. User research uncovers pain points and opportunities, yielding insights that inspire designers and Human-Computer Interaction (HCI) experts. It allows a clear understanding of the context, which is the essence of a consistent interaction experience. Nevertheless, revealing the UX of an application, and how AI in this context reshapes knowledge workers' everyday tasks, is not straightforward and may itself be an iterative process. In this paper, we present findings from a case study of the use of a storytelling method, based on an animated sketch-based video, to design and validate the UX of AI assistants for the Oil & Gas industry.
The aim of this study was to identify relevant aspects of sign avatar animation and to quantify their effect on human perception, in order to better address user expectations in the future. For this, 25 users assessed two types of avatar animation: an upper baseline generated using motion-capture data, and a reference, with absolute total positional differences in the millimetre range, generated using machine-learned synthetic data. As expected, user evaluation of the synthesized references showed a considerable loss in rating scores. We therefore computed a variety of signal-specific differences between the two data types and investigated their correlations with the collected user ratings. The results reveal statistically significant interdependencies between avatar movement and perception that are helpful for the generation of any type of virtual avatar in the future. However, the results also suggest that it is difficult to determine concrete avatar features with a high influence on user perception within the current study design.
This paper explores the challenges of HCI work within multidisciplinary research projects across the health sciences, social sciences, and engineering, through discussion of a specific case study research project focused on supporting under-resourced pregnant women. Capturing the perspectives of community-based health care workers (N=14) using wearable technology to serve pregnant women provided insight into considerations for technology for this specific population. Methods of inquiry included the design and development of a prototype wearable and mobile system, as well as self-report via a survey detailing the workers' experience with the system and how it could benefit their duties of monitoring their pregnant patients. The process outcomes of this work, however, provide broader insight into the challenges of conducting this kind of interdisciplinary work, challenges that remain despite decades of effort and financial investment to support interdisciplinary research, particularly in health informatics and interactive technologies for health.
This case study follows the research process of rethinking the design and functionality of a personal email client, Yahoo Mail. Over three years, we changed the focus of the product from composing emails towards automatically organizing specific categories of business to consumer email (such as deals, receipts, and travel) and creating experiences unique to each category. To achieve this, we employed iterative user research with over 1,500 in-person interviews in six countries and surveys to many thousands of people around the world. This research process culminated in the launch of Yahoo Mail 6.0 for iOS and Android devices in the fall of 2019.
The media industry has a practice of reusing news content, which may come as a surprise to news consumers. Whether reuse happens by agreement or by plagiarism, a lack of explicit citations makes it difficult to understand where news comes from and how it spreads. We reveal news provenance by reconstructing the history of near-duplicate news in the web index, identifying the origins of republished content and the impact of original content. By aggregating provenance information and presenting it as part of news search results, users may be able to make more informed decisions about which articles to read and which publishers to trust. We report on early analysis and user feedback, highlighting the critical tension between the desire for media transparency and the risks of disrupting an already fragile ecosystem.
While much research has examined games as engaging strategies for assessment and education, little has addressed the specific human-computer interaction questions relating to the impact of player engagement in game experiences. This case study examines a subset (284 adults) of a larger study, with over 700 participants, on play preference, in which participants were given four versions of an assessment game. While the content remained the same, the versions varied in structure (rewarding players for acting either appropriately or inappropriately) and in play-perspective aesthetics (direct or indirect character embodiment). The results indicate that players preferred the aesthetics of indirect embodiment and the goals of negative behavior. While this is perhaps unsurprising to makers of commercial games, it is contrary to typical educational assessment game design. The case study also demonstrates a difference in player character embodiment that should prove useful to those determining the perspective of their player environments.
We performed an ex situ Wizard of Oz study of young adults interacting with an idealized digital personal assistant to discuss daily scheduling concerns and stress levels. We varied the system's rates of "learning" and personalization to test user preferences and changes in participants' linguistic and psychological interactions with an unadapted versus an adapted user model, and to determine whether those changes were attributable to acclimatization to the system or to its modeling capabilities. We sought to address three research questions: (1) What are the psycholinguistic characteristics of user interactions with a dialogue system designed to act as a scheduling assistant? (2) How does a system's ability to learn about a user and maintain a user model affect these interactions? (3) Are changes in interaction styles uniquely attributable to user modeling ability rather than simply user familiarity or acclimation to the system? We present a linguistic analysis of the results using summary measures generated by a widely used psycholinguistic text analysis tool. Some of the measures present the slightly paradoxical effect of reduced user engagement when a conversational agent explicitly discloses information about its user model to the user. These results suggest that future studies should consider the degree to which the user model is directly exposed to the user: being overly forthcoming about what has been learned about a user may undermine attempts to tailor conversational agents to actively engage and relate to users.
University educators actively seek realistic projects to include in their educational activities. However, finding a genuinely realistic project is not trivial. The rise of crowdsourcing platforms, in which a variety of tasks are offered in the form of an open call, might be an alternative source to help educators scale up project-based learning. But how do university students feel about executing crowdsourcing tasks instead of their typical assignments? In a study with 24 industrial design students, we investigated students' attitudes toward introducing crowdsourcing tasks as assignments. Based on our study, we offer four suggestions to universities considering integrating crowdsourcing tasks into their educational activities.
This case study focuses on the design and evaluation of a goal-setting web application for use in online courses. Our process included background research, competitive analysis, internal feature brainstorming, persona creation, a Lightning Decision Jam, and high-fidelity prototyping. We assessed our design using e-Learning System Evaluations and usability tests. Key takeaways include: (a) the Lightning Decision Jam is an engaging, inclusive, and helpful exercise for narrowing down a project's scope; (b) e-Learning System Evaluations elicit more detailed feedback than usability tests, but they may not be well suited for testing prototypes with limited functionality; and (c) the conversational nature of usability tests may lead to more ideas for new features, but these tests do not give participants the opportunity to think deeply before providing feedback. These findings have implications for user experience designers and researchers, as well as those interested in new brainstorming and system evaluation methods.
This research reports on heuristics for the design and implementation of games as polling systems. Adapting previous research on human computation games, this paper examines the opportunity to collect player opinion through game mechanics. The goal is to create more engaging experiences that harness poll-takers' instinctive responses as players. This case study describes the design, development, and data collected through three playable polls focused on news sources and media literacy. This is a pilot analysis, outlining the propensities and limitations of using games as polling systems and reporting findings from a limited release involving 287 play sessions.
This short paper describes how to adapt user experience research methods for artificial intelligence (AI)-driven applications. Presently, there is a dearth of guidance for conducting UX research on AI-driven experiences. We describe what makes this class of experiences unique, propose a preliminary foundational framework to categorize AI-driven experiences, and within the framework we show an example of methodological adaptations via a case study.
Educational technology practitioners at the University of Michigan created a web application called the Gallery Tool, which provides a space for learners to share their work and receive feedback on it. After piloting the tool in two online courses for seven months, we interviewed learners from these courses.
We found that learners most often used the Gallery Tool to "check all the boxes" of the course or to find inspiration for their own assignments. They liked its aesthetic and ease of use, but low levels of feedback activity decreased its value to them. As a result, it typically had a neutral impact on their course experience.
Our findings are most relevant to other educational technology practitioners, as they reveal insights for balancing and improving learner experience and user experience in web applications for massive open online courses.
We present a suite of interfaces for the visual exploration and evaluation of graph embeddings -- machine learning models that reveal implicit relationships not directly observed in the input graph. Our focus is on the embedding of navigation graphs induced from search engine query logs, and how visualization of similar queries across different embeddings, combined with the interactive tuning of results through multi-attribute ranking and post-filtering (e.g., using raw query frequency or derived entity type), can provide a universal foundation for query recommendation. We describe the process of technology transfer from our applied research team to the Microsoft Bing product team, examining the critical role that visualization played in their decisions to ship the technology on bing.com.
Brainstorming 101 is an introductory course intended for beginner researchers and practitioners. Throughout the course, participants will become acquainted with various ideation techniques to aid them in different phases of product design. The course progresses from a problem statement to a mature solution by introducing various unconventional ideation techniques addressing questions such as 1) How to divide problems into sub-problems? 2) How to find initial solutions? 3) How to dig deeper into the solutions? 4) How to combine ideas into one solution? 5) How to find the benefits and drawbacks of a solution? As a result, participants will get hands-on experience with various ideation techniques suited to these questions, and will explore the key features, pros, and cons of each technique.
This course is for students, practitioners, and academics interested in understanding the evolution of the field of human-computer interaction, which now presents us with new opportunities and serious challenges. Early visions were achieved, bringing anticipated benefits but also unanticipated consequences. Hindsight can identify future possibilities and past oversights; creativity will be needed to address them. The course surveys HCI in computer science, human factors, information systems, and information science, and discusses relationships to design and artificial intelligence. In a diverse field, it is useful to understand the forces that have guided the interaction of related disciplines. The principal goal of this course is to better understand significant issues that will engage us for years to come.
We are witnessing an increase in fieldwork within HCI, particularly involving marginalized or under-represented populations. This poses ethical challenges for researchers during field studies, with "ethical traps" not always identified during planning stages, a problem often aggravated by inconsistent policy guidelines, training, and application of ethical principles. We ground the course in our collective experiences with ethically difficult research, and frame it within principles that are shared across many disciplines and policy guidelines, representative of the instructors' diverse and international backgrounds.
Inbodied Interaction (II) aligns interaction design strategies with how the body's internal processes function optimally. The purpose of this course is to provide an introduction to II by (i) overviewing these internal neuro-, physio-, and chemico-systems, (ii) offering a practical model to begin applying them in design, and (iii) exploring each through the lens of HCI examples. With this introduction, participants will have a functional map to deepen their knowledge of inbodied systems and of how this alignment opens design spaces. This is also the foundation course for Inbodied Interaction 102: new measures for HCI and human performance design.
To validate the effects of interaction designs, particularly those involving physiological processes (such as breathing in mindfulness, heart rate in exertion games, and blood flow to the brain in cognitive load assessment), HCI researchers are increasingly turning to body-based signals to quantify effects and guide design decisions. These design decisions can be informed by Inbodied Interaction principles, which align knowledge of how the body performs optimally (physiologically and neurologically) with our designs. The purpose of this course is to present neuro-physiological measures new to HCI, including peripheral awareness, deep HRV, and new pre-cortical assessments, to open new design opportunities. Students will leave the course with this set of new assessments, as well as practical worked examples of how to choose and apply the measures best suited to a particular design and evaluation context.
Writing research papers can be extremely challenging, especially for scientific communities with their own review and style guidelines, such as CHI. The impact of everything that we do as researchers depends on how we communicate it. Writing for CHI is therefore a core skill to learn, because it is hard to turn a research project into a successful CHI publication. This fourth edition of the successful CHI paper writing course offers hands-on advice and more in-depth tutorials on how to write papers with clarity, substance, and style. It is structured into three 80-minute units with a focus on writing CHI papers.
Online surveys are an extremely popular research method in HCI and UX research. Surveys are often perceived to be easy to create, and are sometimes used even when they are not the most appropriate method. This course will review state-of-the-art methods, drawing from the past 20 years of research on online surveys [3, 4], and present applications for the user experience and HCI context. Upon completion of the class, attendees will have a framework of survey quality, a roadmap for planning and implementation, and resources to extend their knowledge of surveys for HCI and UX research. The instructors have conducted hundreds of surveys that tech companies use on a regular basis to inform business and product decisions. They also consult on and review surveys on a daily basis. Finally, they regularly teach survey research, and have been instrumental in connecting user experience research with survey research principles and measurement best practices.
This course will provide an introduction to techniques for rapid prototyping of Augmented Reality (AR) and Virtual Reality (VR) experiences. With the rise of consumer head-mounted displays and powerful mobile phones, AR and VR are becoming increasingly popular. Until recently, however, developing AR and VR experiences required strong programming skills. In this course, participants will learn how to rapidly prototype AR/VR experiences without the need for programming. Using a mixture of lecture and hands-on activities, participants will learn methods for quickly creating their own AR/VR interfaces. The course will use a mixture of traditional prototyping tools, such as sketching, and easy-to-use, free tools for creating AR/VR experiences. This is an ideal course for people who want to quickly prototype and test the core elements of AR/VR experiences before developing their final applications.
Hand-drawn sketching is a practice as old as our ancestors. From cave paintings to picture books, we have explored the world with our visual senses. Within Human-Computer Interaction, sketches can be used to document, ideate, and describe concepts among researchers, users, and clients. Attendees will leave the course with the confidence to engage actively with sketching on an everyday basis in their research practice.
Sketching in Human-Computer Interaction is a valuable tool for subjective practice, but also a tool for engagement with collaborators, stakeholders, and participants. This hands-on practice can be utilised in a variety of contexts. The course gives those already in possession of sketching skills the confidence to take their work to the next level. Drawing on expertise gained from working in both academia and industry, the instructors will lead course attendees on a journey through practical applications of sketching in HCI, from subjective sketching to participant engagement and publishing, using hands-on tasks and group activities.
This half-day course utilizes both lectures and interactive activities to demonstrate the practical UX research methodology of contextual inquiry. Experts from UEGroup, a Silicon Valley research and design company, will lead an interactive discussion and give practical suggestions for developing contextual inquiry methodologies, including how to get the best results and how to extend learning beyond the initial visit.
Re-programmable materials, such as those that can change their color in response to external stimuli, hold promise for a future in which objects re-configure according to a user's needs. In this course, we will provide participants with an in-depth understanding of color-changing materials, brainstorm novel applications in HCI, discuss technical solutions to realize participants' ideas, and conduct a hands-on session with one color-changing system from our prior work. Our team is uniquely positioned to teach this course since we combine expertise in materials, optics, computational geometry, and personal fabrication. At the end of the course, we will summarize the results as a research agenda for future work on re-programmable color-changing materials.
A key challenge for people who are new to reviewing is pitching a review at the right level and getting its tone and structure right. This course aims to help participants understand a) the different expectations of different venues and submission types, b) the processes they use to make decisions, and c) good techniques for producing a review under these different circumstances. Alongside developing a good understanding of these expectations, participants have a chance to critique anonymised proto-reviews and to guess the venue each was written for and the recommendation it makes.
Today, AI is used in many high-stakes decision-making applications in which fairness is an important concern. Already, there are many examples of AI being biased and making questionable and unfair decisions. Recently, the AI research community has proposed many methods to measure and mitigate unwanted biases, and has developed open-source toolkits for developers to build fair AI. This course will cover recent developments in algorithmic fairness, including the many different definitions of fairness, their corresponding quantitative measurements, and ways to mitigate biases. This course is open to beginners and is designed for anyone interested in the topic of AI fairness.
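To make one such quantitative measurement concrete, below is a minimal, self-contained sketch (our own illustrative example, not drawn from the course materials or any specific toolkit) of demographic parity difference: the gap in positive-outcome rates between two groups.

```python
# Illustrative sketch: demographic parity difference, one of the
# quantitative fairness measures a course like this might cover.
# The function name and data are our own illustrative assumptions.

def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-outcome rates between two groups.

    decisions: list of 0/1 model outcomes
    groups:    list of group labels (exactly two distinct values)
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "expects exactly two groups"
    rates = []
    for g in labels:
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates.append(sum(outcomes) / len(outcomes))
    return abs(rates[0] - rates[1])

# Example: group "a" receives a positive outcome 3/4 of the time,
# group "b" only 1/4 of the time.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

Open-source fairness toolkits implement this and many other measures; this sketch only illustrates the general idea of a group-difference metric.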
Machine learning (ML) for data analysis has attracted the HCI community in recent years. Multiple prebuilt ML libraries are available for popular programming languages such as R and Python for building and evaluating ML models. However, their usage demands good programming knowledge. The proposed course removes the need for programming by offering model building via an open-source data mining tool called Orange. Orange features drag-and-drop functionality, which enables ML developers to focus on model building rather than worrying about coding syntax. This course introduces the complete data mining pipeline with hands-on exercises on building and evaluating ML models.
This course is a hands-on introduction to the fabrication of flexible, transparent free-form displays based on electrochromism, for an audience with a variety of backgrounds, including artists and designers with no prior knowledge of physical prototyping. Participants will learn the essentials of designing and prototyping electrochromic displays, including screen printing or ink-jet printing of electrochromic ink and a simple assembly process.
The objective of this CHI course is to provide attendees with an introduction and overview of the rapidly evolving field of automotive user interfaces (AutomotiveUI). The course will focus on UI aspects in the transition towards automated driving. In particular, we will discuss the opportunities of cars as a new space for non-driving-related activities, such as work, relaxation, and play. For newcomers and for experts from other HCI fields, we will present the special properties of this field of HCI and provide an overview of new opportunities, as well as general design and evaluation aspects of novel automotive user interfaces.
Drones are becoming increasingly interactive and offer novel opportunities for mobile interactions. This hands-on course will present rapid prototyping techniques for designing drone interfaces. We will use physical prototyping to approach participatory design techniques when designing future interactions, and digital prototyping tools with small-sized drones using Python (with an option for visual programming) to implement part of the interaction. The course will include an introduction to the field of human-drone interaction and its methodologies, two hands-on prototyping sessions with physical materials and digital tools, and a presentation session where participants get to show off their prototypes!
Virtual Reality has the potential to transform the way we work, rest, and play. We are seeing use cases as diverse as education and pain management, with new applications being imagined every day. Virtual Reality technology comes with new challenges, and many obstacles need to be overcome to ensure a good user experience. Recently, many new Virtual Reality systems with integrated eye-tracking have become available. At the same time, many research labs are using Functional Near-Infrared Spectroscopy (fNIRS) to non-invasively measure brain activity and serve as a brain-computer interface. This course presents timely, relevant information on how Virtual Reality can leverage eye-tracking and brain activity data to optimize the user experience and to alleviate usability issues surrounding many challenges in immersive virtual environments. The integration of these sensors allows us to gain additional insights into human behavior, including where viewers focus their attention, and to monitor cognitive load. Advancing these approaches could make the Virtual Reality experience more comfortable, safe, and effective for the user, and open a new world of experimentation for Human-Computer Interaction (and Brain-Computer Interaction) researchers.
This course explores somatic approaches to experience design in HCI. Designing for Sensory Appreciation focuses on cultivating our bodily sensory experience as a resource for design. This course exemplifies how somatic approaches can be applied through sensory appreciation in the form of case studies that incorporate experience-based activities. We invite a rethinking of the process of designing for technology based on the emerging somatic turn within Human Computer Interaction that acknowledges design for the experience of the self and recognizes the interiority of human experience as an equal partner in technological design processes.
This course is a hands-on introduction to interactive electronics prototyping for people with a variety of backgrounds, including those with no prior experience in electronics. Familiarity with programming is helpful, but not required. Participants learn basic electronics, microcontroller programming and physical prototyping using the Arduino platform, then use digital and analog sensors, LED lights and motors, to build, program and customize a small paper robot.
This half-day course uses both lectures and interactive activities to demonstrate the practical UX research methodology of usability testing. Experts from UEGroup, a Silicon Valley research and design company, will lead an interactive discussion and give practical suggestions for developing usability testing methodologies, including: understanding the available metrics, writing a test plan, recruiting participants, ensuring the participant is engaged and comfortable, as well as moderating, data analysis, and reporting guidance. Given that technology and methodologies are constantly changing, this course will focus on ways to be creative with the resources available in order to best satisfy research needs.
As Artificial Intelligence (AI) technologies are increasingly used to make important decisions and perform autonomous tasks, providing explanations that allow users to understand the AI has become a ubiquitous concern in human-AI interaction. Recently, a number of open-source toolkits are making the growing collection of Explainable AI (XAI) techniques accessible for researchers and practitioners to incorporate explanation features in AI systems. This course is open to anyone interested in implementing, designing, and researching on the topic of XAI, and aims to provide an overview of the trends and methods of XAI and to help attendees gain hands-on experience creating different styles of explanation with an XAI toolkit.
Sign language interfaces offer rich, timely research problems. Recent advances in computational methods have made a wider range of sign language interfaces possible. At the same time, a recent interdisciplinary review and call-to-action outlines the most pressing challenges for the field, given these recent advances. This special interest group (SIG) will meet to discuss and make progress along these challenges, providing continuity for researchers working in this space, while exchanging ideas with the broader HCI research community.
With increasing automation of the driving task, car cockpits are transforming into living spaces rather than pure modes of transport. The promise of automated vehicles being individual places for relaxation and productivity while on-the-go, however, requires significant research. Not only safety-critical questions, but also issues related to ergonomic design, human factors for interactive systems, and social aspects have to be investigated. This special interest group presents an opportunity to connect various CHI communities around these problems, which need to be solved under time pressure, because automated vehicles are coming - whether or not the HCI-related issues are solved.
Over the last 20 years, the Latin American Human-Computer Interaction (HCI) community has been working to shed light on how the diverse populations in the region are adopting, using, and making sense of computational technologies. Latin America's tense socio-political context, plurality of languages, collectivist culture, and historical relationship with the Global North make it a unique and rich space for HCI research. Considering the growing number of studies about Latin American communities and the emergent efforts to contribute to the HCI literature, we propose to host a SIG meeting at the 2020 ACM CHI conference. Our goal is to consolidate these efforts to better promote HCI research in, by, and for Latin America, by (1) bringing together researchers, practitioners, and students who are interested in engaging with Latin America through their research and practice, (2) envisioning a shared research agenda, and (3) identifying strategies for making its contributions more visible and impactful in the international community.
In this SIG, we aim to gather researchers and practitioners to reflect on using XR technologies to support collaborative learning and co-creation, and to build a joint force connecting the Learning and Education community and the XR community at CHI. We have witnessed a significant increase in CHI publications in these research areas: 292 titles about "collaborative learning" or "co-creation" since 2015, compared to 96 in 2010-2014; and 1180 titles about XR since 2015, compared to 288 in 2010-2014. This SIG will bring together researchers, educators, designers, and practitioners to 1) stimulate a cross-disciplinary discussion on the opportunities of collaborative learning and co-creation in XR; 2) foresee the future directions, standards, and obstacles of introducing XR to education; and 3) build a joint community connecting XR and education research at CHI.
Recent advancements and economic feasibility have led to the widespread adoption of conversational digital assistants for everyday work. While research has focused on the use of conversational assistants such as Siri, Google Assistant, or Alexa by young adults and families, very little work focuses on their acceptance and adoption among older adults. This SIG aims to discuss the use and benefits of these conversational digital assistants for the well-being of older adults. The goals for this SIG are to: (i) explore the acceptance/adoption of voice-based conversational agents for older adults; (ii) explore anthropomorphism in the design of conversational digital assistants; (iii) understand triggers (scenarios of use) that can initiate the process of reminiscence, leading to meaningful conversation; (iv) explore conversational user experience; and (v) explore the co-existence of non-conversational use cases.
As Queer Human-Computer Interaction (HCI) becomes an established part of the larger field, both in terms of research on and with queer populations and in terms of employing queering theories and methods, the role of queer researchers has become a timely topic of discussion. However, these discussions have largely centered around member-researcher status and positionality when working with queer populations. Based on insights gathered at multiple ACM events over the past year, we identified two pressing issues: (1) we need to better support queer people doing HCI research not specific to queer populations, and (2) we need to identify how to best support member-researchers in leading Queer HCI while including collaborators beyond the queer community. This Special Interest Group (SIG) aims to directly address these challenges by convening a broad community of queer researchers and allies, working not only on explicitly-queer topics but across a broad range of HCI topics.
Whilst studying Human-Computer Interaction, students and work-place learners rarely encounter sketching, yet such practice has been shown to improve cognitive processes and increase retention of information. Additionally, it is a valuable method of ideation and communication for both subjective and group-based projects. We propose further integration of sketching practice within HCI and computer science curricula, both to preserve this valuable skill for use in research and industry, and to widen the perspectives of those working with subjects often seen as grounded in code or logic. SketCHI #3 will bring together those interested in enhancing students' and colleagues' experience in a hands-on meeting of minds and sketching, with the aim of sharing best practice and knowledge among those interested in expanding our views on education in the field, and of co-creating a Sketching in HCI education plan with a body of knowledge.
This SIG will provide child-computer interaction researchers and practitioners an opportunity to discuss future directions for the field after 18 years of Interaction Design and Children conferences. Topics for discussion include interdisciplinarity, theory and rigor, impact, emerging areas of research, and ethics.
Software agents with truly human-level intelligence may be years or decades away - or a practical impossibility. Yet from an end-user perspective, we already live in an era of seemingly sentient agents. Virtual personal assistants are now ubiquitous on mobile and smart home devices. But as long as human-level artificial intelligence remains speculative, the ability of today's pseudo-sentient agents to understand and fulfill the full range of human user demands is, by definition, limited and failure-prone. We need better strategies for designing natural, graceful, failure-tolerant user experiences with pseudo-sentient agents. To better support researchers and practitioners actively working on this problem, we propose a Special Interest Group at CHI 2020: a pop-up studio to design the near-future of user interactions with pseudo-sentient agents.
Eye movement recording has been extensively used in HCI and offers the possibility to understand how information is perceived and processed by users. Hardware developments have made eye recording ubiquitously accessible, allowing eye movements to enter common usage as a control modality. Recent AI developments provide powerful computational means to make predictions about the user. However, the connection between eye movements and cognitive state has been largely under-exploited in HCI. Despite the rich literature in psychology, a deeper understanding of its usability in practice is still required. This EMICS SIG will provide an opportunity to discuss possible application scenarios and HCI interfaces to infer users' mental state from eye movements. It will bring together researchers across disciplines with the goal of expanding shared knowledge, discussing innovative research directions and methods, fostering future collaborations around the use of eye movements as an interface to cognitive state, and providing a solid foundation for an EMICS workshop at CHI 2021.
The Games and Play community has been a consistent contributor to the ACM SIGCHI community over the last decade, introducing a significant number of participants (academic researchers, students, and practitioners) to CHI and contributing an increasing number of games- and play-related submissions across all tracks of the conference. Beyond CHI, the community has benefited from the CHI PLAY conference series (an ACM SIGCHI sponsored conference) focusing on all areas of games and play in Human-Computer Interaction (HCI). CHI 2020 marks the start of a new decade. This provides an opportunity for us to reflect on the advances made in HCI and games in the last decade, and to discuss and plan our focus for the new decade. Therefore, this SIG aims to engage the Games and Play community members in a discussion about the directions that we can take to advance the field forward, highlighting key research and development challenges to help navigate the complexities that we are facing.
This SIG proposes starting a discussion on the CHI platform about issues pertinent to the design of digital payments and the digitization of financial services. Although there has been much discussion in HCI around domains such as health and education, the domain of financial HCI is still nascent. The purpose of this SIG is to engage researchers and the broader community at CHI in discussion and debate around digital payments for the next billion users. We propose creating a live working document starting before the SIG, which continues to develop during and after it. This live document will enable engagement with a wider audience of researchers and industry practitioners, outlining processes, methods, and tools that HCI4D researchers have created to work with emergent users to develop ICT interventions.
Phishing attacks have increased each year, becoming more sophisticated and creating the need for practical training in identifying these scams. A recent study explored the potential of training staff at a university to recognize phishing attacks through either facts or stories shared by peers or experts. The study found that facts from an expert, rather than stories from experts or peers, remain the most effective form of training. Here we replicate and extend this study by targeting current college students. The results show that college students respond differently than university staff and are more receptive to social stories or facts from peers. These results indicate the importance of taking the target demographic into account when designing and conducting phishing training.
In many instances of collaborative writing, ideation and deliberation about what to write happen in a separate space from the actual document writing. However, having discussion and writing separated may result in a final document that has little connection to the discussion that came before. In this work, I build upon a hybrid discussion and document-writing tool called Wikum+ to allow groups to mix having discussions and summarizing those discussions in real-time, until the process results in a final document that incorporates and links all discussion points. I conduct a pilot study where a group used Wikum+ to collaboratively write a proposal, while a control group used a messaging platform along with Google Docs. From analyzing survey and interview results, I found preliminary evidence that Wikum+'s integration of discussion and summarization helped users be more organized as well as more inclusive of ideas in the final document. Wikum+ allowed for more light-weight coordination and iterative improvements through the incorporation of new ideas.
The pursuit of blending specialized knowledge with applicable skills is an emerging trend in higher education. Therefore, proficiency in practical work is considered increasingly pivotal, especially in STEM areas. We introduce ARchemist, a laboratory task management system aiding students in controlling procedures and reflecting on their actions during hands-on exercises in chemistry. The proposed solution employs augmented reality technology to support learning and offers in-situ assistance for laboratory procedures. We present the operational research prototype designed in a user-centered process. The system was evaluated in a preliminary study in a real-case class scenario. ARchemist was assessed as advantageous over traditional guidance tools and as contributing to students' confidence and efficiency, making it a promising endeavour to facilitate chemistry education.
Many CSS tutorials exist online, yet novice web developers struggle to learn and apply professional CSS techniques. In this paper, we introduce Knowledge Maps (KM), a platform that guides novice developers to understand and compare professional web examples in order to learn and apply each example's professional techniques. By comparing professional techniques, learners are able to identify the tradeoffs and use cases associated with each technique. KM introduces three process management mechanisms to help learners understand examples: highlighted CSS properties, interactive CSS properties, and guided reflection prompts. In a user study where 9 users interacted with the KM platform on two examples using CSS grid layout, learners were able to understand the pros and cons of the CSS grid layout technique used in each example. Learners also demonstrated that they could apply their understanding to new use cases.
Primary research is often conducted through interviews, observation, and surveys. Exploring other methods, this paper discusses how theatre and movement sessions can be used to conduct primary research and understand the emotions and experiences of a user. Allowing people to express their experiences through movement, expressions, and actions is a way to understand their opinions, mental model, fears, and expectations of a product, service, or environment. Including different age groups and people from various communities allows researchers to gain insights into differences in people's perceptions through differences in their actions or expressions. This paper describes in depth a workshop model that allows participants to describe their experiences through movement and theatre.
An estimated 7.3 million hectares of forest, roughly the size of the country of South Africa, have been lost since 1990. Reforestation, often considered the best solution to this problem, has proven to be expensive; despite various funding processes, it still falls short of its objectives. This paper analyzes the design process of a mobile application that lets millennials donate with or without money. The resulting application can show non-intrusive ads to the user to generate funds for restoration programs. The researchers applied Donald Norman's user-centered design approach to design and develop the product, along with principles from the psychology of interaction design and gamification to increase user engagement. Personalization features put the user in control of how and when they see ads. Usability testing of the app shows positive feedback and yields a good result (1.246) on the User Experience Questionnaire (UEQ) by Hinderks et al.
Smart home devices have been successful in fulfilling functional requirements but have often failed at incorporating user-centric security and privacy. This research project addresses the problem of security and privacy in the smart home through the lens of User Experience (UX) Design. Using qualitative interviews with users and designers, we explore the relationship between UX design, security, and privacy in the smart home. This is followed by participatory design workshops with smart home stakeholders to gain an in-depth knowledge of UX design challenges of security and privacy. Our results are further broadened by the development of a conceptual framework for UX design of security and privacy in the smart home.
Hepatobiliary cancer patients and the high-risk population have a unique need to constantly acquire the latest prevention and treatment information. Typical examples of interfaces for presenting information about hepatobiliary cancer are the clinical guideline and the guideline for patients. However, the framework of guidelines for patients has not been discussed in detail. In this paper, we propose user-friendly hepatobiliary cancer guidelines for patients based on previous research and expert review. Evaluation by general users and expert opinion suggests that the guidelines are sufficiently practical.
This paper is part of a larger study aimed to examine and enhance the role of parents in their child's learning. The work done thus far aims to understand one aspect of it - how low literate parents react to invitations of at-home engagement with their children. An ICT (Information and Communication Technology) based intervention was designed and deployed and it was found that the parents responded positively to the invitation. Nuances of the parents' responses and shortcomings of the intervention have been discussed.
What if interfaces allowed visually challenged users to access more auditory information at a time? This graduate research project explores this question by studying Spatial Audio Interfaces in general, and the use of Concurrent Speech in particular. We present the process of designing an experimental study to measure user performance on web-based search tasks using concurrent speech screen readers, and to understand user preference and perception of more general spatial audio interfaces. The findings from a pilot run of the study, and their implications on future work in this project are discussed.
Designing and creating physical computing systems can be challenging for novice users. In this paper, we present FritzBot, an intelligent conversational agent offering assistance to novice users in constructing physical-computing systems through natural-language interaction. We create a lexical circuit-event database based on 152 student reports from an undergraduate physical-computing course at a local art school. The LSTM-CRF network of FritzBot is trained on that database and is able to extract the input and output events from the user's description and generate the circuit and the code along with construction guidelines. A user study shows that FritzBot can significantly reduce novice users' construction effort and time on physical-computing tasks.
Scatterplots are one of the most common data visualisation methods. Designing scatterplots for large data sets is challenging, as overlapping markers are likely to cause loss of information. Poorly chosen marker opacity, shape, or size may lead to, e.g., overplotting or diminished visibility of outliers. The challenge is amplified by having to wait for rendering every time the design is changed. To reduce designer effort, optimisation-based approaches to scatterplot design have been proposed, the most comprehensive being an algorithm by Micallef et al. (2017), which applies image-based perceptual quality measures to the automatic evaluation of scatterplots. However, their approach also suffers from poor rendering performance, discouraging its use in interactive applications. This paper presents an algorithm that applies abstract rendering to efficiently update scatterplot markers regardless of data set size. We show how our approach enables fast, interactive design and adjustment of overlap in scatterplots, demonstrated with a proof-of-concept visualisation tool.
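The core idea behind abstract rendering can be sketched as follows (an illustrative reconstruction under our own assumptions, not the authors' implementation): points are aggregated once into a fixed-resolution grid of counts, so that re-rendering after a style change touches only the grid, whose size is independent of the data set size.

```python
# Illustrative sketch of abstract rendering for scatterplots:
# points are binned into a fixed-size grid of counts, so a restyled
# plot can be re-rendered from the grid without touching the raw
# data again. Function and parameter names are our own assumptions.

def aggregate(points, width, height, x_range, y_range):
    """Bin (x, y) points into a height x width grid of counts."""
    (x0, x1), (y0, y1) = x_range, y_range
    grid = [[0] * width for _ in range(height)]
    for x, y in points:
        col = min(int((x - x0) / (x1 - x0) * width), width - 1)
        row = min(int((y - y0) / (y1 - y0) * height), height - 1)
        grid[row][col] += 1
    return grid

points = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9)]
grid = aggregate(points, width=2, height=2, x_range=(0, 1), y_range=(0, 1))
print(grid)  # [[2, 0], [0, 1]]
```

Any subsequent mapping of counts to opacity or colour operates on the grid alone, which is what makes interactive restyling cheap for large data sets.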
Electroencephalogram (EEG) based emotion recognition has received significant research attention since it directly reflects the inner state of brain activity. However, it is difficult to improve the performance of cross-subject emotion recognition due to subjectivity and the noise associated with EEG data collection. In this paper, we propose a novel cross-subject EEG-based emotion recognition method combining channel-wise features and LSTM. The channel-wise feature is a symmetric matrix whose elements are Pearson correlation coefficients between pairs of channels, capturing the spatial interaction among all channels. The channel-wise features are then fed to a 2-layer stacked Long Short-Term Memory (LSTM) network, which extracts temporal features and learns an emotion model that can complementarily handle subjectivity and noise. Experiments on two publicly available datasets, DEAP and SEED, demonstrate the effectiveness of the combined use of channel-wise features and LSTM. Experimental results achieve state-of-the-art classification rates of 98.93% and 99.10% for 2-class valence and arousal on DEAP, respectively, and an accuracy of 99.63% for 3-class classification on SEED.
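The channel-wise feature construction described above can be sketched as follows (an illustrative reconstruction, not the authors' code): a symmetric matrix whose (i, j) entry is the Pearson correlation between the signals of channels i and j.

```python
# Illustrative sketch of the channel-wise feature: a symmetric
# matrix of Pearson correlations between every pair of channels.
# Function names and the toy signals are our own assumptions.
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def channel_wise_features(channels):
    """channels: list of equal-length signal lists, one per EEG channel.

    Returns a symmetric k x k correlation matrix for k channels.
    """
    k = len(channels)
    return [[pearson(channels[i], channels[j]) for j in range(k)]
            for i in range(k)]

# Toy example: channel 1 tracks channel 0, channel 2 mirrors it.
signals = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 2.0, 1.0]]
feats = channel_wise_features(signals)
print(round(feats[0][1], 6), round(feats[0][2], 6))  # 1.0 -1.0
```

In the method above, one such matrix per time window would form the sequence fed to the stacked LSTM.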
Humans learn what information is important and interesting in the world by observing and interacting with other agents. Given the important function of social attention (i.e., looking where others are looking, being motivated to engage with others, and attending to social information), the author proposes the use of eye-tracking technology to assess and guide the social attention of typically developing children and children with autism spectrum disorder, both of whom may benefit from learning systems designed to improve social attention. The result is the development of an Eye-Movement Modeling (EMM) intervention designed to enhance social attention by tracking and guiding the visual attention of users in real time during simulated dyadic social interactions. The design and implications of the EMM intervention will be discussed.
In response to the severe lack of video games geared towards individuals with visual impairment and blindness, we designed and developed a maze game ("Lost in Spaze") that can be enjoyed by players of varying visual ability. Using controllers, players navigate through four mazes of increasing difficulty, guided by audio cues indicating what direction they should take. Given that Lost in Spaze is a video game, there is still a visual component that is not meant to be used by players. However, the visual component may allow a non-player audience who are not visually impaired or blind to engage in the fun of this local multiplayer party game.
We present TimeBOMB, an interactive game station that enables players to experience the history of computers and video game development. They compete with each other playing an adaption of the classic game "Bomberman". As a novelty, each of the four sides of the station represents a different time period with corresponding input and output modalities. They consist of an oscilloscope interface with self-made, analogue control dials, a text-based interface controlled by a keyboard, a 2D arcade interface controlled by a joystick, and a 3D interface controlled by a gamepad. These four styles resemble iconic examples from the history of computer games. The game's art style also differs for each side accordingly.
We outline the ideation, development, and gameplay of Safecracker, an interactive heist game. The game provides immersion through a multisensory gameplay experience. Players are engaged in thematic, audio-based mechanics while interacting with a tangible controller. Players attempt to break into a life-sized safe by determining its 3-number combination. Their primary crime-committing tool is a stethoscope, which listens for audio cues as they rotate the safe's dial. Headphones embedded within the stethoscope play distinct sounds when the dial approaches each number in the sequence. Once the player has identified all 3 numbers in the combination, the safe unlocks and opens, revealing different treasures depending on the selected difficulty level.
A critical component of today's video games is their ability to provide life-like visuals. Using excellent graphics quality, they offer an immersive gaming experience to their users. However, graphics prove ineffective when developing games for the visually impaired. In our work, we explore how haptics and tactile interfaces can be harnessed to provide the visually impaired a better gaming experience. We propose the use of audio plus haptic data exchange between the user and the system. We have created a gaming console that is light, portable, easy to use, and reprogrammable, providing the visually impaired a new source of entertainment. In this work, we present a proof of concept by conducting a preliminary evaluation of HapTech.
In this paper we present "Sophroneo: Fear not", a VR horror experience inspired by Japanese folklore. To emphasize the supernatural side of the experience, we introduce several innovative techniques in the game system's interface. We utilize various liminal audio design techniques, a custom-made wearable thermal feedback setup with water-cooled Peltier TEC elements for long-lasting cold sensations, as well as physiological signal loops to improve immersion and increase the sense of fear and unease. In some parts of the game, the player is asked to close their eyes and depend on their other senses.
Chronic pain is an ailment that affects over 60 million people all over the world, yet it is poorly understood by both clinicians and society at large. Chronic pain manifests in many different forms and imposes immense physical limitations on the sufferer's body, but the social and mental issues associated with it are often overlooked. Although modern medicine alleviates the physical symptoms of chronic pain, it fails to address the issues that prey on the sufferer's mental health. Moreover, society lacks awareness, acceptance, and empathy towards sufferers. On the other side is an interactive narrative that employs gameplay as a medium to induce empathy and awareness about the social stigmatization and isolation that patients with chronic pain conditions face. The narrative transports the player into the troubled life of a chronic pain patient and their altered relationship with their own body.
Interest in interactive bodily play and game design has increased during the last decade, often fueled by the medical industry's focus on exergames and the need for basic movement training. By dividing bodily interactions into bodily preconditions and surrounding conditions for interaction, Move Maker systematically explores basic bodily play dynamics in combination with digital interactive devices. In this way, Move Maker offers a movement-based game system that challenges basic movement abilities through bodily play explorations. Move Maker is designed for elderly people to play with their grandchildren (through a suite of ready-to-play minigames) and for designers and physiotherapists wanting to explore and develop novel bodily play constructions.
The rate at which species are becoming extinct has never been this fast, and human impact on the environment is the main reason. To promote behavioural change, we developed a game to raise awareness amongst the generation of young creatives and provide them with knowledge about everyday behaviours that are beneficial for, or threatening to, a specific endangered animal. Using a human-centred design process, we gathered user requirements and developed a multisensory educational group game in which players cooperate to control an animal, avoiding harmful items and collecting helpful ones using voice and movement. Users who played the game during a final Wizard-of-Oz user test found it highly engaging and successful in raising awareness, and reported increased motivation to take action in their personal lives. Our innovative game could be of great value in decreasing human-induced animal extinction.
Planet Bug is an educational game that teaches players about insects through a fun and interactive medium. Insect populations worldwide are declining, but little media coverage or publicity has highlighted this growing environmental concern. Incorporating a unique gaming console and engaging graphics, Planet Bug tells a story that educates players about the world's insect populations and how to take action to protect them. Starting with a message from the future, the journey begins with an important mission to photograph insects from three different habitats (corresponding to three levels). Upon successful completion of the final level, the true message of the game is displayed.
In the United States, the community center is one of the essential resources for seniors, providing programs and services such as meals, information, transportation, and recreation. To maintain good overall health, seniors need a good social network and continuous physical activity. Although some older adults depend on community centers to meet these needs, they still sometimes feel less connected with others. In this paper, we present Team Bingo, a game that aims to reduce social isolation and increase physical activity among seniors in a community setting.
Over the last twenty years the CHI conference has grown substantially. With the reframing of climate change as a climate crisis, environmental concerns have become increasingly pervasive in the community. In 2019 CHI introduced a sustainability role and set a goal to make CHI more sustainable. In 2020 CHI is in Hawaii. This work looks back over the last two decades and estimates the substantial and growing CO2 emissions from conference travel. First, it posits how, in the short term, potential environmental damage can be minimised. Second, and longer term, it invites the community to reflect on research dissemination and how the conference experience may need to change.
The poem 'An Oldy's Lament' was written by Julie Butler, an 86-year-old creative writer. Julie describes her personal reflections on technology and society while questioning her part in this seemingly pre-ordained arrangement of the technological world, as technology colonises societies. We analyse this literary expression from a postcolonial perspective, discussing how the 'othered' of innovation bring to the fore pertinent issues that need overcoming in both HCI research and the community at large.
As the field of Human-Computer Interaction (HCI) increasingly engages with matters of social change for the Global South, more students from this region — seeking to use HCI to create impact in their countries — emigrate to HCI programs in the Global North. In turn, this challenges the assumed target audience, intentionality, and pedagogical approaches of traditional HCI educational structures. Drawing from Latin American decolonial thinking, we reflect on our experiences as Latin American students seeking a graduate education in the United States. From there, we discuss paths for HCI educators and students to engage with the co-creation of pluriversal learning spaces that resist universal notions of language, class content, and knowledge production. As we build a more inclusive research community, these discussions become critical for imagining an HCI education aware of its social and political dimensions.
It can be difficult to critically reflect on technology that has become part of everyday rituals and routines. To combat this, speculative and fictional approaches have previously been used in HCI to decontextualise the familiar and imagine alternatives. In this work we turn to Japanese Shinto narratives as a way to defamiliarise voice assistants, inspired by the similarity between how assistants appear to 'inhabit' objects and how kami do. Describing an alternate future where assistant presences live inside objects, this approach foregrounds some of the phenomenological quirks that can otherwise easily become lost. Divorced from the reality of daily life, it allows us to reevaluate some of the interactions and design patterns common in the virtual assistants of the present.
Contemporary computing devices contain a concoction of numerous hazardous materials. Though users are more or less protected from these substances, recycling and landfilling reintroduce them to the biosphere, where they may be ingested by people. This paper calls on HCI researchers to consider these corporeal interactions with computers and critiques HCI's existing responses to the e-waste problem. We propose that whether one would consider eating a particular electronic component offers a surprisingly useful heuristic for whether we ought to be producing it en masse with vanishingly short lifespans. We hypothesize that the adoption of this heuristic might affect user behaviour and present a diet plan for users who wish to take responsibility for their own e-waste by eating it. Finally, we propose an alternative direction for HCI researchers to design and advocate for those affected by the material properties of e-waste.
In this paper, we make an argument for using "the absurd" as a lens through which to critique modern developments in interactive technology. We argue that absurd positions are generative and engaging; they provide scope and direction for developing artefacts that people want to talk about and discuss. We argue for adopting absurd positions because: 1) as publicly funded academics, unbeholden to commercial interests, we can; 2) it's fun; and 3) doing so draws out, highlights, and plays with the often weird, fake, nonsensical, bizarre, and surreal aspects of modern interactive technology artefacts - and the often weird situations that arise when interacting with those artefacts. To illustrate this argument, we present a number of case studies drawn from 10 years of our absurd research papers, many of which were published at previous iterations of this conference.
Humans are skilled at closing their minds to the suffering of others. This paper argues that, over the past several decades, human-computer interaction (HCI) tools and techniques have both exacerbated and ameliorated this human tendency. It contributes a theoretical framework for thinking about the relationship between HCI and various forms of suffering, and a set of principles to support suffering-centered design — design activities that foreground the suffering of others rather than obscure it. This paper proposes that there is a need for further exploration of ways that HCI can improve the collective human experience by supporting ways to open people's minds to the suffering of others.
On the eve of the General Data Protection Regulation (GDPR) coming into effect we, a university laboratory, marked the occasion with an interactive installation called Compliance. Data traces from Compliance were subsequently processed by the lab, here enacted in the form of a play. While much discussion has centered around modern 'black-boxed' processing of data, less attention has been paid to the value of the data itself, and whether it merits use. We draw on dramaturgical methods for both analysis and presentation, allowing readers to imagine staging their own, different, versions of the event. Drawing on the ambiguous ontological status of (as yet unexamined) data, we offer a discussion on the value of data, its use and non-use, as well as how to live with this ambivalence, continuously negotiating social contracts about our further conduct with the data.
Our interactions with data, visual analytics included, are increasingly shaped by automated or algorithmic systems. An open question is how to give analysts the tools to interpret these "automatic insights" while also inculcating critical engagement with algorithmic analysis. We present a system, Sortilège, that uses the metaphor of a Tarot card reading to provide an overview of automatically detected patterns in data in a way that is meant to encourage critique, reflection, and healthy skepticism.
Computer-generated algorithmic art has undergone significant developments since its emergence in the 1960s. With further integration of art and technology in the 21st century, artists continue to respond, take risks and challenge the ways computers can be thought of as a creative medium. This project specifically addresses speculative forms of artificial intelligence, particularly the possibilities for creative collaboration between human and machine-generated embodiments of poetic expression. The Artificial Creative Intelligence (ACI) is a fictional AI Poet whose spoken word poetry signals the horizon of a new type of authorship that questions the philosophical implications of artificial intelligence for creative practitioners.
While the Human-Computer Interaction (HCI) field widely recognises the need for interdisciplinarity, academic traditions from older fields still limit the scope of cross-disciplinary discourse. As HCI scholars come from a variety of backgrounds, developing new HCI-specific research agendas requires a deep immersion in cross-disciplinary discussion and a thorough understanding of those with a different academic upbringing. This may be difficult in the eventful life of an HCI academic. In this paper, we propose a twelve-step programme that helps foster better cross-disciplinary discourse. Having successfully survived a multitude of intense arguments between a computer scientist and a psychologist, we offer an easy way to have fruitful and civilised cross-disciplinary discussions. Our programme is designed to help HCI academics broaden their minds and interface with other disciplines while preserving their sanity.
We explore the potential of deoxyribonucleic acid (DNA) molecules to enable new ways for humans to interact with their stories and memories via a physical interface that engages senses such as touch, smell and taste. Specifically, we embed the memories of an elderly woman inside a micro-organism by means of computing and genetic engineering. To do so, we first encoded the stories into a string of nucleotides. We next designed and fabricated a circular string by appending restriction enzymes and backbone genes. We developed specific bio-protocols to insert the fabricated molecule inside Komagataeibacter rhaeticus bacteria. The transformed bacteria were presented in an exhibition as a sculpture - Semina Aeternitatis, containing billions of copies of the original stories that people could see, touch, smell and taste. Our work is a first step towards a future where the interaction with our past will go beyond words, and take a more tangible format.
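The first step described above, encoding stories as a string of nucleotides, can be illustrated with a minimal sketch. The two-bits-per-base mapping below is our assumption for illustration only; the abstract does not specify the actual encoding scheme used for Semina Aeternitatis.

```python
# Hypothetical illustration of encoding text as a nucleotide string.
# The 2-bits-per-base mapping is an assumption, not the authors' scheme.

BASES = "ACGT"  # each base carries 2 bits, so 4 bases encode one byte

def text_to_nucleotides(text: str) -> str:
    """Encode UTF-8 text as a string of A/C/G/T, 4 bases per byte."""
    out = []
    for byte in text.encode("utf-8"):
        for shift in (6, 4, 2, 0):          # most-significant bit pair first
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def nucleotides_to_text(seq: str) -> str:
    """Invert the encoding: every 4 bases become one byte."""
    data = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        data.append(byte)
    return data.decode("utf-8")

story = "Once upon a time"
encoded = text_to_nucleotides(story)
assert nucleotides_to_text(encoded) == story  # lossless round trip
```

A real pipeline would additionally add error-correcting redundancy and avoid problematic sequences (e.g. long homopolymer runs) before synthesis.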
The invisible norms of Christian culture - which includes religious, secular, and atheist Christianity - are hard for those raised within that culture to see. This paper asks what it might look like to design human-computer interactions from outside Christian hegemony. It shares concepts from Jewish thought that can serve as exemplars for innovation in design: elu v'elu, tzedakah, the eishet chayil, and ma'alin bakodesh. This work seeks to inspire others to resist Christian hegemony in their research and system design.
A body of Human-Computer Interaction (HCI) research addresses a wide range of wellbeing issues including digital and online healthcare, formal and informal healthcare and wellbeing infrastructure, and health informatics. We focus on the wellbeing practices of underrepresented marginalized communities in rural Bangladesh. The goal of this work is to understand their wellbeing challenges and how those are connected to their religious beliefs and adaptations, which we define as 'Parareligion'. This paper presents our findings from a three-year ethnographic study with 150 participants in 15 villages. We extend this discussion by arguing for better inclusion of para-religious practices in Wellbeing-HCI discourse to design more culturally appropriate and sustainable technologies for such target communities.
Researchers and designers working in industrial sectors seeking to incorporate Artificial Intelligence (AI) technology will be aware of the emerging International Organisation for AI Legibility (IOAIL). IOAIL was established to overcome the eruption of obscure AI technology. One of its primary goals is the development of a proficient certification body that informs users about the AI technology they are being exposed to. To this end, IOAIL produced a system of standardised icons to attach to products and systems, both to indicate the presence of AI and to increase the legibility of that AI's attributes. Whilst certification is voluntary, it is becoming a mark of trust, enhancing the usability and acceptability of AI-infused products through improved legibility. In this paper we present our experience of seeking certification for a locally implemented AI security system, highlighting the issues generated for those seeking to adopt such certification.
What for and how will we design children's technologies in the transhumanism age, and what stance will we take as designers? This paper aims to answer these questions with 13 fictional abstracts from 16 authors from different countries, institutions, and disciplines. Transhumanist thinking envisions enhancing the human body and mind by blending human biology with technological augmentations. Fundamentally, it seeks to improve the human species, yet the impacts of such a movement are unknown, and its implications for children's lives and technologies have not been explored deeply. In an age where technologies such as under-skin chips or brain-machine interfaces can clearly be defined as transhumanist, our aim is to reveal the probable pitfalls and benefits of those technologies for children's lives by using the power of design fiction. Thus, the main contribution of this paper is a diverse presentation of provocative research ideas that will foster discussion on transhumanist technologies impacting the lives of children in the future.
In a world where public engagement is increasingly important, where there is an urge to leave the research laboratories and tell people what is being done, and where the effort from academia is still limited, we propose Death on the Nile, a novel experience for getting people in touch with innovative interactive technologies. In our exhibition, visitors are invited to solve a crime while they encounter the new frontiers of Human-Computer Interaction. Participants can also experience the different stages of the research process through the metaphor of the investigation. An empirical study (N=969) shows that Death on the Nile is engaging and effective as a method through which to present HCI research. Finally, we conceptualize the experience into a framework that can be used with potentially any interactive technology.
As humans, our view of the world is predominantly restricted to our own experience, and we are largely oblivious to the alternate perspective of reality experienced by the objects that cohabit our spaces, even though such objects are often integral components of our lives. This paper considers the growing phenomenon whereby non-human objects such as cutlery and appliances are having what might be considered human-like experiences through the integration of advanced computational programming. By examining the services provided by Madame Bitsy's Fantastic Future Forecasting and Fortune Telling Emporium for the Internet of Living Things, a fully autonomous online fortune-telling service for Internet of Things enabled objects and services, we attempt to illuminate what it means to be a digitally connected object.
While academic research culture varies across schools, disciplines, and individual labs, the material and mental well-being of both graduate students and faculty are often negatively impacted by systemic factors in academia. Here we unpack these patterns in order to counter the narrative that individualistic solutions can bring about change. We illustrate how focus on quantitative outcomes, perfectionism, competition, time scarcity, power dynamics, bias towards maintaining the status quo, and financial stress contribute to negative lab culture. We describe specific, concrete, and actionable practices we institute in our lab to counter these systemic factors. We end by opening the conversation to other researchers to examine and counter toxic lab culture to promote supportive, inclusive, and ethical research.
As Michael Polanyi claims, "we know more than we can tell." As such, much of the author's tacit knowledge remains beyond the reader's grasp. However, by engaging with the text through embodied action, readers can gain their own tacit knowledge which may approximate or even expand upon the author's knowledge through the reader's own situated analysis. I present a situated analysis of a CHI paper, "Design for Collaborative Survival," to propose an embodied method of reading through situated action that enables communication and expansion of the knowledge beyond the text. I responded to each of the concepts in that paper in an active way to better approach the authors' tacit knowledge by developing my own. Through this activity I intend to stimulate discussion on how authors and readers can benefit from similar methods that take reading beyond the text.
I(am)MEI: 013709002488246. I was born in many countries - my accelerometer came from Germany, my battery from China, the lithium in my battery was mined in Chile, my gyroscope from Switzerland, my camera... from Japan. I was assembled carefully from these component parts, and had two less than careful owners before R picked me up from a reseller, and brought me back to his house in London, UK. We had a good time together - at first: he revelled in my speed and ability to find things, we viewed the world via a lens with infinite options. But I was not built to last. This is my story.
The increasing popularity of smart personal assistants has meant the rapid inclusion of data-collecting technology in homes. Research has shown that the privacy notices for these smart devices can be ineffective, as users often have incorrect mental models about what happens to data collected from them. To provide more effective data collection cues, we present a redesign of traditional, friendly smart assistant personas: eGregor, an eldritch hive mind being. Using science fiction concepts in conjunction with visceral notice, a concept that eschews purely text-based privacy indicators, eGregor more clearly represents the data practices of a hypothetical parent company. It has various aesthetic and auditory indicators and an intuitive and terrifying persona. We draw attention to the potential for major smart personal assistant companies to improve upon their user interface designs.
Technology for disabled people is often developed by non-disabled populations, producing an environment where the perspectives of disabled researchers - particularly when they clash with normative ways of approaching accessible technology - are denigrated, dismissed or treated as invalid. This epistemic violence has manifest material consequences for our lives as disabled researchers engaging with work on our own states of being. Through a series of vignettes, we illustrate our experiences and the associated pain that comes with such engagement as well as the consequences of pervasive dehumanization of ourselves through existing works. Our aim is to identify the epistemic injustice disabled people experience within HCI, to question the epistemological base of knowledge production leading to said injustice and to take ownership of a narrative that all too often is created without our participation.
HCI is complicit in the climate crisis, as the systems and services that we design engender unsustainable energy use and waste. HCI has equal potential to find solutions for environmental challenges and script, by design, the behavioral changes needed for sustainable net futures. We explore this dichotomy through the lens of data transmission, examining the energy consumption and environmental impact of web communications. This work begins with a critical revisiting of legacy web design that mines the past for actionable ideas towards sustainable net futures. We query how we can reduce our own net energy consumption, and plot a path to design our low-power website. In addition we speculate on a redesign of the background systems, outlining the practical steps we have taken towards solar, wind, gravity and micro-hydro low-power web hosting solutions.
Drawing from the novel technology of LIME (LIquid MEtal shape-changing interfaces), MetaLife presents a series of installations for educational, entertainment, or aesthetic purposes. In this paper, we discuss how we translate LIME's interaction paradigms into a design vocabulary and how we use that vocabulary to design the installations. As industry and academia have become disconnected from each other in the field of tangible user interfaces, MetaLife gives an example of how we can strengthen the bonds between them.
The roads of your veins is an interactive biometric-data artwork that allows participants to scan their veins and find real-world roads that match their vein lines. Vein data, one of the most fascinating forms of biometric data, contain uniquely complicated lines that resemble the roads and paths surrounding us; the roads, in turn, resemble how our vein lines are interconnected and how blood circulates through our bodies in various directions, at various speeds, and in different conditions. This new artwork explores the line segmentation and structure of veins and compares them to roads in the real world. Through this project, users can explore the correlation between individuals and environments using the hidden patterns under the skin, vein recognition techniques, and image processing. The project also has the potential to lead the way in the interpretation of complicated datasets while providing aesthetically beautiful and mesmerizing visualizations.
Communicating emotional experiences is core to being human, yet notoriously difficult. With this in mind, we acknowledge previous work in bio-responsive and neurofeedback systems that facilitate the externalization of subjective experiences, highlighting the potential of appropriating neurofeedback for communication. We present a demonstration that explores this opportunity through "Neo-Noumena", a communicative neuro-responsive system that augments the interpersonal communication of emotion through brain-computer interfacing and artificial intelligence: it interprets the user's affective state and dynamically represents it to others in mixed reality through two head-mounted displays. The user will, with a partner, experience their affective state translated into an aural swarm of procedurally generated, emotionally informative fractals.
We present MicroAquarium, a new hybrid digital-biological installation that provides an immersive experience of interacting with real living cells. MicroAquarium uses a custom-built light-projection microscope equipped with an interactive input device and an immersive display to mediate the interaction between humans and microorganisms. Users' hand motions are recognized and converted into a pattern of light that is projected onto the photo-tactic microorganisms inside the microscope. The view inside the microscope is displayed on a large screen display, providing users with an immersive experience of being inside an aquarium of living cells. Our system effectively bridges the differences in size and the sensing modalities between human users and microscopic organisms and allows for unique playful and exploratory inter-species interactions.
"Vera" is an adaptation of a short story by New Zealand's early XXth-century feminist writer, Katherine Mansfield. It is a unique artificial reality experience that allows viewers to cross the so-called fourth wall between the film reality and the audience's reality. The experience allows the viewer to "enter" a film frame via a mixed reality tabletop experience. To our knowledge, "Vera" is the first immersive experience that utilizes both volumetric capture (instead of avatar animation) and photogrammetric reconstruction of the set decoration and props. It is also the first experiment that literally rather than metaphorically crosses the fourth wall.
We present Augmented Displays, a new class of display systems combining high-resolution interactive surfaces with head-coupled Augmented Reality. This extends the screen estate beyond the display and enables placing AR content directly at the display's borders or within the real environment. Furthermore, it enables people to interact with AR objects with natural pen and touch input in high precision on the surface. This combination allows a variety of interesting applications. To illustrate them, we present two use cases: An immersive 3D modeling tool and an architectural design tool. Our goal is to demonstrate the potential of Augmented Displays as a foundation for future work in the design space of this exciting new class of systems.
The golden hour following a traumatic injury is the period in which surgical treatment is most likely to prevent mortality and morbidity. However, the increasing occurrence of large-scale disasters overwhelms local medical systems, and patients find themselves without timely access to medical expertise. To respond to this problem, medical care experts are exploring telemedicine, which typically attempts to use synchronous audiovisual communication. While current telemedicine approaches have limited ability to impact the physical care required for trauma, AR technology allows medical professionals to see their patients alongside additional digital information. In this paper we describe ARTEMIS (Augmented Reality Technology to Enable reMote Integrated Surgery), an immersive AR-VR telementoring infrastructure that allows experienced surgeons to remotely aid less experienced medical professionals in the field. ARTEMIS provides immersive Mixed Reality visual aids by tracking a patient in real time and showing a reconstructed 3D point cloud in a VR environment. Expert surgeons can interact with the 3D point-cloud representation of the patient, instruct the remote novice through real-time 3D annotations projected in AR onto the patient's body and through hand maneuvers shown in AR via an avatar of the expert surgeon, and project short video clips of specific procedures into the AR space for the novice to follow.
We present a novel gestural interaction strategy for multi-device interactions in augmented reality (AR), in which we leverage existing physical affordances of everyday products and spaces for intuitive interactions in AR. To explore this concept, we designed and prototyped three demo scenarios: pulling virtual sticky notes from a tablet, pulling a 3D model from a computer display, and 'slurping' color from the real-world environment to smart lights with a virtual eyedropper. By merging the boundary of digital and physical, utilizing metaphors in AR and embodying the abstract process, we demonstrate an interaction strategy that harnesses the physical affordances to assist digital interaction in AR with hand gestures.
ARLooper is an augmented reality mobile interface that enables the user to record sound, visualize it as a 3D waveform, play and control the waveform in an AR space. By employing the shared AR features of iOS ARKit, it also supports a collaborative audiovisual environment in which multiple users can interact with each other through the real-time synchronization of activities such as sound recording, playing, and manipulating. For this collaborative experience design, it uses user ID colors to distinguish the ownership status of the waveforms. This paper provides the design and technical overview of ARLooper in addition to the background of the research.
We introduce an advanced computer vision-based AI system that offers people with vision impairments (VI) dynamic, in-situ access to information about the location, identity and gaze direction of other people nearby. Our AI system utilizes the camera technology of a head-worn HoloLens device, which captures a near 180° field-of-view surrounding the person who is wearing it. Captured images are then processed by multiple state-of-the-art perception algorithms whose outputs are integrated into a real-time tracking model of all people that the system detects. Users can receive information about those people acoustically (via spatialized audio) using a wrist-worn input controller. Having such dynamic access to information, through AI system interactions, enables people with VI to develop their communication skills, more easily focus on others, and be more confident in their social interactions. Thus, our work explores how AI systems can serve as a useful resource for humans, helping expand their agency to develop new or extend existing skills.
We present Tree Illustrator, an interactive authoring tool for tree visualizations. Tree Illustrator is based on GoTree, a declarative grammar that allows users to create tree visualizations by configuring three aspects: visual elements, layout, and coordinate system. Within the set of all possible tree visualization techniques, we identify a subset of techniques that are both "unit-decomposable" and "axis-decomposable" (terms we define). For tree visualizations within this subset, Tree Illustrator provides users with flexible and fine-grained control over the parameters of the techniques, supporting not only existing techniques but also undiscovered and hybrid visualizations.
User Experience designers, software engineers, and computer scientists alike are rarely tasked with thinking about the environmental impact and resource consumption of their design decisions, both when creating online platforms and experiences and when publishing multimedia content. Furthermore, free or low-cost web and hosting services encourage designers and users alike to overlook the substantial energy footprint of their online behaviors and interactions. The Solar-Powered Server is a system designed to investigate more deeply the power consumption of our online actions and to compare different design elements and choices. By experimenting with the intrinsic qualities of renewable energy sources, it aims to create more resource-efficient content, make information storage more accessible under low-resource conditions, and foster more energy-positive behaviors.
Chameleon is a software tool that combines computer vision feature-matching algorithms with an open database format to allow the incorporation of dynamic HTML5 interactive content over any type of document (e.g., PDF files, PowerPoint documents) without modifying existing applications or the source document. It thus allows the provision and viewing of an enhanced version of a research paper with embedded interactive demonstrations or videos. It can also be used to perform live demonstrations of interaction techniques while giving a presentation, without having to switch tools.
Rapid prototyping of interactive textiles is still challenging, since it involves manual skills, several processing steps, and expert knowledge. We demonstrate Rapid Iron-On User Interfaces, a novel fabrication approach that empowers designers and makers to enhance fabrics with interactive functionality. It builds on heat-activated adhesive materials consisting of smart textiles and printed electronics, which can be flexibly ironed onto fabric to create custom interface functionality. To support rapid fabrication in a sketching-like fashion, we developed a handheld dispenser tool for directly applying continuous functional tapes of desired length as well as discrete patches. We demonstrate versatile composition techniques for creating complex circuits, utilizing commodity textile accessories, and sketching custom-shaped I/O modules. We further provide a comprehensive library of components for input, output, wiring, and computing. Three example applications demonstrate the functionality, versatility, and potential of this approach.
We demonstrate G-ID, a method that utilizes the subtle patterns left by the 3D printing process to distinguish and identify objects that otherwise look similar to the human eye. The key idea is to mark different instances of a 3D model by varying slicing parameters that do not change the model geometry but can be detected as machine-readable differences in the print. As a result, G-ID does not add anything to the object but exploits the patterns appearing as a byproduct of slicing, an essential step of the 3D printing pipeline.
We introduce the G-ID slicing & labeling interface that varies the settings for each instance, and the G-ID mobile app, which uses image processing techniques to retrieve the parameters and their associated labels from a photo of the 3D printed object.
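One way to think of the instance-labeling step is as enumerating unique combinations from a space of geometry-preserving slicing parameters, with each instance of the model assigned one combination. The sketch below is a simplified illustration of that idea; the parameter names and values are hypothetical, not G-ID's actual slicer settings.

```python
from itertools import product

def assign_labels(num_instances, param_space):
    """Assign each instance a unique combination of slicing parameters.

    param_space: dict mapping a parameter name to its candidate values.
    Returns one parameter dict per instance; raises if the parameter
    space cannot distinguish that many instances.
    """
    names = sorted(param_space)
    combos = list(product(*(param_space[n] for n in names)))
    if num_instances > len(combos):
        raise ValueError("parameter space too small for instance count")
    return [dict(zip(names, combo)) for combo in combos[:num_instances]]

# Hypothetical geometry-preserving parameters: infill angle and line
# width leave detectable patterns without changing the model's shape.
labels = assign_labels(3, {"infill_angle": [0, 45, 90],
                           "line_width": [0.4, 0.5]})
```

The mobile app's task is then the inverse lookup: estimate the parameters from a photo and map them back to the stored label.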
Food 3D printing enables the creation of customized food structures based on a person's individual needs. In this paper, we demonstrate the use of food 3D printing to create perceptual illusions for controlling the level of perceived satiety for a given amount of calories. We present FoodFab, a system that allows users to control their food intake by modifying a food's internal structure via two 3D printing parameters: infill pattern and infill density. In two experiments with a total of 30 participants, we studied the effect of these parameters on users' chewing time, which is known to affect people's feeling of satiety. Our results show that we can indeed modify chewing time by varying infill pattern and density, and thus control perceived satiety. Based on the results, we propose two computational models and integrate them into a user interface that simplifies the creation of personalized food structures.
We demonstrate a simple and accessible method for enhancing textiles with custom piezo-resistive properties. Based on in-situ polymerization, our method offers seamless integration at the material level, preserving a textile's haptic and mechanical properties. We demonstrate how to enhance a wide set of fabrics and yarns using only readily available tools. During each demo session, conference attendees may bring textile samples, which will be polymerized in a shared batch. Attendees may keep these samples. While the polymerization is happening, attendees can inspect pre-made samples and explore how these might be integrated into functional circuits. Example objects created using polymerization include rapidly manufactured on-body interfaces, tie-dyed motion-capture clothing, and zippers that act as potentiometers.
CurveBoards are breadboards integrated into physical objects. In contrast to traditional breadboards, CurveBoards better preserve the object's look and feel while maintaining high circuit fluidity, which enables designers to exchange and reposition components during design iteration.
Since CurveBoards are fully functional, i.e., screens display content and buttons accept user input, designers can test interactive scenarios and log interaction data on the physical prototype while still being able to make changes to the component layout and circuit design as needed.
We present an interactive editor that enables users to convert 3D models into CurveBoards and our fabrication technique for making CurveBoard prototypes.
ProtoSpray is a new fabrication method that combines 3D printing and spray coating to create touch-sensitive displays of arbitrary shape. Our approach makes novel use of 3D-printed conductive channels to create base electrodes and to shape displays. A channelled 3D-printed object is then spray-coated with active electroluminescent materials to produce illumination. This demonstration involves several different devices created through the ProtoSpray process, showing its free-form applicability to irregular shapes such as a Möbius strip and spherical surfaces. Our work provides a platform that empowers makers to use displays as a fabrication material.
We present Jubilee, an open-source motion platform extensible to custom applications and application media by means of interchangeable bed plates and automatic tool-changing. Jubilee is a piece of infrastructure that can be readily adapted to specialty tasks requiring precise computer control of one or more tools, without requiring machine-design expertise. To this end, Jubilee is designed to be reproduced worldwide solely from its documentation, without relying on specialized manufacturing processes or volume discounts. Additionally, our paper provides a series of application examples involving various tools, spanning multimaterial 3D printing, multicolor pen plotting, multi-syringe liquid handling, and microscopy.
We present Haptic-go-round, a surrounding platform for deploying props and devices that provide haptic feedback in any direction in virtual reality experiences. The key component of Haptic-go-round is a motorized turntable that rotates the correct haptic device to the right direction at the right time to match what users are about to touch. We implemented a working platform, including plug-and-play prop cartridges and a software interface, that allows experience designers to quickly add their haptic components and use the platform in their applications.
We present a demonstration of Chasm, a broadband screw-based linear actuator that renders rich and expressive haptic feedback on wearable and handheld devices. Chasm renders low-frequency skin stretch and high-frequency vibrations, both simultaneously and independently, through a single tactor, thereby augmenting user interactions with multidimensional haptic feedback in a light and compact form factor. We embody Chasm in a marker-shaped prototype and integrate it with a virtual reality headset through a robust software framework for real-time control of haptic features. We develop a set of VR scenarios to demonstrate the rich tactile feedback rendered by the handheld marker, augmenting the user experience with feelings of impact, texture, object stiffness, and weight on the hand.
We propose ElastOscillation, a device mounted on a virtual reality (VR) controller that provides 3D multilevel force feedback for damped oscillation to enhance VR experiences. ElastOscillation consists of a proxy, six elastic bands, and DC motors. It leverages the motors to control the bands' elasticity to restrain the movement of the proxy, which is connected to the bands. Therefore, when users shake the ElastOscillation device, the proxy shakes or moves within the corresponding movement ranges or levels, and users perceive oscillation forces at different levels. In addition, the elastic force from the bands further reinforces the oscillation force feedback. In the demonstration, users can explore four VR applications to feel the sensations of pan-flipping, cocktail shaking, wine swirling, and fishing.
While standard VR controllers lack means to convey realistic, kinesthetic impressions of size, resistance, or inertia, this demonstration presents Drag:on, an ungrounded shape-changing interaction device that provides dynamic passive haptic feedback based on drag, i.e., air resistance, and weight shift. Drag:on leverages the airflow at the controller during interaction. The device adjusts its surface area to change the drag and rotational inertia felt by the user. When rotated or swung, Drag:on conveys an impression of resistance, which we previously used in a VR user study to increase the haptic realism of virtual objects and interactions compared to standard controllers. Drag:on's feedback is suitable for rendering virtual mechanical resistances, virtual gas streams, and virtual objects differing in scale, material, and fill state. In our demonstration, participants learn about this novel feedback concept and the implementation of our prototype, and can experience the resistance feedback during a hands-on session.
Most widespread haptic feedback devices for augmented and virtual reality (AR/VR) fall into one of two categories: simple hand-held controllers with a single vibration actuator, or complex glove systems with several embedded actuators. In this work, we explore haptic feedback on the wrist for interacting with virtual objects. We use Tasbi, a compact bracelet device capable of rendering complex multisensory squeeze and vibrotactile feedback. Leveraging Tasbi's haptic rendering, and using standard visual and audio rendering of a head mounted display, we present several interactions that tightly integrate sensory substitutive haptics with visual and audio cues. Interactions include push/pull buttons, rotary knobs, textures, rigid body weight and inertia, and several custom bimanual manipulations such as shooting an arrow from a bow. These demonstrations suggest that wrist-based haptic feedback substantially improves virtual hand-based interactions in AR/VR compared to no haptic feedback.
We exhibit a handheld robotic gadget, named OMOY, that is equipped with a movable weight inside its body. By controlling the translational and rotational motion of the weight via four parameters (target position, trajectory, speed, and repetition), the gadget can present weight shifts to the user who holds it. In addition, we expect weight shifts used together with other robotic behaviors, such as hand gestures, facial expressions, and speech dialogues, to enhance emotional and/or intentional messaging between users. In this hands-on demonstration, visitors will have an opportunity to hold OMOY and feel several weight-shift patterns. This demonstration, as well as the extended abstract, is based on the content of CHI'20 Paper No. 646.
Push-buttons provide rich haptic feedback during a press via mechanical structures. While different buttons have varying haptic qualities, few works have attempted to dynamically render such tactility, which limits designers from freely exploring buttons' haptic design. We extend the typical force-displacement (FD) model with vibration (V) and velocity-dependence (V) characteristics to form a novel FDVV model. We then introduce Press'Em, a 3D-printed prototype capable of simulating button tactility based on FDVV models. To drive Press'Em, we present an end-to-end simulation pipeline that covers (1) capturing any physical button, (2) controlling the actuation signals, and (3) simulating the tactility. Our system can go beyond replicating existing buttons to enable designers to emulate and test non-existent ones with desired haptic properties. Press'Em aims to be a tool for future research to better understand and iterate over button designs.
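As a rough illustration of how an FDVV-style model could combine its terms, the sketch below sums a static force-displacement backbone, a displacement-dependent vibration, and a velocity-dependent term. The function names and the exact form are illustrative assumptions, not the paper's actual formulation.

```python
import math

def fdvv_force(x, v, t, fd_curve, vib_amp, vib_freq, damping):
    """Total rendered force for a button press at displacement x,
    velocity v, and time t, under a hypothetical FDVV-style model.

    fd_curve: force-displacement backbone, force = fd_curve(x)
    vib_amp:  vibration amplitude as a function of displacement
    vib_freq: vibration frequency in Hz
    damping:  coefficient of the velocity-dependent term
    """
    f_static = fd_curve(x)                              # FD backbone
    f_vibe = vib_amp(x) * math.sin(2 * math.pi * vib_freq * t)
    f_vel = -damping * v                                # velocity term
    return f_static + f_vibe + f_vel
```

In a real renderer, fd_curve and vib_amp would be captured from a physical button, then played back through the actuator at each control tick.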
Kirigami Haptic Swatches demonstrates how kirigami- and origami-based structures enable sophisticated haptic feedback through simple cut-and-fold fabrication techniques. We leverage four types of geometric patterns: the rotational erection system (RES), the split-fold waterbomb (SFWB), the overlaid structure of SFWB and RES (SFWB+RES), and cylindrical origami, to render different sets of haptic feedback (i.e., linear, bistable, snap-through, and angular rotational force behaviors, respectively). In each structure, form factor and force feedback properties can be tuned through geometric parameters. Based on the experimental results and analysis, we implemented software that automatically generates 2D patterns for desired haptic properties. We also showcase five example applications: assistive input interfaces, a rotational switch, a multi-sensory toy, a task checklist, and phone accessories. We believe Kirigami Haptic Swatches helps tinkerers, designers, and even researchers create interactions that enrich our haptic experience.
Online courses are popular among learners of programming, but many learners have trouble completing them. A common approach to increasing learner engagement is to provide co-learner presence via chat and forums. In this work, we present Cocode, an online learning system in which learners can share their presence without any explicit action; their normal learning activities signal co-learner presence. Cocode is a web application for online programming courses that shows learners other learners' code editors and running screens in the programming environment while they work on exercises. Results from our between-subjects studies show that learners using Cocode are more engaged and work on more programming exercises than learners using the system without its social features.
Living Jiagu is an interactive, wall-sized exhibition for engaging learning of Chinese writing. Living Jiagu leverages state-of-the-art machine learning technologies to facilitate the recognition and recall of Chinese characters via constructive etymology in context - i.e., learning the writing and meaning of pictographic characters by designing them from image prompts, much as the creators of Oracle Bone Script (OBS) did 3,000 years ago, and experiencing how these characters function and interact in natural scenes. An installation of Living Jiagu received positive feedback from over one thousand users.
We present PneuModule, a tangible interface platform that enables users to reconfigure physical controls on pressure-sensitive touch surfaces using pneumatically actuated inflatable pin arrays. PneuModule consists of two types of passive modules: a main module and extension modules. The main module can be customized by attaching extension modules that have distinct physical input modalities. The extension modules are hot-swappable, enabling users to quickly customize the interface layout. We showcase the feasibility of PneuModule through a series of interactive demonstrations.
We envision "Soft Mobility," a new type of personal mobility made of soft, lightweight, and inflatable materials. A soft body enables safer interactions with pedestrians and drivers; the lightweight and inflatable properties allow users to easily carry it as a "portable device" by deflating, folding, and packing it into, for example, a backpack. As an embodiment of such soft mobility, we prototyped "poimo" (POrtable and Inflatable MObility). To evaluate poimo, we first conducted mechanical tests to verify that it can bear the weight of a human. We also measured the time needed to inflate and deflate it to demonstrate its portability. Finally, we report preliminary results of riding it and clarify the requirements for further investigating and implementing soft mobility transport.
Acoustic levitation offers a novel alternative to traditional volumetric displays. With state-of-the-art hand-tracking technology, direct interaction with and manipulation of levitating objects in 3D is now possible. Further, adding game elements such as completing simple tasks can encourage participants to explore new technologies. We have therefore developed a gesture-controlled levitating-particle game, akin to the classic wire-loop game, that combines all these elements (levitation, hand-tracking, and gameplay) together with physical obstacles. Further, we have designed a gesture input set that guards against falsely triggered gestures and dropping of the levitating particle.
We designed and created a new self-contained tangible pen-like input device prototype that can sense all 26 contacts and works with any capacitive display using a conductive case designed with pliable corners. Contacts are distinguished using the device angle from an internal IMU. We further designed a 3D "mirror" visualization that displays a re-configurable mapping of commands to contacts to enable discovery of command-to-contact mappings.
Misinformation spread presents a technological and social threat to society. With the advance of AI-based language models, automatically generated texts have become difficult to identify and easy to create at scale. We present "The Rumour Mill", a playful art piece, designed as a commentary on the spread of rumours and automatically-generated misinformation. The mill is a tabletop interactive machine, which invites a user to experience the process of creating believable text by interacting with different tangible controls on the mill. The user manipulates visible parameters to adjust the genre and type of an automatically generated text rumour. The Rumour Mill is a physical demonstration of the state of current technology and its ability to generate and manipulate natural language text, and of the act of starting and spreading rumours.
The AirSticks are an audio-visual gestural instrument designed to inspire the improvisation and performance of real-time explorative electronic music through subtle and not-so-subtle physical movement. The instrument has been used professionally across the world. In doing so, the designers have created hundreds of mappings of movement to sound, along with a real-time visualisation system that uses the same movement data from handheld motion controllers to unify the sound, graphics, and movement. The instrument has been shown to have a broad appeal beyond professional practice, which has led to hands-on demos with children through to older people. We invite visitors, for the first time, to play not only with the AirSticks but also with the latest iteration of our audio-visual system, to inspire new collaborations in the field of computer-human interaction. We argue that we need more new tools to unlock people's creativity through moving, listening, and music making.
Puppet Book is a new concept for a digital storybook that incorporates puppetry. It enables parents to manipulate characters in real time through a back-of-device interface while reading a storybook to their children. Puppet Book aims to provide enhanced expressiveness for parents and immersion for children. The Puppet Book interface was implemented carefully to minimize parents' task workload and maximize the expressiveness of puppeteering. A user study with 11 parent-child groups was conducted. Parents who easily adapted to the interface showed higher motivation to tell the story, identifying themselves with the characters. Children showed increased concentration and motivation for reading.
The home of the future will be an interface in itself, much like today's omnipresent screen-based interfaces. Designing the interior of your home will be as gratifying and immediate as a click is today, with modular digital furniture becoming the status quo and its design democratized and in the hands of users.
In this paper, we present LightSpace, an exploration of the future of design for the home. LightSpace focuses on a very specific solution to a simple problem: designing and stylizing an entire room with playful delight in mind. With our small-scale proof-of-concept tangible user interface (TUI), we account for the direct physicality of designing within a physical space by allowing users to move miniature 3D-printed furniture in a projection-mapped dollhouse and switch materials using a magic wand (NFC).
This exploration acts as a small-scale vehicle for us to investigate the usefulness, feasibility, and possible ubiquity for such an interface in the future.
Elastic Legs Illusion (ELI) provides the striking illusory experience of having longer legs: the virtual legs are visually stretched from the first-person perspective. The illusion works without any experimenter's assistance. A single subject wearing an HMD sits on the floor with both legs stretched straight out against a wooden plate and pulls back on a handle connected to a rubber tube anchored to the plate. A weight scale (Wii Balance Board) is sandwiched between the legs and the plate; the more strongly the subject pulls the handle, the more load is transferred to the legs and to the scale. ELI correlates this increase in the scale reading with the stretch ratio of the legs in HMD space. Thus, the sense of the legs being pushed back against the plate is transformed into a sense of having longer legs. The system was tested in our laboratory exhibition, where 33 out of 44 participants strongly agreed that they felt their legs were longer.
Kilo Hoku is a virtual reality simulation of sailing on the Hokule'a, a Polynesian double-hulled sailing canoe built in Hawai'i in 1974, which completed its worldwide voyage in 2017. By developing the simulation, we aimed to observe how a virtual reality environment could aid in the cultural preservation of the star navigation portion of Hawaiian wayfinding techniques, and to help to educate future generations of non-instrument navigators. The reaction to the simulation from current practicing Modern Hawaiian wayfinders was positive, and indicates that further study is warranted in testing the efficacy of the simulation for teaching Hawaiian wayfinding to future navigators. This paper will describe the implementation and features of the simulation.
New VR experiences allow users to walk extensively in the virtual space. Bigger tracking spaces, treadmills, and redirected walking solutions are now available. Yet certain connections to the user's movement are still not made. Here, we specifically see a shortcoming in the representation of locomotion feedback in state-of-the-art VR setups. As shown in our paper, providing synchronized step sounds is important for involving the user further in the experience and the virtual world, but is often neglected. VRsneaky detects the user's gait with force-sensing resistors (FSRs) and accelerometers attached to the user's shoes, and plays synchronized, gait-aware step sounds accordingly. In an exciting bank robbery scenario, the user tries to rob a bank behind a guard's back. The tension increases as the user has to be aware of each step in this atmospheric experience. Each step reminds the user to pay attention to every movement, as each step is represented by adaptive step sounds with different noise levels.
We demonstrate Head-Coupled Kinematic Template Matching (HC-KTM) - a new technique to predict a ray pointer's landing position (end-position and angle) for selection movements in virtual reality (VR) environments. The technique adapts and extends a prior 2D kinematic template matching method to VR environments where ray pointers are used for selection. It builds on the insight that the kinematics of a controller and Head-Mounted Display (HMD) can be used to predict the ray's final landing position and angle. In our VR game, Hangry Piggos, we leverage HC-KTM to generate target predictions within a scene, and we apply pointing visualization techniques on top of them to accelerate a player's selection.
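The endpoint-prediction idea behind template matching can be illustrated with a bare-bones matcher: compare the partial velocity profile observed so far against recorded templates and take the closest template's final angle as the prediction. This is a simplified stand-in for HC-KTM, not the paper's actual algorithm (which also couples in the HMD's kinematics).

```python
import numpy as np

def predict_landing(partial_vel, templates):
    """Predict a pointing movement's final angle from its partial
    velocity profile by nearest-template matching (illustrative
    simplification of kinematic template matching).

    partial_vel: 1D array of angular-velocity samples observed so far.
    templates:   list of (full_velocity_profile, final_angle) pairs
                 recorded from previous movements.
    """
    n = len(partial_vel)
    best_angle, best_err = None, float("inf")
    for profile, final_angle in templates:
        if len(profile) < n:
            continue  # template too short to compare
        # Sum-of-squares distance over the observed prefix.
        err = float(np.sum((np.asarray(profile[:n]) - partial_vel) ** 2))
        if err < best_err:
            best_err, best_angle = err, final_angle
    return best_angle
```

A real implementation would also normalize for movement amplitude and duration before matching; here the raw prefix distance keeps the idea visible.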
Head-mounted displays (HMDs) that provide a virtual reality (VR) experience using a simple configuration have become popular. However, HMDs still have a well-known problem: they hide the user's face when worn, making it difficult for avatars in VR space to communicate with each other because users cannot relay their facial expressions and gaze information to their avatars. In this paper, we propose a new HMD, constructed using polarizing plates, that can capture the user's face and expressions while worn. The proposed method enables the user's face to be captured by an internal camera while blocking the scenery from the user's view using orthogonal polarizing plates. The proposed method can be implemented at low cost and can directly capture a full-color image of the user's face, unlike conventional methods using infrared cameras. We implemented a prototype and confirmed that the proposed HMD captures the user's face while showing VR images. This paper discusses the unique characteristics and possible applications of the proposed method.
Navigation systems for runners commonly provide turn-by-turn directions via voice and/or map-based visualizations. While voice directions require permanent attention, map-based guidance requires regular consultation. Both disrupt the running activity. To provide more natural and less intrusive navigation support, we designed RunAhead, a navigation system based on head-scanning feedback. According to the runner's head-scanning movement and actual head direction, we provide simple and intuitive feedback on the path the runner is looking at, highlighting the one to follow. Initially, we proposed three different feedback modes: haptic, music, and audio cues. In a user experiment, RunAhead proved as effective as voice-based guidance but with a better user experience for the haptic and music feedback modes. In our demonstration, we therefore propose to experience these preferred feedback modes at a sample intersection.
A facial mask, as part of the daily outfit for people living in an air-polluted environment, has become a barrier to social interaction and self-expression. airMorphologies, a pneumatic wearable device, reshapes our body language and the manner of social interaction. Controlled by users' voices, airMorphologies allows two users to interact and express themselves while wearing the shape-changing devices. By changing their appearance, users gain a novel way to communicate and socialize with their new body language and expressions.
ShArc is a new geometric technique for real-time measurement of complex curves. The sensors have a flexible strip that can be dynamically formed into different shapes in a plane. Unlike traditional bend sensors, which provide only a single measure of angle, inexpensive ShArc sensors provide detailed information about their precise shape. They can be thought of as "digital calipers for curves". The exhibit includes an explanation of the operating principle and the basic construction. Demonstrations of the sensors in action allow attendees to appreciate how these devices can be applied to understanding human pose, and the potential of shape as an expressive input mechanism.
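The general principle of recovering a strip's shape from many local measurements can be sketched by integrating per-segment bend angles into a 2D polyline; this is an illustrative simplification, not ShArc's actual sensing math.

```python
import math

def reconstruct_shape(segment_angles, segment_length=1.0):
    """Rebuild a planar strip shape from per-segment bend angles.

    Each entry in segment_angles is the change in heading (radians)
    at the start of that segment; segments have uniform length.
    Returns the list of (x, y) vertices along the strip.
    """
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for a in segment_angles:
        heading += a                       # accumulate local bends
        x += segment_length * math.cos(heading)
        y += segment_length * math.sin(heading)
        points.append((x, y))
    return points
```

With dense enough segments, the polyline approximates the continuous curve of the strip, which is what makes many small angle measurements equivalent to knowing the full shape.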
We engineered a wearable microphone jammer that is capable of disabling any microphones in the user's surroundings, including hidden microphones. Our device is based on a recent exploit that leverages the fact that when exposed to ultrasonic noise, commodity microphones will leak the noise into the audible range. This allowed us to build a novel wearable microphone jammer that is worn as a bracelet on the user's wrist and jams ubiquitously. We found that our device outperforms state-of-the-art jammers. (1) Existing jammers built from multiple transducers exhibit blind spots, i.e., locations in which transducers destructively interfere and where a microphone cannot be jammed; instead, our wearable jammer leverages natural hand gestures that occur while speaking, gesturing or moving around to blur out blind spots. (2) Existing jammers are directional, requiring users to point the jammer to a microphone; instead, our wearable jams in multiple directions. This is beneficial in that it allows our jammer to even protect against microphones out of sight, such as those hidden behind everyday objects.
I have long been interested in how people seek information from external sources and make sense of the results. While the sources of information have continued to evolve over the years, from libraries, to web search engines, to virtual assistants, many important challenges and opportunities remain. The success of any information retrieval system depends critically both on the ability to support people in articulating their information needs and making sense of the results to solve the problem that motivated their search in the first place, and on the ability to efficiently and effectively find relevant information. My research combines these two dimensions into an interdisciplinary, user-centered perspective on information systems.
My interest in information retrieval started in the early 1980s with the observation that different people use a surprisingly wide variety of words to describe the same object or concept. This fundamental characteristic of human language sets limits on how well simple word-matching techniques can do in satisfying information needs. In a paper at the pre-CHI Gaithersburg conference in 1982, we described this problem as statistical semantics. (It is symptomatic of the problem that we subsequently used vocabulary mismatch, verbal disagreement, and statistical semantics to refer to the same problem.) Over the next decade, with colleagues at Bell Labs, I developed and evaluated solutions that involved collecting multiple descriptors for objects and reducing the dimensionality of the representation using techniques like Latent Semantic Indexing (LSI) to mitigate the disagreement between the vocabulary authors use in writing and the vocabulary searchers use to express their information needs. Similar approaches (combined with a lot more data and compute) power the modern word-embedding techniques that are widely used in natural language processing.
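The dimensionality-reduction step behind LSI can be sketched with a truncated SVD of a term-document matrix. The toy example below (an illustration, not the original implementation) shows two documents that share vocabulary landing together in concept space while a third, using different words, stays apart.

```python
import numpy as np

def lsi(term_doc, k):
    """Latent Semantic Indexing sketch: project a term-document count
    matrix into a k-dimensional concept space via truncated SVD, so
    documents using related vocabulary end up near each other even if
    their exact words differ. Returns one column of coordinates per
    document.
    """
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return np.diag(s[:k]) @ Vt[:k]   # k x num_docs document coordinates

# Toy corpus: docs 0 and 1 share vocabulary; doc 2 uses different words.
A = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]], dtype=float)
docs = lsi(A, 2)
```

In practice the matrix is weighted (e.g., tf-idf) before the SVD, and queries are folded into the same space for retrieval.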
Throughout my career I have tried to make computers easier to use, and therefore more useful, for ordinary people. Along the way, I've invented a few things that have proven to be helpful. At the operating system level, I tried to make the interface to applications more intuitive by introducing the concepts of icons and metaphor. Within applications, I tried to make the interface simpler by reducing the number of commands, making them more general ("universal"), and making their options and parameters visible with dialog boxes. I also tried to make it possible for ordinary people to program computers, since as long as people can only use predefined applications, they will be able to access only a fraction of the power of computers.
These efforts have all appeared in commercial products, from the Xerox Star office computer to Stagecast Creator's educational software. These innovations have affected the personal computer industry in significant ways.
Our world has been animated and enriched by digital technologies used for creativity, collaboration, learning, health, politics, and commerce. Yet there is much that is troubling. We depend upon software that nobody truly understands and that is vulnerable to hackers and cyberterrorism. Privacy has been overrun by governments and surveillance capitalism. Our children are addicted to their devices; we have become workaholics. Jobs and livelihoods are being demolished without adequate social safety nets. A few digital technology leviathans threaten to control not only their domains, but all commerce. Among all these issues, I am most deeply concerned about the hype associated with modern artificial intelligence, and the risks to society stemming from premature use of AI software. We are particularly vulnerable in domains such as medical diagnosis, criminal justice, senior care, driving, and warfare, where AI applications have begun or are imminent. Yet much current AI is unreliable and inconsistent, without common sense; deceptive in hiding that it is an algorithm and not a person; mute and unable to explain decisions and actions; unfair and unjust; free from accountability and responsibility; and used but not trusted. Happily, we are not helpless victims of forces totally outside our control. We can raise our voices as citizens; we can enact procedures and legislation as a society. My talk will mention steps of both kinds, then focus on what we as digital technology, HCI, and usability professionals can and must do.
Some years ago, I began to read environmental philosophy such as deep ecology and Marxist interpretations. At the time I did not know what to do with this material, but my tree hugging instincts and experience of having seen my rural grandparents live a decent but very simple life inspired an abiding interest in the topics these philosophers were writing about. My grandparents grew a lot of their own food, kept an ancient car that my grandfather maintained, and enjoyed strong social ties with family and friends. It seems now that we must develop our own forms of simplicity and self-reliance if we are to weather the devastations the market economy is visiting on the planet. When I discovered computing colleagues worried about the same environmental issues I was, we collaborated to create the Computing within LIMITS community.
LIMITS is a series of workshops focused on how to use computing tools and strategies to address global environmental problems. Inspired by the classic 1972 Limits to Growth report (itself a product of computing), the premise is that we must accept and embrace the reality that we live on a finite planet. That reality must inform and animate our work. The first LIMITS Workshop was held in 2015, with one each year since. The Sixth Annual Workshop is in June.
Most computing research does not address or take into account the finiteness of Earth. In 2018, I first-authored a paper with LIMITS colleagues for Communications of the ACM to discuss this matter. I was skeptical that it would be accepted, but perhaps it is a sign of the times that reviewers were surprisingly positive. The paper argues that problems of sustainability are socioeconomic in origin, and we can turn to the field of ecological economics to set a course. In a subsequent paper, "Design in the Age of Climate Change," I examined social movements that consider sustainability within an economic frame, such as the post-growth movement in Europe, the Transition Town movement, and food sovereignty movements worldwide. These movements draw from earlier ideas such as Gandhi's notion of abundance within simplicity.
Many technologies available to autistic children focus primarily on the medical characteristics of a diagnosis of autism. All too often, these technologies are not oriented towards children's specific modes of sense-making, but rather towards an outcome behaviour defined by a neurotypically dominant society. These technologies are then also evaluated according to the extrinsic motivations driving their design. Recently, though, more and more Participatory Design (PD) projects create technologies together with autistic children, albeit still mostly remaining within a medicalised view of autism. Hence, there is a lack of research into participatory design with autistic children that aims to develop technologies reflecting their intrinsic interests and holistic wellbeing and considering the embodied experiences they have with these technologies.
Constructive notions of experience in the research field of Human-Computer Interaction (HCI) rely on empathy as a core component of experience-driven evaluations. However, autistic individuals perceive the world differently and, hence, make sense of it differently than non-autistic researchers. This divide becomes especially pronounced when working with children, whose life worlds vastly differ from those of adult researchers. While empathy is a core requirement for the evaluation of the experience of autistic children, my work shows that researchers cannot rely solely on their empathy. Hence, evaluating these experiences requires a structured process capturing multiple views.
When interacting with materials, we infer many of their properties through tactile stimuli. These stimuli are caused by our manual interaction with the material; they are therefore closely coupled to our actions. Similarly, if we are subjected to a vibrotactile stimulus with a frequency directly coupled to our actions, we do not experience vibration - instead, we experience it as a material property. My thesis explores this phenomenon of 'material experience' in three parts. Part I contributes two novel devices: a flexible phone which provides haptic feedback as it is being deformed, and a system which can track a finger and simultaneously provide haptic feedback. Part II investigates how vibration is perceived when coupled to motion: what are the effects of varying feedback parameters, and what are the effects of different types of motion? Part III reflects on and contextualizes the findings presented in the previous parts. In this extended abstract I briefly outline the most important aspects of my thesis and the questions I've left unanswered, while also reflecting on the writing process.
This panel will provoke the audience into reflecting on the dark side of interaction design. It will ask what role the HCI community has played in the inception and rise of digital addiction, digital persuasion, data exploitation and dark patterns and what to do about this state of affairs. The panelists will present their views about what we have unleashed. They will examine how 'stickiness' came about and how we might give users control over their data that is sucked up in this process. Finally, they will be asked to consider the merits and prospects of an alternative agenda that pushes for interaction design to be fairer, more ethically grounded, and more transparent, while at the same time addressing head-on the dark side of interaction design.
What is the future of the CHI conference? What will it look like in 2030? In this panel, we will present some data on the current state of the CHI conference - from paper submissions to attendance - and the initial findings on what our community has said is their 'ideal' CHI conference. We will then make wild predictions about the future of CHI and encourage audience discussion on the form that the conference and academic publishing could take in the future.
Transparency in process and its reporting is paramount for establishing the rigor of qualitative studies. However, the CHI conference receives submissions with varying levels of transparency and oftentimes, papers that are more transparent can be inadvertently subjected to more scrutiny in the review process, raising issues of fairness. In this panel, we bring together researchers with diverse qualitative work experiences to present examples of transparency-related initiatives and their corresponding review responses. We aim to work towards setting standards for transparent reporting in qualitative-work submissions and increasing fairness in the review process. We focus on the challenges in achieving transparency in qualitative research and current workarounds to overcome frictions in the reviewing process through engaging discussions involving panelists and the audience.
Artificial Intelligence (AI) and Machine Learning (ML) algorithms are coming out of research labs into real-world applications, and recent research has focused heavily on Human-AI Interaction (HAI) and Explainable AI (XAI). However, interaction is not the same as collaboration. Collaboration involves mutual goal understanding, preemptive task co-management, and shared progress tracking. Most human activities today are done collaboratively; thus, to integrate AI into already-complicated human workflows, it is critical to bring the Computer-Supported Cooperative Work (CSCW) perspective into the root of algorithmic research and plan for a Human-AI Collaboration future of work. In this panel we ask: Can this future for trusted human-AI collaboration be realized? If so, what will it take? This panel will bring together HCI experts who work on human collaboration and AI applications in various application contexts, from industry and academia and from both the U.S. and China. Panelists will engage the audience through discussion of their shared and diverging visions, and through suggestions for opportunities and challenges for the future of human-AI collaboration.
We seek to engage a broad and diverse audience in discussing emerging challenges in HCI technologies that have potential for significant social impact. In a town hall forum, members of the ACM Technology Policy Council will introduce four emerging challenges for discussion: ethical HCI in global contexts; privacy protection in human-AI interaction; accessible interactions in HCI design; and the environmental impact of HCI. Discussion will be launched with a question from the panel; additional questions will be posted and ranked by the audience. The session will support digital and remote audience participation, and participants will have access to a summary report when the session concludes. These discussions provide an opportunity for CHI members to contribute to emerging policy and governing environments to facilitate ethical, accessible, and environmentally sensitive HCI research, design, and development.
A large portion of the software side of the global information technology infrastructure, including web search, email, social media, and much more, is in many cases provided free to the end users. At the same time, the corporations that provide these services are often enormously profitable. The business model that enables this involves customized advertising and sometimes behavior manipulation, powered by intensive gathering and cross-correlation of detailed personal information. These companies provide some great products and services at no upfront cost to the end users. But the model has a dark side as well, with negative impacts for privacy, autonomy, human dignity, and democracy. The purpose of this panel is to provide a civil forum for the CHI community as a whole to discuss this business model, including its advantages and disadvantages, and its impacts on CHI and HCI and society more generally, with an eye toward responsible innovation.
We investigate the effects of different ways of visualizing the virtual gait of the avatar in the context of Walk-in-Place (WIP) based navigation in a virtual environment (VE). In our study, participants navigated through a VE using the WIP method while inhabiting an avatar. We varied the leg motions of the avatar while performing the WIP gesture: (1) Fixed Body: the legs stood still; (2) Prerecorded Animation: the legs moved at a fixed, predetermined pace (plausible, but generally not in accordance with the user's motion); (3) Synchronized Motion: the legs moved in synchrony with those of the user. Our results indicate that the sense of presence improved significantly by visualizing the leg movement, synchronized or not. This in turn further enhanced the sense of body ownership, especially when the leg motion was synchronized to that of the user (Synchronized Motion). However, a significant level of simulation sickness was reported when the virtual leg motion did not match the user's (Fixed Body and Prerecorded Animation). We discuss the implications for representing avatar locomotion in immersive virtual environments.
One of the utilities of Virtual Reality is to provide its users with new perspectives, which is a promising application for architectural and interior design. In this paper, we investigate the effects of varying spatial scale perception (SSP) on the perception of risks and the ability to detect them in a virtual environment. We conducted a user study where participants experienced four unique perspectives, those of a two-year-old child, an eight-year-old child, an adult, and a person in a wheelchair, by manipulating their virtual inter-pupillary distance and eye height. We found that varying SSP had significant impacts on the perceived level of risk, the heights of the identified risks, and the number of risks discovered. The results yielded empirical evidence to support that experiencing different SSPs can potentially help identify issues during an architectural design process for various groups of users.
ARphy is a tangible interface that extends current ways of organizing photo collections by enabling people to interact with digital photos using physical objects in Augmented Reality. ARphy contextually connects photos with real objects and utilizes physical affordances so that people can add more meanings to their collections and interact with them naturally. We also created an ARphy Interaction Design Toolkit, which can add ARphy-compatible interactions to any object, so that people can register their own things for organizing collections. We developed a prototype using seven everyday objects and evaluated ARphy through a qualitative user study. Our findings indicate that ARphy is intuitive, immersive, and enjoyable and has the potential for selectively managing collections using photos and objects that have personal meanings.
In this work, we present findings from an online survey (N=77) in which we assessed situations in which users wish for features or devices in their home to be smart(er). Our work is motivated by the fact that on one hand, several successful smart devices and features found their way into users' homes (e.g., smart TVs, smart assistants, smart toothbrushes). On the other hand, a more holistic understanding of when and why users would like devices and features to be smart is missing as of today. Such knowledge is valuable for researchers and practitioners to inform the design of future smart home devices and features, in particular with regard to interaction techniques, privacy mechanisms, and, ultimately, acceptance and uptake. We found that users would appreciate smart features for various use cases, including remote control and multi-tasking, and are willing to share devices. We believe our work to be useful for designers and HCI researchers by supporting the design and evaluation of future smart devices.
Text input in virtual reality is not widespread outside of labs, despite being increasingly researched. Current setups require powerful components that are expensive or not portable, preventing effective in-the-wild use. The latest technological advances enable portable mixed reality experiences on smartphones. In this work, we propose a portable low-fidelity solution for text input in mixed reality on a physical keyboard that employs accessible off-the-shelf components. Through a user study with 24 participants, we show that our prototype leads to a significantly higher text input performance compared to soft keyboards. However, it falls behind soft keyboards on copy editing. Qualitative inquiries revealed that participants enjoyed the ample display space and perceived the accompanying privacy as beneficial. Finally, we conclude with challenges and future research directions that build upon the presented findings.
Despite much discussion in HCI research about how individual differences likely determine computer users' personal information management (PIM) practices, the extent of the influence of several important factors remains unclear, including users' personalities, spatial abilities, and the different software used to manage their collections. We therefore analyse data from prior CHI work to explore (1) associations of people's file collections with personality and spatial ability, and (2) differences between collections managed with different operating systems and file managers. We find no notable associations between users' attributes and their collections, and minimal predictive power, but do find considerable and surprising differences across operating systems. We discuss these findings and how they can inform future research.
Fashion is one of the areas in which decision-making relies on the subjective experiences of fashion professionals. Fashion style trend analysis is an important process in fashion; however, due to a lack of quantitative style criteria, analysis results tend to vary across fashion professionals, often making it difficult to apply them to other fashion cases. In this paper, we propose an interface that provides fashion professionals with objective support which can aid in making more generalizable decisions in fashion analysis. Through interviews and interactions with fashion professionals, we identified quantitative style classification criteria and analysis requirements in decision making. Based on these design guidelines, we introduce FashionQ (Fashion Quant), which provides three main features: quantitative style clustering (FashionQStyle), style trend analysis (FashionQTrend), and style comparison analysis (FashionQMap). Professionals evaluated FashionQ positively, demonstrating its usefulness and feasibility for future fashion analysis.
A lack of trust is a major barrier to the adoption of Automated Vehicles (AVs). Given the ties between expectation and trust, this study employs expectation-confirmation theory to investigate trust in AVs. An online survey was used to collect data, including expectation, perceived performance, and trust in AVs, from 443 participants representative of the U.S. driver population. Using polynomial regression and response surface methodology, we found that higher trust is engendered when perceived performance exceeds expectation, and that perceived risk can moderate the relationship between expectation confirmation and trust in AVs. The results have important theoretical and practical implications.
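To illustrate the analysis style named above, here is a minimal sketch of fitting a quadratic response surface of trust over expectation (E) and perceived performance (P). All data and coefficients are synthetic and purely illustrative; they are not the study's data or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
E = rng.uniform(1, 7, n)  # expectation (synthetic 7-point scale)
P = rng.uniform(1, 7, n)  # perceived performance (synthetic)
# Toy ground truth: trust rises with positive disconfirmation (P > E).
trust = 2.0 + 0.8 * (P - E) + 0.3 * P + rng.normal(0, 0.1, n)

# Quadratic response surface: trust ~ b0 + b1*E + b2*P + b3*E^2 + b4*E*P + b5*P^2
D = np.column_stack([np.ones_like(E), E, P, E**2, E * P, P**2])
beta, *_ = np.linalg.lstsq(D, trust, rcond=None)

def predict(e, p):
    return float(beta @ np.array([1.0, e, p, e**2, e * p, p**2]))
```

Response surface methodology then examines the fitted surface along the confirmation line (P = E) and the disconfirmation line (P = -E); in this toy fit, a point with performance above expectation yields higher predicted trust than its mirror image.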
In an Experience Sampling Method (ESM) based emotion self-report collection study, engaging participants for a long period is challenging due to the repetitiveness of answering self-report probes. This often impacts self-report collection, as participants drop out midway or respond arbitrarily. Self-reflection (commonly understood as analyzing past activities to operate more efficiently in the future) has been used effectively to engage participants in logging physical, behavioral, or psychological data in Quantified Self (QS) studies. This motivates us to apply self-reflection to improve the emotion self-report collection procedure. We design, develop, and deploy a self-reflection interface and augment it with a smartphone keyboard-based emotion self-report collection application. The interface provides feedback to users regarding the relation between typing behavior and self-reported emotions. We validate the proposed approach using a between-subject study, where one group (control group) is not exposed to the self-reflection interface and the other group (study group) is exposed to it. Our initial results demonstrate that with self-reflection it is possible to engage participants in the long term and collect more self-reports.
Media assets, such as overlay graphics or comments, can make video streaming a unique and engaging experience. Appropriately managing media assets during live streaming, however, is still difficult for streamers who work alone or in small groups. With the aim of easing the management of such assets, we analyzed existing live production tools and designed four low-fidelity prototypes, which eventually led to two high-fidelity ones, based on feedback from users and designers. The results of a usability test, using fully interactive prototypes, suggested that a controller and predefined media object behavior were useful for managing objects. The findings from this preliminary work help us design a prototype that helps users to stream rich media presentations.
This paper summarizes findings from a qualitative research effort aimed at understanding how various stakeholders characterize the problem of Explainable Artificial Intelligence (Explainable AI or XAI). During a nine-month period, the author conducted 40 interviews and 2 focus groups. An analysis of the data gathered led to two significant initial findings: (1) current discourse on Explainable AI is hindered by a lack of consistent terminology; and (2) there are multiple distinct use cases for Explainable AI, including debugging models, understanding bias, and building trust. These use cases assume different user personas, will likely require different explanation strategies, and are not evenly addressed by current XAI tools. This stakeholder research supports a broad characterization of the problem of Explainable AI and can provide important context to inform the design of future capabilities.
Interest in building and operating climate services is growing globally, especially in utilizing mobile technologies in the Global South to reach rural farmers effectively. However, multiple issues currently prevent the design and development of these services from reaching their full impact.
Based on a recent field study, we present the criteria for a mobile climate service app for small-scale farmers in Namibia. The app is based on the KaiOS "smart feature phone" platform and holistically combines climate, weather, and agricultural information in a form that guides the farmer through the agricultural cycle. The app is currently in its early development phase, with a set of pre-selected features being built. Further field work with the local farmers will define the final set of features and the eventual user interface (UI) design of the app.
Learning by teaching is an established pedagogical technique; however, the exact process through which learning happens remains difficult to assess, in part due to the variability in the tutor-tutee pairing and interaction. Prior research proposed the use of teachable agents acting as students, in order to facilitate more controlled studies of the learning by teaching phenomenon. In this work, we introduce a learning by teaching platform, Curiosity Notebook, which allows students to work individually or in groups to teach a conversational agent a classification task in a variety of subject topics. We conducted a 4-week exploratory study with 12 fourth and fifth grade elementary school children, who taught a conversational robot how to classify animals, rocks/minerals and paintings. This paper outlines the architecture of our system, describes the lessons learned from the study, and contributes design considerations on how to design conversational agents and applications for learning by teaching scenarios.
Nursing students learn a variety of skills to work in a clinic, such as dealing with patients with particular requirements, handling expensive equipment, and assisting doctors with treatments. However, a specific situation may require hands-on experience that cannot be easily conveyed through a textbook or a video, such as caring for a schizophrenic patient. Simulation has been considered as an effective method to replace observation-based clinical placement to overcome safety issues. In this paper, we investigate the possibility of employing a virtual reality (VR) learning platform for nursing students to learn how to care for schizophrenic patients. Using 360-degree video and a head-mounted display (HMD), students experienced virtual patients who have schizophrenia portrayed by professional actors. Our key contribution is in the insights about the design of educational VR applications, highlighting the potential value of VR for training students with non-technical backgrounds.
Despite being accessible and affordable, online education presents numerous challenges for online learners due to the absence of face-to-face interactions. Lack of community-belongingness, in particular, negatively impacts online learners' learning outcomes and learning experience. To help online learners build communities and foster connections with their peers, we designed and deployed Jill Watson SA (SA stands for Social Agent). Jill Watson SA is a virtual agent that can match students with shared identities, defined by similarities in location, timezone, hobbies, class schedule, etc., on the Piazza class discussion forum. We implemented Jill Watson SA in two online classes and conducted three short surveys with online students to evaluate it.
Emotion regulation (ER) is foundational to mental health and well-being. In the last ten years, there has been an increasing focus on the use of interactive technologies to support ER training in a variety of contexts. However, this work has been done by researchers from diverse fields, and no cohesive research agenda exists that explicates how and why interactive technologies may benefit ER training. To address this gap, this paper presents the initial results of a descriptive review of 38 peer-reviewed papers on this topic. Qualitative analysis revealed four opportunity themes where interactive technologies appear to provide unique benefits. The analysis also revealed three challenge themes where design guidance, particularly around emotion representation, is ambiguous or underspecified. Based on our findings, we propose future research in these thematic areas; we also propose intersectional themes and underexplored areas that researchers and designers may find productive to explore.
Science-oriented television and video programming can be an important source of science learning for young children. However, the educational benefits of television have long been limited by children not being able to interact with the content in a contingent way. This project leverages an intelligent conversational agent, an on-screen character capable of verbal interaction, to add social contingency to children's experience of watching science videos. This conversational agent has been developed in an iterative process and embedded in a new PBS KIDS science show, "Elinor Wonders Why." This Late-Breaking Work presents the design of the conversational agent and reports findings from a field study that has demonstrated the feasibility of this approach. We also discuss our planned future work to examine the agent's effectiveness in enhancing children's engagement and learning.
APIs have been recognized in the CHI community and beyond as designed objects worthy of usability analysis. Some work in this vein has investigated the learnability of APIs in particular. Drawing on activity theory, we argue that APIs can also potentially have broader learning consequences for their users. By mediating interactions with code, APIs can shape their users' understanding of computing problems. We thus suggest that, by envisioning high-level API affordances as scaffolds, API designers can not only enhance users' productivity, but they can also help drive adoption of software components attempting to radically innovate on their past inheritance. We propose scaffold design recommendations that can augment existing API usability frameworks.
Much research has sought to provide a flow experience for students in gamified educational systems to increase motivation and engagement. However, there is still a lack of quantitative research evaluating the influence of the flow state on learning outcomes. One issue with flow experience identification is that the techniques used are often invasive or unsuitable for large-scale applications. The current paper suggests a way to deal with this challenge. We describe a methodology based on multimodal learning analytics, aimed at automatically identifying students' flow experience in gamified assignments and measuring its influence on learning outcomes. Applying the developed methodology showed that there are correlations between learning outcomes and the flow state, but they depend on the initial level of the user. This finding suggests adding dynamic difficulty adjustment and other flow experience dimensions to the gamified assignments.
Collaborative coding offers many benefits to students, but there has been little research on evaluating the applications that students use to collaborate on code. In this preliminary work, we ask "are students' needs being met by existing applications for collaborative coding?" A survey was distributed to students and faculty of computer science to determine if students had experience collaborating on programming projects and identify what applications, if any, they used to facilitate their collaborations. Survey respondents were also asked about the strengths and weaknesses of the applications they used. From the 126 student responses and 23 faculty responses representing 31 unique institutions, over 50 applications were mentioned. We manually clustered the applications based on their affordances and used participant responses to identify opportunities for improvement. We found that many students are retrofitting non-coding applications for their programming projects as a workaround to facing the large learning curves that many collaborative coding tools require. Our findings suggest a need for more novice-friendly collaborative tools.
Probabilistic thinking has been one of the most powerful ideas in the history of science, and it is rapidly gaining even more relevance as it lies at the core of artificial intelligence (AI) systems and machine learning (ML) algorithms that are increasingly pervading our everyday lives. In this paper, we introduce Let's Chance, a novel computational microworld that extends the widely popular Scratch Programming Language with new types of code blocks and representations that make it accessible for children to encounter and tinker with the rich ideas and sophisticated concepts of probabilistic modeling and learning. Using the tool, children can imagine and code their own expressive, playful, and personally meaningful probabilistic projects, such as generative art, music, or text; chance-based games and stories; interactive visualizations; and even advanced projects for making a computer learn from input data using simple Markov models of probabilistic learning, among many other creative possibilities.
We examine the experience of students who used ViewPoint to participate in a technology-supported, role-based simulation in a large university course, where graduate students designed and built a simulation about the college admissions process for undergraduate students. We focus on the user experience with ViewPoint — a web application to author, structure, and manage role-based simulations. Users noted four ways that ViewPoint supported their experience: it provided convenient and equitable access to resources, facilitated communication among roles, created focus through its self-contained environment, and mirrored real-world tasks through its interface. Users noted two dimensions to consider for future support of role-based simulations: maintaining the conceit of the simulation narrative and creating awareness of auxiliary information streams.
Rigorous blood glucose management is vital for individuals with diabetes to prevent dangerously low blood glucose levels (hypoglycemia). While continuous glucose monitors are available, they are expensive and inaccessible to many patients. Related work suggests a correlation between blood glucose level and physiological measures such as heart rate variability. We therefore propose a machine learning model to detect hypoglycemia on the basis of smartwatch sensor data gathered in a proof-of-concept study. In future work, we want to integrate our model into wearables and warn individuals with diabetes of possible hypoglycemia. However, presenting the detection output alone might confuse a patient, especially if it is a false positive. We thus use SHAP (SHapley Additive exPlanations) values for feature attribution and a method for subsequently explaining the model decision in a comprehensible way on smartwatches.
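The SHAP values the abstract relies on are Shapley-value feature attributions: each feature's contribution is its average marginal effect over all coalitions of the other features. As a self-contained illustration (not the authors' model and not the shap library), the exact Shapley attribution for a toy linear risk score over three hypothetical smartwatch features can be computed by enumerating coalitions:

```python
from itertools import combinations
from math import factorial

def shap_values(f, x, baseline):
    """Exact Shapley attribution for model f over len(x) features.
    Features absent from a coalition are set to their baseline value."""
    n = len(x)
    def value(S):
        z = [x[j] if j in S else baseline[j] for j in range(n)]
        return f(z)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical linear risk score over (HRV drop, heart rate, skin temperature)
risk = lambda z: 0.5 * z[0] + 0.3 * z[1] + 0.2 * z[2]
print(shap_values(risk, x=[2.0, 1.0, 0.0], baseline=[0.0, 0.0, 0.0]))
```

For a linear model this recovers coefficient times deviation from baseline, and the attributions sum to the difference between the model output at `x` and at the baseline; that additivity is what makes the per-feature explanation on a smartwatch face coherent.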
Time, and the lack of it, plays a central role in our everyday lives. Despite increasing productivity, many people experience "time stress," exhaustion, and a longing for time affluence, and at the same time a fear of not being busy enough. All this leads to a neglect of natural time, especially the patterns and rhythms created by physiological processes, subsumed under the heading of chronobiology. The present paper presents and evaluates a calendar application that uses chronobiological knowledge to support people's planning activities. Participants found our calendar interesting and engaging. It especially made them think more about their bodies and about appropriate times for particular activities. All in all, it supported participants in negotiating external demands with personal health and wellbeing. This shows that technology does not have to be neutral or even further current (mal)practices. Our calendar is about changing perspectives and thus about enhancing users' wellbeing.
We previously developed a tablet-based speech therapy game called Apraxia World to address barriers to treatment and increase child motivation during therapy. In this study, we examined pronunciation improvements, child engagement over time, and caregiver evaluation performance while using our game. We recruited ten children to play Apraxia World at home during two four-week treatment blocks, separated by a two-week break; nine of the ten had completed the protocol at the time of writing. In the treatment blocks, children's utterances were evaluated either by caregivers or by an automated pronunciation framework. Preliminary analysis suggests that children made significant therapy gains with Apraxia World, even though caregivers evaluated pronunciation leniently. We also collected a corpus of child speech for offline examination. We will conduct additional analysis once all participants complete the protocol.
Health dialog collection is the primary bottleneck for the training and deployment of conversational agents into clinical practice. Current tools for the development of dialog systems are primarily focused on writing intent-slot schemas for natural language understanding and finite-state models of dialog management. However, this is a time-consuming process that can be opaque for clinical teams, and the limitations of both approaches are well understood. The health domain presents additional barriers, including access to patients and concerns about exposing personal health information. The contribution of this work is two-fold. First, it describes an interface designed to augment the ability of clinicians to efficiently engage in high-quality and empathetic health counseling dialog with their patients. Second, it presents a Wizard-of-Oz (WOZ) protocol for collecting this dialog using standardized patient actors, each playing a role from a pool of caregiver personas.
Mental health conditions pose a major challenge to healthcare providers and society at large. Early intervention can have a significant positive impact on a person's prognosis, and is particularly important for improving mental health outcomes and functioning in young people. Virtual Reality (VR) in mental health is an emerging and innovative field. Recent studies support the use of VR technology in the treatment of anxiety, phobia, eating disorders, addiction, and pain management. However, there is little research on using VR to support the treatment and prevention of depression, a field that is very much emerging. Very little work has been done on offering individualised VR experiences to users with mental health issues. This paper proposes iVR, a novel individualised VR system for improving users' self-compassion and, in the long run, their positive mental health. We describe the concept, design, architecture, and implementation of iVR and outline future work. We believe this contribution will pave the way for large-scale efficacy testing, clinical use, and potentially cost-effective delivery of VR technology for mental health therapy in the future.
In the emerging maker movement, clinicians have long played an advisory role in the development of customized assistive technology (AT). Recently, there has been a growing interest in including clinicians as builders of Do-It-Yourself (DIY) AT. To identify the needs of clinicians-as-makers, we investigated the challenges that clinicians faced as they volunteered in an AT building project where they were the primary designers and builders of assistive mobility devices for children. Through observation and co-building of modified ride-on toy cars with clinicians, we found that the rapid pace of development and transient relationship between user and builder did not allow for a complete assessment of the child's mobility. Furthermore, clinicians struggled to actualize concepts borne out of their clinical intent due to a lack of engineering skill. This study highlights the need for tools that support clinicians-as-makers in the AT maker process and a new conceptualization of the role of DIY-AT maker programs within the AT provider ecosystem.
We explore factors contributing to poor mental healthcare, treatment, and help-seeking behaviors among communities in Nigeria and across Africa. Findings from interviews with 25 stakeholders reveal socio-cultural factors, such as negative perceptions, stigmatization, religious beliefs, and the absence of automated supports, which hinder mental healthcare and help-seeking. Delays in seeking appropriate medical attention and the intake of untested local herbs can lead to severe depressive symptoms and suicidal risks, and adversely affect clients' mental health. Based on our findings, and in collaboration with the stakeholders, we designed "Gwam-Okwu" [Talk to Me], a culturally appropriate interactive app that is hyper-localized, safe, and secure, and tailored to support communication and collaboration between health workers and clients/relations, personalized self-monitoring, and guided self-learning for clients.
Peripheral Neuropathy (PN) is a condition which causes diminished and potentially lost sensation in the extremities of the body, typically affecting diabetics and the elderly. We present PaNDa-Glove (Peripheral Neuropathy Displacement Glove), an arm-mounted device which displaces tactile sensation in the fingertips to the forearm, and substitutes thermosensitivity of the hand with vibrotactile and audio feedback. We hypothesize PaNDa-Glove will help patients with PN better recognise the tightness of their grip, and reduce the frequency of burns to the hand. A preliminary quantitative experiment with healthy users strongly suggests that PaNDa-Glove enhances the sensitivity of grip, and an informal qualitative study suggests that the substituted feedback is clear, distinguishable and comfortable.
Some have argued that human skills and abilities are essential to effective health coaching and cannot be replicated by conversational agents. We sought to understand the differences in interaction patterns between two groups of participants. In one group, participants messaged with a Wizard-of-Oz (WOz) conversational health coach. In the other group, participants messaged with an actual health coach who was given the same script as the agent but encouraged to deviate from it when necessary. We found that conversational patterns differed between the groups, with longer conversations in the human coach group. However, participants were not more likely to respond to messages from the human coach than from the WOz agent, and were more likely to proactively message the WOz agent than the human coach. We discuss implications for the design of conversational health coaches.
This exploratory research examines how we might nudge consumers towards making healthier food choices in online grocery shopping or other digitally mediated food consumption contexts. Our pilot study investigated how different forms of social comparisons could be used to encourage consumers to reduce the number of calories contained in their online grocery basket. Our findings show that participants who were less interested in trying new diets were more willing to reduce calories when presented with a comparison to people unlike them, an out-group member comparison, while those who were interested in trying new diets were more willing to reduce calories regardless of social comparison type. These findings imply that one size does not fit all when nudging. More research is needed to see how social comparisons influence the effectiveness of digital health behavior projects.
Humanoid robots, through their embodied features and range of interactivity, are proving effective as service and information-disseminating agents. However, in the Australian context, the deployment and evaluation of robots in public spaces is limited. In this study, we report on an observation-based exploratory study of university students' interaction with the Pepper humanoid robot over eight days in the library of an Australian university. The students' first impressions of Pepper, elicited through top-of-mind association, showed that they were in general wary and scared of service robots. Many considered Pepper creepy. Their positive remarks related to the novelty of Pepper's features and technology. In conclusion, we speculate on the results obtained and what they mean for the integration of humanoid robots into mainstream Australian society.
Quantitative methods are becoming more common for persona creation, but it is not clear to what extent online data and opaque machine learning algorithms introduce bias at various steps of data-driven persona creation (DDPC), or whether these methods violate user rights. In this conceptual analysis, we use Gillespie's framework of algorithmic ethics to analyze DDPC for ethical considerations. We propose five design questions for evaluating the ethics of DDPC. DDPC should demonstrate the diversity of the user base while representing the actual data, be accompanied by explanations of how the personas were created, and mitigate the possibility of unfair decisions.
We present a case study of multimodal interaction design for public window displays. Using a classic fairytale as the theme story, we implemented a prototype system that integrates mobile, gesture, tangible, touchscreen, and puppet interfaces. Preliminary field deployment results demonstrate that our interactive window is well received, with significantly extended user interaction time. We conclude with a discussion of lessons learned and potential new research problems for interactive public window design. We believe our findings are useful for the future design of interactive shop windows, theater showcases, and exhibition displays.
Virtual Qwerty is the most popular method of text entry in virtual reality. Since virtual keyboards are not constrained by the physical limitations of actual keyboards, designers are taking the liberty of designing novel keys for them. However, it is unknown whether key design affects text entry performance or user experience. This work presents the results of a user study that investigated the effects of different key shapes and dimensions on text entry performance and user experience. Results revealed that key shape affects text entry speed, dimension affects accuracy, and both affect user experience. Overall, square-shaped 3D keys yielded the best actual and perceived performance and were also the most preferred by users.
Play and playfulness permeate our daily lives and are often the target of interaction designers. Yet, designing for play while embracing the idiosyncrasies of users and their contexts is challenging. Here we address recent calls for new situated and emergent play design methods by turning to social media, which is currently a source of inspiration for arts, crafts, fashion, and more. We present @chasing.play: an exploration of how Instagram may help designers capture and share instances of mundane playful engagement to inspire play design. We report on the findings of a pilot study where we experimented with the tool, and raise challenges and open questions we plan to address in the future. Our work can trigger discussions among researchers about the potential of social media as a design tool and inspire action towards collectively defining strategies to leverage that potential.
Design coursework is iterative and continuously evolving. The separation among the digital tools used in design courses detracts from instructors' and students' experience of the iterative process.
We present a system that integrates support for design ideation with a learning analytics dashboard. A preliminary study deployed the system in two courses, each with ~15 students and 1 instructor, for three months. We conducted semi-structured interviews to understand user experiences.
Findings indicate benefits when systems contextualize creative work with assessment by integrating support for ideation with a learning analytics dashboard. Instructors are better able to track students and their work. Students are supported in reflecting on relationships among deliverables. We derive implications for contextualizing design with feedback to support creativity, learning, and teaching.
A "creative city" can promote creativity among its citizens and provide them with fulfilling lived experiences. This concept has captivated city authorities worldwide and motivated numerous works investigating how to make our cities more "creative". We argue that there is a need and an opportunity to design interactive technologies that push the creative city agenda. In this paper, we present WeMonet, a design prototype supporting citizens engaging in participatory street art creation via human-AI collaboration. Citizens' sketches are synthesized, enhanced to be more vivid through machine learning algorithms, and projected on a screen, forming a participatory artwork. WeMonet aims to promote citizens' engagement in creative practices and hence the city's creativity. More broadly, we hope this work can inspire designers to consider the role of interaction design in the creative city agenda.
We describe preliminary explorations with a format to engage young indigenous students from remote communities in design and making. Digital technologies are often seen, not without merit, as reflecting and embodying cultural values that are at odds with indigenous ways of learning. To overcome this, we organised a 'Coding on Country' workshop. The workshop took place outdoors, at a culturally significant location, was embedded within cultural practices as well as other mundane activities, and drew participation from young people as well as from Elders. Key insights are that activities that take place on country are also intrinsically about country and culture. A site of cultural significance can, better than a classroom, draw together younger and older community members, and offer a social centre around which people can gravitate while engaging in different activities. This contributes to reconciling technology and tradition, and offers opportunities to participate on one's own terms.
Contemporary WYSIWYG (what-you-see-is-what-you-get) design software uses digital Artboards: finite graphics frames that sit atop a scrollable, zooming canvas, where many frames can be arbitrarily arranged, scaled, and duplicated to explore and juxtapose design ideas. Although creative practitioners increasingly write code to explore design spaces, programming environments for creative coding typically display only one graphics frame per program. This hinders the non-linear nature of the creative process, where seeing a trail of process work can spark new ideas. We propose bridging this gap and introduce Stamper, an Artboard-oriented authoring environment for the popular creative coding library p5.js.
This paper introduces a nascent project exploring new avenues to support creativity, socialisation, and community through smart interfaces for augmented reality. Augmented reality has so far largely been conceptualised from the point of view of 'power users', seeking to support very specific applications, e.g. in training and simulation. With the availability of devices to the mass market, new applications become possible and new research problems open up. We offer a preliminary framework consisting of two orthogonal continua (virtual-real and human-thing) and two critical perspectives (postphenomenology/posthumanism and cultural interface). With this poster, we hope to stimulate valuable discussion and seek input from the CHI community about the challenges, opportunities, and theoretical perspectives underpinning smart, socialised AR.
Recent studies have articulated that DIY and maker culture enrich practice-led HCI communities. Drawing on the strengths, intrinsic playfulness, and tangibility of maker culture, we took a participatory, design-fiction approach to technology imagination with a local maker community. We made a Speculative Kit composed of four series of catalogs and props of fictional products that embody the knowledge produced by an IoT research center. The kit will be used in situated making activities for technology practitioners to create alternatives for emerging technologies.
Community policing faces a combination of new challenges and opportunities due to both citizens and police adopting new digital technologies. However, there is limited scholarly work providing evidence for how technologies assist citizens' interactions with the police. This paper reports preliminary findings from interviews with 13 participants, both citizens and police officers, in England. We recognize four key types of actors in the current practice of community policing, alongside existing technologies and challenges faced by citizens and the police. We conclude with three design implications for improving citizen-police engagement.
Active language use refers to people's use of a language in their everyday lives and activities. Tangible User Interfaces (TUIs) can facilitate children's participation in language activities such as language learning, communication, storytelling, and social play. However, few TUI projects take the lens of active language use, and exploit the benefits of tangibles for maintaining and revitalising endangered languages. We present the Crocodile Language Friend, co-designed with the Wujal Wujal community, to foster children's use of the Kuku Yalanji Aboriginal language. We contribute a discussion of the ways in which the crocodile's physical characteristics (e.g. size, shape, materials, and personalization) can encourage language use in individual and social activities beyond the affordances of screen-based systems.
4D printing with a hobbyist FDM printer has enabled rapid fabrication and self-assembly of 3D shapes. Researchers have leveraged novel structural designs and material techniques to create a wide range of 4D shapes. Meanwhile, 4D-printing textures (i.e., shape-changing textures) on objects, which can easily augment haptic sensation, has drawn more attention to this field. In this paper, we introduce 4DTexture, a novel design and fabrication approach that integrates texture design into the 4D printing process. Compared to conventionally 3D-printed textured surfaces, which usually require support structures, 4DTexture can effectively reduce production material, time, and post-processing effort. Specifically, 4DTexture prints flat substrates with a customized texture design on their top surfaces, which can easily be triggered to self-morph into target 3D shapes. Overall, our approach enables the design and fabrication of textured 3D surfaces and can be leveraged by designers and researchers in the field of personal fabrication.
This project takes a human-centered design approach to aiding crewed space operations in microgravity. The key element is enhancing the floating experience while enabling humans to adapt to microgravity environments. The metaphor of the undersea world inspired the design of a body extension that can complement the interiors of Zero-G habitats. An analysis of the seahorse's unique tail structure, which provides the animal with movement, gripping, and protection while floating, informed the overall biomimetic design. SpaceHuman is an additive prosthetic that can move around the body to grasp objects and handles in microgravity, protecting the wearer from injuries that might occur while floating in a confined habitat, while providing an adaptable and kinematically stable base. SpaceHuman has been designed through different computational design methods to simulate its behavior in microgravity, and has been worn and tested on a Zero-G flight.
In recent years, the in-mold electronics (IME) technique was introduced as a combination of printing electrically functional materials and vacuum plastic forming. IME has gained significant attention across various industries since it enables diverse electrical functionalities on a wide range of 3D geometries. Although IME shows great application potential, challenges remain at the design-for-manufacturing stage. For example, printed 2D structures experience mechanical bending and stretching during vacuum forming. This makes it hard for designers to ensure precise circuit-to-3D-mold registration or to prevent over-deformation of the circuit and attached components. To this end, we propose a software toolkit that provides real-time 2D-to-3D mapping and guided structural and electrical circuit design through an interactive user interface. We present a novel software-guided IME process that leads to fully functional 3D electronic structures with printed conductive traces and assembled surface-mount components.
When new things, such as buildings or physical products, are designed, the design process typically explores multiple design alternatives and undergoes several iterations. The associated artefacts typically grow from low-fidelity prototypes, such as paper sketches, to high-fidelity prototypes, such as 3D scale models. While previous work has focused on capturing the design rationale behind the decisions that happen during such a design process, this information typically remains secluded and is not easily accessible for the stakeholders. In this paper, we explore how to augment both physical and digital designs with their associated design rationale and decisions. Our exploratory inquiry with three experts in architecture provides qualitative feedback on our augmented reality tool and concepts. We expect that these preliminary results are valuable for future traceability tools for physical and digital designs, even beyond the domain of architecture.
In this paper, we analyzed driver behavior during automated driving in two experimental conditions: a driving simulator (DS) and a Wizard-of-Oz vehicle (WOz). Twenty-nine drivers in the DS condition and nine drivers in the WOz condition performed three different requests to intervene (RTI) during automated driving (AD). Three variables were measured: the number of control checks during AD and non-driving related tasks (NDRT), the reaction time to resume manual control, and the strategy used to recover control. Differences were found concerning road monitoring during NDRT: there were more interruptions in the WOz condition than in the DS condition. Additionally, the strategies used to recover control differed between conditions: the steering wheel and brake pedal were used more often in the WOz condition, while the accelerator was used more often in the DS condition. However, no difference was found concerning reaction time to resume control.
Many non-expert Machine Learning users wish to apply powerful deep learning models to their own domains but encounter hurdles in the opaque model tuning process. We introduce SCRAM, a tool which uses heuristics to detect potential error conditions in model output and suggests actionable steps and best practices to help such users tune their models. Inspired by metaphors from software engineering, SCRAM extends high-level deep learning development tools to interpret model metrics during training and produce human-readable error messages. We validate SCRAM through three author-created example scenarios with image and text datasets, and by collecting informal feedback from ML researchers with teaching experience. We finally reflect upon our feedback for the design of future ML debugging tools.
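The kind of heuristic SCRAM applies, mapping training-metric symptoms to human-readable suggestions, can be sketched as follows. The thresholds and messages here are illustrative assumptions, not SCRAM's actual rules:

```python
def diagnose(train_loss, val_loss):
    """Map common training-curve symptoms to actionable, human-readable
    suggestions (a simplified sketch of heuristic ML error messages)."""
    msgs = []
    # Symptom: training loss never improves over the run
    if train_loss[-1] >= train_loss[0]:
        msgs.append("Training loss is not decreasing: try a lower learning "
                    "rate or check label/input preprocessing.")
    # Symptom: loss diverged to NaN or infinity (x != x detects NaN)
    if any(l != l or l == float("inf") for l in train_loss):
        msgs.append("Loss became NaN/inf: reduce the learning rate or add "
                    "gradient clipping.")
    # Symptom: validation loss rebounds well above its minimum
    if val_loss[-1] > min(val_loss) * 1.2:
        msgs.append("Validation loss is rising after its minimum: likely "
                    "overfitting; try early stopping, dropout, or more data.")
    return msgs

print(diagnose([2.3, 1.1, 0.6], [2.2, 1.0, 1.5]))
```

A tool like this would hook such checks into training callbacks and surface the messages in the development environment, rather than requiring users to read raw metric curves.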
Manuscript speech is a common type of speech in various official events. However, we often observe that many speakers simply read the script without making eye contact, thereby lowering audience engagement. Practice with proper tools could benefit a speaker considerably. In this work, we iteratively designed ScriptFree, an adaptive speech practice environment where off-the-shelf automatic speech recognition (ASR) is leveraged to measure a speaker's preparation level, and accordingly, the script is adaptively compressed to reduce the speaker's visual reliance toward script mastery. The user study results confirmed that ScriptFree helped the participants to successfully improve their speech over multiple practice iterations. The results have significant design implications for building adaptive speech practice systems.
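ScriptFree's adaptive compression idea, hiding words the speaker already recalls well as the preparation level rises, might be sketched like this. The per-word mastery scores (e.g., estimated from ASR matches against the script), the threshold rule, and the masking style are all hypothetical, not the authors' algorithm:

```python
def compress_script(script, mastery, level):
    """Adaptively compress a speech script: words with high mastery are
    masked, and a higher preparation level masks more of them.
    mastery: per-word recall score in [0, 1]; level: preparation in [0, 1]."""
    threshold = 1.0 - level          # higher preparation -> lower bar to hide
    return " ".join(w if m < threshold else "_" * len(w)
                    for w, m in zip(script.split(), mastery))

script = "thank you all for coming today"
mastery = [0.9, 0.8, 0.2, 0.3, 0.95, 0.4]
print(compress_script(script, mastery, level=0.5))
```

As practice iterations raise both mastery scores and the overall level, the displayed script progressively fades toward blanks, nudging the speaker away from reading verbatim.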
In general, a system with touch input waits for a certain period of time (typically 350–500 ms) for a subsequent tap to determine whether the initial tap was a single tap or the first tap of a double tap. This results in latency of hundreds of milliseconds for a single-tap event. To reduce this latency, we propose a novel machine-learning-based tap recognition method called "PredicTaps". Using touch-event data gathered from the capacitive touch surface, the system immediately predicts whether a detected tap is a single tap or the first tap of a double tap. Then, in accordance with the prediction, the system determines whether to execute a single-tap event immediately or wait for a subsequent second tap. This paper reports a feasibility study of PredicTaps.
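The dispatch logic PredicTaps enables can be sketched as follows. PredicTaps itself trains a model on capacitive touch-event data; the hand-tuned features and thresholds below are purely illustrative stand-ins for that learned classifier:

```python
def classify_tap(duration_ms, travel_px, pressure):
    """Predict whether a detected tap is a single tap or the first tap of a
    double tap. Quick, light taps with little finger travel are assumed
    (hypothetically) to be more likely to precede a second tap."""
    score = 0.0
    if duration_ms < 80:
        score += 0.4
    if travel_px < 3:
        score += 0.3
    if pressure < 0.5:
        score += 0.3
    return "first_of_double" if score >= 0.7 else "single"

def dispatch(tap_features):
    """Fire the single-tap event immediately, or hold for a second tap,
    instead of always waiting out the 350-500 ms double-tap timeout."""
    if classify_tap(*tap_features) == "single":
        return "single_tap_event_now"
    return "wait_for_second_tap"

print(dispatch((120, 5, 0.8)))  # long, heavy tap -> fire immediately
```

The latency win comes from the first branch: taps classified as single skip the timeout entirely, while only predicted double-tap openers pay the waiting cost.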
This paper proposes a novel interaction via electrical stimulation of the salivary glands. The salivary glands are among the human effector organs, along with the muscles and other secretory glands. A method for inducing and/or promoting saliva secretion would benefit the experience and emotion of eating, drinking, and speaking. Hence, this paper first reviews previous studies and the limitations of conventional technologies, second reports our novel percutaneous electrical stimulation method to promote saliva secretion, and third discusses the potential contribution of saliva-secretion-promoting technologies to the interaction, psychology, and health areas.
Interaction with distant displays often demands complex, multi-modal input, which needs to be achieved with a very simple hardware solution so that users can perform rich input wherever they encounter a distant display. We present Simo, a novel approach that transforms a regular smartphone into a highly expressive user motion tracking device and controller for distant displays. Both the front and back cameras of the smartphone are used simultaneously to track the user's hand as well as head and body movements in real-world space and scale. In this work, we first define the possibilities for simultaneous face- and world-tracking using current off-the-shelf smartphones. Next, we present the implementation of a smartphone app enabling hand, head, and body motion tracking. Finally, we present a technical analysis outlining the possible tracking range.
The implementation of adaptive applications for navigation and ubiquitous interfaces for accessibility marks a new era in spatial computing, especially for solutions that react to the environment and deliver navigation capabilities with high accuracy and utility for users. Key real-time information and awareness are presented in augmented reality (AR) using interactive visual elements and auditory instructions. This paper describes a solution running on smartphones that uses spatial computing capabilities for positional determination, tracking the user's movements through the environment. Besides its user-friendly, minimalistic visual interface, the system can also guide a completely blind person from a start position to a given destination while avoiding obstacles. Communication is completely independent and uses voice recognition for accessibility.
In the golf swing, the trajectory and face angle (the angle of the club head) of the golf club greatly affect the direction and trajectory of the ball. Although beginners receive golf-swing instruction from instructors or from consumer devices specialized in swing training, it has been difficult to correct the club's posture during the swing itself: users need time to understand and act on the instruction, and such instruction cannot convey detailed information. We therefore propose a golf-club-shaped device with an ungrounded force-feedback module that corrects the club's posture during the swing. In this paper, we implemented a prototype of the proposed device and an application that uses it. We evaluated the prototype's performance, and the results suggest that it generates sufficient torque to move a golf club grasped by the user.
Free-hand manipulation in smartphone augmented reality (AR) enables users to directly interact with virtual content using their hands. However, human hands can ergonomically move in a broader range than a smartphone's field of view (FOV) can capture, requiring users to be aware of the limited usable interaction and viewing regions at all times. We present Portalware, a smartphone-wearable dual-display system that expands the usable interaction region for free-hand manipulation and enables users to receive visual feedback outside the smartphone's view. The wearable is a lightweight, low-cost display that shares the same AR environment in real time with the smartphone. This setup empowers AR applications such as mid-air drawing and object manipulation by providing a 180-degree horizontal interaction region in front of the user. Other potential applications include wearing the smartphone like a pendant while using Portalware to continue interacting with AR objects. Without having to hold their phone up, users can rest their arms as needed. Finally, we discuss usability explorations, potential interactions, and future plans for empirical studies.
Storytelling is an important means for children to build literacy while sharing beliefs, values, and cultural norms. Our work investigates how augmented reality (AR) can fit into creative storytelling practices. We introduce Living Paper, a system for authoring AR narratives that span both digital and tangible media. Our augmented storybook prototype integrates animated AR characters from hand drawings with programmable LED lights that shine through the pages. Living Paper combines the flexibility of digital objects with the tangibility of physical cues to enable the creation of immersive and shareable stories.
In this paper, we report the results of a Hybrid Wizard of Oz experiment consisting of a critical-incident interview and an in-situ simulation. The study aimed to validate the need for a contextualised and personalised Point-of-Interest (POI) recommender and to understand the detailed user needs for one. Our key findings include: feeling bored as a key trigger to search for POIs, trust issues with existing recommendation sources, intent to find free activities, information needs on areas-of-interest beyond points-of-interest, support for socialising, and language barriers. With this study, we also exemplify a cost- and time-effective approach to the design of intelligent systems.
The major sports leagues, including the National Basketball Association (NBA) and the English Premier League (EPL), are adopting conversational systems (chatbots) as an innovative outlet to deliver game information and engage fans. However, current sports chatbots only provide scores and game highlight videos, which are often inadequate for requests involving statistical data. We present GameBot, an interactive chatbot that lets sports fans explore game statistics. GameBot features (1) direct answers to users' stats-related questions and (2) data visualizations as supporting context for those questions.
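The core of such a stats bot can be approximated as intent matching over a stats table. The sketch below is purely illustrative: the regex pattern, player names, and numbers are all invented, and a production chatbot like GameBot would presumably use a full NLU pipeline and a live sports database rather than one regular expression.

```python
import re

# Hypothetical stats table; a real system would query a live sports database.
STATS = {("lebron james", "points"): 25, ("lebron james", "assists"): 10}

def answer(question):
    """Match a stats question and look up the value; return None if unrecognized."""
    m = re.search(r"how many (\w+) did ([\w ]+?) (?:score|have|get)", question.lower())
    if not m:
        return None
    stat, player = m.group(1), m.group(2)
    value = STATS.get((player, stat))
    return f"{player.title()} had {value} {stat}." if value is not None else None
```

Returning None for unmatched questions is where a real system would fall back to scores or highlight videos, the behavior the abstract says current chatbots are limited to.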
To improve putting performance, we designed a weight-switching device that can provide various switching angles. This paper proposes a system that changes a putter's center of mass by switching a weight to different positions around the putter head. The proposed system starts switching the weight at the beginning of the downswing and finishes before the putter hits the ball. To verify the system's effectiveness, we conducted a user study examining whether the putter's face-to-path angle changed when the weight was switched to different positions. In the user study, each participant performed putts with different switching angles, and our analysis focused on the differences in face-to-path angle among them. The study showed that the proposed system effectively changes the face-to-path angle when the weight is switched away from the putter's shaft. Based on these experimental results, the proposed system can affect the face-to-path angle dynamically in real time.
AI models and services are used in a growing number of high-stakes areas, resulting in a need for increased transparency. Consistent with this, several proposals for higher-quality and more consistent documentation of AI data, models, and systems have emerged. Little is known, however, about the needs of those who would produce or consume these new forms of documentation. Through semi-structured developer interviews and two document-creation exercises, we assembled a clearer picture of these needs and of the various challenges faced in creating accurate and useful AI documentation. Based on the observations from this work, supplemented by feedback received during multiple design explorations and stakeholder conversations, we make recommendations for easing the collection and flexible presentation of AI facts to promote transparency.
The competitiveness of the video game market has increased the need for understanding players. We generate player personas from survey data of 15,402 players' 195,158 stated game preferences from 130,495 game titles using the methodology of automatic persona generation. Our purpose is to demonstrate the potential of data-driven personas for segmenting players by their game preferences. The resulting prototype personas provide potential value for game marketing purposes, e.g., targeting gamers with social media advertising, although they can also be used for understanding demographic variation among various game preference patterns.
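As a rough illustration of how data-driven personas can be derived from stated preferences, the toy sketch below segments players by their dominant genre and summarizes each segment. This is a drastic simplification of the automatic persona generation methodology the abstract references; the function and data structures are invented for illustration only.

```python
from collections import Counter, defaultdict

def generate_personas(preferences, n_top=2):
    """Group players by their highest-rated genre and summarize each segment.

    `preferences` maps player -> {genre: rating}. A hypothetical toy stand-in
    for clustering 15k survey responses into prototype personas.
    """
    segments = defaultdict(list)
    for player, ratings in preferences.items():
        dominant = max(ratings, key=ratings.get)  # genre with highest stated preference
        segments[dominant].append(player)
    personas = {}
    for genre, players in segments.items():
        pooled = Counter()  # aggregate the segment's preference profile
        for p in players:
            pooled.update(preferences[p])
        personas[genre] = {
            "size": len(players),
            "top_genres": [g for g, _ in pooled.most_common(n_top)],
        }
    return personas
```

Each resulting segment (size plus top preferences) corresponds loosely to a prototype persona that marketers could target, e.g., with social media advertising.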
Explainable Artificial Intelligence (XAI) is an emerging topic in Machine Learning (ML) that aims to give humans visibility into how AI systems make decisions. XAI is increasingly important in bringing transparency to fields such as medicine and criminal justice, where AI informs high-consequence decisions. While many XAI techniques have been proposed, few have been evaluated beyond anecdotal evidence. Our research offers a novel approach to assessing how humans interpret AI explanations: we integrate XAI with Games with a Purpose (GWAP). XAI requires human evaluation at scale, and GWAP can provide it by presenting XAI tasks through rounds of play. This paper outlines the benefits of GWAP for XAI and demonstrates their application through our creation of a multi-player GWAP focused on explaining deep-learning models trained for image recognition. Through our game, we seek to understand how humans select and interpret explanations used in image-recognition systems, and to bring empirical evidence on the validity of GWAP designs for XAI.
Failure, often represented through death, is a central aspect of almost every video game. In-game death can drive player perceptions of difficulty and greatly impact the core player experience; however, there is surprisingly little research examining how games actually handle this occurrence. We posit that this is a rich, underexplored space with significant implications for player experience and the design of many games. This paper presents our initial exploration of the space of player death and rebirth through a generalized taxonomy developed from 62 recent platformer games. Our taxonomy consists of five key dimensions: 1) obstacles, 2) death conditions, 3) aesthetics, 4) changes to player progress, and 5) respawn locations.
Extended screen time can have negative health effects in both children and adults. With the advent of motion-detection sensors and other novel interaction methods, it is possible to envision digital games that move away from screen-centered design. However, such interaction is still underrepresented in both the mainstream and academia. To address this issue, this paper proposes a screenless dance game, called Jingle Jigsaw, that encourages players to physically explore the space around them using spatial tracking and audio feedback. Our preliminary usability evaluation suggests that such interaction is perceived as enjoyable by users and points to promising future work.
Ingestible sensors are pill-like digital sensors performing sensing functions inside the human body. Such technology is becoming increasingly common in clinical uses. However, we believe there exists an opportunity to also investigate ingestible sensors as design material for bodily play to facilitate intriguing bodily experiences. This argument is inspired by a long history of utilizing the intersection of medical technologies and play to bring about intriguing bodily experiences. By designing and investigating the user experience of three playful systems around ingestible sensors, we articulate a preliminary framework showing how ingestible sensors can be used as design material to support the design of playful bodily experiences.
Given that an increasing number of people develop poor eating habits, encouraging people to eat healthily is important. One way to motivate healthy eating is gamification, i.e., using game elements in a non-game context. Often, a static set of gamification elements is used; however, research suggests that the motivational impact of gamification elements differs substantially across users, demanding personalized approaches. In this paper, we contribute to this line of work by investigating the perception of frequently used gamification elements in the healthy-eating domain and their correlations with Hexad user types in an online study (N=237). To do so, we created storyboards illustrating these gamification elements and validated their comprehensibility in a lab study (N=8). Our results validate and extend previous research in the healthy-eating domain, underline the need for personalization, and could inform the design of gamified systems for healthy eating.
To provide universal accessibility, public community spaces such as museums must be designed considering the experience of all patrons, including visitors living with Autism Spectrum Disorder. To develop a better understanding of the experience of visitors with autism at the Canada Science and Technology Museum, we invited four school children and one adult male for a visit, all of whom identified as being on the spectrum. They were joined by their support persons. We interviewed the adult, his caregiver, and the teaching staff accompanying the school children. We analyzed our interviews and observation notes using thematic analysis to formulate key findings and suggestions for enhancing the experience of autistic visitors. These include adding elements at a variety of developmental levels, offering options to reduce sensory stimulation, improving navigational resources, and providing more resources for support persons.
Computing impacts almost every aspect of modern life. In this current wave of computing, designers strive for ubiquitous use across contexts, platforms, and users. Yet there is still much to be explored in the area of accessible technologies for collaborative work. Assistive technologies are often designed for single use or single users, yet interoperability between assistive technologies and computer-supported collaborative systems is essential to co-located work that inherently involves interdependencies between coworkers. The interoperability of collaborative workplace systems with assistive technologies for deaf-blind users is limited to screen readers used with refreshable braille displays. How can a deaf-blind person collaborate with coworkers in real time? In this work, we explore the socio-technical challenges of a work environment that depends on collaborative and assistive technologies. We observed an Information Systems and Technology (IS&T) department at a university comprising 65 employees, one of whom is a deaf-blind programmer analyst. Through Contextual Inquiry, our observations revealed communication tensions. We present three Wizard of Oz prototypes to facilitate communication between the assistive-technology user and the other employees in the department, and we discuss the socio-technical features woven through the prototypes. Future work is also discussed.
Visual attention guides the integration of two streams: the global, which rapidly processes the scene, and the local, which processes details. For people with autism, the integration of these two streams can be disrupted by a tendency to privilege details (local processing) over the big picture (global processing). Consequently, people with autism may struggle with typical visual attention, as evidenced by their verbal descriptions of local features when asked to describe overall scenes. This paper explores how one adult with autism sees and understands the global filter of natural scenes.
Blind people face various local navigation challenges in their daily lives, such as identifying empty seats in crowded stations, navigating toward a seat, and stopping and sitting at the correct spot. Although voice navigation is a commonly used solution, it requires users to carefully follow frequent navigational sounds over short distances. Therefore, we present an assistive robot, BlindPilot, which guides blind users to landmark objects using an intuitive handle. BlindPilot employs an RGB-D camera to detect the positions of target objects and uses LiDAR to build a 2D map of the surrounding area. On the basis of the sensing results, BlindPilot then generates a path to the object and guides the user safely. To evaluate our system, we also implemented a sound-based navigation system as a baseline, and asked six blind participants to approach an empty chair using the two systems. We observed that BlindPilot enabled users to approach a chair faster, with a greater feeling of security and less effort, compared with the baseline system.
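Path generation on a 2D occupancy map can be sketched with a simple grid search. The breadth-first search below is a minimal stand-in for whatever planner BlindPilot actually runs on its LiDAR-built map (the abstract does not specify the algorithm); the grid encoding and function name are our own.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on a 2D occupancy grid (0 = free, 1 = obstacle).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

A robot like BlindPilot would additionally inflate obstacles by its own footprint and smooth the path before guiding a user along it.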
Telerobots may give people with motor disabilities access to education, events, and places. Eye-gaze interaction with these robots is an option when the hands are not functional, but gaze control of telerobots has not yet been evaluated by people from this target group. We conducted a field study with five users in a care home to investigate their preferences and challenges when driving telerobots via our gaze-controlled robotic telepresence system. We used a Wizard of Oz method to explore gaze and speech interaction, and experience prototyping to consider robot designs and types of displays (i.e., monitors versus head-mounted displays).
It is known that many people with visual impairments want to take photos, yet they often have difficulty pointing the camera at the subject. In this paper, we address this problem by proposing a novel photo-taking system named VisPhoto. Unlike existing systems, VisPhoto generates a photo as post-production of an omni-directional camera image: when the shutter button is pressed, the system captures an omni-directional image. In post-production, the user selects, from the objects detected in the image, those that should be included and those that should be excluded. Finally, as the "photo," the system outputs a cropped region that satisfies the user's preferences and is aesthetically pleasing.
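The include/exclude crop search can be illustrated geometrically. The sketch below computes the tightest crop (plus a margin) covering all wanted object boxes and reports any unwanted boxes it still contains; it is a simplified stand-in for VisPhoto's crop selection that ignores the aesthetic scoring term, and the function and box format are our own.

```python
def propose_crop(include, exclude, margin=10):
    """Tightest crop covering all `include` boxes, with leftover `exclude` boxes.

    Boxes are (x0, y0, x1, y1) in pixels. Returns (crop, leftovers), where
    `leftovers` lists excluded boxes that still overlap the crop and would
    need a different crop (or a larger margin trade-off) to remove.
    """
    x0 = min(b[0] for b in include) - margin
    y0 = min(b[1] for b in include) - margin
    x1 = max(b[2] for b in include) + margin
    y1 = max(b[3] for b in include) + margin

    def overlaps(b):
        # Axis-aligned rectangle overlap test against the crop
        return not (b[2] <= x0 or b[0] >= x1 or b[3] <= y0 or b[1] >= y1)

    leftovers = [b for b in exclude if overlaps(b)]
    return (x0, y0, x1, y1), leftovers
```

A full system would search over many candidate crops, trading off exclusion violations against an aesthetic score, rather than taking only the tightest bounding rectangle.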
Video-based eye trackers increasingly have potential to improve on-screen magnification for low-vision computer users. Yet, little is known about the viability of eye tracking hardware for gaze-guided magnification. We employed a magnification prototype to assess eye tracking quality for low-vision users as they performed reading and search tasks. We show that a high degree of tracking loss prevents current video-based eye tracking from capturing gaze input for low-vision users. Our findings show current technologies were not made with low vision users in mind, and we offer suggestions to improve gaze-tracking for diverse eye input.
This study aims to develop a 3D tactile system that helps visually impaired people recognize colors via texture perception when they appreciate art paintings. The system is designed based on fundamentals of color perception, tactile acuity, and braille information. The tactile-color texture scheme was designed to be intuitively learnable for visually impaired people, who lack color perception. Focus interviews showed that visually impaired participants could distinguish and recognize color variation through the grating textures. The study indicates that perceiving color from grating textures is an efficient tactile method for appreciating art paintings. This practical approach can not only improve the accessibility of art museums for blind people but also inform the development of 3D-printed haptic devices.
The inclusion of content warnings for sensitive topics on webpages contributes to creating a psychologically safe internet for all users. Yet the pervasiveness of these warnings is limited by their reliance on content creators and hosts. Rather than placing the sole responsibility of content moderation on content creators and hosts, our system shows a strong proof-of-concept for automatically generating content warnings on the user's side by utilizing keyword identification, sentiment analysis, and online intervention user interface principles. We designed our system as a Chrome extension and evaluated it by testing its accuracy on a dataset of websites with and without sensitive content, and performing a user-interaction lab study. With promising future areas of research such as the ability to personalize thresholds and customize content warnings for specific user needs, this research is a step towards a psychologically safer internet for everyone.
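The keyword-plus-sentiment pipeline can be sketched in a few lines. Everything below is a hypothetical illustration: the keyword lists, the bag-of-words negativity score, and the threshold are invented, and the actual extension presumably uses curated lexicons and a trained sentiment model.

```python
# Hypothetical topic lexicons and negative-word list, for illustration only.
TOPIC_KEYWORDS = {
    "violence": {"attack", "assault", "shooting"},
    "self-harm": {"suicide", "self-harm"},
}
NEGATIVE_WORDS = {"horrific", "tragic", "graphic", "disturbing"}

def content_warnings(text, sentiment_threshold=0.02):
    """Return topic warnings for `text`, or [] if none apply.

    A warning fires only when topical keywords co-occur with sufficiently
    negative sentiment, approximating keyword identification + sentiment
    analysis on the user's side of the page.
    """
    words = [w.strip(".,!?").lower() for w in text.split()]
    total = max(len(words), 1)
    negativity = sum(w in NEGATIVE_WORDS for w in words) / total
    flagged = {topic for topic, kws in TOPIC_KEYWORDS.items()
               if any(w in kws for w in words)}
    return sorted(flagged) if flagged and negativity >= sentiment_threshold else []
```

Requiring both signals is one way to keep false positives down on, say, news articles that merely mention a topic in a neutral register; personalizing the threshold per user is exactly the future direction the abstract names.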
We are interested in how pedestrians will interact with autonomous vehicles (AVs) in a future AV transportation ecosystem, when nonverbal cues from a driver, such as eye movements and hand gestures, are no longer available. In this work, we examine a subset of this challenge: interaction between pedestrians with reduced mobility (PRMs) and AVs. Through a preliminary design study, we explore interface designs that help wheelchair users interact with AVs. We assessed the data collected from the study using qualitative analysis and present findings on AV-PRM interactions, reflecting on the importance of visual interfaces, changes to the wheelchair, and creative use of street infrastructure.
As automatic speech recognition (ASR) becomes more accurate, many deaf and hard-of-hearing (DHH) individuals are interested in ASR-based mobile applications to facilitate in-person communication with hearing peers. We investigate DHH users' preferences regarding the behaviors of the hearing person in this context. Using an ASR-based captioning app, eight Deaf/deaf participants held short conversations with a hearing actor who exhibited certain behaviors, e.g., speaking quietly/loudly or slowly/quickly. Participants indicated that some of the hearing individual's behaviors were more influential on their subjective impression of the communication efficacy. We also found that these behaviors differed in how noticeable they were to the Deaf participants. This study provides guidance, from a Deaf perspective, about the types of behaviors hearing users should ideally exhibit in this context, motivating a focus on such behaviors in the future design or evaluation of ASR-based communication apps.
Mealtimes serve important social functions in our everyday lives. Public dining spaces on college campuses are positioned to be social and engaging spaces to make new connections. With the prevalence of digital devices, technology usage introduces new dynamics into students' mealtimes. In this study, we explored the current mealtime technology usage patterns of college students and rethought the role of technology in eating. We proposed Meal Chat - a technology probe to explore the alternative role of technology during mealtimes by encouraging social interaction for students eating at on-campus public dining areas. Meal Chat aims to provide an opportunity for college students to socialize and reduce the barrier of starting mealtime socialization with a stranger. Rethinking the role of technology in mealtimes, Meal Chat seeks to prompt rather than to replace social interaction during mealtimes.
In this paper, we explore how the emotional behavior of a robot affects interactions with humans. We introduce the EMI platform - an expressive, mobile, and interactive robot - consisting of a circular diff-drive base equipped with a rear-projected expressive face and an omni-directional microphone for voice interaction. We exhibited the EMI robot at a public event, where attendees were given the option to interact with the robot and participate in a survey and observational study. The survey and observations focused on the effects of the robot's expressiveness in interactions with users of different ages and cultural backgrounds. From the survey responses, video observations, and informal interviews, we highlight key design decisions in EMI that resulted in positive user reactions.
Planning for personal financial security is complex, and better-informed investors are likely to make better investment choices. However, the number of alternatives presented by most personal-finance platforms poses novices a dual challenge: choice overload coupled with a lack of domain knowledge. We present a study investigating socially informed sorting, in which users are offered subtle guidance in the form of visual and textual cues that aim to encourage information-seeking when choosing between large numbers of options. We evaluate this approach in an online experiment in which participants go through ten rounds of retirement-saving budget allocation, making choices among 77 different funds. While we found that this technique increased novice investors' information-seeking and offered a significant benefit in terms of return performance, it may also be detrimental for more experienced investors. We discuss these findings in light of prior research.
The perception and experience of avatars have been critical to understanding the social dynamics of virtual environments, online gaming, and collaborative systems. How would emerging sociotechnical systems further complicate the role of avatars in our online social lives? In this paper we focus on how people perceive and understand their avatars in social virtual reality (VR) - 3D virtual spaces where multiple users can interact with one another through VR head-mounted displays (HMDs). Based on 30 interviews, we identified three key themes in people's perceptions and experiences of their avatars across various social VR applications. Our study contributes to further improving social VR technologies and to better understanding emerging social interaction dynamics and consequences within social VR.
Conversation design is an essential step in building a chatbot. Much like visual user interface design, conversation design benefits from prototyping and user testing to allow for conversation exploration and improvement. However, it can be overwhelming to quickly iterate on the conversation design as the iterative process requires not only designing a conversation but also building and testing a working chatbot equipped with the conversation. We developed ProtoChat, a prototype system that supports an iterative conversation design by allowing designers to (1) prototype conversations, (2) test the conversations with the crowd, and (3) review and analyze the crowdsourced conversation data. Results of an exploratory study with four conversation designers show that the designers successfully iterated on their conversation design by reviewing how the crowd followed the conversation, which provided insights into concrete action items for improving their conversation design.
Effective moderation of online communities is an important, but challenging, topic in HCI. In this paper, we study people's co-editing behavior on one of the most popular social Q&A websites in China, called Zhihu.com. We examined question logs to understand who/when/how a question is edited differently by multiple users; we also conducted semi-structured interviews with users who edited others' questions on Zhihu to understand their motivations and their perceptions of co-editing behavior, as well as their concerns and suggestions for future website designs for moderating such behavior. Our findings reveal that although co-editing questions is perceived as a positive and effective approach for improving questions' answerability and shaping norms in the online community, effective moderation mechanisms need to be designed to improve transparency and communication about co-editing behavior and to address possible tensions as a result of co-editing wars.
Care in communities has a powerful influence on potentially disruptive social encounters. Practising care in moderation means exposing a group's core values, which, in turn, has the potential to strengthen identity and relationships in communities. Dissent is as inevitable in online communities as it is in their offline counterparts. However, dissent can be productive by sparking discussions that drive the evolution of community norms and boundaries, and there is value in understanding the role of moderation in this process. Our work draws on an exploratory analysis of moderation practices in the MetaFilter community, focusing on cases of intervention and response. We identify and analyse MetaFilter moderation with the metaphor: "taking care of a fruit tree", which is quoted from an interview with moderators on MetaFilter. We address the relevance of care as it is evidenced in these MetaFilter exchanges, and discuss what it might mean to approach an analysis of online moderation practices with a focus on nurturing care. We consider how HCI researchers might make use of care-as-nurture as a frame to identify multi-faceted and nuanced concepts characterising dissent and to develop tools for the sustainable support of online communities and their moderators.
Ray casting is frequently used to point at and select distant targets in Virtual Reality (VR) systems. In this work, we evaluate user performance in 3D pointing with two different ray-casting versions: infinite ray casting, where the cursor is positioned on the surface of the first object the ray intersects, and finite ray casting, where the cursor is attached to the ray at a fixed distance from the controller. Twelve subjects performed a Fitts' law experiment in which targets were placed 1, 2, or 3 meters away from the user. According to the results, subjects were faster and made fewer errors with the infinite ray length. Interestingly, their (effective) pointing throughput was higher when the ray length was constrained. We illustrate the advantages of both methods in immersive VR applications and provide information for practitioners and developers choosing the most appropriate ray-casting-based selection method for VR.
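The geometric difference between the two techniques is small but consequential. The sketch below places the cursor either at the first ray-sphere intersection (infinite mode) or at a fixed distance along the ray (finite mode); it is a minimal geometric illustration with our own function signature, assuming spherical targets and a unit-length direction vector, not the paper's implementation.

```python
import math

def cursor_position(origin, direction, targets, ray_length=None):
    """Cursor placement for ray-casting selection in VR.

    Infinite mode (ray_length=None): cursor on the first sphere the ray hits.
    Finite mode: cursor at a fixed distance along the ray.
    `targets` are (center, radius) spheres; vectors are 3-tuples and
    `direction` is assumed to be unit-length.
    """
    if ray_length is not None:
        return tuple(o + ray_length * d for o, d in zip(origin, direction))
    best_t = None
    for center, radius in targets:
        oc = tuple(o - c for o, c in zip(origin, center))
        b = sum(d * x for d, x in zip(direction, oc))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - c                      # quadratic discriminant (unit direction)
        if disc >= 0:
            t = -b - math.sqrt(disc)          # nearer intersection along the ray
            if t > 0 and (best_t is None or t < best_t):
                best_t = t
    if best_t is None:
        return None
    return tuple(o + best_t * d for o, d in zip(origin, direction))
```

In infinite mode a tiny wrist rotation can jump the cursor across large depth differences, whereas in finite mode the user must also manage the cursor's depth, which is one plausible reading of the speed/throughput trade-off the study reports.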
We propose UTAKATA, an ephemeral display device that presents information using floating clusters of bubbles. In our previous study, we implemented an electrolysis bubble display using drinkable beverages. Although clear clusters of bubbles could be created in a short time, they took a long time to disappear; consequently, the refresh period of the display was considerably long. To overcome this deficiency, we present UTAKATA, a ticker-like bubble display using a running-water channel. A linear array of seven electrodes is fabricated on the bottom of the channel. By activating the appropriate electrodes among the seven, circular bubble clusters are generated above the electrodes and then made to float downstream by the water flow. This realizes an N × 7 dot-matrix display with a shorter refresh time than our previous method, expanding the range of expression of ephemeral user interfaces using bubbles.
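The display logic amounts to a shift register: each refresh, the seven electrodes emit one new column of bubble clusters while the water flow carries existing columns one position downstream. The toy simulation below models only this logic, not the electrolysis or fluid dynamics, and the representation (7-tuples of 0/1 per column) is our own.

```python
def render_ticker(columns, channel_length):
    """Simulate the running-water bubble ticker.

    `columns` is the sequence of 7-element on/off patterns driven onto the
    electrodes, one per refresh. Returns the visible channel state after each
    refresh: new bubbles appear at the electrode end, older columns drift
    downstream, and the oldest column leaves the channel.
    """
    frames = []
    channel = [(0,) * 7] * channel_length      # empty channel of N positions
    for col in columns:
        channel = [tuple(col)] + channel[:-1]  # emit new column; flow shifts the rest
        frames.append(list(channel))
    return frames
```

Because each column physically leaves the channel after N refreshes, the refresh period is bounded by the flow rate rather than by how long bubbles take to dissolve, which is the deficiency the running-water design addresses.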
Enhancing a user's sense of presence and immersion with haptic technologies has recently been receiving increasing attention in virtual reality (VR). Yet while several devices that use wind as a haptic modality have been proposed, it remains difficult to control wind stimulation precisely and quickly. In this study, we tackle creating an illusion of blowing wind with multiple air cannons, which provide clear, low-latency tactile sensations in a specific area. Although an air cannon is limited to providing a short, one-time stimulus, we propose a method for creating a sense of blowing wind with air cannons based on the perception of apparent tactile motion (ATM). We conducted an experiment with 12 participants to investigate when the wind-blowing sensation occurs and its optimal parameters, using three air cannons placed to hit three parts of the face. Finally, based on the results, we exhibited a demonstration of the wind-blowing illusion with three air cannons: by synchronizing the stimuli with a moving sound source, it created the sense that virtual objects were passing in front of the participants.
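ATM is typically driven by the stimulus onset asynchrony (SOA) between successive actuators. As a rough illustration, the helper below schedules onset times for a chain of actuators, defaulting to the classic Sherrick-Rogers relation for vibrotactile apparent movement (SOA ≈ 0.32·duration + 47.3 ms); this relation is a common starting point from the vibrotactile literature, and the optimal values for air cannons on the face, which this paper investigates, may well differ.

```python
def atm_schedule(n_actuators, duration_ms, soa_ms=None):
    """Onset times (ms) for actuators producing apparent tactile motion.

    If no SOA is given, fall back to the Sherrick-Rogers relation
    SOA = 0.32 * duration + 47.3 (ms), a vibrotactile rule of thumb that may
    not transfer directly to air-cannon stimulation.
    """
    if soa_ms is None:
        soa_ms = 0.32 * duration_ms + 47.3
    return [round(i * soa_ms, 1) for i in range(n_actuators)]
```

Firing three cannons on such a schedule, rather than simultaneously, is what turns three discrete puffs into the percept of a single stimulus sweeping across the face.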
The keycube is a tangible cubic device that includes a text-entry interface for different apparatuses, such as augmented-, mixed-, or virtual-reality headsets, as well as smart TVs, desktop computers, laptops, and tablets. The keycube comprises 80 keys equally distributed over five faces. In this paper, we investigate text-entry performance on the keycube and the potential transfer of typing skill from a traditional keyboard. Using prototype implementations, we conducted a user study comparing different cubic layouts, with a traditional keyboard as a baseline. Experiments show that users attain about 19 words per minute within one hundred minutes of practice with a QWERTY-based cubic layout, more than twice the speed of an unfamiliar cubic layout with a similar error rate, and about 30% of their speed with a traditional keyboard.
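Throughput figures like these typically follow the standard text-entry convention that one "word" is five characters, spaces included. The helper below encodes that convention; the convention is standard in text-entry studies, while the function itself is our own illustration rather than the paper's analysis code.

```python
def words_per_minute(transcribed, seconds):
    """Text-entry speed with the standard five-characters-per-word convention
    (spaces included), as commonly used in text-entry evaluations."""
    return (len(transcribed) / 5) / (seconds / 60)
```

For example, transcribing 95 characters in one minute corresponds to 19 WPM, the same order of speed reported for the QWERTY-based cubic layout.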
We propose ThermalBitDisplay, a haptic display with tiny Peltier devices providing thermal feedback that is perceived differently depending on the body parts touching them. We focus on the different thermal-sensitivity thresholds of body parts and on the spatial summation of thermal perception. Utilizing these characteristics, ThermalBitDisplay can, for example, provide thermal feedback perceived only on the lips and not on other parts of the body. New interactions using thermal feedback have several uses in notifications, communication, alerts, and entertainment. We conducted user studies with 12 participants and demonstrations with more than 200 participants to validate the feasibility of ThermalBitDisplay, and we gathered feedback to improve our prototype.
In this paper, we address gaze estimation under practical and challenging conditions. Multi-view and multi-modal learning have been considered useful for various complex tasks; however, in-depth analyses and large-scale datasets for multi-view, multi-modal gaze estimation under a long-distance, low-illumination setup are still very limited. To address these limitations, we first construct a dataset of images captured under such challenging conditions. We then propose a simple deep-learning architecture that can handle multi-view, multi-modal data for gaze estimation. Finally, we evaluate the proposed network on the constructed dataset to understand the effects of multiple views of a user and of multiple modalities (RGB, depth, and infrared). We report various findings from our preliminary experimental results and expect them to be helpful for gaze-estimation studies dealing with challenging conditions.
To enhance viewing experiences during digital media consumption, both research and industry have considered ambient feedback effects to visually and physically extend the content presented. In this paper, we present AmbiPlant, a system using support structures for plants as interfaces for providing ambient effects during digital media consumption. In our concept, the media content presented to the viewer is augmented with visual actuation of the plant structures in order to enhance the viewing experience. We report on the results of a user study comparing our AmbiPlant condition to a condition with ambient lighting and a condition without ambient effects. Our system outperformed the no ambient effects condition in terms of engagement, entertainment, excitement and innovation and the ambient lighting condition in terms of excitement and innovation.
During the initial stage of design, 2D perspective sketching is an essential tool for designers. However, for products with multiple moving parts that assume multiple poses during usage, 2D perspective sketching can be painstaking and time-consuming. In this study, we show that such multi-pose products are prevalent in the context of product design and therefore propose a 3D sketching system tailored to multi-pose products. Our system enables designers to sketch 3D curves that can be freely posed and easily viewed from different directions; it makes sketching chained moving parts and propagating changes to different poses and perspectives effortless. We show that, with interactions closely resembling traditional 2D perspective sketching and the physical manipulation of an articulated object, designers can focus solely on ideating, iterating, and communicating multi-pose product concepts during the initial stage of design.
The proliferation of ubiquitous computing introduces several challenges to user privacy. Data from multiple sensors and users is aggregated at various scales to produce new, fine-grained inferences about people. Users of these systems are asked to consent to sharing their data without full knowledge of what data are recorded, how the data are used, who has access to the data, and, most importantly, the risks associated with sharing. Recent work has shown that provoking privacy speculation among system users, by visualizing these various aspects, improves user knowledge and enables them to make informed decisions about their data. This paper presents a conceptual model of how researchers can make inferences that provoke privacy speculation among system users, and a case study applying the model.
Traditional measuring devices separate probes from their data visualisation, requiring the operator to frequently switch attention between the measurement and its result. We explored the efficiency of four different visualisation modalities during a circuit analysis task that utilises the output of an oscilloscope. We argue that the spatial separation between an oscilloscope's display and its probe interferes with the cognitive processing of data visualisations, increasing the probability of errors and the time required. We compared a fixed-place oscilloscope, in-situ projections, user-positioned tablets, and a head-mounted display while measuring completion times, subjective workload, number of errors, and personal preferences after each task. Results indicate that the oscilloscope produced the lowest completion times compared to the other modalities. However, visualising data on a user-positioned tablet or through in-situ projections yielded lower subjective workload and fewer errors. We discuss how our work generalises to assistive systems that support practitioners during their training in circuit analysis.
The Midas touch problem is a well-known problem in eye gaze interaction techniques. We present Verge-it, a Midas-touch-free input technique using modulated disparity vergence eye movement for a binocular head-worn display. We conducted a feasibility study under two different visual background conditions, namely a dynamic background (TV) and a static background (Wall). This study revealed a low false-positive rate (TV: 0%, Wall: 2.10%) for Verge-it and acceptable performance.
Long-term mindfulness meditation is known to improve one's health. However, many novice meditators find long-term adoption of the practice challenging due to difficulties in maintaining focused attention (FA). Motivated by the positive effects of the sense of touch (warmth, compression), this work investigates whether haptic-based wearable technologies can help promote FA in novice meditators and improve the meditation experience. A user study with novice practitioners (n=10, 4M/6F) showed potential in using compression/warmth stimuli to positively augment meditation, and we discuss implications for future haptic meditation practices.
When we share ideas through conversation, we convey a specific mental image to another person. However, mental images do not always match, and the effort it takes to stop and achieve common ground would interrupt conversational flow. We are currently investigating the design of a system that translates a speaker's gestures and speech into a visualization of their idea. The challenge of designing such a system is complex, as conversation is composed of many intricate factors. We focus the results in this paper on one factor in particular: speaker frame of reference. We ran a study with 26 participants, using a prototype meant to handle gestured descriptions of object size, and noted the effects of speaker frame of reference. From our analysis of reference frames used during size descriptions, we draw implications for how our proposed system may detect and translate frame of reference to produce visualizations of the user's mental image.
Wearable devices have received tremendous interest in the fitness and personal assistance sectors. Yet most are still worn as auxiliary hardware, falling short of ubiquity and convenience. We examine the potential of a novel deformable wearable device that embeds interactive technologies into garment buttons, and seek to enhance the form factor of buttons to incorporate deformation and motion as inputs. We surveyed garment buttons in everyday clothing to inform an exploratory study, in which we investigated social acceptance and elicited interaction gestures using mockups. Our results indicate that people mostly prefer smaller sizes, and regard sleeves as the most comfortable area to operate and look at when seen by others. Based on our findings, we discuss potential contexts of use, possible applications, and future work.
We present a prototype for a targeted behavioural intervention, Disruptabottle, which explores what happens when a 'nudge' technology becomes a 'shove' technology. If the user does not drink water at a fast enough rate, the bottle will overflow and spill. This reminds users that they have not drunk enough, aggressively nudging them to drink in order to prevent further spillage. This persuasive technology attempts to motivate conscious decision making by drawing attention to the user's drinking habits. Furthermore, we evaluated the emotions and opinions of potential users towards Disruptabottle, finding that participants generally received the device positively, with 59% reporting that they would use the device and 92% believing it to be an effective way of encouraging healthy drinking habits.
Maintaining positive digital well-being has become essential as we spend more and more time working at desks in offices, interacting with computers and typing for hours at a time. In this paper we present PauseBoard, a computer keyboard designed to unobtrusively encourage users to take regular breaks. Through the use of motorised linear potentiometers, the force required to activate each key is increased towards the end of a set work period, until a maximum level of resistance is reached. Preliminary testing shows that 75% of users respond well to this novel gentle encouragement, being reminded to take breaks while still being able to concentrate and finish their current task, especially when the resistance is increased slowly over time.
We propose a small, soft, thin, lightweight, and flexible tactile display consisting of just a single actuator that can provide multiple tactile stimuli, such as vibration and heating. Various other tactile displays require multiple actuators, such as motors, voice coils, speakers, Peltier elements, and heaters. Our proposed tactile display, however, uses just one actuator, a Dielectric Elastomer Actuator (DEA), to output both vibration and heating. By configuring a tactile display with a DEA, the display can remain very small, soft, thin, lightweight, and flexible while outputting multiple stimuli. We describe the design of the prototype, the control method for the output vibration and heating, and the advantages and disadvantages of this concept, and discuss future applications.
Exploiting the potential of automated vehicles and future traffic concepts like platooning or dynamic intersections requires the integration of human traffic participants. Recent research investigating how automated vehicles can communicate with other road users has focused mainly on pedestrians. We argue that cyclists are another important group of vulnerable road users that must be considered, as cycling is a vital transportation modality for a more sustainable future. Within this paper, we discuss the needs of cyclists and claim that their integration will demand new concepts that support moving communication partners. We further sketch potential approaches for augmented reality applications based on related work and present results of a pilot study aiming to evaluate and improve them. Initial findings show that people are open towards concepts that increase cyclist safety. However, it is key to present information clearly and unambiguously to produce a benefit.
As digital fabrication tools become commonplace and widely used for prototyping, the need for simple and flexible solutions to make prototypes interactive is increasing. Different approaches and systems have been proposed to add functional controls to prototypes. We have developed Manypulo, a wearable system that can turn anything into a digital control, without requiring integration of electronics or programming. It is a highly flexible approach based on RFID and accelerometers, which aims to facilitate and speed up the creation of functional prototypes. Hand movements are captured and then translated into digital input by a Bluetooth-connected app.
Smartwatches can be used independently from smartphones, but input tasks like messaging are cumbersome due to the small display size. Parts of the display are hidden during interaction, which can lead to incorrect input. For simplicity, a small set of answer options is often provided instead of general text input, but these are limited and impersonal. In contrast, free-form drawings can answer messages in a very personal way, but are difficult to produce on small displays. To enable precise drawing input on smartwatches, we present a magnetic stylus that is tracked on the back of the hand. In an evaluation of several algorithms, we show that 3D position estimation with a 7.5x20mm magnet reaches a worst-case relative position error of 6% on the back of the hand. Furthermore, we present the results of a user study showing that, for drawing applications, the presented technique is faster and more precise than direct finger input.
In this paper we present the Smart Hard Hat, an interactive hard hat that aims to protect the hearing health of construction workers by utilising shape-shifting technology. The device responds to loud noises by automatically closing earmuffs around the wearer's ears, warning them of the damage being caused while removing the need for them to consciously protect themselves. Construction workers are particularly vulnerable to noise-induced hearing loss, making them the target users for this design. Initial testing revealed that the Smart Hard Hat effectively blocks out noise, that there is a possibility to expand the design to new user groups, and that there is potential in using shape-changing technologies to protect personal health.
The present study examines whether computer-generated speech is perceived to have an age, and if so, whether we can manipulate the perceived age of the voice. We conducted an experimental study with 51 participants in which each computer-generated voice had different age-related characteristics, such as speech rate and pitch frequency. Participants listened to vehicle reviews presented by computer-generated voices with the age-related characteristics we manipulated, and then estimated the age of the voice. Results show that we can change the perceived age of a computer-generated voice by manipulating its age-related characteristics. This work contributes to communities of HCI researchers interested in voice user interfaces (VUIs), conversational agents, and age stereotypes.
Adults on the autism spectrum commonly experience impairments in attention management that hinder many other cognitive functions necessary to appreciate relationships between sensory stimuli. As autistic individuals generally identify as visual learners, the effective use of visual aids can be critical in developing life skills. In this brief paper, we propose a Mobile Augmented Reality for Attention (MARA) application, which addresses a lack of supportive, simple, and cost-effective solutions for autistic adults to train attention management skills. We present the proposed design, configuration, and implementation. Lastly, we discuss future directions for research.
Human-AI collaboration is emerging everywhere as the utilisation of AI increases. Few prior studies have investigated the perceived effectiveness of users solving tasks with AI. To expand on these, we conducted a within-subjects repeated measures study involving 35 participants sorting household waste according to recyclability, both with and without the help of an AI system. Our results show that with AI assistance, people both sorted more effectively and perceived themselves as more effective. Furthermore, we document a trend where people sorting without suggestions perceived themselves as more effective than they were, while the opposite was true for people sorting with suggestions. Based on our results, we propose open questions for future research on perceived effectiveness when collaborating with AI systems.
This study describes the production of a novel taste display that uses ion electrophoresis in five gels containing electrolytes that supply controlled amounts of each of the five basic tastes to apply an arbitrary taste to the user's tongue, analogous to optical displays that produce arbitrary colors from lights of three basic colors. With no voltage applied to the tongue, the user can taste all five tastes. However, when an electric potential is applied, the cations in the gel move to the cathode side and away from the tongue, so that the flavor is tasted only weakly. In this way, we have developed a taste display that reproduces an arbitrary taste by individually suppressing the sensation of each of the five basic tastes (akin to subtractive synthesis). Our study differs from previous work in that it uses an electric current for electrophoresis rather than electrically stimulating the tongue, and it does not involve ingestion of a solution to deliver the taste.
Terms of Service (ToS) agreements are typically written to satisfy corporate legal requirements rather than user understanding. Users are often deeply dissatisfied with online ToS agreements, which they typically find incomprehensible. Nevertheless, users frequently agree to the ToS, potentially surrendering their data and rights. We therefore designed alternative ways of presenting existing ToS documents using crowdsourced sentiment highlighting, to make documents more readable. This highlighting visualizes different sections of the agreement as positive or negative. We evaluated the visualization, finding that participants recognized highlighted information better, and most participants praised the visualization. We discuss design implications and the next steps for more interactive ToS.
Online platforms such as Google and Facebook make inferences about users based on data from their online and offline behavior that can be used for various purposes. Though some of these inferences are available for users to view, there exists a gap between what platforms are actually able to infer from collected data and what inferences users are expecting or believe to be possible. Studying users' reactions to inferences made about them, especially what surprises them, allows us to better understand this gap. We interviewed users of Google and Facebook to learn their current beliefs and expectations about how these platforms use their data to make inferences, and identified four common sources of surprise for participants: irrelevant inferences, outdated inferences, inferences with no connection to online activity, and inferences related to friends or family. We discuss the implications for designing inference-generating systems.
Android unlock patterns are a widely used form of graphical passwords, and like all password schemes, numerous studies have shown that users select a relatively guessable and non-diverse set of passwords. While proposals have been put forth for hardening patterns, such as increasing the number of contact points or changing their locations, none of these proposals has been implemented in the decade-plus since the interface's launch. We propose a new approach: instead of increasing individual pattern complexity, users select two sequential Android patterns, so-called Double Patterns, that are visually superimposed on one another. This allows more complexity without dramatically changing the interface. We report on our preliminary findings from a large user study (n=634) of Double Patterns, finding strong evidence that the scheme is highly usable and increases the complexity of user choice.
When signing up for new mobile apps, users are often provided the option of using their Facebook or other social media credentials (i.e., single sign-on services; SSO). While SSO is designed to make the login process more seamless and convenient, recent social media data breaches may give pause to users by raising security concerns about their online transactions. In particular, users logging into sensitive services, such as dating apps, may feel hesitant to adopt SSO due to perceived potential data leakage to their social networks. We tested this proposition through a user study (N = 364) and found that individual differences in online security perceptions predict the use of SSO for certain sensitive services (e.g., affair apps), but not others (e.g., matchmaking apps). Informed by theory, potential mediators of this relationship (perceived security, ease of sharing, and usability) were also explored, thus shedding light on psychologically salient drivers of SSO adoption.
There is a growing need for usable and secure authentication in virtual reality (VR). Established concepts (e.g., 2D graphical PINs) are vulnerable to observation attacks, and proposed alternatives are relatively slow. We present RubikAuth, a novel authentication scheme for VR in which users authenticate quickly by selecting digits from a virtual 3D cube that is manipulated with a handheld controller. We report two studies comparing how pointing using gaze, head pose, and controller tapping impacts RubikAuth's usability and observation resistance under three realistic threat models. Entering a four-symbol RubikAuth password is fast (1.69 s to 3.5 s using controller tapping, 2.35 s to 4.68 s using head pose, and 2.39 s to 4.92 s using gaze) and highly resilient to observation: 97.78% to 100% of observation attacks were unsuccessful. Our results suggest that providing attackers with support material contributes to more realistic security evaluations.
Modern meteorologists in the National Oceanic and Atmospheric Administration (NOAA) use the Advanced Weather Interactive Processing System (AWIPS) to visualize weather data. However, AWIPS presents critical challenges when comparing data from multiple satellites for weather analysis. To address its limitations, we iteratively designed with Earth Science experts and developed MeteoVis, an interactive system that visualizes spatio-temporal atmospheric weather data from multiple sources simultaneously in an immersive 3D environment. In a preliminary case study, MeteoVis enabled forecasters to easily identify the Atmospheric River event that caused intense flooding and snow storms along the western coast of North America during February 2019. We envision that MeteoVis will inspire future development of atmospheric visualization and analysis of the causative factors behind atmospheric processes, improving weather forecast accuracy. A demo video of MeteoVis is available at https://youtu.be/pdkXhkTtimY.
Geo-visualizations are invaluable aids for communicating complex scenarios that place data in spatial location and across time. With the advancement of 3D fabrication (and the future potential of actuated displays), more installations opt to project geo-visualization data onto physical objects to provide an additional layer of context. In this paper, we set out to compare the effectiveness and user engagement of geo-visualization data projected on either a flat 2D surface or a 3D representation of the terrain. We present our methodology and observations from this study. Our results so far are inconclusive, which leads us to propose that physical geo-visualization is a fertile field for further research.
We propose an interface with two novel techniques to visualize occluded graph nodes and edges that help the user edit map data with a 2.5D geographical structure (e.g., multi-floor indoor maps). We first design a visualization technique, Repel Signification, that employs micro-animation to signify graph elements that overlap with each other (and are potentially erroneous). We also design a technique that enables the user to edit the occluded components with Expansion Interaction, which simultaneously visualizes both in-floor and across-floor occluded connections between map elements. The combination of the two methods would enable map editors (non-experts) to effectively find and fix erroneous data in 2.5D maps without changing the operation manner from existing 2D map-editing interfaces.
Visualizers are usually added to music players to augment the listening experiences of hearing users. However, most members of the Deaf and Hard of Hearing (DHH) community have partial deafness, which may give them a "limited" listening experience compared to their hearing counterparts. In this paper, we present ViTune, a visualizer tool that enhances the musical experiences of the DHH through an on-screen visualizer generating effects alongside music. We observed how members of the DHH community typically experienced music through an initial user study. We then iterated on a visualizer prototype, conducting multiple usability tests involving at least 15 participants from the DHH community. We observed how they experienced music with the help of our prototype and noticed that certain music features and elements augmented these experiences. Visualization attributes, their matching qualitative descriptions, and equivalent subjective attributes were identified with the help of music experts. We also identified affects hypothesized to be induced or dissuaded, toward improving these listening experiences.
A visualization sequence is an effective representation of meaningful data stories. Existing visualization sequencing approaches use heuristics to arrange charts in a meaningful order. While they perform well in specific scenarios, they do not customize the generated sequences to individual users' preferences. In this work, we present VisGuide, an assistive data exploration system that helps a user create contextual visualization sequence trees by sequentially recommending meaningful charts tailored to the user's data exploration preferences. Our results show that VisGuide can recommend chart sequences that interest users and are also considered meaningful by domain experts.
We present an interactive visualization based on parallel coordinates that enables comparison, generation, and modification of multiple parametric design alternatives. Such capabilities are lacking in existing tools. Initial evaluation suggests that our proposal improves usability over existing tools, has novel parameter space exploration capabilities, and also reveals a space for designing direct interactions with visualizations to support parametric exploration.
Artificial generation of facial images is increasingly popular, with machine learning achieving photo-realistic results. Yet, there is a concern that the generated images might not fairly represent all demographic groups. We use a state-of-the-art method to generate 10,000 facial images and find that the generated images are skewed towards young people, especially white women. We provide recommendations to reduce demographic bias in artificial image generation.
Spatial clustering can reduce visual clutter on maps and facilitate pattern recognition. However, interactive map exploration requires spatial clustering to be a dynamic generation and representation process, as users may change derivative representations of clusters during exploration. To address this issue, we present ClusterLens, a new interaction technique that brings a focus+context approach to the multi-resolution spatial clustering process. A lens is laid on a base map to avoid occluding the original, actual point locations. The lens can aggregate the data points at various spatial resolutions as the map zoom level changes. We propose three resolution primitives for spatial clustering: heatmap, circle, and grid, to generate and represent clusters in a separate mapping system. The lens and the base map are linked at all times. We also incorporate coordinated views into the ClusterLens system to facilitate context switching and comparison. We discuss the applicability of our technique and present a use case where ClusterLens can be useful to explore data distributions and reveal spatial patterns.
Streetscape visualizations are necessary for the understanding and evaluation of urban design alternatives. Alongside blueprints and textual descriptions, these design aids can affect city form, building codes, and regulations for decades to come. Yet despite major advancements in computer graphics, crafting high-quality streetscape visualizations is still a complex, lengthy, and costly task, especially for real-time, multiparty design sessions. Here we present DeepScope, a generative, lightweight, and real-time HCI platform for urban planning and cityscape visualization. DeepScope is composed of a Generative Neural Network (DCGAN) and a Tangible User Interface (TUI) designed for multi-participant urban design sessions and real-time feedback. In this paper we explore the design, development, and deployment of the DeepScope platform, and discuss the potential implementation of DeepScope in urban design processes.
Transportation decision-makers from government agencies play an important role in addressing traffic network conditions, which in turn have a major impact on the well-being of citizens. The practices, challenges, and needs of this group of practitioners are underrepresented in the HCI literature. We address this gap through an interview study with 19 practitioners from Transports Québec, a government agency responsible for transportation infrastructure in Québec, Canada. We found that this group of decision-makers can most benefit from research about data analysis tools and platforms that (1) provide information to support data quality awareness, (2) are interoperable with other tools in the complex workflow of the practitioners, and (3) support intuitive and customizable visual analytics. These implications can also inform the design of tools supporting other decision-making tasks and domains.
Via generative adversarial networks (GANs), artificial intelligence (AI) has influenced many areas, especially the artistic field, long regarded as a distinctly human endeavor. In human-computer interaction (HCI) studies, perception biases against AI, machines, or computers are generally cited; however, experimental evidence is still lacking. This paper presents a wide-scale experiment in which 565 participants were asked to evaluate paintings (created by humans or AI) on four dimensions: liking, perceived beauty, novelty, and meaning. A priming effect was evaluated using two between-subject conditions: artworks presented as created by an AI, and artworks presented as created by a human artist. The paintings perceived as painted by a human were evaluated significantly more highly than those perceived as made by AI. Thus, using such a methodology and sample in an unprecedented way, the results show a negative perception bias towards AI and a preference bias towards human creators.
Reliance on technology has diminished our use of mental computation. However, mental computation's inherent privacy features are becoming central to new research on creating passwords that are more secure and usable than those produced by approaches such as password managers. This work empirically studies the validity of cognitive assumptions about mental computation for making codes like passwords, taking as a starting point the password algorithms and cost model for mental computation developed by Blum and Vempala. Through a study with 126 participants, we refute some of their model's assumptions and introduce evidence of behaviours where human computing costs behave counter-intuitively. We also identify three empirical questions around symmetry, repeatability, and distribution of costs whose resolution would allow the development of more predictive cognitive computation models. This would then allow the efficient creation of better security algorithms.
The Semantic Web can be defined as an extension of the current Web in which data is given well-defined meaning, better enabling computers and people to work together. Linked Data (LD) has been envisioned as an essential element of the Semantic Web, listing a set of best practices for publishing and connecting structured data on the Web. Enabling humans to interact with this data is a crucial and challenging step in bringing the Semantic Web forward. In order to better understand how the Human-Computer Interaction community has contributed to this effort, this late-breaking work presents a review focusing on the ACM Special Interest Group on Computer-Human Interaction (SIGCHI) venues. Our findings show that despite LD being a topic of interest to a variety of stakeholders, possibilities for end-users to query, browse, and visualize LD are still missing, underscoring the need for further investigation.
Automatic figure captioning is widely useful for improving the readability and accessibility of figures. Despite recent advances in figure question answering and in parsing figure elements that enable machines to accurately read information from figures, the machine learning community still lacks sufficient understanding of this problem: what content is important to include in a caption, and how to make it sound natural. In this work, we crawled, annotated, and analyzed a corpus of real-world human-written figure captions. Our study results show that real-world captions usually consist of a finite set of caption units and that automatic figure captioning should be formulated as a multi-stage task: first, generating caption units with high accuracy, and second, stitching the units together with diverse stitching patterns to form a natural caption.
In conditionally automated driving, drivers decoupled from operational control of the vehicle have difficulty taking over control when requested. To address this challenge, we conducted a human-in-the-loop experiment in which drivers needed to take over control from an automated vehicle. We collected drivers' physiological data and data from the driving environment, and used them to develop random forest models for predicting drivers' takeover performance in real time. Drivers' subjective ratings of their takeover performance were treated as the ground truth. The best random forest model had an accuracy of 70.2% and an F1-score of 70.1%. We also discuss the implications for the design of an adaptive in-vehicle alert system.
Relatively little research exists on the use of experiences with EEG devices to support brain-computer interface (BCI) education. In this paper, we draw on techniques from BCI, visual programming languages, and computer science education to design a web-based environment for BCI education. We conducted a study with 14 high school students in the 10th and 11th grades to investigate the effects of EEG experiences on students' BCI self-efficacy. We also explored the usability of a hybrid block-flow based visual interface for students new to BCI. Our results suggest that experiences with EEG devices may increase high school students' BCI self-efficacy. Furthermore, our findings offer insights for engaging high school students in BCI.
Food journaling is an effective method to help people identify their eating patterns and encourage healthy eating habits, as it requires self-reflection on eating behaviors. Current tools have predominantly focused on tracking food intake, such as carbohydrates, proteins, fats, and calories. Other factors, such as contextual information and momentary thoughts and feelings internal to an individual, are also essential to help people reflect upon and change attitudes about eating behaviors. However, current dietary tracking tools rarely support capturing these elements as a way to foster deep reflection. In this work, we present Eat4Thought, a food journaling application that allows users to track the emotional, sensory, and spatio-temporal elements of meals as a means of supporting self-reflection. The application enables vivid documentation of experiences and self-reflection on the past through video recording. We describe our design process and an initial evaluation of the application. We also provide design recommendations for future work on food journaling.
Collecting labeled activity data over long periods entails considerable effort from users. We propose Checkpoint-and-Remind (CAR), a hybrid approach that combines participatory labeling (PART) and context-triggered ESM labeling (ESM). Checkpoint-and-Remind preserves user control while reducing users' burden in recording activities. Meanwhile, it features a context-triggered ESM mechanism as a backup to remind users to label. Our preliminary evaluation of CAR with nine participants, who collected and labeled their mobility activity data for 15 weekdays, showed that participants collected a larger amount of annotated mobility data using CAR than with PART or ESM alone. In addition, participants had a higher annotation rate when using CAR than when using ESM. Our results show that a hybrid approach combining manual and automated recording is promising. Our future work will validate these results and measure additional compliance-related metrics with more participants.
ATHack, an annual assistive technologies hackathon at MIT, is unique in that community members living with disabilities (co-designers) propose projects and work with hackers to create prototypes over a two-week period. Since 2014, over 75 co-designers and 400 students have participated in ATHack. We present an overview of the program goals and implementation and share our reflections on the strengths and challenges surrounding the event as organizers, participants, and co-designers. Our reflections include that open communication between co-designers and participants is crucial, and that working on well-scoped, feasible projects is motivating for participants. In a survey (n=48) of ATHack participants from 2014-2019, 89% of respondents would recommend the event and 75% reported that they learned about disability and user-centered design. Our reflections suggest that the collaborative hackathon model can engage students and spark innovations in accessible interfaces. We hope this report will inspire and guide others in implementing similar educational and design initiatives.
Conversational robots face the practical challenge of providing timely responses to ensure smooth interactions with users. Thus, those who design and implement robots need to understand how different levels of response delay may affect users' satisfaction with the conversation, how to balance the trade-off between a robot's voice quality and response time, and how to design strategies to mitigate the possible negative effects of a long delay. Via an online video-prototype study of a service robot with 94 Chinese participants, we find that users could tolerate a delay of up to 4 s, but their satisfaction dropped at an 8-s delay during both information-retrieval conversations and chitchat. We gain an in-depth understanding of users' preferences for the trade-off between voice quality and response speed, as well as their opinions on possible robot graphical user interface (GUI) designs to alleviate negative user experiences with response latency.
Currently, one of the biggest challenges in demographic detection from images and social media is the lack of labelled demographic data. A big part of the challenge is that no suitable mechanism exists to replace traditional intercept surveys in a way that ensures fairness and inclusion. The lack of labelled data has also impacted the training of AI algorithms: without labels relevant to the target domains, it is hard to estimate the accuracy of AI algorithms when they are applied to real-world situations. In this paper, we propose a framework for collecting in-the-wild images and demographic labels from ordinary people (e.g., park visitors) that ensures privacy is integrated at every stage of the data collection process, from storage to processing and sharing.
We envision a future combinative use of an AR headset and a smartphone that can simultaneously provide a more extensive display and precise touch input. In this way, the input/output interfaces of the two devices can fuse to redefine how a user manages application windows seamlessly across them. In this work, we conducted formative interviews with ten people to understand how users would prefer to manage multiple windows on the fused interface. Our interviews highlighted that the desire to use a smartphone as a window management interface shaped users' interaction practices for window management operations. This paper reports how this desire to use the smartphone as a window manager manifested.
Voice guidance for car navigation typically treats drivers as docile actors. Recent work highlights limitations of this assumption that make drivers rely less on the given directions. To explore how drivers can make better navigation decisions, we conducted a pilot Wizard-of-Oz study in which turn suggestions are given through conversations between two voice agents. We asked 30 participants to drive in a simulation environment using voice guidance that gives three types of suggestions: familiar, optimal, and new routes. We examined their route choices, perceived workload, and utterances while driving. We found that while most drivers followed directions appropriate for the given scenarios, they were more likely to make inappropriate choices after hearing alternatives in conversations. On the other hand, two-party conversations allowed drivers to better reflect on their choices after trips. We conclude by discussing preliminary design implications for car navigation voice guidance specifically and recommender systems in general.
While studies in HCI4D have been advanced by the shift of perspective from developmental studies to a range of other discourses, current analytical concepts for understanding the sociality of society in Africa have arguably led to some misinterpretations of the place of technology. This provocation suggests that an 'African Standpoint', based on a combination of various standpoint positionalities and the Wittgensteinian approach of Winch, can offer conceptual and analytical sensitivities for articulating social relations, transnational engagements, and the conceptualisation of technological innovation. This provides an approach for seeing and accounting for things as they are - right here, right there and right now - and not some idealised conception of an African reality.
Artificial Intelligence (AI) is increasingly augmenting and generating online content, but research suggests that users distrust content which they believe to be AI-generated. In this paper, we study whether introducing a confidence indicator, a text rating of an algorithm's confidence in its source data alongside rationale for why the data is more or less trustworthy, affects this distrust in Airbnb host profiles believed to be computer-generated. Our results indicate that a low-confidence indicator decreases participant trust in the rental host, but high-confidence indicators have no significant impact on trust. These findings suggest that user trust of AI-generated content can be negatively, but not positively, affected by a confidence indicator.
Exploratory data analysis is an open-ended iterative process whose goal is to discover new insights. Much of the work to characterise this exploration stems from qualitative research, resulting in rich findings, task taxonomies, and conceptual models. In this work, we propose a machine-learning approach in which the structure of an exploratory analysis session is learned automatically. Our method, based on hidden Markov models, automatically builds a storyline of past exploration from log events that shows key analysis scenarios and the transitions between analysts' hypotheses and research questions. Compared to a clustering method, this approach yields higher accuracy for detecting transitions between analysis scenarios. We argue for incorporating provenance views in exploratory data analysis systems that show, at minimum, the structure and intermediate results of past exploration. Besides aiding the reproducibility of the different analyses and their results, this can encourage analysts to reflect upon and ultimately adapt their exploration strategies.
Moving target selection is one of the most fundamental interaction tasks in user interfaces involving dynamic content. In such systems, many stimuli can induce positive or negative emotions in individual users, who must then make selections under the influence of these emotions, such as waving a stick to hit a fast-moving spider. In this study, we explored the effects of induced emotion on user performance in a time-constrained moving target selection task. We found that participants tended to select targets faster but make more mistakes in the positive emotion condition, while they selected targets more slowly with fewer mistakes in the negative emotion condition. We discuss future research directions on this topic and how it could help the design of user interfaces with dynamic content.
With the increasing availability of head-mounted displays (HMDs) that show immersive 360° VR content, it is important to understand to what extent these immersive experiences can evoke emotions. Typically, to collect emotion ground-truth labels, users rate videos through post-experience self-reports that are discrete in nature. However, post-stimulus self-reports are temporally imprecise, especially after watching 360° videos. In this work, we design six continuous emotion annotation techniques for the Oculus Rift HMD aimed at minimizing workload and distraction. Based on a co-design session with six experts, we contribute HaloLight and DotSize, two continuous annotation methods deemed unobtrusive and easy to understand. We discuss the next challenges in evaluating the usability of these techniques and the reliability of continuous annotations.
Intelligent connected vehicles (ICVs) are the foundation of an intelligent transport ecosystem. While the population of ICV users is gradually increasing, the motives underlying ICV use remain unclear. Here, we developed a scale to measure motives for ICV use. Taking the results of exploratory factor analysis and confirmatory factor analysis together, we suggest a four-factor model: symbolic, instrumental, affective-self, and affective-other motives. We found that ICV users with different characteristics in age, annual income, and driving distance evaluated the importance of these motives differently. ICV users who are wealthier and drive longer distances are more likely to show higher symbolic and affective-self motives. Also, users' symbolic motive grows with age. Finally, we found that symbolic and affective motives correlated highly with social norms and loyalty, while instrumental motives related to perceived feelings of control over operating the vehicle. These findings offer novel insights into how to design ICVs and make related policies.
This paper presents an investigation into differences in participants' comfort levels with new humanoid robots, in addition to the tendency to ascribe a gender to a robot designed to be gender-neutral. These factors were used to examine participants' perceptions of occupational competency, trust, and preference for a humanoid robot over a human male or female for various occupations. Our results suggest that comfort level influences these metrics but does not cause a person to ascribe a gender to a gender-neutral robot. These findings suggest that there is no need to perpetuate societal gender norms onto robots. However, even when a robot is designed to be gender-neutral, people are still likely to ascribe a gender to it, though this gendering does not significantly impact occupational judgements.
We address the task of recommending the next likely flight destination to customers of a major international airline. We compare performance using historical flight data and an actual user evaluation. Using two years of historical flight data consisting of tens of millions of flights, an ensemble approach and a collaborative filtering approach obtained accuracies of 47% and 20%, respectively, on a test set of 100,000 customers, highlighting the challenge of the domain. We then evaluated our recommendations on 10,000 actual customers, with a 45-45-10 split among ensemble, collaborative filtering, and control groups. The overall predictive power with real users was 23%: 19% for the ensemble method and 30% for collaborative filtering. Results indicate that, in complex and shifting domains such as this one, one cannot rely solely on historical data for evaluating the impact of user recommendations. We discuss implications for recommender systems and future research in this and related domains.
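For readers unfamiliar with the collaborative filtering baseline, a toy item-based variant over an invented customer-destination matrix might look as follows; the destinations, flight counts, and scoring are illustrative assumptions, not the airline's system:

```python
# Illustrative item-based collaborative filtering for next-destination
# recommendation; customers, destinations, and counts are invented.
import numpy as np

# rows = customers, cols = destinations (hypothetically NRT, LHR, JFK, HNL)
flights = np.array([[3, 1, 0, 0],
                    [2, 0, 1, 0],
                    [0, 1, 2, 1],
                    [0, 0, 1, 2]], dtype=float)

# cosine similarity between destination columns
norms = np.linalg.norm(flights, axis=0)
sim = (flights.T @ flights) / np.outer(norms, norms)

def recommend(customer, k=1):
    scores = flights[customer] @ sim          # weight by past travel
    scores[flights[customer] > 0] = -np.inf   # drop destinations already flown
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # top unseen destination index for customer 0
```

A production system would additionally handle cold-start customers and seasonal drift, which is precisely where purely historical evaluation falls short.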
We present research on how the perception of intelligent systems can be influenced by early experiences of machine performance, and how explainability potentially helps users develop an accurate understanding of system capabilities. Using a custom video analysis system with AI-assisted activity recognition, we studied whether presenting explanatory information for system outputs affects user perception of the system. In this experiment, some participants encountered AI weaknesses early, while others encountered the same limitations later in the study. The difference in ordering had a significant impact on user understanding of the system and the ability to detect AI strengths and weaknesses, and the addition of explanations was not enough to counteract the strong effects of early impressions. The results demonstrate the importance of first impressions with intelligent systems and motivate the need for improved methods of intervention to combat automation bias.
In this work, we address the problem of measuring and predicting temporal video saliency - a metric that defines the importance of a video frame for human attention. Unlike conventional spatial saliency, which defines the location of salient regions within a frame (as is done for still images), temporal saliency considers the importance of a frame as a whole and may not exist apart from context. We propose an interactive cursor-based interface for collecting experimental data about temporal saliency. We collect the first human responses and analyze them. As a result, we show that, qualitatively, the produced scores carry a very explicit meaning tied to semantic changes in a frame, while, quantitatively, they are highly correlated across observers.
In addition, we show that the proposed tool can simultaneously collect fixations similar to those produced by an eye-tracker, in a more affordable way. Further, this approach may be used to create the first temporal saliency datasets, which will allow training computational predictive algorithms. The proposed interface does not rely on any special equipment, which allows it to be run remotely and to cover a wide audience.
Gamettes are playful tools for agent-based participatory simulation and have been shown to be valid for collecting rich behavioral data from human decision-makers. However, an open question is how such data can be used to create or update agent-based and behavioral models. In this paper, we evaluate and compare the performance of different methods for imitating human behavior. We use data extracted from gamettes in an empirical study on supply chain decisions, and compare the performance of a nonlinear regression model with two imitation learning algorithms. Our results demonstrate that each method is capable of modeling, and thus predicting, human behavior by considering multiple trajectories from different players.
In this work, we propose a personalized trust predictor for modeling trust dynamics in human-robot teaming. The proposed method models trust with a Beta distribution to capture the three properties of trust dynamics, taking performance-induced positive and negative attitudes as parameters. The model learns the prior distribution of the parameters from a training dataset, and estimates the posterior distribution based on a short training session and occasionally reported trust feedback. Our experiments showed that the proposed method accurately predicted people's trust dynamics, achieving a root mean square error (RMSE) of 0.0724 on a 0-1 scale.
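A minimal sketch of the Beta-distribution idea, assuming a uniform prior and unit pseudo-count updates (both assumptions for illustration, not the paper's learned parameters):

```python
# Sketch of a Beta-distribution trust model: positive experiences
# increment alpha, negative ones increment beta, and predicted trust
# is the posterior mean alpha / (alpha + beta). Parameters are
# illustrative; the paper learns its priors from training data.
class BetaTrust:
    def __init__(self, alpha=1.0, beta=1.0):  # assumed uniform prior
        self.alpha, self.beta = alpha, beta

    def update(self, success, weight=1.0):
        # performance-induced positive/negative attitude as pseudo-counts
        if success:
            self.alpha += weight
        else:
            self.beta += weight

    def trust(self):
        return self.alpha / (self.alpha + self.beta)

m = BetaTrust()
for outcome in [True, True, False, True]:
    m.update(outcome)
print(round(m.trust(), 3))  # posterior mean after 3 successes, 1 failure
```

Occasional reported trust feedback could be folded in the same way, as additional weighted pseudo-counts.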
While the Human-Computer Interaction (HCI) research on perceived ownership over physical and digital possession is actively developing, there is still a noticeable lack of common conceptual understanding of ownership that would allow for systematizing our explorations and discoveries. In this work, we analyze the ownership literature in different domains and propose the HCI-focused adaptation of the theoretical conceptualization of ownership, including (I) five relevant structural dimensions of ownership, (II) three synthesized motivational aspects of ownership, and (III) three related social behaviors. Developing a common conceptual ground for ownership will allow future technology researchers and practitioners to effectively communicate and articulate their findings.
We conducted a preliminary online study (N=261) investigating whether people's susceptibility to fake news on social media depends on how the fake news is associated with real news they viewed previously, as well as on individuals' cognitive ability. Across two phases, we varied the association in three between-subjects conditions: associative inference, repetition, and irrelevant (control). Our results showed limited impact of association type on participants of low cognitive ability. In contrast, for participants of high cognitive ability, discrimination of fake news from real news tended to be worse in the associative inference condition than in the other two conditions. Thus, our findings suggest that individuals of high cognitive ability are also susceptible to forming beliefs in fake news, but differently from those of low cognitive ability.
Many employees who join work video meetings remotely are frustrated by technological constraints on their engagement. While we should seek to deepen engagement for those who want it, in this paper we explore how low engagement by remote employees in video meetings can also be a social choice. Employees don't always need to engage fully in all or part of a meeting, and they use the technology to help communicate that choice. We argue that video meeting systems should expand the spectrum of engagement levels for remote meeting participants.
We introduce Urban Immersion, a web-based platform for crowdsourcing immersive perception data of urban spaces. Current research based on crowdsourced data mainly relies on 2D representation tools, which limits studies involving spatial features. Thus, in designing and implementing the platform, which aims to help architects, urban researchers, and people in spatial management understand their users' preferences, we incorporate WebVR and 360° video techniques to display both realistic and abstract representations of the urban environment. We chose the city of Shanghai for the first round of experiments to test the platform. We first conducted internal testing of the prototype, then published it on social media and invited 771 people to rate their perception preferences. We performed a rating analysis and visual mapping of the 5,735 valid responses collected, evaluated this round of crowdsourced data collection on the Urban Immersion platform, assessed the effectiveness of applying 3D representation techniques to crowdsourcing platforms, and propose improvements to the platform in future work.
Despite many benefits of automated driving, such as reduced fuel consumption, traffic congestion, and crashes, a lack of trust hinders the adoption of automated vehicles (AVs). Prior research focused on people's trust in AVs based on the AVs' overall performance. The present study focuses on how people's trust in AVs changes over time in a sequential decision-making context. We conducted a human-in-the-loop experiment with 16 participants in a virtual 3D environment wherein participants acted as passengers riding an AV. We manipulated two independent variables: level of stochasticity (high vs. low) and source of stochasticity (external vs. internal). Dependent variables included participants' moment-to-moment trust in AVs and post-experiment trust. Our results revealed that when the stochasticity was due to internal errors (e.g., the AV's sensor errors) rather than external errors (e.g., traffic jams or roadblocks), participants' trust in AVs decreased more significantly. Also, the larger the cost of an error, the larger the trust decrement.
One strength of educational games is their adaptivity to the individual learning progress. Methods to assess progress during gameplay are limited, especially in virtual reality (VR) settings, which show great potential for educational games because of their high immersion. We propose the concept of adaptive hints using progress assessment based on player behavior tracked through a VR-system's tracking capabilities. We implemented Social Engineer, a VR-based educational game teaching basic knowledge about social engineering (SE). In two user studies, we will evaluate the performance of the progress assessment and the effects of the intervention through adaptive hints on the players' experience and learning effects. This research can potentially benefit researchers and practitioners, who want to assess progress in educational games and leverage the real-time assessment for adaptive hint systems with the potential of improved player experience and learning outcomes.
Virtual reality (VR) enables immersive applications that make rich content available independent of time and space. By replacing or supplementing physical face-to-face meetings, VR could also radically change how we socially interact with others. Despite this potential, the effect of transferring physical collaborative experience into a virtual one is unclear. Therefore, we investigated the experience differences between a collaborative virtual environment (CVE) and a physical environment. We used a museum visit as a task since it is a typical social experience and a promising use case for VR. 48 participants experienced the task in real and virtual environments, either alone or with a partner. Despite the potential of CVEs, we found that being in a virtual environment has adverse effects on the experience which is reinforced by being in the environment with another person. Based on quantitative and qualitative results, we provide recommendations for the design of future multi-user virtual environments.
Social Virtual Reality (VR) invites multiple users to "interact" in a shared immersive environment, which creates new opportunities for remote communication, and can potentially be a new tool for remote medical consultations. Using knee osteoarthritis consultation as a use case, this paper presents a social VR clinic that allows patients to consult a nurse represented as a virtual avatar with head, upper body and hands visible. We started with an ethnographic study at a hospital with three medical professionals and observed three patient consultation sessions to map the patient treatment journey (PTJ) and distill design requirements for social VR consultation. Based on the results of the study, we designed and implemented a social VR clinic to meet the identified requirements. Our work expands on the potential of social VR to help reshape patient treatment by reducing the workload of medical staff and the travel time of patients. In the future, we plan to conduct user studies to compare face-to-face (F2F) with social VR consultations.
Autonomous vehicles are about to enter the mass market, and with them a complex socio-technical system including vulnerable road users such as pedestrians and cyclists. Communication from autonomous vehicles to vulnerable road users can ease their introduction and aid in understanding their intentions. Various modalities and messages to communicate have been proposed and evaluated. However, a concise design space building on work from communication theory is yet to be presented. Therefore, we share our work on such a design space, consisting of four dimensions: Message Type, Modality, Locus, and Communication Participants.
The traffic system is a complex network involving numerous individuals (e.g., drivers, cyclists, and pedestrians) and vehicles. Road systems vary in aspects such as the number of lanes, right of way, and configuration. With the emergence of autonomous vehicles, this system will change. Research has already addressed the communication possibilities that go missing when no human driver is needed. However, there is no common evaluation standard for proposed external communication concepts with respect to the complexity of the traffic system. We have therefore investigated the evaluation of such concepts in virtual reality, monitor-based, and prototypical setups, with special regard to scalability. We found that simulated traffic noise is not considered in current evaluations and that setups involving multiple people and multiple lanes with numerous vehicles are scarce.
Augmented reality (AR) is increasingly being used for navigation in urban environments, allowing users to see instructions in their physical environment. However, viewing this information through a smartphone's screen is not ideal, as it can cause users to become inattentive to their surroundings. AR head-mounted displays (HMDs) have the potential to overcome this issue by integrating navigational information into the user's field of view (FOV). While prior work has explored the design of turn-by-turn egocentric AR navigation interfaces, little work has explored the design of exocentric interfaces, which provide the user with an overview of their desired route. In response, we examined the impact of three different exocentric AR map displays on pedestrian navigation performance and user experience. Our work highlights pedestrian safety concerns and provides design implications for future AR HMD pedestrian navigation interfaces.
Advancements in Augmented Reality (AR) technologies and the processing power of mobile devices have created a surge in the number of mobile AR applications. Nevertheless, many AR applications have adopted surface gestures as the default method for interaction with virtual content. In this paper, we investigate two gesture modalities, surface and motion, for operating mobile AR applications. To identify optimal gestures for various interactions, we conducted an elicitation study with 21 participants for 12 tasks, which yielded a total of 504 gestures. We classified and illustrated the two sets of gestures, and compared them in terms of goodness, ease of use, and engagement. The elicitation process yielded two separate sets of user-defined gestures: legacy surface gestures, which were familiar and easy to use for participants, and motion gestures, which had better engagement. From the interaction patterns of this second set of gestures, we propose a new interaction class called TMR (Touch-Move-Release), which we define for mobile AR.
Smart service chatbots, which aim to provide efficient customer service, have increased rapidly. Currently, few task-oriented chatbots employ emotion strategies alongside the problem-solving process. Meanwhile, customers can have different preferences across channels (e.g., social media and webpages). This paper presents the design of a series of emotional strategies (ES) combined with task solving, and tests their effectiveness across channels. In a multi-channel service chatbot, Moli, we compared ES with a brief apology (BA) and no emotional response (NER) in a Wizard of Oz study. The results indicated the effectiveness of emotion strategies borrowed from psychotherapy techniques. Meanwhile, we found that customers preferred brief and direct strategies on webpages, whereas rich emotional strategies in all stages were more favorable on social media. With detailed strategy design, even "cold comfort" was found helpful for building human-machine relationships, pacifying emotions, and improving satisfaction and system usability.
Does an affordance exist on the touchscreen? This paper examines an affordance effect that elicits users' touch gestures. Previous gesture studies taking a user-centered approach have neglected the effects of the visual characteristics of interface objects. To validate these effects, we collected users' gesture responses through a gesture elicitation experiment for nine functions on thirteen stimuli systematically manipulated by visual properties. In our analyses, we investigate 1) user agreement on function-gesture mappings, and 2) differences in gesture selection in terms of visual pattern. We find that users respond differently in gesture selection depending on the visual pattern, i.e., the layout and number of images. This result indicates that designers should consider the visual properties of on-screen objects beyond user-elicited gestures. This study contributes a method for designing intuitive gestures on the touchscreen.
Trigger-action programming lets end-users automate and connect IoT devices and online services through if-this-then-that rules. Early research demonstrated this paradigm's usability, but more recent work has highlighted complexities that arise in realistic scenarios. As users manually modify or debug their programs, or as they use recently proposed automated tools to the same end, they may struggle to understand how modifying a trigger-action program changes its ultimate behavior. To aid in this understanding, we prototype user interfaces that visualize differences between trigger-action programs in syntax, behavior, and properties.
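As a hedged illustration of what a syntactic difference between two trigger-action rules could look like - the rule fields and the helper `diff_rules` are hypothetical, not the prototype's implementation:

```python
# Toy syntactic diff between two if-this-then-that rules, the kind of
# change such a visualization interface could surface to the user.
# Rule structure and field names are invented for illustration.
def diff_rules(old, new):
    """Return the fields whose values changed, as {field: (old, new)}."""
    return {k: (old.get(k), new.get(k))
            for k in set(old) | set(new) if old.get(k) != new.get(k)}

rule_v1 = {"trigger": "motion detected", "condition": "after 10pm",
           "action": "turn on porch light"}
rule_v2 = {"trigger": "motion detected", "condition": "after sunset",
           "action": "turn on porch light"}
print(diff_rules(rule_v1, rule_v2))
```

Behavioral and property-level differences, as the abstract notes, require more than field comparison - e.g., checking whether the modified rule fires in situations the original did not.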
Drivers and pedestrians use various culturally based nonverbal cues such as head movements, hand gestures, and eye contact when crossing roads. In the absence of a human driver, this communication becomes challenging in autonomous vehicle (AV)-pedestrian interaction. External human-machine interfaces (eHMIs) for AV-pedestrian interaction are being developed based on research conducted mainly in North America and Europe, where traffic and pedestrian behavior are highly structured and rule-following. In other cultures (e.g., South Asia), traffic can be very unstructured (e.g., pedestrians spontaneously crossing the road away from crosswalks is not uncommon). However, research investigating cross-cultural differences in AV-pedestrian interaction is scarce. This research focuses on investigating such differences to gain insights useful for designing better eHMIs. This paper details three cross-cultural studies designed for this purpose, which will be deployed in two different cultural settings: Sri Lanka and Germany.
Small-scale automation tools, or "bots," have been widely deployed in open-source software development to support manual project maintenance tasks. Though interactions between these bots and human developers can have significant effects on user experience, previous research has mostly focused on project outcomes. We reviewed existing small-scale bots in wide use on GitHub. After an in-depth qualitative and quantitative evaluation, we compiled several important design principles for human-bot interaction in this context. Following these principles, we further propose a workflow to support bot developers.
Community media forums such as CGNet Swara and Mobile Vaani leverage interactive voice response (IVR) technology to enable feature phone owners in underprivileged regions of the world to call a toll-free phone number and listen to and report local stories. This excludes potential users living in areas without any mobile network. In this paper, we describe the deployment of an app that both facilitates a new means of media dissemination using a novel incentive scheme, and enables content collection, from no-network areas. In partnership with CGNet Swara, we deployed our app in the rural Indian states of Chhattisgarh and Telangana. In one month, 20,955 stories were transferred through Bluetooth to 2,443 unique phones, for which a total of $930 in mobile airtime credits was disbursed to 307 of the 680 users that installed the app. In an 81-day period, 537 local stories were reported using the app from 117 unique users. We present a quantitative and qualitative analysis of user behavior.
While most traditional user experience (UX) evaluation methods (e.g., questionnaires) have made the transition to the "wild", physiological measurements still strongly rely upon controlled lab settings. As part of an ongoing research agenda, this paper presents a novel approach for UX research that contributes to this transition. The proposed method triangulates GPS and physiological data to create emotional maps, which outline geographical areas where users experienced specific emotional states in outdoor environments. The method is implemented as a small portable recording device and data-visualization software. A field study was conducted in an amusement park to test the proposed approach. Emotional maps highlighting the areas where users experienced varying levels of arousal are presented. We also discuss the insights uncovered and how UX practitioners could use the approach to bring their own research into the wild.
Active use of voice assistants highlights the importance of interpersonal relationships with conversational voice agents. When an agent evaluates a user's task performance, empathic feedback is needed to prevent negative feelings in the user. Voice reflects emotions through nonverbal features that can either strengthen or lessen the feeling of empathy. Our study investigated the effect of nonverbal vocal cues in speech interaction on users' perceptions of the agent. 39 university students participated in the experiment, and a MANOVA was conducted to analyze their responses regarding intimacy, similarity, connectedness, enjoyment, and ease of use. The results showed that using nonverbal vocal cues in empathic feedback contributes to establishing an interpersonal relationship with the agent, offering implications for the field of human-centered agent design.
We present Kinetic AR, a holistic user experience framework for visual programming of robotic motion systems in Augmented Reality. The Kinetic AR framework facilitates human-robot collaboration in a co-located environment. Our goal is to present a deployable guide for the creation and visualization of manifold robotic interfaces while maintaining a low entry barrier to complex spatial hardware programming. A two-phase validation process was conducted to assess our work. In the first phase, we performed a set of interviews with robotics experts; based on these interviews, we established three main areas that our framework tackles in different time domains. In the second phase, we developed a set of prototypes using mobile Augmented Reality that apply the principles of Kinetic AR to multiple hardware actors, including an AGV, a robotic arm, and a prosthetic system. Additional feedback from experts indicates the potential of the Kinetic AR framework.
In the past two years, an emerging body of HCI work has focused on classroom proxemics: how teachers divide time and attention over students in different regions of the classroom. Tracking and visualizing this implicit yet relevant dimension of teaching can benefit both research and teacher professionalization. Prior work has demonstrated the value of depicting teachers' whereabouts, yet a major opportunity remains in the design of new, synthesized visualizations that help researchers and practitioners gain deeper insights from the vast tracking data. We present the Dandelion Diagram, a synthesized heatmap technique that combines teachers' positioning and orientation (heading) data and affords richer representations beyond whereabouts, for example teachers' attention patterns (which directions they were attending to) and their mobility patterns (i.e., trajectories in the classroom). Using classroom data from a field study, this paper illustrates the design and utility of the Dandelion Diagram.
Data-driven dashboards have been increasingly integrated into various contexts, particularly in educational settings. There is a growing need to understand how to design learning dashboards to help educators support learning experiences by providing real-time formative feedback. We are studying the design of a learning dashboard that can support educational facilitation tasks in a museum setting. In our approach, we use discrete facilitation tasks as the cornerstone of our design process. Using this task-based approach, we conducted pilot studies and participatory design sessions to better understand the context of design. In this paper, we offer preliminary findings and design considerations for supporting and digitally augmenting facilitation tasks in a highly interactive, open-ended learning environment.
Monologue-style lecture videos are the dominant format on most distance-learning platforms, such as MOOCs. However, studies have found that observing a dialogue between a tutor and a tutee can produce better learning than a monologue, as the observing student learns more from the tutee than from the tutor. In this study, we propose a system that transforms an existing monologue video into a dialogue-like video by adding a tutee agent. In an initial user study, we found that most observing students preferred watching the dialogue-like lecture videos with the tutee agent and had better learning experiences than with the monologue version.
The advancement of technology has paved the way for sophisticated school violence detection and intervention systems. Existing research, however, has fallen short of reflecting the goals and contexts of the target users: teachers, students, and parents. We therefore conducted interviews with 35 teachers about school violence and technology adoption. While there was wide consensus on the necessity of technology, the teachers pointed out possible adverse effects of its adoption: a greater burden on teachers, privacy concerns, and the consequences of inaccurate algorithms. Based on these findings, we derived design implications for the stages of data collection, decision making, and data conveyance. These implications can help shape the design of school violence detection and intervention systems from the teachers' perspective.
A learner's guessing behavior while playing educational games can be a key indicator of disengagement, which negatively impacts learning. To distinguish a learner's guessing behavior from solution behavior, we present an exploratory study using motion features, which represent a learner's finger movements on a tablet screen. Our data were collected from the Missing Number game in KitKit School, a tablet-based math game designed for children from pre-K to grade 2 of elementary school. A total of 5,040 problem-solving logs, collected from 168 students, were analyzed. A two-sample t-test showed a significant difference between guessing and solution behavior for four groups of motion features indicating distance, curvedness, complexity, and pause (p<0.001). Additionally, our empirical results showed the potential of motion features for the automatic detection of guessing behavior. Our best model, using a random forest classifier, yielded an accuracy of 0.778 and an AUC of 0.851.
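As an illustration of how motion features of this kind might be derived from raw touch traces, the sketch below computes distance, a simple curvedness proxy, and pause time from timestamped touch points. The trace format, the 500 ms pause threshold, and the exact feature definitions are our own assumptions for illustration; they are not taken from the study.

```python
import math

def motion_features(trace):
    """Compute illustrative motion features from a finger trace.

    trace: list of (t, x, y) tuples -- timestamp in seconds, screen coords.
    Returns total distance travelled, a curvedness proxy (sum of absolute
    heading changes), and total pause time (sample gaps over a threshold).
    """
    distance = 0.0
    curvedness = 0.0
    pause = 0.0
    prev_heading = None
    for (t0, x0, y0), (t1, x1, y1) in zip(trace, trace[1:]):
        dx, dy = x1 - x0, y1 - y0
        step = math.hypot(dx, dy)
        distance += step
        if step > 0:
            heading = math.atan2(dy, dx)
            if prev_heading is not None:
                curvedness += abs(heading - prev_heading)
            prev_heading = heading
        if t1 - t0 > 0.5:  # assumed pause threshold of 500 ms
            pause += t1 - t0
    return {"distance": distance, "curvedness": curvedness, "pause": pause}

# A straight, steady drag: maximal directness, no pauses.
straight = [(0.0, 0, 0), (0.1, 10, 0), (0.2, 20, 0)]
print(motion_features(straight))
```

Feature vectors like these could then be fed to a classifier such as the random forest the study reports, with guessing traces expected to show higher curvedness and longer pauses than solution traces.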
Academic procrastination, the delaying of scheduled academic tasks, has become a common problem in student life. In this paper, we examine whether group size influences the effects of goal setting and goal sharing on reducing academic procrastination. In a three-week field study followed by interviews, 21 participants were randomly assigned to three two-person groups and three five-person groups. They used a suite of mobile applications to set up their study goals, track their study and distraction time, and share their goals with other group members for two weeks. Results suggest that the five-person groups achieved better outcomes than the two-person groups, with less fluctuation and a greater reduction in average distraction rate (distraction time divided by study time). We conclude by proposing several design implications for mobile applications that aim to reduce academic procrastination, and identify several future research directions on this topic.
Code puzzles can be an engaging way to learn programming concepts, but getting stuck in a puzzle can be discouraging when no help or feedback is available. Intelligent tutoring systems can provide automatic, individualized help, but they rely on having a robust and useful representation of student state. One common challenge for intelligent tutoring systems in the programming domain is the large space of possible student states. We propose a constrained set of features of student code based on detecting and classifying the bugs present in the code.
Experienced programmers are capable of learning new programming languages independently using various available resources, but we lack a comprehensive understanding of which resources they find most valuable in doing so. In this paper, we study how experienced programmers learn Rust, a systems programming language with extensive documentation and example code, an active online community, and descriptive compiler errors. We developed a task that requires learning Rust syntax and comprehending the Rust-specific approach to mutability and ownership. Our results show that users spent 43% of their online time viewing example code and that programmers appreciated in-line compiler errors, choosing to refresh, on average, every 30.6 seconds after first discovering this feature. The average time between these refreshes predicted total task time, but individual resource choices did not. Based on our findings, we offer design implications for language and IDE developers.
Smart home devices are growing in popularity due to their functionality, convenience, and comfort. However, they raise security and privacy concerns for users, who may have very little technical ability. User experience (UX) research focuses on improving user interactions, but little work has investigated how companies factor user experience into the security and privacy design of smart home devices as a means of addressing these concerns. To explore this in more detail, we conducted six in-depth interviews with employees of a large smart home company in the United Kingdom. We analyzed the data using grounded theory and found little evidence that UX is a consideration in the security design of these devices. Based on the results of our study, we propose user-centered design guidelines and recommendations to improve data protection in smart homes.
Recent advances in Natural Language Processing (NLP) offer the opportunity to design new forms of human-computer interaction with conversational interfaces. We hypothesize that these interfaces can interactively engage students, increasing the response quality of course evaluations in education compared to the common standard of web surveys. Past research indicates that web surveys come with disadvantages, such as poor response quality caused by inattention, survey fatigue, or satisficing behavior. To test whether conversational interfaces have a positive impact on enjoyment and response quality, we designed an NLP-based conversational agent, deployed it in a field experiment with 127 students in our lecture, and compared it with a web survey as a baseline. Our findings indicate that using conversational agents for evaluations results in higher response quality and enjoyment, and is therefore a promising approach to increasing the effectiveness of surveys in general.
We explore the information needs of hospitalized children by analyzing co-design prototypes created by children. We facilitated two simultaneous yet separate co-design sessions with mixed groups of previously hospitalized children and siblings of hospitalized children. The sessions focused on identifying information needs in the inpatient hospital setting. Our findings revealed that both hospitalized children and their siblings view information needs from multiple perspectives, including those of their parents, their physician, and the hospitalized child themselves. Both co-design groups identified similar needs, including communicating with people outside their room, tracking important people, and having entertainment for the hospitalized child.
People with dementia often experience a lack of social engagement after moving from the home environment into long-term care. Professional caregivers aim to build social relations with residents in dementia care homes to empower and socially include them during everyday care. However, the natural imbalance between the caregiver and the cared-for positions the person with dementia in a passive role, making it difficult for them to initiate and build relations with caregivers. In this paper, we present 'Turnaround' as a design exploration into care relations and the potential role of design to support these relations. Turnaround is a musical interface to facilitate collaborative acts of music-making. Our preliminary results reveal how the role of resident and caregiver shifted towards equal partnership during the exploration and interaction with Turnaround. Furthermore, we argue that technologies should foster partnerships in care activities by facilitating shared forms of expression and reinforcement of agency.
Exposure to continuous stress can have a negative impact on a person's mental and physical well-being. Stress monitoring and management, which aim to analyze or mitigate the effects of stress, are an active area of research. A promising approach for detecting stress is measuring bio-signals such as an electroencephalogram (EEG) or an electrocardiogram (ECG). In this study, we introduce a wearable in- and over-ear device that measures EEG and ECG signals simultaneously. The device is composed of dry, soft sensing electrodes conformally integrated on the surface of earbuds. We carried out a pilot study exposing test subjects to three standard stressors (Stroop, memory search, and mental arithmetic) while measuring their EEG and ECG signals. Preliminary results indicate the feasibility of classifying various stress conditions using a convolutional neural network.
Individuals with paranoia often experience a high level of self-criticism and negative emotions. Guided compassion-focused (CF) imagery has been shown to be successful in reducing these negative emotions and paranoid thoughts. However, some individuals have difficulties with CF imagery. By enabling a sense of presence, immersive virtual environments can overcome these limitations and induce specific emotional responses to support the development of self-compassionate feelings. In our study, we compared an immersive CF condition (CF-VR) with a control VR condition in a student sample of N = 21 participants with slightly elevated symptoms of paranoia. A virtual mission on the moon was designed and implemented to induce self-compassionate feelings through interaction with a space nebula that represented the power of compassion. Our results show that the CF-VR intervention was well accepted and effective in reducing state paranoid thoughts. Worry decreased significantly within the CF-VR group, while self-compassion increased.
We present a study comparing physiological and psychological restoration in matched real and virtual natural environments. Participants (n=24) experienced a real forest or one of two audiovisual virtual forests wearing a head-mounted display: a 3D forest or a 360-degree video. The results showed that some of the benefits of the real forest could also be obtained using virtual equivalents. Furthermore, we found the 3D forest to be emotionally more restorative than the 360-degree video forest. The findings can be used in creating restorative virtual environments for people who are unable to visit real natural environments.
"Good" intentions, such as to exercise more, only rarely spur action. In contrast, so-called "implementation intentions" explicitly relate goal-directed behavior to particular situations (e.g., when, where, and how). Studies show that this has a positive effect on goal achievement. This paper explores whether technology can support the transformation of "good" intentions into concrete implementation intentions and support their triggering and routinization. Specifically, we report three single-case studies with a functional prototype. This prototype supported creating implementation intentions, putting them into a calendar, and being reminded through an object representative of the planned activity. Through the prototype, all three participants engaged more in the activities chosen to fulfill their intentions. All in all, the notion of supporting individual implementation intentions through technology seems a viable strategy for supporting behavior change.
The only known therapy for stroke, a leading cause of death and disability, must be administered within 3 hours of the onset of symptoms to be effective. Accurately diagnosing a stroke as soon as possible after it occurs is difficult, as it requires a subjective evaluation by a clinician in a hospital. Given this narrow time window, stroke evaluation would benefit from computational approaches that identify and quantify stroke symptoms efficiently. Here, we propose the design of a novel interface that provides clinicians with visualizations of the results of a machine learning-based technological aid for stroke diagnosis. To effectively support clinicians in determining stroke type, the proposed approach allows them to compare their own manual stroke evaluation with the results of the diagnostic system. By developing and evaluating our prototypes with neurologists, we explore how best to integrate technological aids into busy hospital workflows without burdening clinicians or biasing their decision-making processes. We found that properly balancing the predictions of humans with those of technology is key to promoting the adoption of the latter in hospitals.
A Facebook Messenger chatbot, Sunny, was designed and deployed to promote positive social connections and enhance psychological wellbeing. A 10-day study was conducted with three pre-existing social groups of four members each in the control (n=12) and experimental (n=12) conditions. Both groups completed initial assessments and daily reports, and the experimental groups interacted with Sunny. Exit interviews indicated three key themes: 1) Sunny prompted self-reflection, boosting participants' sense of self-worth and the depth of their relationships; 2) using Sunny encouraged participants to send heartfelt messages they would not have shared otherwise; and 3) participants enjoyed accessing positive messages "on-demand". Experimental groups showed an average increase in psychological wellbeing of 1.73 (SD = 6.08), compared to 0.5 (SD = 5.94) in the control. Our results suggest that an AI-based chatbot like Sunny could provide preventative care, promoting strong social connections and psychological wellbeing.
The operating room is fertile ground for creativity: problem solving is common, different surgical tasks impose particular constraints, and a wide range of tools are available. The introduction of telemanipulated robots in Minimally Invasive Surgery (MIS) has impacted the work of surgeons and their teams in terms of communication, use of perceptual senses, social structure, and roles, among other things. But do they also impact creativity? I present preliminary results of a re-analysis of data from an earlier field study, focusing on creative tool use. Results suggest that current surgical robots limit creativity as they impose hard constraints: they remove haptic feedback, isolate the surgeon, and are built in rigid ways.
This paper presents a preliminary study of the hands-on creation of an intelligent decision support tool (IDST) for occupational health (OH) physicians. We addressed this challenge through an iterative design process consisting of three phases with different levels of stakeholder involvement, spanning from understanding the context to developing the concept and consolidating the design. We identified a set of design considerations focused on enriching data collection, improving the accessibility of information, and blending decision support into the workflow. To demonstrate these insights, we developed the concept of an AI-based OH consultation, called ConsultAI: a conversational assistant that provides real-time decision support to OH physicians during clinical interviews. Based on this case study, we discuss stakeholder engagement in the design of IDSTs for OH physicians.
We introduce a novel interactive narrative exhibit supporting general-public learning about Hip Hop culture and history, developed as a collaboration between the MIT Center for Advanced Virtuality, the Universal Hip Hop Museum, and Microsoft, and supported by the TunesMap Educational Foundation and internationally known Afrofuturist artists Black Kirby. The exhibit's narrative system is personalized by categorizing users based on evaluating their input data in light of a social psychology model grounded in musical identity theory. The system uses this input to determine which interactive narrative and customized music playlist to present to the user. The system has been deployed as the central interactive display within the [R]Evolution of Hip Hop exhibit of the Universal Hip Hop Museum. Future work will involve analyzing user feedback data from the thousands of local and international exhibit visitors to determine the impact of personalization on visitor engagement, satisfaction, and learning.
User perceptions of personas affect the adoption of personas for decision-making in real organizations. To investigate how experience affects the way an individual perceives a persona, we conduct an experimental study with individuals less and more experienced with personas. Quantitative results show that previous experience increases several important perceptions, including willingness to use, empathy, likability, and completeness. Results suggest that methods that increase experience (e.g., training, workshops, scenarios) should be applied alongside persona deployment, as desirable persona perceptions increase with individuals' experience.
For musicians, improvising with other musicians is not uncommon. But what happens when musicians engage in musical improvisation with semi-autonomous machines? We investigated a seminar in which design students built machines for musicians to improvise with, exploring the experiences of musicians when improvising with non-human musicians as well as the challenges of designing them. Among other things, we found that while from an outside perspective the machines appeared as independent actors interacting with the musicians, the musicians experienced them as additional instruments they controlled. Designing the interaction of non-human actors also proved challenging for the designers.
Wearable technology for sports and fitness has increased in popularity in the last decade, but most technological solutions in research are designed for a single specific fitness practice and target group. Toward validating a design approach and the resulting wearable designs across several fitness practices, we used three wearable Training Technology Probes (TTPs) originally designed for, and tested in, the context of yoga and circus training. They were used in a design activity aimed at exploring and opening up the design space of technology for weightlifting. Our exploration proved fruitful and substantiated the versatility, adaptability, and usefulness of the TTPs on account of their design features. Here we present initial insights from deploying the TTPs in that domain. The TTPs served as probing tools, helping to surface the goals and challenges of weightlifting. They were appropriated for new uses in weightlifting exercises, leading to interesting design iterations that will inform future work.
Underwater robots are essential equipment for exploring the marine environment, and it is important that children are exposed to these technologies as early as possible, especially as there is high demand for developing expertise in and awareness of underwater robotics. Although maker toolkits for children currently exist, few focus specifically on integration with the water environment. In this paper, we explore ModBot, a maker toolkit that can be applied in the water environment. The hardware comprises electronic, counterweight, and shape modules that can be manipulated to build underwater robots, while the software application allows children to learn concepts and receive construction feedback. This paper presents the system design of ModBot, its design rationale, and a user study of its usability. Our system is expected to spark children's interest in and creativity with underwater robots, and to foster their understanding of the water environment.
In this late-breaking work, we describe the legacy of feminist theory within HCI literature, focusing on Shaowen Bardzell's seminal publication "Feminist HCI: Taking Stock and Outlining an Agenda for Design," one of the first papers to propose the adoption of feminist theories into HCI research and practice. We conducted a citation analysis of 70 published texts citing this paper, using the Harwood functions to identify how feminist theory concepts have been cited in HCI and whether the proposed frameworks have been implemented. The paper was mostly cited to give 'credit' and, most frequently, to 'signpost' readers toward topical issues in HCI, with little evidence of explicit use or extension of its proposed frameworks. These results demonstrate a largely one-dimensional impact, characterized by a lack of deep engagement with feminist theories. We identify opportunities to expand feminist approaches to further improve research and practice in HCI.
Time is a growing topic of interest in HCI. Technological developments and changes in everyday routines are creating new perceptions of time that are becoming widespread in HCI. While these perceptions expand our current understanding of time, designing for them is an important yet overlooked challenge. In this work, we investigated five time concepts from the HCI literature (right time, clock time, digital time, plastic time, and collective time). We conducted a photography-capturing task with four design researchers to understand how they interpret and perceive these concepts. We then created two design speculations based on their perceptions and interpretations to illustrate how emerging time concepts can inspire new design ideas around representations of time.
This paper presents the design and development of CoolCraig, a mobile application supporting the co-regulation of behaviors and emotions of children with ADHD. The application works on both a smartwatch, for children, and a smartphone, for their caregivers. We describe a usage scenario showing how CoolCraig can support co-regulation between children and their caregivers through a goals-and-rewards system and the tracking of emotions and behaviors.
This paper reports on the exploration of a tablet-based augmented reality (AR) application for use by occupational therapists (OTs). The application enables OTs to support individuals with physical impairment and disability when making home modifications. This is a necessary process that empowers these individuals to compensate for their reduced abilities and to maintain independent living. Through semi-structured interviews and a participatory workshop, current home modifications and the challenges involved in the process of implementation were investigated, and an AR prototype was co-designed with the OTs. They found the AR tool to be potentially beneficial as it allows them to search, find, and select assistive technologies (ATs) for use in homes and to demonstrate the home-modification plans to users. The tool will enable users to envision the most appropriate scenarios when purchasing and utilizing ATs in the home.
This paper presents a series of preliminary studies aimed at developing a framework for designing personal stress-care technology for women. We engaged 28 women in different research activities to investigate their relationships with daily-life stress and self-care. The paper concludes by presenting initial findings and future plans for taking this research forward, including the design and development of a Voice User Interface (VUI) to facilitate self-care.
This paper presents two contributions: (i) an algorithm for generating visualizations of the historical interfield relations of a topic within a scientific corpus by parsing scientific literature and linking it using citation metrics, and (ii) a poster generated using the aforementioned algorithm that depicts the historical development of 'interaction' within the field of Human-Computer Interaction, based on all CHI papers and their citation data from Google Scholar. Furthermore, we discuss the possibilities and limitations of disseminating scientific developments through artistic practice.
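The linking step of such an algorithm can be sketched as building a citation graph and grouping papers by year, the raw material for a historical-development visualization. The record format and field names below are hypothetical, as the paper does not specify its implementation:

```python
from collections import defaultdict

# Hypothetical corpus records: (paper_id, year, ids of papers it cites).
papers = [
    ("p1", 1992, []),
    ("p2", 1997, ["p1"]),
    ("p3", 2003, ["p1", "p2"]),
    ("p4", 2010, ["p2", "p3"]),
]

def citation_timeline(papers):
    """Group papers by publication year and count incoming citations,
    i.e., the citation-metric linking that drives the visualization."""
    in_degree = defaultdict(int)
    by_year = defaultdict(list)
    for pid, year, refs in papers:
        by_year[year].append(pid)
        for ref in refs:
            in_degree[ref] += 1
    return dict(by_year), dict(in_degree)

by_year, in_degree = citation_timeline(papers)
print(by_year)
print(in_degree)
```

A renderer would then lay the year buckets out on a timeline and scale each node by its citation count; the actual poster-generation step is beyond this sketch.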
Conversational systems are inherently disadvantaged when indicating either what capabilities they have or the state they are in. The notion of habitability, the appropriate balance in design between the language people use and the language a system can accept, emerged out of early difficulties with conversational systems. This literature review aims to summarize progress in habitability research and explore implications for the design of current AI-enabled conversational systems. We found that (i) definitions of habitability focus mostly on matching user expectations and system capabilities by employing well-balanced restrictions on language use; (ii) there are two comprehensive design perspectives on different domains of habitability; and (iii) there is one standardized questionnaire with a sub-scale that measures habitability in a limited way. The review has allowed us to propose a working definition of habitability and some design implications that may prove useful for guiding future research and practice in this field.
During VR demos we have performed over the last few years, many participants (in the absence of any haptic feedback) have commented on their perceived ability to 'feel' differences between simulated molecular objects. The mechanisms for such 'feeling' are not entirely clear: observing from outside VR, one can see that there is nothing physical for participants to 'feel'. Here we outline exploratory user studies designed to evaluate the extent to which participants can distinguish quantitative differences in the flexibility of VR-simulated molecular objects. The results suggest that an individual's capacity to detect differences in molecular flexibility is enhanced when they can interact with and manipulate the molecules, as opposed to merely observing the same interaction. Building on these results, we intend to carry out further studies investigating humans' ability to sense quantitative properties of VR simulations without haptic technology.
With the development of 3D-printing techniques, recreating household objects has become a trend. We present ShrinkyKit, a material-oriented method that allows novices to easily make adaptations to everyday objects with a desktop fused deposition modeling (FDM) 3D printer. Compared to existing methods, ours benefits from the shrinking property of printed thermoplastic to fasten arbitrary shapes without requiring high-fidelity manual modeling. By means of a material experiment, we constructed a design tool and multiple trigger environments through a set of everyday design cases, enabling users to custom-design and quickly fabricate adaptations that reform old items or prototype assistive technologies.
In office environments, workers spend the majority of their workday sitting in a static position behind a desk or around a meeting table. Prolonged sitting time and sedentary behavior have severe negative health effects. In this explorative study, we investigated how different postures can be stimulated during meetings. We designed PositionPeak: three pieces of furniture aimed at composing a 'dynamic meeting room', subtly encouraging participants to avoid static postures. We video-recorded 5 meetings (N=16) and coded the number of position changes per participant. Participants also filled out a pre- and post-questionnaire about their experience. Our findings show that PositionPeak triggers people to adopt a variety of postures. Participants on average experienced a more efficient meeting but reported physical discomfort with some objects. We discuss the influence of PositionPeak on the meetings' social dynamics, the acceptance of new conventions, and design recommendations for new meeting facilities.
Wildlife rehabilitation centers are tasked with the difficult challenge of providing medical care to wildlife while limiting human contact to ensure a successful transition into the wild. Building on interviews with volunteers and 6 months of participatory observation work, we present a smart habitat design for the rehabilitation of Virginia opossum joeys. Using maker technology, we crafted a prototype utilizing sensors, a microcontroller, and an Android application. We then discuss the future direction for this project, including improvements and a field deployment.
Telepresence robots allow users to freely explore a remote space and provide a physical embodiment in that space. However, they lack a compelling representation of the remote user in the local space. We present VROOM (Virtual Robot Overlay for Online Meetings), a two-way system for exploring how to improve the social experience of robotic telepresence. For the local user, an augmented-reality (AR) interface shows a life-size avatar of the remote user overlaid on a telepresence robot. For the remote user, a head-mounted virtual-reality (VR) interface presents an immersive 360° view of the local space with mobile autonomy. The VR system tracks the remote user's head pose and hand movements, which are applied to an avatar. This provides the remote user with an identifiable self-embodiment and allows the local user to see the remote user's head direction and arm gestures.
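The head-pose and hand mapping described above amounts to re-expressing tracked VR data in the robot's frame so the AR avatar mirrors the remote user. The sketch below illustrates this for a yaw-only head model; the function names, data layout, and frame conventions are our own illustrative assumptions, not VROOM's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AvatarPose:
    head_yaw_deg: float  # direction the avatar's head faces, relative to the robot body
    left_hand: tuple     # (x, y, z) hand positions in the robot's local frame
    right_hand: tuple

def map_tracking_to_avatar(vr_head_yaw_deg, vr_left_hand, vr_right_hand,
                           robot_heading_deg):
    """Re-express VR tracking data relative to the robot so the AR avatar
    overlaid on it mirrors the remote user's head direction and gestures.
    All names and the yaw-only head model are illustrative assumptions."""
    # Head yaw relative to the robot body, wrapped to [-180, 180).
    rel_yaw = (vr_head_yaw_deg - robot_heading_deg + 180.0) % 360.0 - 180.0
    return AvatarPose(rel_yaw, vr_left_hand, vr_right_hand)
```

A full system would track full 6-DoF poses and hand orientations, but the same frame change applies.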
Teachable Machine (teachablemachine.withgoogle.com) is a web-based GUI tool for creating custom machine learning classification models without specialized technical expertise. (Machine learning, or ML, lets systems learn to analyze data without being explicitly programmed.) We created it to help students, teachers, designers, and others learn about ML by creating and using their own classification models. Its broad uptake suggests it has empowered people to learn, teach, and explore ML concepts: people have created curricula, tutorials, and other resources using Teachable Machine, on topics like AI ethics, at institutions including the Stanford d.school, NYU's Interactive Telecommunications Program, and the MIT Media Lab, as well as in creative experiments. Users in 201 countries have created over 125,000 classification models. Here we outline the project and its key contributions of (1) a flexible, approachable interface for building ML classification models without ML or coding expertise, (2) a set of technical and design decisions that can inform future interactive machine learning tools, and (3) an example of how structured learning content surrounding the tool supports people in accessing ML concepts.
SensorNets is a bioinspired electronic skin integrated with multimodal sensor networks for interactive media applications, from wearables and self-aware objects to intelligent environments. It is developed by connecting miniaturized flexible printed circuit boards as two-dimensional sensor arrays with stretchable interconnects. The system is embedded between soft deformable layers, such as textiles or rubbers. The result is a soft sensate surface that can be distributed and conformally wrap and adapt to curved structures. Each node contains a microprocessor together with a collection of nine sensors and a light-emitting diode, providing multimodal data that can be used to detect various deformation, proxemic, tactile, and environmental changes. We show that the electronic skin can sense and respond to a variety of stimuli simultaneously, and that it opens up possibilities for sensor-rich virtual and augmented reality-based visualization and interaction.
Modern electronic design automation (EDA) tooling tends to focus on either the system-level design or the low-level electrical connectivity between physical components on a printed circuit board (PCB). We believe that a usable and functional system for circuit design needs to be able to interleave both levels of abstraction seamlessly and allow designers to transition between them freely. Existing work has experimented with approaches like circuit synthesis, functional characterization, or fine-grained physical modeling. Each of these approaches augments the design process as it exists today, with its fundamental split between various levels of abstraction. We notice that hierarchical block diagrams can capture both high-level system structure and fine-grained physical connectivity, and use that symmetry to construct a model for electronic circuits that can span the entire design process. Additionally, we construct user interfaces for our model that can support users of different skill levels throughout a design task. We discuss the design of our system, detailing both fundamental abstractions and usability trade-offs, and demonstrate its current capabilities through the design of example electronics projects.
Technological advances in autonomous transportation systems have brought them closer to road use. However, little research is reported on children's behavior in autonomous buses (ABs) under real road conditions and on improving parents' trust in leaving their children alone in ABs. Thus, we aim to answer the research question: "How can we design ABs suitable for unaccompanied children so that the parents can trust them?" We conducted a study using a Wizard-of-Oz method to observe children's behavior and interviewed both parents and children to examine their needs in ABs. Using an affinity diagram, we grouped children's and parents' needs under the following categories: entertainment, communication, personal behavior, trust, and desires. Using an iterative human-centered design process, we created Otto, a system comprising a smartphone app for parents to communicate with their children and a tablet app that keeps children entertained during the ride.
People increasingly use online video platforms, e.g., YouTube, to locate educational videos to acquire knowledge or skills to meet personal learning needs. However, most existing video platforms display video search results as generic ranked lists based on relevance to queries. Such relevance-based displays do not take into account the inner structure of the knowledge domain and may not suit the needs of online learners. In this paper, we present ConceptGuide, a prototype system for learning orientations to support ad hoc online learning from unorganized video materials. ConceptGuide features a computational pipeline that performs content analysis on the transcripts of YouTube videos queried by the user and generates concept-map-based visual recommendations of conceptual and content links between videos, forming learning pathways with structures that are feasible and usable for learners to consume.
When configuring furniture during sales consultancy in a furniture store, customers are usually confronted with abstract 2D drawings or simplistic renderings of the discussed configuration on a display. We present a novel application based on virtual reality (VR) to support furniture store consultations. Our system allows customers to elaborate different configurations of a couch in dialogue with a sales expert and lets customers experience them through immersive VR in a variety of virtual environments. While the sales expert can modify the couch layout and fabric, the customer can stay immersed and experience a realistic tactile feeling of the configured couch through passive haptic feedback provided by a sample piece the customer can sit on. A preliminary field study in a furniture store showed that the system is immersive, conveying realistic impressions of the couch configurations. Customers perceived the VR configurator as useful since it would make their purchase decisions easier.
Competitions that directly pit software agents against one another have proven to be an effective and entertaining way to advance the state of the art in a multitude of AI domains. Less frequently, human-agent competitions have been held to gauge the relative competence of humans vs. agents, or agents vs. agents as measured indirectly by their performance against humans. We are developing a platform that supports a new type of AI competition that involves both agent-agent and human-agent interactions situated in an immersive environment. In this competition, human buyers haggle (in English) with two life-size AI agents that attempt to sell them various goods. We describe several research challenges that arise in this context, present the platform architecture and accompanying technologies, and report on early experiments with simple agents that establish feasibility and suggest that human participants enjoy the experience.
With an increasing number of artificial agents operating in the real-world, understanding the formation of conversational groups, or F-formations, around them can provide critical information for interactions. In this paper, we report on an experiment involving conversational groups of up to seven people. By capturing their positions and orientations, we compare participant F-formations when interacting with a moderator that is either physically present or shown on a screen. We observe differences in group formation based on the moderator embodiment, but also find that people's positions can be predicted as a function of the number of people in the group and the moderator's position.
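The finding that people's positions can be predicted from group size and moderator position could be operationalized with a simple geometric model like the one sketched below; the circular-arc layout and all constants are our own illustrative assumptions, not the paper's fitted model.

```python
import math

def predicted_positions(n_people, moderator_xy, base_radius=1.0,
                        per_person=0.15, arc_deg=180.0):
    """Place n_people on a circular arc around the moderator, with the
    radius growing with group size (a toy stand-in for a fitted model;
    all constants are assumptions for illustration)."""
    r = base_radius + per_person * n_people
    mx, my = moderator_xy
    step = math.radians(arc_deg) / max(n_people - 1, 1)
    start = math.radians(90.0 + arc_deg / 2.0)  # arc opens toward the moderator
    return [(mx + r * math.cos(start - i * step),
             my + r * math.sin(start - i * step))
            for i in range(n_people)]
```

A fitted version would estimate the radius and arc parameters from the captured position data rather than fixing them.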
This work proposes a combination of embedded computation and marker tracking to provide more robust augmentations for composed objects in Tangible Augmented Reality. By integrating conductive elements into the tangibles' sides, communication between embedded microprocessors is enabled, such that a connected composition can be computed without relying on any marker tracking information. Consequently, the virtual counterparts of the tangibles can be aligned, and this virtual composition can be attached to a single marker as a whole, increasing the tracking robustness towards occlusions and perspective distortions. A technical evaluation shows that this approach provides more robust augmentations if a tangible block in a composition is occluded by at least 50% or perspectively distorted by at least 40 to 50 degrees, depending on the block's size. Additionally, a user test based on the use case of a couch configuration tool shows promising results regarding usability and user experience.
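The core idea, attaching the whole virtual composition to a single visible marker using the connectivity known from the embedded electronics, can be sketched in 2D as follows; the pose representation and function names are illustrative assumptions, not the system's actual 6-DoF math.

```python
import math

def composition_poses(visible_marker_pose, block_offsets):
    """Place every virtual block from one tracked marker.

    visible_marker_pose: (x, y, yaw_deg) of the one marker still tracked.
    block_offsets: per-block (dx, dy) offsets in the composition frame,
    known from the embedded connectivity rather than per-block tracking.
    A 2D sketch of the idea, not a full 6-DoF implementation."""
    x, y, yaw = visible_marker_pose
    c, s = math.cos(math.radians(yaw)), math.sin(math.radians(yaw))
    return [(x + c * dx - s * dy, y + s * dx + c * dy, yaw)
            for dx, dy in block_offsets]
```

Because only one marker needs to remain visible, occluding or distorting the other blocks' markers no longer breaks the augmentation.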
A new kind of widget has begun appearing in the data science notebook programming community that can fluidly switch its own appearance between two representations: a graphical user interface (GUI) tool and plain textual code. Data scientists of all expertise levels routinely work in both visual GUIs (data visualizations or spreadsheets) and plaintext code (numerical, data manipulation, or machine learning libraries). These work tools have typically been separate. Here, we argue for the unique role and potential of fluid GUI/text programming to serve data work practices. We contribute a generalized method and API for robust fluid GUI/text coding in notebooks that addresses key questions in code generation and user interactions. Finally, we demonstrate the potential of our method in two notebook tool examples and a usability study with professional data science and machine learning practitioners.
We present PneuFetch, a light haptic cue-based wearable device that supports blind and visually impaired (BVI) people in fetching nearby objects in an unfamiliar environment. In our design, we generate friendly, non-intrusive, and gentle presses and drags to deliver direction and distance cues on a BVI user's wrist and forearm. As a proof of concept, we discuss our PneuFetch wearable prototype, contrast it with past work, and describe a preliminary user study.
TAILOR is a wearable device in the form of a sleeve designed to reduce the incidence of repetitive strain injuries and carpal tunnel syndrome. With the significant increase in office work and the growing proportion of computer use, injuries around the wrist and the elbow are becoming more prevalent. TAILOR aims to deliver a relatively unobtrusive solution for monitoring the strain on the wrists and elbows, and to provide a more proactive way of preventing these injuries. In our initial evaluation we found that our system is desirable to most users, with 91% saying they would use the device again.
This paper describes the design of a virtual reality learning environment that reconstructs a traditional Chinese painting, Spring Morning in the Han Palace, using a head-mounted platform. We consider issues of art style, enhanced interpretation, and interaction design strategies that may impact the learning experience, and discuss our future plan to conduct user studies with young undergraduate students.
Prior work has explored improving the efficacy of exergames through participatory design with children. Children are not necessarily able to make informed decisions about their fitness, so their perspectives form only half the picture. Adults who are invested in the problem of children's fitness (e.g., PE teachers) are a valuable missing perspective. As a first step to understanding what we can learn from these stakeholders to aid the design of exergames, we conducted one in-depth interview with a PE teacher and several focus groups with children. Our findings showed that, although both children and the PE teacher like similar game elements, children viewed the elements through the lens of fun while the PE teacher viewed the elements through the lens of effectiveness. Our preliminary findings establish the importance of including such stakeholders in the formative design of exergames.
The notion of gustosonic refers to the link between eating actions and listening within a combined multisensory experience. When it comes to designing celebratory technology for eating, i.e., technology that celebrates the experiential and playful aspects of eating, the use of sound has been mostly underexplored. In this paper, we present our work on two case studies for the design of playful gustosonic experiences. Through an analysis of user experiences of our work, we propose a design framework for playful gustosonic experiences that captures the interrelationship between interactive sounds and eating experiences. Ultimately, with our work, we aim to inspire designers to create gustosonic experiences that support a more playful relationship with food.
In this paper we present a novel method of hybridizing physical board games by adding dynamic, digitally controlled fields utilizing electrochromic inks. In particular, we built electrochromic displays that fit the hexagonal fields of the Settlers of Catan board game, thereby adding the ability to change a field's resources during game play. We report on the prototypical implementation and two preliminary studies that indicate how these dynamic fields can increase the excitement and reward of playing the game.
Haptic feedback brings immersion and presence in Virtual Reality (VR) to the next level. While research proposes the use of various tactile sensations, such as vibration or ultrasound approaches, the potential applicability of pressure feedback on the head is still underexplored. In this paper, we contribute concepts and design considerations for pressure-based feedback on the head through pneumatic actuation. As a proof of concept implementing our pressure-based haptics, we further present PneumoVolley: a VR experience similar to the classic volleyball game but played with the head. In an exploratory user study with 9 participants, we evaluated our concepts and identified a significantly increased involvement compared to a no-haptics baseline, along with high realism and enjoyment ratings for pressure-based feedback on the head in VR.
Concepts utilizing applied ethics, such as responsible conduct of research (RCR), can prove difficult to teach due to the complexity of problems faced by researchers and the many underlying perspectives involved in such dilemmas. To address this issue, we created Academical, a choice-based interactive storytelling game for RCR education that enables players to experience a story from multiple perspectives. In this paper, we describe the design rationale of Academical, and present results from an initial pilot study comparing it with traditional web-based educational materials from an existing RCR course. The preliminary results highlight that utilizing a choice-based interactive story game may prove more effective for RCR education, with significantly higher engagement and comparable or better scores for tests of RCR topics.
This study investigates the potential for learning about and motivating earthquake preparedness through video game play. 112 participants played a custom-built video game for between 5 and 30 minutes in an experiment involving two avatar selection conditions (choice vs. random assignment) and two avatar power conditions (more resources vs. fewer resources). We assessed pre- and post-test changes in levels of self-efficacy, outcome expectation, and intent to act relating to various preparedness and response actions. We found that playing the game increases these scores significantly in all game conditions immediately after play. Where avatar characteristics were significant, more resources led to higher scores. However, contrary to our predictions, randomly assigned avatars led to higher increases in scores than when players chose and named their avatar. Future studies are planned to explore a variety of other game features designed to maximize motivation and behavior change in young adults.
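The pre-/post-test change in scores reported here is the kind of within-subjects comparison a paired test captures. The sketch below shows a minimal paired t statistic; it is our illustration of the analysis style, not the authors' actual analysis code.

```python
import math

def paired_t(pre, post):
    """Mean pre-to-post change and paired t statistic for one measure
    (e.g., self-efficacy scores before and after play)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the paired differences.
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, mean / math.sqrt(var / n)
```

The resulting t value would then be compared against a t distribution with n-1 degrees of freedom for each outcome measure and game condition.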
Video accessibility is crucial for blind and visually impaired individuals for education, employment, and entertainment purposes. However, professional video descriptions are costly and time-consuming. Volunteer-created video descriptions could be a promising alternative, but they vary in quality and creating them can be intimidating for novice describers. We developed a Human-in-the-Loop Machine Learning (HILML) approach to video description by automating video text generation and scene segmentation while allowing humans to edit the output. Our HILML system was significantly faster and easier to use for first-time video describers compared to a human-only control condition with no machine learning assistance. Blind and visually impaired users rated the quality of the video descriptions, and their understanding of the topic, as significantly higher for the HILML system than for the human-only condition.
People have an innate ability to process environmental information by integrating and updating multiple streams of sensory input. When local and global visual information are not integrated smoothly, it becomes difficult for an individual to formulate the whole from the details. This ability is important when it comes to understanding a scene or "seeing the big picture." In this work, we design, develop, and evaluate a system to augment the integration of global with local features, alongside Speech-Language Pathologists (SLPs) who work with autistic children with significant speech and language disorders. We present our preliminary experiment with a high-fidelity global filter and hypothesize that the filtered image can shift eye gaze (and therefore visual attention) from local details to global features. Global interpretation will be further encouraged by asking participants "what was the picture about?" We present results and discuss limitations and future work.
We present HeliCoach, an adaptive orientation and mobility (O&M) training system that helps visually impaired people gain audio-orientation ability efficiently. HeliCoach is a mastery-based adaptive learning system consisting of three components: a drone-based, physically simulated 3D audio space; a UWB (Ultra-Wide Band)-supported assessment system; and an intelligent belt with haptic feedback as a training scaffold. We evaluated the acceptability and efficiency of HeliCoach with both blindfolded sighted participants and visually impaired participants in an audio-orientation task. The results showed that HeliCoach was efficient and engaging. We therefore believe HeliCoach can be applied to many tasks involving simulated 3D audio spaces to train people more efficiently.
We present an interactive tactile map tailored for older adults with vision impairments, which should help them acquire spatial knowledge of complex indoor environments, such as a residential care home. Interestingly, current research largely omits visually impaired older adults. As the outcome of our iterative design process, we introduce a tactile map design that employs haptically salient key objects and large touch-sensitive segments, along with a route-guidance function. The route-guidance function enriches the tactile map with the desired route information to support route knowledge acquisition. A qualitative evaluation with older adults (mean age 81.4 years) indicates that our concept can help this audience build spatial knowledge of indoor environments.
Literature on the challenges of building personas for marginalized populations has led to attempts to update or improve the format. The construction of personas for people with disabilities and intersecting identities is underexplored in this area. Drawing on past work on the importance of intersectionality in HCI, this paper presents results from a qualitative interview study of nine participants with a) multiple disabilities or b) a disability and one or more other marginalized identities. From these findings, the authors present considerations for designers or researchers interested in creating personas depicting users at the intersection of marginalized identities.
We present a system to allow blind people to stand in line in public spaces by using only an off-the-shelf smartphone. The technologies for navigating blind pedestrians in public spaces are rapidly improving, but tasks that require understanding the behavior of surrounding people are still difficult to assist. Standing in line at shops, stations, and other crowded places is one such task. Therefore, we developed a system that continuously detects and conveys the distance to the person in front by using a smartphone with an RGB camera and an infrared depth sensor. The system signals three levels of distance via vibration patterns, allowing users to start or stop moving forward to the right position at the right time. To evaluate the effectiveness of the system, we performed a study with six blind people. We observed that the system enabled blind participants to stand in line successfully and gave them more confidence.
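The three distance levels signalled through vibration could be implemented as a simple threshold mapping over the depth-sensor reading; the threshold values below are illustrative assumptions, as the abstract does not state the exact values used.

```python
def alert_level(distance_m, near=0.5, mid=1.0, far=2.0):
    """Map the measured distance to the person ahead to an alert level:
    3 (closest, strongest vibration pattern) down to 1, or 0 when nobody
    is in range. Threshold values are assumptions for illustration."""
    if distance_m <= near:
        return 3
    if distance_m <= mid:
        return 2
    if distance_m <= far:
        return 1
    return 0
```

A deployed system would likely add hysteresis or smoothing so that sensor noise near a threshold does not cause the vibration pattern to flicker between levels.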
To make it easier to add American Sign Language (ASL) to websites, which would increase information accessibility for many Deaf users, we investigate software to semi-automatically produce ASL animation from an easy-to-update script of the message, requiring us to automatically select the speed and timing for the animation. While we can model the speed and timing of human signers from video recordings, prior work has suggested that users prefer animations to be slower than videos of human signers. However, no prior study had systematically examined the multiple parameters of ASL timing, which include: sign duration, transition time, pausing frequency, pausing duration, and differential signing rate. In an experimental study, 16 native ASL signers provided subjective preference judgements during a side-by-side comparison of ASL animations in which each of these five parameters was varied. We empirically identified and report users' preferences for each of these individual timing parameters of ASL animation.
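The five timing parameters can be combined into a single animation timeline, as in the sketch below; this parameterization is our own illustration of how the parameters interact, not the study's stimulus-generation code.

```python
def animation_timeline(sign_durations, transition_time, pause_every,
                       pause_duration, rate=1.0):
    """Per-sign start times built from sign durations, transition time,
    pausing frequency/duration, and an overall signing-rate multiplier
    (an illustrative combination of the five parameters studied)."""
    t, starts = 0.0, []
    for i, d in enumerate(sign_durations):
        starts.append(t)
        t += d / rate + transition_time   # sign, then transition to the next
        if pause_every and (i + 1) % pause_every == 0:
            t += pause_duration           # insert a pause every N signs
    return starts, t
```

Varying one parameter at a time over such a timeline yields the kind of side-by-side animation pairs the study compared.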
Despite the need for representation of older adults in crowdsourced data, crowd work is generally not designed for older adults, and participation by older adults is low. In this paper, we demonstrate a process for designing crowd work for older adults: identifying their needs, designing an approach to foster their participation, and verifying its effectiveness. We found that when older people feel connected to others while doing crowd work, they are highly motivated. Furthermore, gamification is an effective tool for fostering their engagement when aligned with their needs and values, as opposed to the needs and values of younger participants. Lastly, we suggest important considerations and opportunities for designing crowd work approaches for senior citizens.
Graphical web user interfaces (GUIs) afford visual consumption and sensemaking of information, but present challenges for the auditory, and often sequential, information seeking of people using screen readers. Information foraging theory holds that users' information behavior is guided by information scent (mostly visual in GUIs), used to assess the value and cost of accessing goal-relevant information, and by rational decisions to maximize information gain. Previous research has linked the adaptive browsing behavior of screen reader users to a lack of webpage usability. In this study, we observed and compared the information seeking behavior of ten sighted and ten screen reader users. Findings show that screen reader users demonstrate adapted mental models of the visual information search interface. Additionally, their adaptive search result exploration strategies in the context of query intent highlight key concern areas in specific parts of the search process.
Memory loss is one of the most frequent symptoms associated with dementia. Losing the memories of meaningful social activities, such as visits from family, can be confronting not only for the person with dementia, but also for relatives and caretakers. Through an iterative design process involving people with dementia, caretakers, and relatives, we developed the RelivRing concept. The RelivRing enables people with dementia to relive the positive experience of visits from relatives. It allows relatives to leave audio messages about their visits, which the person with dementia might otherwise have forgotten. When listening to these messages, the person with dementia can re-experience the positive feeling of the visit. The design is adapted to the cognitive and perceptual abilities of people with dementia, informed by literature research and in-context user studies. With the RelivRing, we aim to maintain and strengthen existing social relations between people with dementia and their relatives.
There is an overall shortage of accessible educational material available for blind and low vision learners. This shortage is especially pronounced in the domain of electronics, where the materials are historically visually-rendered and complex. To address this, we took a qualitative approach to designing and evaluating tactile graphics and textual descriptions for building circuits. To gain an understanding of their efficacy, we provided a circuit description, component diagrams (Figure 4), and a tactile schematic as educational materials in a Blind Arduino workshop with eight participants and interviewed these participants about their experience. Our research revealed the complexities of designing these materials: our tactile component diagrams were usable and helpful, whereas our tactile schematics and circuit descriptions presented learning barriers in a microcontroller workshop. We provide recommendations for future research on designing accessible materials to teach electronics.
We present a survey (77 responses) and 10 follow-up interviews investigating how technology professionals include accessibility in design and development and what challenges they face. We asked technology professionals what they learned about accessibility in school, what resources they used, if any, and what tools they needed. We found that formal education inadequately prepared them to handle accessibility challenges across the software development lifecycle. Other reasons include inadequate accessibility tools and resources, and project timelines that do not account for retroactive changes. This work provides an update on the current state of software accessibility by comparing our results to previous research.
Maintaining awareness of the presence of colleagues can be difficult when collaboration is distributed across separate offices. In this paper we present CoasterMe, a situated desktop widget that leverages the natural behaviour of drinking to support informal awareness of a colleague's availability in the workplace. A pilot field trial showed that CoasterMe helped coworkers to build in-the-moment awareness of availability and supported an improved understanding of work routines, enhancing social coordination and preventing wasted effort. CoasterMe also created a sense of co-presence and connectedness by making users feel as if they are sharing a drink over distance.
In this paper, we propose OmniGlobeVR, a novel collaboration tool based on an asymmetric cooperation system that supports communication and cooperation between a VR user (occupant) and multiple non-VR users (designers) across virtual and physical platforms. OmniGlobeVR allows designers to access the content of a VR space from any point of view using two view modes: a 360° first-person mode and a third-person mode. Furthermore, we designed a shared gaze awareness cue to enhance communication between the occupant and the designers. The system also has a face window feature that lets designers share their facial expressions and upper-body gestures with the occupant in order to exchange and express information in a nonverbal context. Together, these features allow collaborators on the VR and non-VR platforms to cooperate, while letting designers easily access physical assets while working synchronously with the occupant in the VR space.
Collaborative Sequencing (CoSeq) is the process by which a group selects and arranges a set of items into a particular order. CoSeq is ubiquitous, occurring across diverse situations like trip planning or course scheduling. Although indicating preferences, communicating, and consensus building in CoSeq can be overwhelming for groups, little research has aimed at effectively supporting this process. To understand the design space of CoSeq, we ran a formative study to observe how participants utilize visualizations to strategically reduce their cognitive burden. We derived a novel design to enable sequence comparison using visualizations and evaluated its effect through a study. We found that attitudinal measures for the efficiency and effectiveness of the consensus building process were significantly improved with our design.
Here we present Flippo, a social wearable creature prototype. The design is meant to support people in taking breaks away from their desks and moving, as well as in socializing with others by caring for their creatures. Flippo takes the shape of a soft and fuzzy bug-like creature. It lives on people's shoes and occasionally nudges them when it needs to move and have social interaction with another creature of its species. It does this by making sounds and visual effects, and requires that the wearers coordinate shaking their feet and helping the creatures face each other. If Flippo is satisfied with the interaction it displays a light animation and plays 'happy' tunes; if not, it nudges the wearer again. We ran a field study with 13 participants; preliminary results show the potential of the design to encourage and facilitate co-located social interaction.
The process of creating stories with others is fun and engaging for participants, but often labor-intensive. This is especially true for the central organizer, who is usually needed to provide scaffolds that guide participant contributions and enforce a cohesive story structure. To make this task easier for the organizer and participants, we propose the use of context-awareness to scaffold participant contributions in a collaborative storytelling experience. We present Cast, a platform that allows an organizer to concisely define a story script that is opportunistically deployed to users through a context-aware mobile app. Leveraging user contexts to define their roles within a story provides users with scaffolds that promote cohesive narrative development, and automating this process avoids burdening the organizer. Results from a pilot study show that creating stories with Cast is engaging yet low-effort for participants, showcasing the potential for context-awareness to make collaborative storytelling easier without sacrificing enjoyability.
Video conferencing is now a reality for primary care appointments. Although typical systems akin to Skype have been deployed for video appointments, it is not clear how these systems should be designed to meet the real needs of doctors. We conducted contextual interviews with family physicians to explore how to support video-based appointments with patients. Our findings reveal challenges across several themes and present insights on design implications for supporting doctors' control over the workflow, privacy protection, and camera work on mobile devices.
The current work examines interactions that are enabled when depositing a human-safe hydrogel onto textile substrates. These hydrogel-textile composites are water-responsive, supporting reversible actuation. To enable these interactions, we describe a fabrication process using a consumer-grade 3D printer. We show how different combinations of printed hydrogel patterns and textiles create a rich actuator design space. Finally, we show an application of this approach and discuss opportunities for future work.
Mediated social touch has the potential to enhance our interactions with machines and with each other. We present three wearable tactile devices that generate affective haptic sensations via three localised skin-stretching modalities: pinching, squeezing, and twisting. The Pinch device is adhered to the skin of the forearm, generating pinching sensations in three locations. The Squeeze and Twist devices are wristbands that elicit squeezing and twisting sensations on the skin of the wrist. All of these devices are powered by shape memory alloy actuators, enabling them to be quiet, lightweight, and discreet wearable interfaces, unlike their vibrotactile or servo-motor-driven counterparts. We investigate the potential for these devices to be used in mediated social touch interactions by conducting preliminary psychometric tests measuring affective response. The Pinch device and Squeeze wristband were found to simulate positive affective touch sensations, particularly in comparison to vibrotactile stimuli.
Text selection is a frequent task we perform every day to edit, modify, or delete text. Selecting a word requires not only precision but also switching between selecting and typing, which influences both speed and error rates. In this paper, we evaluate a novel concept that extends text editing with an additional modality: gaze. We present a user study (N=16) exploring how the novel concept, called GazeButton, can improve text selection compared to touch-based selection. In addition, we tested the effect of text size on the selection techniques by comparing two different text sizes. Results show that gaze-based selection was faster with the bigger text size, although the difference was not statistically significant. Qualitative feedback shows a preference for gaze over touch, which motivates a new direction for gaze use in text editors.
The quality of air in office spaces can have far-reaching impacts on the well-being and productivity of office workers. We present a system, called Hilo, that can monitor the level of carbon dioxide (CO2) in an office and provide a fairly accurate forecast of its evolution in the next few minutes. The main objective is to inform and to support the users in taking preventive actions when a harmful level of CO2 is predicted. We elicited three main elements of such prediction — Risk, Temporal Proximity, and Certainty, and explored alternative ways of displaying indoor CO2 forecast through these elements. To evaluate our prototypes, we conducted a preliminary user study, in which three interfaces on Apple Watch were tested by 12 participants (within-subjects, a total of 36 sessions). In this paper, we describe the results of this study and discuss implications for future work on how to create an engaging interaction with the users about the quality of air in offices and particularly its forecast.
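Hilo's actual forecasting model is not specified in this abstract. Purely as an illustration of projecting indoor CO2 a few minutes ahead, a minimal least-squares extrapolation over equally spaced recent readings might look as follows (the function name, sampling interval, and readings are illustrative assumptions, not taken from the paper):

```python
def forecast_co2(readings, minutes_ahead, interval_min=1.0):
    """Extrapolate recent CO2 readings (ppm) with a least-squares line.

    `readings` are equally spaced samples, `interval_min` minutes apart.
    The linear model is an illustrative stand-in for Hilo's forecaster.
    """
    n = len(readings)
    xs = [i * interval_min for i in range(n)]
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, readings)) / denom
    intercept = y_mean - slope * x_mean
    return intercept + slope * (xs[-1] + minutes_ahead)

# A rising trend: 600, 620, 640 ppm, sampled one minute apart.
print(round(forecast_co2([600, 620, 640], minutes_ahead=5)))  # → 740
```

A real system would also attach the Risk, Temporal Proximity, and Certainty elements the authors elicited, e.g. by reporting a prediction interval rather than the point estimate alone.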
We present an identification method for 3D-printed objects that uses differences in inner structure patterns formed by printing conditions that can be configured in slicer software. Resonant properties change depending on the shape, material, and boundary conditions of an object, so our method identifies objects based on differences in resonant properties caused by different inner structure patterns, using a machine-learning algorithm. We measured resonant properties as frequency responses using active acoustic sensing. Our method is applicable to 3D-printed objects with a low filling rate and reduces the workload of modeling the inner structure to be used as a tag. To investigate the feasibility of our method, we conducted two experimental evaluations. The results of one showed that our method can identify eight objects with an average classification accuracy of 99.3%.
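The abstract does not name the machine-learning algorithm used. Purely as a sketch of the idea, an active-acoustic response can be reduced to a coarse magnitude spectrum and matched against per-object spectral templates; the naive DFT, bin count, and nearest-template matching below are simplifying assumptions, not the authors' implementation:

```python
import cmath

def spectrum(signal, n_bins=8):
    """Coarse magnitude spectrum via a naive DFT (illustrative features)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n_bins)]

def identify(signal, templates):
    """Match a measured response to the closest per-object template."""
    feats = spectrum(signal)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(feats, templates[name]))
    return min(templates, key=dist)

# Two toy 'resonant responses' with different dominant frequencies.
sig_a = [1, 0, -1, 0] * 4
sig_b = [1, 1, 1, 1, -1, -1, -1, -1] * 2
templates = {"cube": spectrum(sig_a), "cylinder": spectrum(sig_b)}
print(identify(sig_a, templates))  # → cube
```

In practice the features would come from a swept-frequency excitation and feed a trained classifier rather than a nearest-template lookup.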
In most sports, the ability to forecast motions and trajectories is among the highest priorities, and it can only be earned through experience. Predicting motion from images and visualizing it for training is a challenging topic in computer vision. In this paper, we present a real-time table tennis forecasting system using a long short-term pose prediction network. Our system can predict the landing point of a serve before the ping-pong ball is even hit, using the previous and present motions of a player captured with only a single RGB camera. In the precision evaluation, our system shows acceptable accuracy, with a maximum error of 8.9 cm. A further pilot study suggests that our system could help amateurs return an expert's serve. As a training application, the system can be used either to train beginners' prediction skills or to help practitioners learn to disguise their serves.
We present ScraTouch, an interaction technique using fingernails as a new input modality for capacitive touch surfaces. We revealed the differences between the fingertip and the nail from the viewpoint of capacitive touch sensing. Differentiating between fingertip and fingernail touches requires only tens of milliseconds' worth of shunt current data from the capacitive touch sensing mechanism, so external sensors or other hardware are unnecessary. Owing to the small amount of friction generated when a nail touches the surface, not only tapping but also touch gestures performed by sliding the nail can be executed easily. In our initial investigation, we confirmed that setting a simple threshold on the measured shunt current works robustly for recognition.
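A minimal sketch of the threshold idea, assuming the sensor reports shunt-current samples and that a nail touch yields a markedly lower current than skin contact; the threshold value and units here are invented for illustration, not taken from the paper:

```python
def classify_touch(shunt_samples_ua, threshold_ua=40.0):
    """Label a touch as 'fingertip' or 'fingernail' from shunt-current samples.

    The nail acts as a dielectric layer that lowers the measured shunt
    current, so a mean-threshold rule suffices. The 40 uA threshold and
    microamp units are illustrative assumptions.
    """
    mean_current = sum(shunt_samples_ua) / len(shunt_samples_ua)
    return "fingertip" if mean_current >= threshold_ua else "fingernail"

print(classify_touch([62.0, 58.5, 60.1]))  # skin contact → fingertip
print(classify_touch([11.2, 9.8, 10.4]))   # nail contact → fingernail
```

A deployed recognizer would calibrate the threshold per device, since baseline capacitance varies across touch panels.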
Recent advancements in lighting design have focused on the visualization and simulation of programmable LED lighting fixtures. However, single-bulb conventional fixtures alongside subtractive color filter gels are still widely used in many art galleries and installations, photography studios, and experimental theatres due to their low cost and existing prevalence in industry. We introduce a novel approach to creating lighting effects for single-bulb fixtures with gels, which enables designers to quickly and inexpensively produce complex, multi-colored effects approximating a target digital image. Our system uses a grid-based approach which cuts small openings in different colored gels and layers them together, forming color combinations when lit. Our work expands the design space of lighting gels with a precise and expressive method, enabling designers to experiment with novel lighting effects through an iterative personal fabrication process.
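Layered gels filter light subtractively, so their per-channel transmittances multiply. The following sketch chooses, for one grid cell, which gel layers to leave solid (versus cut open) so the combination best approximates a target color; the gel transmittance values and the exhaustive search are illustrative assumptions, not the paper's algorithm:

```python
from itertools import combinations

def layered_color(gels):
    """Transmittances of stacked gels multiply per RGB channel."""
    r = g = b = 1.0
    for tr, tg, tb in gels:
        r, g, b = r * tr, g * tg, b * tb
    return (r, g, b)

def best_combination(target, palette):
    """Pick the subset of gel layers whose combined transmittance is
    closest (squared error) to the target color for this grid cell."""
    best, best_err = (), float("inf")
    for k in range(len(palette) + 1):
        for combo in combinations(palette, k):
            c = layered_color(combo)
            err = sum((a - b) ** 2 for a, b in zip(c, target))
            if err < best_err:
                best, best_err = combo, err
    return best

# Hypothetical palette: a magenta-ish and a cyan-ish gel.
magenta = (0.9, 0.2, 0.8)
cyan = (0.2, 0.9, 0.9)
print(best_combination((0.18, 0.18, 0.72), [magenta, cyan]))
```

Repeating this per cell over the grid yields the cutting pattern for each gel layer.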
An automated driving system is expected to pave the way for a new era of user experience in a vehicle. However, few studies have examined what people want to do and how the vehicle can support user needs, specifically in level 5, fully automated vehicles (FAVs). Therefore, the present study aimed to explore user needs and design requirements for potential activities in FAVs. We conducted expert interviews and focus group interviews to collect data, and applied qualitative analysis to elicit user needs and design requirements. Twelve user needs and general design considerations in four categories were found. The findings will contribute to enhancing user experience in future FAVs by accounting for the user needs and design requirements we elicited.
Recently, the idea of using BCIs in Augmented Reality settings to operate systems has emerged. One problem of such head-mounted displays is the distraction caused by an unavoidable display of control elements even when focused on internal thoughts. In this project, we reduced this distraction by including information about the current attentional state. A multimodal smart-home environment was altered to adapt to the user's state of attention. The system only responded if the attentional orientation was classified as "external". The classification was based on multimodal EEG and eye tracking data. Seven users tested the attention-aware system in comparison to the unaware system. We show that the adaptation of the interface improved the usability of the system. We conclude that more systems would benefit from awareness of the user's ongoing attentional state.
In this study, we propose a novel fabrication method to create large-scale 3D objects by bending and piling up a single inflated polyethylene tube. A unique point of our method is that the design can be redone and reconfigured using the same tube after evacuating the air from the object. To provide detachable constraints on the tube for bending, we developed a novel technique using a hook-and-loop fastener attached to the tube. We also designed software to convert a target 3D object shape into the bending pattern of the tube. Furthermore, we implemented a prototype machine to automate attaching the fastener to the tube. In this paper, we describe the details of the design, implementation, results, and future work.
Human-computer interaction is commonly designed for users who are awake and in a brightly lit environment. However, as computer technology becomes ubiquitous, opportunities arise for semi-awake users in dark environments to interact with computers. In this study, we propose a low-stimulus, effortless interaction that does not prevent semi-awake users from falling asleep in dark bedrooms. We have developed a bedside box (BSB) that holds and charges a smartphone. The BSB has an optical lens that projects the smartphone display onto the ceiling. The green, low-intensity projection primarily stimulates human rod cells, providing low-intensity visual stimuli. An interface that allows the user to control an air conditioner, adjust lighting, set an alarm clock, monitor a security camera, make handwritten notes, and lock an entrance door was designed, developed experimentally, and evaluated.
Eye-hand coordination training systems are used to improve user performance during fast movements in sports training. In this work, we explored gaze tracking in a Virtual Reality (VR) sports training system with a VR headset. Twelve subjects performed a pointing study with or without passive haptic feedback. Results showed that subjects spent an average of 0.55 s to visually find a target and another 0.25 s before their finger selected it. We also found that passive haptic feedback did not increase user performance. Moreover, gaze tracker accuracy deteriorated significantly when subjects looked below their eye level. Our results also indicate that practitioners/trainers should focus on reducing the time spent searching for the next target to improve performance through VR eye-hand coordination training systems. We believe that current VR eye-hand coordination training systems are ready to be evaluated with athletes.
We present Lotus: an actuated plant-like device designed to guide the user through breathing exercises. The development of Lotus explores whether IoT devices can aid in practicing mindfulness and reducing stress. Lotus uses heart rate (HR) monitoring to detect high levels of stress and trigger the start of its guided exercises through the movement of its petals. An initial study suggested that Lotus may help users reduce their stress.
The simulation of human behaviour in today's travel demand models is usually based on the assumption that participants behave rationally. Since travel demand models have been applied primarily to motorized traffic, little is known about the influence of variables that affect both the choice of trip destination and the route decision in pedestrian and cycling models. In order to create urban spaces that encourage cycling and walking, we propose a VR (Virtual Reality) pedestrian simulator which involves walk-in-place locomotion. This yields identical conditions for all subjects, which is not feasible in real-world field research with naturally varying environmental influences. As a first step, our qualitative and quantitative user study revealed that walking in a VR treadmill felt safest and most intuitive, although it required more energy than walking-in-place with VR trackers only.
Levitating particle displays are an emerging technology where content is composed of physical pixels. Unlike digital displays, manipulating the content is not straightforward because physical constraints affect the placement and movement of each particle: dragging a particle may cause it to collide with others along its movement path. We describe initial work on four new interaction techniques that allow users to avoid collisions when directly manipulating display content. Techniques such as these are required for interactive levitating displays to be practical when scaled up to large sizes.
It is important for communication robots to offer an enjoyable interaction experience while being reasonably persuasive. It has been suggested that robots speaking in a high pitch could be perceived as more attractive than those speaking in a low pitch. However, it is not clear whether the use of a high-pitched voice also benefits teleoperated robots. In principle, teleoperated robots could be perceived differently from autonomous robots because they embody a human agent. To investigate this aspect, we conducted a 2 (voice pitch: original vs. high) × 2 (voice gender: male vs. female) × 2 (user gender: male vs. female) between-participants experiment to study the effects of robot voice pitch, robot voice gender, and user gender on users' attitudinal responses toward a teleoperated robot and the associated decision-making. We observed that male and female participants perceived a high-pitched voice differently. The users' awareness of the robot being teleoperated and the persuasiveness of the robot were found to be related, which may provide a plausible explanation for the interaction effects between voice pitch and user gender.
IoT devices deliver their functionality by accessing data. Users decide which data they are willing to share via privacy settings interfaces that are typically on the device or in the app controlling it. Thus, users have to interact with each device or app, which is time-consuming, and settings might be overlooked. In this paper, we provide a stepping stone towards a multi-device interface for adjusting privacy settings. We present three levels of information detail: 1) sensor name, 2) sensor name and information about captured data, and 3) detailed information on each collected data type, including consequences. Through a pre-study with 15 participants, we found that users prefer access to detailed information because this offers the best decision support. They also wish for clear status communication, the possibility of rule-based settings, and delegation options.
Commensality is defined as "a social group that eats together" and eating in a commensality setting has a number of positive effects on humans. In this paper, we discuss how HCI and technology in general can be exploited to replicate the benefits of commensality for people who choose or are forced to eat alone. We discuss research into and the design of Artificial Commensal Companions that can provide social interactions during food consumption. We present the design of a system, consisting of a toy robot, computer vision tracking, and a simple interaction model, that can show non-verbal social behaviors to influence a user's food choice. Finally, we discuss future studies and applications of this system, and provide suggestions for future research into Artificial Commensal Companions.
Locomotion in virtual reality (VR) is one of the biggest obstacles to large-scale adoption of VR applications. Yet, to our knowledge, few studies have been conducted in the wild to understand performance metrics and general user preferences for different mechanics. In this paper, we present the first steps towards an open framework for a VR locomotion benchmark. As a viability study, we investigate how well users move in VR using three prototype locomotion mechanics: arm swing, walk-in-place, and trackpad movement. The prototype was played in over 124 sessions across 10 countries over a period of three weeks. We found that, overall, users performed significantly faster using arm swing and trackpad compared to walk-in-place. For subjective preference, arm swing was significantly preferred over the other two methods. Finally, for induced sickness, walk-in-place was the most sickness-inducing locomotion method.
Among VR devices on the market, the 'Nintendo Labo VR Kit (Labo VR)' offers various pieces of DIY (Do-It-Yourself) hardware constructed from corrugated cardboard blueprints. While using Labo VR, the user must hold the Labo HMD with both hands, which prevents the simultaneous use of other input devices. To overcome this limitation, we developed L-Visor, which mimics the Labo VR HMD but frees the hands to widen the virtual reality experience. In addition, we developed four prototype games for use with L-Visor. The results of our pilot user study showed that participants had positive impressions of L-Visor because they felt it was intuitively consistent with the game content. This paper shows how to expand DIY's value in VR by creating a new cardboard visor input device for the Labo head-mounted display and by customizing game content that allows users to interact with virtual environments in their own way.
Virtual reality (VR) climbing systems registering physical climbing walls with immersive virtual environments (IVEs) have been a focus of past research. Such systems can provide physical user experiences similar to climbing in (extreme) outdoor environments. While in the real world climbers can always see their hands and feet, virtual representations of limbs need to be spatially tracked and accurately rendered in VR, which increases system complexity. In this work, we investigated the importance of integrating virtual representations of the climber's hands and/or feet in VR climbing systems. We present a basic solution to track, calibrate, and represent the climber's hands and feet, and report the results of a user study comparing the importance of virtual limb representations in terms of perceived hand and foot movement accuracy and the enjoyability of the VR climbing experience. Our study suggests that including a representation of the feet is more important than a hand visualization.
Smart speakers have become an almost ubiquitous technology as they enable users to access conversational agents easily. Yet, the agents can only be activated using specific voice commands, i.e. a wake word. This, in turn, requires the device to constantly listen to and process sound, which represents a privacy issue for some users. Further, using the trigger word for the agent in a conversation with another human may lead to accidental triggers. Here, we propose using gestural triggers for conversational agents. We conducted gesture elicitation to identify five candidate gestures. We then conducted a user study to investigate the acceptability and effort required to perform the gestures. Initial results indicate that the snap gesture shows the most potential. Our work contributes initial insights on using smart speakers with ubiquitous sensing.
This paper reports that the time-domain accuracy of bare-hand interactions in HMD-based Augmented Reality can be improved by using finger contact: touching a finger with another or tapping one's own hand. The activation of input can be precisely defined by the moment of finger contact, allowing the user to perform the input precisely at the desired moment. Finger contact is better suited to the user's mental model, and natural tactile feedback from the fingertip also benefits the user with the self-perception of the input. The experimental results revealed that using finger contact is the preferred method of input that increases the time-domain accuracy and enables the user to be aware of the moment the input is activated.
Contrary to our anticipation, consumption experiences do not always please us; instead, they sometimes impede our well-being. People often regret using their resources unfavorably (e.g., buyer's remorse or impulse buying) or soon take the pleasure of having something for granted (i.e., hedonic adaptation). Given its effects on optimizing positive experiences, positive emotion regulation can be useful for mitigating these issues. The present paper introduces PurPal, a self-administered behavioral intervention technology that enables users to up-regulate their positive emotions in the context of consumption. PurPal provides adaptive questions associated with users' purchase intentions, which stimulate reflection on possible future positive experiences with purchased items. A user study is planned to validate the efficacy of PurPal in a lab setting. The longer-term goal of the current research is to develop behavioral intervention technologies that support users' emotion regulation processes, enhancing their subjective well-being.
Human-computer authentication is a perennially important topic in which usability must be considered alongside security. Biometric authentication methods promise to fulfill both aspects to a high degree, yet they come with severe drawbacks, such as the inability to change the utilized trait in case it is leaked or stolen. To compensate for these disadvantages, we introduce a novel class of biometric authentication systems in this work, named "Functional Biometrics". This approach regards the human body as a function that transforms a stimulus applied to the body by the authentication system. Both the stimulus and the measured body reflection form a pair that can subsequently be used for authentication, while the underlying function remains secret. Following this approach, we intend to overcome some of the drawbacks of traditional biometrics.
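Conceptually, the scheme resembles a challenge-response protocol in which the 'secret' is the body's transfer function. The toy sketch below assumes scalar stimuli and responses reproducible within a tolerance; all names, values, and the linear transfer functions are hypothetical illustrations, not the authors' design:

```python
import random

def enroll(body_response_fn, n_pairs=5, rng=None):
    """Store random stimulus/response pairs at enrollment time.

    The body's transfer function itself is never stored — only
    sampled (stimulus, response) pairs.
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible demo
    stimuli = [rng.random() for _ in range(n_pairs)]
    return [(s, body_response_fn(s)) for s in stimuli]

def authenticate(body_response_fn, enrolled_pairs, tolerance=0.05):
    """Replay enrolled stimuli and compare the measured responses."""
    return all(abs(body_response_fn(s) - r) <= tolerance
               for s, r in enrolled_pairs)

# Two hypothetical 'bodies' with different transfer functions.
def alice(s): return 0.7 * s + 0.1
def mallory(s): return 0.4 * s + 0.3

pairs = enroll(alice)
print(authenticate(alice, pairs), authenticate(mallory, pairs))  # → True False
```

A real system would use rich physical stimuli (e.g., vibration patterns) and noisy, high-dimensional body responses, requiring statistical matching rather than a fixed tolerance.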
Text entry is an integral component to many use cases in virtual and augmented reality. We present PinchType: A new method of virtual text entry that combines users' existing knowledge of the QWERTY keyboard layout with simple thumb and finger interactions. Users pinch with the thumb and fingertip to select from the same group of letters the finger would press on a QWERTY keyboard; a language model disambiguates. In a preliminary study with 14 participants, we investigated PinchType's speed and accuracy on initial use, as well as its physical comfort relative to a mid-air keyboard. After entering 40 phrases, most people reported that PinchType was more comfortable than the mid-air keyboard. Most participants reached a mean speed of 12.54 WPM, or 20.07 WPM without the time spent correcting errors. This compares favorably to other thumb-to-finger virtual text entry methods.
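PinchType's exact letter grouping and language model are not given in this abstract. As a toy sketch of the disambiguation idea, each pinch can be mapped to the QWERTY letters its finger would type, and the most frequent matching word chosen; the grouping and the frequency lexicon below are illustrative assumptions:

```python
# QWERTY letters grouped by the finger that would type them
# (an illustrative touch-typing assignment, not the paper's layout).
FINGER_GROUPS = {
    "L-pinky": "qaz", "L-ring": "wsx", "L-middle": "edc", "L-index": "rfvtgb",
    "R-index": "yhnujm", "R-middle": "ik", "R-ring": "ol", "R-pinky": "p",
}

def decode(pinch_sequence, lexicon):
    """Return the most frequent word consistent with the pinch sequence."""
    def matches(word):
        return len(word) == len(pinch_sequence) and all(
            ch in FINGER_GROUPS[finger]
            for ch, finger in zip(word, pinch_sequence))
    candidates = [w for w in lexicon if matches(w)]
    return max(candidates, key=lexicon.get) if candidates else None

# 'the' and 'rye' share the same finger sequence; frequency decides.
lexicon = {"the": 1000, "rye": 5}
print(decode(["L-index", "R-index", "L-middle"], lexicon))  # → the
```

A practical decoder would score candidates with an n-gram or neural language model over the whole phrase instead of per-word unigram counts.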
We conducted an exploratory interview study with 10 undergraduate college students (ages 18-21) to get their feedback on how to best design a research study that asks teens (ages 13-17) to share portions of their Instagram data with their parents and discuss their online risk experiences. These young adults felt that teens should have as much control as possible when sharing their data, including the way that it was used in discussions with their parents. Our findings highlight the need to ensure researchers preserve the privacy and confidentiality of teens' social media data.
An increasing number of Airbnb hosts are using smart home devices to manage their properties; as a result, Airbnb guests are expressing concerns about their privacy. To reconcile the tensions between hosts and guests, we interviewed 10 Airbnb hosts to understand what smart home devices they use, for what purposes, their concerns, and their unmet needs regarding smart home device usage. Overall, hosts used smart home devices to give remote access to their home to guests and safeguard their investment properties against misuse. They were less concerned about guest privacy and felt that smart home devices provided unique value to guests and, thus, a competitive advantage over other Airbnb properties.
With the increasing adoption of virtual reality (VR) in public spaces, protecting users from observation attacks is becoming essential to prevent attackers from accessing context-sensitive data or performing malicious payment transactions in VR. In this work, we propose RubikBiom, a knowledge-driven behavioural biometric scheme for authentication in VR. We show that hand movement patterns performed during interactions with a knowledge-based authentication scheme (e.g., when entering a PIN) can be leveraged to establish an additional security layer. Based on a dataset gathered in a lab study with 23 participants, we show that knowledge-driven behavioural biometric authentication increases security in an unobtrusive way. We achieve an accuracy of up to 98.91% by applying a Fully Convolutional Network (FCN) on 32 authentications per subject. Our results pave the way for further investigations into knowledge-driven behavioural biometric authentication in VR.
Deep neural networks (DNNs) are increasingly powering high-stakes applications such as autonomous cars and healthcare; however, DNNs are often treated as "black boxes" in such applications. Recent research has also revealed that DNNs are highly vulnerable to adversarial attacks, raising serious concerns over deploying DNNs in the real world. To overcome these deficiencies, we are developing Massif, an interactive tool for deciphering adversarial attacks. Massif identifies and interactively visualizes neurons and their connections inside a DNN that are strongly activated or suppressed by an adversarial attack. Massif provides both a high-level, interpretable overview of the effect of an attack on a DNN, and a low-level, detailed description of the affected neurons. Massif's tightly coupled views help people better understand which input features are most vulnerable and important for correct predictions.
In this work, we present ALEEDSA: the first system for performing interactive machine learning with augmented reality. The system is characterized by the following three distinctive features: First, immersion is used for visualizing machine learning models in terms of their outcomes. The outcomes can then be compared against domain knowledge (e.g., via counterfactual explanations) so that users can better understand the behavior of machine learning models. Second, interactivity with augmented reality along the complete machine learning pipeline fosters rapid modeling. Third, collaboration enables a multi-user setting, wherein machine learning engineers and domain experts can jointly discuss the behavior of machine learning models. The effectiveness of our proof-of-concept is demonstrated in an experimental study involving both students and business professionals. Altogether, ALEEDSA provides a more straightforward utilization of machine learning in organizational and educational practice.
Timestamped event sequences are analyzed to tackle varied problems but have unique challenges in interpretation and analysis. Especially in event sequence prediction, it is difficult to convey the results due to the added uncertainty and complexity introduced by predictive models. In this work, we design and develop ProFlow, a visual analytics system for supporting analysts' workflow of exploring and predicting event sequences. Through an evaluation conducted with four data analysts in a real-world marketing scenario, we discuss the applicability and usefulness of ProFlow as well as its limitations and future directions.
We introduce AVAR, a prototypical implementation of an agile situated visualization (SV) toolkit targeting liveness, integration, and expressiveness. We report on results of an exploratory study with AVAR and seven expert users. In it, participants wore a Microsoft HoloLens device and used a Bluetooth keyboard to program a visualization script for a given dataset. To support our analysis, we (i) video recorded sessions, (ii) tracked users' interactions, and (iii) collected data on participants' impressions. Our prototype confirms that agile SV is feasible. That is, liveness boosted participants' engagement when programming an SV, and so the sessions were highly interactive and participants were willing to spend much time using our toolkit (i.e., median ≥ 1.5 hours). Participants used our integrated toolkit to deal with data transformations, visual mappings, and view transformations without leaving the immersive environment. Finally, participants benefited from our expressive toolkit and employed many of the available features when programming an SV.
Data visualization as a profession has been growing rapidly in recent years. Although some initiatives are in place to increase engagement between the academic and practitioner communities, we currently do not have a good understanding of how practitioners do their design work, including what methods, approaches, and principles they know and use in their everyday practice. We present a subset of results of a survey in which 87 DataVis practitioners identified their familiarity with popular design methods and the frequency with which they use them in their own work. We also discuss follow-up work to develop a deeper understanding of practitioners' perspectives on design methods.
We propose a visualization design space for representing unquantified uncertainty in percent composition drug checking test results using pie and cake charts during the opioid crisis. The design space generates alternatives for use in a visual drug report design study that may improve decision-making concerning illicit drug use. Currently, communication of drug checking test results does not capture the uncertainty in drug checking tests, leading to poor and potentially harmful decisions. The design alternatives generated by the design space aim to empower people who use drugs with drug sample information and facilitate harm reduction efforts. Our visualizations may apply to other drug checking services and to scenarios where uncertainty visualization researchers wish to notify end users of the presence of unquantified uncertainty in safety-critical decision-making contexts like those found during the opioid crisis.
The success of deep learning in solving problems previously thought hard has inspired many non-experts to learn about and understand this exciting technology. However, it is often challenging for learners to take the first steps due to the complexity of deep learning models. We present our ongoing work, CNN 101, an interactive visualization system for explaining and teaching convolutional neural networks. Through tightly integrated interactive views, CNN 101 offers both an overview and detailed descriptions of how a model works. Built using modern web technologies, CNN 101 runs locally in users' web browsers without requiring specialized hardware, broadening public access to education about modern deep learning techniques.
Automated cars will need to observe pedestrians and react adequately to their behavior when driving in urban areas. Judging pedestrian behavior, however, is hard. When approaching it with machine learning methods, large amounts of training data are needed, which are costly and difficult to obtain, especially for critical situations. To provide such data, we have developed an online game inspired by Frogger, in which players have to cross streets. Accidents and critical situations are a natural part of the data produced this way, without anybody getting hurt in reality. We present the design of our game and an analysis of the resulting data, including its match to real-world behavior observed in previous work. We found that behavior patterns in real and virtual environments correlated, and argue that game data could be used to train machine learning algorithms to predict real pedestrians' walking trajectories when crossing a road. This approach could be used in future automated vehicles to increase pedestrian safety.
Spreadsheets allow end users to blend calculations with arbitrary layout and formatting. However, when it comes to reusing groups of formulae along with layout and formatting, spreadsheets provide only limited support. Most users rely on copy and paste, which is easy to learn and use, but maintaining several copies can be tedious and error-prone. We present the concept of Gridlets, an abstraction over calculation and presentation applicable in common use case scenarios. Using the Cognitive Dimensions of Notations framework, we compare Gridlets to copy/paste and sheet-defined functions. We find that Gridlets are consistent with the spreadsheet paradigm, enable users to take advantage of secondary notation, and make common edit operations less viscous and less error-prone.
Research involving Virtual Reality (VR) headsets is becoming increasingly popular. However, scaling VR experiments is challenging, as researchers are often limited to using one or a small number of headsets for in-lab studies. One general way to scale experiments is through crowdsourcing, which provides access to a large pool of diverse participants with relatively little expense of time and money. Unfortunately, there is no easy way to crowdsource VR experiments. We demonstrate that it is possible to implement and run crowdsourced VR experiments using a pre-existing massively multiplayer online VR social platform, VRChat. Our small (n=10) demonstration experiment required participants to navigate a maze in VR. Participants searched for two targets and then returned to the exit while we captured completion time and position over time. While there are some limitations to using VRChat, overall we have demonstrated a promising approach for running crowdsourced VR experiments.
LandSAGE is a program meant to advocate for, and train, scientists and policy makers in Southeast Asian countries such as Thailand, Vietnam, Cambodia, and Laos in the use of collaborative large display systems (CyberCANOEs) to monitor and mitigate landslides. In this late-breaking work we provide an overview of the first phase (out of three) of this program, spread over five workshops conducted in Southeast Asia. We detail a design workshop meant to understand the needs of the local scientists and adapt them to large displays, describe an initial prototype we developed on the SAGE2 platform, and conclude with some of the challenges we have encountered while bringing our workshops to Southeast Asia.
Evaluating the interaction between people and non-humanoid robots requires advanced physical prototyping, and in many cases is limited to lab settings with Wizard-of-Oz control. Virtual Reality (VR) has been suggested as a simulation tool, allowing for fast, flexible, and iterative design processes. In this controlled study, we evaluated whether VR is a valid platform for testing social interaction between people and non-humanoid robots. Our quantitative findings indicate that the social interpretations associated with two types of gestures of a robotic object are similar in virtual and physical interactions with the robot, suggesting that the core aspects of social interaction with non-humanoid robots are preserved in a VR simulation. The impact of this work for the CHI community lies in indicating the potential of VR as a platform for initial evaluations of social experiences with non-humanoid robots, including interaction studies that involve different facets of the social experience.
When it comes to algorithmic rights and protections for children, designers will need to confront new paradigms to solve the problems they are targeting. The field of design typically deals with form and function and is executed in molecules or pixels; algorithms have neither. More importantly, algorithms may be biased in their execution against those without privileged status, such as people of color, children, and the non-affluent. In this paper, we review our work exploring perceptions of fairness in AI through co-design sessions with children of color in non-affluent neighborhoods of Baltimore City. The design sessions aimed to design an artificially intelligent librarian for their local branch. Our preliminary findings showcase three key themes in this group's perceptions of fairness in the context of an artificially intelligent authority figure.
Emotional arousal influences focus, attention and decision-making, which are critical when driving. To help promote an optimal arousal level, this work considers a closed-loop navigation system that adapts its voice tonality based on the physiology of the driver. In a controlled driving simulator study, 18 participants were requested to follow the instructions of a navigation system under three different conditions: 1) "neutral" in which the voice of the navigation was always the same, 2) "congruent" in which the perceived arousal of the voice mirrored the physiological arousal of the driver, and 3) "incongruent" in which the voice of the navigation system mirrored the inverted arousal of the driver. Our results show that adapting the voice tonality can significantly influence subjective self-reported ratings as well as different aspects of driving performance.
Children with Attention Deficit Hyperactivity Disorder (ADHD) face difficulty in maintaining focus on and completing daily tasks due to their impaired executive function. Failure in these respects leads to the formation of negative self-images as well as sub-optimal relationships with parents. This investigative research presents a voice-bot and conversational agent design intervention supporting both children with ADHD and their parents in dealing with daily tasks. We conducted interviews with patients' parents, created voice-bot scenario designs, and built prototypes. Potential therapeutic benefits and assistive technology user experience perspectives are discussed in this paper.
Advancements in digital civics and the emergence of online platforms have enabled vast numbers of community members to share their input on various civic proposals. The intricacy of the community input analysis process, coupled with the increased scale of community engagement, makes community input analysis particularly challenging. Civic leaders, who gather, analyze, and make critical decisions based on community input, struggle to make sense of large-scale unstructured community input due to a lack of time, analytical skills, and specialized technologies. In this qualitative study, we investigated civic leaders' requirements for accelerating the community input analysis process and gaining actionable insights that support better decisions. Our interviews with 14 civic leaders revealed a dichotomous nature of requirements based on their roles and analysis practices. The interviews also revealed the civic leaders' desire to understand the community's opinions beyond sentiments, and showed how text analysis and visualization can bring structure and enable sensemaking of community input. This study is our first step towards exploring the design of community input analysis technologies for civic leaders that can contribute to democratic decision-making in digital civics.
Public spaces and public furniture are among the new domains for applying a data-driven approach to design intervention and improvement. Open space is essentially dynamic, livable, and interactive: various types of people spend time there for various purposes. Therefore, a "Measure-Test-Refine" loop is applicable for improving open spaces gradually.
In this research, we developed an original smart chair called "Proto-Chair" that can contribute to this new design method for public space. Our chair is made with 3D-printed soft auxetic patterns, making it morphable and allowing users to sit in various ways. The chair is also equipped with two sensors, which collect a data stream used to distinguish four different states of the chair. The long-term sensor stream can be stored and used to refine the furniture.
In this paper, we present our concept, prototypes, sensing methods, and experimental results. We also introduce our future vision of a sensor-based public design platform.
Binary rating methods are known to allow easier and quicker ratings, yet they do not allow users to express how much they like or dislike an item. In this study, we argue that response times can be used to gauge the degree or strength of user ratings. We asked users to rate a set of movies while collecting individual response times, confidence levels, and reasons for their ratings. We investigate (1) the possibility of utilizing response time as a way to further distinguish likes and dislikes, and (2) the various factors that affect rating time. We find that response time can be used to distinguish between sure and unsure answers, and that it sheds light on the mental process users go through before rating an item. We believe that our study will be informative for better identifying user preferences and the relations between explicit ratings and implicit data.
Having good mentors and role models is important for personal growth. However, they are not always available at the time of need, and some of our personal heroes have passed away, leaving their wisdom only through writings and other artifacts. We present Wearable Wisdom, an intelligent, audio-based system for mediating wisdom and advice from mentors and personal heroes to a user. It does so by performing automated semantic analysis on a collected wisdom database and generating a simulated voice of a mentor sharing relevant wisdom and advice with the user. Our results show that the platform is statistically superior to a control condition in delivering relevant yet abstract wisdom and in providing inspiration. We describe the implementation of the Wearable Wisdom system, report on a user study, and discuss potential applications of wisdom computation for supporting personal growth and motivation.
Recommending fashion outfits requires learning a concept of style and fashionability that is typically human. There has been an increasing research effort to create machine learning models able to learn such concepts, in order to distinguish between compatible and incompatible clothes and to select an item that would complete an outfit. However, most of the work in the literature tackles this problem from a pure machine learning point of view, disregarding real-world scenarios and the human interaction with systems able to generate outfits. This work moves the problem of generating outfits into the recommender systems domain. Its main contribution is a novel algorithm for a fashion-specific recommender system that generates fashionable outfits and scales its inference time to be useful in real use cases; we apply the algorithm to public and industrial datasets. In addition, we show preliminary results on how this algorithm can be employed in a real scenario, reporting the evaluations provided by three professional stylists on the generated outfits.
We compared anthropomorphic language use in online forums about the Amazon Echo Show, Q.bo One, and Anki Vector, carrying out a content analysis of forum and Reddit threads as well as a Facebook group. We expected to find the greatest amount of anthropomorphism for the Q.bo One due to its humanoid shape; however, our findings suggest that the life-likeness of an artifact is not predominantly linked to its appearance, but to its interactivity and attributed agency and gender.
We report results from a survey on spreadsheet use and experience with textual programming languages (n=49). We find significant correlations between self-reported formula experience, programming experience, and overall spreadsheet experience. We discuss the implications of our findings for spreadsheet research and for end-user programming research more generally.
Introducing technology support in a complex, team-based work setting requires a study of teamwork effects on technology use. In this paper, we present our initial analysis of team communications in a trauma resuscitation setting, where we deployed a digital checklist to support trauma team leaders in guiding patient care. By analyzing speech transcripts, checklist interaction logs, and videos of 15 resuscitations, we identified several tensions that arose from the use of a checklist in a team-based process with multi-step tasks. The tensions included incorrect markings of in-progress tasks as completed, failure to mark completed tasks due to missed communications, failure to record planned tasks, and difficulties in recording dynamic values. From these findings, we discuss design implications for checklist design for dynamic, team-based activities.
Enterprises often use social networking services to open brand pages that transmit various information to consumers. Since businesses are increasingly multinational rather than relying solely on one location, it is quite important for global enterprises to localize the content of their brand pages to maximize their effect in each country. To appropriately localize online content, it is necessary to understand, for each country, what kind of information the enterprise should post and what consumers post to the brand page. The purpose of our study is therefore to provide findings that brand page designers can exploit. We show distinct categories of brand page posts and user comments that delineate the characteristics of each country, based on quantitative and qualitative analysis of entities extracted from existing brand pages on Facebook in various countries. The contribution of this study is to identify international differences in both brand page posts and user comments.
A musician performing on a digital musical instrument is modeled as a feed-forward communications channel. A variety of statements are made about the mutual information of the signals flowing through the model. For example, data processing can only reduce or retain mutual information, never increase it.
It is suggested that instrument designers should consider creating high-fidelity musical instruments that generally avoid discarding information. Noise and many-to-one mappings tend to decrease mutual information, potentially reducing the fidelity of a digital musical instrument. Also, musicians need to rehearse their performances in order to avoid making noisier gestures.
Overall, it is hoped that this paper and other related papers by the authors can show how to quantify information processing in user interfaces for continuous control.
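The data-processing statement in this abstract is an instance of the classical data processing inequality from information theory. As a hedged illustration (the variable names are ours, not the authors'): if the musician's gesture X, the sensed signal Y, and the synthesized sound Z form a Markov chain, then each processing stage can only preserve or lose information about the gesture:

```latex
% Data processing inequality for a feed-forward chain X -> Y -> Z:
% no stage of processing can increase information about X.
X \to Y \to Z \;\Longrightarrow\; I(X;Z) \le I(X;Y),
\qquad \text{where } I(X;Y) = H(X) - H(X \mid Y).
```

Under this reading, "high-fidelity" instrument design amounts to keeping I(X;Z) close to I(X;Y) by avoiding noisy or many-to-one mappings.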
This paper presents findings from an ongoing investigation into the public understanding of augmented reality (AR) technologies. Despite AR technologies becoming increasingly available to the general public, perceptions of their use and capabilities still vary based on a number of factors. To explore this, we conducted a survey of individuals' definitions of AR and their classification of AR and traditional technologies. The themes elicited from responses indicated that digital and real components were both perceived as key characteristics, but the synthesis of these components was not seen as significant. Responses also indicated that the public is still relatively unfamiliar with AR technologies, although familiarity does lend itself to a better understanding of what is or is not AR. We present trends in public perceptions of AR and identify the need for further investigation into the public understanding of AR technologies.
Many instant messaging applications offer a group chat feature, where members can share messages and make voice or video calls with a group of social contacts. We conducted a study to understand how families use group chat and what challenges they face. We found that family members develop certain habits in using their family group chat, and that their behavior changes when they are separated by distance or during specific situations such as a conflict. Family members can find it challenging to construct meaning from a pile of messages, to find a specific message from the past, and to catch up with new messages posted in the group. They need more control over when they deliver a message, and better ways to share their experiences.
When we try to acquire moving targets, such as shooting enemies in computer games, the shapes of these targets are often varied. Considering the effects of target shape in moving target selection is essential for predicting user performance, such as error rate, in user interfaces involving dynamic content. In this paper, we propose a model that describes the endpoint uncertainty in pointing at moving targets with arbitrary shapes. The model combines a Gaussian mixture model (GMM) with a Ternary-Gaussian model to describe the impacts of target shape and target motion on the selection endpoints of moving targets. Compared to the state of the art, our model achieved higher performance in fitting the endpoint distribution and in predicting selection error rate.
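To make the modeling idea concrete: once endpoint scatter is described by a probability distribution such as a Gaussian mixture, the selection error rate for a target of a given shape can be estimated as the probability mass falling outside the target region. The sketch below is illustrative only and is not the authors' model; the mixture parameters and the rectangular target are hypothetical, and the Ternary-Gaussian component of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-component Gaussian mixture over selection endpoints (x, y),
# centered near the target and skewed in the direction of target motion.
weights = np.array([0.7, 0.3])
means = np.array([[0.0, 0.0], [0.5, 0.2]])
stds = np.array([[0.3, 0.2], [0.4, 0.3]])  # per-axis std devs (diagonal covariance)

def sample_endpoints(n):
    # Draw a mixture component per sample, then a diagonal-Gaussian endpoint.
    comp = rng.choice(len(weights), size=n, p=weights)
    return means[comp] + stds[comp] * rng.standard_normal((n, 2))

def error_rate(n, half_w=0.6, half_h=0.4):
    # Monte Carlo estimate: fraction of endpoints outside a rectangular
    # target of width 2*half_w and height 2*half_h centered at the origin.
    pts = sample_endpoints(n)
    inside = (np.abs(pts[:, 0]) <= half_w) & (np.abs(pts[:, 1]) <= half_h)
    return 1.0 - inside.mean()
```

Enlarging the target region drives the estimated error rate toward zero, which matches the intuition that bigger targets are easier to acquire; the paper's contribution is fitting such an endpoint distribution analytically rather than by sampling.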
One domain application of artificial intelligence (AI) systems is humanitarian aid planning, where dynamically changing societal conditions need to be monitored and analyzed so that humanitarian organizations can coordinate efforts and appropriately support forcibly displaced peoples. Essential in facilitating effective human-AI collaboration is the explainability of AI system outputs (XAI). This late-breaking work presents an ongoing industrial research project aimed at designing, building, and implementing an XAI system for humanitarian aid planning. We draw on empirical data from our project and define current and future scenarios of use, adopting a scenario-based XAI design approach. These scenarios surface three central themes which shape human-AI collaboration in humanitarian aid planning: (1) surfacing causality, (2) multifaceted trust and lack of data quality, and (3) balancing risky situations. We explore each theme and, in doing so, further our understanding of how humanitarian aid planners can partner with AI systems to better support forcibly displaced peoples.
Previous research has found that people often make time estimation errors in their daily planning at work. However, there is limited insight into the types of estimation errors found in different knowledge work tasks. This one-day diary study with 20 academics compared the tasks people aimed to achieve in the morning with what they actually did during the day. Results showed that participants were good at estimating the duration of time-constrained tasks, such as meetings; however, they were biased when estimating the time they would spend on less time-constrained tasks. In particular, the time needed for email and coding tasks was underestimated, whereas the time needed for research writing and planning activities was overestimated. The findings extend previous research by measuring in situ whether some tasks are more prone to time estimation errors than others. AI planning and scheduling tools could incorporate this knowledge to help people overcome these time estimation biases in their work.
A strong body of research has shown that individuals tend to conform to a group's majority opinion. In contrast to existing literature, which investigates conformity to a majority opinion against an objectively correct answer, the originality of our study lies in investigating conformity in a subjective context. The emphasis of our analysis lies on the "switching direction" in favor of or against an item. In an online experiment, groups of five had to create a music playlist; a song was added to the playlist only upon a unanimous positive decision. After seeing the other group members' ratings, participants had the opportunity to revise their own response. Results suggest different behavior for originally favored compared to disliked songs. For favored songs, one negative judgement by another group member was sufficient to induce participants to downvote the song. For disliked songs, in contrast, a majority of positive judgements was needed to induce participants to switch their vote.
Biases in data, such as gender and racial stereotypes, are propagated through intelligent systems and amplified in end-user applications. Existing studies detect and quantify biases based on pre-defined attributes. However, in practice, it is difficult to gather a comprehensive list of sensitive concepts for the various categories of biases. We propose a general methodology for quantifying dataset biases by measuring the difference between a dataset's distribution and that of a reference dataset using Maximum Mean Discrepancy. For natural language data, we show that lexicon-based features quantify explicit stereotypes, while deep learning-based features further capture implicit stereotypes represented by complex semantics. Our method provides a more flexible way to detect potential biases.
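As a rough illustration of the measurement step (not the authors' implementation): given feature vectors for the dataset under study and for a reference dataset, a standard biased estimate of squared Maximum Mean Discrepancy with a Gaussian (RBF) kernel can be computed as follows. The kernel bandwidth `gamma` is an arbitrary choice here, as is the use of raw feature vectors.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise squared Euclidean distances, mapped through an RBF kernel.
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of squared MMD between samples x and y (rows = points)."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())
```

An MMD near zero indicates the two feature distributions are similar under the chosen kernel; a clearly positive value flags a distributional difference that, in this methodology, would be interpreted as potential bias relative to the reference dataset.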
Device use in smart homes is becoming increasingly communal, requiring cohabitants to navigate a complex social and technological context. In this paper, we report findings from an exploratory survey grounded in our prior work on communal technology use in the home. The findings highlight the importance of considering qualities of social relationships and technology in understanding the expectations and intentions of communal technology use. We propose a design perspective of social expectations, and we suggest that existing designs can be expanded using already available information, such as location, and by considering additional information, such as levels of trust and reliability.
Information overload is the challenge of the modern era, and text is its medium. Every adult reader would benefit from faster reading, provided they could retain comprehension. The present work explores the reading speed gains possible solely by manipulating typeface. We considered whether the optimal typeface might be a matter of an individual's preferred font, or whether some fonts might be better for all users. Indeed, eight in ten of our participants believed their favorite font would be their best. Instead, our findings showed that the preferred font was seldom best, and one font did not fit all: adult readers in our study read best with varying fonts. An average 117-word-per-minute difference between the worst and best typeface, or around 10 additional pages an hour, means font choice is of real-world significance. Our discussion focuses on the challenges of rapidly identifying an individual's optimal font, and the exciting individuation technologies such an advance allows.
The possibility of extending legal personhood to artificial intelligence (AI) and robots has raised many questions about how these agents could be held liable under existing legal doctrines. To promote a broader discussion, we conducted a survey (N=3315) asking online users for their impressions of electronic agents' liability. Results suggest the existence of what we call the punishment gap: the public demands that automated agents be punished upon a legal offense, even though their punishment is currently not feasible. Participants were also negative about granting assets or physical independence to electronic agents, which are crucial requirements for liability. We discuss possible solutions to this punishment gap and present how legal systems might handle this contradiction while keeping existing legal persons liable for the actions of automated agents.
AI helps us make decisions in various domains such as healthcare, finance, and entertainment (e.g., Netflix, IBM Watson). However, people's trust in and acceptance of AI are highly susceptible to when and how its suggestions are presented. This study examined the role of message framing and timing in the acceptance of AI suggestions when the AI's performance is stated. The study employed a 2 (message timing: before vs. after decision) x 3 (message framing: no information vs. negative framing vs. positive framing) between-subjects experiment in which participants were asked to solve a problem with AI under different conditions. The results showed that participants perceived the AI's suggestion as more reasonable and accepted it more often when no performance information was provided than when any information was provided, and that they perceived the suggestion as more reasonable when the message was presented before the decision was made. Theoretical and practical implications are discussed.
The rapidly expanding family mobile app market provides a great opportunity for children's education and development. However, recent research has revealed a prevalence of persuasive designs and tracking of children's data in these apps, which may harm children's online privacy and self-regulation development. We conducted 20 interviews with Android family app developers to understand their design practices. We used the lens of Value Sensitive Design to identify developers' values and how they translate them into design choices. Our findings show that though developers' values are generally aligned with the best interests of users, they often must make compromises due to market pressure, a lack of monetisation options, and the use of biased design guidelines. Our findings show a need for centralised, actionable guidelines and point to important directions for HCI research to support both end-users' and developers' values.
While in most countries Google Play and the Apple App Store dominate, Chinese mobile phone users can choose among dozens of different app markets, which differ greatly in the information they present. This makes the Chinese mobile ecosystem a unique case study for investigating whether users actively choose app markets that conform to their preferences. We investigated this question in a survey of 200 Chinese users aged 18-49. Scenarios covered apps that require disclosure of different types of sensitive information (shopping, dating, health), with gaming as a baseline. Users preferred markets that were easy to use and had a wide choice of apps. Only nine users highlighted security as a feature. Despite this, they primarily used only one app market: the pre-installed one. App-market-specific features were important for the game scenario, but less important for all others. We suggest that download decisions for most apps are made before users enter an app market, and discuss implications for presenting privacy and security information.
International graduate students (IGS) are an integral part of the United States (US) higher education ecosystem. However, they face enormous challenges while transitioning to the US due to culture shock, language barriers, and intense academic pressure. These issues can cause poor mental health and, in some cases, an increased risk of self-harm. The relative ease of access and ubiquity of social technology have potential for supporting IGS during socio-cultural transitions. However, little is known about how IGS use social technology to seek support. To address this gap, we conducted a qualitative study with IGS in Western Massachusetts to understand how they seek social support. Our preliminary findings indicate that our participants preferred seeking informational and network support through social technology. They expressed a preference for seeking emotional support in person and from close contacts, but we found a latent pattern showing that they also use technology passively (e.g., following others' posts and comments). We also found that, over time, their support-seeking preference shifts from people of similar ethnicity to people with similar experiences. Finally, we identified language as the primary barrier to actively seeking any kind of support through technology.
Adopting new technology is challenging for the volunteer moderation teams of online communities, and the challenges are aggravated as communities grow in size. In a prior qualitative study, Kiene et al. found evidence that moderation teams adapted to these challenges by relying on their experience with other technological platforms to guide the creation and adoption of innovative custom moderation "bots." In this study, we test three hypotheses drawn from that qualitative study on the social correlates of user-innovated bot usage. We find strong evidence of the proposed relationship between community size and the use of user-innovated bots. Although previous work suggests that smaller teams of moderators would be more likely to use these bots, and that users with experience moderating on a previous platform would be more likely to do so, we find little evidence in support of either proposition.
In recent years, online sexual exploitation targeting teenagers has been on the rise. Given teenagers' growing reluctance toward face-to-face communication, a counseling chatbot could be a more effective way to provide teenage victims with necessary information and emotional support. A small number of counseling chatbots exist for victims of sexual crime, but none target teenagers specifically. This research suggests design guidelines for building a counseling chatbot for teenage victims of online sexual exploitation, with a focus on establishing rapport by empathizing with their stories and providing them with proper information. We conducted in-depth interviews with peer counselors at the Teenage Women's Human Rights Center, who have been counseling teenage victims in their age group using online messengers. The four key findings from our research suggested using open-ended questions, using teenager-friendly language,